id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,895,131 | I want to create a website for unit conversion in WordPress | I want to make a website for unit conversion, like a Length Unit Converter, Weight... | 0 | 2024-06-20T18:27:07 | https://dev.to/hammad_nasarhaqitatdek/i-want-to-create-a-website-for-unit-conversion-in-wordpress-17en | javascript, python, devops | I want to make a website for unit conversion, like a [Length Unit Converter](https://uzzasoft.com/tools/conversions/length), [Weight Conversions](https://uzzasoft.com/tools/conversions/weight-and-mass), [Volume Converters](https://uzzasoft.com/tools/conversions/Volume), etc.
I want to create bulk pages in WordPress with minimal effort.
How can I do that?
Can I use any plugin or script to make it faster? | hammad_nasarhaqitatdek |
1,895,306 | Free Artificial Intelligence Course: From Zero to Advanced | Access now a 100% free course on artificial intelligence, from basic to advanced level.... | 0 | 2024-06-23T13:49:57 | https://guiadeti.com.br/curso-inteligencia-artificial-gratuito-iniciante/ | cursogratuito, cursosgratuitos, desenvolvimento, inteligenciaartifici | ---
title: Free Artificial Intelligence Course: From Zero to Advanced
published: true
date: 2024-06-20 18:26:15 UTC
tags: CursoGratuito,cursosgratuitos,desenvolvimento,inteligenciaartifici
canonical_url: https://guiadeti.com.br/curso-inteligencia-artificial-gratuito-iniciante/
---
Access now a 100% free course on artificial intelligence, from basic to advanced level. Learn the skills of the future and set yourself apart in the job market without spending anything.
The Match course offers immediate access, recorded classes, live mentoring sessions, assignment reviews, support, and a certificate at the end of the training.
Fully online, the course lets you watch the video lessons wherever and whenever you want, providing a complete and flexible education.
## Artificial Intelligence Course
Access now a 100% free course on artificial intelligence, from basic to advanced level. Learn the skills of the future and stand out in the job market at no cost.

_Image of the course page_
### Complete Training with Immediate Access
The Match course offers complete, fully online training, with video lessons you can watch wherever and whenever you want. At the end of the training, you will receive a certificate of completion.
The training is free, from enrollment through certificate issuance. A chat is available Monday through Friday to ask the instructors questions. Your final project will be reviewed by one of the instructors, with improvement feedback.
### Requirements
- Be 18 years old or older
- Access to a phone, computer, or laptop with internet
### Online Training
Recorded video lessons for you to watch wherever and whenever you want. Study at your own pace. Check out the modules:
#### Module 1 – Artificial Intelligence
- What artificial intelligence is;
- Operation of computers and digital devices;
- What emerging technologies are;
- Fundamentals of artificial intelligence;
- Mastering the art of writing prompts.
#### Module 2 – Customer Service and Professional Skills
- Digital customer support;
- Solving technical problems;
- Agile Explorer – developed with Agile at IBM;
- Job application essentials.
#### Bonus Module – Web Development
- Internet basics;
- Programming basics;
- Git and GitHub basics;
- Web development fundamentals;
- Python basics;
- Python programming: algorithms;
- Testing with Python;
- Object-Oriented Programming in Python.
#### Final Project – Build a Professional Project for Your Portfolio
You will have access to 7 different projects and must choose one of them to apply the content covered in both the first and second modules, effectively demonstrating your creativity and the skills acquired throughout your studies.
### Live Mentoring
Once a week, join a live class to talk directly with the instructors and pick up new knowledge.
### Transform Your Career
Regardless of your field, knowing artificial intelligence is a differentiator that can transform your career. Match is your opportunity to become a sought-after professional in the job market. Unlock your spot and earn your certificate, for free!
## Artificial Intelligence
Over the past five years, artificial intelligence (AI) has evolved rapidly, transforming many sectors and impacting everyday life in significant ways.
From technological advances to practical applications, AI continues to shape the future at an accelerated pace.
### Technological Advances
#### Deep Learning
Deep learning, a subfield of machine learning, has been one of the main drivers of progress in AI.
In recent years, we have seen impressive advances in areas such as image recognition, natural language processing, and games, with algorithms surpassing human performance on several specific tasks.
#### Natural Language Processing (NLP)
Natural language processing has made great strides, enabling machines to understand and generate human language more effectively.
Models like GPT-3, developed by OpenAI, have demonstrated the ability to generate coherent, contextually relevant text, opening the door to new applications in customer service, content creation, and machine translation.
#### Generative AI
Generative AI, which includes technologies such as GANs (Generative Adversarial Networks), has revolutionized digital content creation.
This technology enables the generation of high-quality images, videos, and music that are practically indistinguishable from those created by humans. It has applications in entertainment, design, and marketing, among other areas.
### Challenges and Ethical Considerations
#### Privacy and Security
With the growing use of AI come concerns about privacy and security. The extensive use of personal data to train AI models raises questions about how that data is protected and used.
Securing AI systems against cyberattacks is crucial to preventing abuse.
#### Bias and Discrimination
AI algorithms can perpetuate existing biases if they are trained on biased data. This can lead to discriminatory decisions in areas such as recruiting, justice, and credit.
Significant efforts are being made to make AI fairer and more inclusive by developing techniques to identify and mitigate bias.
#### Impact on the Job Market
AI-driven automation is transforming the job market, replacing some roles while creating new opportunities.
There is a growing debate about how to handle the transition to a more automated workforce and how to ensure workers are trained for the jobs of the future.
## Mastertech
Mastertech is an innovative education platform dedicated to equipping individuals with the technology skills essential for the modern job market.
Focused on offering practical, high-quality courses, Mastertech prepares its students to face the challenges of constantly evolving digital industries.
### Teaching Methodology
Mastertech's teaching methodology centers on practical, collaborative learning. Courses are designed to simulate real workplace situations, allowing students to develop concrete, relevant projects.
### Impact on the Job Market
Mastertech has a significant impact on the job market, training highly qualified professionals who are ready to meet the demands of modern companies.
Students finish the courses with a robust portfolio and hands-on experience that sets them apart in the competitive technology market.
Mastertech maintains strategic partnerships with technology companies, easing students' entry into the market through internship programs and job openings.
## Enrollment link ⬇️
[Enrollment for the Artificial Intelligence Course](https://match.mastertech.com.br/) must be completed on the Match website!
## Share this opportunity and transform your career with Match's free course!
Did you like this content about the free Artificial Intelligence course? Then share it with everyone!
The post [Free Artificial Intelligence Course: From Zero to Advanced](https://guiadeti.com.br/curso-inteligencia-artificial-gratuito-iniciante/) appeared first on [Guia de TI](https://guiadeti.com.br). | guiadeti |
1,895,129 | The Open Source Paradox: Fragility and Promise | The problems with open source software highlight the need for sustainable project management, addressing vulnerabilities, ensuring long-term security, and fostering innovation. | 0 | 2024-06-20T18:25:57 | https://opensauced.pizza/blog/problems-with-open-source | opensource | ---
title: The Open Source Paradox: Fragility and Promise
published: true
description: The problems with open source software highlight the need for sustainable project management, addressing vulnerabilities, ensuring long-term security, and fostering innovation.
tags: opensource
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w4cw013k7zblovprfsxy.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-20 18:20 +0000
canonical_url: https://opensauced.pizza/blog/problems-with-open-source
---
In December 2021, the software world faced a crisis that would later be known as the “Log4Shell” vulnerability. The critical security flaw was found in Log4j, an open-source Java-based logging utility widely used across various applications and services. This vulnerability allowed remote code execution and exposed millions of systems to potential exploitation. Despite extensive efforts to patch and mitigate the issue, the Log4j vulnerability continues to plague the software industry, highlighting both the strengths and vulnerabilities of the open-source model.
## The Enduring Impact of Log4Shell
[According to a 2023 article by Connor Jones](https://www.theregister.com/2023/12/11/log4j_vulnerabilities/), even two years after the Log4Shell vulnerability was disclosed, nearly 25% of apps may not have updated their Log4j library after the vulnerability was fixed. The issue isn’t just the vulnerability, though; it’s also the ongoing challenge of maintaining up-to-date and secure dependencies within the open source ecosystem. There’s a broader problem of dependency management and the need for more awareness within the development community to effectively decrease these risks.
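The first step toward that awareness is simply knowing what your builds pull in. As a minimal sketch for a Maven project (standard Maven tooling, not something specific to any project mentioned here), you can list every Log4j artifact your build resolves, including transitive ones buried several dependency levels deep:

```shell
# Print the dependency tree, filtered to Log4j artifacts only.
# Anything on the 2.x line below 2.17.1 (or still on 1.x) deserves a closer look.
mvn dependency:tree -Dincludes=org.apache.logging.log4j
```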
If we look at [Apache Log4j 2](https://app.opensauced.pizza/s/apache/logging-log4j2?range=360) over the last year, for example, we see that the majority of commits come from two contributors, [ppkarwasz](https://app.opensauced.pizza/u/ppkarwasz) and [vy](https://app.opensauced.pizza/u/vy). While we can't say for sure what would happen if ppkarwasz left the project, we can confidently say that the project would feel the impact for a while.

### The Fragility of Dependencies
Our entire digital infrastructure is a house of cards built on good faith. One burned-out maintainer, one vulnerable dependency, one "I'm taking my ball and going home" moment, and suddenly we're all scrambling.
Jana Iris brings up another perspective in the problems with open source in her episode of [The Secret Sauce](https://www.youtube.com/watch?v=vwkyCbfKqB4&list=PLHyZ0Wz_A44VR4BXl_JOWSecQeWcZ-kS3&index=17), "it's actually the biggest thing I always talk about is building developer trust, but also building trust in the enterprise. The biggest thing is like, they don't want to start adopting a solution and then all of a sudden you're not here in a year."
Companies are in a constant state of digital FOMO - they want to innovate, but they don’t want to bet on the wrong project. Imagine adopting a solution only to find out a year later that the maintainer has decided to become a goat farmer in the Andes.
It’s like the xkcd comic that haunts every developer's dreams - the entire world relies on a project maintained by a single volunteer. It's funny because it's true, and it's terrifying because it's true.

But here's where it gets really interesting: this isn't just about managing risk. It's about shaping the future. The projects we choose to support today are the innovations we'll be using tomorrow. It's like we're all time travelers, using pull requests and commits to shape the future.
So, what's the solution? It's not simple, but it starts with dialogue. We need to be talking about this - not just after a major outage, but openly and constantly. We need to rethink how we understand, value, and support open source projects, contributors, and maintainers.
Because here's the truth: in the world of open source, we're all maintainers now. Every line of code we use is a responsibility we inherit. It's time we started acting like it.
But it goes beyond the dependencies we’re using. It’s also about looking to the future and the projects that are driving innovation. This is another one of the problems with open source software: redundancy of effort and a lack of visibility into past projects. We often rebuild solutions because we don’t know similar efforts were made before. Tools like [StarSearch](https://app.opensauced.pizza/star-search) can help by providing a comprehensive view of past and present contributions, preventing redundant work and saving valuable time and resources.
## The Promise of Informed Engagement: the Open Source AI Landscape
While we see these challenges across the open source landscape, they are especially significant in open source AI, a frontier that is moving incredibly quickly. Addressing these issues will impact the whole landscape, including AI.
[A LinkedIn post by Collin Wallace on open source AI](https://www.linkedin.com/posts/collin-wallace_opensource-ai-activity-7207906901576015872-W8Br?utm_source=share&utm_medium=member_desktop)
Supporting open source AI promotes transparency and accountability in AI development, allowing for greater examination of algorithms and models to address concerns about bias and ethical considerations. This openness is incredibly important as AI increasingly impacts our daily lives and decision-making processes.
Because researchers and developers worldwide can contribute to and build on existing projects, we’ll see benefits for the entire field. This collaborative approach also facilitates cross-domain applications, bringing AI to a variety of sectors.
Open source AI isn't just another tech trend. It's our best shot at democratizing one of the most powerful technologies of our time.
Take a look at the [Open Source AI workspace](https://app.opensauced.pizza/workspaces/79f026cd-4a32-43e7-a8b7-ea6e2b67333). Projects like [PyTorch](https://app.opensauced.pizza/s/pytorch/pytorch), [scikit-learn](https://app.opensauced.pizza/s/scikit-learn/scikit-learn), and [llama_index](https://app.opensauced.pizza/s/run-llama/llama_index) have high activity and low lottery factors.
By focusing on open source AI, we can tackle key open source issues and drive innovation by increasing contributor confidence, providing clear growth pathways, and ensuring project sustainability.
### The Sustainability Problem
One of the most pressing problems with open source software is the long-term sustainability of projects. To be sustainable, a project needs returning contributors and viewers who are willing to engage with and support the project as contributors. What does that mean? It means going beyond stars and forks as metrics of success and understanding whether those actions translate into contributions, both in the open source ecosystem at large and in your own project.
At OpenSauced, we call this [Contributor Confidence](https://opensauced.pizza/docs/features/repo-pages/#insights-into-contributor-confidence). For instance, if your project has a lot of stars and forks, but not a lot of those translate to contributions, then it indicates the project isn’t effectively converting initial interest into contributions.

In a conversation from [The Secret Sauce podcast](https://youtu.be/xno59bpro50?feature=shared), Jordan Harband discusses the sustainability problem and highlights a potential solution. He mentions that while companies have resources like money and people, individual maintainers often lack these, leading to project stagnation:
> “I get a lot of complaints from people and companies about where we're blocked. We can't, well, companies have money and people and people have time. And that's, those are things that I don't really have at the moment. So if anybody showed up and was able to come up with enough money that I could justify, like at the moment, not looking for a new job for a month so I can work on this instead, I would be happy to do it. And similarly, if any company showed up – one tried and then ended up bailing on it – If any company showed up and said, we have people to put on this. Can you help them? Like you have my axe, like I'm here. I will jump on video calls.”
There’s a disconnect between individual contributors and corporate resources. Companies have the means to support open source projects but often lack the commitment or follow-through to sustain them. Maintainers shouldn’t have to wait until the point of complete burnout to receive the support they deserve from users.
### The Contributor Experience Hurdle
Another one of the issues with open source software is the new contributor experience. [@bdougie](https://app.opensauced.pizza/u/bdougie?range=180) captures some of the challenges that we don’t talk about enough in [the Secret Sauce](https://www.youtube.com/watch?v=xno59bpro50&list=PLHyZ0Wz_A44VR4BXl_JOWSecQeWcZ-kS3&index=30):
> "sometimes we need like way more context and intro conversations around that, which is like how to be a good beginner.” The open source community has traditionally focused on initiatives like "good first issue" and "first-timers" to encourage new contributors. However, as the ecosystem continues to grow, the sheer volume of projects and contributors can be overwhelming for new contributors and maintainers alike. "There needs to be like a step before, which is like, this is cause as you mentioned before, like when you got an open source and same when I got an open source, it was a smaller ecosystem."
This lack of introductory guidance and often experience on the part of first-time contributors creates a contributor confidence gap. They might struggle to navigate the open source ecosystem, understand complex codebases, or feel unwelcome within established communities. The result? Potential contributors leaving before they can even begin to contribute. This ultimately weakens the foundation of open source.
But what if the contributors aren’t ready to contribute? What if they need more experience? Encouraging contributors to dive into projects before they are adequately prepared can be counterproductive. Instead of creating a supportive environment, we risk overwhelming them, leading to frustration and disengagement. Additionally, pushing unprepared contributors into active development roles can result in low-quality contributions that require extensive revisions or even rejection, hurting their confidence and willingness to participate.
This premature push not only affects the individual contributor’s experience but also the health of the project. Experienced maintainers may find themselves spending more time than they have correcting errors or providing basic guidance.
If we want to [manage successful communities in open source](https://opensauced.pizza/blog/managing-successful-communities-in-open-source), ultimately, the goal should be to nurture new contributors, providing them with the tools, guidance, and gradual introduction they need to build their skills and confidence.
To be a good contributor, you have to understand [how to be a maintainer](https://opensauced.pizza/blog/maintainer-course), so you can appreciate the challenges and responsibilities maintainers face. This will allow contributors to have a sense of self-awareness about their role in the project’s ecosystem and how their contributions can best support its long-term success.
## Embracing the Open Source Paradox: A Call to Action
Open source is messy, complicated, and absolutely essential. It's a beautiful contradiction - a system built on freely shared code that powers trillion-dollar industries. But here's the kicker: this paradox isn't a bug, it's a feature.
The very qualities that make open source so powerful - its openness, its collaborative nature, its ability to evolve rapidly - are the same ones that create its biggest challenges. It's like trying to build a rocket while it's already in flight. Exciting? Absolutely. Easy? Definitely not.
By leaning into these challenges - sustainability, contributor experience, dependency management - we're not just fixing problems. We're unlocking the next level of innovation. It's like we're collectively leveling up the entire tech ecosystem.
And let's talk about companies for a second. If you're still treating open source like a free buffet, you're doing it wrong. It's time to step up, get strategic, and start actively nurturing the communities you depend on. The future of tech isn't just open - it's wide open. And it's waiting for us all to make our mark.
| bekahhw |
1,895,118 | Securing CouchDB with Keycloak Behind Nginx Reverse Proxy – Part 1 | In the coming weeks, I will face the task of enriching the current CouchDB deployment in one of the... | 27,797 | 2024-06-20T18:23:44 | https://dev.to/kishieel/securing-couchdb-with-keycloak-behind-nginx-reverse-proxy-part-1-m0e | keycloak, couchdb, devops, learning | In the coming weeks, I will face the task of enriching the current CouchDB deployment in one of the projects using it for metadata storage with features like SSO integration and fine-grained access management. As the SSO service of the project uses Keycloak under the hood and I am relatively new to both Keycloak and CouchDB, I decided to make some proof of concept beforehand and share the results within this blog post series.
With the experiments done here, I aim to achieve three goals. First, securing access to the CouchDB instance using JWT authentication handler and Nginx as a reverse proxy. Second, providing a CLI utility that allows authenticating seamlessly using the OAuth2 authorization code flow with PKCE. And third, implementing the required solutions to maintain the authentication and authorization process for applications created and deployed with CouchApps.
Each of these goals will receive a dedicated blog post to address the given requirements and to create a proof of concept that can be further extended for production deployment.
### Introduction
To simulate the production environment where the solutions should be implemented at the end of the process, I decided to use Docker and Bitnami containers as they are quick and easy to set up. The overall architecture will be composed of the following services:
- **Keycloak** — an open-source identity and access management solution that provides user management and fine-grained authorization features.
- **Keycloak Config CLI** — a utility to ensure the desired configuration state for a realm based on a JSON/YAML file.
- **PostgreSQL** — a powerful, open-source object-relational database system that will be used as Keycloak’s data storage.
- **CouchDB** — a document-oriented, open-source database, access to which we will secure using the OpenID Connect protocol offered by Keycloak.
- **Nginx** — a web server that can also be used as a reverse proxy and load balancer. Technically, we will use the OpenResty distribution, which comes with a Lua just-in-time compiler, but I will often refer to it as Nginx either way.
The following diagram presents a visualization of the interactions between the parties involved in the whole process.

### Setting Up the Environment
Skipping further discussion, let’s dive straight into the implementation of the docker-compose file. Within this paragraph, we will implement and explain each service one by one.
Starting with Keycloak, we can write the following YAML:
```yaml
version: "3.9"
services:
keycloak:
image: "bitnami/keycloak:24.0.3"
environment:
KEYCLOAK_HTTP_PORT: "8080"
KEYCLOAK_CREATE_ADMIN_USER: "true"
KEYCLOAK_ADMIN: "admin"
KEYCLOAK_PROXY: "edge"
KEYCLOAK_ADMIN_PASSWORD: "admin"
KEYCLOAK_DATABASE_HOST: "postgres"
KEYCLOAK_DATABASE_USER: "postgres"
KEYCLOAK_DATABASE_PASSWORD: "postgres"
KEYCLOAK_DATABASE_NAME: "postgres"
KEYCLOAK_DATABASE_PORT: "5432"
depends_on:
- "postgres"
networks:
- "cks-network"
networks:
cks-network:
driver: "bridge"
```
As mentioned previously, it is based on one of Bitnami’s containers. Details about the available environment variables can be found on [Bitnami’s GitHub](https://github.com/bitnami/containers/tree/main/bitnami/keycloak). Here, we set up the default admin account and database credentials. Additionally, we set the proxy option to "edge", which means the proxy in front of Keycloak terminates TLS and talks to Keycloak over plain HTTP rather than HTTPS. This is acceptable as the Nginx reverse proxy will handle SSL for us.
This container depends on the PostgreSQL container, as Keycloak will use it as data storage, and belongs to the `cks-network`, the same as any other services we will add next.
For the next container, we will have the Keycloak Config CLI.
```yaml
services:
keycloak-config-cli:
image: "bitnami/keycloak-config-cli:5.12.0"
environment:
KEYCLOAK_URL: "http://keycloak:8080"
KEYCLOAK_USER: "admin"
KEYCLOAK_PASSWORD: "admin"
IMPORT_FILES_LOCATIONS: "/config/*"
depends_on:
- "keycloak"
volumes:
- "./keycloak/master.yaml:/config/master.yaml"
networks:
- "cks-network"
```
Again, we have Bitnami’s container, described in detail on [GitHub](https://github.com/bitnami/containers/tree/main/bitnami/keycloak-config-cli). What we need to set up here are basically the environment variables related to Keycloak access and the config directory. We also specify that this container belongs to our default network and depends on the Keycloak container. Furthermore, we attach a volume where the YAML file with the realm configuration will be placed in a later section of this blog post.
Furthermore, we will have PostgreSQL, which in this case is strongly related to Keycloak as well.
```yaml
services:
postgres:
image: "bitnami/postgresql:15.6.0"
environment:
POSTGRESQL_USERNAME: "postgres"
POSTGRESQL_PASSWORD: "postgres"
POSTGRESQL_DATABASE: "postgres"
volumes:
- "cks-postgres-data:/bitnami/postgresql"
networks:
- "cks-network"
volumes:
cks-postgres-data:
driver: "local"
```
Quite simple. Once more, it’s [Bitnami’s container](https://github.com/bitnami/containers/tree/main/bitnami/postgresql) with the default database access configuration stored in environment variables. Additionally, we have a volume attached to prevent data loss in case of a container restart. Same network as previously.
As we’ve configured the first database, we can configure another one, so now it’s time for CouchDB.
```yaml
services:
couchdb:
image: "bitnami/couchdb:3.3.3"
environment:
COUCHDB_USER: "admin"
COUCHDB_PASSWORD: "admin"
COUCHDB_SECRET: "top-secret"
COUCHDB_BIND_ADDRESS: "0.0.0.0"
COUCHDB_PORT_NUMBER: "5984"
volumes:
- "cks-couchdb-data:/bitnami/couchdb"
- "./couchdb/10-config.ini:/opt/bitnami/couchdb/etc/local.d/10-config.ini"
networks:
- "cks-network"
volumes:
  cks-couchdb-data:
driver: "local"
```
One last time, we use Bitnami’s container version, the description of which can be found [here](https://github.com/bitnami/containers/tree/main/bitnami/couchdb). In the environment variables, we have the default admin credentials, the secret used for cookie encryption, and the startup config for CouchDB. Here we have a persistent volume for data as well, plus a volume with a config file that will be described in detail later.
Last, but not least, there will be Nginx — our reverse proxy.
```yaml
version: "3.9"
services:
nginx:
build: "./nginx"
ports:
- "80:80"
- "443:443"
volumes:
- "./nginx/certs:/opt/bitnami/openresty/nginx/conf/bitnami/certs:ro"
- "./nginx/server_blocks:/opt/bitnami/openresty/nginx/conf/server_blocks:ro"
- "./nginx/lua:/opt/bitnami/openresty/lua:ro"
depends_on:
- "couchdb"
- "keycloak"
networks:
- "cks-network"
```
This service will also use the Bitnami container, but we will add a few packages there. Don’t worry about it now as we will cover it in a separate section. Nginx is also the single service that exposes ports so we can communicate with it. It depends on both the CouchDB and Keycloak services and belongs to the same network as all other containers. In volumes, we have separate directories attached for SSL certificates, server block definitions, and Lua scripts.
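Before we move on, it helps to see the directory layout all these bind mounts imply. Derived from the compose file above (file names only; the contents come in the following sections), the project looks roughly like this:

```
.
├── docker-compose.yml
├── couchdb/
│   └── 10-config.ini
├── keycloak/
│   └── master.yaml
└── nginx/
    ├── Dockerfile
    ├── certs/
    ├── lua/
    └── server_blocks/
```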
Now that the overall infrastructure is ready, we can focus on configuring individual services.
### Configuring Keycloak
Theoretically, we could configure the entire Keycloak realm manually by clicking appropriate options in the GUI. However, I believe that posting all the screenshots here would not be practical as there would be a lot of them, and they may change over time. That’s why I decided to incorporate the Keycloak Config CLI. Thanks to this utility, we can store the configuration in a convenient YAML file that will be simple to present and describe.
In our application, we will have only one realm called “master”, and the initial setup looks as follows:
```yaml
realm: "master"
attributes:
frontendUrl: "https://auth.oblivio.localhost"
```
This is not very interesting but defines the realm name and the URL of the frontend application. We will later define an appropriate server block in Nginx to proxy this particular subdomain to the Keycloak container.
If you are wondering about the part with “oblivio”, it is nothing special. I just decided to name this application somehow and chose this particular word. It means “forgetfulness” or “loss of remembrance.”
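Once the whole stack is up (we start it in the final section), a quick sanity check for this setting is Keycloak's standard OpenID Connect discovery document; the endpoint path below is part of the OIDC spec, not something specific to this project:

```shell
# The issuer should report the public frontend URL,
# not the internal container address.
curl -sk https://auth.oblivio.localhost/realms/master/.well-known/openid-configuration | jq .issuer
# "https://auth.oblivio.localhost/realms/master"
```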
The next part of our configuration is the group definitions. We will have one main group for CouchDB users with two subgroups, for admins and regular users. Later, using an attribute mapper, we will add the appropriate roles to the access token so CouchDB can use it to properly identify user roles.
```yaml
groups:
- name: "couchdb"
path: "/couchdb"
subGroups:
- name: "admins"
path: "/couchdb/admins"
attributes:
_couchdb.roles:
- "_admin"
- name: "users"
path: "/couchdb/users"
attributes:
_couchdb.roles:
- "_user"
```
The attribute mentioned is called `_couchdb.roles`, and it is the default property name used by CouchDB to infer user roles from the access token, but it can also be changed to another value if needed.
Later, we have clients configuration. For now, we have only one client which will be used by Nginx to authorize access to CouchDB, but in the next part of the series, we will add one more.
```yaml
clients:
- clientId: "couchdb-proxy"
name: "CouchDB Proxy"
publicClient: "false"
clientAuthenticatorType: "client-secret"
secret: "32scbZbgGNSaVOAAuZHgYeTjdQrkfwTh"
redirectUris:
- "https://couchdb.oblivio.localhost/*"
standardFlowEnabled: "true"
directAccessGrantsEnabled: "false"
optionalClientScopes:
- "couchdb"
- "profile"
- "email"
```
This client is confidential and has a secret configured, which Nginx will store and use to authenticate against Keycloak. The CouchDB Proxy client allows only the authorization code flow known from OAuth2 and permits redirect URIs only under the subdomain where the CouchDB instance will be available.
Additionally, it comes with three optional client scopes. Email and profile scopes are shipped with the default Keycloak config, but the scope for CouchDB is custom, and we can define it with the following YAML.
```yaml
clientScopes:
- name: "couchdb"
description: "CouchDB"
protocol: "openid-connect"
protocolMappers:
- name: "couchdb-roles"
protocol: "openid-connect"
protocolMapper: "oidc-usermodel-attribute-mapper"
config:
user.attribute: "_couchdb.roles"
claim.name: "_couchdb\\.roles"
jsonType.label: "String"
userinfo.token.claim: "true"
access.token.claim: "true"
id.token.claim: "false"
multivalued: "true"
aggregate.attrs: "true"
```
This scope contains a protocol mapper for OpenID Connect which defines to which property the attribute we added to the group will be mapped. Notice that the claim name contains two backslashes as otherwise the dot would mean that “roles” should be an object inside of the “_couchdb” property, which is not what we want. Backslashes prevent this behavior and store the whole string as one property name. As the user may have more than one role, we set this claim as a multivalued and aggregated attribute. At the end, we define in which tokens it should be present.
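To make the mapping concrete, here is what the relevant slice of a decoded access token payload would look like for a member of the "/couchdb/admins" group. This is a hypothetical, heavily trimmed example (real tokens carry many more claims); note that "_couchdb.roles" ends up as a single flat property name, not a nested object:

```json
{
  "iss": "https://auth.oblivio.localhost/realms/master",
  "iat": 1718899940,
  "exp": 1718900000,
  "sub": "287e5d17-4937-48b7-a6fe-cc2029c1cf68",
  "_couchdb.roles": ["_admin"]
}
```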
The last part of the configuration is the RSA key configuration that will be later used to sign generated tokens.
```yaml
components:
org.keycloak.keys.KeyProvider:
- name: "rsa"
providerId: "rsa"
config:
active: ["true"]
enabled: ["true"]
priority: ["1000"]
algorithm: ["RS256"]
kid: ["xvAsHaF2w0M1y9GG6bmFannhp9aFLKvHQRaAAb8gUYc"]
privateKey: ["-----BEGIN PRIVATE KEY-----\nMIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQDM98i2/CFRiFYNlUtJ5ppUNyZUOa2+7SMnya3tzfrPOEVma6AJAMJ9YR2CL6SIkyz6q5RqnhQSXTzvPO9OasKuXLWtxpjVZRawCXoCciyaJTLe8qcmb6SOCOsjRSiGB1PaivJ/7NbCDiP6r8BxX4TXsYfdGW2EDBot+klxG6a+FObCA7KJ1bp/yPbgpP+mNyj7P8lG22E3USRjE3g8ag/J8b3UK+Azu1yBmdYAEPG1qz8q46tgF9qJiDo7QNDroRDLxoclypMsHJ3AIbJh5lquAl4uTALYMLI2foKJqXlc+JZ9tdTzxYg04R7SKuAcizdjZ2VccJNpGySGs5i8XguTAgMBAAECggEAOjpiQOmbpYfvumghPVtPmIEaWG8SVt0TUahPyvDrQZcg0BnfGu+mUOwX7/YM7eexrXy06x0BYr4uI2DSMxrNN6+KxVVX8beIHHZ0vOEmnpvWudOBfL/WpasO8bQh8QF/5uP2RDVKRVKzEfJ/3zVdjdEXYc5peEvf3BPwbTuHwRO4F69hgCZi8saBNXBinnOwQ8MSsUeA4RsC7+WcxygufBhNqjkqrYbpznkaZI5nrVdw7mb5E9KcOxbg0BUWnz141gPuUpu2O0iFiiAZoSlDtIwKCtdcvc2UJMYbXK9ORsIscRwP6b8T6O3Uhq1zkXQyjtLWbrfcpoNGGJ2udRFDEQKBgQD/UUVugyEOP98+S5e/2d2W+rk96HX7CKMQwj/ZA9NETugGAiD7fuUpbu3NzrAN7tsTXGOyArBEfdcXwjhvg52WhO9eOB/mcaiXEWVTk89ZrbcvZknDljKWM5zXOMvVZXAK6ci53jfVn5RLA2RIjK5mds1pLXIWgDHPrXFI7Nkb/wKBgQDNhA4/aaAyko8NiAWvk0hYZYQCFSH3YBl28lt90zzDjoZaQd/s8mpaRO4TY+KwFGBEznlFa28e1g0YzpZh13+V1ss4WT9NV/3tux2rblJWa2kmYbA/PeQDLHVfC/fq46T90uUW0wRIkc+nVTKG94Oo+tPUERjlmSJbzsjUDJLgbQKBgCXW9LRhSNfkzYBdEbuEXZwPwr6TIlE3QXutXmsabwhTrX2eeSbs8qfGYgY7mMon2V4wNjJexaMRB3zk8xpL5mI1h4huRwQPWk4xbNQLNxLydRDYVxxeuVabhaY8K7GP3CAx7+bkMWA+y2qmsQkzmHFlMCJjcuI0060U5pJJUBAfAoGALI30iMrdcBlV6hkTIn1Lsd5QQCNUucybuK3SJ/Ujt0Gu3uJpKXVkmS1Yb9u3yXShaklZATPJY2YEcNxYvd16S4HFjPHMR3hMFL38MK46K4IdybRkAVHpnMaGq5Rsqv+vRVfzUn9s7k6uNhjCW4BNitTWF6OdQilwyXaLE22magECgYAx1tGChvGQM3rYyJDA22ZNU4b+olc2bBJ0v45EX0unjGseuzPTKQRaGp8LqgByXcMuZqCCidsvlfrrz16hGHnsqPQFSV4ZL4D1pOKshmWhscLtF10FeC8z0QoJCNFaPuRkMCXyx+X+XjGsdfKRO/84z/7FfCI0t2QvvGWSnRpKUw==\n-----END PRIVATE KEY-----n"]
certificate: ["-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIUVv9/pUd6omHb9BhGmfZb/jVPmz4wDQYJKoZIhvcNAQELBQAwUzELMAkGA1UEBhMCUEwxEzARBgNVBAgMClNvbWUtU3RhdGUxHTAbBgNVBAcMFEdyZWF0ZXIgUG9sYW5kV2Fyc2F3MRAwDgYDVQQKDAdPYmxpdmlvMB4XDTI0MDUxNzE0NTkxNloXDTI2MDUxNzE0NTkxNlowUzELMAkGA1UEBhMCUEwxEzARBgNVBAgMClNvbWUtU3RhdGUxHTAbBgNVBAcMFEdyZWF0ZXIgUG9sYW5kV2Fyc2F3MRAwDgYDVQQKDAdPYmxpdmlvMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAzPfItvwhUYhWDZVLSeaaVDcmVDmtvu0jJ8mt7c36zzhFZmugCQDCfWEdgi+kiJMs+quUap4UEl087zzvTmrCrly1rcaY1WUWsAl6AnIsmiUy3vKnJm+kjgjrI0UohgdT2oryf+zWwg4j+q/AcV+E17GH3RlthAwaLfpJcRumvhTmwgOyidW6f8j24KT/pjco+z/JRtthN1EkYxN4PGoPyfG91CvgM7tcgZnWABDxtas/KuOrYBfaiYg6O0DQ66EQy8aHJcqTLBydwCGyYeZargJeLkwC2DCyNn6Cial5XPiWfbXU88WINOEe0irgHIs3Y2dlXHCTaRskhrOYvF4LkwIDAQABo1MwUTAdBgNVHQ4EFgQU9poNBVHTqAYUxc5c4naQhd2kOOswHwYDVR0jBBgwFoAU9poNBVHTqAYUxc5c4naQhd2kOOswDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEANcDxRDTyi1VLSjA4DFm/s0aSYSRtiGJoYxCSjxW+IthzMDmV6kuI7c/n+O5gIOTBQ2gCF9evbVbFcF/nYq4zKo5WvCfrZ8Hekvjdm5TOSKMRGWaoydOsVsRPlvNN2q+iVFzmymPixWRblLzbYG1T0lRn6tLn2BKH0qkNUUg68ljA8qYgvulYo5FzSLB1KgZRjDyyDS5+IT/vr/M2H/4h1eCPdD2JROfxf4+3OKBXg5N2Y6DJ/mwNqe+8WGOLmaPDV6GaBVR8BcryYBohrEwYwouhqvNYsk5c1wLBS+k4T1PHC53I/9oGrdhX9jDQiHvQ2CzTp5e9rscbVr71nv03ug==\n-----END CERTIFICATE-----\n"]
```
I understand that at this point you may feel uneasy about hard-coding the private key here and the secret in the earlier part. Don’t worry, I feel the same way; but as this is mostly a proof of concept, I decided this would be simpler for now, so we will stick with it.
And that’s it, the whole configuration for the Keycloak realm. Later, we will also add a user to test it, but for now, we can move on to the CouchDB configuration.
### Preparing CouchDB for SSO Integration
Since the CouchDB configuration is shorter than the previous one, I will simply put the entire config below and briefly describe it.
```ini
[couchdb]
uuid = 5f1a34cf3b35423690c2474a7527e2ff
[chttpd]
authentication_handlers = {chttpd_auth, jwt_authentication_handler}, {chttpd_auth, cookie_authentication_handler}, {chttpd_auth, default_authentication_handler}
require_valid_user = false
[jwt_auth]
required_claims = exp, iat
roles_claim_path = _couchdb\.roles
[jwt_keys]
rsa:xvAsHaF2w0M1y9GG6bmFannhp9aFLKvHQRaAAb8gUYc = -----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAzPfItvwhUYhWDZVLSeaaVDcmVDmtvu0jJ8mt7c36zzhFZmugCQDCfWEdgi+kiJMs+quUap4UEl087zzvTmrCrly1rcaY1WUWsAl6AnIsmiUy3vKnJm+kjgjrI0UohgdT2oryf+zWwg4j+q/AcV+E17GH3RlthAwaLfpJcRumvhTmwgOyidW6f8j24KT/pjco+z/JRtthN1EkYxN4PGoPyfG91CvgM7tcgZnWABDxtas/KuOrYBfaiYg6O0DQ66EQy8aHJcqTLBydwCGyYeZargJeLkwC2DCyNn6Cial5XPiWfbXU88WINOEe0irgHIs3Y2dlXHCTaRskhrOYvF4LkwIDAQAB\n-----END PUBLIC KEY-----\n
```
Moving from top to bottom, we have the UUID, which serves as the unique identifier for the instance. Later, we have a list of possible authentication handlers where we add the JWT authentication handler as it is not enabled by default. Because our users are no longer stored in the CouchDB instance, we also have to disable user validation.
Further, we have the JWT config, where we set which claims are required and the path to the claim where the user’s roles are stored. Notice that there is a backslash here as well, serving a similar purpose to the two backslashes described in the Keycloak configuration section. Lastly, we have the public key derived from the private key used to sign the access tokens, so CouchDB knows it can trust tokens issued by Keycloak.
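If you later want to confirm that CouchDB actually picked these settings up, its node-local configuration API can read them back. This is a hedged example: the `_node/_local/_config` endpoint is standard CouchDB, but it assumes curl is available inside the container and uses the admin credentials from our compose file:

```shell
# Read the jwt_auth section back from the running node.
docker compose exec couchdb \
  curl -s http://admin:admin@localhost:5984/_node/_local/_config/jwt_auth
# Should echo the required_claims and roles_claim_path we set above.
```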
### Nginx as a Reverse Proxy
We are almost there, but before we are ready to test this solution, we have to configure one last thing — the reverse proxy. As this part will be relatively long, it will be split into three subsections for building the Docker image, configuring Nginx’s server blocks, and writing Lua scripts for authentication.
#### Building the Docker Image
The Bitnami image for OpenResty is good as is, but it lacks a few Lua packages that we will need for integration with Keycloak.
```dockerfile
FROM bitnami/openresty:1.25.3-1
RUN opm get zmartzone/lua-resty-openidc
RUN opm get ledgetech/lua-resty-http
RUN opm get bungle/lua-resty-session=3.10
```
We are mostly interested in “zmartzone/lua-resty-openidc”, as it implements the OpenID Connect Relying Party functionality we will rely on for authorization with Keycloak. The other two packages are its dependencies. As of the time of writing, the solution does not work with “bungle/lua-resty-session” versions newer than 3.10, so it is pinned to that version here.
#### Configuring Server Blocks
Now we can start configuring server blocks for our Nginx container. We will start simple with the following block.
```nginx
server {
server_name _;
listen 80;
listen [::]:80;
return 301 https://$host$request_uri;
}
```
This part listens on port 80 for all domains and subdomains and redirects any request sent over HTTP to the secure version of the protocol. We will later generate self-signed certificates so we can use HTTPS on localhost.
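Once the stack is running, this block is trivially verifiable from the host; any plain-HTTP request should come back as a permanent redirect:

```shell
# Only the status line and Location header are interesting here.
curl -sI http://couchdb.oblivio.localhost/ | grep -iE '^(HTTP|location)'
# HTTP/1.1 301 Moved Permanently
# Location: https://couchdb.oblivio.localhost/
```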
Next, we have a server block for the Keycloak instance.
```nginx
server {
server_name auth.oblivio.localhost;
listen 443 ssl;
listen [::]:443 ssl;
http2 on;
ssl_certificate bitnami/certs/server.crt;
ssl_certificate_key bitnami/certs/server.key;
location / {
proxy_pass http://keycloak:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
```
This part listens on port 443 for both IPv4 and IPv6 and handles requests sent to the subdomain “auth.oblivio.localhost”; as you may remember, we set this address earlier as Keycloak’s frontend URL. There are also paths to the SSL certificate and private key, which we will generate at the end of this section. Finally, we pass all locations for this server on to the Keycloak container.
The last block here will be for the CouchDB.
```nginx
server {
server_name couchdb.oblivio.localhost;
listen 443 ssl;
listen [::]:443 ssl;
resolver 127.0.0.11 valid=10s;
http2 on;
ssl_certificate bitnami/certs/server.crt;
ssl_certificate_key bitnami/certs/server.key;
location / {
access_by_lua_file /opt/bitnami/openresty/lua/access.lua;
proxy_pass http://couchdb:5984;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
```
Once more, we have a similar setup. We configure the subdomain to handle, “couchdb.oblivio.localhost” in this case, ports to listen, and paths to SSL certificate and private key.
What’s different here is the “resolver” directive. It points to the special Docker DNS resolver, and it is needed because of subrequests that “zmartzone/lua-resty-openidc” will send for authentication purposes. Additionally, there is the “access_by_lua_file” directive, which points to the Lua script where we will create authentication logic in the next section.
Before moving on to the Lua part, let’s generate locally trusted certificates using the “mkcert” utility. It’s relatively simple: mkcert maintains its own local certificate authority and installs it into the system trust stores, so certificates it issues are trusted by our browser.
```shell
mkcert -cert-file nginx/certs/server.crt -key-file nginx/certs/server.key oblivio.localhost \*.oblivio.localhost
```
We only have to define the path where the certificate and private key will be stored, as well as which domains it will protect. Quite simple, isn’t it?
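One caveat: certificates issued by mkcert are only trusted after its local CA has been registered on your machine, which is a one-time step. If you have never used mkcert before, run this first:

```shell
# Creates the local CA if needed and adds it to the system
# (and browser) trust stores.
mkcert -install
```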
#### Lua Script for Authentication
Here we will start by defining options for “zmartzone/lua-resty-openidc” that will match our specific use case.
```lua
local opts = {
redirect_uri = "/callback",
discovery = {
issuer = "https://auth.oblivio.localhost/realms/master",
authorization_endpoint = "https://auth.oblivio.localhost/realms/master/protocol/openid-connect/auth",
end_session_endpoint = "https://auth.oblivio.localhost/realms/master/protocol/openid-connect/logout",
token_endpoint = "http://keycloak:8080/realms/master/protocol/openid-connect/token",
jwks_uri = "http://keycloak:8080/realms/master/protocol/openid-connect/certs",
userinfo_endpoint = "http://keycloak:8080/realms/master/protocol/openid-connect/userinfo",
revocation_endpoint = "http://keycloak:8080/realms/master/protocol/openid-connect/revoke",
introspection_endpoint = "http://keycloak:8080/realms/master/protocol/openid-connect/token/introspect"
},
client_id = "couchdb-proxy",
client_secret = "32scbZbgGNSaVOAAuZHgYeTjdQrkfwTh",
scope = "openid couchdb",
renew_access_token_on_expiry = true,
access_token_expires_in = 60,
accept_none_alg = false,
accept_unsupported_alg = false,
session_contents = {
id_token = true,
access_token = true,
refresh_token = true
}
}
```
There are quite a lot of them, but going top to bottom, you can see that first we defined the callback where the user should be redirected after successful authentication. In our case, it is a path relative to the current subdomain, which is “couchdb.oblivio.localhost”, matching the redirect URIs we allowed for the client in Keycloak. Next, we have the URL endpoints for the OpenID Connect provider. Here you may notice that some of them point directly to the container, while others use the full subdomain. This depends on who will actually be using the given URL: if it is meant to be used by the browser, we go with the subdomain; if it is meant to be used by the library itself, we use the direct container address.
Later we have to choose which client we want to use and provide its secret, as well as the scopes that we want to use. We have the “couchdb” scope there, which will add the CouchDB roles based on the group configured in Keycloak to the access token. Further, we have token configuration like expiration time, whether it should be renewed when expired, and if unsupported algorithms are allowed.
At the end, we define session content, which means what information will be stored in the session. In our case, we want to have the ID Token, access token, and refresh token.
When we have options prepared, we can invoke the authentication function from “zmartzone/lua-resty-openidc” and benefit from it doing all the job for us.
```lua
-- Run the OpenID Connect flow; unauthenticated users get redirected to Keycloak.
local res, err = require("resty.openidc").authenticate(opts)

-- Bail out with a 500 if the authentication flow failed.
if err then
    ngx.status = 500
    ngx.say(err)
    ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end

-- Forward the access token to CouchDB as a Bearer header.
if res then
    ngx.req.set_header("Authorization", "Bearer " .. res.access_token)
end
```
It is quite simple. If the method invocation ends with an error, we also exit the whole process with an internal server error. Otherwise, if we have a successful response from the authentication provider, we add the authorization header of “Bearer” type with the access token returned by Keycloak to the request so CouchDB could use it to verify user identity.
### Running and Testing the Setup
Whew — the configuration part is already behind us! Now we can start testing the solutions proposed. We can initiate the entire application with the following command:
```shell
docker compose up -d
```
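Before opening the browser, it’s worth checking that the containers came up and that the realm import went through; the service names are the ones from our compose file:

```shell
# keycloak-config-cli is a one-shot job, so it will exit once the
# realm import finishes; everything else should stay up.
docker compose ps

# The import log tells you whether the "master" realm was applied.
docker compose logs keycloak-config-cli
```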
Hopefully, if everything went well, our application should already be up and running. You can confirm it by navigating to “https://auth.oblivio.localhost” in your web browser. You should see the default Keycloak authentication page there.

As we are here, we can log in as the default administrator. We set its credentials in Docker environment variables, so now it is time to make use of them.
If you signed in successfully, you can go through all pages to check whether all options from the YAML file we prepared are present as expected here. What’s more, I would like you to also create a new user. We will try to sign in to CouchDB with its credentials.
To do this, go to the “Users” tab and click the “Add user” button. Here, click the “Email verified” toggle and set the user details according to your preference. Then, join this user to the “/couchdb/admins” group by clicking the “Join Groups” button and checking the correct group.

Click create and go to the “Credentials” tab. Then press the “Set password” button and create a password of your choice. For simplicity, disable the “Temporary” option.

Perfect! You just created a new user. Now open a private browser tab and go to “https://couchdb.oblivio.localhost” to check whether you can log in to CouchDB with its credentials. As you are not authenticated yet, you should be redirected to “https://auth.oblivio.localhost”, where you have to provide the credentials you set previously.

Click the “Sign In” button, and you will be redirected to the CouchDB welcome page. There is only JSON with some basic information about the CouchDB instance, but from this place, you can go to the “/_session” path, where you can see information about the authenticated user, or to “/_utils” to access Fauxton, which is a GUI for CouchDB management. From “/_session”, you should receive JSON similar to the one below:
```json
{
"ok":true,
"userCtx":{
"name":"287e5d17-4937-48b7-a6fe-cc2029c1cf68",
"roles":["_admin"]
},
"info":{
"authentication_handlers":["proxy","jwt","cookie","default"],
"authenticated":"jwt"
}
}
```
Here you can find out that you are authenticated as the user with the given UUID. This is a unique identifier assigned to the user by Keycloak. There is also information about user roles. If you were to join another group we created in Keycloak, you would see the “_user” role here. At the end, there is information about available authentication handlers and which one was used to authenticate the current user.
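As a side note, the same endpoint shows why plain API clients can’t get through this proxy yet: with no session cookie, the Lua layer answers with a redirect to Keycloak instead of JSON. This is exactly the gap the CLI utility planned for the next part of this series is meant to close:

```shell
# No session cookie, so lua-resty-openidc redirects us to the login page.
curl -skI https://couchdb.oblivio.localhost/_session | grep -iE '^(HTTP|location)'
# HTTP/2 302
# location: https://auth.oblivio.localhost/realms/master/protocol/openid-connect/auth?...
```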
### Summary
Yeah! We have completed the first part of the series. With this blog post, we have discovered that by using Keycloak, Nginx, Lua, and a bunch of configuration files, we are able to access the CouchDB instance with our own Single Sign-On managed by Keycloak. It was quite a long process, but in the end, we achieved what we set out to accomplish in this part. In the next parts, we will aim to extend this solution to also be able to access the CouchDB instance from the shell with our custom-made curl wrapper and to ensure that this solution is able to work with CouchApps as well.
If you need all the code from this article in one place, you can find it in my [GitHub repository](https://github.com/kishieel/couchdb-keycloak-sso).
At this point, thank you for reading this article. I would love to hear your thoughts about this solution. Whether you work actively with CouchDB or Keycloak, can you spot weaknesses in this solution? Or maybe you would improve something? I would love to hear about it in the comments.
Don’t forget to check out my other articles for more tips and insights and other parts of this series when they are created. Happy hacking!
| kishieel |
1,895,128 | Short Overview: Frontend Architectures 🧩 | Frontend architectures define how the components of a web application are organized and interact.... | 0 | 2024-06-20T18:22:26 | https://dev.to/buildwebcrumbs/short-overview-frontend-architectures-4778 | webdev, javascript, frontend | Frontend architectures define how the components of a web application are organized and interact. Here are some common types:
### Monolithic Architecture 🏛️
- **Single Codebase**: Everything is in one place.
- **Pros**: Simple to set up and maintain.
- **Cons**: Hard to scale, changes affect the entire app.
### Micro Frontends 🧩
- **Multiple Independent Frontends**: Each frontend is a self-contained micro-app.
- **Pros**: Scalable, team autonomy.
- **Cons**: Complex to integrate, potential for inconsistencies.
### Single-Page Application (SPA) 🖥️
- **Dynamic Loading**: Loads a single HTML page and updates dynamically.
- **Pros**: Fast and smooth user experience.
- **Cons**: Initial load time can be slow, SEO challenges.
### Multi-Page Application (MPA) 📄📄
- **Separate Pages**: Each interaction loads a new page.
- **Pros**: Better SEO, simpler initial load.
- **Cons**: Slower navigation, more server load.
### Server-Side Rendering (SSR) 🏗️
- **Pre-Rendered on Server**: Pages are generated on the server before being sent to the client.
- **Pros**: Better for SEO, fast initial load.
- **Cons**: More complex server setup, potentially slower page transitions.
### Static Site Generation (SSG) 🗂️
- **Pre-Built Pages**: Pages are pre-rendered at build time.
- **Pros**: Fast load times, great for SEO.
- **Cons**: Not suitable for highly dynamic content.
### Progressive Web App (PWA) 📲
- **Hybrid of Web and Mobile**: Web apps that provide a mobile app-like experience.
- **Pros**: Offline capabilities, app-like experience.
- **Cons**: Limited access to device features compared to native apps.
Each **frontend** architecture comes with its own set of benefits and trade-offs. The optimal choice hinges on your project’s unique requirements and goals. Understanding these different architectures enables you to make informed decisions that lead to the creation of robust, scalable, and maintainable web applications.
**Thank you.**
[](tools.webcrumbs.org) | m4rcxs |
1,895,121 | Looking for coding friends | Hello wonderful people of Dev.to! Let me tell you a bit of my backstory. I recently finished a... | 0 | 2024-06-20T18:19:42 | https://dev.to/dlocodes/looking-for-coding-friends-jae | community, webdev, frontend, react | Hello wonderful people of Dev.to!
Let me tell you a bit of my backstory. I recently finished a coding bootcamp in Fall 2023. I finished as one of the top students in my cohort and was always looking to improve at the time. I was hooked. I took a major break, thinking I could do so and pick up right where I left off. I came back from my vacation unmotivated, thinking I'd still take a couple more months off because that's just how my brain works.
I realized how wrong I was. I opened up VSCode and understood some things, but at the same time I realized that I wasn't nearly as efficient as I once was. Yes, I know this was my fault; I put myself in this position. Still, I started coding again and realized why I might have stopped before and felt unmotivated coming back. This time, though, the journey is so much more satisfying, because I'm doing it because I want to, and my thought process isn't skewed by monetary motivation.
What do I mean? Well, I'm working on my own now. I don't have the feeling of 'I have to go to this class because if not, I'm going to miss this'. Rather, it's 'I want to spend time on this because I want to get better.'
Anyways, I've created a Discord community server to help myself stay motivated and helping motivate others along the way. While it's a very small community, I do have plans to help it grow eventually. If you're new to coding, I encourage you to come, join and introduce yourself. Heck, you might even be the smartest one in the room =)
Here is the link: https://discord.gg/XYtUhynk | dlocodes |
1,895,120 | Fat Burn Active Supplement Review | Fat Burn Active is a weight loss supplement that claims to help you burn fat, manage weight, and... | 0 | 2024-06-20T18:17:16 | https://dev.to/trustreviews/fat-burn-active-supplement-review-3anc | review | Fat Burn Active is a weight loss supplement that claims to help you burn fat, manage weight, and improve your metabolism. However, there is limited independent research on the effectiveness of this specific supplement.
Here's a closer look at Fat Burn Active:
**Ingredients**
* Acetyl L-Carnitine: This amino acid helps your body transport fat into cells where it can be burned for energy.
* Green Tea Extract: Green tea extract is a natural source of caffeine and antioxidants, which may help boost metabolism and promote fat burning.
* Guarana Seed Extract: Guarana is a plant similar to coffee that contains caffeine. It may increase energy expenditure and help with weight loss.
* African Mango Seed Extract: This extract may help regulate blood sugar levels and reduce appetite.
* Cayenne Pepper Extract: Cayenne pepper contains capsaicin, which may help boost metabolism and increase calorie burning.
**Potential Benefits**
* Increased fat burning
* Improved metabolism
* Reduced appetite
* More energy
**Potential Risks**
* Fat burners can interact with other medications you are taking.
* Some of the ingredients in Fat Burn Active can cause side effects such as anxiety, insomnia, and upset stomach.
* There is limited scientific evidence to support the claims that Fat Burn Active works.
**Overall**
Fat Burn Active is a weight loss supplement that contains some ingredients that have been shown to promote weight loss. However, there is limited independent research on the effectiveness of this specific supplement.
Here are some general tips for safe and effective weight loss:
* Eat a healthy diet that is low in calories and processed foods.
* Exercise regularly.
* Talk to your doctor before taking any weight loss supplements.
[View full content ](https://thedailylifereview.com/content-review-00) | trustreviews |
1,895,119 | Buy Bering Watches Online For Men and Women | Sai Creations Watch | In a world where trends come and go, Bering Watches stand as a beacon of timeless elegance and... | 0 | 2024-06-20T18:15:07 | https://dev.to/saicreationswatches/buy-bering-watches-online-for-men-and-women-sai-creations-watch-152p | watches |
In a world where trends come and go, Bering Watches stand as a beacon of timeless elegance and enduring quality. Inspired by the pristine beauty of the Arctic, Bering combines minimalist Danish design with durable materials to create timepieces that are both stylish and functional. Whether you’re looking for a sophisticated accessory for a formal occasion or a reliable companion for everyday wear, Bering offers a diverse range of watches for both men and women.
**A Glimpse into Bering’s Philosophy**
Bering Watches draw their inspiration from the Arctic landscape, where the beauty lies in simplicity and purity. This philosophy is evident in every aspect of their design. The brand emphasizes clean lines, sleek forms, and a modern aesthetic that transcends fleeting fashion trends. By blending classic elegance with contemporary minimalism, Bering creates watches that are always in style.
**Bering Watches for Men: Understated Sophistication**
Men’s watches from Bering exude a sense of understated sophistication. They are perfect for those who appreciate the finer things in life but prefer a subtle approach to luxury. For the discerning men of Noida, Bering offers a range of watches that embody understated elegance and precision.
**1. Classic Collection:**
The Classic Collection features watches with ultra-slim cases, scratch-resistant sapphire crystal, and high-quality stainless steel. These timepieces are ideal for the modern gentleman who values precision and style.
**2. Solar Collection:**
For the environmentally conscious, the Solar Collection offers watches powered by light. These watches combine innovative technology with sleek design, ensuring you never have to worry about replacing a battery.
**3. Automatic Collection:**
For those who admire the craftsmanship of traditional watchmaking, the Automatic Collection showcases the intricate beauty of mechanical movements. These watches are not only functional but also a testament to exquisite engineering.
**Bering Watches for Women: Graceful Elegance**
Women’s watches from Bering are the epitome of graceful elegance. Designed to complement any outfit, they are perfect for both everyday wear and special occasions.
**1. Ceramic Collection:**
The Ceramic Collection is known for its luxurious materials and modern design. The use of high-tech ceramic, which is both lightweight and scratch-resistant, ensures these watches remain pristine over time.
**2. Classic Collection:**
Similar to the men’s line, the Women’s Classic Collection features timeless designs with a feminine touch. These watches are versatile enough to transition from day to night effortlessly.
**3. Ultra Slim Collection:**
For those who prefer a more delicate look, the Ultra Slim Collection offers watches with exceptionally thin cases that sit elegantly on the wrist. Despite their slim profile, these watches do not compromise on durability or style.
**Why Choose Bering Watches in Noida?**
**1. Durability:**
Bering Watches are made with high-quality materials like stainless steel, sapphire crystal, and high-tech ceramic, ensuring they withstand the test of time.
**2. Design:**
With a focus on minimalism and elegance, Bering Watches are versatile accessories that complement any wardrobe. Their designs are timeless, making them suitable for any occasion.
**3. Innovation:**
Bering continually incorporates innovative features, such as solar-powered movements and scratch-resistant materials, demonstrating their commitment to both aesthetics and functionality.
**4. Sustainability:**
Bering is also mindful of the environment, offering eco-friendly options like their Solar Collection. This dedication aligns with the growing eco-conscious mindset of Noida’s residents.
**Where to Find Bering Watches in Noida**
Bering Watches are available at select high-end retail stores and authorized dealers across Noida. Whether you prefer shopping at a premium mall or a boutique store, you can find a range of Bering Watches that suit your style and preferences. Additionally, online shopping options provide a convenient way to explore the latest collections and make a purchase from the comfort of your home.
**Conclusion**
Bering Watches for men and women are more than just timepieces; they are a blend of art, engineering, and philosophy. With their commitment to timeless design, durability, and innovation, Bering continues to be a favorite among watch enthusiasts worldwide. Whether you are purchasing your first Bering watch or adding to your collection, you can be confident that you are investing in a piece of timeless elegance and superior craftsmanship. Discover the world of Bering Watches today and find the perfect timepiece that reflects your style and values.
**Contact Us:**
Shop number G-6,
JS Arcade, Near Bikaner Wala,
D Block, Pocket K,
Sector 18, Noida
Website: [saicreationswatches.com](https://saicreationswatches.com) | saicreationswatches |
1,865,560 | How to Implement Data Validation in NestJS Using nestjs-joi and joi-class-decorators | Here are some topics I can cover: Introduction to Data Validation in Web... | 0 | 2024-06-20T18:14:08 | https://dev.to/alifathy1999/how-to-implement-data-validation-in-nestjs-using-nestjs-joi-and-joi-class-decorators-1bk3 | nestjs, joi, typescript, node | ## Here are some topics I can cover:
1. Introduction to Data Validation in Web Applications.
2. Setting Up NestJS for Validation.
- Prerequisites and project setup.
- Creating and Using Validation Schemas
3. Introduction to pipes and how to use them with Joi
4. Validation on Param and Query Params using Joi.
5. Practical Implementation: Explore a complete NestJS application utilizing nestjs-joi and joi-class-decorators on GitHub: [NestJS Sample App with nestjs-joi](https://github.com/AliFathy-1999/nestJs-joi-sample-app).
## 1. Introduction to Data Validation in Web Applications.
- Data validation is the process of ensuring that data entered by users or obtained from external sources satisfies the specified criteria and format. Data validation can be done at several levels, including client-side, server-side, and database-level.
## 2. Setting Up NestJS for Validation.
- _**Prerequisites and project setup:**_
### 1. Install Node and npm :
Make sure that **Node** is installed on your device.
Run `node -v` to check whether **Node** is installed. If it is, the output will be a version number such as `v20.13.1`; if not, the output will be `node: command not found`.
If you need to install Node, go to the [Node.js website](https://nodejs.org/en).
Make sure that **Node Package Manager (npm)** is installed on your device. Run `npm -v` to check; if npm is installed, the output will be a version number such as `10.8.0`; if not, the output will be `npm: command not found`.
### 2. Install NestJs and create new nestApp :
```
npm i -g @nestjs/cli
nest new my-nestjs-app
cd ./my-nestjs-app
```
### 3. Create a new pipe called validation:
```
// --no-spec => Disable spec files generation
// --flat => Do not generate a folder for the element.
nest g pipe validation --no-spec --flat
```
### 4. Installing necessary packages (nestjs-joi, joi-class-decorators)
```
npm i class-transformer joi nestjs-joi joi-class-decorators
```
- _**Creating and Using Validation Schemas:**_
### 1. Create an endpoint '/testBody' (method type: POST) in the app controller
```
import { Body, Controller, HttpCode, HttpStatus, Post, Req, Res } from '@nestjs/common';
import { AppService } from './app.service';
import { Request, Response } from 'express';
import { validationBodyDto } from './validate.dto';
@Controller()
export class AppController {
constructor(private readonly appService: AppService) {}
@Post('/testBody')
@HttpCode(HttpStatus.OK)
testJoiValidation(@Req() req: Request, @Res() res: Response, @Body() reqBody: validationBodyDto) {
const data = reqBody;
res.json(data);
}
}
```
### 2. Create a DTO file called (validate.dto.ts) to validate this endpoint, and create a Joi schema class (validationBodyDto):
```
import { Expose } from "class-transformer";
import { JoiSchema, JoiSchemaOptions } from "joi-class-decorators";
import * as Joi from 'joi';
interface reviewInterface {
rating: number;
comment: string;
}
// @Expose ==> is used to mark properties that should be included in the transformation process, typically for serialization and deserialization.
// @JoiSchema() ==> Define a schema on a type (class) property. Properties with a schema annotation are used to construct a full object schema.
//It ensures strict validation by disallowing any properties that are not explicitly defined in your schema.
@JoiSchemaOptions({
allowUnknown: false
})
export class validationBodyDto {
//Basic Validation is type string and required
@Expose() @JoiSchema(Joi.string().trim().required())
fullName: string;
//Check the length and that it is a valid Egyptian phone number
@Expose() @JoiSchema(Joi.string().length(11).pattern(/^(011|012|015|010)\d{8}$/).required())
phoneNumber: string;
//Check that it is a valid email
@Expose() @JoiSchema(Joi.string().email().optional())
email?: string;
//Check that the value is either M or F
@Expose() @JoiSchema(Joi.string().valid('M', 'F').required())
gender: string;
//militaryStatus is mandatory if gender is M, otherwise optional
@Expose() @JoiSchema(
Joi.when('gender', {
is: 'M',
then: Joi.string().required(),
otherwise: Joi.string().optional(),
}),
)
militaryStatus: string;
//Check age is number, min 14 and max age is 100
@Expose() @JoiSchema(Joi.number().min(14).max(100).message('Invalid age'))
age: number;
//Check that reviews is a valid array of review objects
@Expose()
@JoiSchema(
Joi.array().items(
Joi.object({
rating: Joi.number().min(0.1).required(),
comment: Joi.string().min(3).max(300).required(),
}).required(),
).required(),
)
reviews: reviewInterface[];
//Allow this field to be an empty string
@Expose() @JoiSchema(Joi.string().allow('').optional())
profilePicture?: string;
//profileFileName is mandatory if profilePicture has a value, otherwise optional
@Expose() @JoiSchema(
Joi.when('profilePicture', {
is: Joi.string().exist(),
then: Joi.string().required(),
otherwise: Joi.string().allow('').optional(),
}))
profileFileName: string;
//Check if isVerified is boolean and required
@Expose() @JoiSchema(Joi.boolean().required())
isVerified: boolean;
}
```
## 3. Introduction to pipes and how to use them with Joi
- In NestJS, a "pipe" is a class annotated with the @Injectable() decorator that implements the PipeTransform interface. Pipes are typically used for transforming or validating data. They can be applied at various levels: parameter-scoped, method-scoped, controller-scoped, or global.
- **_Introduction to Pipes_**
- **Transformation**: Pipes can transform input data to a desired format.
- **Validation**: Pipes can validate the data before passing it to the request handler. If the data is invalid, the pipe can throw an exception, which will be handled by NestJS.
- In our case, we use it to transform a plain object into a typed object so that we can apply validation.
- So let us use the validation pipe that we created before:
```
import { ArgumentMetadata, BadRequestException, Injectable, PipeTransform } from '@nestjs/common';
import { plainToInstance } from 'class-transformer';
import { getClassSchema } from 'joi-class-decorators';

@Injectable()
export class ValidationPipe implements PipeTransform {
  transform(value: any, metadata: ArgumentMetadata) {
    const { metatype } = metadata; // the DTO class associated with the handler parameter
    // Skip validation when there is no DTO class to validate against
    // (e.g. plain primitives such as String or Number).
    if (!metatype || [String, Boolean, Number, Array, Object].includes(metatype as any)) {
      return value;
    }
    /*
      Transform our plain JavaScript argument object into a typed object so that
      we can apply validation. We must do this because the incoming post body,
      when deserialized from the network request, has no type information.
    */
    const bodyInput = plainToInstance(metatype, value); // Convert the plain object into a DTO instance
    // getClassSchema(metatype) ==> a function from joi-class-decorators that
    // retrieves the Joi validation schema associated with a class.
    const bodySchema = getClassSchema(metatype);
    // Validate the instance against the Joi schema. If validation fails,
    // `error` will contain the validation errors.
    const { error } = bodySchema.validate(bodyInput);
    if (error) {
      throw new BadRequestException(`Validation failed: ${error.details.map((err) => err.message).join(', ')}.`);
    }
    return value;
  }
}
```
- To use this validation pipe on our endpoint, we have four ways:
- Use **global-scoped pipes**; the pipe will be applied to every route handler across the entire application.
```
// In main.ts
async function bootstrap() {
const app = await NestFactory.create(AppModule);
app.useGlobalPipes(new ValidationPipe());
await app.listen(3000);
}
bootstrap();
```
- Use **parameter-scoped pipes**; the pipe will be applied to the reqBody parameter.
```
@Post('/testBody')
@HttpCode(HttpStatus.OK)
testJoiValidation(@Body(new ValidationPipe()) reqBody: validationBodyDto, @Res() res: Response) {
const data = reqBody;
res.json(data);
}
```
- Use **method-scoped pipes**; the pipe will be applied to the testJoiValidation method.
```
@Post('/testBody')
@HttpCode(HttpStatus.OK)
@UsePipes(new ValidationPipe()) // Method Scope
testJoiValidation(@Body() reqBody: validationBodyDto, @Res() res: Response) {
const data = reqBody;
res.json(data);
}
```
- Use **controller-scoped pipes**; the pipe will be applied to every route handler in the controller.
```
@Controller()
@UsePipes(new ValidationPipe()) //Controller-scoped
export class AppController {
constructor(private readonly appService: AppService) {}
@Post('/testBody')
@HttpCode(HttpStatus.OK)
testJoiValidation(@Body() reqBody: validationBodyDto, @Res() res: Response) {
const data = reqBody;
res.json(data);
}
}
```
### 4. Validation on Param and Query Params using Joi.
- Create an endpoint '/testParams/:category' (method type: GET). It takes a path param named category ('Fashions', 'Electronics', 'MobilesPhones', 'Perfumes') and two query params, limit and page. A usage sketch follows the DTO definitions below.
```
@Get('/testParams/:category')
@HttpCode(HttpStatus.OK)
@UsePipes(new ValidationPipe())
testJoiValidationParam(
@Param() category: validationParamDto,
@Query() limitAndPageSize: validationQueryParamDto,
@Res() res: Response
) {
res.json({
category,
limitAndPageSize
});
}
```
- Create two DTOs for those params:
```
export class validationParamDto {
@Expose() @JoiSchema(Joi.string().valid('Fashions', 'Electronics', 'MobilesPhones', 'Perfumes').required())
category: string;
}
@JoiSchemaOptions({
allowUnknown: false
})
export class validationQueryParamDto {
@Expose() @JoiSchema(Joi.number().min(0).max(100).message('Invalid limit'))
limit: string;
@Expose() @JoiSchema(Joi.number().min(0).max(100).message('Invalid page size'))
page: string;
}
```
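As a quick usage sketch (assuming the app listens on the default `http://localhost:3000` from `main.ts`), here is how the param and query validation behaves; the responses shown are illustrative of NestJS's default `BadRequestException` shape, not captured output:
```
# Valid request: category is in the allowed list, limit/page are numeric
curl "http://localhost:3000/testParams/Fashions?limit=10&page=1"
# => {"category":{"category":"Fashions"},"limitAndPageSize":{"limit":"10","page":"1"}}

# Invalid category: the ValidationPipe rejects it with a 400
curl "http://localhost:3000/testParams/Books?limit=10&page=1"
# => {"message":"Validation failed: \"category\" must be one of [Fashions, Electronics, MobilesPhones, Perfumes].","error":"Bad Request","statusCode":400}
```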
### Finally, I want to thank you for taking the time to read my article and I hope this article is useful for you :).
For hands-on implementation and further exploration, you can access the complete codebase of a sample NestJS application using nestjs-joi and joi-class-decorators on GitHub. The repository includes practical examples and configurations demonstrating how to integrate and leverage robust data validation in NestJS:
[Explore the NestJS Sample App on GitHub](https://github.com/AliFathy-1999/nestJs-joi-sample-app)
Feel free to clone, fork, or contribute to enhance your understanding and implementation of data validation in NestJS.
| alifathy1999 |
1,893,363 | 1.Python Selenium Architecture 2.Significance of Python Virtual Environment 3.Examples of Python Virtual Environment | ## 1.Python Selenium Architecture Enter fullscreen mode Exit fullscreen... | 0 | 2024-06-20T18:03:36 | https://dev.to/sunmathi/1python-selenium-architecture-2significance-of-python-virtual-environment-3examples-of-python-virtual-environment-252i | ## 1.Python Selenium Architecture
Selenium is a popular open-source framework used for automating web browser interactions. It is widely used for testing web applications. The architecture of Selenium is designed to support a variety of browsers and programming languages, making it highly flexible and powerful. Here is a detailed explanation of the Selenium architecture.
**Selenium Components**:-
Selenium has been in the industry for a long time and is used by automation testers all around the globe. There are four major components of Selenium, as follows:
1.Selenium IDE
2.Selenium Remote Control(RC)
3.Selenium WebDriver
4.Selenium Grid
**1.Selenium IDE**:-
Selenium IDE serves as an innovative toolkit for web testing, allowing users to record interactions with web applications. Selenium IDE was initially created by Shinya Kasatani in 2006, and it helps to simplify the testing process. It gives testers and developers a friendly space to team up, letting everyone quickly share important testing information and results.
IDE stands for Integrated Development Environment, but Selenium IDE is simply a web browser extension; we just need to download and install the extension for the particular web browser.
It can record as well as play back the entire automation process. People generally prefer to write test scripts using Python, Java, JavaScript, etc.
_Features of Selenium IDE_:-
_Record_ - With Selenium IDE, users can record how they use a web application.
_Playback_ - Selenium IDE automatically repeats what you recorded earlier.
_Browser check_ - Selenium IDE works on various browsers for testing.
_Check Elements_ - Users can easily look at different parts of a webpage and set up how to work with them.
_Spotting Errors_ - Selenium IDE helps users find and fix issues in their automated tests one step at a time.
_Exporting Tests_ - We can save tests created in Selenium IDE in different programming languages (like Java, Python, or C#). This lets you use them with other Selenium tools.
**2.Selenium Remote Control(RC):-**
Selenium RC was one of the earliest Selenium tools, preceding WebDriver. It allowed testers to write automated web application tests in various programming languages like Java, Python, C#, etc. The key feature of Selenium RC was its ability to interact with web browsers through a server, which acted as an intermediary between the testing code and the browser.
It has been deprecated and is not used these days; it has been replaced by Selenium WebDriver.
WebDriver is often considered the better choice over Selenium RC for several reasons:
Improved API - WebDriver offers a more straightforward and intuitive API compared to Selenium RC, making it easier for developers and testers to write and maintain automated tests.
Better Performance - WebDriver interacts directly with the browser, bypassing the need for an intermediary server like Selenium RC, which leads to faster test execution and improved performance.
Support for modern web technologies - WebDriver has better support for modern web technologies such as HTML5, CSS3, and JavaScript frameworks, ensuring compatibility with the latest web applications.
**Selenium WebDriver:-**
Selenium WebDriver is a robust open-source framework for automating web browsers, primarily aimed at easing the testing and verification of web applications. As an important part of the Selenium suite, WebDriver offers a programming interface to interact with web browsers, allowing developers and testers to automate browser actions seamlessly.
It is a major component of the Selenium test suite and provides an interface between the programming language and the web browser itself.
**Selenium WebDriver Architecture:-**
The architecture of Selenium WebDriver consists of the following components:
_Selenium Client Library_ -
These are the language bindings you use to write your automation scripts. The libraries are available in different programming languages and provide an interface to write test scripts. When a test script is executed, it sends commands to the browser driver as JSON over HTTP.
These commands are compatible with the HTTP and TCP/IP protocols.
They are essentially wrappers that send the script commands over the network for execution in a web browser.
_Selenium API_ -
It is a set of rules that lets our Python program and the browser communicate with each other.
It helps us automate without the user needing to understand what is happening in the background.
_JSON Wire Protocol_ -
The commands you write get converted into JSON, which is then transmitted across the network to your web browser so that they can be executed for automation and testing. Commands are sent in JSON format over HTTP; an illustrative exchange is sketched below.
The JSON responses come back to the client over the same HTTP protocol.
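For illustration only, here is roughly what one command looks like on the wire in the W3C WebDriver dialect (the session id and element reference are made up; the endpoint shape follows the WebDriver specification):
```
POST /session/5a9c3de4/element HTTP/1.1
Content-Type: application/json

{"using": "css selector", "value": "#login-button"}

HTTP/1.1 200 OK
{"value": {"element-6066-11e4-a52e-4f735466cecf": "f0e1d2c3"}}
```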
_Browser Driver_ -
It acts as a bridge between the Selenium scripts/libraries and the web browser. It helps us run Selenium test scripts, which comprise Selenium commands, on a particular web browser.
Each browser has a corresponding driver that translates the commands from the WebDriver API into actions in the browser (a short instantiation sketch follows this list):
ChromeDriver for Google Chrome
GeckoDriver for Mozilla Firefox
SafariDriver for Safari
IEDriver for Internet Explorer
EdgeDriver for Microsoft Edge
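As a minimal sketch (assuming Selenium 4+, where Selenium Manager resolves the matching driver automatically, and assuming the browsers themselves are installed), each driver is started through the same `webdriver` interface:
```
from selenium import webdriver

# Each constructor starts the matching driver (ChromeDriver, GeckoDriver, EdgeDriver)
for driver in (webdriver.Chrome(), webdriver.Firefox(), webdriver.Edge()):
    driver.get("https://example.com")  # the API is identical across browsers
    print(driver.title)
    driver.quit()
```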
_Core Components_ -
WebDriver is the core component of Selenium and interacts directly with the web browser. It provides a programming interface to create and execute test scripts. WebDriver drives the browser the same way a real user would.
_Language Support_ -
WebDriver supports multiple programming languages, including Java, C#, Python, Ruby, and JavaScript.
_Browsers_ -
The browsers are the target environments where the tests are executed. WebDriver interacts with the browsers to perform the desired actions like clicking, typing, navigating, etc.
**Workflow:-**
Here is a step-by-step workflow of how Selenium WebDriver works:
_Test Script Execution_ :
The user writes the test scripts using a Selenium client library in their preferred programming language.
The script is executed, and commands are sent to the WebDriver.
_Command Transformation_ :
The client library converts the commands into JSON format and sends them via HTTP to the corresponding browser driver.
_Driver Interaction_:
The browser driver receives the commands and translates them into browser-specific actions.
The driver uses browser-specific mechanisms to execute these commands in the browser.
_Browser Interaction_ :
The browser performs the actions as instructed by the driver (e.g., clicking a button, entering text).
The browser responds to the driver with the results of the executed actions.
_Result Translation_ :
The driver sends the results back to the client library in JSON format.
The client library processes the results and presents them to the user.
**Example: Python Selenium WebDriver:-**
Here is a simple example of how Selenium WebDriver can be used with Python to open a browser and navigate to a website (cleaned up so it runs; webdriver_manager downloads the matching ChromeDriver):
```
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

class Example:
    def __init__(self, url):
        self.url = url
        # ChromeDriverManager().install() downloads ChromeDriver and returns its path
        self.driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))

    def using_chrome_browser(self):
        self.driver.maximize_window()
        self.driver.get(self.url)

    def shut_down(self):
        self.driver.close()

url = "http://www.guvi.in"
example = Example(url)
example.using_chrome_browser()
example.shut_down()
```
**Features of Selenium WebDriver**:
The features of Selenium WebDriver are as follows:
_Direct Communication with Browsers_ - Unlike Selenium RC, WebDriver interacts directly with the browser's native automation support, leading to more stable and reliable testing.
_Support for Parallel Execution_ - WebDriver allows for parallel test execution, enabling faster test cycles and efficient utilization of resources.
_Rich Set of APIs_ - WebDriver provides a comprehensive set of APIs for navigating through web pages, interacting with web elements, managing windows, handling alerts, etc.
**Selenium Grid:-**
Selenium Grid is a server that allows tests to use web browser instances running on remote machines. With Selenium Grid, one server acts as the hub; tests contact the hub to obtain access to browser instances (a minimal remote-connection sketch follows this list).
_Features of Selenium Grid are as follows:_
Selenium Grid allows running tests in parallel on multiple machines and managing different browser versions.
The ability to run tests on remote browser instances is useful to spread the load of testing across several machines.
It lets you run tests in browsers running on different platforms.
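To make the hub idea concrete, here is a minimal sketch of pointing a test at a Grid hub (the hub URL is an assumption; 4444 is the default Grid port, and Grid 3 would use the `/wd/hub` path):
```
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# The hub forwards this session to a registered node that can run Chrome.
driver = webdriver.Remote(
    command_executor="http://localhost:4444",
    options=Options(),
)
driver.get("https://example.com")
print(driver.title)
driver.quit()
```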
**Conclusion:-**
Selenium is a dynamic tool for automating web browsers, offering components like Selenium IDE, RC, WebDriver, and Grid for different aspects of web testing. Its support for various languages and parallel execution makes it a powerful choice for automation.
## 2.Significance of Python Virtual Environment
A Python virtual environment is an isolated environment that allows you to install and manage dependencies for a Python project separately from other projects and the system-wide Python installation. This isolation ensures that the dependencies and packages of one project do not interfere with those of another.
A Python virtual environment is simply a directory with a particular file structure. It has a 'bin' subdirectory that includes links to a Python interpreter, as well as subdirectories that hold packages installed in the specific 'venv'.
By invoking the Python interpreter using the path to the venv's 'bin' subdirectory, the interpreter knows to use the packages within that venv (as opposed to any packages installed alongside the actual location of the Python interpreter). It is in this sense that venvs are "virtual"; they are not virtual in the sense of, say, a virtual machine.
When we set up a virtual environment, we can immediately use it by invoking Python with the full path to the 'bin' subdirectory within the venv.
For convenience, when you set up a venv it provides an 'activate' script that you can invoke, which puts the bin subdirectory for your venv first on your path. (It also updates your shell prompt to let you know this change is in effect.) When your venv is activated, you no longer need to use the full path to the Python interpreter.
_Important_: If you invoke a Python program with the full path to the Python interpreter in the virtual environment, it will run in the virtual environment even if you are not in an interactive session where you used the 'activate' script.
Here are the detailed reasons why Python virtual environments are significant.
**Dependency Management**:-
_Isolation of Dependencies_ - virtual environments create isolated spaces for projects ensuring that libraries and dependencies installed for one project do not affect other projects.
_Version Conflicts_ - Different projects might require different versions of the same package.
Virtual Environments allow you to install and use multiple versions of a package simultaneously without conflict.
**Reproducibility**:-
_Consistent Development Environment_ - By using a virtual environment, you can ensure that all developers working on a project have a consistent environment. This consistency minimizes the "works on my machine" problem.
_Requirements File_ - Virtual environments often use a requirements.txt file to list all dependencies. This file can be shared with others to recreate the exact environment using:
> pip install -r requirements.txt
**Security:-**
_Reduced Risk of System Contamination_ - Installing packages system-wide can potentially interfere with system tools and other applications. Using virtual environments limits the scope of installations to the project directory, reducing the risk of breaking the system Python or other applications.
_Controlled Environment_ - You can test how your application behaves with different versions of dependencies, or even with new or experimental packages, without affecting the system-wide Python environment.
**Development And Testing:-**
_Testing Across Environments_ - Virtual environments allow developers to test their applications in environments that mimic production settings or other specific configurations. This is crucial for debugging and ensuring compatibility.
_Continuous Integration_ - Many CI/CD pipelines use virtual environments to create clean, isolated environments for running tests and building applications. This ensures that tests are run in a consistent state.
**Portability:-**
_Project Portability_ - Projects with a defined environment (requirements.txt) can be easily shared and run on different machines, ensuring the same dependencies are installed.
_Ease of Setup_ - New developers can quickly set up their development environment by creating a virtual environment and installing dependencies as specified in the project's requirements file.
**Flexibility:-**
_Multiple Environments_ - Developers can work on multiple projects simultaneously each with its own set of dependencies and configurations without interference.
_Custom Configurations_ - Virtual environments allow for custom configurations and package versions tailored specifically to each project's needs.
**Importance of Python Virtual Environment**
The importance of Python virtual environments becomes apparent when we have various Python projects on the same machine that depend on different versions of the same packages.
For example, imagine working on two different projects, one using a recent version of a package and the other using an older version. This would lead to compatibility issues, because Python cannot simultaneously use multiple versions of the same package.
Another use case that magnifies the importance of Python virtual environments is working on managed servers or production environments where you can't modify the system-wide packages because of specific requirements.
Python virtual environments create isolated contexts to keep the dependencies required by different projects separate, so they don't interfere with other projects or system-wide packages.
Basically, setting up virtual environments is the best way to isolate different Python projects, especially if those projects have different and conflicting dependencies.
Imagine yourself walking into a grocery store for a specific item. However, to your surprise, there is absolutely no organization within the entire store: no way to distinguish between products, no way to tell what product is for what purpose, simply no way to find your item.
You go to the counter and ask the grocer where the specific product is, but all he tells you is to "search for it".
Now what do you do? The only option left is to find the item you so desperately want on your own, by searching every product in the store.
This grocery store is your computer, your Python package bin. All those disorganized products lying on the shelf are the endless torrent of packages you have installed over the years for your random projects.
The next time you start a project, you will not know whether a version is up to date, whether it collides with another package, or whether the package exists at all. Such disorganization can cause setbacks that not only disrupt your projects but take away valuable time that could have been spent on something more productive.
The solution is the Python virtual environment, which helps decouple and isolate versions of Python and their related pip versions. This allows end-users to install and manage their own set of software, independent of that provided by the system.
**Use of Virtual Environment:-**
With a virtual environment, you have complete control over the environment. You know which package versions are installed and which need to be updated. Virtual environments give you a replicable and stable environment.
You have complete control over the versions of Python used, the installed packages, and their scheduled upgrades. In fact, modern versions of Python support virtual environments out of the box.
If you need a say in your updates to new Python packages, all you need is to choose your own Python interpreter and create a virtual environment based on that interpreter. This way you can decouple your servers from the system Python update schedule.
There is, however, an exception to using a virtual environment. Suppose you have a simple program that only uses modules from the Python Standard Library. In such a case, you might consider not using a virtual environment.
Python has various modules and versions for different applications. One project may require a third-party library, while another project uses a similar setup but doesn't require third-party software. Virtual environments come into play here, creating a separate environment for each project so that each can store and retrieve packages from its own environment.
**Tools and Usage:-**
_Virtualenv and venv_ :
Tools like 'virtualenv' and Python's built-in 'venv' module are used to create virtual environments. 'virtualenv' is a third-party tool that offers additional features, while 'venv' is included in Python's standard library.
_Pipenv_ :
Pipenv is another tool; it combines the functionalities of 'pip' and 'virtualenv', providing a comprehensive tool for managing virtual environments and dependencies.
**Workflow Commands:-**
_Installing the Python virtual environment module_:
pip install virtualenv
_Verifying the Python virtual environment module_:
virtualenv --version
_Creating a virtual environment_:
One is created per project; this also creates a folder for your project.
virtualenv <project-folder-name>
cd <project-folder-name>
_Activating the virtual environment_ (Windows):
Just activate your virtual environment and start writing your Python code.
scripts\activate
_Deactivating the virtual environment_:
Just deactivate the virtual environment to go back to the global Python.
scripts\deactivate
_Installing dependencies_:
pip install selenium
_Freezing requirements_:
pip freeze > requirements.txt
**Pros of Virtual Environments**:
You can use any Python package you want in a particular environment without having to worry about collisions.
You can organize your packages much better and know exactly which packages you need to run your code, in case someone else wants to run it on their machine.
Your main Python installation does not get flooded with unnecessary packages.
venv comes stock with Python, so no extra tools are required.
Builds a standard virtualenv that works with almost all tooling: requirements.txt supports dependency management through pip.
**Cons of Virtual Environments**:
It depends on the Python that created it: the environment is built from whichever interpreter invoked it, so you are still manually managing your Python versions.
No bells and whistles: only pip comes installed in the environment.
**Conclusion**:
Python virtual environments are a fundamental tool for modern Python development. They provide essential isolation, ensuring that dependencies and packages required for one project do not interfere with those of another. This isolation not only facilitates smooth development and testing but also enhances the security, reproducibility, and portability of projects.
## 3.Examples of Python Virtual Environment
Here are some specific examples that illustrate the significance of Python virtual environments, showing how they can be used to solve real-world development problems.
**Example-1:Dependency Isolation**:
Virtual environments create isolated spaces for your projects, allowing you to manage dependencies for each project independently.
_Preventing Conflicts_:
Different projects often require different versions of the same packages. A virtual environment ensures that the packages used in one project do not interfere with those in another.
_Scenario_:
You are working on two different projects that require different versions of the same library.
_Details_:
Project A requires 'Selenium-3.6'.
Project B requires 'Selenium-4.0'.
_Without Virtual Environment_:
Installing selenium globally means you can only have one version installed at a time.
If you switch versions to work on one project, the other project may break.
_With Virtual Environment_:
Create a Virtual Environment for each project.
# Create and activate a virtual environment for project A
python -m venv project_a_env
source project_a_env/bin/activate
pip install selenium==3.6
deactivate
# Create and activate a virtual environment for project B
python -m venv project_b_env
source project_b_env/bin/activate
pip install selenium==4.0
deactivate
Each project now has its own isolated environment with the required version of Selenium.
**Example-2:Reproducibility**
Virtual environments help ensure that your development environment can be replicated exactly, which is crucial for consistent behavior across different systems.
_Ensuring Consistency:_
By using virtual environments you can share the exact environment setup with your team.
_Scenario_:
You are collaborating on a project with a team. To ensure everyone has the same development environment, you use a 'requirements.txt' file.
_Details_:
Generate 'requirements.txt':
'pip freeze>requirements.txt'
Share 'requirements.txt' with your team.
_Without Virtual Environment_ :
Team members might have different versions of packages installed globally, leading to inconsistencies.
_With Virtual Environments_:
Each team member can create a virtual environment and install dependencies from 'requirements.txt':
python -m venv project_env
source project_env/bin/activate
pip install -r requirements.txt
This ensures that all team members are working with the exact same set of dependencies,avoiding 'it works on my machine' issues.
**Example-3:Safe Experimentation**
Virtual environments allow you to experiment with new packages or different versions of packages without risking your main project environment.
_Sandbox for Testing:_
You can create a temporary virtual environment to test new libraries or versions. If something goes wrong, it doesn't affect your main project.
_Scenario_:
You want to test a new library or a different version of a library without affecting your main development environment.
_Details:_
Create a new Virtual Environment:
python -m venv test_env
source test_env/bin/activate
Install the new library or version:
pip install some_new_library
Test the library in isolation.
_Without Virtual Environments:_
Installing or upgrading packages globally might break your existing projects due to incompatible dependencies.
_With Virtual Environments:_
Experimentation is done safely within an isolated environment. If things go wrong, your main project remains unaffected.
**Example-4:Multiple projects with different requirements**
Virtual environments make it easier to manage dependencies for multiple projects on the same machine.
_Project-specific dependencies:_
Each project can have its own virtual environment with its own dependencies which avoids the complexity of managing a single global environment.
_Scenario:_
You have multiple projects that require different sets of libraries and versions.
_Details:_
Create separate virtual environments for each project:
# For project 1
python -m venv project1_env
source project1_env/bin/activate
pip install libraryA==1.0 libraryB==2.0
deactivate
# For project 2
python -m venv project2_env
source project2_env/bin/activate
pip install libraryA==3.0 libraryC==1.0
deactivate
_Without Virtual Environment:_
Managing different versions of libraries becomes cumbersome and error-prone.
_With Virtual Environments:_
Each project has its own isolated environment with the specific libraries and versions it requires,preventing conflicts and ensuring stability.
**Example-5:Deployment Consistency**
Virtual environments ensure that the environment on your development machine matches the environment on the production server.
_Seamless Deployment:_
By deploying the same virtual environment you used for development,you avoid issues caused by discrepancies between development and production environments.
_Scenario:_
You need to deploy a web application to a production server, and it is critical that the server has the exact same environment as your development machine.
_Details:_
Create a virtual environment and install dependencies:
python -m venv deploy_env
source deploy_env/bin/activate
pip install -r requirements.txt
Package the application along with the virtual environment.
On the production server activate the environment and run the application:
source deploy_env/bin/activate
python app.py
_Without Virtual Environment:-_
Differences in installed packages and their versions between development and production environments can cause deployment issues.
_With Virtual Environment:_
The production server has the same libraries and versions as the development environment,reducing deployment issues and ensuring consistency.
**Example-6: Using different Python versions**
Virtual environments can be created with different versions of Python, which is useful for testing compatibility and managing legacy systems.
_Python Version Management_:
You can create environments with specific Python versions, making it easier to test your code across multiple Python versions.
_Scenario_:
You need to test your code on different versions of Python to ensure compatibility.
_Details_:
Install multiple python versions on your machine.
Create Virtual Environments with different python interpreters:
python3.6 -m venv env36
python3.7 -m venv env37
python3.8 -m venv env38
_Without Virtual Environment:_
Switching Python versions and managing dependencies manually can be error-prone and time-consuming.
_With Virtual Environments:_
Each environment can use a specific Python version and set of dependencies, making it easy to test and ensure compatibility across different Python versions.
PyCharm can be used as an example to demonstrate the significance and practical benefits of using Python virtual environments. Here is a detailed explanation of how PyCharm leverages virtual environments and why this integration is significant:
**PyCharm and Python Virtual Environments:-**
**Automatic Creation and Management**:
PyCharm simplifies the process of creating and managing virtual environments, which underscores the importance of virtual environments in maintaining a clean and organized development setup.
_Creating Virtual Environments_-
When starting a new project, PyCharm offers to create a new virtual environment automatically. This ensures that each project is isolated from others and has its own dependencies.
When you create a new project, PyCharm prompts:
- Select interpreter: [New Virtualenv environment, Conda environment, Pipenv environment, etc.]
- Location: [specify path]
- Base interpreter: [select Python version]
_Dependency Isolation_ -
Using virtual environments within PyCharm ensures that dependencies for one project do not interfere with those of another. This isolation is crucial for projects with conflicting library requirements.
Example Scenario:
Project A requires 'Django 2.2'
Project B requires 'Django 3.2'
By creating separate virtual environments for each project within PyCharm, you can manage these dependencies without conflicts.
[project A : Virtual Environment with Django 2.2]
[project B : Virtual Environment with Django 3.2]
_Reproducibility_ -
PyCharm helps ensure that the development environment can be reproduced accurately on different machines, which is vital for team collaboration and deployment.
Requirements File Management:
PyCharm can generate a 'requirements.txt' file from the installed packages in the virtual environment. This file can be shared with team members to recreate the same environment.
- Generate requirements.txt: pip freeze > requirements.txt
- Install from requirements.txt: pip install -r requirements.txt
**Safe Experimentation**:
PyCharm allows developers to experiment with new packages or different versions of existing packages in an isolated environment without affecting the main project.
Example Scenario:
You want to test a new version of a library.
Create a new virtual environment in PyCharm and install the new library version there.
If the experiment fails, the main project remains unaffected.
- Create a test environment: python -m venv test_venv
- Activate it and install the new package: pip install new_library
**Multiple Python Versions**:
PyCharm supports managing projects with different Python versions, which is essential for maintaining legacy codebases while adopting new Python features.
Example Scenario:
Project A requires Python-3.6
Project B requires Python-3.8
By creating separate virtual environments with different Python interpreters, PyCharm helps manage these requirements seamlessly.
-Project A: Virtual Environment with Python-3.6
-Project B: Virtual Environment with python-3.8
**Practical Steps in PyCharm**
**Creating a Virtual Environment for a New Project**:
_Open PyCharm and Create a New Project_:
Select "Create New Project".
Choose the project location.
Select "New environment using" and choose 'Virtualenv', 'Conda', or 'Pipenv'.
Specify the base interpreter (e.g., Python 3.8).
_Configuring an Existing Project_:
Open the existing project in PyCharm.
Go to File > Settings.
Navigate to 'Project: <your-project-name> > Python Interpreter'.
Click the gear icon and select 'Add'.
Choose 'Virtualenv Environment' and specify the location or create a new one.
**Managing Dependencies**
_Installing Packages_:
Open the terminal in PyCharm.
Ensure the virtual environment is active.
Install packages using 'pip install <package-name>'.
_Generating and using 'requirements.txt'_:
Generate the file :
'pip freeze>requirements.txt'
Install dependencies from the file:
'pip install -r requirements.txt'
**Conclusion:**
Using PyCharm as an example highlights the significance of Python virtual environments in several key areas: dependency isolation, reproducibility, safe experimentation, and managing multiple Python versions. PyCharm's integrated tools make it easy to create, manage, and utilize virtual environments, showcasing the practical benefits and necessity of virtual environments in Python development.
| sunmathi | |
1,895,117 | I am wanting to loop this entire code infinitely | Hi guys I have this code which i will add below, i am trying to loop the entirety of it infinitely... | 0 | 2024-06-20T18:03:00 | https://dev.to/jaime_irvine_/i-am-wanting-to-loop-this-entire-code-infinitely-4nbp | webdev, css, csshelp, learning | Hi guys, I have this code which I will add below. I am trying to loop the entirety of it infinitely after the last image (imagetest5) finishes its animation and have it start back at (imagetest1).
Please could you assist!!
Code:
@keyframes start-animation {
0% { opacity: 0; }
10% { opacity: 1; }
20% { opacity: 0; }
100% { opacity: 0; }
}
@keyframes flicker {
0%, 100% { opacity: 1; }
50% { opacity: 0.3; }
}
.imagetest {
opacity: 0;
animation: start-animation 20s infinite, flicker 2s infinite;
}
.imagetest1 {
opacity: 0;
animation:
fade-in 0.5s ease-in 1s infinite,
flicker1 1s cubic-bezier(0.4, 0, 1, 1) 1s infinite alternate,
disappear1 0.5s cubic-bezier(0.4, 0, 1, 1) 2s forwards;
}
.imagetest2 {
opacity: 0;
animation:
fade-in 0.5s ease-in 6s infinite,
flicker2 1s cubic-bezier(0.4, 0, 1, 1) 3s infinite alternate,
disappear2 0.5s cubic-bezier(0.4, 0, 1, 1) 4s forwards;
}
.imagetest3 {
opacity: 0;
animation:
fade-in 0.5s ease-in 10.5s infinite,
flicker3 1s cubic-bezier(0.4, 0, 1, 1) 5s infinite alternate,
disappear3 0.5s cubic-bezier(0.4, 0, 1, 1) 6s forwards;
}
.imagetest4 {
opacity: 0;
animation:
fade-in 0.5s ease-in 15s infinite,
flicker4 1s cubic-bezier(0.4, 0, 1, 1) 7s infinite alternate,
disappear4 0.5s cubic-bezier(0.4, 0, 1, 1) 8s forwards;
}
.imagetest5 {
opacity: 0;
animation:
fade-in 0.5s ease-in 17s infinite,
flicker4 1s cubic-bezier(0.4, 0, 1, 1) 7s infinite alternate,
disappear4 0.5s cubic-bezier(0.4, 0, 1, 1) 8s forwards;
}
@keyframes fade-in {
0% { opacity: 0; }
100% { opacity: 1; }
}
@keyframes flicker1 {
0% { opacity: 1; }
5%, 25%, 50%, 75%, 95%, 100% { opacity: 0.8; }
10%, 20%, 30%, 40%, 60%, 70%, 80%, 90% { opacity: 0.6; }
15%, 35%, 55%, 85% { opacity: 0.4; }
45%, 65%, 88% { opacity: 0.2; }
}
@keyframes flicker2 {
0% { opacity: 1; }
5%, 25%, 50%, 75%, 95%, 100% { opacity: 0.8; }
10%, 20%, 30%, 40%, 60%, 70%, 80%, 90% { opacity: 0.6; }
15%, 35%, 55%, 85% { opacity: 0.4; }
45%, 65%, 88% { opacity: 0.2; }
}
@keyframes flicker3 {
0% { opacity: 1; }
5%, 25%, 50%, 75%, 95%, 100% { opacity: 0.8; }
10%, 20%, 30%, 40%, 60%, 70%, 80%, 90% { opacity: 0.6; }
15%, 35%, 55%, 85% { opacity: 0.4; }
45%, 65%, 88% { opacity: 0.2; }
}
@keyframes flicker4 {
0% { opacity: 1; }
5%, 25%, 50%, 75%, 95%, 100% { opacity: 0.8; }
10%, 20%, 30%, 40%, 60%, 70%, 80%, 90% { opacity: 0.6; }
15%, 35%, 55%, 85% { opacity: 0.4; }
45%, 65%, 88% { opacity: 0.2; }
}
@keyframes disappear1 {
0% { opacity: 1; }
100% { opacity: 0; }
}
@keyframes disappear2 {
0% { opacity: 1; }
100% { opacity: 0; }
}
@keyframes disappear3 {
0% { opacity: 1; }
100% { opacity: 0; }
}
@keyframes disappear4 {
0% { opacity: 1; }
100% { opacity: 0; }
} | jaime_irvine_ |
1,895,116 | Layout Panes | JavaFX provides many types of panes for automatically laying out nodes in a desired location and... | 0 | 2024-06-20T18:01:33 | https://dev.to/paulike/layout-panes-2n8n | java, programming, learning, beginners | JavaFX provides many types of panes for automatically laying out nodes in a desired location and size. JavaFX provides many types of panes for organizing nodes in a container, as shown in Table below. You have used the layout panes **Pane**, **StackPane**, and **HBox** in the preceding sections for containing nodes. This section introduces the panes in more details.

You have used the **Pane** in [here](https://dev.to/paulike/panes-ui-controls-and-shapes-hb6), ShowCircle.java. A **Pane** is usually used as a canvas for displaying shapes. **Pane** is the base class for all specialized panes. You have used a specialized pane **StackPane** in [here](https://dev.to/paulike/panes-ui-controls-and-shapes-hb6), ButtonInPane.java. Nodes are placed in the center of a **StackPane**. Each pane contains a list for holding nodes in the pane. This list is an instance of **ObservableList**, which can be obtained using pane’s **getChildren()** method. You can use the **add(node)** method to add an element to the list, use **addAll(node1, node2, ...)** to add a variable number of nodes to the pane.
## FlowPane
**FlowPane** arranges the nodes in the pane horizontally from left to right or vertically from top to bottom in the order in which they were added. When one row or one column is filled, a new row or column is started. You can specify the way the nodes are placed horizontally or vertically using one of two constants: **Orientation.HORIZONTAL** or **Orientation.VERTICAL**. You can also specify the gap between the nodes in pixels. The class diagram for **FlowPane** is shown in Figure below.

Data fields **alignment**, **orientation**, **hgap**, and **vgap** are binding properties. Each binding property in JavaFX has a getter method (e.g., **getHgap()**) that returns its value, a setter method (e.g., **sethGap(double)**) for setting a value, and a getter method that returns the property itself (e.g., **hGapProperty()**). For a data field of **ObjectProperty<T>** type, the value getter method returns a value of type **T** and the property getter method returns a property value of type **ObjectProperty<T>**.
The program below demonstrates **FlowPane** by adding labels and text fields to the pane. Since the original listing appears as an image, a reconstructed sketch follows it.

The program creates a **FlowPane** (line 15) and sets its **padding** property with an **Insets** object (line 16). An **Insets** object specifies the size of the border of a pane. The constructor **Insets(11, 12, 13, 14)** creates an **Insets** with the border sizes for top (11), right (12), bottom (13), and left (14) in pixels, as shown in Figure below. You can also use the constructor **Insets(value)** to create an **Insets** with the same value for all four sides. The **hGap** and **vGap** properties are in lines 17–18 to specify the horizontal gap and vertical gap between two nodes in the pane, as shown in Figure below.

Each **FlowPane** contains an object of **ObservableList** for holding the nodes. This list can be obtained using the **getChildren()** method (line 21). To add a node into a **FlowPane** is to add it to this list using the **add(node)** or **addAll(node1, node2, ...)** method. You can also remove a node from the list using the **remove(node)** method or use the **removeAll()** method to remove all nodes from the pane. The program adds the labels and text fields into the pane (lines 21–24). Invoking **tfMi.setPrefColumnCount(1)** sets the preferred column count to **1** for the MI text field (line 23). The program declares an explicit reference **tfMi** for a **TextField** object for MI. The explicit reference is necessary, because we need to reference the object directly to set its **prefColumnCount** property.
The program adds the pane to the scene (line 27), sets the scene in the stage (line 29), and displays the stage (line 30). Note that if you resize the window, the nodes are automatically rearranged to fit in the pane. In Figure below (a), the first row has three nodes, but in Figure below (b), the first row has four nodes, because the width has been increased.

Suppose you wish to add the object **tfMi** to a pane ten times; will ten text fields appear in the pane? No, a node such as a text field can be added to only one pane and once. Adding a node to a pane multiple times or to different panes will cause a runtime error. A node can be placed only in one pane. Therefore, the relationship between a pane and a node is the composition denoted by a filled diamond.
## GridPane
A **GridPane** arranges nodes in a grid (matrix) formation. The nodes are placed in the specified column and row indices. The class diagram for **GridPane** is shown in Figure below.

The program below demonstrates **GridPane**, as shown in Figure below.

```
package application;
import javafx.application.Application;
import javafx.geometry.HPos;
import javafx.geometry.Insets;
import javafx.geometry.Pos;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.Label;
import javafx.scene.control.TextField;
import javafx.scene.layout.GridPane;
import javafx.stage.Stage;
public class ShowGridPane extends Application {
@Override // Override the start method in the Application class
public void start(Stage primaryStage) {
// Create a pane and set its properties
GridPane pane = new GridPane();
pane.setAlignment(Pos.CENTER);
pane.setPadding(new Insets(11.5, 12.5, 13.5, 14.5));
pane.setHgap(5.5);
pane.setVgap(5.5);
// Place nodes in the pane
pane.add(new Label("First Name"), 0, 0);
pane.add(new TextField(), 1, 0);
pane.add(new Label("MI:"), 0, 1);
pane.add(new TextField(), 1, 1);
pane.add(new Label("Last Name"), 0, 2);
pane.add(new TextField(), 1, 2);
Button btAdd = new Button("Add Name");
pane.add(btAdd, 1, 3);
GridPane.setHalignment(btAdd, HPos.RIGHT);
// Create a scene and place it in the stage
Scene scene = new Scene(pane);
primaryStage.setTitle("ShowGridPane"); // Set the stage title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
}
public static void main(String[] args) {
Application.launch(args);
}
}
```
The program creates a **GridPane** (line 18) and sets its properties (lines 19–22). The alignment is set to the center position (line 19), which causes the nodes to be placed in the center of the grid pane. If you resize the window, you will see that the nodes remain in the center of the grid pane.
The program adds the label in column **0** and row **0** (line 25). Column and row indices start from **0**. The **add** method places a node in the specified column and row. Not every cell in the grid needs to be filled: a button is placed in column 1 and row 3 (line 32), but no node is placed in column 0 and row 3. To remove a node from a **GridPane**, use **pane.getChildren().remove(node)**. To remove all nodes, use **pane.getChildren().removeAll()**.
The program invokes the static **setHalignment** method to align the button right in the cell (line 33).
Note that the scene size is not set (line 36). In this case, the scene size is automatically computed according to the sizes of the nodes placed inside the scene.
## BorderPane
A **BorderPane** can place nodes in five regions: top, bottom, left, right, and center, using the **setTop(node)**, **setBottom(node)**, **setLeft(node)**, **setRight(node)**, and **setCenter(node)** methods. The class diagram for **BorderPane** is shown in Figure below.

The program below demonstrates **BorderPane**. The program places five custom panes, each holding a label, in the five regions of the pane, as shown in Figure below.
```
package application;
import javafx.application.Application;
import javafx.geometry.Insets;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.layout.BorderPane;
import javafx.scene.layout.StackPane;
import javafx.stage.Stage;
public class ShowBorderPane extends Application {
@Override // Override the start method in the Application class
public void start(Stage primaryStage) {
// Create a pane and set its properties
BorderPane pane = new BorderPane();
// Place nodes in the pane
pane.setTop(new CustomPane("Top"));
pane.setRight(new CustomPane("Right"));
pane.setBottom(new CustomPane("Bottom"));
pane.setLeft(new CustomPane("Left"));
pane.setCenter(new CustomPane("Center"));
// Create a scene and place it in the stage
Scene scene = new Scene(pane);
primaryStage.setTitle("ShowBorderPane"); // Set the stage title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
}
public static void main(String[] args) {
Application.launch(args);
}
}
// Define a custom pane to hold a label in the center of the pane
class CustomPane extends StackPane{
public CustomPane(String title) {
getChildren().add(new Label(title));
setStyle("-fx-border-color: red");
setPadding(new Insets(11.5, 12.5, 13.5, 14.5));
}
}
```

The program defines **CustomPane** that extends **StackPane** (line 36). The constructor of **CustomPane** adds a label with the specified title (line 38), sets a style for the border color, and sets a padding using insets (line 40).
The program creates a **BorderPane** (line 14) and places five instances of **CustomPane** into five regions of the border pane (lines 17–21). Note that a pane is a node. So a pane can be added into another pane. To remove a node from the top region, invoke **setTop(null)**. If a region is not occupied, no space will be allocated for this region.
## HBox and VBox
An **HBox** lays out its children in a single horizontal row. A **VBox** lays out its children in a single vertical column. Recall that a **FlowPane** can lay out its children in multiple rows or multiple columns, but an **HBox** or a **VBox** can lay out children only in one row or one column. The class diagrams for **HBox** and **VBox** are shown in Figures below.


The program below demonstrates **HBox** and **VBox**. The program places two buttons and an image view in an **HBox** and five labels in a **VBox**, as shown in Figure below.
```
package application;
import javafx.application.Application;
import javafx.geometry.Insets;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.Label;
import javafx.scene.layout.BorderPane;
import javafx.scene.layout.HBox;
import javafx.scene.layout.VBox;
import javafx.stage.Stage;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;
public class ShowHBoxVBox extends Application {
@Override // Override the start method in the Application class
public void start(Stage primaryStage) {
// Create a pane and set its properties
BorderPane pane = new BorderPane();
// Place nodes in the pane
pane.setTop(getHBox());
pane.setLeft(getVBox());
// Create a scene and place it in the stage
Scene scene = new Scene(pane);
primaryStage.setTitle("ShowHBoxVBox"); // Set the stage title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
}
private HBox getHBox() {
HBox hBox = new HBox(15);
hBox.setPadding(new Insets(15, 15, 15, 15));
hBox.setStyle("-fx-background-color: gold");
hBox.getChildren().add(new Button("Computer Science"));
hBox.getChildren().add(new Button("Chemistry"));
ImageView imageView = new ImageView(new Image("file:/C:/Users/Paul/development/MyJavaFX/src/application/image/ke.jpg"));
hBox.getChildren().add(imageView);
return hBox;
}
private VBox getVBox() {
VBox vBox = new VBox(15);
vBox.setPadding(new Insets(15, 15, 15, 15));
vBox.getChildren().add(new Label("Courses"));
Label[] courses = {new Label("CSCI 1301"), new Label("CSCI 1302"), new Label("CSCI 2410"), new Label("CSCI 3720")};
for(Label course: courses) {
VBox.setMargin(course, new Insets(0, 0, 0, 15));
vBox.getChildren().add(course);
}
return vBox;
}
public static void main(String[] args) {
Application.launch(args);
}
}
```

The program defines the **getHBox()** method. This method returns an **HBox** that contains two buttons and an image view (lines 31–40). The background color of the **HBox** is set to gold using Java CSS (line 35). The program defines the **getVBox()** method. This method returns a **VBox** that contains five labels (lines 42–55). The first label is added to the **VBox** in line 45 and the other four are added in line 51. The **setMargin** method is used to set a node’s margin when placed inside the **VBox** (line 50). | paulike |
1,893,773 | A React Global State that can persist data too! | In my previous article, I introduced a powerful yet simple hook called useDbState that lets you... | 0 | 2024-06-20T17:56:28 | https://dev.to/ajejey/a-react-global-state-that-can-persist-data-too-41ib | usedbstate, tutorial, frontend, react | In my [previous article](https://dev.to/ajejey/persist-your-react-state-in-the-browser-2bgm), I introduced a powerful yet simple hook called [useDbState](https://www.npmjs.com/package/use-db-state) that lets you persist state in your React components just like useState. But wait, there's more!
## Introducing useDbState 2.0.0
Hold on to your hats because useDbState just got an upgrade! Now, with useDbState 2.0.0, you can use the same state across multiple components without any extra boilerplate code or wrappers. Yep, you read that right—global state management made easy!
### How Does useDbState Work?
Before diving into the new features, let's quickly recap how useDbState works. This custom React hook stores and retrieves state using IndexedDB, making your state persistent across page reloads. It's as easy to use as useState but with superpowers!
### Basic Usage
Here's a quick example:
```js
import React from 'react';
import { useDbState } from 'use-db-state';
const Counter = () => {
const [count, setCount] = useDbState('count', 0);
return (
<div>
<p>{count}</p>
<button onClick={() => setCount(count + 1)}>Increment</button>
</div>
);
};
export default Counter;
```
This code snippet stores the count state in IndexedDB under the key count, making it persistent across page reloads. Pretty cool, right?
## New Features in useDbState 2.0.0
### Global State Across Components
One of the standout features in the new version is the ability to share state across different components effortlessly. No more prop drilling or complex state management libraries. Just pure simplicity!
#### Example
Let's look at a practical example where we share user data across components:
#### Component A:
```js
import React from 'react';
import { useDbState } from 'use-db-state';
const ComponentA = () => {
const [user, setUser] = useDbState('user', { name: 'Alice', age: 25 });
return (
<div>
<input
type="text"
value={user.name}
onChange={(e) => setUser({ ...user, name: e.target.value })}
/>
<input
type="number"
value={user.age}
onChange={(e) => setUser({ ...user, age: parseInt(e.target.value) })}
/>
</div>
);
};
export default ComponentA;
```
#### Component B:
```js
import React from 'react';
import { useDbState } from 'use-db-state';
const ComponentB = () => {
const [user] = useDbState('user');
return (
<div>
<h1>{user?.name}</h1>
<p>Age: {user?.age}</p>
</div>
);
};
export default ComponentB;
```
In this example, ComponentA and ComponentB share the user state. Any updates in ComponentA will automatically reflect in ComponentB, and vice versa.
Notice that in ComponentB the `useDbState` hook is called with only the key and no initialization value. This lets you access the data stored in IndexedDB under the key `user`.
### How It Works
`useDbState` leverages IndexedDB to store state data persistently. When initialized, it fetches the value from IndexedDB. If the value exists, it sets the state to the fetched value; otherwise, it uses the provided default value. It also subscribes to changes in the state associated with the key, ensuring all components using the same key are synchronized.
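To make the synchronization idea concrete, here is a simplified sketch of the mechanism. This is my own illustration, not the library's actual source, and the IndexedDB reads and writes are reduced to comments:

```js
import { useEffect, useState } from 'react';

// Module-level registry: key -> Set of state setters from mounted components
const subscribers = new Map();

function useSharedStateSketch(key, defaultValue) {
  const [value, setValue] = useState(defaultValue);

  useEffect(() => {
    // Subscribe this component to updates for `key`.
    if (!subscribers.has(key)) subscribers.set(key, new Set());
    subscribers.get(key).add(setValue);
    // Real hook: also read the persisted value from IndexedDB here.
    return () => subscribers.get(key).delete(setValue);
  }, [key]);

  const setShared = (next) => {
    // Real hook: also write `next` to IndexedDB here.
    subscribers.get(key)?.forEach((set) => set(next));
  };

  return [value, setShared];
}
```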
### API
`useDbState`
```js
const [value, setValue] = useDbState(key: string, defaultValue: any, dbName?: string = 'userDatabase', storeName?: string = 'userData');
```
#### Parameters:
* `key` (string): The key under which the value is stored in IndexedDB.
* `defaultValue` (any): The initial value of the state. You can pass `undefined` when you only need to read data already stored under the `key`.
* `dbName` (string, optional): The name of the IndexedDB database. Defaults to 'userDatabase'.
* `storeName` (string, optional): The name of the IndexedDB object store. Defaults to 'userData'.
#### Returns:
* `value` (any): The current state value.
* `setValue` (function): A function to update the state value.
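For example, to keep a value in a custom database and object store (the names below are just placeholders), the optional parameters map onto the signature like this:

```js
const [settings, setSettings] = useDbState(
  'settings',          // key
  { darkMode: false }, // defaultValue
  'myAppDatabase',     // dbName (optional)
  'preferences'        // storeName (optional)
);
```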
## More Practical Examples
### Storing User Preferences
You can use `useDbState` to store user preferences like theme settings. You can change the theme from any component in your app, and the browser will remember your setting the next time you open the app:
```js
import React from 'react';
import { useDbState } from 'use-db-state';
const ThemeToggle = () => {
const [theme, setTheme] = useDbState('theme', 'light');
return (
<div>
<button onClick={() => setTheme(theme === 'light' ? 'dark' : 'light')}>
Toggle Theme
</button>
<p>Current theme: {theme}</p>
</div>
);
};
export default ThemeToggle;
```
### Managing a Shopping Cart
Easily manage a shopping cart across different components:
```js
import React from 'react';
import { useDbState } from 'use-db-state';
const AddToCart = ({ item }) => {
const [cart, setCart] = useDbState('cart', []);
const addToCart = () => {
setCart([...cart, item]);
};
return <button onClick={addToCart}>Add to Cart</button>;
};
const Cart = () => {
const [cart] = useDbState('cart', []);
return (
<div>
<h2>Shopping Cart</h2>
<ul>
{cart.map((item, index) => (
<li key={index}>{item.name}</li>
))}
</ul>
</div>
);
};
export { AddToCart, Cart };
```
## Conclusion
The `useDbState` hook could be a game-changer for your React state management. It simplifies persistence with IndexedDB and introduces a seamless way to share state across components without extra hassle. With these new features, managing global state in your React apps has never been easier.
So, go ahead and try out [useDbState](https://www.npmjs.com/package/use-db-state) in your projects. Comment below if you find any interesting applications or not-so-interesting bugs 😅. I'd love to collaborate with you to make the hook better and more useful! 💪
Happy coding! 🚀
-----------------
Feel free to reach out if you have any questions or need further assistance. You can find the full documentation on my [GitHub repository](https://github.com/ajejey/use-db-state-hook).
If you found this article helpful, share it with your fellow developers and spread the word about `use-db-state`!
You can follow me on [github](https://github.com/ajejey) and [Linkedin](https://www.linkedin.com/in/ajey-nagarkatti-28273856/)
| ajejey |
1,895,082 | Securing Centralized Crypto Exchanges | Introduction The realm of cryptocurrency has expanded exponentially over the past... | 27,673 | 2024-06-20T17:08:28 | https://dev.to/rapidinnovation/securing-centralized-crypto-exchanges-4gde | ## Introduction
The realm of cryptocurrency has expanded exponentially over the past decade,
introducing a new paradigm of financial transactions and investment
opportunities. As digital currencies like Bitcoin, Ethereum, and others have
grown in popularity and market capitalization, the platforms that facilitate
their exchange have become critically important.
## What is Centralized Crypto Exchange Development?
Centralized Crypto Exchange Development refers to the process of creating a
platform where users can buy, sell, or trade cryptocurrencies. These platforms
act as intermediaries managed by a company that maintains full control over
all transactions.
## How to Secure Centralized Crypto Exchanges
Centralized cryptocurrency exchanges are pivotal in the digital asset
ecosystem, facilitating the buying, selling, and trading of cryptocurrencies.
However, their centralized nature makes them attractive targets for
cybercriminals. Enhancing security measures is crucial to protect both the
platforms and their users' assets.
## Types of Security Threats to Centralized Exchanges
Centralized exchanges in the cryptocurrency market are prime targets for a
variety of security threats. These platforms, which act as third-party
intermediaries in facilitating the buying and selling of digital assets,
possess significant amounts of liquid assets making them attractive to
cybercriminals.
## Benefits of Securing Centralized Exchanges
Securing centralized exchanges offers numerous benefits, not only to the
exchange operators but also to their users. Firstly, robust security measures
enhance the reliability of the exchange, ensuring that it can resist attacks
and function smoothly under various conditions.
## Challenges in Securing Centralized Exchanges
Centralized cryptocurrency exchanges are pivotal in the digital asset
ecosystem, facilitating the trading of billions of dollars daily. However,
their centralized nature makes them prime targets for a variety of security
threats.
## Future Trends in Crypto Exchange Security
The inherent properties of blockchain technology, such as decentralization,
immutability, and transparency, are being leveraged to enhance the security
frameworks of cryptocurrency exchanges. By adopting blockchain, exchanges can
significantly mitigate risks associated with fraud, cyber-attacks, and
operational errors.
## Real-World Examples
One notable example of a secured centralized exchange is Coinbase, which has
established itself as a leader in the realm of cryptocurrency exchanges by
prioritizing security and user trust. Coinbase employs a variety of security
measures to protect user assets and data.
## Why Choose Rapid Innovation for Implementation and Development
Choosing rapid innovation for implementation and development is crucial in
today’s fast-paced technological landscape. Rapid innovation refers to the
ability to quickly develop, test, and refine applications and systems to meet
the evolving needs of businesses and consumers.
## Conclusion
Throughout the discussion on security practices, several key points have been
highlighted that underscore the importance of robust security measures in
today's digital landscape. The evolving nature of cyber threats necessitates
that organizations and individuals remain vigilant and proactive in their
security strategies.
📣📣Drive innovation with intelligent AI and secure blockchain technology! Check
out how we can help your business grow!
[Blockchain App Development](https://www.rapidinnovation.io/service-development/blockchain-app-development-company-in-usa)
[AI Software Development](https://www.rapidinnovation.io/ai-software-development-company-in-usa)
## URLs
* <http://www.rapidinnovation.io/post/how-to-secure-centralized-crypto-exchange-development-solutions-to-enhance-trust-in-the-market>
## Hashtags
#CryptoSecurity
#BlockchainTechnology
#AIinFinance
#CyberThreats
#SecureExchanges
| rapidinnovation | |
1,895,115 | salons development tips | To develop an ICT salon, focus on integrating advanced booking and CRM systems to streamline client... | 0 | 2024-06-20T17:49:14 | https://dev.to/tyba_hassan_d9e718e326a8e/salons-development-tips-3ok3 |  | To develop an [ICT salon](https://salonkyliepittsburgh.com/eyelash-services-in-pittsburgh/), focus on integrating advanced booking and CRM systems to streamline client management. Enhance your online presence with a user-friendly website and active social media engagement to attract and retain clients.
| tyba_hassan_d9e718e326a8e | |
1,895,111 | Understanding the Testing Pyramid: A Comprehensive Guide | Introduction Software development is a complex process involving numerous stages and disciplines.... | 0 | 2024-06-20T17:41:16 | https://dev.to/keploy/understanding-the-testing-pyramid-a-comprehensive-guide-1p5k | webdev, javascript, beginners, programming |

**Introduction**
Software development is a complex process involving numerous stages and disciplines. One critical aspect is testing, which ensures that software is reliable, functional, and free of defects. The Testing Pyramid is a conceptual framework that helps developers and testers prioritize and structure their testing efforts effectively. This article delves into the [Testing Pyramid](https://keploy.io/blog/community/understanding-the-different-levels-of-the-software-testing-pyramid), explaining its components, benefits, and best practices.
**The Concept of the Testing Pyramid**
The Testing Pyramid was popularized by Mike Cohn, a prominent figure in the Agile software development community. The pyramid is a visual metaphor representing the different types of tests that should be performed on software, arranged in layers with a broad base and a narrow top. The three primary layers are:
1. **Unit Tests**
2. **Integration Tests**
3. **End-to-End (E2E) or UI Tests**
Each layer serves a distinct purpose and has different characteristics regarding scope, speed, and complexity.
**The Layers of the Testing Pyramid**
**Unit Tests**
- **Scope**: Unit tests are the most granular type of tests. They focus on individual components or functions of the software, such as methods in a class or small modules.
- **Speed**: Unit tests are generally fast to execute because they test isolated pieces of code without dependencies.
- **Complexity**: They are straightforward to write and maintain, often requiring minimal setup.
- **Tools**: Common tools for unit testing include JUnit for Java, NUnit for .NET, and Jest for JavaScript.

**Integration Tests**
- **Scope**: Integration tests verify that different components or systems work together correctly. They test the interactions between units, such as data flow between modules, API interactions, and database operations.
- **Speed**: These tests are slower than unit tests because they involve multiple components and may require setting up external dependencies like databases or web servers.
- **Complexity**: Integration tests are more complex to write and maintain due to the need for a more extensive setup and teardown process.
- **Tools**: Tools like Postman and RestAssured for API testing, and Selenium for browser automation, are commonly used for integration testing.

**End-to-End (E2E) or UI Tests**
- **Scope**: E2E tests cover the entire application flow from start to finish, simulating real user scenarios. They validate that the system as a whole meets the requirements and behaves as expected.
- **Speed**: E2E tests are the slowest to run due to their extensive coverage and the involvement of multiple layers of the application stack.
- **Complexity**: These tests are the most complex to write, maintain, and debug. They require a complete environment that closely mimics production.
- **Tools**: Popular tools for E2E testing include Selenium, Cypress, and TestCafe.
**The Importance of the Testing Pyramid**
The Testing Pyramid helps teams achieve a balanced and efficient testing strategy. Here are some of the key benefits:
- **Cost-Effectiveness**: Unit tests, being the cheapest and quickest to run, form the foundation of the pyramid. By catching defects early at the unit level, teams can prevent costly fixes later in the development cycle.
- **Fast Feedback Loop**: The pyramid structure promotes a fast feedback loop. Since unit tests run quickly, they provide immediate feedback to developers, allowing them to identify and fix issues promptly.
- **Reduced Maintenance Effort**: Focusing more on unit and integration tests reduces the reliance on E2E tests, which are harder to maintain. This leads to a more stable and maintainable test suite.
- **Comprehensive Coverage**: The pyramid ensures that all aspects of the application are tested thoroughly. Unit tests ensure individual components work correctly, integration tests validate interactions, and E2E tests confirm the overall system functionality.
**Best Practices for Implementing the Testing Pyramid**
1. **Adopt Test-Driven Development (TDD)**: TDD is a practice where tests are written before the code itself. This approach ensures that tests are an integral part of the development process and encourages a high level of unit test coverage.
2. **Automate Tests**: Automation is crucial for maintaining an efficient and effective testing strategy. Automated tests can run frequently and consistently, providing ongoing assurance of software quality.
3. **Use Mocks and Stubs**: In unit and integration tests, use mocks and stubs to simulate dependencies and isolate the unit under test. This practice helps keep tests fast and focused (see the sketch after this list).
4. **Prioritize Testing**: While unit tests should form the bulk of your test suite, ensure that integration and E2E tests are not neglected. Each type of test serves a unique purpose and is essential for comprehensive coverage.
5. **Continuously Refactor Tests**: As the codebase evolves, tests should be refactored and updated to remain relevant and effective. Regularly review and refactor test cases to maintain their usefulness and accuracy.
6. **Maintain Test Data**: Proper management of test data is crucial, especially for integration and E2E tests. Ensure that test data is consistent, predictable, and easily manageable to avoid flaky tests.
7. **Monitor Test Performance**: Regularly monitor the performance of your test suite. Identify and address any bottlenecks, such as slow-running tests or redundant test cases, to maintain an efficient testing process.
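To make the mocks-and-stubs practice concrete, here is a minimal sketch of a unit test that stubs out a dependency with Jest; the function and values are invented for illustration:

```js
// priceConverter.test.js: isolating a dependency with a Jest stub
const getRate = jest.fn().mockResolvedValue(1.1); // stubbed external rate service

async function convert(amount, rateFn) {
  const rate = await rateFn();
  return amount * rate;
}

test('converts an amount using the stubbed exchange rate', async () => {
  expect(await convert(100, getRate)).toBeCloseTo(110);
  expect(getRate).toHaveBeenCalledTimes(1);
});
```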
**Conclusion**
The Testing Pyramid is a valuable framework for organizing and prioritizing testing efforts in software development. By emphasizing a strong foundation of unit tests, supported by integration tests and a smaller number of E2E tests, teams can achieve a balanced, efficient, and effective testing strategy. Implementing the best practices associated with the Testing Pyramid will lead to higher software quality, faster feedback loops, and more maintainable test suites, ultimately contributing to the success of software projects.
| keploy |
1,895,106 | Help post ! | Can anyone help me with this problem why this is overlapping?? | 0 | 2024-06-20T17:31:35 | https://dev.to/hossain99987/help-post--4jnj | webdev, css, beginners, programming |


Can anyone help me with this problem? Why is this overlapping? | hossain99987 |
1,895,086 | MICROSOFT APPLIED SKILL. Guided Project: Provide storage for the IT department testing and training | This is exercise 1 of the Microsoft Applied skill guided project. What is Azure storage account An... | 0 | 2024-06-20T17:14:28 | https://dev.to/sethgiddy/microsoft-applied-skill-guided-project-provide-storage-for-the-it-department-testing-and-training-31je | azure, storage | This is exercise 1 of the Microsoft Applied skill guided project.
What is an Azure storage account?
An Azure storage account is a container that groups a set of Azure Storage services together. Only data services from Azure Storage can be included in a storage account.
Below are the four Azure Storage data services a storage account can hold:
1. Azure Blobs 2. Azure Files 3. Azure Queues 4. Azure Tables
How to create an Azure storage account with the Azure portal
There are four different tools for creating a storage account, depending on whether an organization wants a GUI or automation:
1. Azure portal
2. Azure CLI (command-line interface)
3. Azure PowerShell
4. Management client libraries
But in this write-up we will be using the Azure portal to complete the exercise.
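For reference, the Azure CLI equivalent of the storage account we are about to create in the portal looks roughly like this, assuming the resource group already exists (the account and group names are placeholders):

```
az storage account create \
  --name itlabstorage001 \
  --resource-group rg-itlab \
  --location eastus \
  --sku Standard_RAGRS \
  --kind StorageV2
```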
**A. CREATE A STORAGE ACCOUNT WITH HIGH AVAILABILITY**
1. On the Basics tab, enter the required details: subscription, resource group, and storage account name. For Performance select Standard, and for Redundancy select Locally-redundant storage (LRS) to save cost for now.

2. It takes a couple of minutes for the deployment to complete, select Go to resource to view Essential details about your new storage account.

3. This storage requires high availability if there's a regional outage. Additionally, enable read access to the secondary region. Learn more about storage account redundancy.
-In the storage account, in the Data management section, select the Redundancy blade.
-Ensure Read-access Geo-redundant storage is selected.
-Review the primary and secondary location information.

4. Information on the public website should be accessible without requiring customers to log in.
-In the storage account, in the Settings section, select the Configuration blade.
-Ensure the Allow blob anonymous access setting is Enabled.
-Be sure to Save your changes.

**B. CREATE A BLOB STORAGE CONTAINER WITH ANONYMOUS READ ACCESS**
1. The public website has various images and documents. Create a blob storage container for the content. Learn more about storage containers.
-In your storage account, in the Data storage section.
-Select the Containers blade.
-Select + Container.
-Ensure the Name of the container is public.
-Select Create.

**C. PRACTICE UPLOADING FILES AND TESTING ACCESS**
1.**For testing, upload a file to the public container. The type of file doesn’t matter. A small image or text file is a good choice.**
-Ensure you are viewing your container.
-Select Upload.
-Browse to files and select a file of your choice.
-Select Upload.
-Close the upload window, Refresh the page and ensure your file was uploaded.

2.**Determine the URL for your uploaded file. Open a browser and test the URL**.
-Select your uploaded file.
-On the Overview tab, copy the URL.
-Paste the URL into a new browser tab.
-If you have uploaded an image file it will display in the browser. Other file types should be downloaded.

**D. CONFIGURE SOFT DELETE**
1.**It’s important that the website documents can be restored if they’re deleted. Configure blob soft delete for 21 days. Learn more about soft delete for blobs**.
-Go to the Overview blade of the storage account.
-On the Properties page, locate the Blob service section.
-Select the Blob soft delete setting.
-Ensure the Enable soft delete for blobs is checked.
-Change the Keep deleted blobs for (in days) setting to 21.
-Notice you can also Enable soft delete for containers.
-Don’t forget to Save your changes.

2.**If something gets deleted, you need to practice using soft delete to restore the files**.
-Navigate to your container where you uploaded a file.
-Select the file you uploaded and then select Delete.
-Select OK to confirm deleting the file.
-On the container Overview page, toggle the slider Show deleted blobs. This toggle is to the right of the search box.
-Select your deleted file, and use the ellipses on the far right, to Undelete the file.
-Refresh the container and confirm the file has been restored.

**E. CONFIGURE BLOB VERSIONING**
1.**It’s important to keep track of the different website product document versions. Learn more about blob versioning**.
-Go to the Overview blade of the storage account.
-In the Properties section, locate the Blob service section.
-Select the Versioning setting.
-Ensure the Enable versioning for blobs checkbox is checked.
-Notice your options to keep all versions or delete versions after.
-Don’t forget to Save your changes.

2. **As time permits, experiment with restoring previous blob versions.**
-Upload another version of your container file. This overwrites your existing file.
-Your previous file version is listed on Show deleted blobs page.

Exercise 2 Next. Stay tuned
Thanks
| sethgiddy |
1,892,248 | Deploy from git to hostinger (Shared hosting). | What is hostinger. Hostinger is a web hosting company that was established in 2004. It... | 0 | 2024-06-20T16:14:28 | https://dev.to/vimuth7/deploy-from-git-to-hostinger-shared-hosting-333n | ## What is Hostinger?
Hostinger is a web hosting company that was established in 2004. It offers a variety of hosting services, including shared hosting, cloud hosting, Virtual Private Server (VPS) hosting, and more recently, WordPress hosting. The company is known for providing affordable and scalable hosting solutions aimed at both beginners and experienced web developers.
In this tutorial we talk about deploying from git to hostinger shared web hosting.
**1. First create a git repository.**

Then clone it to your local machine.
If it is a private repository, you can create a PAT (personal access token) and clone it like this:
```
git clone https://gitusername:pat_token@github.com/Account/repo.git .
```
It is better to set the personal access token's expiration to **never expire**, since that will help if you later create a GitHub Action.
**2. Create a folder and a subdomain**
This tutorial assumes that you have already bought a domain and connected to hosting.

Go there and create a folder inside **public_html** called **test**.
Then create a subdomain from dashboard and connect to this folder.
Go to **Domains** -> **Subdomains**

This will automatically choose **test** inside **public_html**.

Now in browser https://test.yourdomain.com/ you can see the test page.
**3.Connect github**
- Go to GitHub and get the SSH URL like this:

- Now go to the Git section in the Hostinger dashboard:

- Now add this SSH URL and specify your branch. For me it was the default main branch. Give "test" as the Directory.
These are example urls:
For public repositories: https://github.com/WordPress/WordPress.git
For private repositories: git@github.com:WordPress/WordPress.git

Now go below and you can see an entry like this.

When you click the Deploy button, the changes will not yet be deployed to the Hostinger server, because you first need to verify the server with a **Deploy key**. This is how we get a Deploy key from Hostinger and add it to Git.
The deploy key has to be taken from here

Now copy it and add it to the Git repo. To do this, go to the repository and add the deploy key like this:

Now when you click the yellow **Deploy** button, the changes from Git will be deployed to the Hostinger server. But there is also an automatic deploy option: every time we commit to GitHub, the changes are automatically deployed to Hostinger. GitHub provides two ways to do this; the easiest is **GitHub Webhooks**.
**4.Github Webhooks**
**A webhook in GitHub is a mechanism that allows you to configure GitHub to send real-time data to an external server whenever certain events occur in a repository. This is useful for automating workflows, integrating with external services, and keeping systems synchronized.**
Here's how it works:
**<u>Setting Up a Webhook:</u>**
1.You configure a webhook in your GitHub repository settings.
2.You specify a payload URL, which is the endpoint on your server where GitHub will send HTTP POST requests.
3.You choose the events that will trigger the webhook (e.g., push events, pull requests, issues).
This is how you do it.
- Go to the repository and open Settings -> Webhooks

- Then we need to get webhook URL from Hostinger.

After clicking this you get the webhook URL. Add it to the Settings -> Webhooks screen in the "Payload URL" field.
Now every time we commit to the repo, the webhook fires and the contents are automatically deployed to the server.
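Alternatively, taking the GitHub Action route mentioned earlier, a minimal workflow can call the same Hostinger deploy webhook on every push. The secret name below is my own placeholder, not something Hostinger defines:

```
# .github/workflows/deploy.yml
name: Trigger Hostinger deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Call Hostinger deploy webhook
        run: curl -fsS -X POST "${{ secrets.HOSTINGER_WEBHOOK_URL }}"
```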
| vimuth7 | |
1,895,081 | MsSQL on MacOs | MSSql database is easy to configure on a Windows System . For MacOs we need to take care of few steps... | 0 | 2024-06-20T17:08:24 | https://dev.to/pranjal_sharma_38482a3041/mssql-on-macos-2l3e | microsoft, apple, database, macos | MSSql database is easy to configure on a Windows System . For MacOs we need to take care of few steps to get it installed and run properly .
Let's see all the steps we need to follow →
### 01 : Download Docker
Docker is a set of platform as a service products that use OS-level virtualization to deliver software in packages called containers. We would need it to run Microsoft SQL on Mac.
→ Check docker version
`$ docker --version`
→ Download and install docker from here :point_right: [Docker Desktop](https://www.docker.com/products/docker-desktop/)
### 02 : Download the MS SQL Server Image to Docker
→ After that, you need to pull the SQL Server 2019 Linux container image from Microsoft Container Registry.
[ _Make sure docker is running in background_ ]
`$ sudo docker pull mcr.microsoft.com/mssql/server:2019-latest`
→ Then you can run the docker images command and verify whether the docker image has been pulled successfully.
### 03 : Run the docker container
→ Command to run the docker container.
```
🔥 Command to run the container
docker run -d --name sql_server_demo -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=reallyStrongPwd123' -p 1433:1433 mcr.microsoft.com/mssql/server:2019-latest
🔥 Command for M1 Chip, please try this
docker run -e "ACCEPT_EULA=1" -e "MSSQL_SA_PASSWORD=reallyStrongPwd123" -e "MSSQL_PID=Developer" -e "MSSQL_USER=SA" -p 1433:1433 -d --name=sql mcr.microsoft.com/azure-sql-edge
```
> → Make sure to put your own password in SA_PASSWORD.
> → You can name your container after the --name flag.
> → The -d flag runs the container in detached mode, releasing the terminal after you run the above command.
→ Then run the `docker ps` command to verify whether your container has started to run
→ If your container stops a few seconds after it started, run `docker ps -a` and then `docker logs <container-id>` to check the errors.
### 04 : Install the MS SQL CLI
→ Next, you need to install **sql-cli** via npm.
```
$ npm install -g sql-cli
OR
$ sudo npm install -g sql-cli
```
→ [Link](https://nodejs.org/en/download/package-manager) to install npm if not present.
### 05 : Test the Installation by Logging In
→ Test the mssql integration by logging in:
`$ mssql -u sa -p <your password>`
→ If done correctly, the `mssql>` prompt will come up.
→ Then run `select @@version` to verify the connectivity.
```
$ mssql -u sa -p reallyStrongPwd123
Connecting to localhost...done
sql-cli version 0.6.2
Enter ".help" for usage hints.
mssql> select @@version
--------------------------------------------------------------------
Microsoft SQL Server 2019 (RTM-CU15) (KB5008996) - 15.0.4198.2 (X64)
Jan 12 2022 22:30:08
Copyright (C) 2019 Microsoft Corporation
Developer Edition (64-bit) on Linux (Ubuntu 20.04.3 LTS) <X64>
1 row(s) returned
Executed in 1 ms
mssql>
```
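→ As a further smoke test, you can try a few statements at the `mssql>` prompt (the database and table names here are arbitrary):

```
CREATE DATABASE demo;
USE demo;
CREATE TABLE notes (id INT PRIMARY KEY, body NVARCHAR(200));
INSERT INTO notes VALUES (1, N'hello from macOS');
SELECT * FROM notes;
```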
### 06 : [OPTIONAL] Download and install the GUI application - Azure Data Studio
[Azure Data Studio](https://learn.microsoft.com/en-us/sql/azure-data-studio/download-azure-data-studio?view=sql-server-ver15&tabs=redhat-install%2Credhat-uninstall)
### 07 : :blush: We are done! Stop the services once you've finished working
→ Run `docker stop <container-id>` to stop the Docker container.
| pranjal_sharma_38482a3041 |
1,895,079 | From DreamHost to Office 365: A Streamlined Migration Guide | Leaving DreamHost email behind and embracing the robust features of Office 365 can be a strategic... | 0 | 2024-06-20T17:02:44 | https://dev.to/petergroft/from-dreamhost-to-office-365-a-streamlined-migration-guide-379j | dreamhost, office365 | Leaving DreamHost email behind and embracing the robust features of Office 365 can be a strategic move for your business. But navigating the DreamHost to Office 365 migration process can seem daunting. Fear not! This guide simplifies your journey:
**Planning is Key:**
- **Define Your Scope**: Identify the data types to migrate (emails, contacts, calendars).
- **User Assessment**: Determine the number of users and their email usage patterns to inform your migration strategy.
- **Tool Selection**: Choose a migration method (IMAP or a third-party tool) based on complexity and technical expertise. Consider the security features offered by different tools.

**Migration Made Easy:**
- **IMAP Migration**: For smaller email volumes, use the built-in IMAP functionality of Office 365 for a free, manual transfer.
- **Third-Party Assistance**: For larger datasets or complex needs, explore third-party migration tools that automate the process and offer advanced features like data filtering and scheduling.

**Post-Migration Success:**
- **Verification is Vital**: Ensure all data is transferred successfully by verifying email counts, contacts, and calendar entries.
- **User Adoption Strategies**: Provide training and support materials to help users navigate the new Office 365 environment.
By following these steps and leveraging the power of Office 365, you can streamline your [DreamHost to Office 365 migration](https://www.o365cloudexperts.com/blog/dreamhost-to-office-365-migration/) process and unlock the benefits of enhanced collaboration, security, and productivity for your team. Consider partnering with a trusted migration specialist like Apps4Rent. Their team of experts can provide valuable guidance on selecting the right tools, ensure a smooth data transfer during your DreamHost to Office 365 migration, and offer ongoing support to help your users adapt to the new platform. This additional layer of expertise can ensure a seamless transition for your entire team. | petergroft |
1,892,892 | HOW TO CREATE AND CONNECT TO A LINUX VM USING A PUBLIC KEY. | ## INTRODUCTION Just like creating your Azure windows VM, the Azure linux is slightly similar from... | 0 | 2024-06-20T16:59:26 | https://dev.to/agana_adebayoo_876a06/how-to-create-and-connect-to-a-linux-vm-using-a-public-key-550o | linux, create, public | **## INTRODUCTION**
Just like creating your Azure Windows VM, creating an Azure Linux VM is similar from the beginning, where you configure your basic settings all the way through your network settings before they are subsequently DEPLOYED. It is at this point that your generated IP address, together with your SSH client, is used to connect to a PUBLIC AZURE LINUX VM.
## **TABLE OF CONTENT**
1. LOG INTO YOUR AZURE PORTAL.
2. CLICK ON VIRTUAL MACHINE.
3. CLICK ON CREATE.
4. SET UP YOUR OVERVIEW PARAMETERS.
## **CLICK ON VIRTUAL MACHINE**
This is the second step in the creation of your AZURE LINUX PUBLIC VM; there are various ways to click on the virtual machine.

## **CLICK ON CREATE**


## **SET UP YOUR OVERVIEW PARAMETERS**
To set up your basic parameters, you must create a virtual machine name; in this instance our VM name is AGANALINUX3VM. Please note that the region you select will determine what Azure will charge (US regions are usually cheaper).
Some regions give you access to the full **security type** drop-down list while others are restricted to a few options. In this instance we will be using the UBUNTU SERVER 20.04 LTS x64 GEN2 Linux operating system.


A USER NAME AND PASSWORD NEED TO BE CREATED, BECAUSE THEY WILL BE NEEDED WHEN WORKING IN OUR CMD OR WINDOWS POWERSHELL TERMINALS.

CLICKING ON OUR **REVIEW AND CREATE** BUTTON WILL HELP CHECK SET-UP PARAMETERS THAT NEED ATTENTION AND ADJUSTMENT. PLEASE NOTE THAT FOR THE PURPOSE OF THIS WRITE-UP WE WILL BE USING DEFAULT SETTINGS FOR EVERYTHING ELSE, EXCEPT WHERE DEPLOYMENT RECOMMENDATIONS ADVISE OTHERWISE.




After clicking on the review and create button, if all parameters are right, your VM settings will **PASS VALIDATION**, and an estimated cost will be published; please note that this is not your final cost.

N.B
**PLEASE NOTE THAT IF YOUR PARAMETER SETTING STOPS AT THIS POINT DUE TO A POWER OUTAGE, YOUR COST WILL BE RE-ESTIMATED AND IT WILL CHANGE**.


**AFTER THE REVIEW AND CREATION IS COMPLETED, THE FOLLOWING SETTINGS WILL BE DISPLAYED FOR ONWARD REVIEW**.




**THE NEXT STEP IS TO CLICK THE CREATE BUTTON, AND ALL THE RESOURCES CREATED WILL BE DEPLOYED.**

AFTER YOUR DEPLOYMENT IS COMPLETE, CLICK ON GO TO RESOURCE TO START YOUR CMD OR WINDOWS POWERSHELL LINUX COMMAND DEPLOYMENT.


**GO TO YOUR WINDOWS START MENU TO SEARCH FOR CMD OR POWERSHELL, AND CLICK ON EITHER OF THE TWO TO ENTER THE TERMINAL ENVIRONMENT FROM WHICH THE LINUX COMMANDS WILL BE DEPLOYED.**

**ENTER THE FOLLOWING TO START THE SSH SESSION.**
(1) SSH
(2) USER NAME AND
(3) IP ADDRESS in the sequence below, then press Enter:
> ssh Aganalinux@172.191.17.145

**TYPE IN YOUR PASSWORD TO LOG ON TO YOUR LINUX VM. PLEASE NOTE THAT THE PASSWORD WILL NOT SHOW, AND IT IS PREFERABLE TO TYPE IN YOUR PASSWORD RATHER THAN COPY AND PASTE.**




## **SUMMARY**
**ONCE LOGGED ON TO THE LINUX VM, THE USER CAN THEN DEPLOY OTHER COMMANDS BASED ON WHAT HE OR SHE NEEDS TO CREATE.**
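For example, once connected you might verify the VM with a few basic commands (standard Ubuntu tooling):

```
uname -a          # show kernel version and architecture
lsb_release -a    # show the Ubuntu release details
sudo apt update   # refresh the package index
```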
| agana_adebayoo_876a06 |
1,895,077 | The Image and ImageView Classes | The Image class represents a graphical image and the ImageView class can be used to display an image.... | 0 | 2024-06-20T16:46:52 | https://dev.to/paulike/the-image-and-imageview-classes-44j1 | java, programming, learning, beginners | The **Image** class represents a graphical image and the **ImageView** class can be used to display an image. The **javafx.scene.image.Image** class represents a graphical image and is used for loading an image from a specified filename or a URL. For example, **new Image("image/us.gif")** creates an **Image** object for the image file **us.gif** under the directory **image** in the Java class directory and **new Image("http://www.cs.armstrong.edu/liang/image/us.gif")** creates an **Image** object for the image file in the URL on the Web.
The **javafx.scene.image.ImageView** is a node for displaying an image. An **ImageView** can be created from an **Image** object. For example, the following code creates an **ImageView** from an image file:
`Image image = new Image("image/us.gif");
ImageView imageView = new ImageView(image);`
Alternatively, you can create an **ImageView** directly from a file or a URL as follows:
`ImageView imageView = new ImageView("image/us.gif");`
The UML diagrams for the **Image** and **ImageView** classes are illustrated in Figures below.


```
package application;
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.layout.HBox;
import javafx.scene.layout.Pane;
import javafx.geometry.Insets;
import javafx.stage.Stage;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;
public class ShowImage extends Application {
@Override // Override the start method in the Application class
public void start(Stage primaryStage) {
// Create a pane to hold the circle
Pane pane = new HBox(10);
pane.setPadding(new Insets(5, 5, 5, 5));
Image image = new Image("file:/C:/Users/Paul/development/MyJavaFX/src/application/image/ke.jpg");
pane.getChildren().add(new ImageView(image));
ImageView imageView2 = new ImageView(image);
imageView2.setFitHeight(100);
imageView2.setFitWidth(100);
pane.getChildren().add(imageView2);
ImageView imageView3 = new ImageView(image);
imageView3.setRotate(90);
pane.getChildren().add(imageView3);
// Create a scene and place it in the stage
Scene scene = new Scene(pane);
primaryStage.setTitle("ShowImage"); // Set the stage title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
}
public static void main(String[] args) {
Application.launch(args);
}
}
```

The program creates an **HBox** (line 16). An **HBox** is a pane that places all nodes horizontally in one row. The program creates an **Image**, and then an **ImageView** for displaying the image, and places the **ImageView** in the **HBox** (line 19).
The program creates the second **ImageView** (line 21), sets its **fitHeight** and **fitWidth** properties (lines 22–23) and places the **ImageView** into the **HBox** (line 24). The program creates the third **ImageView** (line 26), rotates it 90 degrees (line 27), and places it into the **HBox** (line 28). The **setRotate** method is defined in the **Node** class and can be used for any node. Note that an **Image** object can be shared by multiple nodes. In this case, it is shared by three **ImageView**. However, a node such as **ImageView** cannot be shared. You cannot place an **ImageView** multiple times into a pane or scene.
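To see the difference, a small fragment (assuming a pane such as the **HBox** above) shares one **Image** across two **ImageView** nodes, while reusing a single node would fail:

```
Image shared = new Image("image/us.gif");
ImageView view1 = new ImageView(shared); // OK: an Image can back many nodes
ImageView view2 = new ImageView(shared); // OK: a second view of the same image
pane.getChildren().addAll(view1, view2); // fine: two distinct nodes
// pane.getChildren().addAll(view1, view1); // runtime error: duplicate children
```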
Note that you must place the image file in the same directory as the class file, as shown in the following figure.

If you use a URL to locate the image file, the URL protocol http:// must be present, so the following code is wrong:
`new Image("www.cs.armstrong.edu/liang/image/us.gif");`
It must be replaced by
`new Image("http://www.cs.armstrong.edu/liang/image/us.gif");` | paulike |
1,895,071 | My experiment with HTMX and Astro | I share my first experience with the htmx library. I created a simple site... | 0 | 2024-06-20T16:44:25 | https://dev.to/petrtcoi/my-experiment-with-htmx-and-astro-3io9 | astro, htmx | I share my first experience with the [htmx](https://htmx.org) library. I created a simple site ([https://tubog-showcase.ru](https://tubog-showcase.ru)) consisting of a home page with an initial set of apartment cards and 5 buttons that display apartments filtered by number of rooms.
Stack: Astro + HTMX + Tailwind
Implementation is very simple:
``` ts
// src/pages/index.astro
<form
id='filter-form'
hx-trigger='change'
hx-post='/api/cases'
hx-target='#search-results'
hx-swap='innerHTML'
hx-indicator='#loading'
>
...
<label>
<input
type='radio'
name='rooms'
value='1'
class='hidden peer'
/>
<div class='hover:border-slate-500 peer-checked:opacity-100 peer-checked:shadow-xl peer-checked:border-slate-400'>
1 room
</div>
</label>
...
</form>
<div id='loading'>
Loading...
</div>
<div id='search-results'></div>
```
Accordingly, when a new filter option is selected, a POST request with data about the selected option is sent to `/api/cases`.
The `<div id='loading'>` serves as the request's loading indicator. The result (ready-made HTML) is rendered inside `<div id='search-results'></div>`.
On the `api` side, the code looks roughly like this:
``` ts
// src/pages/api/cases.ts
import type { APIRoute } from 'astro'

export const POST: APIRoute = async ({ request }) => {
const formData = await request.formData()
const rooms = formData.get('rooms')
// Build the filtered list of flats based on `rooms`
return new Response(
`
<div>
${filteredFlats.map(flat => ` {some flat html here} `).join('')}
</div>
`,
{
status: 200,
headers: {
'Content-Type': 'text/html+htmx',
},
}
)
}
```
And that's it.
## First Impressions
This approach to creating web applications is not really for me. Implementing the same thing in React seems simpler, with more potential for improvement. What I didn't like:
- The blurring of application logic. The backend doesn't just return data, but also deals with frontend tasks;
- The need to keep the markup in sync between the client side and the server side (in our case, the default display of the cards and the rendered search results).
Yes, I made it a bit more difficult for myself by not using page partials, but their use within the same Astro in conjunction with HTMX seemed a bit too far-fetched to me.
Still, I have a number of tasks where HTMX is a perfect fit: I have several old static sites and sites implemented on CMSs like WordPress. For these, the ability to add interactivity (especially using page partials) is a great prospect. For sites written in Next or Astro, I haven't seen a use for it yet.
## P.S. An unexpected problem
In this project I've connected View Transitions and it seems to break the HTMX library. As soon as you make a transition between pages, the site ends up static. Fortunately, I came across this [article](https://flaviocopes.com/htmx-and-astro-view-transitions/) which describes the solution: you need to add the following script to the page with HTXM code.
``` ts
// src/pages/index.astro
<script>
document.addEventListener('astro:page-load', () => {
const contentElement = document.getElementById('filter-form')
if (contentElement) {
htmx.process(document.body)
}
})
</script>
```
| petrtcoi |
1,895,075 | Why Specifying the Node.js Version is Essential | Hey everyone! Today we're going to talk about a super important practice in developing Node.js... | 0 | 2024-06-20T16:43:21 | https://dev.to/robertoumbelino/why-specifying-the-nodejs-version-is-essential-4nhp | node, npm, nvm | Hey everyone! Today we're going to talk about a super important practice in developing Node.js applications: specifying the Node.js version in your project's `package.json` file. It might seem like a small detail, but trust me, it can save you from a lot of headaches in the future. So, let's understand why this is crucial!
#### Why Specify the Node.js Version? 🤔
1. **Consistency in Development**: 🛠️ When working in a team, it's crucial that everyone uses the same version of Node.js. If each developer uses a different version, chaos can ensue, with mysterious bugs appearing only on one colleague's machine. By specifying the version in `package.json`, you ensure everyone is on the same page.
2. **Code Compatibility**: 🐞 New versions of Node.js bring new features but can also introduce changes that break compatibility with previous versions. If you don't specify a version, you might find that a dependency stops working or your code starts throwing errors after an update.
3. **Aligned Production and Development Environments**: 🌐 Imagine you developed and tested your application on one version of Node.js, but the server is running a different version when deploying to production. This can cause unexpected behaviors and hard-to-track errors. By specifying the version in `package.json`, you can configure your production environment to use the same version, avoiding unpleasant surprises.
#### How to Specify the Node.js Version in `package.json` 📜
It's quite simple. In your `package.json`, add a property called `engines` and specify the Node.js version your application requires. Here's a basic example:
```json
{
"name": "my-application",
"version": "1.0.0",
"description": "My awesome application",
"main": "index.js",
"scripts": {
"start": "node index.js"
},
"engines": {
"node": ">=14.0.0 <16.0.0"
},
"dependencies": {
"express": "^4.17.1"
}
}
```
In this example, we're stating that the application requires a Node.js version greater than or equal to 14.0.0 and less than 16.0.0. This helps keep everyone aligned and aware of the version that should be used.
#### Tools for Managing Node.js Versions 🧰
In addition to specifying the version in `package.json`, you can use tools to ensure the correct version is installed and active in your development environment. The most popular of these is **nvm (Node Version Manager)**. With it, you can install multiple versions of Node.js and switch between them easily.
- **Install nvm**: Follow the installation instructions on the [official nvm GitHub repository](https://github.com/nvm-sh/nvm).
- **Install a specific version of Node.js**:
```sh
nvm install 14
```
- **Use the installed version**:
```sh
nvm use 14
```
You can also create an `.nvmrc` file in your project, containing the Node.js version you want to use. Here's an example of how to create this file:
```sh
# Create an .nvmrc file in your project's root directory
echo "14" > .nvmrc
```
This `.nvmrc` file contains just the version number of Node.js you want to use. When you or another developer enters the project directory, simply run `nvm use` and nvm will automatically adjust the Node.js version.
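By default, npm only prints a warning when your current Node.js version falls outside the `engines` range. If you want that mismatch to fail the install outright, you can add one flag to your project's `.npmrc`:

```sh
# .npmrc: turn the engines warning into a hard install error
engine-strict=true
```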
#### Conclusion 🎯
Specifying the Node.js version in `package.json` might seem like a small detail, but it's an essential practice to maintain consistency and stability in your project. It avoids many compatibility issues and helps ensure your code works the same way in all environments, from development to production.
So, don't slack! The next time you start a Node.js project, make sure to specify the Node.js version in `package.json` and use tools like nvm to manage your versions. Your future self and your teammates will thank you! 🚀 | robertoumbelino |
1,895,073 | Testing Your API with Fastify and Vitest: A Step-by-Step Guide | Hey there! Let's dive into how to test an API built with Fastify using Vitest and TypeScript. We'll... | 0 | 2024-06-20T16:38:29 | https://dev.to/robertoumbelino/testing-your-api-with-fastify-and-vitest-a-step-by-step-guide-2840 | node, fastify, vitest, api | Hey there! Let's dive into how to test an API built with Fastify using Vitest and TypeScript. We'll focus on a simple health check route and make sure everything's working as it should.
### Why Test an API?
Testing an API is super important to make sure everything is running smoothly. Automated tests help us catch bugs quickly and keep our code solid and reliable.
### Setting Up the Environment
Let's start from scratch and set up our project. First, create a directory for the project and initialize a new Node.js project:
```bash
mkdir my-fastify-api
cd my-fastify-api
npm init -y
```
Next, let's install Fastify and TypeScript:
```bash
npm install fastify
npm install typescript ts-node @types/node --save-dev
```
Create a `tsconfig.json` file to configure TypeScript:
```json
{
"compilerOptions": {
"target": "ESNext",
"module": "CommonJS",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"outDir": "./dist",
"rootDir": "./src"
},
"include": ["src/**/*.ts"]
}
```
### Creating the Fastify Server
Now, let's set up a Fastify server with a simple health check route. Create a `src` directory and inside it, make an `app.ts` file:
```typescript
// src/app.ts
import fastify from 'fastify';
const app = fastify();
app.get('/health-check', async (request, reply) => {
return { status: 'ok' };
});
export default app;
```
Next, let's create the `server.ts` file to start the server:
```typescript
// src/server.ts
import app from './app';
const start = async () => {
try {
await app.listen({ port: 3000 });
console.log('Server is running on http://localhost:3000');
} catch (err) {
app.log.error(err);
process.exit(1);
}
};
start();
```
### Installing Vitest
Now, let's install Vitest so we can write our tests:
```bash
npm install --save-dev vitest
```
Add the following scripts to your `package.json`:
```json
"scripts": {
"dev": "ts-node src/server.ts",
"test": "vitest",
"test:watch": "vitest --watch"
}
```
### Writing Tests
Let's write a test for our `health-check` route. Create a `tests` directory and inside it, make a `health-check.test.ts` file:
```typescript
// tests/health-check.test.ts
import app from '../src/app';
import { test, expect } from 'vitest';
test('GET /health-check should return status OK', async () => {
const response = await app.inject({
method: 'GET',
url: '/health-check'
});
expect(response.statusCode).toBe(200);
expect(response.json()).toEqual({ status: 'ok' });
});
```
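While you're at it, a second test can pin down Fastify's default handling of unknown routes; this extra file is just a suggestion:

```typescript
// tests/not-found.test.ts
import app from '../src/app';
import { test, expect } from 'vitest';

test('GET /unknown should return 404', async () => {
  const response = await app.inject({
    method: 'GET',
    url: '/unknown'
  });

  expect(response.statusCode).toBe(404);
});
```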
### Running the Tests
To run the tests, use this command:
```bash
npm test
```
If you want the tests to run continuously whenever you make changes, use:
```bash
npm run test:watch
```
### Conclusion
And that's it! In this article, we saw how to set up a Fastify project with TypeScript from scratch and how to write tests for a simple route using Vitest. Testing your routes makes sure your API is always working correctly and helps avoid any nasty surprises later on. Happy coding! | robertoumbelino |
1,895,072 | ChatGPT Slack Bot | Slack is one of the most widely used communication tools for teams, and with the integration of... | 0 | 2024-06-20T16:38:20 | https://dev.to/pranjal_sharma_38482a3041/chatgpt-slack-bot-3bfe | chatgpt, slack, bot, ai | Slack is one of the most widely used communication tools for teams, and with the integration of OpenAI’s ChatGPT, it becomes an even more powerful tool. ChatGPT is a highly advanced language model that can generate human-like responses to a given prompt. In this blog, we will show you how to integrate ChatGPT with Slack and use it to answer questions and have conversations with your team members.

> Note: the API has a higher uptime compared to the ChatGPT UI 😄
### 1. Register an app with Slack and gather all the tokens
The first step is to register a new app on Slack and obtain the Slack Bot Token and the Slack App Token.
- Log in to your Slack workspace and go to the [Slack API website](https://api.slack.com/).
- Click on **“Create an app”** and select **“From scratch”**
- Give your app a name, select your Slack workspace.
- In Basic information > Add features and functionality. Click on “Permissions” and in Scopes add in Bot Token Scopes: [app_mentions:read](https://api.slack.com/scopes/app_mentions:read) ; [channels:history](https://api.slack.com/scopes/channels:history) ; [channels:read](https://api.slack.com/scopes/channels:read) ; [chat:write](https://api.slack.com/scopes/chat:write).
- In settings, click on **“Socket Mode”**, enable it and give the token a name. Copy the Slack Bot App Token.
- In <u>Basic information > Add features and functionality.</u> Click on **“Event Subscriptions”** and enable it. Furthermore in **“Subscribe to bot events”** select “app_mention”. Save changes.
- Go to the **“OAuth & Permissions”** section and install your app to your workspace.
- Copy the **Slack Bot Token.**
### 2. Get the OpenAI API key [valid for a month for free users]
You need an OpenAI API key to integrate ChatGPT.
- Go to the OpenAI website.
- Go to the API key section and create a new API key after you log in.
- Copy the API key.
### 3. Install necessary dependencies
```
pip install openai
pip install slack-bolt
pip install slack
```
Install these dependencies. Slack Bolt is a set of tools and libraries that allows developers to easily create Slack applications.
#### 4. Run the application
Fill in the first 3 tokens in this script with your tokens and run the application.
```
SLACK_BOT_TOKEN = "xoxb-2196501177986-5475158173799-DTxoGAJMjSrqZ1UbKJQDRkYq"
SLACK_APP_TOKEN = "xapp-1-A05DZ02E7JT-5502278073649-6d0d2eabadfa2388189e2bd414393d764d87de68f4df7234f2e87a421eba9440"
OPENAI_API_KEY = "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"

import os
import openai
from slack_bolt.adapter.socket_mode import SocketModeHandler
from slack import WebClient
from slack_bolt import App

# Event API & Web API
app = App(token=SLACK_BOT_TOKEN)
client = WebClient(SLACK_BOT_TOKEN)

# This gets activated when the bot is tagged in a channel
@app.event("app_mention")
def handle_message_events(body, logger):
    # Log message
    print(str(body["event"]["text"]).split(">")[1])

    # Create prompt for ChatGPT
    prompt = str(body["event"]["text"]).split(">")[1]

    # Let the user know that we are busy with the request
    response = client.chat_postMessage(
        channel=body["event"]["channel"],
        thread_ts=body["event"]["event_ts"],
        text=f"Hello LazyPay junkies !! :robot_face: \nThanks for your request, I'm on it!")

    # Check ChatGPT
    openai.api_key = OPENAI_API_KEY
    response = openai.Completion.create(
        engine="text-davinci-003",
        prompt=prompt,
        max_tokens=1024,
        n=1,
        stop=None,
        temperature=0.5).choices[0].text

    # Reply to thread
    response = client.chat_postMessage(
        channel=body["event"]["channel"],
        thread_ts=body["event"]["event_ts"],
        text=f"Here you go: \n{response}")


if __name__ == "__main__":
    SocketModeHandler(app, SLACK_APP_TOKEN).start()
```
And **here we go**!! Add your bot to a channel in the integration tab. :rocket:
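One last tip: hardcoding real tokens like in the script above is risky if you ever share the code. A safer sketch (assuming you export the variables in your shell first) reads them from the environment instead:
```python
import os

# Read secrets from environment variables instead of hardcoding them.
SLACK_BOT_TOKEN = os.environ["SLACK_BOT_TOKEN"]
SLACK_APP_TOKEN = os.environ["SLACK_APP_TOKEN"]
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
```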
| pranjal_sharma_38482a3041 |
1,895,096 | Free Machine Learning Bootcamp For AWS | The Nexa Bootcamp offers scholarships for those who want to master SageMaker Canvas on AWS,... | 0 | 2024-06-23T13:50:02 | https://guiadeti.com.br/bootcamp-machine-learning-aws-gratuito/ | bootcamps, aws, cursosgratuitos, inteligenciaartifici | ---
title: Free Machine Learning Bootcamp For AWS
published: true
date: 2024-06-20 16:33:34 UTC
tags: Bootcamps,aws,cursosgratuitos,inteligenciaartifici
canonical_url: https://guiadeti.com.br/bootcamp-machine-learning-aws-gratuito/
---
The Nexa Bootcamp offers scholarships for those who want to master SageMaker Canvas on AWS, learning to train and deploy Machine Learning models without writing a single line of code.
During the course, participants will work through preparing, visualizing, and manipulating large volumes of data, culminating in building a Smart Inventory Forecasting application, ideal for making your portfolio stand out.
The program includes 10 hours of content, a hands-on project, and a coding challenge, providing a complete, practical Machine Learning experience on AWS.
## Bootcamp Nexa – Machine Learning for Beginners on AWS
The Nexa Bootcamp offers an exclusive opportunity, with scholarships available for anyone who wants to get started in the world of Machine Learning using AWS.

_Image from the course page_
Take advantage of this chance to master SageMaker Canvas and learn to train and deploy Machine Learning models without writing a single line of code.
### Course Content
In this bootcamp, you will learn to use SageMaker Canvas on AWS to train and deploy Machine Learning models.
The course covers every stage, from preparation and visualization to handling large volumes of data, all without needing to program.
### Inventory Forecasting Project
At the end of the course, you will build a Smart Inventory Forecasting application. This hands-on project is an excellent addition to your portfolio, highlighting your Machine Learning skills. Check out the syllabus:
#### Machine Learning and Generative AI Fundamentals
- DIO Bootcamps: Free Education and Employability Together!;
- Algorithms and Machine Learning;
- Natural Language Processing;
- What Generative AIs Are;
- Opening Class: Bootcamp Nexa – Machine Learning for Beginners on AWS.
#### No-Code Machine Learning with Amazon SageMaker Canvas
- Introduction to Low-Code Development;
- Introduction to SageMaker Canvas: No-Code Generative AI;
- Turning Data into Insights with SageMaker Canvas;
- Coding Challenges: Sharpen Your Logic and Computational Thinking;
- Exploring SageMaker Canvas with Programming Logic;
- Project Challenges: Build a Winning Portfolio;
- Smart Inventory Forecasting on AWS with SageMaker Canvas;
- Rate This Bootcamp.
### Coding Challenge
Take part in a coding challenge that reinforces your learning and demonstrates your skills in a practical, competitive environment.
### Course Load
The bootcamp offers 10 hours of intensive content, including teaching material and hands-on practice for a complete learning experience. Note the important dates:
- Registration opens: 17/06/2024;
- Registration closes: 29/07/2024;
- Launch event: 02/07/2024.
### Live Sessions with Experts
Learn from experts in live sessions. These classes provide an interactive experience, letting you ask questions in real time and gain valuable insights from professionals in the field.
### Machine Learning Fundamentals
You will learn the fundamentals of Machine Learning, including an introduction to Generative Artificial Intelligence. The course uses Low-Code and No-Code products and services, such as SageMaker Canvas, to make learning easier.
### DIO and Nexa Partnership
This bootcamp is offered in partnership between DIO and Nexa, bringing the most modern technologies, tools, and libraries trending in the market.
### Who It Is For
The bootcamp is recommended for professionals with prior development experience, mainly in Python, who are looking for a more efficient and productive data integration service.
### Career Opportunities
After completing the bootcamp, your profile will be made available for opportunities in one of the technologies most sought after by DIO's partner companies on Talent Match. Get ready for the opportunities to come and succeed in recruitment interviews.
## Machine Learning
Machine Learning is a subfield of artificial intelligence focused on developing algorithms and techniques that let computers learn from data and make predictions or decisions without being explicitly programmed for specific tasks.
### Algorithms and Models
Machine Learning algorithms are step-by-step instructions used to turn data into a model. A model is a program that has been trained on a dataset to identify patterns or make predictions. There are several types of Machine Learning algorithms, including:
- Supervised: the model is trained on labeled data, where the correct answer is provided. Examples include linear regression and decision trees.
- Unsupervised: the model tries to find patterns in unlabeled data. Examples include clustering and dimensionality reduction.
- Reinforcement: the model learns to make decisions by receiving rewards or penalties. This type is often used in robotics and games.
### Training Process
The training process involves feeding the algorithm historical data and letting it adjust its parameters to minimize prediction error. The process includes several steps, sketched in code after the list below:
1. Data Collection: gather relevant data that will be used to train the model.
2. Data Preparation: clean and format the data to ensure it is suitable for training.
3. Model Training: feed the data to the algorithm and let it learn patterns and relationships.
4. Model Evaluation: test the model on new data to assess its accuracy and performance.
5. Tuning and Optimization: refine the model to improve its effectiveness.
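To make those five steps concrete, here is a minimal supervised-learning sketch in Python using scikit-learn. It is purely illustrative and an assumption about tooling on my part: the bootcamp itself does all of this without code in SageMaker Canvas.
```python
# Minimal illustration of the five steps above using scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                     # 1. data collection
X_train, X_test, y_train, y_test = train_test_split(  # 2. data preparation
    X, y, test_size=0.2, random_state=42)

model = DecisionTreeClassifier(max_depth=3)
model.fit(X_train, y_train)                           # 3. model training

accuracy = accuracy_score(y_test, model.predict(X_test))  # 4. model evaluation
print(f"Accuracy: {accuracy:.2f}")                    # 5. tune from here
```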
### Machine Learning Applications
Machine Learning has significant applications in healthcare, such as medical diagnosis, personalized treatment, and disease outbreak prediction. Algorithms can analyze medical images to detect tumors or predict the progression of chronic diseases.
In the financial sector, Machine Learning is used to detect fraud, predict stock prices, and manage risk. Models can analyze transactions to identify suspicious patterns or forecast market trends.
Companies use Machine Learning to segment customers, personalize marketing campaigns, and predict consumer behavior. Algorithms can analyze purchase data to recommend products or services.
The automotive industry uses Machine Learning in autonomous vehicles, allowing cars to learn to drive in different road conditions and make real-time decisions to ensure safety.
### Challenges and Considerations
The effectiveness of a Machine Learning model depends on the quality of its training data. Poor or biased data can lead to incorrect predictions and bad decisions.
### Interpretation and Transparency
A significant challenge is interpreting how Machine Learning models make decisions. Complex models, such as neural networks, are often seen as "black boxes" because their internal processes are hard to understand.
### Ethics and Privacy
The use of Machine Learning raises ethical and privacy questions, especially where personal data is concerned. It is crucial to ensure that models are fair, transparent, and respect individual privacy.
### The Future of Machine Learning
The future of Machine Learning is promising, with continuous advances in algorithms, increasing data availability, and improvements in computing power.
Emerging areas such as deep learning, explainable artificial intelligence, and federated learning are shaping the technology's future, making it even more powerful and accessible.
Machine Learning will continue to transform many industries, driving innovation and efficiency, and playing a crucial role in the evolution of technology and the solving of complex problems.
## DIO
Digital Innovation One (DIO) is an education platform whose goal is to democratize access to technology knowledge and prepare professionals for the job market.
Founded with the mission of providing accessible, high-quality education, DIO offers a range of courses and bootcamps in areas such as programming, web development, data science, and more.
## Teaching Methodology
DIO follows a practical, project-oriented teaching methodology that prepares students for real-world challenges in the market.
Courses are taught by experienced professionals and include hands-on activities, capstone projects, and coding challenges.
### Partnerships and Market Impact
DIO has strategic partnerships with leading technology companies, such as Santander, Localiza, and Carrefour, among others, giving students exclusive networking and job opportunities.
These partnerships allow students to take part in targeted training programs aligned with market needs and technology trends.
## Registration link ⬇️
[Registration for Bootcamp Nexa – Machine Learning para Iniciantes na AWS](https://www.dio.me/bootcamp/bootcamp-nexa-machine-learning-para-iniciantes-na-aws) must be completed on the DIO website.
## Share the Nexa Bootcamp and DIO and transform careers in Machine Learning on AWS!
Did you enjoy this content about the bootcamp? Then share it with everyone!
The post [Free Machine Learning Bootcamp For AWS](https://guiadeti.com.br/bootcamp-machine-learning-aws-gratuito/) appeared first on [Guia de TI](https://guiadeti.com.br). | guiadeti |
1,895,069 | Day 24 of my progress as a vue dev | About today Today was again one of those days where I struggled to follow my routine to the fullest... | 0 | 2024-06-20T16:32:20 | https://dev.to/zain725342/day-24-of-my-progress-as-a-vue-dev-5ggi | webdev, vue, typescript, tailwindcss | **About today**
Today was again one of those days where I struggled to follow my routine to the fullest and spent most of my time on YouTube and sleeping, which I think is not productive in any way. But I'm starting to realize that such instances are becoming less frequent, though I will keep encountering days like this in the future as well. My goal is to do as much work as I can and not waste a lot of time, and also not to let this affect me to the point of giving up. Because I think taking little steps constantly is better than not moving at all. So, I did work on my second landing page, tried to extend the work further and make it even better, and I feel good about the work I did today.
**What's next?**
I will be completing my landing page, and I will also be pushing the code to my GitHub repository for anyone who wants to check it out. And yeah, the rest of the plan is still the same.
**Improvements required**
One thing I noticed is that in order to get more work done in a shorter period of time, I was starting to write smelly code, which really bothered me today when I was navigating through it. So, I have to consciously try to write clean code while still moving forward efficiently time-wise.
Wish me luck! | zain725342 |
1,895,068 | How does Nostra utilize mobile game advertising and lock screen games to enhance gaming monetization strategies? | At Nostra, we harness the power of mobile game advertising and lock screen games to optimize gaming... | 0 | 2024-06-20T16:31:33 | https://dev.to/claywinston/how-does-nostra-utilize-mobile-game-advertising-and-lock-screen-games-to-enhance-gaming-monetization-strategies-376g | gamedev, mobilegames, androidgames, nostragames | At Nostra, we harness the power of [**mobile game advertising**](https://medium.com/@adreeshelk/get-to-know-all-about-gaming-platforms-today-e5d4ae7f25dd?utm_source=referral&utm_medium=Medium&utm_campaign=Nostra) and lock screen games to optimize gaming monetization strategies. Through our platform, developers can seamlessly integrate mobile game advertising formats like rewarded videos and interstitial ads, maximizing revenue while preserving the user experience. Additionally, our innovative [**lock screen games**](https://nostra.gg/articles/Lock-Screen-Games-Are-a-Game-Changer-for-Gaming-Developers.html?utm_source=referral&utm_medium=article&utm_campaign=Nostra) offer a unique avenue for engagement, allowing developers to showcase ads or in-game offers directly on users' lock screens. This strategic approach not only increases ad impressions but also enhances user engagement. By leveraging [**mobile game advertising**](https://medium.com/@adreeshelk/140-million-players-by-2027-will-you-be-one-of-them-2049631abb6e?utm_source=referral&utm_medium=Medium&utm_campaign=Nostra) and lock screen games on Nostra, developers can unlock multiple revenue streams, ensuring the success of their gaming monetization strategies. | claywinston |
1,895,067 | I’m switching from Laravel to Rails | I have been using Laravel since version 4 in 2013. Over the years, Laravel has evolved significantly.... | 0 | 2024-06-20T16:29:45 | https://dev.to/reshadman/im-switching-from-laravel-to-rails-50on | rails, laravel, fullstack, ruby | I have been using Laravel since version 4 in 2013. Over the years, Laravel has evolved significantly. I initially chose Laravel over Rails due to its favorable position in our local job market. In 2015, I started building our own business using Laravel. Today, that business is the largest job board in Iran, serving over 3 million job seekers and 100,000 employers. Laravel performed well for us, until it didn't.
Over the years of maintaining this application, I have come to some conclusions, both in terms of code and team dynamics. Our entire product/tech team has never exceeded six people, including designers and product managers. During the COVID-19 pandemic I managed everything solo Product/Tech wise.
I have also been part of projects using Spring, Symfony, and Django. We were among the first adopters of Vue.js back in 2015, starting with Vue 0.12.
I told you this story to paint a picture of the pains I've encountered while maintaining a Laravel application for over nine years.
**Sticking to framework defaults**
Over the years, I have realized that adhering to framework defaults at least 90% of the time is the best way to ensure easy upgrades, address security concerns, adopt new technologies, and hire new developers. Even though there are architectural trade-offs, the benefits outweigh the drawbacks.
**Laravel changes opinions and defaults**
Laravel tends to change its opinions with every major version or introduce new ones. Most of the time, these new opinions permeate every part of your application. In contrast, Rails sticks to its architectural roots. The "Rails way" remains similar to how it was 10 years ago, allowing you to join a project that respects this method and perform easy upgrades, onboard new developers, or add features seamlessly. Laravel, however, is not like that.
Over the past 10 years, Laravel has promoted various combinations, such as (Vue + Laravel), (Laravel + Inertia), Livewire with class components, and Laravel Folio + Volt. This is just on the framework side. The community, including the framework authors and the crew (like Laracasts), has promoted multiple nonsensical architectural ways of writing software. Many of these make no sense, such as repositories with active record models, repositories as service objects, service objects with models, service objects with models and repositories, and service objects with in/out DTOs.
I found my way early on, but this inconsistency makes it hard to find new developers and maintain a consistent codebase. Every community has its challenges, just like the ones I mentioned, but I have never seen DHH, for example, seem confused about how to write Rails software. Look at the source code for first-party Laravel packages like Breeze Scaffoldings, Laravel Fortify, Laravel Telescope, Laravel Sanctum, Laravel Spark, Laravel Pulse, and Laravel Horizon. Each of them has different choices on how to write software, which is precisely the problem I’m facing. Even the framework owner seems inconsistent on how to write business software or near-end user components with Laravel. There are too many choices, and some are abandoned while others are equivalent to each other.
**Laravel is somehow a merchant of complexity**
People are always eager for new content, new tools and new approaches to writing software. Developer tooling authors often build their business plans around this demand, which isn't necessarily bad, but it does have side effects. This is the problem when a dedicated company including a full-time team works on a set of open-source products: the framework is designed for revenue after some years. Basecamp does a ton of marketing with Rails, but I respect how they keep their commercial software separate from their open-source extractions.
Part of these new opinions on how to write software stems from trying to attract everyone and every method. In recent years, Laravel has turned into a sort of food court. Need MVC? We've got that. Think Phoenix LiveView is cool? We create Livewire as a first-party package and promote it everywhere in the docs. Is the React handler+view in one file hyped? We create Laravel Volt + Folio. People love React and Vue but find writing Next and Nuxt hard? We create Inertia. We have responses for every question and consideration.
Yet, from a developer's point of view, you have to keep up with every new cognitive load you're faced with. Nearly all of them are just noise, with only a few, like Inertia, offering something valuable. Sometimes, the old ways are completely abandoned. Although not entirely without backward compatibility, they are neglected and receive no care. Laravel has somehow become a merchant of complexity, constantly introducing new paradigms that add to the cognitive load for developers. Then, they sell courses, books, commercial packages, and companion software around these new paradigms. I'm really okay with this part, but it significantly affects community packages and standards, developer skills, and codebases, this is the part I'm not OK with.
Enough philosophical bullshit—let's talk about code.
**ActiveRecord is years ahead of Eloquent**
Eloquent has mimicked ActiveRecord but hasn't brought over the most important parts. ORM is a key component of a battery-included framework, and Eloquent has received nearly zero upgrades since 2015.
Part of this is due to Ruby's influence, but that part is manageable. Eloquent lacks several crucial features, such as self-validating models, commit callbacks, and models acting like aggregate roots. This isn't just about putting validation rules in Laravel form requests, controllers, or models; it's about validations being an important part of the model's existence. They fundamentally change how you define behavior, states and solve problems. In Rails, it's much harder to put a model in the wrong state because of self-validating models, models acting as aggregate roots, and model callbacks being first-class citizens.
With Eloquent, you have to wrap nearly every multi-model interaction in an explicit transaction block, and I've paid the price (even a financial price) for missing one; other ORM libraries make this far easier. Eloquent behaves like a Row Gateway during writes, making it easy to corrupt data states in different parts of your app.
The syntax in Ruby for defining validation rules on a model, accessors, or callbacks is far better than the options available in PHP. I love the Active Record pattern because of its progressive behavior during development, but I've reached a point where I define rules, getters, setters, casts, fillable, and guarded attributes independently, instead of defining them for a single field in a data mapper ORM with poor syntax that combines object behavior definition with metadata. How can a battery-included framework that offers multiple ways of building front-ends not offer optimistic locks or commit callbacks for models? I mean callbacks, not event listeners, which introduce the highest level of possible indirection when scanning code—another problem in itself.
**Service providers are PHP bullshit**
Rails has the advantage of not requiring dependency injection (DI) or inversion of control (IoC), so you don't struggle with IoC on simple tasks like saving an input field to the database. You just see business code in your implementation. In contrast, Laravel uses facades and extensions to address this, but they feel like hacks. As a result, you're left with pages of configuration files and limitations on extending functionality.
**Blade is better than ERB**
I like ViewComponent in Rails, but even ERB with ViewComponent doesn't match the capabilities of Blade, especially when it comes to components. The XML-like syntax for writing templates is really nice.
**About Routes, FormRequests, Policies and Middlewares**
When you write a CRUD operation in Laravel, you might end up defining nearly five classes: two Form Request objects, one controller, a policy class, and a simple model, each existing in a separate folder within the application. One of my experiences in programming is the importance of keeping related things together. Rails still has this problem, but you don’t necessarily have to define five classes for a single feature. Using Livewire? You'll need even more classes. Despite this level of abstraction, it's not necessarily enough; there are tons of business-logic-filled middlewares in every Laravel app.
Rails' controller lifecycle callbacks, together with concerns, are a blessing. You still write simple Ruby code without defining a dozen layers just to save a form to the database and introduce some side effects. Route model bindings seem very attractive at first sight, but in a real Laravel project, you either see tons of query logic in explicit route model binding definitions leaking through service providers, middlewares, and route files, or you find yourself performing repetitive find queries for the same fetch logic in hundreds of controllers, passing them to view files each time. Alternatively, you might invent a new class/service/action for these jobs, which every implementation differs from every other Laravel app you've seen.
When learning Laravel, the documentation heavily promotes route model binding, but it seldom mentions that bindings are not scoped through the current user and that you MUST define policies to control access to them. Just search the internet for sites using the Livewire JS file, create accounts on those sites, and you will likely find many that allow you to access other users' resources by changing parameters in the URL.
In Laravel, the routes definition file sometimes contains more logic than your controllers and models combined. Read the documentation for gateways and policies on the Laravel site; they offer dozens of methods for doing the same thing—the scroll is longer than the HTTP server implementation file in Go.
Form Requests advertise nothing about abstraction. Open a standard Laravel project, and you'll see repeated code for rules and authorization everywhere. I've developed the same app for nine years, and believe me, this simple thing annoyed me the most. The annoyance was doubled when they introduced Form Requests. Rails, by default, uses request-lasting query caching for fetching ActiveRecord queries. Open a Laravel app, and you'll see repeating queries in middlewares to control payment walls, bans, etc. There’s still no clear way of sharing data from middleware to controllers. You have dozens of options: singleton classes, request merging, request extending, etc., but no one uses them effectively. At some point, you don't even know what middlewares are bound to your route, and you get surprised.
Laravel controller methods nearly just function as input-output handlers, which is okay, but combining them with dozens of layers and concepts to perform a simple before_action callback directly in the controller or the parent controller is excessive.
Rails has much less noise by not having these features. Rails carefully advertises the way it does things and carefully chooses what not to include to prevent becoming feature-bloated. This is a feature not a bug.
**Rails Documentation**
At first sight, Laravel seems more documented than Rails, which is true when considering a beginner's perspective. However, Rails' practice of writing the real documentation alongside the code itself is really, really nice. You can jump straight to it with Ctrl+B, and it feels really productive.
**Hotwire is better than Livewire**
I really believe that simple MVC with strict resource controllers is the best way to decompose an app with hundreds of screens. However, Livewire somehow prevents this. You write your logic alongside some JavaScript-related code, and changing the UI significantly impacts your backend implementation since they live together in the same component. This leads to a lot of dead code in Livewire for long-term maintenance projects, and edge cases are really hard to fix. Additionally, the Livewire way of writing software is deeply specific to its own environment. For instance, I can discuss models, controllers, before_action, or middlewares—concepts that exist in nearly every framework or library. However, developing software in Livewire involves a very specific and unique implementation that doesn't translate well to other approaches.
I've also found that 90% of JSON API exposures are just response format changes, something the Rails community leans on heavily and that is much harder, nearly impossible, when using Livewire. With Livewire, you end up writing multiple controllers that do the same thing. Sticking to this design may seem hard at first, but it really keeps the application maintainable and understandable in the long run.
Additionally, I really wish development teams would recognize the value in the Shape Up method. One of its key components during cycles is progressive development. If you want to implement a feature, you can start by writing dead simple HTML and then implement the functionality, wiring them together and adding interactivity later. I think Rails encourages this behavior by the way Hotwire works, and it is really wise.
I also believe that creating TRULY VERY high-fidelity UIs with Hotwire is not possible or practical, but I have accepted the trade-off. Most of the time, I struggle to create those UIs even in environments like Vue. Overall, I really like the progressive approach of Hotwire and the low footprint it introduces, but I think Alpine.js is a better alternative to Stimulus.
Rails developers seem to be more skilled in Rails' front-end offerings because it recommends a single approach through Hotwire. In contrast, Laravel offers multiple stacks, making it harder to find a full-stack developer. At least in our local community.
**Notifications and Mailer**
Laravel allows defining notification classes that can have multiple destinations: email, push notifications, database, webhooks, etc. Rails has mailers, but they are not designed out of the box with the same semantics as Laravel's notification system. I really wish Rails would make this section more unified by introducing web push notifications in Rails 8. The same lack of important features applies to background jobs and writing console commands.
**First party packages and Ecosystem**
At first sight, it seems Laravel has tons of first-party packages that do a lot of things, but I really think they create much more noise. The Rails community seems more focused on promoting building actual applications, while the Laravel community seems more focused on developer tooling (not developer experience), keeping developers busy figuring them out. That's why you see more end-to-end apps written in Rails on GitHub. Many communities have had their hypes, but relatively few have produced projects like GitLab, Postal, Mastodon, Redmine, and so on compared to other types of open-source projects.
That's the real power of an ecosystem. You have the option to see what real-world code looks like by exploring some of the best end-user-facing applications. I was around the Laravel community for more than a decade and never found a project that improved my knowledge as much as something like Postal or GitLab did. The Laravel community is warm and welcoming, but they are heavily focused on developer tooling.
I don't know if I've successfully transferred my point, but Rails and Django might be the only full-stack frameworks that give me the feeling that you can be cool and show off by building or teaching how to create end-to-end applications, not just by creating libraries.
The only true masterpiece in the Laravel ecosystem, for me, is Forge. It is truly excellent and frees your mind from many concerns. As for libraries, they already exist in the Rails community or are some fancy additions to my workflow. The real pains I experience are those mentioned above.
**Conclusion**
In the end, I'm really thankful to Laravel. It allowed me to make real money as a young developer who was just eager to build a good life through the internet. However, I have chosen to continue my path with Rails. I feel happier and more productive when I write in Rails compared to Laravel. While I can customize Laravel to my needs to a certain extent, as I mentioned earlier, there is great value in sticking to defaults. Alternatively, if you don't want to leverage the offerings of a battery-included framework or they require extensive customizations, you might consider using very verbose and explicit languages like Go. However, these customizations can make tasks like finding a new developer or performing upgrades more challenging.
This is in no way a recommendation for the job market. Job market dynamics vary from place to place and are not usually influenced by personal preferences or personal considerations like the ones I mentioned above or being a one person framework. Typically, the most hyped technology is the one that commands the highest salaries. From this perspective, TypeScript/JavaScript is currently more hyped than anything else on the planet.
| reshadman |
1,894,514 | my unconventional journey into tech | Where it started I’ve hopped from a diploma in Music and Audio Technology to a college... | 0 | 2024-06-20T16:29:30 | https://dev.to/ohchloeho/my-unconventional-journey-into-tech-b4o | beginners, codenewbie, community, discuss | ## Where it started
I’ve hopped from a diploma in Music and Audio Technology to a college degree in Business Management, then most recently landing a job in IT engineering. I’ve read about many self-taught individuals getting into the software industry over the past few years, and I started on this journey myself halfway through completing my business degree. It’s surreal to think how far I’ve come while still having the privilege to have a hand in the things I love, like music.
My past 5 years:
2019 - Got a diploma and worked as a freelance audio engineer
2020 - Got into Uni because Covid killed my freelance work
2021 - Took a coding elective and started learning about the programming world
2022 - Absorbed like a sponge from dev meetups, communities and learned about programming ecosystems
2023 - Landed an IT internship that led to a huge upgrade to full-time
However, I’m still incredibly new to the tech industry, especially having been in it professionally for less than a year. It’s sometimes overwhelming to think about the stacks and stacks of tools and techniques to learn, but the satisfaction of fixing the smallest bug and finally printing “hello world” to the console is so worth it. Maybe it’s me and my addiction to dopamine and doing things that make me feel good, or maybe tech itself just seems way too promising.
## Looking ahead
I’m still really far from the things I want to achieve. Breaking them down into an ideal-world and admittedly unrealistic 5-year plan, this is pretty much what it looks like:
2025 - Land a job in a startup doing either fullstack development / software engineering
2026 - Do such a good job that job upgrades are in order
2027 - Build a SaaS and get at least 500 customers
2028 - Scale the SaaS and build a team
2029 - Become a software rockstar or willfully unemployed
### My ingenious plan to make it work
What I’ve learnt in my short time in tech is that the ability to learn is more important than anything: the ability to absorb like a sponge and apply everything in a completely different situation, regardless of how difficult it may seem. Everything has a pattern and some form of consistency, and one iteration that improves a program even by a little bit is more valuable than building multiple projects. Quality over quantity.
I’ve learnt from my years in the music industry that quality comes from experience too, good habits, and a ton of muscle memory. So for the rest of this year I’m going to try and build these habits one day at a time!
With a webdev starting point in NodeJS, I’ve recently picked up Go and any advice on mastering the high-level language itself is welcome!
Let me know how your journey into tech was, and if it was as random and abrupt as mine! :) | ohchloeho |
1,895,050 | Reinforcement Learning: A Brief Overview | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-20T16:29:19 | https://dev.to/abhinav11234/reinforcement-learning-a-brief-overview-3b87 | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
In Reinforcement Learning (RL), we train models to make decisions based on actions, mistakes, and feedback, much like humans learn from experience. In RL, the model generates outputs on its own and stores feedback from incorrect outputs to improve future responses.
## Additional Context
When I learned about Reinforcement Learning in a Machine Learning course, I found it to be an interesting topic to explain. It is hard to explain briefly, but I still tried to do my best.
 | abhinav11234 |
1,895,066 | Precision Estimating: The Key to Efficient and Accurate Construction Projects | In the fast-paced and highly competitive construction world, delivering projects on time and within... | 0 | 2024-06-20T16:26:52 | https://dev.to/precisionestimatorllc/precision-estimating-the-key-to-efficient-and-accurate-construction-projects-32j3 | constructionestimating, materialtakeoffs, quantitysurveyor, precisionestimator | In the fast-paced and highly competitive construction world, delivering projects on time and within budget is critical. At Precision Estimator LLC, we understand that the foundation of any successful construction project lies in accurate and detailed cost estimation. This is where precision estimating comes into play, a crucial service ensuring projects are feasible and financially sound from inception to completion.
Understanding Precision Estimating:
[Precision estimating](https://precisionestimator.com/) is the meticulous process of forecasting the costs associated with a construction project. It involves thoroughly analyzing all elements, from materials and labor to equipment and overheads, ensuring that every aspect of the project is accounted for. This detailed approach minimizes the risk of budget overruns and project delays, providing stakeholders with a clear financial roadmap.
The Role of Material Takeoff in Precision Estimating:
A critical component of precision estimating is the material takeoff process. Material takeoff is the step-by-step breakdown of the materials required for a construction project, including quantities and specifications. At Precision Estimator LLC, our material takeoff services are designed to provide highly accurate and detailed lists of materials, ensuring that nothing is overlooked.
Benefits of Precision Estimating:
1. Enhanced Accuracy: Precision estimating leverages advanced techniques and technologies to deliver highly accurate cost estimates. This accuracy is paramount in preventing unexpected expenses and ensuring the project stays within the planned budget.
2. Improved Planning and Scheduling: Project managers can develop more realistic timelines and schedules with precise estimates. Knowing the exact quantities of materials and labor needed helps coordinate various project phases efficiently.
3. Risk Mitigation: By identifying potential cost overruns and resource shortages early in the planning phase, precision estimating allows for proactive risk management. This foresight helps in avoiding costly project delays and disruptions.
4. Informed Decision-Making: Detailed cost estimates give stakeholders the information they need to make informed decisions. Whether choosing between different materials or evaluating the financial feasibility of design changes, precision estimating ensures that decisions are based on solid data.
The Precision Estimator LLC Advantage:
At Precision Estimator LLC, we pride ourselves on delivering top-notch material takeoff services that set the foundation for accurate and reliable cost estimates. Here’s how we stand out:
1. Expertise and Experience: Our team comprises seasoned professionals with extensive experience in the construction industry. This expertise enables us to understand the intricacies of various projects and provide estimates that reflect real-world conditions.
2. Cutting-Edge Technology: We utilize the latest software and technologies to perform material takeoffs and cost estimations. This enhances the accuracy of our estimates and allows for quicker turnaround times, helping clients meet tight deadlines.
3. Comprehensive Reports: Our material takeoff reports are detailed and comprehensive, covering every aspect of the project. Our reports leave no stone unturned, from the quantity of each material to the specifications and pricing.
4. Customized Solutions: We understand that every construction project is unique. That’s why we offer customized estimating solutions tailored to meet each client's specific needs and requirements. Whether it’s a residential, commercial, or [industrial project](https://www.infoplease.com/search/construction+industry+projects), we provide precise and reliable estimates.
The Process of Precision Estimating:
Our precision estimating process is thorough and systematic, ensuring that every detail is captured accurately:
1. Project Review and Scope Definition: We thoroughly review the project plans and specifications. Understanding the project's scope is crucial in identifying all the elements that must be included in the estimate.
2. Material Takeoff: This involves a detailed breakdown of all the materials required for the project. We measure and quantify each material using advanced software, ensuring accuracy and completeness.
3. Cost Estimation: We estimate the costs associated with each material based on the material takeoff. We also factor in labor, equipment, and other overheads to provide a comprehensive cost estimate.
4. Review and Validation: Before finalizing the estimate, we conduct a thorough review to ensure that all details are accurate. This includes cross-checking quantities, verifying prices, and validating assumptions.
5. Report Generation and Delivery: Once the estimate is finalized, we generate a detailed report and deliver it to the client. Our reports are easy to understand and provide a clear breakdown of all costs, helping clients make informed decisions.
Why Precision Estimating Matters:
In an industry where margins are thin, and competition is fierce, precision estimating is not just a service—it’s a necessity. Accurate cost estimates can differentiate between a profitable project and a financial disaster. At Precision Estimator LLC, we are committed to providing our clients with the precision and reliability they need to succeed.
By partnering with us, clients can rest assured that their projects will be estimated with the highest level of accuracy and professionalism. Our dedication to precision estimating ensures that every project we handle is set up for success from the beginning.
Conclusion:
Precision estimating is an essential component of the construction process, providing the foundation for successful project management and execution. At Precision Estimator LLC, our material takeoff services are designed to deliver the accuracy and detail needed to ensure that every project is completed on time and within budget. With our expertise, [cutting-edge](https://dev.to/) technology, and commitment to excellence, we help our clients achieve their construction goals with confidence. Choose Precision Estimator LLC for all your estimating needs and experience the difference that precision makes.
| precisionestimatorllc |
1,895,064 | The Font Class | A Font describes font name, weight, and size. You can set fonts for rendering the text. The... | 0 | 2024-06-20T16:24:30 | https://dev.to/paulike/the-font-class-318k | java, programming, learning, beginners | A **Font** describes font name, weight, and size. You can set fonts for rendering the text. The **javafx.scene.text.Font** class is used to create fonts, as shown in Figure below.

A **Font** instance can be constructed using its constructors or using its static methods. A **Font** is defined by its name, weight, posture, and size. Times, Courier, and Arial are examples of font names. You can obtain a list of available font family names by invoking the static **getFamilies()** method. **List** is an interface that defines common methods for a list. **ArrayList** is a concrete implementation of **List**. The font postures are two constants: **FontPosture.ITALIC** and **FontPosture.REGULAR**. For example, the following statements create two fonts.
```
Font font1 = new Font("SansSerif", 16);
Font font2 = Font.font("Times New Roman", FontWeight.BOLD, FontPosture.ITALIC, 12);
```
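If you are curious which family names are available on your machine, a small sketch like this can print them. This is an illustrative example rather than part of the demo program below, and depending on your JavaFX version you may need to run it from within a launched Application so the toolkit is initialized:
```
import javafx.scene.text.Font;

public class ListFontFamilies {
  public static void main(String[] args) {
    // Print every font family name known to JavaFX on this system.
    for (String family : Font.getFamilies()) {
      System.out.println(family);
    }
  }
}
```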
The program below displays a label using the font (Times New Roman, bold, italic, and size 20).
```
package application;

import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.layout.*;
import javafx.scene.paint.Color;
import javafx.scene.shape.Circle;
import javafx.scene.text.*;
import javafx.scene.control.*;
import javafx.stage.Stage;
public class FontDemo extends Application {
  @Override // Override the start method in the Application class
  public void start(Stage primaryStage) {
    // Create a pane to hold the circle
    Pane pane = new StackPane();

    // Create a circle and set its properties
    Circle circle = new Circle();
    circle.setRadius(50);
    circle.setStroke(Color.BLACK);
    circle.setFill(new Color(0.5, 0.5, 0.5, 0.1));
    pane.getChildren().add(circle); // Add circle to the pane

    // Create a label and set its properties
    Label label = new Label("JavaFX");
    label.setFont(Font.font("Times New Roman", FontWeight.BOLD, FontPosture.ITALIC, 20));
    pane.getChildren().add(label);
    // pane.getChildren().addAll(circle, label);

    // Create a scene and place it in the stage
    Scene scene = new Scene(pane);
    primaryStage.setTitle("FontDemo"); // Set the stage title
    primaryStage.setScene(scene); // Place the scene in the stage
    primaryStage.show(); // Display the stage
  }

  public static void main(String[] args) {
    Application.launch(args);
  }
}
```

The program creates a **StackPane** (line 15) and adds a circle and a label to it (lines 22, 27). These two statements can be combined using the following one statement:
`pane.getChildren().addAll(circle, label);`
A **StackPane** places the nodes in the center and nodes are placed on top of each other. A custom color is created and set as a fill color for the circle (line 21). The program creates a label and sets a font (line 26) so the text in the label is displayed in Times New Roman, bold, italic, and 20 pixels.
As you resize the window, the circle and label are displayed in the center of the window, because the circle and label are placed in the stack pane. Stack pane automatically places nodes in the center of the pane.
A **Font** object is immutable. Once a **Font** object is created, its properties cannot be changed. | paulike |
1,895,063 | AI Enhanced Updates | Arcade | About Arcade Arcade delivers AI-enhanced updates right on your Discord server, with all... | 0 | 2024-06-20T16:20:44 | https://dev.to/flameface/ai-enhanced-updates-arcade-4l5o | ai, news, google | 
## [About Arcade](https://arcade.unburn.tech/)
Arcade delivers AI-enhanced updates right on your Discord server, covering gaming, entertainment, technology, and more.
---

## UI Friendly
We have built a very **friendly dashboard** that makes it easy to subscribe to games, read updates, patch notes, etc. all in one place.
---

## Favorite Channels
You will find all your **favorite** gaming, entertainment, or tech channels in one place, or you can request features or channels on our [Discord server](https://discord.gg/AGVM3d2q9S).
---

## AI Enhanced
Get properly **summarized** updates using our AI-trained model by Google's Gemini, with the source listed from which we fetch the update.
---
Invite **[Arcade](https://arcade.unburn.tech/)** to your Discord server. | flameface |
1,895,056 | The Mysterious Case of Negative Margins: Uncovering the Truth Behind Overlapping Elements | As frontend developers, we've all been there - staring at our code, scratching our heads, and... | 0 | 2024-06-20T16:16:54 | https://dev.to/waelhabbal/the-mysterious-case-of-negative-margins-uncovering-the-truth-behind-overlapping-elements-4d13 | html, browserbehavior, csslayout, frontend | As frontend developers, we've all been there - staring at our code, scratching our heads, and wondering why our elements aren't behaving as expected. Today, I want to share a fascinating case study that highlights the importance of understanding browser behavior and the intricacies of CSS layout.
**The Problem**
Consider the following code snippet:
```html
<div class="box">
some content
</div>
<div class="bottom">
other content
</div>
```
With a simple background color on both divs and a negative margin-top on the second div, you might expect the second div to overlap the first one completely. But, surprisingly, it doesn't. Instead, the second div seems to "slide" between the content and the background of the first div.
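For reference, here is a minimal sketch of the styles implied above; the color values and the margin size are assumptions, not taken from the original code:
```css
.box {
  background: lightblue;
}

.bottom {
  background: lightsalmon;
  margin-top: -40px; /* pulls .bottom up so the two divs intersect */
}
```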
**My Initial Thoughts**
At first, I thought this was due to the text content taking precedence over the background styling. After all, isn't text more important than visual styling? Perhaps when we have overlapping elements, the browser decides to place all the text at the top and all other styling at the bottom?
**The Aha Moment**
But then I stumbled upon a crucial insight - there is no actual overlapping occurring here. Instead, we're dealing with **intersections**, where two elements are painted on the same layer. This is a critical distinction!
To understand why, let's take a look at the MDN documentation on z-index and layers:
| **Layer** | **Description** |
| --- | --- |
| Bottom layer | Farthest from the observer |
| Layer -X | Layers with negative z-index values |
| Layer 0 | Default rendering layer |
| Layer X | Layers with positive z-index values |
| Top layer | Closest to the observer |
Notice that both elements are on the same layer (Layer 0), which means they're not creating a new stacking context. This is key!
**The Browser's Perspective**
So, why do we see this strange behavior? The browser is simply painting all backgrounds first because it knows they're on the same layer. It's drawing them in order of their z-index values (which are both 0). Then, it's drawing text based on this information.
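You can test this explanation with a one-line experiment (a suggestion, not from the original setup): positioning the second div moves it into a later paint step than the inline text of non-positioned blocks, so its background and text now cover the first div completely.
```css
.bottom {
  /* Positioned elements (even with z-index: auto) paint after the inline
     text of non-positioned blocks, so .bottom now fully covers .box. */
  position: relative;
}
```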
**The Takeaway**
In this post, we've uncovered the magic behind negative margins and intersecting elements. By understanding browser behavior and the intricacies of CSS layout, we can better anticipate and troubleshoot unexpected results.
As developers, it's essential to dig deep into these details to ensure our code behaves as expected. I hope this post has helped you gain a deeper understanding of browser behavior and will inspire you to continue exploring the fascinating world of frontend development! | waelhabbal |
1,894,004 | Building a Date Range Picker with React and Day.js. | Welcome to the third and final part of this tutorial series on creating a custom calendar using React... | 0 | 2024-06-20T16:16:26 | https://dev.to/oluwadahunsia/building-a-date-range-picker-with-react-and-dayjs-2b5a | webdev, tutorial, react, javascript | Welcome to the third and final part of this tutorial series on creating a custom calendar using React and Day.js. In this session, we will build upon the date picker we developed in part two. Our goal is to enhance it further by enabling users to select multiple dates and highlighting the selected range — essentially creating a range picker feature.

If you have not checked the first two parts, I encourage you to do so.
The first part:
{% embed https://dev.to/oluwadahunsia/building-a-custom-calendar-with-react-and-dayjs-a-step-by-step-guide-2h1d %}
The second part:
{% embed https://dev.to/oluwadahunsia/building-a-simple-date-picker-with-react-and-dayjs-4oop %}
If you do wish to start from here, I'm providing all the necessary files you need to catch up.
## Starter files.
```css
/* style.css */
.calendar__container {
  display: flex;
  flex-direction: column;
  align-items: center;
  padding: 25px;
  width: max-content;
  background: #ffffff;
  box-shadow: 5px 10px 10px #dedfe2;
}

.month-year__layout {
  display: flex;
  margin: 0 auto;
  width: 100%;
  flex-direction: row;
  align-items: center;
  justify-content: space-around;
}

.year__layout,
.month__layout {
  width: 150px;
  display: flex;
  padding: 10px;
  font-weight: 600;
  align-items: center;
  text-transform: capitalize;
  justify-content: space-between;
}

.back__arrow,
.forward__arrow {
  cursor: pointer;
  background: transparent;
  border: none;
}

.back__arrow:hover,
.forward__arrow:hover {
  scale: 1.1;
  transition: scale 0.3s;
}

.days {
  display: grid;
  grid-gap: 0;
  width: 100%;
  grid-template-columns: repeat(7, 1fr);
}

.day {
  flex: 1;
  font-size: 16px;
  padding: 5px 7px;
  text-align: center;
}

.calendar__content {
  position: relative;
  background-color: transparent;
}

.calendar__items-list {
  text-align: center;
  width: 100%;
  height: max-content;
  overflow: hidden;
  display: grid;
  grid-gap: 0;
  list-style-type: none;
  grid-template-columns: repeat(7, 1fr);
}

.calendar__items-list:focus {
  outline: none;
}

.calendar__day {
  position: relative;
  display: flex;
  justify-content: center;
  align-items: center;
}

.calendar__item {
  position: relative;
  width: 50px;
  height: 50px;
  cursor: pointer;
  background: transparent;
  border-collapse: collapse;
  background-color: white;
  display: flex;
  justify-content: center;
  align-items: center;
  text-align: center;
  border: 1px solid transparent;
  z-index: 200;
}

button {
  margin: 0;
  display: inline;
  box-sizing: border-box;
}

.calendar__item:focus {
  outline: none;
}

.calendar__item.selected {
  font-weight: 700;
  border-radius: 50%;
  background: #1a73e8;
  color: white;
  outline: none;
  border: none;
}

.calendar__item.selectDay {
  position: relative;
  background: #1a73e8;
  color: white;
  border-radius: 50%;
  border: none;
  z-index: 200;
}

.calendar__item.gray,
.calendar__item.gray:hover {
  color: #c4cee5;
  display: flex;
  justify-content: center;
  align-items: center;
}

.input__container {
  display: flex;
  justify-content: space-around;
}

.input {
  height: 30px;
  border-radius: 8px;
  text-align: center;
  align-self: center;
  border: 1px solid #1a73e8;
}

.shadow {
  position: absolute;
  display: inline-block;
  z-index: 10;
  top: 0;
  background-color: #f4f6fa;
  height: 50px;
  width: 50px;
}

.shadow.right {
  left: 50%;
}

.shadow.left {
  right: 50%;
}
```
The calendar component
```typescript
//Calendar.tsx
import dayjs, { Dayjs } from 'dayjs';
import customParseFormat from 'dayjs/plugin/customParseFormat'; // add line
import backArrow from '../assets/images/back.svg';
import forwardArrow from '../assets/images/forward.svg';
import { useState } from 'react';
import { calendarObjectGenerator } from '../helper/calendarObjectGenerator';
import './style.css';
dayjs.extend(customParseFormat); // add line
const weekDays = ['Sun', 'Mon', 'Tue', 'Wed', 'Thur', 'Fri', 'Sat'];
export const Calendar = () => {
const [currentDate, setCurrentDate] = useState<Dayjs>(dayjs(Date.now()));
const [inputValue, setInputValue] = useState<string>('');
const daysListGenerator = calendarObjectGenerator(currentDate);
const dateArrowHandler = (date: Dayjs) => {
setCurrentDate(date);
};
const handleInputChange = (event: React.ChangeEvent<HTMLInputElement>) => {
const date = event.target.value;
setInputValue(date);
// check if the entered date is valid.
const isValidDate = dayjs(date, 'DD.MM.YYYY').isValid();
if (!isValidDate || date.length < 10) return;
//if you pass date without specifying the format ('DD.MM.YYYY'),
// you might get an error when you decide to edit a selected date in the input field.
setCurrentDate(dayjs(date, 'DD.MM.YYYY'));
};
const handlePreviousMonthClick = (day: number) => {
const dayInPreviousMonth = currentDate.subtract(1, 'month').date(day);
setCurrentDate(dayInPreviousMonth);
setInputValue(dayInPreviousMonth.format('DD.MM.YYYY')); // add line
};
const handleCurrentMonthClick = (day: number) => {
const dayInCurrentMonth = currentDate.date(day);
setCurrentDate(dayInCurrentMonth);
setInputValue(dayInCurrentMonth.format('DD.MM.YYYY')); // add line
};
const handleNextMonthClick = (day: number) => {
const dayInNextMonth = currentDate.add(1, 'month').date(day);
setCurrentDate(dayInNextMonth);
setInputValue(dayInNextMonth.format('DD.MM.YYYY')); // add line
};
return (
<div className='calendar__container'>
<input
className='input'
value={inputValue}
onChange={handleInputChange}
/>
<div className='control__layer'>
<div className='month-year__layout'>
<div className='year__layout'>
<button
className='back__arrow'
onClick={() => dateArrowHandler(currentDate.subtract(1, 'year'))}
>
<img src={backArrow} alt='back arrow' />
</button>
<div className='title'>{currentDate.year()}</div>
<button
className='forward__arrow'
onClick={() => dateArrowHandler(currentDate.add(1, 'year'))}
>
<img src={forwardArrow} alt='forward arrow' />
</button>
</div>
<div className='month__layout'>
<button
className='back__arrow'
onClick={() => dateArrowHandler(currentDate.subtract(1, 'month'))}
>
<img src={backArrow} alt='back arrow' />
</button>
<div className='new-title'>
{daysListGenerator.months[currentDate.month()]}
</div>
<button
className='forward__arrow'
onClick={() => dateArrowHandler(currentDate.add(1, 'month'))}
>
<img src={forwardArrow} alt='forward arrow' />
</button>
</div>
</div>
<div className='days'>
{weekDays.map((el, index) => (
<div key={`${el}-${index}`} className='day'>
{el}
</div>
))}
</div>
<div className='calendar__content'>
<div className={'calendar__items-list'}>
{daysListGenerator.prevMonthDays.map((el, index) => {
return (
<div
key={`${el}/${index}`}
className='calendar__day'
onClick={() => handlePreviousMonthClick(el)}
>
<button
className='calendar__item gray'
>
{el}
</button>
</div>
);
})}
{daysListGenerator.days.map((el, index) => {
return (
<div
key={`${index}-/-${el}`}
className='calendar__day'
onClick={() => handleCurrentMonthClick(el)}
>
<button
className={`calendar__item
${+el === +daysListGenerator.day ? 'selected' : ''}`}
>
<div className='day__layout'>
<div className='text'>{el.toString()}</div>
</div>
</button>
</div>
);
})}
{daysListGenerator.remainingDays.map((el, idx) => {
return (
<div
key={`${idx}----${el}`}
className='calendar__day'
onClick={() => handleNextMonthClick(el)}
>
<button
className='calendar__item gray'
>
{el}
</button>
</div>
);
})}
</div>
</div>
</div>
</div>
);
};
```
```typescript
//back.svg
<?xml version="1.0" encoding="utf-8"?>
<svg width="20px" height="20px" viewBox="0 0 1000 1000" class="icon" version="1.1" xmlns="http://www.w3.org/2000/svg"><path d="M768 903.232l-50.432 56.768L256 512l461.568-448 50.432 56.768L364.928 512z" fill="#000000" /></svg>
```
```typescript
//forward.svg
<?xml version="1.0" encoding="utf-8"?>
<svg width="20px" height="20px" viewBox="0 0 1000 1000" class="icon" version="1.1" xmlns="http://www.w3.org/2000/svg"><path d="M256 120.768L306.432 64 768 512l-461.568 448L256 903.232 659.072 512z" fill="#000000" /></svg>
```
The helper function for generating necessary data.
```typescript
//calendarObjectGenerator.tsx
import dayjs, { Dayjs } from 'dayjs';
import LocaleData from 'dayjs/plugin/localeData';
dayjs.extend(LocaleData);
type GeneratedObjectType = {
prevMonthDays: number[];
days: number[];
remainingDays: number[];
day: number;
months: string[];
};
export const calendarObjectGenerator = (
currentDate: Dayjs
): GeneratedObjectType => {
const numOfDaysInPrevMonth = currentDate.subtract(1, 'month').daysInMonth();
const firstDayOfCurrentMonth = currentDate.startOf('month').day();
return {
days: Array.from(
{ length: currentDate.daysInMonth() },
(_, index) => index + 1
),
day: Number(currentDate.format('DD')),
months: currentDate.localeData().months(),
prevMonthDays: Array.from(
{ length: firstDayOfCurrentMonth },
(_, index) => numOfDaysInPrevMonth - index
).reverse(),
remainingDays: Array.from(
{ length: 6 - currentDate.endOf('month').day() },
(_, index) => index + 1
),
};
};
```
```typescript
//App.tsx
import { Calendar } from './Calendar/Calendar';
function App() {
return (
<>
<Calendar />
</>
);
}
export default App;
```
Ensure that at this point, your app works as expected.

Most of what we will be doing will be in our Calendar component, so the other files will stay the same.
## Adding a Second Input field.
Let us add another input for the second day. We can also use a masked input instead of having two different inputs.
Now we are going to be changing things up a bit in the Calendar component.
Our **`inputValue`** state will take a new shape; the functions that use **`setInputValue`**, namely **`handlePreviousMonthClick`**, **`handleCurrentMonthClick`**, and **`handleNextMonthClick`**, will change, and we will make some changes to our **input elements** as well:
Make the necessary changes in the Calendar.tsx file as shown below.
```typescript
//Calendar.tsx
//changes in state
const [inputValue, setInputValue] = useState({
firstInput: '',
secondInput: '',
});
// destructure inputValue
const {firstInput, secondInput} = inputValue;
//changes in function
const handlePreviousMonthClick = (day: number) => {
const dayInPreviousMonth = currentDate.subtract(1, 'month').date(day);
setCurrentDate(dayInPreviousMonth);
// setInputValue(dayInPreviousMonth.format('DD.MM.YYYY')); remove this line
};
const handleCurrentMonthClick = (day: number) => {
const dayInCurrentMonth = currentDate.date(day);
setCurrentDate(dayInCurrentMonth);
// setInputValue(dayInCurrentMonth.format('DD.MM.YYYY')); remove this line
};
const handleNextMonthClick = (day: number) => {
const dayInNextMonth = currentDate.add(1, 'month').date(day);
setCurrentDate(dayInNextMonth);
// setInputValue(dayInNextMonth.format('DD.MM.YYYY')); remove this line
};
// changes in input fields
<div className='input__container'> //add this container
<input
name='firstInput' //add this line
className='input'
value={firstInput} //add this line
onChange={handleInputChange}
/>
<input //add the second input
name='secondInput'
className='input'
value={secondInput}
onChange={handleInputChange}
/>
</div>
```
## The RangePicker function.
Next, we want to allow users to select two dates and visually mark the selected dates as well as the days in between them. Let's focus on implementing this feature now.
We are about to create a slightly complex function in the `Calendar.tsx` file, so bear with me as I explain each line in detail. The primary goal of this function is to allow us to select our `firstInput` (or first day) and `secondInput` (or second day). We'll name this function `rangePicker`.
```typescript
// Calendar.tsx
...
//check explanation
const rangePicker = (day: Dayjs) => {
const isTheSameYear =
currentDate.year() === dayjs(firstInput, 'DD.MM.YYYY').get('year');
const isFirstInputAfterSecondInput = dayjs(
firstInput,
'DD.MM.YYYY'
).isAfter(day);
const isCurrentMonthLessThanFirstInputMonth =
currentDate.month() < dayjs(firstInput, 'DD.MM.YYYY').get('month');
const isCurrentYearLessThanFirstInputYear =
currentDate.year() < dayjs(firstInput, 'DD.MM.YYYY').get('year');
//we do not want to be able to select the same day
const isTheSameDay =
dayjs(firstInput, 'DD.MM.YYYY').format('DD.MM.YYYY') ===
day.format('DD.MM.YYYY');
if (!firstInput && !secondInput) {
// if there is no firstInput and no secondInput,
// then the first clicked value should be the firstInput.
setInputValue({
...inputValue,
firstInput: day.format('DD.MM.YYYY'),
});
} else if (firstInput && !secondInput) {
//we do not want to be able to select the same day
if (isTheSameDay) return;
// if there is a firstInput value, and no secondInput,
// check to see if the newly selected date is not before the firstInput date
// if the newly selected date is earlier than the firstInput date then
// swap the dates
// if not, set the secondInput to the selected date.
if (
isFirstInputAfterSecondInput ||
(isTheSameYear && isCurrentMonthLessThanFirstInputMonth) ||
isCurrentYearLessThanFirstInputYear
) {
setInputValue({
...inputValue,
secondInput: firstInput,
firstInput: day.format('DD.MM.YYYY'),
});
return;
}
setInputValue({
...inputValue,
secondInput: day.format('DD.MM.YYYY'),
});
} else if (firstInput && secondInput) {
//if the user clicks again when there are both inputs,
// clear the secondInput and set the firstInput to the selected value.
setInputValue({
firstInput: day.format('DD.MM.YYYY'),
secondInput: '',
});
}
};
...
```
Do not worry if your calendar does not look exactly like the images below yet; we are going to get there.
The `rangePicker` function checks if there is a `firstInput` and `secondInput`.
- If there is no `firstInput` and no `secondInput`, then we can assume that the first clicked value should be the first input. Therefore, we set the first clicked value to `firstInput`.
- If there is a `firstInput` value and no `secondInput` value then we can assume that the user wants to select a `secondInput` value. The selected value is then set to `secondInput`. But wait, there is a caveat.
- We do not want the second day to be the same as the first day, hence the check for **isTheSameDay**. As you can see, we are not allowed to select the same day.

- We also need to check if the second selected date is after (or greater than) the first date. If the `firstInput` value is after (or greater than) the second selected date, then we can just swap the dates, as shown below.
You can see that when 30th June was selected as the `firstInput` and 3rd June as the second input, the dates swapped automatically.

- Lastly if there are both `firstInput` and `secondInput`, then when the user clicks on another day we just need to clear the `secondInput` and set the `firstInput` to the clicked date.
For simplicity, we will only be using this function in the current month. We can always customize it to meet our specifications.
So call the range picker function inside the `handleCurrentMonthClick` function like so:
```typescript
const handleCurrentMonthClick = (day: number) => {
const dayInCurrentMonth = currentDate.date(day);
setCurrentDate(dayInCurrentMonth);
rangePicker(dayInCurrentMonth); // add line
};
```
Now we should be able to select two dates, one for the `firstInput` and the other for the `secondInput`.
That is not all, though; we need to be able to highlight both of the selected days and the days in between them to get something like this:

A quick reminder: if you ever get lost, do not worry; I will provide the complete files at the end.
## Highlighting Days of the Current Month
Let’s start by highlighting the selected days. Make the necessary adjustments in your `Calendar.tsx` file.
```typescript
//Calendar.tsx
//add function
const highlightDays = (el: number) => {
if (!secondInput) return;
return (
currentDate?.set('date', el).isAfter(dayjs(firstInput, 'DD.MM.YYYY')) &&
currentDate?.set('date', el).isBefore(dayjs(secondInput, 'DD.MM.YYYY'))
);
};
...
return (
...
{daysListGenerator.days.map((el, index) => {
//add lines
const formattedCurrentMonthDates = currentDate
.set('date', el)
?.format('DD.MM.YYYY');
const isDayEqualFirstInput =
firstInput === formattedCurrentMonthDates;
const isDayEqualSecondInput =
secondInput === formattedCurrentMonthDates;
const applyGrayBackground = !(
isDayEqualFirstInput || isDayEqualSecondInput
);
return (
<div
key={`${index}-/-${el}`}
className='calendar__day'
onClick={() => handleCurrentMonthClick(el)}
>
<button
className={`calendar__item //add lines
${
+el === +daysListGenerator.day &&
!(firstInput || secondInput)
? 'selected'
: isDayEqualFirstInput ||
isDayEqualSecondInput
? 'selectDay'
: ''
}`}
style={{ //add lines
backgroundColor: `${
applyGrayBackground && highlightDays(el)
? '#F4F6FA'
: ''
}`,
}}
...
```
While mapping through the days array, we will create some variables to capture a few important values.
**`formattedCurrentMonthDates`** converts the numbers in the array to dates in the `'DD.MM.YYYY'` format.
**`isDayEqualFirstInput`** and **`isDayEqualSecondInput`** check whether a day equals the `firstInput` or the `secondInput` value, since we need to identify these days and style them accordingly. If the day is equal to either `firstInput` or `secondInput`, we can then add the **`selectDay`** class for styling.
**`applyGrayBackground`** checks that the value equals neither the `firstInput` nor the `secondInput`. We apply a gray background to the days that are neither `firstInput` nor `secondInput` but fall in between them.
As you might have noticed, the **`highlightDays`** function takes each of the integers we are mapping through, converts them to dates, and checks whether the date is between the `firstInput` and the `secondInput`.
## Highlighting Days of the Previous months.
The next step is to ensure that the days of the previous months are also properly highlighted when they fall between the selected days. Something like this:

To do this, let’s add some lines to the `prevMonthDays` map function.
```typescript
//Calendar.tsx
...
//add this Day.js plugin
import isBetween from 'dayjs/plugin/isBetween';
...
// add line
dayjs.extend(isBetween);
...
return (
...
//add lines
{daysListGenerator.prevMonthDays.map((el, index) => {
const formatPrevMonthsDates = currentDate
.subtract(1, 'month')
.set('date', el)
?.format('DD.MM.YYYY');
//add lines
const isBetween = currentDate
.subtract(1, 'month')
.set('date', el)
.isBetween(
dayjs(firstInput, 'DD.MM.YYYY'),
dayjs(secondInput, 'DD.MM.YYYY')
);
//add line
const isFirstDay = firstInput === formatPrevMonthsDates;
return (
<div
className='calendar__day'
key={`${el}/${index}`}
onClick={() => handlePreviousMonthClick(el)}
>
<button
className='calendar__item gray'
style={{ //add lines
backgroundColor: `${
isBetween || (isFirstDay && secondInput)
? '#F4F6FA'
: ''
}`,
}}
>
{el}
</button>
</div>
);
})}
...
```
Again, we will convert the numbers in the **`prevMonthDays`** array into dates in the previous month. This is why we are calling the **`subtract(1, 'month')`** method on the `currentDate`.
The next thing is to check if the date is between the `firstInput` and the `secondInput`. To do this, we need to include another Day.js plugin, **`isBetween`**.
Lastly, we use **`isFirstDay`** to check whether the day of the previous month equals the selected `firstInput`; we need this for styling as well.
If you have done everything, you should be able to see the `prevMonthDays` highlighted, provided they fall between the `firstInput` and the `secondInput`.
## Highlighting The Remaining Days (days of the next month).
Lastly, we should do the same thing for the `remainingDays`. We will highlight them when they fall between the `firstInput` and the `secondInput`. See, they are currently not highlighted, so let's fix that.

We are going to create another fairly involved function to handle the possible cases, with the goal of achieving this:

Let us call the function `remainingDaysIsBetween`. Create this function inside your `Calendar.tsx` file.
```typescript
...
const remainingDaysIsBetween = () => {
const firstDay = dayjs(firstInput, 'DD.MM.YYYY');
const secondDay = dayjs(secondInput, 'DD.MM.YYYY');
const firstYear = firstDay?.year();
const secondYear = secondDay?.year();
if (
firstYear === secondYear &&
currentDate.year() === firstYear
) {
return (
firstDay &&
firstDay?.month() <= currentDate.month() &&
secondDay &&
secondDay?.month() > currentDate.month()
);
} else if (
secondYear &&
firstYear &&
secondYear > firstYear &&
currentDate.year() <= secondYear
) {
if (
currentDate.year() === firstYear &&
currentDate.month() < firstDay?.month()
) {
return;
}
if (currentDate.year() < secondYear) {
return (
(firstDay &&
firstDay?.year() === currentDate.year() &&
firstDay?.month() >= currentDate.month()) ||
(secondDay &&
currentDate.year() <= secondDay?.year() &&
currentDate.year() >= firstDay?.year())
);
} else {
return (
(firstDay &&
firstDay?.year() === currentDate.year() &&
firstDay?.month() <= currentDate.month()) ||
(secondDay &&
currentDate.year() <= secondDay?.year() &&
secondDay?.month() > currentDate.month())
);
}
}
};
...
```
Now there is a lot going on here because I was trying to catch the obvious edge cases. You can play around with it and optimize the function.
The function `remainingDaysIsBetween` checks whether the current date falls within a specific range defined by two input dates (`firstInput` and `secondInput`). These dates are expected to be in the format `DD.MM.YYYY`. Let's break down the conditions step by step:
```typescript
const firstDay = dayjs(firstInput, 'DD.MM.YYYY');
const secondDay = dayjs(secondInput, 'DD.MM.YYYY');
const firstYear = firstDay?.year();
const secondYear = secondDay?.year();
```
`firstInput` and `secondInput` are parsed into Day.js objects (`firstDay` and `secondDay`).
We then extract the years of the parsed objects into `firstYear` and `secondYear`.
It is very important to remember that `currentDate` is a Day.js object representing the last selected date, and its value changes as you move back and forth between months and years with the control arrows. Let's look further into the conditions in the `remainingDaysIsBetween` function.
## Condition 1
We check that both the `firstInput` and `secondInput` dates are in the same year and that the current year is the same as the first year:
```typescript
if (firstYear === secondYear && currentDate.year() === firstYear) {
return (
firstDay &&
firstDay?.month() <= currentDate.month() &&
secondDay &&
secondDay?.month() > currentDate.month()
);
}
```
If the condition is true, then we ensure that we are only highlighting the remaining days for that particular month where the `firstInput`’s month is less than or equal to the `currentDate`’s month and the `secondInput`’s month is greater than the `currentDate`’s month.
For instance, suppose the `firstInput` is `18.06.2024` and the `secondInput` is `16.07.2024`, and we move between months with our month control arrows. While we are viewing June,
**`firstDay?.month() <= currentDate.month()`** returns true because `firstDay`'s month is June and **`currentDate.month()`** is also June (remember that `currentDate` changes as you move around). The second check, **`secondDay?.month() > currentDate.month()`**, also returns true because **`secondDay.month()`** is July while **`currentDate.month()`** is still June. Because both inequalities return true while we are viewing June, the remaining days in June are highlighted in gray.
However, when we move forward to July, the second check **`secondDay?.month() > currentDate.month()`** returns false, since **`secondDay?.month()`** is July and **`currentDate.month()`** is now also July. Hence, the remaining days in July are not highlighted.


## Condition 2
We check for when the two dates (`firstInput` and `secondInput`) are in different years, with the `secondInput` year greater than the `firstInput` year, and the current year less than or equal to the `secondInput` year.
```typescript
else if (secondYear && firstYear && secondYear > firstYear && currentDate.year() <= secondYear) {
// Sub-condition 2.1: Current Year Matches First Year but Before First Month
if (currentDate.year() === firstYear && currentDate.month() < firstDay?.month()) {
return;
}
// Sub-condition 2.2: Current Year is Before Second Year
if (currentDate.year() < secondYear) {
return (
(firstDay &&
firstDay?.year() === currentDate.year() &&
firstDay?.month() >= currentDate.month()) ||
(secondDay &&
currentDate.year() <= secondDay?.year() &&
currentDate.year() >= firstDay?.year())
);
} else {
// Sub-condition 2.3: Current Year Matches Second Year
return (
(firstDay &&
firstDay?.year() === currentDate.year() &&
firstDay?.month() <= currentDate.month()) ||
(secondDay &&
currentDate.year() <= secondDay?.year() &&
secondDay?.month() > currentDate.month())
);
}
}
```
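The branches above are easier to follow with a concrete cross-year example (the dates below are illustrative assumptions, not part of the tutorial code):

```typescript
import dayjs from 'dayjs';
import customParseFormat from 'dayjs/plugin/customParseFormat';

dayjs.extend(customParseFormat);

const firstDay = dayjs('18.12.2024', 'DD.MM.YYYY');  // firstYear = 2024
const secondDay = dayjs('16.01.2025', 'DD.MM.YYYY'); // secondYear = 2025

// Viewing December 2024: currentDate.year() < secondYear, so sub-condition 2.2 runs.
// firstDay.year() === 2024 and firstDay.month() >= 11 both hold,
// so December's remaining days are highlighted.

// Viewing January 2025: currentDate.year() === secondYear, so sub-condition 2.3 runs.
// firstDay.year() !== 2025, and secondDay.month() > currentDate.month() is 0 > 0 (false),
// so January's remaining days are not highlighted, which matches a range ending on the 16th.
```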
## Conclusion
As with the first condition, we can play around with the other conditions to meet our specifications. If you have done everything as explained in this article, you should now have a fully functional calendar with a range picker. There are still multiple ways to improve on this code.
If you have followed along to this point, here is the complete `Calendar.tsx` file with all the changes we have made:
```typescript
//Calendar.tsx
import dayjs, { Dayjs } from 'dayjs';
import backArrow from '../assets/images/back.svg';
import forwardArrow from '../assets/images/forward.svg';
import './style.css';
import { useState } from 'react';
import { calendarObjectGenerator } from '../helper/calendarObjectGenerator';
import customParseFormat from 'dayjs/plugin/customParseFormat';
import isBetween from 'dayjs/plugin/isBetween';
dayjs.extend(customParseFormat);
dayjs.extend(isBetween);
const weekDays = ['Sun', 'Mon', 'Tue', 'Wed', 'Thur', 'Fri', 'Sat'];
export const Calendar = () => {
const [currentDate, setCurrentDate] = useState<Dayjs>(dayjs(Date.now()));
const [inputValue, setInputValue] = useState({
firstInput: '',
secondInput: '',
});
const { firstInput, secondInput } = inputValue;
const daysListGenerator = calendarObjectGenerator(currentDate);
const dateArrowHandler = (date: Dayjs) => {
setCurrentDate(date);
};
const handleInputChange = (event: React.ChangeEvent<HTMLInputElement>) => {
const date = event.target.value;
setInputValue({ ...inputValue, [event.target.name]: event.target.value });
// check if the entered date is valid.
const isValidDate = dayjs(date, 'DD.MM.YYYY').isValid();
if (!isValidDate || date.length < 10) return;
setCurrentDate(dayjs(date, 'DD.MM.YYYY'));
};
const rangePicker = (day: Dayjs) => {
const isTheSameYear =
currentDate.year() === dayjs(firstInput, 'DD.MM.YYYY').get('year');
const isFirstInputAfterSecondInput = dayjs(
firstInput,
'DD.MM.YYYY'
).isAfter(day);
const isCurrentMonthLessThanFirstInputMonth =
currentDate.month() < dayjs(firstInput, 'DD.MM.YYYY').get('month');
const isCurrentYearLessThanFirstInputYear =
currentDate.year() < dayjs(firstInput, 'DD.MM.YYYY').get('year');
const isTheSameDay =
dayjs(firstInput, 'DD.MM.YYYY').format('DD.MM.YYYY') ===
day.format('DD.MM.YYYY');
if (!firstInput && !secondInput) {
setInputValue({
...inputValue,
firstInput: day.format('DD.MM.YYYY'),
});
} else if (firstInput && !secondInput) {
if (isTheSameDay) return;
if (
isFirstInputAfterSecondInput ||
(isTheSameYear && isCurrentMonthLessThanFirstInputMonth) ||
isCurrentYearLessThanFirstInputYear
) {
setInputValue({
...inputValue,
secondInput: firstInput,
firstInput: day.format('DD.MM.YYYY'),
});
return;
}
setInputValue({
...inputValue,
secondInput: day.format('DD.MM.YYYY'),
});
} else if (firstInput && secondInput) {
setInputValue({
firstInput: day.format('DD.MM.YYYY'),
secondInput: '',
});
}
};
const handlePreviousMonthClick = (day: number) => {
const dayInPreviousMonth = currentDate.subtract(1, 'month').date(day);
setCurrentDate(dayInPreviousMonth);
};
const handleCurrentMonthClick = (day: number) => {
const dayInCurrentMonth = currentDate.date(day);
setCurrentDate(dayInCurrentMonth);
rangePicker(dayInCurrentMonth);
};
const handleNextMonthClick = (day: number) => {
const dayInNextMonth = currentDate.add(1, 'month').date(day);
setCurrentDate(dayInNextMonth);
};
const highlightDays = (el: number) => {
if (!secondInput) return;
return (
currentDate?.set('date', el).isAfter(dayjs(firstInput, 'DD.MM.YYYY')) &&
currentDate?.set('date', el).isBefore(dayjs(secondInput, 'DD.MM.YYYY'))
);
};
const remainingDaysIsBetween = () => {
const firstDay = dayjs(firstInput, 'DD.MM.YYYY');
const secondDay = dayjs(secondInput, 'DD.MM.YYYY');
const firstYear = firstDay?.year();
const secondYear = secondDay?.year();
if (
firstYear === secondYear &&
currentDate.year() === firstYear
) {
return (
firstDay &&
firstDay?.month() <= currentDate.month() &&
secondDay &&
secondDay?.month() > currentDate.month()
);
} else if (
secondYear &&
firstYear &&
secondYear > firstYear &&
currentDate.year() <= secondYear
) {
if (
currentDate.year() === firstYear &&
currentDate.month() < firstDay?.month()
) {
return;
}
if (currentDate.year() < secondYear) {
return (
(firstDay &&
firstDay?.year() === currentDate.year() &&
firstDay?.month() >= currentDate.month()) ||
(secondDay &&
currentDate.year() <= secondDay?.year() &&
currentDate.year() >= firstDay?.year())
);
} else {
return (
(firstDay &&
firstDay?.year() === currentDate.year() &&
firstDay?.month() <= currentDate.month()) ||
(secondDay &&
currentDate.year() <= secondDay?.year() &&
secondDay?.month() > currentDate.month())
);
}
}
};
return (
<div className='calendar__container'>
<div className='input__container'>
<input
name='firstInput'
className='input'
value={firstInput}
onChange={handleInputChange}
/>
<input
name='secondInput'
className='input'
value={secondInput}
onChange={handleInputChange}
/>
</div>
<div className='control__layer'>
<div className='month-year__layout'>
<div className='year__layout'>
<button
className='back__arrow'
onClick={() => dateArrowHandler(currentDate.subtract(1, 'year'))}
>
<img src={backArrow} alt='back arrow' />
</button>
<div className='title'>{currentDate.year()}</div>
<button
className='forward__arrow'
onClick={() => dateArrowHandler(currentDate.add(1, 'year'))}
>
<img src={forwardArrow} alt='forward arrow' />
</button>
</div>
<div className='month__layout'>
<button
className='back__arrow'
onClick={() => dateArrowHandler(currentDate.subtract(1, 'month'))}
>
<img src={backArrow} alt='back arrow' />
</button>
<div className='new-title'>
{daysListGenerator.months[currentDate.month()]}
</div>
<button
className='forward__arrow'
onClick={() => dateArrowHandler(currentDate.add(1, 'month'))}
>
<img src={forwardArrow} alt='forward arrow' />
</button>
</div>
</div>
<div className='days'>
{weekDays.map((el, index) => (
<div key={`${el}-${index}`} className='day'>
{el}
</div>
))}
</div>
<div className='calendar__content'>
<div className={'calendar__items-list'}>
{daysListGenerator.prevMonthDays.map((el, index) => {
const formatPrevMonthsDates = currentDate
.subtract(1, 'month')
.set('date', el)
?.format('DD.MM.YYYY');
const isBetween = currentDate
.subtract(1, 'month')
.set('date', el)
.isBetween(
dayjs(firstInput, 'DD.MM.YYYY'),
dayjs(secondInput, 'DD.MM.YYYY')
);
const isFirstDay = firstInput === formatPrevMonthsDates;
return (
<div
className='calendar__day'
key={`${el}/${index}`}
onClick={() => handlePreviousMonthClick(el)}
>
<button
className='calendar__item gray'
style={{
backgroundColor: `${
isBetween || (isFirstDay && secondInput)
? '#F4F6FA'
: ''
}`,
}}
>
{el}
</button>
</div>
);
})}
{daysListGenerator.days.map((el, index) => {
const formattedCurrentMonthDates = currentDate
.set('date', el)
?.format('DD.MM.YYYY');
const isDayEqualFirstInput =
firstInput === formattedCurrentMonthDates;
const isDayEqualSecondInput =
secondInput === formattedCurrentMonthDates;
const applyGrayBackground = !(
isDayEqualFirstInput || isDayEqualSecondInput
);
return (
<div
key={`${index}-/-${el}`}
className='calendar__day'
onClick={() => handleCurrentMonthClick(el)}
>
<button
className={`calendar__item
${
+el === +daysListGenerator.day &&
!(firstInput || secondInput)
? 'selected'
: isDayEqualFirstInput || isDayEqualSecondInput
? 'selectDay'
: ''
}`}
style={{
backgroundColor: `${
applyGrayBackground && highlightDays(el)
? '#F4F6FA'
: ''
}`,
}}
>
<div className='day__layout'>
<div className='text'>{el.toString()}</div>
</div>
</button>
{firstInput && secondInput && isDayEqualFirstInput && (
<span className='shadow right'></span>
)}
{firstInput && secondInput && isDayEqualSecondInput && (
<span className='shadow left'></span>
)}
</div>
);
})}
{daysListGenerator.remainingDays.map((el, idx) => {
return (
<div
key={`${idx}----${el}`}
className='calendar__day'
onClick={() => handleNextMonthClick(el)}
>
<button
className='calendar__item gray'
style={{
background: `${
remainingDaysIsBetween() ? '#F4F6FA' : ''
}`,
}}
>
{el}
</button>
</div>
);
})}
</div>
</div>
</div>
</div>
);
};
```
Thank you for following along until the end of this tutorial.
If you so desire, you can play around with what we have done so far, and find several areas to improve.
| oluwadahunsia |
1,895,055 | The Color Class | The Color class can be used to create colors. JavaFX defines the abstract Paint class for painting a... | 0 | 2024-06-20T16:14:30 | https://dev.to/paulike/the-color-class-4pck | java, programming, learning, beginners | The **Color** class can be used to create colors. JavaFX defines the abstract **Paint** class for painting a node. The **javafx.scene.paint.Color** is a concrete subclass of **Paint**, which is used to encapsulate colors, as shown in the figure below.

A color instance can be constructed using the following constructor:
`public Color(double r, double g, double b, double opacity);`
in which **r**, **g**, and **b** specify a color by its red, green, and blue components with values in the range from **0.0** (darkest shade) to **1.0** (lightest shade). The **opacity** value defines the transparency of a color within the range from **0.0** (completely transparent) to **1.0** (completely opaque). This is known as the RGBA model, where RGBA stands for red, green, blue, and alpha. The alpha value indicates the opacity. For example,
`Color color = new Color(0.25, 0.14, 0.333, 0.51);`
The **Color** class is immutable. Once a **Color** object is created, its properties cannot be changed. The **brighter()** method returns a new **Color** with larger red, green, and blue values, and the **darker()** method returns a new **Color** with smaller red, green, and blue values. The **opacity** value is the same as in the original **Color** object.
You can also create a **Color** object using the static methods **color(r, g, b)**, **color(r, g, b, opacity)**, **rgb(r, g, b)**, and **rgb(r, g, b, opacity)**.
Alternatively, you can use one of the many standard colors such as **BEIGE**, **BLACK**, **BLUE**, **BROWN**, **CYAN**, **DARKGRAY**, **GOLD**, **GRAY**, **GREEN**, **LIGHTGRAY**, **MAGENTA**, **NAVY**, **ORANGE**, **PINK**, **RED**, **SILVER**, **WHITE**, and **YELLOW** defined as constants in the **Color** class. The following code, for instance, sets the fill color of a circle to red:
`circle.setFill(Color.RED);` | paulike |
1,895,054 | Nucleoid: Neuro-Symbolic AI with Declarative Logic | Nucleoid is a reasoning engine for Neuro-Symbolic AI, implementing symbolic AI through declarative (logic-based) programming. Neuro-symbolic AI combines neural networks (which excel in pattern recognition and data-driven tasks) with symbolic AI (which focuses on reasoning and rule-based problem solving) to create systems that can both interpret complex data and understand abstract concepts. | 0 | 2024-06-20T16:13:50 | https://github.com/NucleoidAI/Nucleoid/blob/main/README.md | ai, showdev, node, javascript | ---
description: Nucleoid is a reasoning engine for Neuro-Symbolic AI, implementing symbolic AI through declarative (logic-based) programming. Neuro-symbolic AI combines neural networks (which excel in pattern recognition and data-driven tasks) with symbolic AI (which focuses on reasoning and rule-based problem solving) to create systems that can both interpret complex data and understand abstract concepts.
---
Nucleoid is a declarative (logic) runtime environment, a type of symbolic AI used as the reasoning engine in Neuro-Symbolic AI. The Nucleoid runtime tracks given statements in JavaScript syntax and creates relationships between variables, objects, functions, etc. in the logic graph. In brief, the runtime translates your business logic into a fully working application by managing the JavaScript state as well as storing it in the built-in data store, so that your application doesn't require an external database or anything else.

### Neural Networks: The Learning Component
Neural networks in Neuro-Symbolic AI are adept at learning patterns, relationships, and features from large datasets. These networks excel in tasks that involve classification, prediction, and pattern recognition, making them invaluable for processing unstructured data, such as images, text, and audio. Neural networks, through their learning capabilities, can generalize from examples to understand complex data structures and nuances in the data.
### Symbolic AI: The Reasoning Component
The symbolic component of Neuro-Symbolic AI focuses on logic, rules, and symbolic representations of knowledge. Unlike neural networks that learn from data, symbolic AI uses predefined rules and knowledge bases to perform reasoning, make inferences, and understand relationships between entities. This aspect of AI is transparent, interpretable, and capable of explaining its decisions and reasoning processes in a way that humans can understand.
<br/>

#### Declarative Logic in Symbolic Reasoning
Declarative logic is a subset of declarative programming, a style of building programs that expresses the logic of a computation without describing its control flow. In declarative logic, you state the facts and rules that define the problem domain. The runtime environment or the system itself figures out how to satisfy those conditions or how to apply those rules to reach a conclusion. This contrasts with imperative programming, where the developer writes code that describes the exact steps to achieve a goal.
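As a generic JavaScript illustration (not Nucleoid-specific), the same computation can be written both ways:

```javascript
// Imperative: spell out the control flow step by step
let total = 0;
for (const n of [1, 2, 3]) {
  total += n;
}

// Declarative: state the intent and let the runtime work out the steps
const sum = [1, 2, 3].reduce((acc, n) => acc + n, 0);
```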
Symbolic reasoning refers to the process of using symbols to represent problems and applying logical rules to manipulate these symbols and derive conclusions or solutions. In AI and computer science, it involves using symbolic representations for entities and actions, enabling the system to perform logical inferences, decision making, and problem-solving based on the rules and knowledge encoded in the symbols.
By integrating Nucleoid into Neuro-Symbolic AI, the system benefits from enhanced interpretability and reliability. The declarative logic and rules defined in Nucleoid provide clear explanations for the AI's decisions, making it easier for users to understand and trust the system's outputs. Furthermore, the explicit reasoning capabilities help ensure that decisions are made based on logical principles, adding a layer of reliability and consistency to the AI's behavior.

## Hello World :zap:
```javascript
User:
> 'Define human with name'
Assistant:
> 'The user requested to define a human with a name using a declarative class'
---
// Ubiquitous Code:
'use declarative';
class Human {
name: string;
constructor(name: string) {
this.name = name;
}
}
```
```javascript
User:
> 'All humans are mortal'
Assistant:
> 'All humans were declared as mortal.'
---
// Ubiquitous Code:
'use declarative';
$Human.mortal = true;
```
```javascript
User:
> 'Create a human with name "Socrates"'
Assistant:
> 'A human named 'Socrates' was successfully created.'
---
// Ubiquitous Code:
'use imperative';
var human1 = new Human('Socrates');
// Result: {"id":"human1","name":"Socrates","mortal":true}
```
```javascript
User:
> 'Is human with name 'Socrates' mortal?'
Assistant:
> 'The human named 'Socrates' was determined to be mortal.'
---
// Ubiquitous Code:
`use imperative`;
var human1 = new Human('Socrates');
human1.mortal;
// Result: true
```
> :bulb: **This executes the business logic and stores the state in the data store. `const app = nucleoid()` manages the internal data store without requiring an external database.**
Learn more at [nucleoid.com/docs/get-started](https://nucleoid.com/docs/get-started)
### Under the hood: Declarative (Logic) Runtime Environment
Nucleoid is an implementation of symbolic AI for declarative (logic) programming at runtime. As mentioned, the declarative runtime environment manages the JavaScript state and stores each transaction in the built-in data store by declaratively re-rendering JavaScript statements and building the knowledge graph (base) as well as an execution plan.
<p align="center">
<img src="https://cdn.nucleoid.com/media/taxonomy.png" width="450" alt="Nucleoid's Taxonomy"/>
</p>
The declarative runtime isolates a program's behavior definition from its technical instructions and executes declarative statements, which represent logical intention without carrying any technical detail. In this paradigm, there is no segregation regarding what is or isn't data; instead, the focus is on how a piece of data (a declarative statement) relates to others, so that any type of data, including business rules, can be added without requiring additional actions such as compiling, configuring, or restarting, as a result of this plasticity. This approach also opens up the possibility of storing data in the same box as the programming runtime.
<div align="center">
<table>
<tr>
<th>
<img src="https://cdn.nucleoid.com/media/diagram1.png" width="225" alt="Logical Diagram 1"/>
</th>
<th>
<img src="https://cdn.nucleoid.com/media/diagram2.png" width="275" alt="Logical Diagram 2"/>
</th>
</tr>
</table>
</div>
In short, the main objective of the project is to manage both data and logic under the same runtime. The declarative programming paradigm used by Nucleoid allows developers to focus on the business logic of the application, while the runtime manages the technical details. This allows for faster development and reduces the amount of code that needs to be written. Additionally, the sharding feature can help distribute the load across multiple instances, which can further improve the performance of the system.
## Benchmark
This is a comparison of our sample order app in Nucleoid IDE against MySQL and Postgres, using the Express.js and Sequelize libraries.
<img src="https://cdn.nucleoid.com/media/benchmark.png" alt="Benchmark" width="550"/>
> Performance benchmark happened in t2.micro of AWS EC2 instance and both databases had dedicated servers with <u>no indexes and default configurations</u>.
This does not necessarily mean the Nucleoid runtime is faster than MySQL or Postgres; rather, databases require constant maintenance by DBA teams (indexing, caching, purging, etc.), whereas Nucleoid tries to solve this problem by managing logic and data internally. As seen in the chart, for applications of average complexity, Nucleoid's performance is close to linear because of the on-chain data store, the in-memory computing model, and the limited IO.
---
<center>
<b>⭐️ Star us on GitHub for the support</b>
</center>
Thanks to declarative logic programming, we have a brand-new approach to Neuro-Symbolic AI. As we continue to explore the potential of this AI architecture, we welcome all kinds of contributions!
<p align="center">
<img src="https://cdn.nucleoid.com/media/nobel.png" alt="Nobel" />
</p>
<center>
Join us at
<br/>
<a href="https://github.com/NucleoidAI/Nucleoid">https://github.com/NucleoidAI/Nucleoid</a>
</center>
---
{% embed https://github.com/NucleoidAI/Nucleoid %}
| canmingir |
1,893,907 | Building a simple Date Picker with React and Day.js | Welcome! This is the second part of our three-part tutorial on creating a simple calendar using... | 0 | 2024-06-20T16:10:45 | https://dev.to/oluwadahunsia/building-a-simple-date-picker-with-react-and-dayjs-4oop | webdev, react, typescript, javascript | Welcome!
This is the second part of our three-part tutorial on creating a simple calendar using React and Day.js. In the first part, we built a custom calendar with React and Day.js.
You can check the first and the last parts here:
The first part:
{% embed https://dev.to/oluwadahunsia/building-a-custom-calendar-with-react-and-dayjs-a-step-by-step-guide-2h1d %}
The last part:
{% embed https://dev.to/oluwadahunsia/building-a-date-range-picker-with-react-and-dayjs-2b5a %}
In this part, we'll enhance our calendar from the first part by adding a date picker functionality. Our goal is to build the date picker below.

## Starter files.
If you have not gone through the first part where we built a basic calendar, don't worry. I'm providing all the necessary files so we can start together from the same point.
```css
//style
.calendar__container {
display: flex;
flex-direction: column;
align-items: center;
padding: 25px;
width: max-content;
background: #ffffff;
box-shadow: 5px 10px 10px #dedfe2;
}
.month-year__layout {
display: flex;
margin: 0 auto;
width: 100%;
flex-direction: row;
align-items: center;
justify-content: space-around;
}
.year__layout,
.month__layout {
width: 150px;
display: flex;
padding: 10px;
font-weight: 600;
align-items: center;
text-transform: capitalize;
justify-content: space-between;
}
.back__arrow,
.forward__arrow {
cursor: pointer;
background: transparent;
border: none;
}
.back__arrow:hover,
.forward__arrow:hover {
scale: 1.1;
transition: scale 0.3s;
}
.days {
display: grid;
grid-gap: 0;
width: 100%;
grid-template-columns: repeat(7, 1fr);
}
.day {
flex: 1;
font-size: 16px;
padding: 5px 7px;
text-align: center;
}
.calendar__content {
position: relative;
background-color: transparent;
}
.calendar__items-list {
text-align: center;
width: 100%;
height: max-content;
overflow: hidden;
display: grid;
grid-gap: 0;
list-style-type: none;
grid-template-columns: repeat(7, 1fr);
}
.calendar__items-list:focus {
outline: none;
}
.calendar__day {
position: relative;
display: flex;
justify-content: center;
align-items: center;
}
.calendar__item {
position: relative;
width: 50px;
height: 50px;
cursor: pointer;
background: transparent;
border-collapse: collapse;
background-color: white;
display: flex;
justify-content: center;
align-items: center;
text-align: center;
border: 1px solid transparent;
z-index: 200;
}
button {
margin: 0;
display: inline;
box-sizing: border-box;
}
.calendar__item:focus {
outline: none;
}
.calendar__item.selected {
font-weight: 700;
border-radius: 50%;
background: #1a73e8;
color: white;
outline: none;
border: none;
}
.calendar__item.selectDay {
position: relative;
background: #1a73e8;
color: white;
border-radius: 50%;
border: none;
z-index: 200;
}
.calendar__item.gray,
.calendar__item.gray:hover {
color: #c4cee5;
display: flex;
justify-content: center;
align-items: center;
}
.input__container {
display: flex;
justify-content: space-around;
}
.input {
height: 30px;
border-radius: 8px;
text-align: center;
align-self: center;
border: 1px solid #1a73e8;
}
.shadow {
position: absolute;
display: inline-block;
z-index: 10;
top: 0;
background-color: #f4f6fa;
height: 50px;
width: 50px;
}
.shadow.right {
left: 50%;
}
.shadow.left {
right: 50%;
}
```
```typescript
//Calendar.tsx
import dayjs, { Dayjs } from 'dayjs';
import backArrow from '../assets/images/back.svg';
import forwardArrow from '../assets/images/forward.svg';
import './style.css';
import { useState } from 'react';
import { calendarObjectGenerator } from '../helper/calendarObjectGenerator';
const weekDays = ['Sun', 'Mon', 'Tue', 'Wed', 'Thur', 'Fri', 'Sat'];
export const Calendar = () => {
const [currentDate, setCurrentDate] = useState<Dayjs>(dayjs(Date.now()));
const daysListGenerator = calendarObjectGenerator(currentDate);
const dateArrowHandler = (date: Dayjs) => {
setCurrentDate(date);
};
const handlePreviousMonthClick = (day: number) => {
const dayInPreviousMonth = currentDate.subtract(1, 'month').date(day);
setCurrentDate(dayInPreviousMonth);
};
const handleCurrentMonthClick = (day: number) => {
const dayInCurrentMonth = currentDate.date(day);
setCurrentDate(dayInCurrentMonth);
};
const handleNextMonthClick = (day: number) => {
const dayInNextMonth = currentDate.add(1, 'month').date(day);
setCurrentDate(dayInNextMonth);
};
return (
<div className='calendar__container'>
<div className='control__layer'>
<div className='month-year__layout'>
<div className='year__layout'>
<button
className='back__arrow'
onClick={() => dateArrowHandler(currentDate.subtract(1, 'year'))}
>
<img src={backArrow} alt='back arrow' />
</button>
<div className='title'>{currentDate.year()}</div>
<button
className='forward__arrow'
onClick={() => dateArrowHandler(currentDate.add(1, 'year'))}
>
<img src={forwardArrow} alt='forward arrow' />
</button>
</div>
<div className='month__layout'>
<button
className='back__arrow'
onClick={() => dateArrowHandler(currentDate.subtract(1, 'month'))}
>
<img src={backArrow} alt='back arrow' />
</button>
<div className='new-title'>
{daysListGenerator.months[currentDate.month()]}
</div>
<button
className='forward__arrow'
onClick={() => dateArrowHandler(currentDate.add(1, 'month'))}
>
<img src={forwardArrow} alt='forward arrow' />
</button>
</div>
</div>
<div className='days'>
{weekDays.map((el, index) => (
<div key={`${el}-${index}`} className='day'>
{el}
</div>
))}
</div>
<div className='calendar__content'>
<div className={'calendar__items-list'}>
{daysListGenerator.prevMonthDays.map((el, index) => {
return (
<button
key={`${el}/${index}`}
className='calendar__item gray'
onClick={() => handlePreviousMonthClick(el)}
>
{el}
</button>
);
})}
{daysListGenerator.days.map((el, index) => {
return (
<div
key={`${index}-/-${el}`}
className='calendar__day'
onClick={() => handleCurrentMonthClick(el)}
>
<button
className={`calendar__item
${+el === +daysListGenerator.day ? 'selected' : ''}`}
>
<div className='day__layout'>
<div className='text'>{el.toString()}</div>
</div>
</button>
</div>
);
})}
{daysListGenerator.remainingDays.map((el, idx) => {
return (
<button
className='calendar__item gray'
key={`${idx}----${el}`}
onClick={() => handleNextMonthClick(el)}
>
{el}
</button>
);
})}
</div>
</div>
</div>
</div>
);
};
```
```typescript
//back.svg
<?xml version="1.0" encoding="utf-8"?>
<svg width="20px" height="20px" viewBox="0 0 1000 1000" class="icon" version="1.1" xmlns="http://www.w3.org/2000/svg"><path d="M768 903.232l-50.432 56.768L256 512l461.568-448 50.432 56.768L364.928 512z" fill="#000000" /></svg>
```
```typescript
//forward.svg
<?xml version="1.0" encoding="utf-8"?>
<svg width="20px" height="20px" viewBox="0 0 1000 1000" class="icon" version="1.1" xmlns="http://www.w3.org/2000/svg"><path d="M256 120.768L306.432 64 768 512l-461.568 448L256 903.232 659.072 512z" fill="#000000" /></svg>
```
The helper function for generating the days in our calendar.
```typescript
//calendarObjectGenerator.tsx
import dayjs, { Dayjs } from 'dayjs';
import LocaleData from 'dayjs/plugin/localeData';
dayjs.extend(LocaleData);
type GeneratedObjectType = {
prevMonthDays: number[];
days: number[];
remainingDays: number[];
day: number;
months: string[];
};
export const calendarObjectGenerator = (
currentDate: Dayjs
): GeneratedObjectType => {
const numOfDaysInPrevMonth = currentDate.subtract(1, 'month').daysInMonth();
const firstDayOfCurrentMonth = currentDate.startOf('month').day();
return {
days: Array.from(
{ length: currentDate.daysInMonth() },
(_, index) => index + 1
),
day: Number(currentDate.format('DD')),
months: currentDate.localeData().months(),
prevMonthDays: Array.from(
{ length: firstDayOfCurrentMonth },
(_, index) => numOfDaysInPrevMonth - index
).reverse(),
remainingDays: Array.from(
{ length: 6 - currentDate.endOf('month').day() },
(_, index) => index + 1
),
};
};
```
```typescript
//App.tsx
import { Calendar } from './Calendar/Calendar';
function App() {
return (
<>
<Calendar />
</>
);
}
export default App;
```
## Adding an input field
To create a single date picker, we need to add an input field to our calendar.
Ideally, we should use a masked input to ensure that only a valid date format is entered. Although we're currently checking if the entry is at least 10 characters to satisfy the `DD.MM.YYYY` format, this alone isn't enough to ensure validity. Implementing a masked input will enforce the exact format we require.
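We won't build a full masked input in this tutorial, but as a rough sketch, a helper like the hypothetical `maskDate` below could coerce keystrokes into the `DD.MM.YYYY` shape (a dedicated masking library is likely the better choice in production):

```typescript
// Hypothetical helper: keeps only digits and re-inserts the dots of DD.MM.YYYY.
const maskDate = (raw: string): string => {
  const digits = raw.replace(/\D/g, '').slice(0, 8); // at most DDMMYYYY
  return [digits.slice(0, 2), digits.slice(2, 4), digits.slice(4, 8)]
    .filter(Boolean)
    .join('.');
};

maskDate('18062024'); // '18.06.2024'
maskDate('1806');     // '18.06'
```

Calling `maskDate(event.target.value)` at the top of the change handler would be one way to wire it in.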
Also, because we will be changing date formats often, let us add the customParseFormat plugin from dayjs. Here is the updated Calendar.tsx file.
```typescript
//Calendar.tsx
import dayjs, { Dayjs } from 'dayjs';
import customParseFormat from 'dayjs/plugin/customParseFormat'; // add line
import backArrow from '../assets/images/back.svg';
import forwardArrow from '../assets/images/forward.svg';
import { useState } from 'react';
import { calendarObjectGenerator } from '../helper/calendarObjectGenerator';
import './style.css';
dayjs.extend(customParseFormat); // add line
const weekDays = ['Sun', 'Mon', 'Tue', 'Wed', 'Thur', 'Fri', 'Sat'];
export const Calendar = () => {
const [currentDate, setCurrentDate] = useState<Dayjs>(dayjs(Date.now()));
const [inputValue, setInputValue] = useState<string>('');
const daysListGenerator = calendarObjectGenerator(currentDate);
const dateArrowHandler = (date: Dayjs) => {
setCurrentDate(date);
};
const handleInputChange = (event: React.ChangeEvent<HTMLInputElement>) => {
const date = event.target.value;
setInputValue(date);
// check if the entered date is valid.
const isValidDate = dayjs(date, 'DD.MM.YYYY').isValid();
if (!isValidDate || date.length < 10) return;
//if you pass date without specifying the format ('DD.MM.YYYY'),
// you might get an error when you decide to edit a selected date in the input field.
setCurrentDate(dayjs(date, 'DD.MM.YYYY'));
};
const handlePreviousMonthClick = (day: number) => {
const dayInPreviousMonth = currentDate.subtract(1, 'month').date(day);
setCurrentDate(dayInPreviousMonth);
setInputValue(dayInPreviousMonth.format('DD.MM.YYYY')); // add line
};
const handleCurrentMonthClick = (day: number) => {
const dayInCurrentMonth = currentDate.date(day);
setCurrentDate(dayInCurrentMonth);
setInputValue(dayInCurrentMonth.format('DD.MM.YYYY')); // add line
};
const handleNextMonthClick = (day: number) => {
const dayInNextMonth = currentDate.add(1, 'month').date(day);
setCurrentDate(dayInNextMonth);
setInputValue(dayInNextMonth.format('DD.MM.YYYY')); // add line
};
return (
<div className='calendar__container'>
<input
className='input'
value={inputValue}
onChange={handleInputChange}
/>
<div className='control__layer'>
<div className='month-year__layout'>
<div className='year__layout'>
<button
className='back__arrow'
onClick={() => dateArrowHandler(currentDate.subtract(1, 'year'))}
>
<img src={backArrow} alt='back arrow' />
</button>
<div className='title'>{currentDate.year()}</div>
<button
className='forward__arrow'
onClick={() => dateArrowHandler(currentDate.add(1, 'year'))}
>
<img src={forwardArrow} alt='forward arrow' />
</button>
</div>
<div className='month__layout'>
<button
className='back__arrow'
onClick={() => dateArrowHandler(currentDate.subtract(1, 'month'))}
>
<img src={backArrow} alt='back arrow' />
</button>
<div className='new-title'>
{daysListGenerator.months[currentDate.month()]}
</div>
<button
className='forward__arrow'
onClick={() => dateArrowHandler(currentDate.add(1, 'month'))}
>
<img src={forwardArrow} alt='forward arrow' />
</button>
</div>
</div>
<div className='days'>
{weekDays.map((el, index) => (
<div key={`${el}-${index}`} className='day'>
{el}
</div>
))}
</div>
<div className='calendar__content'>
<div className={'calendar__items-list'}>
{daysListGenerator.prevMonthDays.map((el, index) => {
return (
<div
key={`${el}/${index}`}
className='calendar__day'
//add this line
onClick={() => handlePreviousMonthClick(el)}
>
<button
className='calendar__item gray'
>
{el}
</button>
</div>
);
})}
{daysListGenerator.days.map((el, index) => {
return (
<div
key={`${index}-/-${el}`}
className='calendar__day'
//add this line
onClick={() => handleCurrentMonthClick(el)}
>
<button
className={`calendar__item
${+el === +daysListGenerator.day ? 'selected' : ''}`}
>
<div className='day__layout'>
<div className='text'>{el.toString()}</div>
</div>
</button>
</div>
);
})}
{daysListGenerator.remainingDays.map((el, idx) => {
return (
<div
key={`${idx}----${el}`}
className='calendar__day'
//add this line
onClick={() => handleNextMonthClick(el)}
>
<button
className='calendar__item gray'
>
{el}
</button>
</div>
);
})}
</div>
</div>
</div>
</div>
);
};
```
At this point, you should have something similar to this:

There you have it: a simple date picker built upon our calendar. In the concluding part, we will build a date range picker on top of this date picker.
See you in the next one. | oluwadahunsia |
1,895,052 | Why We Adopted a Synchronous API for the New TypeScript ORM | I am developing a TypeScript ORM library called Accel Record. Unlike other TypeScript/JavaScript ORM... | 27,598 | 2024-06-20T16:10:09 | https://dev.to/koyopro/why-we-adopted-a-synchronous-api-for-the-new-typescript-orm-1jm | typescript, orm, opensource, activerecord | I am developing a TypeScript ORM library called [Accel Record](https://www.npmjs.com/package/accel-record). Unlike other TypeScript/JavaScript ORM libraries, Accel Record has adopted a synchronous API instead of an asynchronous one.
In this article, I will explain the background and reasons for adopting a synchronous API in Accel Record.
## The ORM We Wanted to Create
In the article "[Seeking a Type-Safe Ruby on Rails in TypeScript, I Started Developing an ORM](https://dev.to/koyopro/seeking-a-type-safe-ruby-on-rails-in-typescript-i-started-developing-an-orm-1of5)," I introduced the start of my work on a TypeScript ORM library.
My goal was to have a framework for TypeScript that is as efficient as Ruby on Rails. To achieve this, I decided to try creating an ORM in TypeScript with functionalities similar to Rails' Active Record. Hence, the first step in creating this new ORM was to imitate the API of Active Record.
## Problems with Asynchronous APIs
In JavaScript/TypeScript, database access is typically done using asynchronous APIs with Promises or callbacks. Since libraries for database access also return Promises for each operation, the new ORM naturally implemented each API asynchronously.
When executing asynchronous APIs, it is common to use `await` to handle them in sequence. For example, when performing CRUD operations on the User model, it looks like this:
```ts
await User.create({ name: "Foo" }); // Create
const user = await User.find(1); // Read
await user.update({ name: "Bar" }); // Update
await user.delete(); // Delete
```
Although it’s somewhat tedious to write `await` each time, it wasn't a major issue initially. However, the problem became more significant when handling associations. Consider a case where the User model has a hasOne association with the Setting model.
### Problem Example 1: Updating Associations
I wanted to write the update process like this, following the Active Record interface:
```ts
const setting = Setting.build({ theme: "dark" });
await user.setting = setting; // This is not possible
```
The reason for adding `await` here is that this operation **might** involve database access. In Rails' Active Record, if this setter causes a change in either the user or setting, a database access (save operation) occurs.
However, since TypeScript setters cannot be asynchronous, this kind of syntax is not possible. We needed to consider an alternative interface.
### Problem Example 2: Loading Associations
When fetching associations, `await` needs to be written each time because there is a **possibility** of database access.
```ts
const theme = (await user.setting).theme;
```
In Active Record, associations are lazy-loaded, so database access may occur when fetching associations. If the user instance already has the setting cached, no database access occurs; otherwise, it does. This also necessitated reconsidering the interface due to usability issues.
Each of these issues could be somewhat resolved by designing the interface carefully. However, I felt that repeatedly making such adjustments was gradually leading the library’s usability away from the ideal. The asynchronous API was restricting the library's interface.
## The Conceptual Shift to Synchronous APIs
Rails' Active Record abstracts database access processes by associating tables with model classes. However, making these APIs asynchronous means constantly being aware of the timing of database access, which hinders abstraction and reduces development efficiency.
Continuing with an asynchronous API to implement the ORM made me realize that achieving the same level of abstraction as Active Record would be difficult, and so would achieving high development efficiency, which was the initial goal.
So, I reconsidered the API design and thought there might be another approach.
**I decided to question the assumption that "database access APIs in JavaScript/TypeScript must be asynchronous."**
Synchronous API calls without Promises do not require `await`. If we could implement each API as a synchronous API rather than an asynchronous one, the aforementioned issues would not arise. There would be no need to worry about whether to add `await` to each method call, allowing a development experience closer to Active Record.
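To make the contrast concrete, here is a sketch of how the earlier CRUD and association examples might read with a synchronous API. It mirrors the method names used above purely for illustration and is not necessarily the library's final interface:
```ts
// Hypothetical synchronous counterparts of the earlier examples:
// each call returns a value directly, so no await is needed.
User.create({ name: "Foo" }); // Create
const user = User.find(1); // Read
user.update({ name: "Bar" }); // Update
user.delete(); // Delete

// The association problems described above also disappear:
user.setting = Setting.build({ theme: "dark" }); // the setter may save internally
const theme = user.setting.theme; // lazy loading without await
```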
Thus, the next step was to answer the following questions:
- Why do JS/TS database access libraries adopt asynchronous APIs?
- Is it absolutely necessary to use asynchronous APIs?
- Can an ORM be created using synchronous APIs?
## Node.js Event Loop and Synchronous Processing
The goal of this ORM library is to provide a development experience similar to Ruby on Rails’ Active Record. Therefore, it is intended for server-side use, not for frontend use. The most common execution environment for server-side TypeScript (JavaScript) is Node.js. So, I investigated the challenges of using synchronous APIs in Node.js.
The most relevant information from Node.js official documentation includes:
- [Node.js — The Node.js Event Loop](https://nodejs.org/en/learn/asynchronous-work/event-loop-timers-and-nexttick)
- [Node.js — Don't Block the Event Loop (or the Worker Pool)](https://nodejs.org/en/learn/asynchronous-work/dont-block-the-event-loop)
Node.js uses a single-threaded, asynchronous I/O model, which achieves high performance by utilizing time waiting for I/O to perform other tasks. JavaScript and Node.js have the concept of an event loop, but synchronous processing can block the event loop, preventing other tasks from being processed during that time. This could potentially degrade system performance compared to using asynchronous processing.
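As a small, self-contained illustration of what "blocking the event loop" means, the synchronous loop below delays an already-scheduled timer callback until it finishes:
```ts
setTimeout(() => console.log("timer fired"), 10);

// Synchronous (blocking) work: nothing else on the event loop,
// including the timer above, can run until this loop completes.
const start = Date.now();
while (Date.now() - start < 2000) {
  // busy-wait for roughly two seconds
}

console.log("sync work done"); // printed first; "timer fired" follows about 2s late
```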
## The Downsides of Synchronous Processing and Mitigation Strategies
For example, if a web application handles HTTP requests only with synchronous APIs, it cannot accept other requests until the current one is completed. However, this behavior is common in other languages. In Ruby on Rails, for instance, a process typically cannot handle other requests until one request is completed.
Thus, while using synchronous APIs may lower performance compared to asynchronous Node.js applications, it is not necessarily inferior in performance compared to applications in other languages.
Moreover, this refers to the performance per thread. By running multiple processes or threads in parallel, overall system performance may not significantly decrease. Parallelizing server-side processing with multiple processes is very common in web applications. For instance, in Ruby, it's common to use application servers like Unicorn to run multiple processes.
Even in Node.js, processes can be run in parallel, and there are mechanisms to run certain tasks on separate threads without blocking the event loop.[^1]
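For example, here is a minimal sketch of the multi-process approach using Node.js's built-in `cluster` module, forking one worker per CPU core so that blocking work in one worker does not stall the others:
```ts
import cluster from "node:cluster";
import http from "node:http";
import { cpus } from "node:os";

if (cluster.isPrimary) {
  // Fork one worker per CPU core; each worker gets its own event loop.
  for (let i = 0; i < cpus().length; i++) {
    cluster.fork();
  }
} else {
  // All workers share port 3000; synchronous work in one worker
  // blocks only that worker's event loop.
  http
    .createServer((req, res) => res.end(`handled by pid ${process.pid}`))
    .listen(3000);
}
```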
While using synchronous processing in Node.js may reduce performance per thread, system-level performance degradation can potentially be avoided through system architecture.
Additionally, in serverless environments like AWS Lambda, where handling multiple requests concurrently in a single process (container) is uncommon, synchronous processing may not impact performance significantly.
## Prioritizing Development Efficiency Over System Performance
Ultimately, my goal is to have a framework for TypeScript with development efficiency comparable to Rails. What I prioritize is development efficiency, not system performance.
Many development environments prioritize development efficiency over system (per-thread) performance. If performance was the only concern, faster languages like C would always be chosen for server-side development. However, languages like PHP and Ruby, which are relatively slower, are also popular. This is because these languages and frameworks are considered to provide a more efficient development environment.
## Adopting a Synchronous API in the New ORM
Based on the investigation, I have organized answers to the initial questions:
- Why do JS/TS database access libraries adopt asynchronous APIs?
- To avoid blocking the JavaScript event loop. Blocking the event loop can lead to degraded system performance.
- Is it absolutely necessary to use asynchronous APIs?
- Not necessarily. If the degradation in performance per thread is acceptable, synchronous APIs can be used. (And, as mentioned, there are ways to mitigate this downside through system architecture.)
- Can an ORM be created using synchronous APIs?
- There seems to be no inherent restriction in JavaScript or Node.js preventing this.
If the ORM is designed with synchronous APIs, the main downside would be a degradation in (per-thread) performance. However, as discussed, this can be mitigated through system architecture. Therefore, weighing this downside against the benefit of improved development efficiency for library users, I concluded that the benefits of adopting synchronous APIs outweigh the downsides.
Thus, the new ORM, Accel Record, has adopted synchronous APIs despite being a TypeScript library.
## Conclusion
In this article, I explained the background and reasons for adopting synchronous APIs in Accel Record.
First, it was challenging to achieve the ideal ORM interface with asynchronous APIs. Asynchronous APIs restricted the library's interface. This led to investigating whether a synchronous API could be adopted. Although there were concerns about system performance, considering the project's goals, we determined that the benefits (improved development efficiency) of realizing the ideal interface were significant.
To see how Accel Record achieves an ideal interface with synchronous APIs, please check out the [Introduction to "Accel Record": A TypeScript ORM Using the Active Record Pattern](https://dev.to/koyopro/introduction-to-accel-record-a-typescript-orm-using-the-active-record-pattern-2oeh) or the [README](https://github.com/koyopro/accella/blob/main/packages/accel-record/README.md).
[^1]: [Cluster | Node.js v22.2.0 Documentation](https://nodejs.org/api/cluster.html#cluster)
| koyopro |
1,895,048 | LayerZero (ZRO) Airdrop Claim Promo by GetBlock: Private RPCs for Degens | As ZRO airdrop is finally happening on Jun.20, GetBlock offers premium RPC nodes for enhanced... | 0 | 2024-06-20T16:08:05 | https://dev.to/getblockapi/layerzero-zro-airdrop-claim-promo-by-getblock-private-rpcs-for-degens-1jja | airdrop, layerzero, nodes, cryptocurrency |

As the ZRO airdrop is finally happening on Jun. 20, GetBlock offers premium RPC nodes for an enhanced airdrop experience. With GetBlock’s private RPCs for EVM networks, airdrop hunters can protect themselves from the inevitable network congestion.
## ZRO airdrop with GetBlock: Claim faster with custom RPCs
GetBlock, a premium provider of RPC nodes and Web3 infrastructure, launches a special campaign for LayerZero airdrop participants. On [GetBlock](https://getblock.io/?utm_source=external&utm_medium=article&utm_campaign=devto_zroairdrop), airdrop winners can create custom RPC endpoints to accelerate the process of claim.
While claiming tokens via a standard wallet, ZRO community members are highly likely to get stuck due to the overload of default RPC endpoints: LayerZero tokens will be distributed to over 1.28 million accounts. As such, getting custom private RPC endpoints is a smart bet for a serious airdrop farmer.
With GetBlock’s private RPC endpoints, ZRO claim attendees will be able to get their tokens faster than their competitors. As a result, they also will be the first to inject liquidity to exchanges and DeFis.
With GetBlock, airdrop winners don’t need to worry: the service supports a full range of L1 and L2 EVM networks.

_Image by GetBlock_
With a free account on GetBlock, Web3 enthusiasts can get reliable and fast endpoints to Ethereum, Base, Arbitrum, Polygon, BNB Smart Chain, Optimism, and Avalanche.
## LayerZero claim participants create RPC endpoints in three clicks
To customize the wallet for faster claims, users should sign up to [GetBlock](https://getblock.io/?utm_source=external&utm_medium=article&utm_campaign=devto_zroairdrop) via e-mail, MetaMask or Google Account. Then, in a Dashboard, newcomers are welcomed to get their API endpoints for corresponding networks.
GetBlock CEO Arseniy Voitenko sends kudos to all successful participants of ZRO airdrop and highlights the importance of custom RPC endpoints for enhanced claim experience:
_Well, this is highly likely the last big airdrop campaign in crypto. Millions of crypto users were waiting for it for almost two years. It’s crucial for GetBlock to stand together with the most passionate Web3 enthusiasts that have come a long way. As such - just like it was during ARB, STRK, and, most recently, ZK campaigns - our nodes won’t go offline so don’t miss the opportunity to claim ZRO with GetBlock._
Also, for an unmatched Web3 experience, GetBlock offers premium dedicated nodes with unlimited speed and requests. They can also be claimed in the Dashboard or via the configurator on GetBlock.

_Image by GetBlock_
Once a user gets an RPC endpoint, he or she should immediately integrate it into his or her wallet: MetaMask, Trustwallet, and so on. That’s it: once the wallet owner replaces the default RPC endpoints with GetBlock’s, the wallet is ready to claim tokens faster.
## ARB, STRK, ZK airdrop winners benefited from GetBlock infra
As of Q2 2024, this approach is battle-tested. During previous major distributions, our nodes handled thousands of requests from hundreds of airdrop participants.
The recent ZK airdrop was no exception: hundreds of claim participants created RPC endpoints for zkSync and used them to get their tokens as soon as possible.
By offering free RPC endpoints for airdrops, GetBlock contributes to global Web3 adoption and introduces its premium infrastructure to the general public. The platform connects dApps to 50+ blockchains via JSON RPC, WebSockets, gRPC and other mainstream interfaces.
| getblockapi |
1,893,632 | Building a Custom Calendar with React and Day.js: A Step-by-Step Guide. | Welcome! This is the first of a three-part series on building a custom calendar with React and... | 0 | 2024-06-20T16:08:01 | https://dev.to/oluwadahunsia/building-a-custom-calendar-with-react-and-dayjs-a-step-by-step-guide-2h1d | webdev, javascript, beginners, programming | Welcome!
This is the first of a **three-part** series on building a custom calendar with React and Day.js. We are going to start with a simple calendar in this first part and go all the way up to a date range picker in the third part.
When you are done with this, you can check the second part here:
{% embed https://dev.to/oluwadahunsia/building-a-simple-date-picker-with-react-and-dayjs-4oop %}
And the third part here:
{% embed https://dev.to/oluwadahunsia/building-a-date-range-picker-with-react-and-dayjs-2b5a %}
Here, we will create a simple, yet functional calendar using React and Day.js, then we will add more features and functionalites in the subsequent parts.
By the end of this part, you will have a basic calendar that looks like this:

By the end of the third part, we will have a Date Range Picker created from this calendar. Here is a sneak peak:

Try to follow along as much as you can. I have also provided the final version for this part at the end of this article. So, let us dive into creating a dynamic calendar application!
## Setting up our React project with Vite.
To kickstart our React project using Vite, let's begin by opening our terminal and running the following command:
```
npm create vite@latest
```
If you prefer, you can name the project **“Calendar”**. Otherwise, you are free to choose a name that suits you. Also, make sure to select **TypeScript** while setting up Vite for your application.
```
Project name: Use any name you prefer.
Framework: React.
Language: TypeScript.
```
Once you are all set, create a `Calendar` folder within your `src` directory. Inside the `Calendar` folder, create two files: `Calendar.tsx` for building our Calendar component and `style.css` for our styling needs. Your folder structure should resemble the following:
```
src/
└── Calendar/
├── Calendar.tsx
└── style.css
```
## Installing Day.js
As you already know, we will be using `Day.js` library for date manipulation, I recommend visiting their website to familiarize yourself with its capabilities. To install `Day.js` for our project, open your terminal and execute the following command:
```
npm install dayjs
```
Before proceeding, you might be wondering why we are choosing Day.js for date formatting instead of relying on the built-in Date object. The rationale behind using a third-party library like Day.js includes several advantages, one of which is the fact that Day.js gives simple APIs for date manipulation. I will show this with an example.
Imagine you have to get a date in this format: `DD.MM.YYYY HH:mm:ss`. Can you achieve this on your own using the built-in Date object?
One possible solution is to write a function that accepts a date and returns it in the format we desire. Let’s do that below.
```typescript
const formatGivenDate = (date: Date) => {
  // Pad a value to two digits, e.g. 5 -> "05"
  const pad = (value: number) => (value < 10 ? '0' + value : String(value));

  const day = pad(date.getDate());
  const month = pad(date.getMonth() + 1); // add 1 because months are zero-based
  const year = date.getFullYear();
  const hours = pad(date.getHours());
  const minutes = pad(date.getMinutes());
  const seconds = pad(date.getSeconds());

  // Join with a template literal and return the formatted date
  return `${day}.${month}.${year} ${hours}:${minutes}:${seconds}`;
};

const date = new Date();
console.log(formatGivenDate(date)); // 15.06.2024 21:48:13
```
That looks quite straightforward, right? But what if we can achieve the same thing with a lot less than that?
Let us see what it looks like with Day.js.
```typescript
const date = dayjs();
const formattedDate = date.format('DD.MM.YYYY HH:mm:ss');
```
Yay, we got the same result in 2 lines of code.
Another reason why we are using Day.js is because it is a really lightweight library compared to a lot of the other available options — it is just 2kb in size.
Okay, back to building our calendar.
## Starter files
To save time, I will provide you with the initial content of `Calendar.tsx` and the `style.css` files.
```css
.calendar__container {
display: flex;
flex-direction: column;
align-items: center;
padding: 25px;
width: max-content;
background: #ffffff;
box-shadow: 5px 10px 10px #dedfe2;
}
.month-year__layout {
display: flex;
margin: 0 auto;
width: 100%;
flex-direction: row;
align-items: center;
justify-content: space-around;
}
.year__layout,
.month__layout {
width: 150px;
display: flex;
padding: 10px;
font-weight: 600;
align-items: center;
text-transform: capitalize;
justify-content: space-between;
}
.back__arrow,
.forward__arrow {
cursor: pointer;
background: transparent;
border: none;
}
.back__arrow:hover,
.forward__arrow:hover {
scale: 1.1;
transition: scale 0.3s;
}
.days {
display: grid;
grid-gap: 0;
width: 100%;
grid-template-columns: repeat(7, 1fr);
}
.day {
flex: 1;
font-size: 16px;
padding: 5px 7px;
text-align: center;
}
.calendar__content {
position: relative;
background-color: transparent;
}
.calendar__items-list {
text-align: center;
width: 100%;
height: max-content;
overflow: hidden;
display: grid;
grid-gap: 0;
list-style-type: none;
grid-template-columns: repeat(7, 1fr);
}
.calendar__items-list:focus {
outline: none;
}
.calendar__day {
position: relative;
display: flex;
justify-content: center;
align-items: center;
}
.calendar__item {
position: relative;
width: 50px;
height: 50px;
cursor: pointer;
background: transparent;
border-collapse: collapse;
background-color: white;
display: flex;
justify-content: center;
align-items: center;
text-align: center;
border: 1px solid transparent;
z-index: 200;
}
button {
margin: 0;
display: inline;
box-sizing: border-box;
}
.calendar__item:focus {
outline: none;
}
.calendar__item.selected {
font-weight: 700;
border-radius: 50%;
background: #1a73e8;
color: white;
outline: none;
border: none;
}
.calendar__item.selectDay {
position: relative;
background: #1a73e8;
color: white;
border-radius: 50%;
border: none;
z-index: 200;
}
.calendar__item.gray,
.calendar__item.gray:hover {
color: #c4cee5;
display: flex;
justify-content: center;
align-items: center;
}
.input__container {
display: flex;
justify-content: space-around;
}
.input {
height: 30px;
border-radius: 8px;
text-align: center;
align-self: center;
border: 1px solid #1a73e8;
}
.shadow {
position: absolute;
display: inline-block;
z-index: 10;
top: 0;
background-color: #f4f6fa;
height: 50px;
width: 50px;
}
.shadow.right {
left: 50%;
}
.shadow.left {
right: 50%;
}
```
```typescript
//Calendar.tsx
import backArrow from '../assets/images/back.svg';
import forwardArrow from '../assets/images/forward.svg';
import './style.css';
const weekDays = ['Sun', 'Mon', 'Tue', 'Wed', 'Thur', 'Fri', 'Sat'];
const daysListGenerator = {
day: 15,
prevMonthDays: [26, 27, 28, 29, 30],
days: [
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21,
22, 23, 24, 25, 26, 27, 28, 29, 30,
],
remainingDays: [1, 2, 3, 4, 5],
};
export const Calendar = () => {
return (
<div className='calendar__container'>
<div className='control__layer'>
<div className='month-year__layout'>
<div className='year__layout'>
<button className='back__arrow'>
<img src={backArrow} alt='back arrow' />
</button>
<div className='title'>2024</div>
<button className='forward__arrow'>
<img src={forwardArrow} alt='forward arrow' />
</button>
</div>
<div className='month__layout'>
<button className='back__arrow'>
<img src={backArrow} alt='back arrow' />
</button>
<div className='new-title'>June</div>
<button className='forward__arrow'>
<img src={forwardArrow} alt='forward arrow' />
</button>
</div>
</div>
<div className='days'>
{weekDays.map((el, index) => (
<div key={`${el}-${index}`} className='day'>
{el}
</div>
))}
</div>
<div className='calendar__content'>
<div className={'calendar__items-list'}>
{daysListGenerator.prevMonthDays.map((el, index) => {
return (
<div key={`${el}/${index}`} className='calendar__day'>
<button className='calendar__item gray'>
{el}
</button>
</div>
);
})}
{daysListGenerator.days.map((el, index) => {
return (
<div key={`${index}-/-${el}`} className='calendar__day'>
<button
className={`calendar__item
${+el === +daysListGenerator.day ? 'selected' : ''}`}
>
<div className='day__layout'>
<div className='text'>{el.toString()}</div>
</div>
</button>
</div>
);
})}
{daysListGenerator.remainingDays.map((el, idx) => {
return (
<div key={`${idx}----${el}`} className='calendar__day'>
<button className='calendar__item gray' >
{el}
</button>
</div>
);
})}
</div>
</div>
</div>
</div>
);
};
```
```typescript
// src/App.tsx
import { Calendar } from './Calendar/Calendar';
function App() {
return (
<>
<Calendar />
</>
);
}
export default App;
```
You probably have noticed that I imported some svg files in the Calendar component, right?
I got them from svg repo but don’t worry I won’t leave you to source for them yourself. Here you have it.
```xml
<!-- back.svg -->
<?xml version="1.0" encoding="utf-8"?>
<svg width="20px" height="20px" viewBox="0 0 1000 1000" class="icon" version="1.1" xmlns="http://www.w3.org/2000/svg"><path d="M768 903.232l-50.432 56.768L256 512l461.568-448 50.432 56.768L364.928 512z" fill="#000000" /></svg>
```
```xml
<!-- forward.svg -->
<?xml version="1.0" encoding="utf-8"?>
<svg width="20px" height="20px" viewBox="0 0 1000 1000" class="icon" version="1.1" xmlns="http://www.w3.org/2000/svg"><path d="M256 120.768L306.432 64 768 512l-461.568 448L256 903.232 659.072 512z" fill="#000000" /></svg>
```
For no particular reason — other than just following along — create an `images` folder inside your assets directory and put the two svg files in this folder.
At this point, your file structure should look like this.
```
src/
└── assets/
└──images/
├── back.svg
└── forward.svg
└── Calendar/
├── Calendar.tsx
└── style.css
```
The `daysListGenerator` object in the `Calendar.tsx` file holds the values needed to populate our calendar. Let us look at what each of the property in the object means:
`day` is the current day or today. In the picture below, we can see it is `15`.
`prevMonthDays` array contains the days in the previous month that will show in the current month. These days are so important as they will also help to offset the days in the current month and ensure that the first day of the current month matches with the corresponding day of the week. In the picture below these days are: `[26, 27, 28, 29, 30]`
`days` array contains the days in the current month, the length of this array will range from 28 to 31.
`remainingDays` array contains the days in the next month that will show in the current month. Just as `prevMonthDays` fills the cells before the month starts, these days fill the cells after it ends. In the picture below, the remaining days are: `[1, 2, 3, 4, 5, 6, 7]`
Provided you have followed everything up to this point, if you start your application, you should have a calendar that looks like this:

The content of our calendar is currently hardcoded, so we need to fix this and make it as dynamic as we can.
## Adding functionalities to the calendar.
So far, we have successfully initialized our project, installed DayJs, and hardcoded a basic calendar. Now, the next step is to ensure everything functions correctly. Our calendar should be able to display different months and years, as well as the corresponding days. Let’s focus on implementing these features to ensure our calendar is fully operational and not perpetually stuck in the past.
You would agree with me that in order to achieve our aim, all we need to do is ensure that the `daysListGenerator` object is dynamic and each of the properties change with respect to today’s date.
```typescript
//
const daysListGenerator = {
day: 15,
prevMonthDays: [26, 27, 28, 29, 30],
days: [
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21,
22, 23, 24, 25, 26, 27, 28, 29, 30,
],
remainingDays: [1, 2, 3, 4, 5],
};
//
```
So, instead of hardcoding the properties in this object, we will try to generate them based on our current date (or today’s date).
To do that, let us create a helper function. This function will be responsible for generating all the currently hardcoded values in our calendar, such as `day`, `prevMonthDays`, `days`, and `remainingDays`.
In your src directory, create a folder called **`helper`** , inside this folder, create a **`calendarObjectGenerator.ts`** file. Your file structure should now look similar to this:
```
src/
└── assets/
└──images/
├── back.svg
└── forward.svg
└── Calendar/
├── Calendar.tsx
└── style.css
└── helper
└── calendarObjectGenerator.ts
```
Inside the `calendarObjectGenerator.ts` file, we will create a function called `calendarObjectGenerator`. This function will accept the `currentDate` (today’s date) and return an object containing calculated values, based on the current date, for all the properties we previously hardcoded.
This function is provided below:
```typescript
//calendarObjectGenerator.ts
import dayjs, { Dayjs } from 'dayjs';
import LocaleData from 'dayjs/plugin/localeData';
dayjs.extend(LocaleData);
type GeneratedObjectType = {
days: number[];
day: number;
prevMonthDays: number[];
remainingDays: number[];
months: string[];
};
export const calendarObjectGenerator = (currentDate: Dayjs): GeneratedObjectType => {
const numOfDaysInPrevMonth = currentDate.subtract(1, 'month').daysInMonth();
  const firstDayOfCurrentMonth = currentDate.startOf('month').day();
return {
days: Array.from({ length: currentDate.daysInMonth() }, (_, index) => index + 1),
day: Number(currentDate.format('DD')),
months: currentDate.localeData().months(),
//read explanation
prevMonthDays: Array.from({length:firstDayOfCurrentMonth}, (_,index) => numOfDaysInPrevMonth - index).reverse(),
remainingDays: Array.from(
{ length: 6 - currentDate.endOf('month').day() },
(_, index) => index + 1
),
  };
};
```
Let me explain how the value for each property in the object is calculated.
`days: Array.from({length: currentDate.daysInMonth()}, (_,index) => index + 1)`
To get the number of days in a month, we can use the `daysInMonth()` method from the Day.js object. This method returns the number of days as a numerical value (e.g., 30). We can then create an array of integers from 1 to the number of days in `currentDate.daysInMonth()`.
`day: Number(currentDate.format('DD'))`
We can extract the day of the month from our Day.js object by calling the `format` method on the object and specifying what we want to extract.
`months: currentDate.localeData().months()`
This returns an array of all the months of the year.
`prevMonthDays: Array.from({length:firstDayOfCurrentMonth}, (_,index) => numOfDaysInPrevMonth - index).reverse()`
Now, let's consider the days from the previous month that we want to display. It's important to include these days as they will serve as an offset for the beginning of our current month, ensuring that the first day of the current month falls on the correct day of the week. Although getting the previous and remaining days can be a bit tricky, we can figure it out with some logic.
Let's break it down (a worked example follows the list):
- The method **`dayjs().startOf('month').day()`** gives us the first day of the current month, where 0 = Sunday and 6 = Saturday.
- Using this value, we can create an array representing the number of days before the first day of the current month: **`Array.from({ length: dayjs().startOf('month').day() })`**.
- DayJs makes it easy to find the last day of the previous month. Simply subtract one month from the current month and get the number of days: **`dayjs().subtract(1, 'month').daysInMonth()`**.
- Putting it all together, we can generate the last few days of the previous month: **`Array.from({ length: dayjs().startOf('month').day() }, (_, index) => numOfDaysInPrevMonth - index)`**.
- Finally, reverse the array to sort the days in increasing order.
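As a quick sanity check, here is a hypothetical run of this logic for a date in July 2024 (July 1, 2024 is a Monday, and June has 30 days):
```typescript
import dayjs from 'dayjs';

const current = dayjs('2024-07-15');
const numOfDaysInPrevMonth = current.subtract(1, 'month').daysInMonth(); // 30 (June)
const firstDayOfCurrentMonth = current.startOf('month').day(); // 1 (Monday)

const prevMonthDays = Array.from(
  { length: firstDayOfCurrentMonth },
  (_, index) => numOfDaysInPrevMonth - index
).reverse();

console.log(prevMonthDays); // [30], since only June 30 precedes July 1 in the grid
```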
`remainingDays: Array.from( { length: 6 - currentDate.endOf('month').day() }, (_, index) => index + 1 )`
To fill in the remaining days of our calendar, we can use one of two approaches (compared in the sketch after this list):
- **Subtract from the Last Day of the Month:**
- Subtract the value of the last day of the month (0 = Sunday to 6 = Saturday) from 6.
- Using this result, create an array of numbers in increasing order.
- **Total Days in the Calendar:**
**`remainingDays: 42 - (firstDayOfCurrentMonth + currentDate.daysInMonth())`**
- Determine the total number of days you want to display in the calendar, for example, 42.
- Calculate the remaining days by subtracting the sum of the total offset and the total number of days in the current month from 42.
- If you're unsure why 42 is used, count the number of days in the hardcoded calendar above. You can choose the total number of days you want to display on your calendar.
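To see how the two approaches differ, here is the same hypothetical July 2024 date run through both. The first fills only the final week row, while the second pads the grid to a fixed 42 cells:
```typescript
import dayjs from 'dayjs';

const current = dayjs('2024-07-15');
const firstDayOfCurrentMonth = current.startOf('month').day(); // 1 (Monday)

// Approach 1: fill only to the end of the last week row.
// July 31, 2024 is a Wednesday (3), so we need 6 - 3 = 3 trailing days.
const remainingDays = Array.from(
  { length: 6 - current.endOf('month').day() },
  (_, index) => index + 1
);
console.log(remainingDays); // [1, 2, 3]

// Approach 2: pad the grid to a fixed 42 cells (6 full weeks) instead.
const remainingCount = 42 - (firstDayOfCurrentMonth + current.daysInMonth()); // 42 - (1 + 31) = 10
console.log(remainingCount); // 10 trailing days would be displayed
```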
## Updating the days of the month.
Now that we have completed the `calendarObjectGenerator` function, let's put it to use to make our calendar dynamic.
Create a state to hold the current date:
`const [currentDate, setCurrentDate] = useState<Dayjs>(dayjs(Date.now()));`
Import `calendarObjectGenerator` into the `Calendar.tsx` file and replace the hardcoded object assigned to the `daysListGenerator` variable with `calendarObjectGenerator(currentDate)`.
```typescript
import { useState } from 'react';
import dayjs, { Dayjs } from 'dayjs';
import backArrow from '../assets/images/back.svg';
import forwardArrow from '../assets/images/forward.svg';
import { calendarObjectGenerator } from '../helper/calendarObjectGenerator'; // add this line
import './style.css';
const weekDays = ['Sun', 'Mon', 'Tue', 'Wed', 'Thur', 'Fri', 'Sat'];
export const Calendar = () => {
const [currentDate, setCurrentDate] = useState<Dayjs>(dayjs(Date.now())); // add line
const daysListGenerator = calendarObjectGenerator(currentDate); //add this line
... // other lines are the same as the content of Calendar.tsx above
}
```
Now your calendar should be updated and match the current month.
## Enable click on month and year control arrows.
Even though we have partly ensured that the calendar is no longer hardcoded and we will not get stuck in the past, there is still no way to peek into the future or travel back in time.
So our next challenge is to ensure we can go into the previous and future months and years.
That means we have to activate our arrows to respond to clicks.
Let’s create a function that will handle clicks on our arrows. We will call it **dateArrowHandler**. We will then use this function to control our months and years.
```typescript
import { useState } from 'react';
import dayjs, { Dayjs } from 'dayjs';
import backArrow from '../assets/images/back.svg';
import forwardArrow from '../assets/images/forward.svg';
import { calendarObjectGenerator } from '../helper/calendarObjectGenerator';
import './style.css';
const weekDays = ['Sun', 'Mon', 'Tue', 'Wed', 'Thur', 'Fri', 'Sat'];
export const Calendar = () => {
const [currentDate, setCurrentDate] = useState<Dayjs>(dayjs(Date.now()));
const daysListGenerator = calendarObjectGenerator(currentDate);
//add this function
const dateArrowHandler = (date:Dayjs) => {
setCurrentDate(date);
};
return (
<div className='calendar__container'>
<div className='control__layer'>
<div className='month-year__layout'>
<div className='year__layout'>
<button
className='back__arrow'
            //add this line
            onClick={() => dateArrowHandler(currentDate.subtract(1, 'year'))}
>
<img src={backArrow} alt='back arrow' />
</button>
          {/* add this line */}
          <div className='title'>{currentDate.year()}</div>
<button
className='forward__arrow'
            //add this line
            onClick={() => dateArrowHandler(currentDate.add(1, 'year'))}
>
<img src={forwardArrow} alt='forward arrow' />
</button>
</div>
... // other lines are the same as the content of Calendar.tsx above
}
```
If you have been following, try making the months dynamic on your own before proceeding to the solution.
```typescript
import { useState } from 'react';
import dayjs, { Dayjs } from 'dayjs';
import backArrow from '../assets/images/back.svg';
import forwardArrow from '../assets/images/forward.svg';
import { calendarObjectGenerator } from '../helper/calendarObjectGenerator';
import './style.css';
const weekDays = ['Sun', 'Mon', 'Tue', 'Wed', 'Thur', 'Fri', 'Sat'];
export const Calendar = () => {
const [currentDate, setCurrentDate] = useState<Dayjs>(dayjs(Date.now()));
const daysListGenerator = calendarObjectGenerator(currentDate);
//add this function
const dateArrowHandler = (date:Dayjs) => {
setCurrentDate(date);
};
return (
<div className='calendar__container'>
<div className='control__layer'>
<div className='month-year__layout'>
<div className='year__layout'>
<button
className='back__arrow'
            //add this line
            onClick={() => dateArrowHandler(currentDate.subtract(1, 'year'))}
>
<img src={backArrow} alt='back arrow' />
</button>
          {/* add this line */}
          <div className='title'>{currentDate.year()}</div>
<button
className='forward__arrow'
            //add this line
            onClick={() => dateArrowHandler(currentDate.add(1, 'year'))}
>
<img src={forwardArrow} alt='forward arrow' />
</button>
</div>
<div className='month__layout'>
<button
className='back__arrow'
            //add this line
            onClick={() => dateArrowHandler(currentDate.subtract(1, 'month'))}
>
<img src={backArrow} alt='back arrow' />
</button>
<div className='new-title'>
            {/* add this line */}
            {daysListGenerator.months[currentDate.month()]}
</div>
<button
className='forward__arrow'
            //add this line
            onClick={() => dateArrowHandler(currentDate.add(1, 'month'))}
>
<img src={forwardArrow} alt='forward arrow' />
</button>
</div>
... // other lines are the same as the content of Calendar.tsx above
}
```
## Enable click on days.
Finally, we want to allow the user to click on the days (numbers) on our calendar. We will be adding three main functions for this purpose: one for handling clicks on the `prevMonthDays`, the second for handling clicks on the `days` of the current month, and the last one for handling clicks on the `remainingDays`.
```typescript
...
const handlePreviousMonthClick = (day: number) => {
const dayInPreviousMonth = currentDate.subtract(1, 'month').date(day);
setCurrentDate(dayInPreviousMonth);
};
const handleCurrentMonthClick = (day: number) => {
const dayInCurrentMonth = currentDate.date(day);
setCurrentDate(dayInCurrentMonth);
};
const handleNextMonthClick = (day: number) => {
const dayInNextMonth = currentDate.add(1, 'month').date(day);
setCurrentDate(dayInNextMonth);
};
...
```
As a little task for you, can you figure out where to put these functions in your code to enable clicks on the various days? Try to do that on your own. And once you are done, ensure your calendar looks somewhat like this:

That's all we need to do for this first part. I have provided the files containing everything we have written so far.
We will build upon this calendar in the subsequent parts. See you in the next one.
## Summary.
Below is the whole code for what we have so far:
The complete css file is at the beginning.
```typescript
// src/Calendar/Calendar.tsx
import dayjs, { Dayjs } from 'dayjs';
import backArrow from '../assets/images/back.svg';
import forwardArrow from '../assets/images/forward.svg';
import './style.css';
import { useState } from 'react';
import { calendarObjectGenerator } from '../helper/calendarObjectGenerator';
const weekDays = ['Sun', 'Mon', 'Tue', 'Wed', 'Thur', 'Fri', 'Sat'];
export const Calendar = () => {
const [currentDate, setCurrentDate] = useState<Dayjs>(dayjs(Date.now()));
const daysListGenerator = calendarObjectGenerator(currentDate);
const dateArrowHandler = (date: Dayjs) => {
setCurrentDate(date);
};
const handlePreviousMonthClick = (day: number) => {
const dayInPreviousMonth = currentDate.subtract(1, 'month').date(day);
setCurrentDate(dayInPreviousMonth);
};
const handleCurrentMonthClick = (day: number) => {
const dayInCurrentMonth = currentDate.date(day);
setCurrentDate(dayInCurrentMonth);
};
const handleNextMonthClick = (day: number) => {
const dayInNextMonth = currentDate.add(1, 'month').date(day);
setCurrentDate(dayInNextMonth);
};
return (
<div className='calendar__container'>
<div className='control__layer'>
<div className='month-year__layout'>
<div className='year__layout'>
<button
className='back__arrow'
onClick={() => dateArrowHandler(currentDate.subtract(1, 'year'))}
>
<img src={backArrow} alt='back arrow' />
</button>
<div className='title'>{currentDate.year()}</div>
<button
className='forward__arrow'
onClick={() => dateArrowHandler(currentDate.add(1, 'year'))}
>
<img src={forwardArrow} alt='forward arrow' />
</button>
</div>
<div className='month__layout'>
<button
className='back__arrow'
onClick={() => dateArrowHandler(currentDate.subtract(1, 'month'))}
>
<img src={backArrow} alt='back arrow' />
</button>
<div className='new-title'>
{daysListGenerator.months[currentDate.month()]}
</div>
<button
className='forward__arrow'
onClick={() => dateArrowHandler(currentDate.add(1, 'month'))}
>
<img src={forwardArrow} alt='forward arrow' />
</button>
</div>
</div>
<div className='days'>
{weekDays.map((el, index) => (
<div key={`${el}-${index}`} className='day'>
{el}
</div>
))}
</div>
<div className='calendar__content'>
<div className={'calendar__items-list'}>
{daysListGenerator.prevMonthDays.map((el, index) => {
return (
<button
key={`${el}/${index}`}
className='calendar__item gray'
onClick={() => handlePreviousMonthClick(el)}
>
{el}
</button>
);
})}
{daysListGenerator.days.map((el, index) => {
return (
<div
key={`${index}-/-${el}`}
className='calendar__day'
onClick={() => handleCurrentMonthClick(el)}
>
<button
className={`calendar__item
${+el === +daysListGenerator.day ? 'selected' : ''}`}
>
<div className='day__layout'>
<div className='text'>{el.toString()}</div>
</div>
</button>
</div>
);
})}
{daysListGenerator.remainingDays.map((el, idx) => {
return (
<button
className='calendar__item gray'
key={`${idx}----${el}`}
onClick={() => handleNextMonthClick(el)}
>
{el}
</button>
);
})}
</div>
</div>
</div>
</div>
);
};
```
```typescript
// src/helper/calendarObjectGenerator.ts
import dayjs, { Dayjs } from 'dayjs';
import LocaleData from 'dayjs/plugin/localeData';
dayjs.extend(LocaleData);
type GeneratedObjectType = {
prevMonthDays: number[];
days: number[];
remainingDays: number[];
day: number;
months: string[];
};
export const calendarObjectGenerator = (
currentDate: Dayjs
): GeneratedObjectType => {
const numOfDaysInPrevMonth = currentDate.subtract(1, 'month').daysInMonth();
const firstDayOfCurrentMonth = currentDate.startOf('month').day();
return {
days: Array.from(
{ length: currentDate.daysInMonth() },
(_, index) => index + 1
),
day: Number(currentDate.format('DD')),
months: currentDate.localeData().months(),
prevMonthDays: Array.from(
{ length: firstDayOfCurrentMonth },
(_, index) => numOfDaysInPrevMonth - index
).reverse(),
remainingDays: Array.from(
{ length: 6 - currentDate.endOf('month').day() },
(_, index) => index + 1
),
};
};
```
| oluwadahunsia |
1,895,047 | Migrate from Cord to SuperViz | With the latest announcement from the Cord Team stating that the company is shutting down in August,... | 0 | 2024-06-20T16:07:08 | https://dev.to/superviz/migrate-from-cord-to-superviz-2f9m | superviz, videosdk, realtime, tooling | With the latest announcement from the Cord Team stating that the company is shutting down in August, many of their users are now seeking an alternative. SuperViz shares the same mission: making it easy for any developer to add real-time collaboration to their apps. Today, we want to show you how we can [help you transition from Cord to SuperViz](https://superviz.com/migrating-from-cord-to-superviz).
### Migrating to SuperViz
We understand that news like this directly affects your team's backlog, and we want to make the migration as easy as possible. That's why we are offering a free subscription until you go into production, as well as free support from the SuperViz team to help you switch from Cord's components to ours.
If you want to know more, [book a meeting with us](https://calendly.com/vtnorton/superviz) to learn more about SuperViz and plan hands-on support for your migration. We are committed to making this transition as smooth as possible, so don't hesitate to reach out to us.
## What is SuperViz
SuperViz offers a variety of products that extend what you could build with Cord, like our AI-Powered Video SDK and a [Real-time Data Engine](https://docs.superviz.com/react-sdk/presence/real-time-data-engine) that can be integrated into your application to enable real-time data synchronization and collaboration. With SuperViz, you can take your app's interactivity and user engagement to the next level.
Just like Cord, we offer not only a Vanilla JavaScript SDK that can be used in any JS/TS project, but also a [React SDK](https://docs.superviz.com/react-sdk/initialization) to make integration with your React application simpler. If you don’t use NPM packages, we have an [option to use it as a CDN](https://docs.superviz.com/guides/misc/using-as-a-cdn).
### Quickstart
When building with SuperViz, you can [add multiple participants easily by creating a room](https://docs.superviz.com/getting-started/quickstart): a place where everyone with the same `roomId` will be together by default. You can [create participants and groups using our REST API](https://docs.superviz.com/rest-api/participants), but you don’t need it to start using multi-user features.
```jsx
const room = await SuperVizRoom(DEVELOPER_KEY, {
roomId: "<ROOM-ID>",
group: {
id: "<GROUP-ID>",
name: "<GROUP-NAME>",
},
participant: {
id: "<USER-ID>",
name: "<USER-NAME>"
},
});
```
Following the initialization of the room, you can add components to it as shown below. In this article, I’m showing you the quickest and simplest form of utilization, but each component has its own set of properties and methods that give it more power.
### Collaboration Components
[**Mouse Pointers**](https://docs.superviz.com/sdk/presence/mouse-pointers): Show where each user is looking and interacting in your application, in an HTML element or a Canvas element, providing a more collaborative experience, just like Cord’s Live Cursors. It’s quite simple to initialize:
```jsx
import { MousePointers } from "@superviz/sdk"
const mousePointers = new MousePointers("my-id"); // ID of the HTML or Canvas element
// Adding the component to the already initialized room.
room.addComponent(mousePointers);
```
[**Contextual Comments**](https://docs.superviz.com/sdk/comments/contextual-comments): Allow users to leave comments and feedback directly within the application, facilitating meaningful conversations and discussions, including in 3D environments like Three.js, Autodesk Viewer, and Matterport.
The initialization of Contextual Comments depends on the environment you want the comments to be displayed in. We offer a range of adapters. Here is how you would initialize it in an HTML element:
```jsx
import { Comments, HTMLPin } from '@superviz/sdk/lib/components';
// Initializing the Adapter with the HTML element ID
const pinAdapter = new HTMLPin("my-id", {
// This property is used to locate the elements that can receive a comment pin.
dataAttributeName: "data-comment-id",
});
// Initializing the Comments component with the adapter created
const comments = new Comments(pinAdapter);
// Adding the component to the already initialized room.
room.addComponent(comments);
```
[**Who-is-Online**](https://docs.superviz.com/sdk/presence/who-is-online): Shows who is currently active in your application, promoting a sense of community and collaboration. When used alongside other components, it includes features like Follow, Gather, and Go-To. With just a few lines of code, you can create a Page Presence replacement:
```jsx
import { WhoIsOnline } from "@superviz/sdk"
//ID of the element ou want the participants to be displayed
const whoIsOnline = new WhoIsOnline("my-id");
// Adding the component to the already initialized room.
room.addComponent(whoIsOnline);
```
[**Form Elements**](https://docs.superviz.com/sdk/presence/form-elements): Integrate interactive form elements in your application, enabling users to input data synchronized in real-time among participants. Here is the code to enable it:
```jsx
import { FormElements } from '@superviz/sdk';
const formElements = new FormElements({
// Use the fields array to input the elements ID you want to sync between participants
fields: ['name', 'email', 'dog', 'cat', 'fish'],
});
// Adding the component to the already initialized room.
room.addComponent(formElements);
```
[**Presence 3D**](https://docs.superviz.com/sdk/presence/AutodeskPresence): Provides a real-time display of users' positions in 3D environments, creating a more immersive collaboration experience. It can be integrated with platforms such as Three.js, Autodesk Viewer, and Matterport.
When using a 3D environment, you will need to install a different plugin for the type of environment you want, then you will have a code similar to this:
```jsx
import { Presence3D } from "@superviz/autodesk-viewer-plugin";
// Creating a Presence3D instance with the Autodesk Viewer object;
// depending on your environment, the list of properties to input can be different
const autodeskPresence = new Presence3D(viewer);
// Adding the component to the already initialized room.
room.addComponent(autodeskPresence);
```
### Video SDK
Our [Video SDK](https://docs.superviz.com/sdk/video/video-conference) allows you to easily add video conferencing capabilities to your application. It integrates seamlessly with other SuperViz components, providing you with the tools to create a fully interactive and collaborative experience for your users.
Some of the key features include [recently launched meeting recordings](https://docs.superviz.com/releases/2024/v6.3.0) and [transcript generation that you can interact with through AI](https://superviz.com/video) to get conversation topics, follow-ups, and much more.
To start a video meeting, you only need a few lines of code:
```jsx
import { VideoConference } from "@superviz/sdk";
const video = new VideoConference({
participantType: "host" // you will need at least one host on the meeting
});
// Adding the component to the already initialized room.
room.addComponent(video);
```
### Real-time Data Engine
Our [Real-time Data Engine](https://docs.superviz.com/sdk/presence/real-time-data-engine) provides the foundation for the collaboration components. It enables the synchronization of data between users, allowing for shared experiences and interactive sessions. You can create new experiences with this simple-to-use tool.
It uses a PubSub design pattern, where you subscribe to an event, and when new data is published to that topic, all subscribers receive it.
```jsx
import { Realtime, RealtimeComponentEvent, RealtimeComponentState } from "@superviz/sdk";
let channel;
const realtime = new Realtime();
// Subscribe to the REALTIME_STATE_CHANGED event
realtime.subscribe(RealtimeComponentEvent.REALTIME_STATE_CHANGED, (state) => {
// Check if the state of the component is ready
if (state === RealtimeComponentState.STARTED) {
// If it has, connect to a channel
channel = realtime.connect('<YOUR-CHANNEL-NAME>');
}
});
room.addComponent(realtime);
// Publish an event to the channel with any data
channel.publish("<EVENT_NAME>", anyObject);
// Subscribe to an event on the channel with a callback function
channel.subscribe("<EVENT_NAME>", callbackFunction);
// Define the callback function to handle the event data
function callbackFunction(eventData) {
// Handle event data here
}
```
As you can see [migration from Cord to SuperViz is easy](https://superviz.com/migrating-from-cord-to-superviz), and you can [count with our dedicated team](https://calendly.com/vtnorton/superviz) ready to guide you through the process. We believe in empowering developers to create interactive and collaborative applications, and we are here to help make your transition as seamless as possible. | vtnorton |
1,895,046 | Balancing cost and Resilience | Resilience in Cloud Architecture Resilience refers to an infrastructure's ability to recover quickly... | 0 | 2024-06-20T16:06:52 | https://dev.to/c0dingpanda/balancing-cost-and-resilience-4ido | azure, devops, cloud, design | **Resilience in Cloud Architecture**
Resilience refers to an infrastructure's ability to recover quickly from failures or disruptions and continue operating smoothly. In cloud computing, resilience patterns are designed to ensure that applications remain available and performant even in the face of challenges⁶. When architecting workloads for resilience, several factors come into play:
1. **Design Complexity**: As system complexity increases, so do the emergent behaviors. Each individual workload component must be resilient, and single points of failure across people, processes, and technology elements should be eliminated. Consider whether increasing system complexity or using a disaster recovery (DR) plan is more effective for meeting your resilience requirements.
2. **Cost to Implement**: Higher resilience often involves new software and infrastructure components, which can increase costs. However, these costs should be offset by potential savings from future loss. Shifting mission-critical workloads to the cloud can avoid expensive capital investments in hardware replacement.
3. **Operational Effort**: Continuously optimizing deployments, scripting processes, and keeping things simple contribute to operational excellence. Efficiently creating resources and deploying code is crucial for resilience.
4. **Effort to Secure**: Resilience also involves securing your systems. Implementing security measures without compromising availability is essential. Consider encryption, access controls, and monitoring.
5. **Environmental Impact**: Architecting for resilience affects the environment. Evaluate the trade-offs between resource usage, energy consumption, and sustainability.
**Achieving Resilience with Cost Efficiency**
To achieve sweet resilience with minimal cost, consider the following strategies:
1. **Operational Excellence**: Continuously optimize deployments, script processes, and keep things simple. The key question is how quickly you can create resources and deploy code.
2. **Identify Critical Resources**: Determine which resources are critical for your application. Configure failover and replication for these resources. Most resources can be replicated in a secondary region, ensuring availability even during disasters.
Remember, achieving the right balance between cost and resilience depends on your specific product and business needs. Assess whether the 4x cost increase for maximum resilience is truly worth it, or if a more cost-effective approach can still provide adequate protection. | c0dingpanda |
1,895,043 | Common Properties and Methods for Nodes | The abstract Node class defines many properties and methods that are common to all nodes. Nodes share... | 0 | 2024-06-20T16:05:37 | https://dev.to/paulike/common-properties-and-methods-for-nodes-1p8l | beginners, programming, learning, java | The abstract **Node** class defines many properties and methods that are common to all nodes. Nodes share many common properties. This section introduces two such properties **style** and **rotate**.
JavaFX style properties are similar to cascading style sheets (CSS) used to specify the styles for HTML elements in a Web page. So, the style properties in JavaFX are called JavaFX CSS. In JavaFX, a style property is defined with a prefix **–fx-**. Each node has its own style properties.
The syntax for setting a style is **styleName:value**. Multiple style properties for a node can be set together separated by semicolon (**;**). For example, the following statement
`circle.setStyle("-fx-stroke: black; -fx-fill: red;");`
sets two JavaFX CSS properties for a circle. This statement is equivalent to the following two statements.
`circle.setStroke(Color.BLACK);`
`circle.setFill(Color.RED);`
If an incorrect JavaFX CSS is used, your program will still compile and run, but the style is ignored.
The **rotate** property enables you to specify an angle in degrees for rotating the node from its center. If the degree is positive, the rotation is performed clockwise; otherwise, it is performed counterclockwise. For example, the following code rotates a button 80 degrees.
`button.setRotate(80);`
The program below gives an example that creates a button, sets its style, and adds it to a pane. It then rotates the pane 45 degrees and sets its style with border color red and background color light gray, as shown in the figure below.


As seen in the figures above, rotating a pane causes all of its contained nodes to rotate as well.
The **Node** class contains many useful methods that can be applied to all nodes. For example, you can use the **contains(double x, double y)** method to test where a point (_x_, _y_) is inside the boundary of a node. | paulike |
1,895,040 | Understanding Kubernetes: Why It's Essential for Modern Applications | Introduction: Welcome back to our CK2024 blog series! Today we dive into the fundamentals... | 0 | 2024-06-20T15:59:08 | https://dev.to/jensen1806/understanding-kubernetes-why-its-essential-for-modern-applications-1b4m | kubernetes, containerisation, docker, devops | ### Introduction:
Welcome back to our CK2024 blog series! Today we dive into the fundamentals of Kubernetes. This is the fourth instalment in our series where we've already covered the basics of containers, how to containerize an application, why we need containers, and the concept of multi-stage builds. If you haven’t caught up with the previous posts, I highly recommend doing so to get the most out of this one.
## The Fundamentals of Kubernetes
In our last few posts, we explored containerization and its benefits. Now, let’s look at Kubernetes – a powerful orchestration tool that enhances the management and scalability of containerized applications.

#### Why Do We Need Kubernetes?
Imagine you have a small application with a few containers running on a virtual machine. Everything works perfectly until one container crashes, impacting your users. You might assign a team to fix the issue, but what if it happens during off-hours or multiple containers crash simultaneously? The situation becomes even more challenging if your application scales up to hundreds or thousands of containers. Managing such a scenario manually is not feasible. Here’s where Kubernetes comes in.
#### Challenges with Docker Containers Alone
Running applications solely on Docker containers presents several challenges:
- **Manual Recovery**: If a container crashes, someone needs to manually restart it.
- **Scaling Issues**: Manually scaling containers to meet demand can be difficult and inefficient.
- **Resource Management**: Managing resources across multiple containers is complex.
- **Networking**: Establishing secure and reliable networking between containers is challenging.
- **High Availability**: Ensuring your application is always available requires significant effort.
- **Load Balancing**: Distributing traffic effectively across containers is not straightforward.
- **Service Discovery**: Keeping track of running containers and their endpoints is a hassle.
### How Kubernetes Solves These Problems
Kubernetes automates many of these tasks, providing a robust solution for container orchestration (a minimal example manifest follows the list):
1. **Self-Healing**: Kubernetes automatically restarts failed containers and reschedules them on healthy nodes.
2. **Automated Scaling**: It can scale applications up or down based on demand using the Horizontal Pod Autoscaler.
3. **Efficient Resource Management**: Kubernetes optimizes the use of resources, ensuring high performance and cost-effectiveness.
4. **Networking Solutions**: It offers a unified networking layer, simplifying communication between containers.
5. **High Availability**: Kubernetes ensures your application remains available through node redundancy and load balancing.
6. **Service Discovery**: Built-in service discovery mechanisms make it easy for containers to find and communicate with each other.
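As a concrete illustration of points 1 and 2, here is that minimal Deployment manifest (the names and image are illustrative): Kubernetes keeps three replicas of this container running and replaces any that fail.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                # illustrative name
spec:
  replicas: 3                  # Kubernetes maintains 3 running pods (self-healing)
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25    # any container image
          ports:
            - containerPort: 80
```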
### When Not to Use Kubernetes
While Kubernetes is a powerful tool, it's not always the best choice. For small applications with only a couple of containers, Kubernetes can be overkill. Managing Kubernetes clusters requires additional administrative effort and resources. For simpler needs, tools like Docker Compose or even running containers directly on a virtual machine might be more suitable and cost-effective.
#### Conclusion
I hope this post has given you a solid understanding of why Kubernetes is essential for modern applications and the challenges it addresses. Stay tuned for our next post, where we will delve into the architecture and fundamentals of Kubernetes.
See you soon in the next installment! | jensen1806 |
1,895,038 | Property Binding | You can bind a target object to a source object. A change in the source object will be automatically... | 0 | 2024-06-20T15:54:53 | https://dev.to/paulike/property-binding-2kgi | java, programming, learning, beginners | You can bind a target object to a source object. A change in the source object will be automatically reflected in the target object. JavaFX introduces a new concept called _property binding_ that enables a _target object_ to be bound to a _source object_. If the value in the source object changes, the target object is also changed automatically. The target object is called a _binding object_ or a _binding property_ and the source object is called a _bindable object_ or _observable object_.

As discussed in the preceding listing, the circle is not centered after the window is resized. In order to display the circle centered as the window resizes, the x- and y-coordinates of the circle center need to be reset to the center of the pane. This can be done by binding the **centerX** with pane’s **width/2** and **centerY** with pane’s **height/2**, as shown in the program below.

The **Circle** class has the **centerX** property for representing the x-coordinate of the circle center. This property like many properties in JavaFX classes can be used both as target and source in a property binding. A target listens to the changes in the source and automatically updates itself once a change is made in the source. A target binds with a source using the **bind** method as follows:
`target.bind(source);`
The **bind** method is defined in the **javafx.beans.property.Property** interface. A binding property is an instance of **javafx.beans.property.Property**. A source object is an instance of the **javafx.beans.value.ObservableValue** interface. An **ObservableValue** is an entity that wraps a value and allows the value to be observed for changes.
JavaFX defines binding properties for primitive types and strings. For a **double**/**float**/**long**/**int**/**boolean** value, its binding property type is **DoubleProperty**/**FloatProperty**/**LongProperty**/**IntegerProperty**/**BooleanProperty**. For a string, its binding property type is **StringProperty**. These properties are also subtypes of **ObservableValue**. So they can also be used as source objects for binding properties.
By convention, each binding property (e.g., **centerX**) in a JavaFX class (e.g., **Circle**) has a getter (e.g., **getCenterX()**) and setter (e.g., **setCenterX(double)**) method for returning and setting the property’s value. It also has a getter method for returning the property itself. The naming convention for this method is the property name followed by the word **Property**. For example, the property getter method for **centerX** is **centerXProperty()**. We call the **getCenterX()** method as the _value getter method_, the **setCenterX(double)** method as the _value setter method_, and **centerXProperty()** as the _property getter method_. Note that **getCenterX()** returns a **double** value and **centerXProperty()** returns an object of the **DoubleProperty** type.
Figure below (a) shows the convention for defining a binding property in a class and Figure below (b) shows a concrete example in which **centerX** is a binding property of the type **DoubleProperty**.

The program in Listing 14.5 is the same as in Listing 14.4 except that it binds **circle**’s **centerX** and **centerY** properties to half of **pane**’s width and height (lines 16–17). Note that **circle.centerXProperty()** returns **centerX** and **pane.widthProperty()** returns **width**. Both **centerX** and **width** are binding properties of the **DoubleProperty** type. The numeric binding property classes such as **DoubleProperty** and **IntegerProperty** contain the **add**, **subtract**, **multiply**, and **divide** methods for adding, subtracting, multiplying, and dividing a value in a binding property and returning a new observable property. So, **pane.widthProperty().divide(2)** returns a new observable property that represents half of the **pane**’s width. The statement
`circle.centerXProperty().bind(pane.widthProperty().divide(2));`
is the same as
`centerX.bind(width.divide(2));`
Since **centerX** is bound to **width.divide(2)**, when **pane**’s width is changed, **centerX** automatically updates itself to match **pane**’s width / 2.
The program below gives another example that demonstrates bindings.

The program creates an instance of **DoubleProperty** using **SimpleDoubleProperty(1)** (line 7). Note that **DoubleProperty**, **FloatProperty**, **LongProperty**, **IntegerProperty**, and **BooleanProperty** are abstract classes. Their concrete subclasses **SimpleDoubleProperty**, **SimpleFloatProperty**, **SimpleLongProperty**, **SimpleIntegerProperty**, and **SimpleBooleanProperty** are used to create instances of these properties. These classes are very much like wrapper classes **Double**, **Float**, **Long**, **Integer**, and **Boolean** with additional features for binding to a source object.
The program binds **d1** with **d2** (line 9). Now the values in **d1** and **d2** are the same. After setting **d2** to **70.2** (line 11), **d1** also becomes **70.2** (line 12).
The binding demonstrated in this example is known as _unidirectional binding_. Occasionally, it is useful to synchronize two properties so that a change in one property is reflected in another object, and vice versa. This is called a _bidirectional binding_. If the target and source are both binding properties and observable properties, they can be bound bidirectionally using the **bindBidirectional** method.
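For example, with two fresh (not yet unidirectionally bound) properties **d1** and **d2** as in the demo above, a bidirectional binding keeps both in sync (a minimal illustration):

```java
d1.bindBidirectional(d2); // a change to either property now updates the other
d1.setValue(50.5);        // d2 becomes 50.5 as well
```
 | paulike |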
1,895,035 | A WhatsApp game where you create your own Adventure | This is a submission for the Twilio Challenge What I Built A story-driven text-based... | 0 | 2024-06-20T15:52:54 | https://dev.to/kamecat/a-whatsapp-game-where-you-create-your-own-adventure-47o7 | devchallenge, twiliochallenge, ai, twilio | *This is a submission for the [Twilio Challenge](https://dev.to/challenges/twilio)*
## What I Built
A story-driven text-based game that is played through WhatsApp in an asynchronous, non intrusive way, adapting to players' schedules. The game is inspired in interactive fiction, choose your own adventure books, D&D, and busy lives.
Players, together with a group of friends (or solo), are part of a story of 7 chapters, crafted by an AI Game Master and delivered one chapter at a time via text (WhatsApp). At the end of each chapter, each player chooses what to do next, and their answers are combined to create the next chapter of the story.
## Demo
You can see it in action at 👉 https://universe1340.com and even give it a spin with the "Join" button.

## Twilio and AI
The backend integrates Twilio’s WhatsApp API to manage player interactions and deliver both images and story segments. Setting everything up with Twilio was actually very easy and straightforward; the hard part is getting the approval from Meta for a business account. The "Game Master", powered by OpenAI's ChatGPT API, generates story advancements based on player decisions and a virtual luck system, and it ensures a unique experience for each participant and story.
## Additional Prize Categories
Impactful Innovators: The game offers an innovative way to experience storytelling and engage users in a non intrusive way.
Entertaining Endeavors: As an entertaining and immersive game, it captivates players with its dynamic and evolving storyline.
## Team Submissions
We built this as a team together with @delbronski
## Where's the source code?
A messy, undocumented codebase would not help anyone, so cleaning it up and making the repo public is on our todo list. Or at least writing a dev post with our learnings and snippets.
## Is this a full product?
Not really. It is fully functional, but only available to a limited number of users. There's a cost-related limitation: we haven't found a way to lower the costs enough to open it fully.
| kamecat |
1,895,036 | The Fascinating Evolution of JavaScript: A Comprehensive Historical Journey | JavaScript, the ubiquitous language of the web, has a captivating history that spans over two... | 0 | 2024-06-20T15:51:32 | https://dev.to/danielemanca/the-fascinating-evolution-of-javascript-a-comprehensive-historical-journey-2pim | htmlcss, webdev, javascript | JavaScript, the ubiquitous language of the web, has a captivating history that spans over two decades. From its humble beginnings as a simple scripting language to its current status as a powerful and versatile tool, JavaScript's journey is a testament to the ingenuity and perseverance of its creators and the ever-growing community of developers.
**The Birth of JavaScript**
In 1995, Brendan Eich, a software engineer at Netscape Communications Corporation, was tasked with creating a scripting language for the Netscape Navigator web browser. The goal was to add interactivity and dynamic behaviour to web pages, which at the time were largely static and limited in functionality.[1]
Eich had a mere 10 days to develop the prototype for what would become JavaScript. Drawing inspiration from various programming languages, including Java, Self, and Scheme, he crafted a language that combined object-oriented and functional programming paradigms. Initially named "Mocha," the language was later rebranded as "LiveScript" and finally christened "JavaScript" as a marketing strategy to capitalise on the popularity of Java.[2]
Despite sharing a similar name, JavaScript is not directly related to Java and has its own unique syntax and semantics. This initial confusion led to some misconceptions about the language's capabilities and purpose, but it ultimately helped to garner attention and interest from developers.
**Standardisation and Adoption**
As the World Wide Web gained traction, it became evident that a standardised version of JavaScript was necessary to ensure cross-browser compatibility and consistency. In 1997, Netscape submitted a proposal to the European Computer Manufacturers Association (ECMA) to standardise JavaScript, leading to the creation of ECMAScript, the official specification for the language.[3]
The first edition of ECMAScript (ES1) was released in 1997, followed by ES2 in 1998 and ES3 in 1999. ES3 introduced significant improvements, such as strict equality, regular expressions, and try/catch handling, solidifying JavaScript's position as a powerful scripting language for web development.
However, after the release of ES3, the standardisation process slowed down, and it took nearly a decade for the next major update, ECMAScript 5 (ES5), to be released in 2009. This version brought much-needed improvements, including strict mode, array methods, and JSON support, among others.
**The Renaissance of JavaScript**
The release of ES5 marked the beginning of a renaissance for JavaScript. The introduction of Node.js in 2009 by Ryan Dahl enabled developers to use JavaScript for server-side programming, opening up new possibilities and applications beyond the web browser.[4]
This newfound versatility, combined with the rise of powerful front-end frameworks like React, Angular, and Vue.js, propelled JavaScript to new heights. Developers could now build complex, scalable, and performant applications entirely in JavaScript, blurring the lines between front-end and back-end development.
In 2015, ECMAScript 6 (ES6), also known as ES2015, was released, bringing a wealth of new features and syntactical improvements to the language. This version introduced concepts like arrow functions, classes, modules, destructuring, and promises, among many others, making JavaScript more expressive and easier to write and maintain.
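For readers who haven't used these features, here is a small illustrative snippet showing a few of them together:

```javascript
// Arrow function with a template literal
const greet = (name) => `Hello, ${name}!`;

// Destructuring with a default value
const { title, year = 2015 } = { title: "ES6" };

// Class syntax
class Release {
  constructor(title, year) {
    this.title = title;
    this.year = year;
  }
}

// Promise
Promise.resolve(new Release(title, year))
  .then((r) => console.log(greet(`${r.title} (${r.year})`))); // "Hello, ES6 (2015)!"
```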
Since then, the ECMAScript specification has been updated annually, with each release introducing new features and improvements based on proposals from the JavaScript community. We can reference proposals in stage three and four to get a glimpse of the future of JavaScript, such as decorators, private fields, and the pipeline operator.[5]
**The Future of JavaScript**
JavaScript's journey is far from over. With the increasing adoption of technologies like WebAssembly, which allows non-JavaScript code to run in web browsers, and the growing popularity of serverless architectures, JavaScript's role in the tech ecosystem continues to evolve and expand.
Moreover, the JavaScript community remains vibrant and active, constantly pushing the boundaries of what's possible with the language. Initiatives like the TC39 committee, which oversees the standardisation process, and the numerous open-source projects and frameworks, ensure that JavaScript stays relevant and continues to adapt to the ever-changing needs of developers and users alike.
As we look to the future, one thing is certain: JavaScript's impact on the world of technology is undeniable, and its story is a testament to the power of collaboration, innovation, and the relentless pursuit of better tools and solutions.
In the coming weeks and months, I will be publishing more articles about this fantastic and very powerful language, diving more deeply into its features, from beginner level to advanced.
Watch this space.
Citations:
[1] https://softteco.com/blog/history-of-javascript
[2] https://www.tutorialspoint.com/javascript/javascript_history.htm
[3] https://dl.acm.org/doi/10.1145/3386327
[4] https://en.wikipedia.org/wiki/JavaScript
[5] https://www.ample.co/blog/javascript-history
| danielemanca |
1,895,034 | On the internet, people are racing to get each other’s attention. The internet is getting cluttered, and here is the solution. | In today’s world, the internet is filled with endless content. On platforms like Instagram, YouTube,... | 0 | 2024-06-20T15:47:42 | https://dev.to/lakshya_gurha/on-the-internet-people-are-racing-to-get-each-others-attention-the-internet-is-getting-cluttered-and-here-is-the-solution-39ng | In today’s world, the internet is filled with endless content. On platforms like Instagram, YouTube, and Facebook, people are constantly racing to grab each other’s attention. This race has led to a cluttered internet where finding valuable information has become a challenge.
Take Instagram, for example. Users scroll through countless posts, stories, and ads, each vying for a moment of their time. YouTube is no different, with creators uploading videos daily, all trying to capture views and likes. Facebook is filled with posts, videos, and ads that often overshadow meaningful content. This competition for attention means that important, useful information gets lost in the noise.
Imagine you’re looking for a tutorial on how to start a small business. On YouTube, you might have to sift through numerous videos filled with clickbait titles and flashy thumbnails before finding one that genuinely helps. On Instagram, valuable posts from experts can easily get buried under a flood of memes, ads, and viral content. Even on Facebook, finding a well-written article or insightful discussion can feel like finding a needle in a haystack.
This is where Keplr comes in. Keplr is a platform designed to curate and share the most valuable and insightful content on the internet. Our mission is to cut through the clutter and bring you the best information available. With Keplr, you no longer have to spend hours searching for useful resources. We gather and organize them all in one place, making it easy for you to find what you need.
Keplr works by allowing users to create accounts and upload helpful resources they find online. Each resource includes a title, description, voice, image, link, and tags. This way, you can quickly understand what the resource is about and how it can help you. Our community of users helps ensure that only the best and most valuable content gets featured on Keplr.
One of the unique features of Keplr is our AI voice tool. This tool provides an audio explanation of each resource, detailing what it does and why it is useful. This means you don’t have to visit the website to understand the tool’s purpose. The AI voice gives you a quick overview, saving you time and effort.
Keplr covers a wide range of categories to suit different interests and needs. Whether you’re looking for business advice, educational resources, health tips, or tech tutorials, you’ll find curated content that adds real value to your life. Our goal is to create a focused, distraction-free environment where you can easily access high-quality information.

If you want to contribute to Keplr, it’s simple. Sign up for an account and start uploading the valuable resources you find online. Share your insights and help others discover useful content. By contributing, you become part of a community dedicated to making the internet a better place.
Visit Keplr at https://www.keplr.in

In summary, Keplr offers a solution to the cluttered internet by curating and sharing valuable content. Our platform ensures that important information is easily accessible, cutting through the noise of attention-seeking content. With features like AI voice explanations and a wide range of categories, Keplr makes finding and sharing useful resources simple and efficient. Join us and help make the internet a clearer, more valuable space for everyone. | lakshya_gurha | |
1,894,890 | Understanding Environment Variables | Environment variables are required for configuring the operating system and applications without... | 0 | 2024-06-20T15:15:30 | https://dev.to/madgan95/understanding-environment-variables-55nm | operatingsystems, beginners, programming, tutorial | Environment variables let you configure the operating system and applications without hardcoding values. They can be set at two levels: **user-level** and **system-level**.
## User Environment Variables
**Scope**: Only affect the user account under which they are set.
**Use Case**: Ideal for settings that are specific to a single user and do not need to affect other users on the system.

## System Environment Variables
**Scope:** Affect all user accounts on the system.
**Use Case:** Ideal for settings that should be available to all users, such as paths to critical software or libraries required by multiple applications.

## The PATH Variable
The PATH variable is one of the most important environment variables in an operating system. It specifies a list of directories that the system searches to find executable files.
When you run a command in the terminal or command prompt, the system looks through the directories listed in the PATH variable to find the executable file for that command.
## How the PATH Variable Works?
**Execution of Commands:**
When you type a command (e.g., javac, node) and press Enter, the system looks for the corresponding executable file in the directories listed in the PATH variable.
Example: **Running Node from the Command Line**
Let's say you have installed Node on your system and the executable is located in the directory C:\NodeMain. To run Node from any directory in the command prompt, you need to add C:\NodeMain to your PATH variable.
**Set the PATH Variable:**
1) Open the Start Search, type in "env", and select "Edit the system environment variables".

2) Click the "Environment Variables" button.

3) In the "System variables" section, find the PATH variable and click "Edit".

4) Add C:\NodeMain to the list of paths.

**Run Node:**
Open a new terminal or command prompt.
Type node and press Enter. The system will look through the directories listed in the PATH variable and find the Node executable in C:\NodeMain, launching the Node interpreter.

## Environment variables in CMD:
1)**Viewing Environment Variables**:
```
set
```
2)**View a Specific Environment Variable:**
```
echo %VARIABLE_NAME%
```
3)**Setting a Variable (Temporarily)**:
```
set MY_VARIABLE=VALUE
```
4)**Setting a Variable (Permanently)**:
```
setx MY_VARIABLE VALUE
```
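Once a variable is set, programs can read it at runtime. For example, in Node (a minimal sketch, assuming MY_VARIABLE was set as above):

```
// read-env.js (run with: node read-env.js)
const value = process.env.MY_VARIABLE || "(not set)";
console.log("MY_VARIABLE = " + value);
```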
-----------------------------------------------------------------
Feel free to comment below with any questions or tips about this particular topic. Thank you 🤗 | madgan95 |
1,895,033 | WhatsApp info: +1 (571) 541‑2918 SCAM BITCOIN RECOVERY COMPANY HIRE ADWARE RECOVERY SPECIALIST | Website info: www.adwarerecoveryspecialist.expert Email info: Adwarerecoveryspecialist@auctioneer.... | 0 | 2024-06-20T15:43:26 | https://dev.to/jayson_irwin_17cfcd94ddf7/whatsapp-info-1-571-541-2918-scam-bitcoin-recovery-company-hire-adware-recovery-specialist-12n | Website info: www.adwarerecoveryspecialist.expert
Email info: Adwarerecoveryspecialist@auctioneer. net
WhatsApp info: +1 (571) 541‑2918
Amidst the bustling streets of New York, I found myself into a devastating loss: 5 BTC, vanished into thin air due to a bitcoin investing platform's betrayal. The frustration of being unable to reclaim either my returns or my initial investment weighed heavily on me. However, hope flickered to life when I stumbled upon ADWARE RECOVERY SPECIALIST during my extensive research.The first glimmer of optimism came from the plethora of positive internet reviews that adorned their online presence. It was apparent that ADWARE RECOVERY SPECIALIST had been instrumental in assisting numerous individuals who, like me, had fallen victim to similar financial losses. Encouraged by these testimonials, I promptly reached out to them, Telegram info: @adwarerecoveryspecialist providing the necessary information to aid in the retrieval of my funds.What followed was nothing short of extraordinary. ADWARE RECOVERY SPECIALIST swiftly sprang into action, utilizing their expertise and resources to navigate the complex world of cryptocurrency recovery. Their professionalism and dedication were evident from the outset, instilling in me a newfound sense of confidence.As days turned into weeks, I watched in amazement as ADWARE RECOVERY SPECIALIST tirelessly pursued every avenue available to secure the return of my lost finances. Their unwavering commitment to my cause was both reassuring and commendable, serving as a beacon of hope in an otherwise bleak situation.Finally, after what felt like an eternity, the moment of triumph arrived. ADWARE RECOVERY SPECIALIST succeeded where others had failed, managing to reclaim every last satoshi of my lost funds and return them safely to my wallet. The overwhelming sense of relief and gratitude that washed over me cannot be overstated.I am eternally grateful to ADWARE RECOVERY SPECIALIST for their invaluable assistance during my time of need. Their professionalism, expertise, and unwavering dedication were instrumental in turning what seemed like a hopeless situation into a resounding success. Without their intervention, I shudder to think of the financial hardship that I would have endured.In light of my experience, I wholeheartedly endorse and recommend the services of ADWARE RECOVERY SPECIALIST to anyone who finds themselves in a similar predicament. Their proven track record of success, coupled with their genuine desire to assist those in need, sets them apart as a beacon of hope in the realm of cryptocurrency recovery. ADWARE RECOVERY SPECIALIST has been nothing short of remarkable. From the depths of despair to the pinnacle of success, they guided me every step of the way with unwavering support and unwavering determination. For anyone facing the daunting prospect of cryptocurrency loss kindly reach out to ADWARE RECOVERY SPECIALIST by the information above:
 | jayson_irwin_17cfcd94ddf7 | |
1,895,031 | How to Create a QR Code Generator in JavaScript: Easy Tutorial | QR codes have become an integral part of modern technology, enabling quick access to information... | 0 | 2024-06-20T15:42:14 | https://raajaryan.tech/how-to-create-a-qr-code-generator-in-javascript-easy-tutorial | javascript, beginners, tutorial, opensource | [](https://buymeacoffee.com/dk119819)
QR codes have become an integral part of modern technology, enabling quick access to information with a simple scan. Whether you're a developer looking to integrate QR codes into your projects or just curious about how they work, this guide will walk you through building a QR code generator using JavaScript.
### Why QR Codes?
QR codes (Quick Response codes) are two-dimensional barcodes that can store various types of data, such as URLs, contact information, and text. They are widely used in marketing, payments, authentication, and much more.
### Getting Started
For this project, we'll use a popular JavaScript library called `qrcode.js`. This library simplifies the process of generating QR codes in the browser.
### Project Setup
First, let's set up our project directory with the following structure:
```
qr-code-generator/
│
├── index.html
├── script.js
├── style.css
└── qrcode.min.js
```
### Step 1: Create the HTML File
The HTML file will contain a simple form for inputting the text you want to convert into a QR code and a canvas where the QR code will be displayed.
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>QR Code Generator</title>
<link rel="stylesheet" href="style.css">
</head>
<body>
<div class="container">
<h1>QR Code Generator</h1>
<form id="qr-form">
<input type="text" id="qr-input" placeholder="Enter text to generate QR Code" required>
<button type="submit">Generate QR Code</button>
</form>
<div id="qr-result">
<div id="qr-canvas"></div>
</div>
</div>
<script src="qrcode.min.js"></script>
<script src="script.js"></script>
</body>
</html>
```
### Step 2: Style with CSS
To make our application visually appealing, we'll add some basic styles.
```css
body {
font-family: Arial, sans-serif;
display: flex;
justify-content: center;
align-items: center;
height: 100vh;
margin: 0;
background-color: #f0f0f0;
}
.container {
text-align: center;
background: #fff;
padding: 20px;
border-radius: 8px;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
}
#qr-form {
margin-bottom: 20px;
}
#qr-input {
padding: 10px;
width: calc(100% - 24px);
margin-bottom: 10px;
}
button {
padding: 10px 20px;
border: none;
background-color: #007bff;
color: white;
cursor: pointer;
border-radius: 4px;
}
button:hover {
background-color: #0056b3;
}
#qr-result {
margin-top: 20px;
}
```
### Step 3: Add JavaScript for Functionality
The core functionality of our QR code generator will be handled by the `script.js` file. We'll use the `qrcode.js` library to generate the QR code based on the input text.
First, download the `qrcode.min.js` file from the [qrcode.js GitHub repository](https://github.com/davidshimjs/qrcodejs) and place it in your project directory.
Next, add the following JavaScript code to `script.js`:
```javascript
document.getElementById('qr-form').addEventListener('submit', function(event) {
event.preventDefault();
const input = document.getElementById('qr-input').value;
const qrCanvas = document.getElementById('qr-canvas');
// Clear any previous QR Code
qrCanvas.innerHTML = '';
// Generate new QR Code
if (input) {
new QRCode(qrCanvas, {
text: input,
width: 256,
height: 256
});
}
});
```
### Testing the Application
Open the `index.html` file in your browser. You should see a form where you can input text to generate a QR code. Upon submitting the form, the QR code will be displayed below the input field.
### Conclusion
Congratulations! You've successfully created a QR code generator using JavaScript. This project demonstrates how easy it is to integrate QR code functionality into your web applications. Whether for personal projects or professional use, understanding how to generate and utilize QR codes can be a valuable skill.
Feel free to expand this project by adding features such as downloading the QR code image, customizing the QR code's appearance, or integrating it with a backend service. The possibilities are endless!
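For instance, here is a rough sketch of the download idea (it assumes you add an `<a id="download-link">Download PNG</a>` element to the HTML; the `img` lookup relies on qrcode.js rendering an image tag inside the target element, so verify this against the library version you use):

```javascript
const downloadLink = document.getElementById('download-link');

document.getElementById('qr-form').addEventListener('submit', function() {
  // Wait briefly so qrcode.js has finished rendering the image
  setTimeout(function() {
    const img = document.querySelector('#qr-canvas img');
    if (img) {
      downloadLink.href = img.src; // data URL of the generated PNG
      downloadLink.download = 'qr-code.png';
    }
  }, 100);
});
```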
Happy coding!
---
## 💰 You can help me by Donating
[](https://buymeacoffee.com/dk119819)
| raajaryan |
1,895,030 | Version Checking in MiniScript | MiniScript runs in a lot of different environments. Moreover, the language itself changes from time... | 0 | 2024-06-20T15:40:35 | https://dev.to/joestrout/version-checking-in-miniscript-gob | miniscript, minimicro, programming, beginners | MiniScript runs in a lot of different environments. Moreover, the language itself changes from time to time. So, to write a script that works in multiple places, sometimes you need to check the version of the language and environment in which it's running.
## The `version` intrinsic
Since MiniScript version 1.4 (which came out in June 2019), there has been a `version` intrinsic which gives you information about the language itself, as well as the host environment. Here's what it looks like in Mini Micro:

You can see that this screen shot was taken in Mini Micro (`hostName`), and in version 1.2.1 — represented in numeric form as 1.21 (don't worry, we'll never go beyond version 1.9.9 before advancing to version 2!). This Mini Micro was built on April 1, 2024, and the language itself is version 1.6.2. It also gives you some handy URLs where you can find out more about the language and host.
Now let's look at the same information in command-line MiniScript.

The `pprint` function doesn't exist here (in Mini Micro, it's one of the handy little extras defined in /sys/startup.ms). So, in the screenshot above, I used regular `print`, and then followed it up with a little one-liner that prints each key-value pair on its own line, similar to `pprint` in Mini Micro.
Here you can see that the screenshot was taken using the Unix command-line version of MiniScript, built on March 15, 2024. The version of that command-line host is 1.3, and the MiniScript language itself is again 1.6.2.
## Handling Host Differences
You can use this to make your code do different things depending on where it's running. For example, the [Acey-Deucey game](https://miniscript.org/MiniMicro/index.html?cmd=run%20%22%2Fsys%2Fdemo%2Facey-deucey.ms%22) included with Mini Micro at [/sys/demo/acey-deucey.ms](https://github.com/JoeStrout/minimicro-sysdisk/blob/master/sys/demo/acey-deucey.ms) is designed to work on command-line MiniScript, too. But the latter doesn't have the Sound class, so we "mock" that class at the start of the program, so the rest of the code doesn't need to worry about it.
```
// Define some common stuff that makes this code work both
// in Mini Micro, and in command-line MiniScript.
if version.hostName == "Mini Micro" then
clear
chaChing = file.loadSound("/sys/sounds/cha-ching.wav")
hit = file.loadSound("/sys/sounds/hit.wav")
else
// For command-line MiniScript, make dummy mock-ups
// for the Sound class and our two sounds.
Sound = {}
Sound.play = null
chaChing = new Sound
hit = new Sound
end if
```
## Handling Version Differences
As another example, the [Release Notes](https://miniscript.org/files/MiniScript-Release-Notes.txt) show that the second parameter to `print`, allowing you to change the line delimiter, was added in MiniScript 1.6. So you might write something like:
```
for i in range(1, 10)
if version.miniscript >= "1.6" then
// print all our counts on one line (yay!)
print i, ""
if i < 10 then print "...", " "
else
// print each count on its own line (boo)
print i
end if
wait 0.5
end for
```
It's important to note that, for historical reasons, `version.miniscript` is a string, while `version.host` is a number. So you'll want to do a string or numeric comparison depending on which version you're looking at.
## Dealing with very old versions
As `version` has been around since 2019 -- and there have been a ton of other improvements since then -- hopefully you don't need to handle any environment older than that. But what if you did?
In this case, you'd have to look for behaviors that differ from modern MiniScript, but don't actually generate an error. Here are some thoughts (again, from the [Release Notes](https://miniscript.org/files/MiniScript-Release-Notes.txt)); a small sketch combining these checks follows the list:
- If a non-empty map like `{1:1}` evaluates to false in an `if` statement, you're running MiniScript 1.0 (prior to September 2017).
- If comparing a map to itself with `==` returns `null`, you're running 1.1 or before (prior to June 2018).
- If expressions like `1 and null` are not evaluated correctly, you're in version 1.2 or earlier (prior to March 2019).
- If string subtraction like `"foobar" - "bar"` doesn't give the right result, then you're prior to 1.4 (June 2019) -- and if it does, then you should be safe to check `version`!
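Here is that rough, untested sketch (the function name is illustrative):

```
guessEra = function()
  m = {1:1}
  if not m then return "1.0" // non-empty map was falsy
  if (m == m) == null then return "1.1 or earlier" // map self-comparison broken
  // (the `1 and null` check distinguishing 1.2 from 1.3 is omitted here)
  if "foobar" - "bar" != "foo" then return "1.2 or 1.3"
  return "1.4 or later; check the version intrinsic"
end function
```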
Have you ever needed to write MiniScript code that works in multiple environments (or language versions)? Share your experiences below!
| joestrout |
1,895,029 | loading page dancing doggo | _I just did a silly loading page dancing doggo!!!! _ | 0 | 2024-06-20T15:39:19 | https://dev.to/ujiron/loading-page-dancing-doggo-efa | codepen, html, css, javascript | **_I just did a silly loading page dancing doggo!!!!_**
{% codepen https://codepen.io/salomejb/pen/WNBMWOm %} | ujiron |
1,895,028 | How Salesforce Cloud Solutions Boost Business Efficiency | In today’s fast-paced business environment, efficiency is more crucial than ever. Companies strive to... | 0 | 2024-06-20T15:38:32 | https://dev.to/jameskevinb/how-salesforce-cloud-solutions-boost-business-efficiency-e9f | In today’s fast-paced business environment, efficiency is more crucial than ever. Companies strive to streamline operations, reduce costs, and enhance customer satisfaction to stay competitive. Salesforce Cloud Solutions offer comprehensive tools to achieve these goals. From sales and service to marketing and analytics, Salesforce provides an integrated platform that optimizes various business functions. This article explores how Salesforce Cloud Solutions enhance business efficiency, driving growth and profitability.
## Salesforce Sales Cloud
Salesforce Sales Cloud is designed to simplify and enhance the sales process. By centralizing sales data and providing real-time insights, Sales Cloud helps sales teams work more efficiently and close deals faster.
### Streamlining Sales Processes
Salesforce Sales Cloud transforms how businesses handle their sales operations. By offering a comprehensive set of tools for managing leads, opportunities, and sales forecasting, it enables sales teams to operate more effectively.
### Features
Lead Management: Efficiently track and manage leads from initial contact through conversion.
Opportunity Tracking: Monitor sales opportunities and forecast potential revenue.
Sales Forecasting: Utilize predictive analytics to project future sales and make informed business decisions.
### Benefits
Implementing Sales Cloud leads to improved sales productivity and higher lead conversion rates. By automating routine tasks and providing actionable insights, sales teams can focus on building relationships and closing deals.
## Salesforce Service Cloud
Salesforce Service Cloud empowers businesses to deliver exceptional customer service. It integrates various service channels, ensuring that customer inquiries are handled efficiently and effectively.
### Enhancing Customer Service Operations
Service Cloud centralizes customer service operations, providing a unified platform for managing customer interactions. This leads to faster issue resolution and improved customer satisfaction.
### Features
Omni-channel Routing: Automatically route customer queries to the most appropriate agents based on their skills and availability.
Case Management: Manage customer cases from a single dashboard, allowing agents to resolve issues quickly.
Telephony Integration: Integrate phone systems with Service Cloud to provide a seamless customer service experience.
### Benefits
Service Cloud improves issue resolution times and increases customer satisfaction. By centralizing customer interactions and automating workflows, businesses can provide timely and effective support.
## Salesforce Marketing Cloud
Salesforce Marketing Cloud enables businesses to create personalized marketing campaigns that resonate with their audience. It automates marketing processes, ensuring consistent and targeted messaging.
### Automating and Personalizing Marketing Efforts
Marketing Cloud helps businesses deliver tailored marketing messages and automate marketing tasks, enhancing the overall effectiveness of their campaigns.
### Features
Email Marketing: Design and send personalized email campaigns.
Social Media Integration: Manage social media interactions and track engagement.
Customer Journey Mapping: Visualize and optimize the customer journey across all touchpoints.
### Benefits
Marketing Cloud enhances marketing ROI by delivering relevant content to the right audience at the right time. Automated processes reduce manual effort, allowing marketing teams to focus on strategy and creativity.
## Salesforce Commerce Cloud
Salesforce Commerce Cloud provides tools to create seamless and personalized shopping experiences. It supports businesses in managing product catalogs, processing orders, and leveraging AI-driven recommendations.
### Optimizing E-commerce Experiences
Commerce Cloud enhances online shopping experiences by offering robust tools for product management and order processing, along with AI-driven insights to personalize customer interactions.
### Features
Product Management: Centralize product information and streamline catalog management.
Order Processing: Automate order fulfillment and track shipments in real-time.
AI-driven Recommendations: Utilize AI to suggest products and enhance the shopping experience.
### Benefits
Commerce Cloud increases online sales by providing a smooth and personalized customer journey. Efficient order processing and AI recommendations boost customer satisfaction and loyalty.
## Salesforce Community Cloud
Salesforce Community Cloud fosters collaboration and engagement by connecting customers, partners, and employees. It provides a platform for sharing information, solving problems, and building relationships.
### Building and Managing Customer and Partner Communities
Community Cloud creates a space for interaction and collaboration, helping businesses strengthen relationships with customers and partners.
### Features
Community Management: Create and manage customer and partner communities.
Engagement Tracking: Monitor community interactions and measure engagement.
Collaboration Tools: Facilitate communication and collaboration within the community.
### Benefits
Community Cloud strengthens relationships and drives engagement. It provides a space for users to share knowledge, collaborate on projects, and resolve issues collectively.
## Salesforce Analytics Cloud
Salesforce Analytics Cloud transforms data into actionable insights. It offers real-time analytics and customizable dashboards, helping businesses make data-driven decisions.
### Leveraging Data for Business Insights
Analytics Cloud enables businesses to derive meaningful insights from their data, facilitating informed decision-making and strategic planning.
### Features
Real-time Analytics: Access up-to-date data for timely decision-making.
Customizable Dashboards: Create dashboards tailored to specific business needs.
Predictive Analytics: Utilize predictive models to forecast trends and identify opportunities.
### Benefits
Analytics Cloud empowers businesses to make informed decisions based on real-time data. By visualizing key metrics and trends, organizations can optimize strategies and improve performance.
## Salesforce App Cloud
Salesforce App Cloud provides tools for developing, deploying, and managing custom applications. It enables businesses to create tailored solutions that meet their specific needs.
### Developing and Deploying Custom Applications
App Cloud supports the creation of customized applications that cater to unique business requirements, enhancing overall operational efficiency.
### Features
App Development Tools: Utilize a range of development tools and frameworks.
Integration Capabilities: Seamlessly integrate with other Salesforce products and third-party applications.
Scalability: Scale applications to meet growing business demands.
### Benefits
App Cloud allows businesses to develop custom applications that streamline processes and address specific challenges. Its integration capabilities ensure that these applications work seamlessly within the broader Salesforce ecosystem.
## Conclusion
Salesforce Cloud Solutions significantly boost business efficiency across various domains, from sales and customer service to marketing and analytics. By leveraging these powerful tools, businesses can streamline operations, enhance customer interactions, and make data-driven decisions. Investing in Salesforce Implementation Services ensures that companies fully realize the benefits of these cloud solutions, leading to sustained growth and profitability. | jameskevinb | |
1,895,014 | Tips for aspiring professionals: 4 mindset you can apply in your career and everyday life | See my original post here in my blog. 4 things I realized as I reflected going through a tough day.... | 0 | 2024-06-20T15:38:29 | https://blog.chardskarth.me/blog/tips-for-aspiring-professionals/ | career, tips, firstpost | See my [original post here in my blog](https://blog.chardskarth.me/blog/tips-for-aspiring-professionals/).
Four things I realized as I reflected on getting through a tough day. These tips will hopefully help you prosper in your chosen field.
## 1. As you PRACTICE, ALWAYS be on the lookout for small things that can be improved
## 2. Do your best to adhere to best practices
## 3. Keep in mind that you want to deliver value, but think long term.
## 4. At the end of the day, it's all just work.
| chardskarth |
1,895,027 | A beginner's guide to the Clarity-Upscaler model by Philz1337x on Replicate | clarity-upscaler | 0 | 2024-06-20T15:38:13 | https://aimodels.fyi/models/replicate/clarity-upscaler-philz1337x | coding, ai, beginners, programming | *This is a simplified guide to an AI model called [Clarity-Upscaler](https://aimodels.fyi/models/replicate/clarity-upscaler-philz1337x) maintained by [Philz1337x](https://aimodels.fyi/creators/replicate/philz1337x). If you like these kinds of guides, you should subscribe to the [AImodels.fyi newsletter](https://aimodels.substack.com) or follow me on [Twitter](https://twitter.com/mikeyoung44).*
## Model overview
The `clarity-upscaler` is a high-resolution image upscaler and enhancer developed by AI model creator [philz1337x](https://aimodels.fyi/creators/replicate/philz1337x). It is a free and open-source alternative to the commercial Magnific tool, allowing users to upscale and improve image quality. The model can handle a variety of input images and provides a range of customization options to fine-tune the upscaling process.
## Model inputs and outputs
The `clarity-upscaler` takes an input image and a set of parameters to control the upscaling process. Users can adjust the seed, prompt, dynamic range, creativity, resemblance, scale factor, tiling, and more. The model outputs one or more high-resolution, enhanced versions of the input image.
### Inputs
- **Image**: The input image to be upscaled and enhanced
- **Prompt**: A textual description to guide the upscaling process
- **Seed**: A random seed value to control the output randomness
- **Dynamic**: The HDR range to use, from 3 to 9
- **Creativity**: The level of creativity to apply, from 0.3 to 0.9
- **Resemblance**: The level of resemblance to the input image, from 0.3 to 1.6
- **Scale Factor**: The factor to scale the image up, typically 2x
- **Tiling Width/Height**: The size of tiles used for fractality, lower values increase fractality
- **Lora Links**: Links to Lora models to use during upscaling
- **Downscaling**: Whether to downscale the input image before upscaling
### Outputs
- One or more high-resolution, enhanced images based on the input and parameters
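For example, you might call the model from Python with the Replicate client (a minimal sketch; the model identifier and input names follow the description above but are assumptions, so verify them against the model page):

```python
import replicate  # pip install replicate

output = replicate.run(
    "philz1337x/clarity-upscaler",           # assumed model identifier
    input={
        "image": open("photo.jpg", "rb"),    # image to upscale
        "prompt": "masterpiece, best quality, highres",
        "scale_factor": 2,                   # 2x upscale
        "creativity": 0.35,
        "resemblance": 0.6,
    },
)
print(output)  # URL(s) of the upscaled image(s)
```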
## Capabilities
The `clarity-upscaler` can dramatically improve the quality and detail of input images through its advanced upscaling and enhancement algorithms. It can handle a wide range of input images, from photographs to digital art, and provide customizable results. The model has been optimized for speed and can produce high-quality outputs quickly.
## What can I use it for?
The `clarity-upscaler` is a versatile tool that can be used for a variety of creative and practical applications. Some potential use cases include:
- Enhancing low-resolution images for print or web use
- Upscaling and improving the quality of digital art and illustrations
- Generating high-quality backgrounds, textures, or elements for visual design projects
- Improving the visual quality of images for use in presentations, social media, or other digital content
## Things to try
One interesting feature of the `clarity-upscaler` is its ability to adjust the "fractality" of the output image by manipulating the tiling width and height parameters. Lower values for these parameters can introduce more fractality, creating a unique and visually striking effect. Users can experiment with different combinations of these settings to achieve their desired aesthetic.
Another useful feature is the ability to incorporate Lora models into the upscaling process. Lora models can introduce additional style, details, and characteristics to the output, allowing users to further customize the results. Exploring different Lora models and mixing them with the `clarity-upscaler` settings can lead to a wide range of creative possibilities.
**If you enjoyed this guide, consider subscribing to the [AImodels.fyi newsletter](https://aimodels.substack.com) or following me on [Twitter](https://twitter.com/mikeyoung44) for more AI and machine learning content.** | mikeyoung44 |
1,895,026 | A beginner's guide to the Whisperx model by Erium on Replicate | whisperx | 0 | 2024-06-20T15:37:39 | https://aimodels.fyi/models/replicate/whisperx-erium | coding, ai, beginners, programming | *This is a simplified guide to an AI model called [Whisperx](https://aimodels.fyi/models/replicate/whisperx-erium) maintained by [Erium](https://aimodels.fyi/creators/replicate/erium). If you like these kinds of guides, you should subscribe to the [AImodels.fyi newsletter](https://aimodels.substack.com) or follow me on [Twitter](https://twitter.com/mikeyoung44).*
## Model overview
`WhisperX` is an automatic speech recognition (ASR) model that builds upon OpenAI's [Whisper](https://github.com/openai/whisper) model, providing improved timestamp accuracy and speaker diarization capabilities. Developed by Replicate's maintainer [erium](https://aimodels.fyi/creators/replicate/erium), `WhisperX` incorporates forced phoneme alignment and voice activity detection (VAD) to produce transcripts with accurate word-level timestamps. It also includes speaker diarization, which identifies different speakers within the audio.
Compared to similar models like [whisper-diarization](https://aimodels.fyi/models/replicate/whisper-diarization-thomasmol), [whisperx](https://aimodels.fyi/models/replicate/whisperx-daanelson) and [whisperx](https://aimodels.fyi/models/replicate/whisperx-victor-upmeet), `WhisperX` offers faster inference speed (up to 70x real-time) and improved accuracy for long-form audio transcription tasks. It is particularly useful for applications that require precise word timing and speaker identification, such as video subtitling, meeting transcription, and audio indexing.
## Model inputs and outputs
`WhisperX` takes an audio file as input and produces a transcript with word-level timestamps and speaker labels. The model supports a variety of input audio formats and can handle multiple languages, with default models provided for languages like English, German, French, and more.
### Inputs
- **Audio file**: The audio file to be transcribed, in a supported format (e.g., WAV, MP3, FLAC).
- **Language**: The language of the audio file, which is automatically detected if not provided. Supported languages include English, German, French, Spanish, Italian, Japanese, and Chinese, among others.
- **Diarization**: An optional flag to enable speaker diarization, which will identify and label different speakers in the audio.
### Outputs
- **Transcript**: The transcribed text of the audio, with word-level timestamps and optional speaker labels.
- **Alignment information**: Details about the alignment of the transcript to the audio, including the start and end times of each word.
- **Diarization information**: If enabled, the speaker labels for each word in the transcript.
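As an illustration, a transcription call might look like this with the Replicate Python client (a sketch; the model identifier and input names are assumptions based on the description above, so check the model page for the exact schema):

```python
import replicate  # pip install replicate

result = replicate.run(
    "erium/whisperx",                        # assumed model identifier
    input={
        "audio": open("meeting.mp3", "rb"),  # audio file to transcribe
        "diarization": True,                 # label speakers in the transcript
    },
)
print(result)  # transcript segments with word-level timestamps
```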
## Capabilities
`WhisperX` excels at transcribing long-form audio with high accuracy and precise word timing. The model's forced alignment and VAD-based preprocessing result in significantly improved timestamp accuracy compared to the original Whisper model, which can be crucial for applications like video subtitling and meeting transcription.
The speaker diarization capabilities of `WhisperX` allow it to identify different speakers within the audio, making it useful for multi-speaker scenarios, such as interviews or panel discussions. This added functionality can simplify the post-processing and analysis of transcripts, especially in complex audio environments.
## What can I use it for?
`WhisperX` is well-suited for a variety of applications that require accurate speech-to-text transcription, precise word timing, and speaker identification. Some potential use cases include:
- **Video subtitling and captioning**: The accurate word-level timestamps and speaker labels generated by `WhisperX` can streamline the process of creating subtitles and captions for video content.
- **Meeting and lecture transcription**: `WhisperX` can capture the discussions in meetings, lectures, and webinars, with speaker identification to help organize the transcript.
- **Audio indexing and search**: The detailed transcript and timing information can enable more advanced indexing and search capabilities for audio archives and podcasts.
- **Assistive technology**: The speaker diarization and word-level timestamps can aid in applications like real-time captioning for the deaf and hard of hearing.
## Things to try
One interesting aspect of `WhisperX` is its ability to handle long-form audio efficiently, thanks to its batched inference and VAD-based preprocessing. This makes it well-suited for transcribing lengthy recordings, such as interviews, podcasts, or webinars, without sacrificing accuracy or speed.
Another key feature to explore is the speaker diarization functionality. By identifying different speakers within the audio, `WhisperX` can provide valuable insights for applications like meeting transcription, where knowing who said what is crucial for understanding the context and flow of the conversation.
Finally, the model's multilingual capabilities allow you to transcribe audio in a variety of languages, making it a versatile tool for international or diverse audio content. Experimenting with different languages and benchmarking the performance can help you determine the best fit for your specific use case.
**If you enjoyed this guide, consider subscribing to the [AImodels.fyi newsletter](https://aimodels.substack.com) or following me on [Twitter](https://twitter.com/mikeyoung44) for more AI and machine learning content.** | mikeyoung44 |
1,895,025 | A beginner's guide to the Multilingual-E5-Large model by Beautyyuyanli on Replicate | multilingual-e5-large | 0 | 2024-06-20T15:37:04 | https://aimodels.fyi/models/replicate/multilingual-e5-large-beautyyuyanli | coding, ai, beginners, programming | *This is a simplified guide to an AI model called [Multilingual-E5-Large](https://aimodels.fyi/models/replicate/multilingual-e5-large-beautyyuyanli) maintained by [Beautyyuyanli](https://aimodels.fyi/creators/replicate/beautyyuyanli). If you like these kinds of guides, you should subscribe to the [AImodels.fyi newsletter](https://aimodels.substack.com) or follow me on [Twitter](https://twitter.com/mikeyoung44).*
## Model overview
The `multilingual-e5-large` is a multi-language text embedding model developed by [beautyyuyanli](https://aimodels.fyi/creators/replicate/beautyyuyanli). This model is similar to other large language models like [qwen1.5-72b](https://aimodels.fyi/models/replicate/qwen15-72b-lucataco), [llava-13b](https://aimodels.fyi/models/replicate/llava-13b-yorickvp), [qwen1.5-110b](https://aimodels.fyi/models/replicate/qwen15-110b-lucataco), [uform-gen](https://aimodels.fyi/models/replicate/uform-gen-zsxkib), and [cog-a1111-ui](https://aimodels.fyi/models/replicate/cog-a1111-ui-brewwh), which aim to provide large-scale language understanding capabilities across multiple languages.
## Model inputs and outputs
The `multilingual-e5-large` model takes text data as input and generates embeddings, which are numerical representations of the input text. The input text can be provided as a JSON list of strings, and the model also accepts parameters for batch size and whether to normalize the output embeddings.
### Inputs
- **texts**: Text to embed, formatted as a JSON list of strings (e.g. ["In the water, fish are swimming.", "Fish swim in the water.", "A book lies open on the table."])
- **batch_size**: Batch size to use when processing text data (default is 32)
- **normalize_embeddings**: Whether to normalize the output embeddings (default is true)
### Outputs
- An array of arrays, where each inner array represents the embedding for the corresponding input text.
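For example, embedding a batch of sentences with the Replicate Python client might look like this (a sketch; the model identifier is an assumption, though the input names follow the description above):

```python
import json
import replicate  # pip install replicate

texts = [
    "In the water, fish are swimming.",
    "Fish swim in the water.",
    "A book lies open on the table.",
]

embeddings = replicate.run(
    "beautyyuyanli/multilingual-e5-large",   # assumed model identifier
    input={
        "texts": json.dumps(texts),          # JSON list of strings, per the docs
        "batch_size": 32,
        "normalize_embeddings": True,
    },
)
print(len(embeddings), "embeddings returned")
```

Because the embeddings are normalized by default, the cosine similarity between two texts reduces to a simple dot product of their vectors.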
## Capabilities
The `multilingual-e5-large` model is capable of generating high-quality text embeddings for a wide range of languages, making it a useful tool for various natural language processing tasks such as text classification, semantic search, and data analysis.
## What can I use it for?
The `multilingual-e5-large` model can be used in a variety of applications that require text embeddings, such as building multilingual search engines, recommendation systems, or language translation tools. By leveraging the model's ability to generate embeddings for multiple languages, developers can create more inclusive and accessible applications that serve a global audience.
## Things to try
One interesting thing to try with the `multilingual-e5-large` model is to explore how the generated embeddings capture the semantic relationships between words and phrases across different languages. You could experiment with using the embeddings for cross-lingual text similarity or clustering tasks, which could provide valuable insights into the model's language understanding capabilities.
**If you enjoyed this guide, consider subscribing to the [AImodels.fyi newsletter](https://aimodels.substack.com) or following me on [Twitter](https://twitter.com/mikeyoung44) for more AI and machine learning content.** | mikeyoung44 |
1,894,899 | House of XYZ | 👋 Welcome to House of XYZ! Your full-service¹ software studio. With over a decade of experience... | 0 | 2024-06-20T15:14:07 | https://dev.to/xyz_steven/house-of-xyz-3d8d | softwaredevelopment, startup, entrepreneurship, webdev | 👋 Welcome to House of XYZ! Your full-service¹ software studio. With over a decade of experience collaborating with public companies, hyper-growth unicorns, and YC-backed startups, there ain't S#!T we can't build. Whether you’re a startup looking to innovate, an established company aiming to scale, or an entrepreneur with a vision, we got you!
¹ You don't have to do a damn thing — we design, develop, and deploy anything you need built.
Traditional agencies ...
🔒 Lock you into contracts
🕒 Add more meetings to your already crowded schedule
🐌 Take months to deliver
💰 Cost $40,000+ for a buggy web app (plus hosting and maintenance)
Hiring an employee means ...
📑 You spend weeks hiring just to end up with a professional coffee drinker
🛋️ You get to play therapist in weekly 1-on-1s
👥 The elite team of individual contributors you've cultivated gets to level up into elite micro-managers
💰 Cost $30,000+ per month for one designer and one software engineer (plus benefits)
TL;DR: Working with traditional agencies and managing employees is rigid, slow, and expensive.
With House of XYZ you get ...
⚡️ Software in weeks (or days 👀)
♾️ Unlimited requests
🪪 A senior-level product team
🧠 Product consultations
🧰 Continuous software maintenance
🔓 NO CONTRACTS! (cancel or pause anytime)
🤑 Cost $5,000 per sprint (2 weeks)
Let's build 👉 https://cal.com/house-of-xyz/intro
Find us at https://www.houseofxyz.com | xyz_steven |
1,895,022 | The Role of Cataract Surgery in Preventing Blindness | Cataracts, a condition characterized by the clouding of the eye's natural lens, are one of the... | 0 | 2024-06-20T15:35:36 | https://dev.to/balamurugan_1857d8cb1038d/the-role-of-cataract-surgery-in-preventing-blindness-3147 | besteyespecialistchennai | Cataracts, a condition characterized by the clouding of the eye's natural lens, are one of the leading causes of blindness globally. This condition affects millions, particularly the elderly, significantly impairing vision and quality of life. However, advancements in medical science have made cataract surgery a highly effective solution for preventing blindness. In this article, we will explore the importance of cataract surgery in preventing blindness, with a special focus on the exceptional services provided by ophthalmologists in Chennai, including the best eye doctors at Shakthi Eye Care Centre and other eye hospitals in Virugambakkam.
**Understanding Cataracts**
A cataract forms when proteins in the [eye's lens clump together](https://www.shakthieyecare.com/contact_lens.html), causing the lens to become opaque. This clouding distorts light entering the eye, leading to blurred vision. As cataracts progress, they can cause significant vision impairment and eventually lead to blindness if left untreated. The primary symptoms include blurry vision, difficulty with night vision, sensitivity to light, and seeing "halos" around lights.
**The Impact of Cataract Surgery**
Cataract surgery involves the removal of the cloudy lens and its replacement with an artificial intraocular lens (IOL). This procedure is typically performed on an outpatient basis and is known for its high success rate. Patients often experience a dramatic improvement in vision shortly after the surgery. The surgery not only restores vision but also significantly improves the quality of life, allowing individuals to return to their daily activities without the limitations caused by impaired vision.
**Cataract Surgery in Preventing Blindness**
The World Health Organization (WHO) emphasizes the importance of cataract surgery in global blindness prevention strategies. In many cases, cataracts are the leading cause of blindness, particularly in low- and middle-income countries. Timely surgical intervention can prevent blindness and restore sight, making it a crucial public health measure.
In regions like Chennai, the role of cataract surgery is particularly significant. The city's population includes a large number of elderly individuals who are at a higher risk of developing cataracts. The availability of advanced medical facilities and skilled ophthalmologists in Chennai makes it possible to address this health concern effectively.
**Ophthalmologists in Chennai: Leading the Way**
Chennai is home to some of the best eye doctors in India, who are renowned for their expertise in cataract surgery. Ophthalmologists in Chennai are well-equipped with the latest technology and surgical techniques, ensuring high success rates and minimal complications. These professionals are dedicated to providing top-notch eye care, from diagnosis to postoperative care, ensuring patients receive comprehensive treatment.
**Shakthi Eye Care Centre: A Beacon of Hope**
Among the leading eye care facilities in Chennai, Shakthi Eye Care Centre stands out for its commitment to excellence. Located in Virugambakkam, this eye hospital is renowned for its state-of-the-art infrastructure and a team of highly skilled ophthalmologists. The centre offers a wide range of services, including advanced cataract surgery, making it a preferred choice for patients seeking the best eye doctor in Chennai.
The Shakthi Eye Care Centre employs cutting-edge technology, such as femtosecond laser-assisted cataract surgery, which enhances precision and improves outcomes. The ophthalmologists at this centre are not only experienced but also compassionate, ensuring that patients receive personalized care tailored to their specific needs.
**Eye Hospitals in Virugambakkam**
Virugambakkam, a bustling locality in Chennai, is home to several reputable eye hospitals. These hospitals play a crucial role in making quality eye care accessible to the local population. With a focus on patient-centric care, these facilities provide comprehensive eye care services, including cataract surgery, glaucoma treatment, and refractive surgery.
The presence of top-tier eye hospitals in Virugambakkam ensures that residents do not have to travel far to receive world-class eye care. This accessibility is vital for elderly patients who may find it challenging to commute long distances.
**Conclusion**
Cataract surgery plays a pivotal role in preventing blindness and restoring vision. In Chennai, the availability of skilled ophthalmologists and advanced medical facilities ensures that patients receive the best possible care. Institutions like Shakthi Eye Care Centre and other eye hospitals in Virugambakkam are at the forefront of this effort, providing cutting-edge treatments and compassionate care.
For anyone facing vision impairment due to cataracts, seeking the expertise of the [best eye doctors in Chennai](https://www.shakthieyecare.com/) can be life-changing. With timely intervention and advanced surgical techniques, cataract surgery can effectively prevent blindness and significantly enhance the quality of life.
| balamurugan_1857d8cb1038d |
1,895,019 | >1 RDBMS in Spring Data JPA | This document deals with building the backend application that uses Spring Data JPA with multiple... | 0 | 2024-06-20T15:34:35 | https://dev.to/pranjal_sharma_38482a3041/1-rdbms-in-spring-data-jpa-5ge4 | rdbms, springboot, mysql, beginners | This document deals with building the backend application that uses Spring Data JPA with multiple relational databases.
As an example, we will connect to **MySQL + MSSQL** databases.
### The main task here is to separate the properties and configuration for each of the databases that have to be integrated.
### Other JPA layers in the code remain the same as for a single-database integration [Repository + Entity].
[Point to remember: define these in different packages for different databases, as the package names are needed when defining the configs; see the sketch below.]

For the specific code, refer to [this article](https://medium.com/javarevisited/springboot-with-spring-data-jpa-using-multi-data-source-databases-mysql-sqlserver-3ce5f69559).
### Sample Configurations in application properties
```
## MySQL
spring.datasource.url=jdbc:mysql://127.0.0.1/heimdall_db?useSSL=false
spring.datasource.username=root
spring.datasource.password=pranjal
spring.datasource.driverClassName=com.mysql.cj.jdbc.Driver

## SQL Server
sqlserver.datasource.url=jdbc:sqlserver://localhost;databaseName=jpa_test
sqlserver.datasource.username=sa
sqlserver.datasource.password=reallyStrongPwd123
sqlserver.datasource.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver

spring.jpa.database=default
```
> Don't define database-specific Hibernate configurations (such as the dialect) here; those are supplied per datasource in the Java configuration below.
### Defining Separate Config Classes for all the databases
```
package com.sma.backend.multidb.config;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.orm.jpa.EntityManagerFactoryBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import javax.persistence.EntityManagerFactory;
import javax.sql.DataSource;
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(entityManagerFactoryRef = "sqlServerEntityManagerFactory",
        transactionManagerRef = "sqlServerTransactionManager",
        basePackages = "com.sma.backend.multidb.database.sqlserver.repository")
public class SqlServerConfig {

    @Bean
    @ConfigurationProperties(prefix = "sqlserver.datasource")
    public DataSourceProperties sqlServerDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean
    public DataSource sqlServerDataSource(@Qualifier("sqlServerDataSourceProperties") DataSourceProperties dataSourceProperties) {
        return dataSourceProperties.initializeDataSourceBuilder().build();
    }

    @Bean(name = "sqlServerEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean sqlServerEntityManagerFactory(@Qualifier("sqlServerDataSource") DataSource sqlServerDataSource, EntityManagerFactoryBuilder builder) {
        return builder.dataSource(sqlServerDataSource)
                .packages("com.sma.backend.multidb.database.sqlserver.domain")
                .persistenceUnit("sqlserver")
                .build();
    }

    @Bean
    public PlatformTransactionManager sqlServerTransactionManager(@Qualifier("sqlServerEntityManagerFactory") EntityManagerFactory factory) {
        return new JpaTransactionManager(factory);
    }
}
```
### MySqlConfig
```
package com.sma.backend.multidb.config;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.orm.jpa.EntityManagerFactoryBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

import javax.persistence.EntityManagerFactory;
import javax.sql.DataSource;

@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(entityManagerFactoryRef = "mysqlEntityManagerFactory",
        transactionManagerRef = "mysqlTransactionManager",
        basePackages = {"com.sma.backend.multidb.database.mysql.repository"})
public class MySqlConfig {

    @Primary
    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSourceProperties mysqlDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Primary
    @Bean
    public DataSource mysqlDataSource(@Qualifier("mysqlDataSourceProperties") DataSourceProperties dataSourceProperties) {
        return dataSourceProperties.initializeDataSourceBuilder().build();
    }

    @Primary
    @Bean
    public LocalContainerEntityManagerFactoryBean mysqlEntityManagerFactory(@Qualifier("mysqlDataSource") DataSource hubDataSource, EntityManagerFactoryBuilder builder) {
        return builder.dataSource(hubDataSource)
                .packages("com.sma.backend.multidb.database.mysql.domain")
                .persistenceUnit("mysql")
                .build();
    }

    @Primary
    @Bean
    public PlatformTransactionManager mysqlTransactionManager(@Qualifier("mysqlEntityManagerFactory") EntityManagerFactory factory) {
        return new JpaTransactionManager(factory);
    }
}
```
### POINTS TO REMEMBER :
- **hibernate.dialect** → The dialect tells Hibernate which type of database is in use so that it generates the appropriate flavor of SQL statements. For connecting any Hibernate application to a database, the SQL dialect must be configured.
Hence, to specify which dialect each database speaks, we have to define a separate value per datasource.
We can do that by passing this (and all other database-specific properties) in a map to the **properties** method of `EntityManagerFactoryBuilder`, as sketched after this list.
- If running locally on a Mac, you have to use different ports to run both databases on localhost, as MSSQL needs Docker to run.
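A minimal sketch of that variant of the SQL Server factory bean (the dialect class is illustrative; it additionally requires `java.util.Map`/`java.util.HashMap` imports):

```java
@Bean(name = "sqlServerEntityManagerFactory")
public LocalContainerEntityManagerFactoryBean sqlServerEntityManagerFactory(
        @Qualifier("sqlServerDataSource") DataSource sqlServerDataSource,
        EntityManagerFactoryBuilder builder) {
    Map<String, Object> jpaProperties = new HashMap<>();
    // database-specific Hibernate settings go here, not in application.properties
    jpaProperties.put("hibernate.dialect", "org.hibernate.dialect.SQLServerDialect");
    return builder.dataSource(sqlServerDataSource)
            .packages("com.sma.backend.multidb.database.sqlserver.domain")
            .properties(jpaProperties)
            .persistenceUnit("sqlserver")
            .build();
}
```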
---
I hope that this Blog Post helped you! If you have any questions, feel free to use the comment section! 💬
Oh and if you want more content like this, follow me:
- [Github](https://github.com/pj-iitk)
- [LinkedIn](https://www.linkedin.com/in/pj-iitk/)
| pranjal_sharma_38482a3041 |
1,891,493 | 7 TUI libraries for creating interactive terminal apps | Written by Yashodhan Joshi✏️ When writing applications, a good user interface is just as important... | 0 | 2024-06-20T15:33:42 | https://blog.logrocket.com/7-tui-libraries-interactive-terminal-apps | webdev, tui | **Written by [Yashodhan Joshi](https://blog.logrocket.com/author/yashodhan-joshi/)✏️**
When writing applications, a good user interface is just as important as the actual app’s functionality. A good user interface will make the user continue using the app, whereas a bad, clunky one will drive users away.
This also applies to applications that are completely terminal-based, but making them can be trickier than normal due to the limitations of the terminal.
In this post, we will review seven different TUI libraries that can help us with building interactive terminal applications. Specifically, we will go over:
* Ratatui
* Huh?
* BubbleTea
* Gum
* Textual
* Ink
* Enquirer
Additionally, we will go through a brief comparison between them that will help you choose the library for your next terminal-based project.
## Introduction to terminal user interfaces
[Terminal user interfaces (TUIs)](https://blog.logrocket.com/rust-and-tui-building-a-command-line-interface-in-rust/) can be categorized into two different types: one that is completely flag-based and one that is more like a GUI application.
Most of the Unix command-line utilities provide a flag-based interface. We specify the flags using `-` or `--` and a short or long flag name, and the application changes its behavior accordingly. These are extremely useful when the application is to be used in non-interactive way or as a part of a shell script.
However, they can get notoriously complex — Git is feature-rich and incredibly useful, but its flag and sub-command-based interface can get unwieldy. There is also a website dedicated to generating random and remarkably [real-looking fake Git flags](https://git-man-page-generator.lokaltog.net).
Another consideration for terminal apps is whether they are being used interactively or as part of a pipeline. For example, if you just run `ls`, it will output different colors for normal files, directories, and so on in most of the shells. If you run it as `ls | cat`, it will not output colors.
For terminal-based apps, which are intended to be primarily used interactively, the choice is a bit simplified and there’s more flexibility. They can create GUI-like interfaces and take inputs from both the keyboard and mouse.
In this article, we will focus on this second kind of TUI, which is meant to be more GUI-like and can provide a familiar experience to users who don’t have a lot of experience using terminals.
## 1\. Ratatui
Ratatui is a feature-rich library that can be used to create complex interfaces containing elements similar to graphical interfaces, such as lists, charts, tables scrollbars, etc. It is a powerful library with many options, and you can check out its [resource of examples](https://github.com/ratatui-org/ratatui/blob/main/examples/README.md) to see the possibilities.
For our example, we will be creating a simple directory explorer application. Note that because this is an example, there are lot of `unwrap`s and `clone`s. You should handle these properly in actual applications.
Start with a new project, and add `ratatui` and `crossterm` as dependencies:
```bash
cargo add ratatui crossterm
```
We then add the required imports to `src/main.rs`:
```rust
use crossterm::{
    event::{self, KeyCode, KeyEventKind},
    terminal::{disable_raw_mode, enable_raw_mode, EnterAlternateScreen, LeaveAlternateScreen},
    ExecutableCommand,
};
use ratatui::{prelude::*, widgets::*};
use std::{
    io::{stdout, Result},
    path::PathBuf,
};
```
And re-write the main function as follows, but do not run this yet:
```rust
fn main() -> Result<()> {
    stdout().execute(EnterAlternateScreen)?;
    enable_raw_mode()?;
    let mut terminal = Terminal::new(CrosstermBackend::new(stdout()))?;
    terminal.clear()?;

    stdout().execute(LeaveAlternateScreen)?;
    disable_raw_mode()?;
    Ok(())
}
```
These are taken from the Ratatui example.
Ratatui directly interfaces with the underlying terminal using crossterm. To correctly render the UI, it needs to set the terminal in raw mode where the typed characters, including arrow keys, are passed directly to the application and not intercepted. Thus, we first need to enable the raw mode and then disable it again before exiting.
Now, we start by declaring some required variables in the main after the `terminal.clear()`:
```rust
let mut cwd = PathBuf::from(".");
let mut selected = 0;
let mut state = ListState::default();
let mut entries: Vec<String> = std::fs::read_dir(cwd.clone())
    .unwrap()
    .map(|entry| entry.unwrap().file_name())
    .map(|s| s.into_string().unwrap())
    .collect::<Vec<_>>();
```
We set the `cwd` to the current directory and `selected` to `0`. We also create a `state`, which will store the state information for our list widget and create the initial entries by reading the current directory.
Now we add an infinite loop below this:
```rust
loop {
...
}
```
This will act as the main driving loop for the application. We first add in a line to check for events by doing the following:
```rust
if event::poll(std::time::Duration::from_millis(16))? {
    if let event::Event::Key(key) = event::read()? {
        if key.kind == KeyEventKind::Press {
            match key.code {
                KeyCode::Char('q') => {
                    break;
                }
                _ => {}
            }
        }
    }
}
```
Here, we first poll for the event, waiting only for 16 milliseconds (similar to their example). This way, we do not block the rendering if there is no event, and instead, skip the processing and continue on with the loop.
If there is an event, we read it and check whether it is of type `Key`. If it is, we further check that the event is a key press (rather than a release or repeat), making sure that some key was in fact pressed. We then check the key code, and if it is `q`, we break out of the loop. Otherwise, we simply ignore it.
Then, the code for rendering the list widget is added. First, we create the list widget:
```rust
let list = List::new(entries.clone())
    .block(Block::bordered().title("Directory Entries"))
    .style(Style::default().fg(Color::White))
    .highlight_style(
        Style::default()
            .add_modifier(Modifier::BOLD)
            .bg(Color::White)
            .fg(Color::Black),
    )
    .highlight_symbol(">");
```
We create a new list with `entries` as the list elements. We set the rendering style to `block` with the title `Directory Entries`. This will render the list with borders around it and a title on the top border. We set the element style as a default of white text, as well as the highlight style. The highlighted section will be bolded with black text on a white background. We also set the highlight symbol to `>` , which will be displayed before the selected item.
Then we actually draw the list UI using:
```rust
terminal.draw(|frame| {
    let area = frame.size();
    state.select(Some(selected));
    frame.render_stateful_widget(list, area, &mut state);
})?;
```
Here, we use `state.select` method to set which item is selected, and then render it on the frame.
Now, to handle the arrow inputs, we add the following to the `match` statement for `key.code`:
```rust
KeyCode::Up => selected = (entries.len() + selected - 1) % entries.len(),
KeyCode::Down => selected = (selected + 1) % entries.len(),
KeyCode::Enter => {
    cwd = cwd.join(entries[selected].clone());
    entries = std::fs::read_dir(cwd.clone())
        .unwrap()
        .map(|entry| entry.unwrap().file_name())
        .map(|s| s.into_string().unwrap())
        .collect::<Vec<_>>();
    selected = 0;
}
```
If the key is an up or down arrow, we change `selected` to the item before or after, taking care to wrap around the first and last items. If the key pressed is `Enter`, we update the `cwd` by joining the selected entry to it and reset `entries` to the entries of this new `cwd`. Finally, we reset `selected` to `0`.
You can run this by running `cargo run`. Note that this does not handle going back to the parent directory, and it panics if you press Enter on a file instead of a directory. You can implement these yourself, taking the above code as a starting point (the full code can be found in the repo linked at the end); one possible approach to the "back" navigation is sketched below.
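For instance, a hedged sketch of the "back" navigation as an extra arm in the `match` on `key.code` above (the key choice is arbitrary, and with a relative starting path you may want to guard against popping past `"."`):

```rust
// Hypothetical addition: go up one directory with Backspace.
// PathBuf::pop() removes the last path component and returns false if there is none.
KeyCode::Backspace => {
    if cwd.pop() {
        entries = std::fs::read_dir(cwd.clone())
            .unwrap()
            .map(|entry| entry.unwrap().file_name())
            .map(|s| s.into_string().unwrap())
            .collect::<Vec<_>>();
        selected = 0;
    }
}
```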
You should also check out [the Ratatui website](https://ratatui.rs/) for more creative examples and detailed information on available widgets.
## 2\. Huh?
Huh? is a Go library, which can be used to take inputs from users in a form-like manner. It provides functions and classes to create specific types of prompts and take user input, such as select, text input, and confirmation dialogues.
First, we create a Go project, add the library as a dependency, and in `main.go`, import it as:
```go
package main
import "github.com/charmbracelet/huh"
```
For this example, we will create an interface for a program, which searches for a package with a given name. Users can also provide a version needed to select a registry from a predefined list. We start by declaring the variables for these and set the version to `*` as the default:
```go
func main() {
    var name string
    version := "*"
    var registry string
}
```
Then, we create a new `form`, which is the top-level input class in the library:
```go
form := huh.NewForm(...)
```
A form can contain multiple groups of prompts. You can think of a group as a “page” in real-life forms. Each group will be rendered to the screen separately and will clear the screen of questions from previous groups. A form must have at least one group:
```go
form := huh.NewForm(huh.NewGroup(...))
```
In this group, we will add individual questions, starting with name, which is a simple string:
```go
huh.NewInput().
    Title("Package name to search for").
    CharLimit(100).
    Value(&name),
```
We create an `Input` element, which takes a single line of text. The `Title` is displayed as the prompt question to the user. For `Input`, we can also set the character limit if needed using `CharLimit` . Finally, we give the reference of the variable to store the user input.
The `version` input is similar to the `name` input:
```go
huh.NewInput().
    Title("Version").
    CharLimit(400).
    Validate(validateVersion).
    Value(&version),
```
If the variable has a value set (like in this case `*`), then that value will be displayed as the default answer. `Validate` is used to specify a validation function for the input. The function should take a single parameter typed according to the field’s value and should return `nil` if input is valid, or an error if it is not. For example, we can define the function as:
```go
func validateVersion(v string) error {
    if v == "test" {
        return errors.New("Test Error")
    }
    return nil
}
```
This takes a string, because `version` is of the type string, and returns an error for our special value `test` . Note that this error will be displayed directly to users and they will be prevented from answering further questions until the error is corrected.
Finally, for the registry input, we use the selection input as:
```go
huh.NewSelect[string]().
    Title("Select Registry to search in").
    Options(
        huh.NewOption("Registry 1", "https://reg1.com"),
        huh.NewOption("Registry 2", "https://reg2.com"),
        huh.NewOption("Registry 3", "https://reg3.com"),
    ).
    Value(&registry)
```
Each option takes two values — the first is what will be displayed to the user and the second is what will be actually stored in the variable when that option is selected.
Finally, to actually take the input, we use the `Run` on the form created:
```go
err := form.Run()
if err != nil {
    log.Fatal(err)
}
fmt.Println(name, version, registry)
```
An error will be returned if there are any issues while taking input (this does not include errors returned by validation functions).
We finish the example by printing the variables, but in the actual program, you can use these values to connect to the registry and find the package.
## 3\. BubbleTea
BubbleTea is a Go library inspired by the model-view-update architecture of Elm applications. This separates the model (the data structure), update (the method used for processing the input and updating the state), and view (code for rendering the state to the terminal). Thus, we first define a structure, implement the `init`, `update`, and `view` methods on it, and use that to render the TUI.
For this example, we will create a very simple file explorer that displays a list of files and directories in the current directory. If you select a directory, it changes the list to show the contents of that directory and so on.
We start by creating the Go module using `go mod init bubbletea-example` and adding the package. We then declare our imports:
```go
package main
import (
    "fmt"
    "log"
    "os"

    tea "github.com/charmbracelet/bubbletea"
)
```
We define our model as:
```go
type model struct {
    cwd      string
    entries  []string
    selected int
}
```
Here, the `cwd` field will be used to store the current directory path, the `entries` field will be used to store the directory entries, and `selected` field stores the index of entry where the user cursor is currently.
We define a function to get an instance of our struct with default values:
```go
func initialModel() model {
    entries, err := os.ReadDir(".")
    if err != nil {
        log.Fatal(err)
    }
    var dirs = make([]string, len(entries))
    for i, e := range entries {
        dirs[i] = e.Name()
    }
    return model{
        cwd:      ".",
        entries:  dirs,
        selected: 0,
    }
}
```
We first read the `dir` from which the program was invoked, then create a list of the entry names, and create a model using this data.
We also need to add a function `Init`, which will be called the first time the TUI is created for the struct. It should be used to perform any I/O if needed. As we don’t need any, we simply return `nil` from it.
```go
func (m model) Init() tea.Cmd {
    return nil
}
```
We then add the `Update` method as:
```go
func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
    switch msg := msg.(type) {
    case tea.KeyMsg:
        switch msg.String() {
        case "ctrl+c", "q":
            return m, tea.Quit
        case "up":
            if m.selected > 0 {
                m.selected--
            }
        case "down":
            if m.selected < len(m.entries)-1 {
                m.selected++
            }
        case "enter", " ":
            entry := m.entries[m.selected]
            m.selected = 0
            m.cwd = m.cwd + "/" + entry
            entries, err := os.ReadDir(m.cwd)
            if err != nil {
                log.Fatal(err)
            }
            var dirs = make([]string, len(entries))
            for i, e := range entries {
                dirs[i] = e.Name()
            }
            m.entries = dirs
        }
    }
    return m, nil
}
```
This method gets a parameter of type `Msg` , which is an interface implemented by both keyboard and mouse input events. We check if the concrete type of the `msg` is `KeyMsg`, indicating that the user has pressed some key, and then switch on its value.
If the key pressed is `q` or `Ctrl+C` , we return a special message of `Quit`, which quits the application. If the key is up or down, we update the `selected` value accordingly. If the key is `Enter`, we update the `cwd` by appending the currently selected entry with the current `cwd` and then update the entry list. Finally, we return the model struct itself.
Then, for the rendering method:
```go
func (m model) View() string {
    s := "Directory List\n\n"
    for i, dir := range m.entries {
        cursor := " "
        if m.selected == i {
            cursor = ">"
        }
        s += fmt.Sprintf("%s %s\n", cursor, dir)
    }
    s += "\nPress q to quit.\n"
    return s
}
```
We start with a fixed string, then iterate over the entries — adding them one by one to the string. If the entry currently has the cursor on it, we indicate that with `>` and, finally, indicate the quitting information. We return the string to be displayed from the function.
In `main`, we invoke this app as:
```go
func main() {
    p := tea.NewProgram(initialModel())
    if _, err := p.Run(); err != nil {
        fmt.Printf("Error : %v", err)
        os.Exit(1)
    }
}
```
We create a new program with the default state of our model, and call `run` on it. This will display the TUI application and handle the input provided using the update method.
## 4\. Gum
Gum is a batteries-included library that can be used to take inputs for a shell script. When using Gum, you don’t have to write any Go code, and you can use various inbuilt types of prompts to take input. Of course, for this to work, the `gum` binary must be installed on the user’s machine. This library is also by the same developers who wrote Huh? and BubbleTea.
Here, we will recreate the same program as Huh? example, but using Gum in shell scripts. First, install the Gum binary as per instructions on their [GitHub repo here](https://github.com/charmbracelet/gum).
We can use Gum’s sub-commands to take specific types of inputs, such as:
* `gum input` — Takes a single-line text input
* `gum input --password` — Displays `*` instead of what is typed
* `gum choose` — To select one of the given options
* `gum file` — To select a file from a given directory
…and so on. We can recreate the `huh` example as:
```bash
#! /bin/bash
echo "Name of the package to serach for :"
NAME=$(gum input)
echo "Version of package to find"
VERSION=$(gum input --value="*")
echo "Select registry :"
REGISTRY=$(gum choose https://reg{1..3}.com)
echo "name $NAME, version $VERSION, registry $REGISTRY"
```
We first use `echo` to display a prompt to the user, take the input, and store it in the corresponding variables. We use the `--value` flag to set the default value for the input.
Gum prints out the value given by the user, so the `$(…)` expression evaluates to the user input and is subsequently stored in the variable. Make sure that the input is stored somewhere — either in a variable or redirected to a file using `>` — otherwise it is printed on the terminal. This can be dangerous in the case of password inputs.
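For password-style prompts specifically, a minimal sketch:

```bash
echo "Registry password :"
PASSWORD=$(gum input --password)   # masked while typing; captured into the variable, not printed
```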
## 5\. Textual
Textual is a Python library that can be used for creating rich user interfaces in the terminal, with elements similar to those of a GUI. For this example, we create a simple directory explorer similar to the Ratatui and BubbleTea examples.
We start by installing the library:
```bash
pip install textual textual-dev
```
Then, we can add the imports and a helper function:
```python
import os
from textual.app import App
from textual.widgets import Header, Footer, Button, Static

def create_id(s):
    return s.replace(".", "_").replace("@", "_")
```
We create a class that extends Textual’s `Static` class. This will be used to display the list of directory entries and handle user input:
```python
class DirDisplay(Static):
    directory = "."
    dir_list = [Button(x, id=create_id(x)) for x in os.listdir(".")]

    def on_button_pressed(self, event):
        self.directory = os.path.join(self.directory, str(event.button.label))
        self.dir_list = []
        for dir in os.listdir(self.directory):
            self.dir_list.append(Button(dir, id=create_id(dir)))
        self.remove_children()
        self.mount_all(self.dir_list)

    def compose(self):
        return self.dir_list
```
We start by defining the `directory` initialized with `.` , corresponding to the current directory. We then initialize the `dir_list` using `os.listdir()` and creating a button for each of the entries using list comprehension.
The first parameter of the `Button` constructor is the label of the button, which can be any text. The `id` parameter needs to be unique for each button so we can figure out which button was pressed in the button click handler. The `id` has several restrictions, such as containing no special characters and not starting with a number. We thus use the `create_id` helper function to convert the directory entry name to an id-compatible string.
The `compose` method is called once at the beginning and must return the widgets to render. Here, we return the `dir_list`, which is the list of buttons.
`on_button_pressed` is a fixed method name and is used as the button-click handler by the `Textual` library. It gets an `event`, which has details of the button clicked. In this, we handle the button click by first joining the existing `directory` with the label of the pressed button (which is the directory entry name). We then re-initialize the `dir_list` by creating buttons for each entry in this new path.
We then explicitly remove the current children to clear out the old entries and then call `mount_all`to render buttons for the new entries.
Our main app has a separate class:
```python
class Explorer(App):
    def compose(self):
        yield Header()
        yield DirDisplay()
        yield Footer()
```
This is a pretty simple class extending the `App` class from the library. Here, instead of returning a complete list, we use `yield` to return the components one by one. For the components, we use the inbuilt `Header`, then our `DirDisplay`, and, finally, the inbuilt `Footer` components.
Finally, we run this app as:
```python
if __name__ == "__main__":
app = Explorer()
app.run()
```
And run it as `python path/to/main.py`. Textual also allows the use of CSS to style the components. You can view the detailed guide in [their documentation](https://textual.textualize.io).
## 6\. Ink
[Ink is a JavaScript library](https://blog.logrocket.com/using-ink-ui-react-build-interactive-custom-clis/) that uses React and React components to develop the TUI. If you are already familiar with the React ecosystem and are developing your application in Node, this can be a great option for you to take user input.
To get started, we use their scaffolding command to set up our project:
```bash
npx create-ink-app ink-example
```
This will create the project directory, `npm project`, install the dependencies, etc. As this is a React-based project, it will also set up the Babel to transpile JSX into JS. If you already have a Node app, you can also manually add `ink` and `react` as dependencies and set up Babel to compile it. The instructions for that can [be found here](https://github.com/vadimdemedes/ink?tab=readme-ov-file#getting-started).
The default template installs several other dependencies, such as `xo`, `meow`, and `ava`, which can be ignored for this example. In the `app.js` file under `source` dir, we have the example component that prints `hello name` with the name part in green.
The `Text` component renders text with various styling options such as color, background color, italics, bold, etc. The `Box` component can be used for creating layouts similar to flexbox.
For this example, we will create a simple app that takes text as input and saves it in a file. We will change the `source/cli.js` as:
```javascript
import React from 'react';
import {render} from 'ink';
import App from './app.js';
render(<App />);
```
In the `source/app.js` , we will update imports and app definition to:
```javascript
import React, {useState} from 'react';
import {Text, Box, Newline, useInput, useApp} from 'ink';
export default function App() {}
```
In the `App` function, we will declare a state using `useState` hook to store the inputted text. We also get the exit function from `useApp` hook to exit the app when the user presses `Ctrl+D` . Note that the `App` in hook’s name is not related to the component’s name:
```javascript
const [ text, setText ] = useState(' ');
const { exit } = useApp();
```
We will return JSX from the function, which will be rendered as our terminal UI. We will start by printing a heading:
```javascript
return (
    <>
        <Box justifyContent="center">
            <Text color="green" bold>
                Input your text
            </Text>
            <Newline />
        </Box>
    </>
);
```
The `Box` component is given `justifyContent` to center-align the header, and we put the text inside of it in bold green. We also add a new line to separate our heading from the user-inputted text.
Next, we add the actual text. Below is the header’s box component:
```javascript
{text.split('\n').map((t, i, arr) => (
    <Text key={i}>
        {'>'}
        {t}
        {i == arr.length - 1 ? '_' : ''}
    </Text>
))}
```
We split the `text` content by a new line and map it to a `Text` component. We prepend `>` to indicate the input, then the actual text line. For the last line, we append `_` as an indication of a cursor.
To handle the input, we use the `useInput` hook provided by the library, before the `return` statement:
```javascript
useInput((input, key) => {
    if (key.ctrl && input === 'd') {
        // save the text in a file
        exit();
    } else {
        let newText;
        if (key.return) {
            newText = text + '\n ';
        } else {
            newText = text + input;
        }
        setText(newText);
    }
});
```
We get two parameters: `input` and `key` . `input` stores the entered text when there is a key-press. However, if the key is a non-text key, such as `Ctrl`, `Esc` , or `Return`, we get that information in the second argument as a boolean property. You can [see the docs](https://github.com/vadimdemedes/ink?tab=readme-ov-file#key) for a list of available keys, but in the example, we use `Ctrl` and `Return`.
If `Ctrl+D` is pressed, we save the text entered until then in a file and then exit. Otherwise, we check if the `Return` key is pressed and append a new line to the text. For all other key presses, we append the `input` to the text. Finally, we set the text using `setText` call.
You can see the output by running `npm run build && node dist/cli.js`: the header is centered and the inputted text is displayed correctly. This does not handle backspace yet. You can take the code from the repo linked below and implement the backspace and delete key handling yourself; a possible starting point is sketched below.
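For instance, a hedged sketch of that handling, added near the top of the `useInput` callback (Ink exposes `key.backspace` and `key.delete` as booleans):

```javascript
// Hypothetical addition inside useInput, before the existing if/else chain:
if (key.backspace || key.delete) {
    setText(text.slice(0, -1)); // drop the last character
    return;
}
```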
## 7\. Enquirer
[Enquirer is a JavaScript library](https://blog.logrocket.com/creating-a-node-cli/) for designing question-based TUIs, somewhat similar to Huh?. Enquirer allows you to create a prompt-based interface similar to the one presented when we run `npm init`. It has various kinds of prompts, including text input, list input, password, selections, etc. There are some other similar libraries, such as Inquirer, which you might want to check if this does not fit your requirements.
Start by creating a new package and running `npm init`. Add `enquirer` as a dependency by running `npm i enquirer`. We will use this to take initial player data for a game.
We import the `prompt` from the library and create a prompt object:
```javascript
const { prompt } = require("enquirer");
const results = prompt(...);
```
If we want a single question, we can directly use the individual prompt classes such as `input`, `select`, etc., but for multiple questions, we can pass an array of objects with appropriate fields defined to the prompt function:
```javascript
const results = prompt([...]);
```
We will define a main function, call await on the results, and then call the main function itself:
```javascript
async function main() {
    const response = await results;
    console.log(response);
}
main();
```
Now, let us construct the questions one by one. First up is the character name. Type `input` is used for single-line text:
```javascript
{
    type: "input",
    name: "name",
    message: "What is the name of your character?",
},
```
We give the `type` as `input`. The value of `name` field will be used as the key in the results object returned and the `message` will be displayed to the user as the prompt for this question.
Similarly, we add the next question to select the class of the character:
```javascript
{
    type: "select",
    name: "class",
    message: "What is your character class?",
    choices: [
        { name: "Dwarf", value: "dwarf" },
        { name: "Wizard", value: "wizard" },
        { name: "Dragon", value: "dragon" },
    ],
},
```
`type`, `name`, and `message` are similar, and here, we also provide `choices`. This is an array, with each object having a `name` that’s displayed to the user and a `value` that’s set in the `answers` object when a user selects the option.
Then, we give the user an option to customize the experience with “advanced” options by using a toggle (a yes or no question):
```javascript
{
    type: "Toggle",
    name: "custom",
    message: "Do advanced customization?",
    enabled: "Yes",
    disabled: "No",
},
```
The `enabled`/`disabled` strings are shown to the users, but the actual value is set to `true`/`false`, depending on whether the user selects the enabled or disabled option.
Then, we give the user two options to customize the experience: difficulty and item randomness:
```javascript
{
    type: "select",
    name: "difficulty",
    message: "Select difficulty level",
    choices: [
        { message: "Easy", value: "1" },
        { message: "Medium", value: "2" },
        { message: "Hard", value: "3" },
    ],
    initial: "2",
    skip: function () {
        return !this.state.answers.custom;
    },
},
```
This is again a `select` type, but two fields are added — `initial` and `skip` . The `initial` field must be a string or a function returning a string, which will be set as the default value. The `skip` function must return a boolean and will be used to decide if the question should be skipped.
Note that this is an anonymous function and not an arrow function. This is because we need the `this` object to be bound correctly. We can then access the previous answers by using `this.state.answers` and use the answer `custom` to decide if this question should be asked to the user or not.
If the user has selected `No` for the customization, then this question will not be displayed. Its answer will be set to the `initial` value.
Similarly, we define the item randomness:
```javascript
{
    type: "select",
    name: "random",
    message: "Select item randomness level",
    choices: [
        { message: "Minimum", value: "1" },
        { message: "Low", value: "2" },
        { message: "Medium", value: "3" },
        { message: "High", value: "4" },
        { message: "Maximum", value: "5" },
    ],
    initial: "3",
    skip: function () {
        return !this.state.answers.custom;
    },
},
```
Now, if you run the project, you will get the prompts, and finally, the answer object will be logged.
## Comparison table
Below is a comparison of the seven TUI libraries we discussed. Keep in mind that some columns in the following table are subjective, such as ease of use:
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Library</th>
<th class="tg-0pky">Language</th>
<th class="tg-0pky">Ease of use</th>
<th class="tg-0pky">Inbuilt components</th>
<th class="tg-0pky">End user requirements</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Ratatui</td>
<td class="tg-0pky">Rust</td>
<td class="tg-0pky">Complex</td>
<td class="tg-0pky">Yes</td>
<td class="tg-0pky">Final compiled Binary</td>
</tr>
<tr>
<td class="tg-0pky">Huh?</td>
<td class="tg-0pky">Go</td>
<td class="tg-0pky">Easy</td>
<td class="tg-0pky">Yes</td>
<td class="tg-0pky">Final compiled Binary</td>
</tr>
<tr>
<td class="tg-0pky">BubbleTea</td>
<td class="tg-0pky">Go</td>
<td class="tg-0pky">Medium</td>
<td class="tg-0pky">No</td>
<td class="tg-0pky">Final compiled Binary</td>
</tr>
<tr>
<td class="tg-0pky">Gum</td>
<td class="tg-0pky">Shell</td>
<td class="tg-0pky">Easy</td>
<td class="tg-0pky">Yes</td>
<td class="tg-0pky">User needs the Gum binary along with script</td>
</tr>
<tr>
<td class="tg-0pky">Textual</td>
<td class="tg-0pky">Python</td>
<td class="tg-0pky">Complex</td>
<td class="tg-0pky">Yes</td>
<td class="tg-0pky">User needs the textual Python library along with the script</td>
</tr>
<tr>
<td class="tg-0pky">Ink</td>
<td class="tg-0pky">JS</td>
<td class="tg-0pky">Easy if familiar With React</td>
<td class="tg-0pky">Yes</td>
<td class="tg-0pky">User needs node and npm dependencies</td>
</tr>
<tr>
<td class="tg-0pky">Enquirer</td>
<td class="tg-0pky">JS</td>
<td class="tg-0pky">Easy</td>
<td class="tg-0pky">Yes</td>
<td class="tg-0pky">User needs node and npm dependencies</td>
</tr>
</tbody>
</table>
## Comparison summary
Before we wrap up, let’s go into more depth and discuss specific use cases, etc.:
* **Ratatui**: If your application is in Rust or you want to create a TUI using some Rust libraries, then Ratatui is obviously a good fit. It has a lot of inbuilt widgets that can satisfy most needs and also provides granular control options if needed. On the other hand, it can get verbose and might be a bit tricky if you are a beginner in Rust
* **Huh?**: If your application is in Go and your input can be done in a simple question-based format, this might be for you. It provides several inbuilt prompt options. However, if you need a more GUI-like interface, this might not fit the requirements
* **BubbleTea**: This gives you a complete model-view-update architecture like Elm, so if you are familiar with Elm, this can be an easy way into Go's TUI libraries. However, the final output needs to be constructed as a string, so if you want a fancy display, you will need to build it yourself through string manipulation
* **Gum**: Gum is extremely useful for taking input in shell scripts without having to write code in other languages or deal with shell-specific quirks while taking input. It provides some good inbuilt input types, however, you will need the Gum binary installed on the user’s system. That is an added dependency you need to care about. This can be a great fit if you are writing shell scripts for yourself and want robust input handling without having to deal with shell’s input methods
* **Textual**: If you want TUI in Python, this would be a good library. While you need the `Textual` package to be installed on the user’s machine, if you are using any Python dependencies apart from standard lib, you need to instruct the user to install them anyway. This might not be that problematic
* **Ink**: Ink is great if you are familiar with the React ecosystem and want to write TUI in JS. It needs a compile and run cycle, but so does any React application
* **Enquirer**: Similar to Huh?, if your input needs can be satisfied by just question-based input, then this is a great library to use in JS for TUI. It provides a lot of inbuilt prompts and ways to write custom prompts as well
## Conclusion
In this post, we went over seven different libraries that can be used to implement various kinds of TUIs. Now you can use the appropriate library in your project to make your interface beautiful and helpful for the user in the terminal.
You can find the example code for these examples [in the repo here](https://github.com/YJDoc2/LogRocket-Blog-Code/tree/main/tui-libraries-for-interactive-apps). Thank you for reading!
---
## Get set up with LogRocket's modern error tracking in minutes:
1. Visit https://logrocket.com/signup/ to get an app ID.
2. Install LogRocket via NPM or script tag. `LogRocket.init()` must be called client-side, not server-side.
NPM:
```bash
$ npm i --save logrocket
```

```javascript
// Code:
import LogRocket from 'logrocket';
LogRocket.init('app/id');
```
Script Tag:
Add to your HTML:

```html
<script src="https://cdn.lr-ingest.com/LogRocket.min.js"></script>
<script>window.LogRocket && window.LogRocket.init('app/id');</script>
```
3. (Optional) Install plugins for deeper integrations with your stack:
* Redux middleware
* ngrx middleware
* Vuex plugin
[Get started now](https://lp.logrocket.com/blg/signup) | leemeganj |
1,895,017 | Top 5 Docker Alternatives for Software Developers in 2024 | Imagine you’ve worked hard to create an application with various libraries and dependencies. It runs... | 0 | 2024-06-20T15:31:32 | https://dev.to/lunamiller/top-5-docker-alternatives-for-software-developers-in-2024-b04 | docker, webdev, programming | Imagine you’ve worked hard to create an application with various libraries and dependencies. It runs smoothly and efficiently on your system. But what happens when you need to send the application to someone else’s system? That person would need to go through a lot of setup to get it running. Even after setup, a single change in the code and configuration could break the entire application on either system, or worse, on both systems.
This is where Docker comes into play. It helps you deploy and run applications efficiently across various platforms and systems. You simply take a snapshot of your application along with all its settings and send it to another system, where it runs in a similar way as on your local machine. It provides an isolated version of the application that can be shared across different systems and platforms.

### What is Docker?
[Docker](https://www.docker.com/) is an open-source platform used by developers to build, deploy, run, update, and manage applications in containers. It helps decouple applications from the underlying infrastructure, enabling rapid and efficient development. It offers the ability to package and run applications in isolated environments called containers.
Containers are standardized executable components that combine application code with the operating system libraries and dependencies required to set up the application environment. They are lightweight, standalone, and executable software packages. These containers are industry standards, so they can be used anywhere. They share the machine's operating system kernel, which increases server efficiency and reduces server costs. Applications in containers are also secure, as Docker provides the strongest default isolation capabilities.
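To make the packaging workflow concrete, here is a minimal, hypothetical Dockerfile for a Node.js app; the base image, file names, and port are illustrative:

```dockerfile
# Start from a small base image that bundles the runtime
FROM node:20-alpine
WORKDIR /app
# Bake the dependencies into the image
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
```

Building and running it then works the same on any machine with Docker:

```bash
docker build -t my-app .
docker run --rm -p 3000:3000 my-app
```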
### Why use Docker?
Developers use Docker to efficiently and consistently package and deploy applications across different environments. It simplifies containerization and isolation of applications for reliable and scalable deployment. The following features make Docker so popular and widely used among developers worldwide:
- Low Resource Consumption: Containers use the host's operating system, so there is no need to install an operating system for each container, making each container smaller and lighter. Containers can run on the cloud, eliminating the need for large servers.
- Scalability: Docker supports both horizontal and vertical scaling. With horizontal scaling, you can deploy and manage multiple containers to handle workloads, and with vertical scaling, you can adjust computing resources by expanding or limiting CPU resources.
- Container Version Control: Docker manages version control for container images and can roll back to previous versions, even retrieving detailed information for a specific version. It also allows uploading only the delta between an existing version and a new one.
- Flexibility and Versatility: Docker accommodates whatever mix of languages and system requirements an application needs, eliminating cross-platform compatibility issues and keeping deployments flexible and versatile.
### Need for Docker Alternatives
Despite being a revolutionary way to handle applications, Docker has its downsides, paving the way for alternatives. The need for Docker alternatives arises from the demand for lighter, faster, and more specialized containerization solutions that are better suited for specific use cases.
- **Security**: Containers share the host's operating system kernel instead of running their own. This creates a security vulnerability: a compromised host affects every container running on it. This issue does not exist in virtual machines, as each VM has its own operating system.
- **GUI (Graphical User Interface)**: Docker exists only in the Command Line Interface (CLI) and is not available in a Graphical User Interface (GUI), making it useful only if you have prior knowledge of the CLI.
- **Learning Curve**: Docker has a steep learning curve and may take a long time to learn everything about the service.
### Docker Alternatives
Docker is software used to accelerate the development process by packaging software into standardized units called containers. However, in some scenarios, Docker may slow down or not perform as expected, leading developers to build relevant alternatives based on project requirements. These alternatives are also widely used in the industry and are worth knowing about.
**1. Podman**
**2. Linux Container Daemon (LXD)**
**3. Kubernetes (K8s)**
**4. Vagrant**
**5. Containerd**
#### Podman
[Podman](https://podman.io/) is an open-source containerization tool developed by RedHat. It leverages the libpod library as a container lifecycle management tool and is a daemonless engine for managing OCI containers on Linux. It is primarily made for Linux but can run on Windows and Mac using virtual machines managed by Podman.
**Features**
- The container engine runs on a daemonless architecture, allowing containers to be executed without root privileges.
- Podman can integrate with third-party services to enhance its functionality.
- Commands and operations such as pull and tag can be executed to update and modify OCI images.
- Podman is compatible with other OCI-compliant container formats.

**Podman vs Docker**
- Docker uses a daemon to establish connections between the server and client, while Podman uses subprocesses to handle individual processes.
- Creating containers in Podman does not require root privileges, which is not the case with Docker.
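Because Podman's CLI is intentionally compatible with Docker's, trying it often amounts to swapping the command name; a minimal sketch (image name illustrative):

```bash
alias docker=podman              # many setups get by with just this
podman pull docker.io/library/alpine
podman run --rm -it alpine sh    # same flags as the docker equivalent
```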
#### Linux Container Daemon (LXD)
[Linux Container Daemon (LXD)](https://linuxcontainers.org/) is a container and virtual machine manager developed by Canonical. It provides flexibility by offering a single process for multiple containers. It connects to the Linux container library (LXC) using a REST API. It is an add-on to LXC, providing more features and functionalities.
**Features**
- It has a powerful command-line interface (CLI) called "lxc" for deploying and managing Linux OS container instances.
- Offers storage and network management features like storage pools.
- Provides data retrieval tools after data processing.

**LXD vs Docker**
- LXD executes applications faster than Docker when using multiple processors.
- LXD is suitable for stateful containers used for containerizing operating systems, while Docker supports stateless containers used for containerizing services.
#### Kubernetes (K8s)
[Kubernetes](https://kubernetes.io/), also known as "K8s," is a container orchestration tool developed by Google. It is used to automate the deployment, scaling, and management of containerized applications. Docker and Kubernetes can be combined for better container management.
**Features**
- Kubernetes offers auto-scaling, helping scale or limit resources based on usage.
- It is a declarative model where developers describe a state, and K8s works in the background to manage the state and handle failures.
- Supports various internal and external load balancing schemes.
- One of its main features is self-healing applications through automatic placement, auto-restart, auto-replication, and auto-scaling.

**Kubernetes vs Docker**
- Kubernetes is a better choice than Docker for orchestrating large distributed applications with numerous microservices (such as databases, secrets, and external dependencies).
- Kubernetes' auto-scaling and self-healing properties give it an edge over Docker.
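To illustrate the declarative, self-healing model described above, here is a minimal Deployment manifest (names and image are illustrative); Kubernetes continuously reconciles the cluster toward the three requested replicas, restarting or rescheduling containers that fail:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # desired state, not a one-off command
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27
          ports:
            - containerPort: 80
```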
#### Vagrant
[Vagrant](https://www.vagrantup.com/) is a tool for building and managing virtual machine environments in a single workflow. Developed by Hashicorp, it is used to replicate multiple virtual environments. It can efficiently run in all virtualized environments, providing the highest level of isolation to users.
**Features**
- Vagrant offers interoperability.
- It can easily integrate with continuous integration (CI) tools like Jenkins, enabling test automation and pipeline building.
- Facilitates multi-machine environments using virtual machines that can be used on any operating system.
- Supports version control and sharing of base images called "boxes," which can be shared using Vagrant Cloud.

**Vagrant vs Docker**
- Docker relies on the host's operating system, while Vagrant creates virtual machines with their own operating systems. Docker runs on Linux systems, while VMs can run on any operating system, making Vagrant not restricted by OS.
- Vagrant offers better security than Docker as the VMs they create have their own operating systems and do not share them.
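For reference, a minimal illustrative Vagrantfile (the box name and resources are assumptions) showing how a VM with its own operating system is declared:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"        # base "box" image, shareable via Vagrant Cloud
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048                      # the VM gets its own OS and dedicated resources
  end
end
```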
#### Containerd
[Containerd](https://containerd.io/) is a runtime tool used for managing image transfers and storage as well as managing OCI containers. It can be integrated with Docker but can also be used without Docker integration. By using runc, it can function as a standalone component.
**Features**
- Namespaces: Allow separation between groups of containers on the same host, enabling two containers with the same name but different namespaces to run on one machine.
- Snapshot Extensions: Can be extended with other plugins to enhance snapshot functionality.
- Integration: Easily integrates with tools like runc, Kubernetes Engine, Amazon Kubernetes Service, and Azure Kubernetes Service.
- Container Cloning: Can clone containers for transfer and recovery using checkpoints.

**Containerd vs Docker**
- Standalone Container Creation: Containerd can create containers without additional support, whereas Docker cannot.
- Independence from Docker: Containerd can run without Docker, allowing containerized operations to start even in Docker's absence, and vice versa.
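A minimal sketch of containerd's bundled "ctr" client, illustrating standalone use and namespaces (container and namespace names are just examples):

```bash
# Pull an image and run a container without Docker installed
ctr images pull docker.io/library/nginx:latest
ctr run -d docker.io/library/nginx:latest web

# Namespaces: the same container name can coexist in a separate namespace
ctr -n team-a run -d docker.io/library/nginx:latest web
ctr -n team-a containers list
```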
| lunamiller |
1,895,013 | Panes, UI Controls, and Shapes | Panes, UI controls, and shapes are subtypes of Node. When you run MyJavaFX in here, the window is... | 0 | 2024-06-20T15:28:15 | https://dev.to/paulike/panes-ui-controls-and-shapes-hb6 | java, programming, learning, beginners | Panes, UI controls, and shapes are subtypes of **Node**. When you run the MyJavaFX program from [here](https://dev.to/paulike/the-basic-structure-of-a-javafx-program-1b8d), the window is displayed. The button is always centered in the scene and occupies the entire window no matter how you resize it. You can fix the problem by setting the position and size properties of a button. However, a better approach is to use container classes, called _panes_, for automatically laying out the nodes in a desired location and size. You place nodes inside a pane and then place the pane into a scene. A _node_ is a visual component such as a shape, an image view, a UI control, or a pane. A _shape_ refers to a text, line, circle, ellipse, rectangle, arc, polygon, polyline, etc. A _UI control_ refers to a label, button, check box, radio button, text field, text area, etc. A scene can be displayed in a stage, as shown in Figure below (a). The relationship among **Stage**, **Scene**, **Node**, **Control**, and **Pane** is illustrated in the UML diagram, as shown in Figure below (b).

Note that a **Scene** can contain a **Control** or a **Pane**, but not a **Shape** or an **ImageView**. A **Pane** can contain any subtype of **Node**. You can create a **Scene** using the constructor **Scene(Parent, width, height)** or **Scene(Parent)**. The dimension of the scene is automatically decided in the latter constructor. Every subclass of **Node** has a no-arg constructor for creating a default node.
The program below places a button in a pane.


The program creates a **StackPane** (line 12) and adds a button as a child of the pane (line 13). The **getChildren()** method returns an instance of **javafx.collections.ObservableList**. **ObservableList** behaves very much like an **ArrayList** for storing a collection of elements. Invoking **add(e)** adds an element to the list. The **StackPane** places the nodes in the center of the pane on top of each other. Here, there is only one node in the pane. The **StackPane** respects a node’s preferred size. So you see the button displayed in its preferred size.
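The listing itself appears only as screenshots above; for readers who cannot view the images, a sketch reconstructing it from the description (the class name and button label are assumptions) looks roughly like this, with comments marking the line numbers referenced above:

```java
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.layout.StackPane;
import javafx.stage.Stage;

public class ButtonInPane extends Application {
  @Override // Override the start method in the Application class
  public void start(Stage primaryStage) {
    StackPane pane = new StackPane();          // line 12: create a stack pane
    pane.getChildren().add(new Button("OK"));  // line 13: add a button as a child

    Scene scene = new Scene(pane, 200, 50); // place the pane in a scene
    primaryStage.setTitle("Button in a pane");
    primaryStage.setScene(scene);  // place the scene in the stage
    primaryStage.show();           // display the stage
  }

  public static void main(String[] args) {
    launch(args);
  }
}
```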
The program below gives an example that displays a circle in the center of the pane, as shown in Figure below (a).


The program creates a **Circle** (line 13) and sets its center at (100, 100) (lines 14–15), which is also the center for the scene, since the scene is created with the width and height of 200 (line 25). The radius of the circle is set to 50 (line 16). Note that the measurement units for graphics in Java are all in _pixels_.
The stroke color (i.e., the color to draw the circle) is set to black (line 17). The fill color (i.e., the color to fill the circle) is set to white (line 18). You may set the color to **null** to specify that no color is set.
The program creates a **Pane** (line 21) and places the circle in the pane (line 22). Note that the coordinates of the upper left corner of the pane are (0, 0) in the Java coordinate system, as shown in Figure below (a), as opposed to the conventional coordinate system where (**0**, **0**) is at the center of the window, as shown in Figure below (b). The x-coordinate increases from left to right and the y-coordinate increases downward in the Java coordinate system.
The pane is placed in the scene (line 25) and the scene is set in the stage (line 27). The circle is displayed in the center of the stage, as shown in Figure above (a). However, if you resize the window, the circle is not centered, as shown in Figure above (b). In order to display the circle centered as the window resizes, the x- and y-coordinates of the circle center need to be reset to the center of the pane. This can be done by using property binding.
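Again, the original listing is shown as screenshots; a sketch reconstructing the circle program from the line references above (the class name and window title are assumptions) might look like this:

```java
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.layout.Pane;
import javafx.scene.paint.Color;
import javafx.scene.shape.Circle;
import javafx.stage.Stage;

public class ShowCircle extends Application {
  @Override // Override the start method in the Application class
  public void start(Stage primaryStage) {
    Circle circle = new Circle();   // line 13: create a circle
    circle.setCenterX(100);         // lines 14-15: center at (100, 100)
    circle.setCenterY(100);
    circle.setRadius(50);           // line 16: radius of 50 pixels
    circle.setStroke(Color.BLACK);  // line 17: stroke (drawing) color
    circle.setFill(Color.WHITE);    // line 18: fill color

    Pane pane = new Pane();         // line 21: create a pane
    pane.getChildren().add(circle); // line 22: place the circle in the pane

    Scene scene = new Scene(pane, 200, 200); // line 25: 200-by-200 scene
    primaryStage.setScene(scene);            // line 27: set the scene in the stage
    primaryStage.setTitle("ShowCircle");
    primaryStage.show();
  }

  public static void main(String[] args) {
    launch(args);
  }
}
```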
 | paulike |
1,893,567 | How to create and connect to a Linux VM using a Public Key | Creating and connecting to a Linux virtual machine (VM) on Azure using SSH public key authentication... | 0 | 2024-06-20T15:26:37 | https://dev.to/dera2024/how-to-create-and-connect-to-a-linux-vm-using-a-public-key-4847 | azure, linux, techtalks, virtualmachine | Creating and connecting to a Linux virtual machine (VM) on Azure using SSH public key authentication involves several steps. Here’s a detailed guide to help you through the process:
### Step 1: Sign in to Azure Portal
1. **Sign in**: Go to the [Azure portal](https://portal.azure.com) and sign in with your Azure account.
### Step 2: Create a Resource Group (if needed)
1. **Resource Group**: If you don’t have a resource group where you want to deploy the VM, create one:
- Click on **Resource groups** in the Azure portal's left-hand menu.
- Click **+ Add** to create a new resource group.
- Provide a name, choose a region, and click **Review + create** then **Create**.

### Step 3: Create a Virtual Machine
1. **Create VM**: Now, create a new virtual machine:
- In the Azure portal, click **+ Create a resource** at the top-left corner.
- Search for **Virtual machine** and click **Create**.

2. **Basics**:
- **Subscription**: Choose your Azure subscription.
- **Resource Group**: Select the resource group you created or an existing one.
- **Virtual Machine Name**: Provide a name for your VM.
- **Region**: Choose the Azure region where you want to deploy your VM.
- **Image**: Choose a Linux distribution (e.g., Ubuntu, CentOS) from the list.
- **Size**: Select an appropriate VM size based on your requirements.

3. **Administrator Account**:
- **Username**: Choose a username for the administrator account (e.g., `azureuser`).
- **Authentication type**: Select **SSH public key**.

- **SSH public key**: Paste your SSH public key. If you don't have one, generate it using `ssh-keygen` on your local machine (`ssh-keygen -t rsa -b 4096`).

4. **Disks** and other settings:
- Customize the disk configuration, networking, management, and monitoring settings as per your requirements.
5. **Review + create**:
- Review your configuration settings.
- Click **Create** to start deploying the VM.

### Step 4: Connect to Your Virtual Machine
1. **SSH Connection**: Once the VM is deployed, you can connect to it using SSH:
- In the Azure portal, go to **Virtual machines** and select your VM.
- Under **Connect**, click on **SSH**.
- Use the SSH command provided to connect to your VM from your local terminal or SSH client.
- Alternatively, open Command Prompt as an administrator and type the SSH command shown below to connect.

Example SSH command:
```bash
ssh -i "/path/to/your/private-key-file.pem" username@your-vm-public-ip
```
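If you have not created a key pair yet (step 3 above), a typical sequence looks like this (the paths shown are the defaults; adjust as needed):

```bash
ssh-keygen -t rsa -b 4096        # generate the key pair locally
cat ~/.ssh/id_rsa.pub            # paste this value into the Azure portal
ssh azureuser@your-vm-public-ip  # connect once the VM is deployed
```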
### Step 5: Manage and Monitor Your VM
1. **Management**: Use Azure portal or Azure CLI to manage and monitor your VM:
- Start, stop, restart your VM.
- Scale up or down to change VM sizes.
- Monitor performance and set up alerts.
### Step 6: Delete Your VM (if no longer needed)
1. **Clean up**: If you no longer need the VM, delete it to avoid incurring charges:
- In the Azure portal, go to **Virtual machines**, select your VM, and click **Delete**.
### Additional Tips:
- **Azure CLI**: You can also use Azure CLI for VM deployment and management if you prefer command-line tools.
- **Templates**: Consider using Azure Resource Manager (ARM) templates for consistent and repeatable deployments.
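As a sketch of the Azure CLI tip above, a hypothetical end-to-end deployment (resource group, VM name, and region are placeholders) might look like this:

```bash
az group create --name myResourceGroup --location eastus

az vm create \
  --resource-group myResourceGroup \
  --name myLinuxVM \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys   # reuses ~/.ssh/id_rsa.pub or creates a new key pair

# Retrieve the public IP to use with the ssh command shown earlier
az vm show -d -g myResourceGroup -n myLinuxVM --query publicIps -o tsv
```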
_By following these steps, you can create and connect to a Linux VM on Azure using SSH public key authentication securely. Adjust settings based on your specific needs and security requirements._ | dera2024 |
1,895,011 | 🚀 Explore My New Interactive Portfolio and GitHub Projects! 🌟 | Hi everyone, I'm excited to share the new version of my portfolio with you! I've been working hard... | 0 | 2024-06-20T15:25:38 | https://dev.to/kporus/explore-my-new-interactive-portfolio-and-github-projects-74k | portfolio, vite, javascript | Hi everyone,
I'm excited to share the new version of my portfolio with you! I've been working hard to improve it using Kaboom.js and Tiled software. Here’s what you can find in my new portfolio:
- **Interactive Projects**: Created with Kaboom.js.
- **Detailed Maps**: Maps designed with Tiled software.
- **Skill Showcase**: My technical and soft skills.
You can view my new portfolio [here](https://idyllic-empanada-3f941d.netlify.app/).
If you’re curious, you can also check out the older version of my portfolio [here](https://singular-mooncake-bead22.netlify.app/).
I also have many projects on GitHub that you might find interesting. Feel free to explore them [here](https://github.com/KPorus).
I’d love to hear your thoughts. Reach out if you have any questions or want to connect.
Thanks for checking out my work!
Happy coding! 🚀 | kporus |
1,873,151 | SOLANA VALIDATORS AND FEE ECONOMICS. | The Meaning of Validators and Fee Economics in Cryptocurrency Systems. The two terms... | 0 | 2024-06-20T15:22:59 | https://dev.to/bravolakmedia/solana-validators-and-fee-economics-39el | solanadevelopers, blockchaindevelopers, solanavalidators, superteamdao | ## The Meaning of Validators and Fee Economics in Cryptocurrency Systems.
The two terms, validators and fee economics, arise in the context of Proof-of-Stake (PoS) cryptocurrency systems or blockchain networks. Solana validators are associated with Proof-of-Stake (PoS) and Proof-of-History (PoH), the consensus mechanisms of the Solana ecosystem. Solana validators are nodes or computers that validate transactions and create new blocks on the Solana blockchain. Solana fee economics describes the way transaction fees are charged and distributed on the Solana blockchain.
**The Roles and Significance of Validators in The Solana Ecosystem.**
Solana validators are responsible for running the Solana protocols and creating new blocks on the blockchain after validating credible transactions. The Solana validators give the blockchain a decentralization feature and they are vital to the Solana’s blockchain scalability. Solana validators also secure the ecosystem by executing programs that keep track of all the accounts on Solana clusters as independent entities making it difficult for attackers to alter the clusters.
Solana's validator requirements are modest: there is no mandatory amount of SOL to be staked to become a validator. A validator can take part in consensus simply by having a voting account with 0.02685864 SOL as a rent-exempt reserve and by sending a vote transaction of about 1.1 SOL in value for each block it agrees with. The minimum hardware requirements to become a Solana validator are given below.
**<u>Basic Hardware and Software Requirements for A Solana Validator</u>.**
**CPU**
- 12 cores / 24 threads, or more
- 2.8GHz base clock speed or faster.
- SHA extension instruction support
- AMD Gen 3 or newer.
- Intel Ice Lake or newer.
- AVX2 instruction support (to use official release binaries, self-compile otherwise)
- Support for AVX512f is helpful.
**RAM**
- 256GB or more
- Error Correction Code (ECC) memory is suggested.
- A motherboard with 512 GB capacity is suggested.
**DISK**
- PCIe Gen3 x4 NVME SSD, or better.
- Accounts: 500GB, or larger. High TBW (Total Bytes Written)
- Ledger: 1TB or larger. High TBW suggested
- OS: (Optional) 500GB, or larger. SATA OK
- The OS may be installed on the ledger disk, though testing has shown better performance with the ledger on its disk.
- Accounts and ledger can be stored on the same disk, however, due to high IOPS, this is not recommended.
- The Samsung 970 and 980 Pro series SSDs are popular with the validator community.
**GPUs**
- Not necessary at this time.
- Solana operators in the validator community do not use GPUs currently.
The software requirement for Solana validators is Ubuntu 20.04 (recommended). For more information on the Solana validator's requirements check [docs.solanalabs](https://docs.solanalabs.com) and watch the [YouTube video Spotlight: Solana validators and hardware requirements](https://youtu.be/PoVAJvGIdsw?si=if35lg3oyrMTqErv). Validators earn rewards for their services as a transaction fee or with newly minted tokens.
## Challenges Faced by Validators on Solana and Potential Solutions.
Solana validators face challenges ranging from high hardware and bandwidth requirements to network congestion and latency, centralization, censorship and slashing risks, as well as volatility and uncertainty of rewards. According to Rob Behnke, steep hardware requirements hinder running a Solana validator node. This is why Solana had only 2,364 validators compared to 440,263 validators on Ethereum as of February 2023. The solution is to attract more validators without compromising the security, speed, and low transaction costs that the network is known for.
Network outages have become the norm in the Solana ecosystem. The blockchain experienced 14 network outages in 2022; there was one outage in 2023, on February 25, and one outage was recorded in February 2024. According to Rob Behnke, at a high level Solana outages are caused by the inability of Solana validators to handle high transaction loads during peak periods. There are not enough validators to spread the workload, which eventually leads to network collapse. The tendency for bots to spam the Solana blockchain is high due to its consistently very cheap transaction costs. On the side of the network engineers, bugs have been identified as the root cause of Solana outages. For instance, the cause of the February 25, 2023 outage was traced back to several services on the network running custom block-forwarding software that inadvertently transmitted a huge amount of data, several orders of magnitude larger than a normal block. The network’s de-duplication logic was unable to cope with this, overwhelming the Turbine protocol and significantly degrading the network. Solana's core engineers have been implementing and rolling out different network upgrades to solve the outage problem. They have implemented QUIC, fee markets, and stake-weighted Quality of Service (QoS) to address it. The engineers claimed that QUIC will replace all the UDP-based networking protocols and will be better at enforcing the constraints in Turbine for a more stable network.
The centralization challenge for Solana validators arises from the need to raise hardware requirements to increase validators' capacity to handle more transactions efficiently. The more stringent the hardware requirements, the higher the cost of running a Solana validator node and the smaller the number of validators. This causes centralization rather than decentralization of the Solana blockchain. The best solution is to increase fees for congested dApps while retaining low fees for normal transactions. This approach preserves the Solana blockchain's decentralization and low cost, and prevents the bot attacks that cause network outages.
## The Structure and Distribution of Fees within the Solana Network and Their Role in Validator Economics.
Solana has two fee structures: transaction fees and state fees. The transaction fee is subdivided into a base fee and a priority fee. The base fee is fixed at 0.000005 SOL, i.e., 5000 lamports per signature. A priority fee is usually indicated in the transaction and denominated in microlamports per CU. Transaction fees are paid at the onset of transaction execution, and failure to pay renders a transaction invalid and unexecuted. Half of the transaction fees are kept as an incentive for the leader-validator to include the transaction in a block, and the remaining half is burned. This design aims to retain the leader's incentive to include as many transactions as possible within its slot time. It provides an inflation-limiting mechanism that prevents tax-evasion attacks. The design also reduces the incentive to censor, giving room to detect a malicious censoring leader. For instance, in the case of a PoH fork with a malicious censoring leader, the total fees destroyed will be less than those of an honest fork.
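As a quick worked example of the 50/50 split just described (assuming a single-signature transaction with no priority fee):

```latex
% Base fee: 5000 lamports (0.000005 SOL) per signature
\text{burned} = \tfrac{1}{2} \times 5000 = 2500 \text{ lamports}, \qquad
\text{leader reward} = \tfrac{1}{2} \times 5000 = 2500 \text{ lamports}
```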
Solana charges a state fee, called a rent-exemption fee, to create new state. The rent-exemption fee is 6.96 SOL per MB and is assigned to a newly created validator account. Solana transaction fees are set by the network cluster based on historical throughput and can be adjusted based on historical gas usage.
Transaction fees play many roles in validator economics. They provide unit compensation to the validator network for the resources used to process state transitions. They introduce a real cost to transactions and therefore reduce network spamming. They open avenues for a transaction market that incentivizes validators to accept and process transactions. They also introduce a protocol-captured minimum fee per transaction, which supports the potential long-term economic stability of the Solana network.
**Comparative Analysis of Fee Economics Between Solana and Other Major Blockchains.**
The three major blockchains that can be compared on fee economics are Solana, Polygon, and Ethereum; other blockchains trail these three. Using the metrics of transaction fees and gas price, Ethereum has notoriously high gas prices according to the Blockchain Council. According to the council, Ethereum gas prices and fees usually reach hundreds of dollars per transaction during peak periods, which frustrates smaller transactions. Polygon, unlike Ethereum, has lower gas prices and transaction fees, which is linked to its Proof-of-Stake consensus algorithm that reduces energy consumption and cost on the network.
In comparison to both Ethereum and Polygon, the Solana blockchain stands out with its unique consensus mechanism, which combines Proof of Stake and Proof of History. According to the Blockchain Council, Solana has extremely low transaction fees, charging a few cents per transaction. The network has high throughput, an efficient consensus algorithm, and quick, inexpensive transactions. The gas price on the three networks is shown in the table below; Polygon's gas price is paid in MATIC while Solana's gas price is 0.0001 SOL, all converted to their dollar equivalents at the time of writing.
**<u>Gas Price on Ethereum, Polygon and Solana Blockchains.</u>**
| Blockchain Network | Gas Price ($) |
| --- | --- |
| Ethereum | Several hundred during peak periods |
| Polygon | 0.097 |
| Solana | 0.01 |
## The Impact of Fee Economics on Network Performance, User Experience and Validator Incentives.
A transaction fee is mandatory to get a transaction executed, but other factors determine whether a transaction will actually be executed and added to a block. A validator may lag in block processing, there may be a discrepancy within an RPC pool, or the loss of a UDP network packet may lead to a transaction being dropped. The loss of a UDP network packet may occur during peak periods, when validators have more transactions than they can handle before the blockhash expires. When a transaction’s recent blockhash is retrieved from an updated segment and later submitted to a slower segment, it may lead to a discrepancy within an RPC pool. A transaction’s block may end up in the minority fork as a result of its validator's block processing lagging; such a block ends up being dropped.

_Consensus on Solana blockchain. (Source: [Helius.dev](https://www.helius.dev))_
In another view, there is evidence that priority fees influence block inclusion on a macro scale, though not perfectly. Helius RPC data shows that transactions with priority fees have a better chance of block inclusion, and the higher the priority fee, the greater that chance. This is shown in the graphs below, where "True" denotes transactions with a priority fee and the y-axis shows the percentage of transactions.

_True = transactions with priority fees, y-axis = Percentage of transactions.
Effect of priority fee on block inclusion. (Source: [Helius.dev](https://www.helius.dev/blog/solana-fees-in-theory-and-practice))_

_Effect of priority fee on block inclusion. (Source: [Helius.dev](https://www.helius.dev/blog/solana-fees-in-theory-and-practice))_
The second graph also shows that transactions with priority fees also land quickly.

_Solana Priority Fee: (Source: [Helius.dev](https://www.helius.dev/blog/solana-fees-in-theory-and-practice))_
According to Ryan Chern, Solana priority fees increased on average on January 21, 2024, caused by the mockJUP airdrop, as the real JUP airdrop was around the corner. It increased the demand for blockspace and affected users' transaction times and transactions’ landing rates. As shown above, the Solana RPC method (getRecentPrioritizationFees) assists developers in determining the priority fees to assign for a transaction. Helius also offers a Priority Fees API that allows additional calculations to determine the best priority fees for a transaction.
It can be stated that Solana validators' incentives come from transaction fees as discussed above and from a protocol-based reward. A protocol-based reward is realized from inflationary issuances from a protocol-defined inflation schedule. Solana inflation design was defined at 8% SOL emissions and reduced by 15% every year. It was activated on February 10th, 2021 with a payment of 213841 SOL according to the Chorus One blog quoting the Solana validator dashboard.

_Solana validators started to receive rewards from inflation on February 10, 2021._
Solana's inflation model assumes a 400 ms block time; the present average is about 650 ms. A longer block time results in smaller rewards to validators, since fewer epochs (the specified consensus operation periods) fit into a year, as shown in the chart below.

_Solana block times over the 35 days ending August 9, 2022._
### The Potential for Negative Commission Rates in Scenarios where Network and Maximum Extractable Value (MEV) Fees Become Sufficiently High.
Maximum Extractable Value (MEV) is the profit that miners or validators can extract by strategically ordering and executing transactions within a block. Miners or validators therefore determine the order of transactions, especially in front-running or arbitrage, where such ordering may offer profit. During periods of network congestion, users may compete by paying high prices to have their transactions processed quickly, and a validator can try to extract more value through MEV opportunities. Negative commission rates come in when validators are willing to pay users, lowering their commission rates even to a negative value to attract users to delegate with them. Negative commission rates become more viable when MEV is prevalent and profitable, or when the validator is after maximizing total profit.
On the Solana network, transactions are completed by validators, not miners. These validators are grouped in clusters to vote on the validity of a transaction, while validators take turns acting as leader to avoid corrupt practices in the validation process. This makes only some forms of MEV possible on Solana. According to the Chorus One blog, from July 16 to July 26, 2022, Orca and Raydium recorded 68 MEV opportunities with a lower-bound value of $20,775.
**Analysis of The Long-Term Economic Viability of Validators as Solana’s Inflation Approaches Its Terminal Value.**

_Example of proposed inflation schedule graph on Solana. (Source: [Solana doc](https://docs.solanalabs.com/implemented-proposals/ed_overview/ed_validation_client_economics/ed_vce_state_validation_protocol_based_rewards))_
Solana's inflation schedule is based on an initial inflation rate of 8%, a disinflation rate of -15%, and a long-term inflation rate of 1.5%, according to the Solana documentation. Solana has a decreasing inflation rate that will approach its terminal value over time, the point at which the Solana network’s token issuance stabilizes. The rate at which new tokens are minted will fall, and with it part of the validators' incentive. The reason is that a validator's reward comes from transaction fees and staking rewards, while staking rewards include newly minted tokens and transaction fees distributed among validators based on their participation in consensus and the amount staked.
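To make the schedule concrete, the yearly rate implied by these figures can be sketched as follows (annual compounding of the disinflation rate is an assumption for illustration):

```latex
% Inflation rate in year t: 8% initial, -15% disinflation, 1.5% floor
I(t) = \max\bigl(1.5\%,\ 8\% \times 0.85^{\,t}\bigr)
% e.g. I(1) = 6.8\%,\ I(2) = 5.78\%,\ \text{and } I(t) \to 1.5\% \text{ long term}
```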
The economic viability of a Solana validator depends on its ability to generate income that is above its operational cost. As inflation approaches its terminal value on Solana, the reward for a validator depends more on the transaction fees as the block rewards decrease. This is where Solana's low cost will assist the validators to be economically viable. For individual validators, validators must offer competitive staking rewards and additional services to attract and retain more users staking with them.
## Potential Models or Innovations That Could Support Validator Sustainability under Low Inflation Conditions.
The sustainability of validators during low-inflation periods requires revenue beyond block rewards. One of the major potential models for validator sustainability under low-inflation conditions on Solana is the fee-burn mechanism, which creates deflationary pressure on Solana's native token. Recall that half of each transaction fee is burnt and hence permanently removed from the ecosystem. This tends to increase the value of the native token, from which validators earn more in the long run.
Validators need to add additional services to attract more users to stake with them. For instance, a validator may add analytic tools or API access to their service. More users will stake their tokens with them to access these value-added services. Validators can also collaborate with projects in the ecosystem by offering integrations or services that will enhance the overall Solana network utility.
There is room for validators to participate in DeFi protocol integrations. Here they can offer services like liquidity providing, asset staking or governance roles. This gives them opportunities to increase their profitability. They can also issue collateralized stable coins and leverage their staked assets.
Validators can also participate in community incentive programs to increase overall network usage and service awareness. They can participate in grants, token distributions, sponsorships and other reward programs that drive active network usage and engagements. A very good example of this is this write-up sponsored by Cogent Crypto. Cogent Crypto is a distinguished validator on the Solana network. They have an annual percentage yield (APY) of 7.74%, a cluster average APY of 7.43% and 1,538,862 delegated SOL. You can contact [Cogent Crypto](https://cogentcrypto.io) to learn more about them.
## The Role of Transaction Fees, Staking and Other Economic Mechanisms in Ensuring Ongoing Validator Incentives.
Transactions on Solana are finalized by validators, who put them into blocks through the consensus mechanism; hence, ongoing incentives for Solana validators help maintain the network’s security, stability, and decentralization. Transaction fees, staking, and other mechanisms such as block rewards are the key instruments for maintaining those incentives. It has been established in the sections above that transaction fees are one of the primary sources of revenue for Solana validators, and they become one of the major sources as the Solana inflation rate approaches its terminal value.
Solana validators need a reserve of 0.02685864 SOL and must stake SOL tokens to participate in the consensus mechanism. The stake is collateral that serves as a security deposit, tying the validator's interest to the overall security of the Solana network. Staking also allows validators to earn staking rewards in the form of newly minted tokens and transaction fees.
Block rewards are another mechanism that ensures Solana validators' incentives. These block rewards consist of newly minted tokens and transaction fees earned by validators for validating and adding new blocks to the blockchain. It can be categorically stated that transaction fees, staking, and the other mechanisms that ensure ongoing Solana validator incentives strengthen the relationship between users, validators, and the network itself. They aid in enhancing the security, decentralization, and stability of the Solana ecosystem.
## References.
Solana Validator Economics-A Primer, https://www.helius.dev/blog/solana-validator-economics-a-primer
How to become a Solana Validator, https://medium.com/@Cogent_Crypto/how-to-become-a-validator-on-solana-9dc4288107b7
Official Solana Documentation, https://docs.solanalabs.com
Spotlight: Solana Validator and Hardware Requirements by @nickyfrosty, https://youtu.be/PoVAJvGIdsw?si=if35lg3oyrMTqErv
Solana Fees In Theory and Practice, https://www.helius.dev/blog/solana-fees-in-theory-and-practice
What Problems Are We Facing In Solana? A Deep Dive with SolanaFm, https://solanafm.medium.com
Solana Security Overview by Rob Behnke, https://www.halborn.com
Network Performance, https://solana.com/news/network-performance-report-july-2023
Solana Fees, Part 1 https://www.gate.io
Solana Vs Polygon Vs Ethereum- The Ultimate Comparison, https://www.blockchain-council.org
Solana Fees In Theory and Practice by Ryan Chern, https://www.helius.dev
Exploring Validator Economics on Solana, https://chorus.one
What is MEV and how does Solana solve its MEV issues, https://chainstack.com
Proposed Inflation Schedule, https://solana.com
| bravolakmedia |
1,894,901 | How to deploy Laravel Project to Vercel | Helpful tutorial on deploying laravel project to... | 0 | 2024-06-20T15:18:24 | https://dev.to/msnmongare/how-to-deploy-laravel-project-to-vercel-3ode | webdev, beginners, programming, tutorial | [Helpful tutorial on deploying laravel project to Vercel
https://rzamandala.medium.com/how-to-deploy-laravel-project-to-vercel-7b3c2800e974](url)
[
https://calebporzio.com/easy-free-serverless-laravel-with-vercel](url) | msnmongare |
1,894,900 | The different types of Software Testing | In the dynamic world of software development, ensuring the reliability and functionality of... | 0 | 2024-06-20T15:15:39 | https://dev.to/anthonytran_dev/the-different-types-of-software-testing-4c68 | softwaredevelopment, testing, test, automation | In the dynamic world of software development, ensuring the reliability and functionality of applications is paramount. In this article, I would like to share an insightful overview of various testing types that are essential for maintaining high-quality software. These testing methods are particularly vital within Continuous Delivery (CD) and DevOps environments, where they help achieve faster deployments, robust quality assurance, and reliable software releases.
Understanding the roles, benefits, and best practices of different testing types can greatly enhance your development process. Here's a summary table comparing the various types of software tests:
| Type of Test | Purpose | Level of Automation |
| --- | --- | --- |
| Unit Testing | Tests individual functions or methods in the code for correctness. | Highly automatable |
| Integration Testing | Ensures that different modules or services work well together. | Partially automatable |
| Functional Testing | Verifies that the application functions as per the business requirements. | Automatable with the right tools |
| End-to-End Testing | Simulates user behavior to ensure the entire application works as expected in a full environment. | Partially automatable, often manual |
| Acceptance Testing | Checks if the system complies with the business requirements. | Typically manual |
| Performance Testing | Assesses how the application performs under various conditions. | Automatable |
| Smoke Testing | Basic tests to ensure fundamental functionalities work after major changes. | Highly automatable |
| Exploratory Testing | Encourages testers to explore and find defects not covered by other tests. | Manual |
| Penetration Testing | A simulated cyber attack against your computer system to check for exploitable vulnerabilities. | Automatable |
Each of these testing types plays a critical role in a software development lifecycle, particularly within CD and DevOps frameworks, where they contribute to faster deployments, better quality assurance, and more reliable software releases. Automation is emphasized across most types, except for exploratory and certain acceptance tests, to enhance efficiency and reliability.
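As a small illustration of the most automatable category in the table, a hedged unit-test sketch in Python with pytest (the function under test and its behavior are hypothetical) could look like this:

```python
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_happy_path():
    # A unit test checks one function in isolation
    assert apply_discount(100.0, 20) == 80.0

def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```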
These testing strategies help organizations ensure that new features are both functional and reliable, reducing bugs and improving user satisfaction. Automated tests, in particular, are crucial for continuous integration and delivery pipelines, allowing for frequent and dependable software updates. | anthonytran_dev |
1,894,839 | How to start your journey in Polkadot | You might have heard about Polkadot from Twitter, an event, TV, or even a Lyft car with Polkadot's... | 0 | 2024-06-20T15:14:01 | https://dev.to/ayomide_bajo/how-to-start-your-journey-in-polkadot-3j2o | web3, beginners, community, writing | You might have heard about Polkadot from Twitter, an event, TV, or even a Lyft car with Polkadot's banner. Regardless of how you found out, you want to know how to join the community or stay updated.
This article is a soft guide on how you can navigate through Polkadot’s ecosystem. If you don't have much knowledge on web3 you can follow this [free course](https://www.edx.org/learn/blockchain/web3-foundation-introduction-to-blockchain-and-web3?webview=false&campaign=Introduction+to+Blockchain+and+Web3&source=edx&product_category=course&placement_url=https%3A%2F%2Fwww.edx.org%2Fschool%2Fweb3x) by the web3 foundation.
### What is Polkadot?
Polkadot is a decentralised, open-source blockchain platform designed to enable interoperability and scalability among different blockchains; it enables asset transfers between them. These blockchains are built on top of Polkadot and batch transactions, much like rollups do on Ethereum.
### What are its key features and philosophy?
- **True interoperability**: This might sound cliche to you but this is provable, I'll spare you the technical details for later. It is possible to build a bridge with verifiable consensus proofs with Polkadot, check out [Hyperbridge](https://blog.polytope.technology/introducing-hyperbridge-interoperability-coprocessor) for more information.
Polkadot enables cross-blockchain transfers of any type of data or asset, not just tokens. Connecting to Polkadot gives you the ability to interoperate with a wide variety of blockchains in the Polkadot network.
- **User-driven governance**: Polkadot features an advanced governance system that gives all stakeholders a say. Network upgrades are managed on-chain and implemented automatically, avoiding the need for network forks. This approach ensures that Polkadot's development is both forward-thinking and guided by the community.
- **Low cost of transactions**: Transaction fees in Polkadot's ecosystem are at most around 0.05 USD😄
You can make as many transactions as you like without worrying about high fees.
- **Security for everyone**: Polkadot's innovative data availability and validity scheme enables meaningful chain interactions. While each chain maintains its governance, they share a unified security framework.
### Blockchains that are thriving on Polkadot.
There are a number of blockchains available for different purposes:
- [Hydration net](https://hydration.net/): HydraDX (now Hydration net) allows users to efficiently trade an abundance of assets in a single pool. You can also borrow a wide range of digital assets after providing collateral in one or several accepted currencies.
- [Polimec](https://www.polimec.org/): Polimec allows for decentralized, transparent, and regulatory-compliant fundraising, aligning stakeholder incentives throughout the process.
- [Hyperbridge](https://docs.hyperbridge.network/): Hyperbridge pioneers a new class of coprocessors that leverage their consensus proofs to attest to the correctness of the computations performed on-chain. With this protocol, you can bridge assets and verify the computation of the bridged assets.
- [Tanssi](https://www.tanssi.network/post/intro-to-tanssi-appchain-infrastructure-protocol): Tanssi stands as a beacon of hope for developers navigating the complex landscape of blockchain application development.
- [Phala network](https://phala.network/): Phala Network is an AI coprocessor for blockchains; it is shaping the future of AI and Web3 with tokenization.
- [Moonbeam](https://moonbeam.network/): Moonbeam provides a strong DeFi ecosystem, allowing easy access to various decentralized finance applications. With full Ethereum compatibility and unique cross-chain features, developers and users can benefit from the Polkadot ecosystem while using familiar Ethereum tools.
### Get identified with Polkadot
You can get identified with Polkadot, it is similar to having an identity ENS(Ethereum Name Service). This is a good place to start if you want to participate in the forum and the open-gov discussions. Plus you could also receive tokens through the address. You can get started [here](https://support.polkadot.network/support/solutions/articles/65000187627-how-to-set-your-on-chain-identity-on-polkassembly)
Polimec also provides KYC features for anyone who wants their identity on-chain, it would also require you to present a means of identification, like a passport or driver's license.
### Contributing to Polkadot’s Community
There are different ways to contribute to the community. You can come in as a developer or a community builder.
### For Developers
To learn anything in development, start doing it for fun. This way, you would be open to exploring different parts of the protocol. There's also a free course that gives you an [intro to Polkadot](https://www.edx.org/learn/computer-programming/web3-foundation-introduction-to-polkadot?webview=false&campaign=Introduction+to+Polkadot&source=edx&product_category=course&placement_url=https%3A%2F%2Fwww.edx.org%2Fschool%2Fweb3x). You can start with something you want to learn for fun or tackle a problem you want to solve. While tackling the problem, you can ask questions if you need help.
There are different ways to start as a dev in Polkadot:
1. Blockchain dev
2. Smart contract dev
3. Application dev
- **Blockchain dev**: To become a chain dev, you must already have some advanced experience with web3. If you want to build a chain without going through rigorous months of building and simply want to launch an MVP within days, then you should learn more about [Tanssi](https://www.tanssi.network/). Appchains connected to the Tanssi Network transform into ContainerChains, gaining access to a range of tools and resources.
- **Smart contract dev**: Different smart contract environments are available for devs. You can either build your solidity smart contract on [Moonbeam](https://moonbeam.network/) or [Astar](https://astar.network/). You can build Ink! smart contracts on Phala network, which are called Phat contracts, these contracts allow anyone to make API requests into their smart contracts. You could also build ink smart contracts on [AlephZero](https://alephzero.org/).
- **Application dev**: This is the easiest way for web2 devs to build on Polkadot. This is great for people who have an existing product and want to explore or combine web3 primitives for their product. This is also a great way to explore the potential of building on web3 without going through the nitty-gritty of the protocol. One of the best places to start is [Appilion](https://apillon.io/).
### For community builders
Community builders are an important part of the protocol. Polkadot can be described as the largest DAO. The protocol design and upgrade can also be determined by the community.
You can be a threadoor😁, a technical writer contributing to the [wiki](https://wiki.polkadot.network/) or a community builder hosting meetups to foster collaboration and community through the events bounty.
There are a number of regions which have meetups and events for community builders, you can join and follow the region closest to you. Currently, we have:
- [Polkadot global community](https://x.com/Polkadot).
- [Polkadot Latam](https://x.com/polkadotlatam)
- [Polkadot Africa](https://x.com/AfricaPolkadot)
- [Polkadot North America](https://x.com/Polkadot_NA)
- [Polkadot Argentina](https://x.com/PolkadotArgenta)
- [Polkadot vietnam](https://x.com/PolkadotVN)
### For Ambassadors
At the time of this writing, the ambassador program is undergoing a major change which will be determined by the community. There will be Head Ambassadors elected to represent and take charge of the ambassadors in various regions. You can stay updated by joining the [Discord server](https://discord.gg/polkadot).
Joining Polkadot's community doesn't have to be daunting. With the right guidance, you should be able to keep up and start making contributions.
| ayomide_bajo |
1,894,897 | RECOVER SCAMMED BTC, ETH, BNB, SOL, XRP, USDT, USDC, ADA, AVAX, DOGE, CRY, BTES RECOVER WITH BRUNOE QUICK HACK | I NEED TO GET HELP FROM BRUNOE QUICK HACK FOR SCAMMED ETH RECOVERY Brunoe Quick Hack: Your Knights... | 0 | 2024-06-20T15:11:27 | https://dev.to/laura_lilyshelton_4f9850/recover-scammed-btc-eth-bnb-sol-xrp-usdt-usdc-ada-avax-doge-cry-btes-recover-with-brunoe-quick-hack-5505 | webdev, programming, python, devops | I NEED TO GET HELP FROM BRUNOE QUICK HACK FOR SCAMMED ETH RECOVERY
Brunoe Quick Hack: Your Knights in Shining Armor Against Scammers
Enough is enough! It's time to stand against scammers and reclaim what's rightfully ours. Brunoe Quick Hack is the ultimate ally in this fight for justice. Their skilled hackers can infiltrate scammers' networks, retrieve your stolen funds, and ensure that justice is served. Don't let scammers continue their deceitful ways - trust in Brunoe Quick Hack's expertise and let them lead you to victory. As renowned hackers armed with cutting-edge technology, you can find out more at brunoequickhack(.)com, reach out on WhatsApp at +(1)70578-42635 for assistance, or contact us at (BRUNOEQUICKHACK at gmail dot com); we guarantee 100% secure and successful retrieval of your scammed funds. Let's band together, overthrow scammers, and reclaim what is rightfully ours. From infiltrating databases | laura_lilyshelton_4f9850 |
1,894,894 | Microsoft Copilot: in-depth overview | This post is a quick overview of an Abto Software blog article. This year’s Microsoft Copilot debut... | 0 | 2024-06-20T15:09:36 | https://dev.to/abtosoftware/microsoft-copilot-in-depth-overview-1jma | ai, productivity, datascience, microsoft | _This post is a quick overview of an Abto Software [blog article](https://www.abtosoftware.com/blog/what-is-copilot-technology-in-depth-overview)._
This year’s Microsoft Copilot debut opened an exciting new chapter in the information & technology industry.
This groundbreaking AI assistant has generated considerable excitement by promising unmatched productivity. With this AI solution, you control your workflows while leveraging facilitated creativity, analytical capabilities, and efficiency in your everyday activities.
## What is Copilot in simple terms?
**A Copilot is an advanced technology that implements artificial intelligence to facilitate business productivity.**
Microsoft’s Copilot is designed to enable the capabilities of large language models all across Microsoft 365. Microsoft Copilot can work within standard MS applications – Word, Excel, PowerPoint, Teams, and Outlook – to help out with digital assets.
## How is Copilot different from ChatGPT?
Microsoft’s Copilot and ChatGPT were both designed to automate typically resource-intensive everyday processes. While they seem quite similar, they have notable differences.
In particular:
- Microsoft’s technology, while working alongside popular MS applications, is leveraging the capabilities of large language models to provide personalized assistance
- OpenAI’s ChatGPT, a chatbot and state-of-the-art virtual assistant, is using autoregression mechanisms to generate human-like responses
MS Copilot is automating repetitive workflows by using artificial intelligence and retrieving business insights. The technology is embedded in applications to assist with documents, tables, presentations, content creation, email management, and more.
OpenAI ChatGPT is engaging in dialogues by using machine & deep learning, and natural language processing. The chatbot can mimic human interaction and provide context-rich responses.
## What is Copilot’s divergence among competitors?
### Google Duet
Microsoft Copilot presents several advanced functionalities – text and voice input, image handling, and others. The assistant can empower business leaders by automating customer service, sales activities, and cooperation.
Google Duet offers fewer modern capabilities, but provides regular updates and patches driving performance. The solution is perfect to accelerate software development.
### GitHub Copilot
What about GitHub’s Copilot?
GitHub Copilot is an advanced assistant that provides essential capabilities to automate software development. The solution is perfect to streamline software development and other related activities.
## What is Copilot usually used for?
### Healthcare industry
In the healthcare domain, among many other things, the assistant can handle:
- Data processing and management – data analysis to provide accurate diagnosis and treatment
- Appointment scheduling and reminders – administrative processes to simplify appointment scheduling, patient management, automatic reminders, and other related activities
### Retail industry
In the retail domain, talking about essential functions, the assistant can handle:
- Behavior analysis – data analysis to determine customer needs and preferences, thus facilitating shopping experiences
- Sales forecasting – market analysis to recognize future trends and demand, thereby eliminating potential overstocking and stockouts
### Finance & banking industry
In the finance and banking segment, the solution might improve:
- Risk management – dataset analysis to assess market risks and predict potential problems
- Fraud detection – pattern analysis to detect fraudulent activity and secure financial transactions
### Education industry
In the education segment, the solution might enhance:
- Personalized programs – the analysis of progress to create personalized programs
- Interactive materials – the creation of engaging, interactive materials and presentations
## The cost of integrating MS Copilot
If you get the free subscription, you get to use:
- GPT-4 and GPT-4 Turbo access during non-peak times
- Text, voice, and images for advanced conversational search
- Image generation, enabled through AI algorithms, with 15 daily boosts
- Additional plug-ins
If you get the paid subscription, you access additional functionality, in particular:
- GPT-4 and GPT-4 Turbo access during peak times
- Intelligent assistance all throughout Microsoft applications
- Image generation, enabled through AI algorithms, with 100 daily boosts
- Custom GPTs to cater individual needs and interests
## The process of installing MS Copilot
To activate the assistant and automate your workflows, you’ll need to have several prerequisites:
- Monthly subscription
- Entra ID
- OneDrive account
- Network configuration
- Personal accounts within the Microsoft 365
- Microsoft Whiteboard
After activation, the assistant will provide multiple features within prominent Microsoft applications:
- Within Word – document drafting and revision, improvement suggestions
- Within Excel – insight generation, information visualization, pattern detection, workflow creation
- Within PowerPoint – presentation creation using natural language processing
- Within Outlook – email management, including suggestions
- Power Platform – prototyping assistance in particular
- Business Chat – content generation, event navigation, and more
## Microsoft Copilot’s smooth integration
Any integration should encompass critical components from thorough environment preparation to assessment. Business continuity, desired efficiency and performance, future sustainability, and even regulatory compliance are benefits enabled through these stages.
In particular, the process should comprise:
- The validation of prerequisites and preliminary environment analysis
- The review of established access and security settings
- The evaluation of risks
- The deployment to the pilot group & documentation
- The adjustment of parameters
- The deployment to the target group & assessment
## Microsoft Copilot’s future prospects
The technology is revolutionizing the approaches towards implementing and leveraging artificial intelligence. The solution is transforming business operations across industries by harnessing machine and deep learning, recursive neural networks, large language models, and large image datasets.
The assistant does also present concerns – data leakage, unauthorized access, and other common problems. But applying robust mechanisms, you can mostly eliminate the potential reputational and financial damage.
## Summing up
Abto Software, Microsoft Gold Certified Partner, can deliver enterprise-level products empowering leaders. Our company, by leveraging Microsoft Copilot along with other technology and specialized domain expertise, can facilitate code generation, context awareness, thorough testing and debugging, prototyping, hackathons, and more.
Our services:
- .NET development
- ASP.NET development
- Web app development
- Mobile app development
- VB6 migration
- Full-cycle, custom software development
| abtosoftware |
1,894,893 | EXTRACTING DUPLICATES USING SOQL QUERIES | 1: Simple find account duplicates example SELECT Count(id), name,ShippingCity FROM... | 0 | 2024-06-20T15:07:02 | https://dev.to/ahmed_hammami_33f0d95ab89/extracting-duplicates-using-soql-queries-2l7k | salesforce, soql | #### 1: Simple find account duplicates example
```
SELECT COUNT(Id), Name, ShippingCity FROM Account GROUP BY ShippingCity, Name HAVING COUNT(Id) > 1
```
Explanation: we can effectively detect duplicates in Salesforce. This query groups records based on specific criteria, such as name and city in the case of accounts, and then counts the number of records within each group. By filtering the results to include only groups where the count is greater than 1, we identify instances where multiple records share the same name and city, indicating potential duplicates. This approach allows us to quickly identify and address duplicate data, helping to maintain data integrity and accuracy within the Salesforce platform.
Now, to get the IDs of those records (because the query above returns aggregate results), we'd typically need to run two separate queries. The first query identifies the duplicates based on specific criteria. Then we use the information from that query to construct a second query that fetches the IDs of those duplicate records. In this case we could use the name values exported by the first query: copy them into an Excel file and then paste them into a SOQL query,
example:
```
SELECT Id, Name, ShippingCity FROM Account WHERE Name IN ('')
```
So in summary:
1. Run our initial aggregate query to identify the duplicates based on name and shipping city.
2. Retrieve the names and shipping cities of the duplicate records.
3. Construct a new query using the retrieved names and shipping cities to fetch the IDs of those duplicate records.
#### 2: Real life scenario with detailed explanation
We’ll now examine a real life scenario example; looking for duplicate ContactOpportunityRole records created by user has “hammami” in name, for these records to be duplicates they MUST have the same OpportunityId and ContactId.
I find this query was a good example to show how we could add a lot of logic to quickly determine duplicates using Salesforce Inspector or any other Salesforce tool.
Query for finding duplicates:
```
SELECT Opportunity.Name Op_Name,
       Opportunity.Id,
       Contact.Name Cnt_Name,
       COUNT(OpportunityId) NB_OP
FROM OpportunityContactRole
WHERE CreatedBy.Name LIKE '%hammami%'
GROUP BY Contact.Name, Opportunity.Name, Opportunity.Id
HAVING COUNT(OpportunityId) > 2
```
Query breakdown:
This SOQL query retrieves data from the OpportunityContactRole object in Salesforce and performs the following actions:
1. It selects the following fields:
   - Opportunity Name (renamed as "Op_Name")
   - Opportunity ID
   - Contact Name (renamed as "Cnt_Name")
   - The count of Opportunity IDs associated with each record (renamed as "NB_OP")
2. It filters the records based on the name of the user who created them. The filter condition is that the "CreatedBy" field should contain the name '%hammami%', meaning it should match any user whose name contains "hammami".
3. It groups the records by the Contact name, Opportunity name, and Opportunity ID. This means that records with the same combination of Contact name, Opportunity name, and Opportunity ID will be grouped together.
4. It applies a filter condition using the "HAVING" clause to include only groups where the count of Opportunity IDs is greater than 2.
How this query can determine duplicates:
In this SOQL query, we're using the "GROUP BY" clause to group the records by Contact name, Opportunity name, and Opportunity ID. This allows us to aggregate the data and perform calculations (such as counting the number of Opportunity IDs) for each unique combination of Contact and Opportunity.
We're also using the "HAVING" clause to filter the grouped results. Specifically, we're filtering to include only groups where the count of Opportunity IDs is greater than 2. This means that we're interested in finding instances where a Contact is associated with the same Opportunity more than twice.
So, by using the "GROUP BY" and "HAVING" clauses in this query, we're effectively extracting and filtering data to identify Contacts that are associated with the same Opportunity multiple times or in other words, duplicates.
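Following the two-step approach from the first section, a hypothetical follow-up query to pull the individual record IDs for review or cleanup (the Opportunity IDs below are placeholders copied from the aggregate results) could look like this:

```
SELECT Id, ContactId, Contact.Name, OpportunityId, Opportunity.Name, CreatedDate
FROM OpportunityContactRole
WHERE CreatedBy.Name LIKE '%hammami%'
AND OpportunityId IN ('006XXXXXXXXXXXX1', '006XXXXXXXXXXXX2')
ORDER BY ContactId, OpportunityId, CreatedDate
```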
| ahmed_hammami_33f0d95ab89 |
1,894,892 | AsyncAPI in 15 Minutes | AsyncAPI is a specification for designing asynchronous APIs, similar to how OpenAPI is used for... | 0 | 2024-06-20T15:06:32 | https://dev.to/raphael-dumhart/asyncapi-in-15-minutes-hel | eventdriven, learning, microservices | AsyncAPI is a specification for designing asynchronous APIs, similar to how OpenAPI is used for synchronous APIs. It is agnostic to any specific messaging protocol. Asynchronous communication involves sending and receiving data without expecting an immediate response, making it suitable for event-driven architectures, microservices, IoT, and streaming applications.
AsyncAPI helps formalize and document these interactions, ensuring that different components can communicate efficiently. The specification includes key elements like channels, messages, and operations. Channels are specific communication pathways used by messaging systems like RabbitMQ or Kafka, and each channel can handle various messages and operations.
To get started with AsyncAPI, one can use the AsyncAPI Studio, which provides an interactive interface for creating and experimenting with API specifications. For more advanced usage, the CLI (Command Line Interface) is recommended. Installation of the CLI can be done via NPM or by downloading and installing the binaries.

An example of a basic AsyncAPI specification might include details such as the API version, metadata (like title and description), and the channels through which messages are sent. Additionally, the specification can include server configurations for different environments (development, testing, production) and components that define reusable message structures.
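A minimal, hypothetical AsyncAPI document illustrating these elements (titles, channel names, and the broker address are assumptions) might look like this:

```yaml
# A sketch of an AsyncAPI 2.6 specification
asyncapi: '2.6.0'
info:
  title: Order Events API
  version: '1.0.0'
  description: Emits an event whenever an order is created.
servers:
  development:
    url: localhost:5672
    protocol: amqp
    description: Local RabbitMQ broker for development.
channels:
  order/created:
    subscribe:
      summary: Receive an event for each newly created order.
      message:
        $ref: '#/components/messages/OrderCreated'
components:
  messages:
    OrderCreated:
      payload:
        type: object
        properties:
          orderId:
            type: string
          amount:
            type: number
```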
One of the significant advantages of AsyncAPI is its ability to promote loose coupling between components. This is achieved through asynchronous messaging, where components can operate independently and communicate via events. This design pattern is particularly beneficial in microservices architectures, where services need to interact without tight dependencies.
AsyncAPI also supports various messaging protocols, including AMQP, MQTT, and Kafka, making it versatile for different use cases. It enables developers to describe how messages should be structured, ensuring consistency and reliability in communication across the system.
Moreover, AsyncAPI facilitates the creation of comprehensive documentation and code generation, similar to OpenAPI. This capability enhances the development process by providing clear, standardized communication pathways and reducing the likelihood of errors.
In terms of best practices, it is crucial to focus on reusability and modularity. Using references within the specification can help minimize redundancy and maintain consistency. Additionally, organizing specifications by localizing them within respective project repositories can improve manageability and reduce complexity.
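For instance, a payload schema can be defined once under `components` and referenced wherever it is needed; this fragment (with invented names) sketches the idea:

```yaml
components:
  schemas:
    Order:
      type: object
      properties:
        orderId:
          type: string
channels:
  orderPlaced:
    address: orders.placed
    messages:
      orderPlaced:
        payload:
          $ref: '#/components/schemas/Order'
```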
Ensuring uniformity across specifications is another best practice. This can be achieved by defining a style guide for naming conventions, file structures, and other elements within the specification. Consistency in these areas enhances readability and maintainability.
One of the future goals of the AsyncAPI initiative is to become the leading API specification, integrating seamlessly with other specifications like OpenAPI, GraphQL, and gRPC. This vision aims to unify different API standards, simplifying the development process and fostering a more cohesive ecosystem.
In conclusion, if you are utilizing asynchronous communication but haven't adopted AsyncAPI, it's a good time to standardize your interfaces. By introducing AsyncAPI, you gain well-defined asynchronous interfaces.
For more detailed insights, you can read the [full post in German](https://www.raphaeldumhart.at/blog/asyncapi-in-15-minuten/).
Disclaimer: This post was partly created with AI for summary and translation. | raphael-dumhart |
1,894,891 | Serverless Event-Driven Architectures with AWS Lambda and Amazon EventBridge | Serverless Event-Driven Architectures with AWS Lambda and Amazon EventBridge Modern... | 0 | 2024-06-20T15:05:56 | https://dev.to/virajlakshitha/serverless-event-driven-architectures-with-aws-lambda-and-amazon-eventbridge-550h | 
# Serverless Event-Driven Architectures with AWS Lambda and Amazon EventBridge
Modern application development increasingly embraces event-driven architectures for their scalability, responsiveness, and efficiency. Serverless computing takes these advantages further by abstracting away infrastructure management, allowing developers to focus on application logic. This blog post explores the synergy of serverless and event-driven paradigms using AWS Lambda and Amazon EventBridge.
### Understanding Serverless and Event-Driven Architectures
**Serverless computing** allows developers to run code without provisioning or managing servers. AWS Lambda exemplifies this by providing a fully managed environment where code execution scales automatically based on demand.
**Event-driven architectures** revolve around the production, detection, and consumption of events – occurrences within a system that signify a change in state. These events trigger asynchronous actions, decoupling components and enhancing flexibility.
### Introducing AWS Lambda and Amazon EventBridge
**AWS Lambda** forms the backbone of serverless function execution within AWS. Developers upload code as functions, which Lambda executes upon triggered events. This event-driven nature makes Lambda ideal for tasks ranging from data processing to backend logic.
**Amazon EventBridge** acts as the nervous system, providing a serverless event bus that facilitates event routing and filtering. It ingests events from various sources, including AWS services, SaaS applications, and custom applications. With rules and routing mechanisms, EventBridge ensures that events reach the appropriate consumers for processing.
### Use Cases for Event-Driven Architectures with Lambda and EventBridge
The combination of Lambda and EventBridge unlocks a wide array of use cases:
1. **Real-Time Data Processing and Analytics**
**Use Case:** Imagine a real-time analytics platform for tracking website user behavior. As users interact with the website (e.g., page views, clicks), events are generated and sent to EventBridge.
**Technical Implementation:** EventBridge rules filter and route these events to Lambda functions dedicated to specific analyses, such as session duration calculation or user segmentation. These functions can then update dashboards, trigger alerts, or feed data into analytics pipelines.
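For illustration, an EventBridge rule for this routing might match events with a pattern like the following (the `source` and `detail-type` values are invented for the example):

```json
{
  "source": ["com.example.webapp"],
  "detail-type": ["PageViewed", "LinkClicked"]
}
```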
2. **Microservices Orchestration**
**Use Case:** In a microservices architecture, different services need to communicate effectively. Event-driven patterns, facilitated by EventBridge, provide a loosely coupled solution.
**Technical Implementation:** When a service completes an action (e.g., order processing), it publishes an event to EventBridge. Other services subscribed to this event type are automatically notified and can execute their respective logic (e.g., inventory update, payment processing).
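As a sketch, the publishing side of such a flow could use the AWS SDK for Python; the event source and detail type names below are assumptions:

```python
import json
import boto3

events = boto3.client("events")

def publish_order_processed(order_id: str) -> None:
    # Publish a custom event to the default bus; EventBridge rules route it
    # to the subscribed services without the publisher knowing about them.
    events.put_events(
        Entries=[
            {
                "Source": "com.example.orders",   # assumed event source name
                "DetailType": "OrderProcessed",   # assumed detail type
                "Detail": json.dumps({"orderId": order_id}),
                "EventBusName": "default",
            }
        ]
    )
```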
3. **Application Integration**
**Use Case:** Modern applications often need to integrate with third-party services or legacy systems. EventBridge simplifies this by offering pre-built integrations with numerous SaaS applications and AWS services.
**Technical Implementation:** Consider a scenario where a new customer is added to a CRM system. EventBridge can capture this event and trigger a Lambda function to synchronize the customer data with a marketing automation platform.
4. **IoT Data Ingestion and Processing**
**Use Case:** IoT devices generate vast amounts of data. EventBridge and Lambda provide a scalable and responsive solution for handling this influx.
**Technical Implementation:** Sensors on devices can send data to EventBridge via AWS IoT Core. EventBridge rules then direct this data to Lambda functions for analysis, anomaly detection, or storage in databases.
5. **Serverless Workflows**
**Use Case:** Many business processes involve a sequence of steps. EventBridge and Lambda can orchestrate these steps in a loosely coupled and scalable manner.
**Technical Implementation:** Imagine an e-commerce order fulfillment workflow. Each stage – order placement, payment confirmation, shipping notification – can be represented as an event. EventBridge ensures that these events trigger the appropriate Lambda functions in sequence, ensuring a smooth and reliable workflow.
### Alternatives and Comparison
While AWS Lambda and EventBridge provide a robust solution for serverless event-driven architectures, other cloud providers offer comparable services:
* **Google Cloud Platform:** Google Cloud Functions (serverless functions) and Pub/Sub (event streaming) present an alternative ecosystem.
* **Microsoft Azure:** Azure Functions handle serverless computing, while Azure Event Grid provides event routing capabilities.
Key differentiators for AWS include its:
* **Maturity and Breadth of Services:** AWS has a longer history in the serverless space, resulting in a wider range of supporting services and integrations.
* **EventBridge's Partner Ecosystem:** EventBridge's pre-built integrations with SaaS and enterprise applications offer significant advantages for application integration.
### Conclusion
The synergy of AWS Lambda and Amazon EventBridge empowers developers to build highly scalable, responsive, and cost-effective event-driven applications. By abstracting away infrastructure management, these serverless technologies allow teams to focus on innovation and business logic. As the serverless landscape continues to evolve, expect further advancements in tooling and capabilities, making event-driven architectures increasingly accessible and powerful.
### Architecting an Advanced Serverless Event-Driven Solution: Real-Time Fraud Detection
**The Challenge:**
A financial institution seeks to enhance its fraud detection capabilities with a real-time system that analyzes transactions and identifies potentially fraudulent activities.
**Solution Architecture:**
1. **Data Ingestion:** Transaction data from various sources (ATMs, online banking, point-of-sale systems) is streamed into Amazon Kinesis Data Streams, a managed service for real-time data streaming.
2. **Real-time Processing with Lambda and EventBridge:**
* Kinesis Data Streams triggers Lambda functions to perform initial data transformation and enrichment (e.g., geolocation lookup based on IP address).
* Enriched transaction data is sent to EventBridge.
* EventBridge rules route transactions to different Lambda functions based on pre-defined risk factors (e.g., transaction amount, location, merchant category).
3. **Machine Learning for Fraud Scoring:**
* Specialized Lambda functions utilize pre-trained machine learning models (developed and deployed using Amazon SageMaker) to analyze transactions in real-time.
* These models assign a fraud risk score to each transaction based on learned patterns.
4. **Alerting and Actionable Insights:**
* High-risk transactions trigger alerts, notifying a dedicated fraud prevention team.
* Alerts can be delivered through Amazon SNS (Simple Notification Service) via email, SMS, or integrated with incident management systems.
* Data is persisted in a data lake (Amazon S3) and a data warehouse (Amazon Redshift) for further analysis and reporting.
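As a rough sketch of the ingestion-and-enrichment step (step 2), the Lambda triggered by Kinesis might look like this; the field names, threshold, and event source are assumptions, not the institution's actual logic:

```python
import base64
import json
import boto3

events = boto3.client("events")

def handler(event, context):
    # Kinesis delivers records base64-encoded under event["Records"].
    for record in event["Records"]:
        txn = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Placeholder enrichment; a real system would add geolocation, etc.
        txn["high_value"] = txn.get("amount", 0) > 10_000
        events.put_events(
            Entries=[{
                "Source": "fraud.ingestion",          # assumed source name
                "DetailType": "TransactionEnriched",  # assumed detail type
                "Detail": json.dumps(txn),
            }]
        )
```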
**Benefits:**
* **Real-time Fraud Detection:** The solution analyzes transactions as they occur, enabling immediate detection and response to fraudulent activities.
* **Scalability and Availability:** The serverless architecture automatically scales to handle fluctuating transaction volumes and ensures high availability.
* **Cost-Effectiveness:** By using Lambda and EventBridge, the solution avoids the need for managing servers, reducing costs and operational overhead.
* **Machine Learning Integration:** Integration with Amazon SageMaker allows for the development and deployment of sophisticated machine learning models, improving fraud detection accuracy.
This advanced use case demonstrates the power and flexibility of AWS Lambda and Amazon EventBridge for building sophisticated, real-time, and scalable event-driven architectures to address complex business challenges.
| virajlakshitha | |
1,894,877 | Twilio Challenge: Building HR Whatapp bot for your workforce | This is a submission for Twilio Challenge v24.06.12 What I Built This is whatapp HR Bot... | 0 | 2024-06-20T15:04:38 | https://dev.to/bogere/building-hr-whatapp-bot-for-your-workforce-35j1 | devchallenge, twiliochallenge, ai, twilio | ---
title: Twilio Challenge: Building HR Whatapp bot for your workforce
published: true
tags: devchallenge, twiliochallenge, ai, twilio
---
*This is a submission for [Twilio Challenge v24.06.12](https://dev.to/challenges/twilio)*
## What I Built
<!-- Share an overview about your project. -->
This is a WhatsApp HR bot that allows an HR manager to load a CSV file of employee FAQs; employees can then ask questions about it using natural language. The application uses an LLM to generate a response based on the CSV file and sends it to WhatsApp via the Twilio API.
## Demo
<!-- Share a link to your app and include some screenshots here. -->
You can try it out by scanning the QR code in the image or by texting the code "join something primitive" to the number +14155238886 on WhatsApp. Next, ask a sample question such as "What are the office hours?" or any other HR-related question.

### source code
{% github bogere/whatapp_hr_bot %}
## Twilio and AI
<!-- Tell us how you leveraged Twilio’s capabilities with AI -->
I used LangChain with the OpenAI API to load the Frequently Asked Questions (FAQs) information from the CSV file and then answer the WhatsApp user's questions, using Twilio Programmable Messaging (WhatsApp Sandbox) for the replies and Twilio Programmable Voice for sending voice call responses.
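A minimal sketch of the webhook side of this flow might look like the code below; `answer_from_faqs` stands in for the LangChain/OpenAI lookup over the FAQ CSV and is an assumed helper, not the project's actual function:

```python
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

def answer_from_faqs(question: str) -> str:
    # Placeholder for the LLM-backed lookup over the FAQ CSV.
    return "Our office hours are 9am to 5pm, Monday to Friday."

@app.route("/bot", methods=["POST"])
def bot():
    # Twilio posts the incoming WhatsApp message as form data.
    question = request.values.get("Body", "").strip()
    reply = MessagingResponse()
    reply.message(answer_from_faqs(question))
    return str(reply)
```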
## Additional Prize Categories
<!-- Does your submission qualify for any additional prize categories (Twilio Times Two, Impactful Innovators, Entertaining Endeavors)? Please list all that apply. -->
Twilio Times Two - The project uses Twilio Programmable Messaging (WhatsApp Sandbox) and Twilio Programmable Voice.
Impactful Innovators - The project empowers employees to access employee handbook information without overburdening the HR manager.
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image (if you want). -->

| bogere |
1,894,889 | "Introduction to C#: A Beginner's Guide to Getting Started" | Hello Dev community! 👋 Are you new to programming or looking to expand your skills into backend... | 0 | 2024-06-20T15:03:48 | https://dev.to/emmanuelmichael05/introduction-to-c-a-beginners-guide-to-getting-started-gnb | backenddevelopment, beginners, csharp | Hello Dev community! 👋 Are you new to programming or looking to expand your skills into backend development? In this post, I'll guide you through the basics of C#, a powerful and versatile programming language that's widely used in building applications for Windows, web, and more. Whether you're curious about coding or ready to dive into your first language, let's explore the fundamentals of C# together!
**What is C#?**
C# (pronounced as "C sharp") is a modern, object-oriented programming language developed by Microsoft. It's part of the .NET framework and is known for its simplicity, type-safety, and scalability. C# is widely used for developing desktop applications, web services, games, and much more.
**Getting Started with C#**
Setting Up Your Development Environment
To start coding in C#, you'll need:
- **Visual Studio**: A powerful IDE (Integrated Development Environment) for Windows users.
- **Visual Studio Code**: A lightweight IDE preferred by many developers for its versatility and cross-platform support.

*Caption: Visual Studio and Visual Studio Code logos.*
##### Writing Your First C# Program
Let's dive into writing your first C# program. Open your preferred IDE (e.g., Visual Studio Code) and create a new C# file.
```csharp
using System;
class Program
{
static void Main()
{
Console.WriteLine("Hello, C#! Welcome to the world of programming.");
}
}
```
*Caption: Example of a simple C# program printing a welcome message.*
#### Key Concepts in C#
##### Variables and Data Types
In C#, you declare variables to store data. Here are some common data types:
- `int`: Integer numbers (e.g., 42)
- `double`: Floating-point numbers (e.g., 3.14)
- `string`: Text (e.g., "Hello, World!")
- `bool`: Boolean values (e.g., true or false)
```csharp
int age = 25;
double pi = 3.14159;
string message = "Hello, World!";
bool isTrue = true;
```
##### Control Flow: Conditional Statements and Loops
Control flow structures allow you to control how your program executes:
- `if` statement for conditional logic.
- `for` loop for repetitive tasks.
```csharp
int number = 10;
if (number > 0)
{
Console.WriteLine("Number is positive.");
}
else
{
Console.WriteLine("Number is non-positive.");
}
for (int i = 0; i < 5; i++)
{
Console.WriteLine("Iteration: " + i);
}
```
#### Conclusion
Congratulations! You've taken your first steps into the world of C#. We've covered the basics of setting up your environment, writing your first program, and understanding key concepts like variables, data types, and control flow. Keep exploring and practicing to strengthen your skills in C# programming.
#### Next Steps
- Explore more advanced topics such as object-oriented programming, exception handling, and LINQ.
- Join online communities and forums to connect with other C# developers and share your learning journey.
#### Connect with Me
Let's continue the conversation! Connect with me here on Dev.to and share your experiences learning C#. I'm here to help and learn together.

| emmanuelmichael05 |
1,894,888 | The Basic Structure of a JavaFX Program | The abstract javafx.application.Application class defines the essential framework for writing JavaFX... | 0 | 2024-06-20T15:03:29 | https://dev.to/paulike/the-basic-structure-of-a-javafx-program-1b8d | java, programming, learning, beginners | The abstract **javafx.application.Application** class defines the essential framework for writing JavaFX programs. We begin by writing a simple JavaFX program that illustrates the basic structure of a JavaFX
program. Every JavaFX program is defined in a class that extends **javafx.application.Application**, as shown in the program below:


The **launch** method (line 23) is a static method defined in the **Application** class for launching a stand-alone JavaFX application. The **main** method (lines 22–24) is not needed if you run the program from the command line. It may be needed to launch a JavaFX program from an IDE with limited JavaFX support. When you run a JavaFX application without a main method, the JVM automatically invokes the **launch** method to run the application.
The main class overrides the **start** method defined in **javafx.application.Application** (line 9). After a JavaFX application is launched, the JVM constructs an instance of the class using its **no-arg** constructor and invokes its **start** method. The **start** method normally places UI controls in a scene and displays the scene in a stage, as shown in Figure below (a).

Line 11 creates a **Button** object and places it in a **Scene** object (line 12). A **Scene** object can be created using the constructor **Scene(node, width, height)**. This constructor specifies the width and height of the scene and places the node in the scene.
A **Stage** object is a window. A **Stage** object called _primary stage_ is automatically created by the JVM when the application is launched. Line 14 sets the scene to the primary stage and line 15 displays the primary stage. JavaFX names the **Stage** and **Scene** classes using the analogy from the theater. You may think stage as the platform to support scenes and nodes as actors to perform in the scenes.
You can create additional stages if needed. The JavaFX program in the listing below displays two stages, as shown in the figure above (b).

Note that the main method is omitted in the listing since it is identical for every JavaFX application. From now on, we will not list the **main** method in our JavaFX source code for brevity.
By default, the user can resize the stage. To prevent the user from resizing the stage, invoke **stage.setResizable(false)**. | paulike |
1,894,887 | Stop Using LocalStorage: Discover the Power of BroadcastChannel | In the world of web development, efficient communication between different parts of an application is... | 0 | 2024-06-20T15:00:59 | https://dev.to/henriqueschroeder/stop-using-localstorage-discover-the-power-of-broadcastchannel-26fe | webdev, javascript, programming |
In the world of web development, efficient communication between different parts of an application is crucial. While localStorage is widely used to share data between tabs, it has its limitations. A powerful and lesser-known alternative is the BroadcastChannel API. In this post, we'll explore what BroadcastChannel is, how to use it, practical use cases, its limitations, and best practices.
## What is BroadcastChannel?
BroadcastChannel is a JavaScript API that allows communication between different browsing contexts (tabs, windows, iframes) within the same browser and domain. With it, you can send messages simply and efficiently between these different instances, overcoming some of the limitations of localStorage.
## How Does It Work?
The operation of BroadcastChannel is quite straightforward. You create a communication channel with an identifier, and any message sent to that channel will be received by all listeners connected to it.
### Usage Example
1. **Creating the Channel**
```javascript
const channel = new BroadcastChannel('my_channel');
```
2. **Sending Messages**
```javascript
channel.postMessage('Hello, world!');
```
3. **Receiving Messages**
```javascript
channel.onmessage = (event) => {
console.log('Message received:', event.data);
};
```
### Closing the Channel
To close the channel and free up resources, use the `close()` method:
```javascript
channel.close();
```
## Use Cases
1. **State Synchronization:** Keep the application state synchronized across multiple tabs, such as updating a shopping cart in real-time.
2. **Notifications:** Send notifications or alerts between different contexts, like chat messages.
3. **Global Logout:** Implement a global logout that invalidates sessions in all tabs at once (see the sketch after this list).
4. **Media Playback Control:** Synchronize media playback between different tabs.
5. **Settings Adjustments:** Update user settings simultaneously in multiple windows.
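As a sketch of the global logout case (the channel name and login route are illustrative):

```javascript
const authChannel = new BroadcastChannel('auth');

// Any tab can trigger a logout for every open tab
function logoutEverywhere() {
  authChannel.postMessage({ type: 'LOGOUT' });
  endSession();
}

// Each tab listens and tears down its own session
authChannel.onmessage = (event) => {
  if (event.data.type === 'LOGOUT') {
    endSession();
  }
};

function endSession() {
  sessionStorage.clear();
  window.location.href = '/login';
}
```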
## Limitations
- **Compatibility:** Supported only in modern browsers, so check compatibility before using in production.
- **Same Domain:** Works only between contexts of the same domain, limiting its application to a single site.
- **Performance:** In intensive use cases, there may be performance impacts depending on the volume and frequency of messages.
## Best Practices
1. **Unique Identifiers:** Use descriptive channel identifiers to avoid collisions and confusion.
2. **Error Handling:** Always implement error handling to deal with possible communication failures.
3. **Resource Cleanup:** Close unused channels to free up resources and avoid memory leaks.
4. **Security:** Do not send sensitive information via BroadcastChannel without adequate encryption measures.
### Advanced Example: Theme Synchronization
Imagine you have an application with light and dark themes, and you want to synchronize the theme choice across different tabs.
```javascript
// Listen for theme changes
channel.onmessage = (event) => {
if (event.data.type === 'THEME_CHANGE') {
document.body.className = event.data.theme;
}
};
// Change theme and send message
function changeTheme(theme) {
document.body.className = theme;
channel.postMessage({ type: 'THEME_CHANGE', theme });
}
// Example theme change
changeTheme('dark-mode');
```
### Security Considerations
While BroadcastChannel is secure for communications within the same domain, avoid sending sensitive information such as passwords or authentication tokens without additional security measures.
### Conclusion
BroadcastChannel is a powerful tool for any web developer who needs to coordinate actions between different parts of an application. Its simplicity and efficiency make it ideal for synchronizing states and events across multiple browsing contexts. Try integrating it into your next project and see how it can simplify communication between browser tabs.
---
Translated post with the help of AI | henriqueschroeder |
1,894,571 | Understanding the Difference Between Spread and Rest Operators in JavaScript | JavaScript, a versatile and ever-evolving language, has seen significant improvements with the... | 0 | 2024-06-20T14:59:44 | https://raajaryan.tech/understanding-the-difference-between-spread-and-rest-operators-in-javascript | javascript, beginners, programming, tutorial | [](https://buymeacoffee.com/dk119819)
JavaScript, a versatile and ever-evolving language, has seen significant improvements with the introduction of ES6 (ECMAScript 2015). Among these enhancements are the spread and rest operators, denoted by the three-dot syntax (`...`). While they look identical, their purposes and functionalities are distinct. Understanding these operators is crucial for writing clean, efficient, and modern JavaScript code. In this article, we'll delve into the spread and rest operators, explore their uses, and provide practical examples to illustrate their differences.
### Understanding the Spread Operator
The spread operator allows an iterable (like an array or string) to be expanded in places where zero or more arguments (for function calls) or elements (for array literals) are expected. It's a powerful tool for handling arrays and objects, making your code more readable and concise.
#### Common Use Cases for the Spread Operator
1. **Copying Arrays**
```javascript
const originalArray = [1, 2, 3];
const copiedArray = [...originalArray];
console.log(copiedArray); // Output: [1, 2, 3]
```
2. **Merging Arrays**
```javascript
const array1 = [1, 2, 3];
const array2 = [4, 5, 6];
const mergedArray = [...array1, ...array2];
console.log(mergedArray); // Output: [1, 2, 3, 4, 5, 6]
```
3. **Expanding Strings**
```javascript
const str = "Hello";
const chars = [...str];
console.log(chars); // Output: ['H', 'e', 'l', 'l', 'o']
```
4. **Copying Objects**
```javascript
const originalObject = { a: 1, b: 2 };
const copiedObject = { ...originalObject };
console.log(copiedObject); // Output: { a: 1, b: 2 }
```
5. **Merging Objects**
```javascript
const obj1 = { a: 1, b: 2 };
const obj2 = { b: 3, c: 4 };
const mergedObject = { ...obj1, ...obj2 };
console.log(mergedObject); // Output: { a: 1, b: 3, c: 4 }
```
### Understanding the Rest Operator
The rest operator collects multiple elements and condenses them into a single element, typically an array. It's often used in function parameter lists to handle an indefinite number of arguments.
#### Common Use Cases for the Rest Operator
1. **Function Parameters**
```javascript
function sum(...numbers) {
return numbers.reduce((acc, number) => acc + number, 0);
}
console.log(sum(1, 2, 3, 4)); // Output: 10
```
2. **Destructuring Arrays**
```javascript
const [first, ...rest] = [1, 2, 3, 4];
console.log(first); // Output: 1
console.log(rest); // Output: [2, 3, 4]
```
3. **Destructuring Objects**
```javascript
const { a, b, ...rest } = { a: 1, b: 2, c: 3, d: 4 };
console.log(a); // Output: 1
console.log(b); // Output: 2
console.log(rest); // Output: { c: 3, d: 4 }
```
### Key Differences Between Spread and Rest Operators
- **Purpose**:
- **Spread**: Expands elements into individual elements.
- **Rest**: Collects multiple elements into a single array or object.
- **Usage Context**:
- **Spread**: Typically used in array and object literals, as well as function calls.
- **Rest**: Used in function parameter definitions and array/object destructuring.
- **Syntax Position**:
- **Spread**: Appears where values are expanded (e.g., in function calls or array/object literals).
- **Rest**: Appears in function parameters or destructuring patterns to gather remaining elements.
### Practical Example: Combining Spread and Rest Operators
Combining these operators can lead to elegant and powerful code. Let's see an example where both are used together.
```javascript
function greet(firstName, lastName, ...titles) {
console.log(`Hello, ${firstName} ${lastName}`);
console.log('Titles:', titles);
}
const person = ['John', 'Doe', 'Mr.', 'Dr.', 'Sir'];
greet(...person);
// Output:
// Hello, John Doe
// Titles: ['Mr.', 'Dr.', 'Sir']
```
In this example, the spread operator expands the `person` array into individual arguments for the `greet` function, while the rest operator collects additional titles into a single array.
### Conclusion
The spread and rest operators in JavaScript offer powerful capabilities for handling arrays, objects, and function arguments. Understanding the nuances and proper contexts for using each operator will help you write more concise and readable code. By mastering these tools, you'll enhance your ability to manage data structures efficiently and effectively in your JavaScript projects.
Feel free to experiment with these operators in your projects, and you'll soon discover the many ways they can simplify your code and improve its maintainability.
---
## 💰 You can help me by Donating
[](https://buymeacoffee.com/dk119819) | raajaryan |
1,894,882 | Hands-On: Automatic scaling with EKS and Cluster Autoscaler using Terraform and Helm | Introduction Automatic cluster scaling is an essential feature in... | 0 | 2024-06-20T14:59:05 | https://dev.to/aws-builders/hands-on-escalonamento-automatico-com-eks-e-cluster-autoscaler-utilizando-terraform-e-helm-51ki | devops, aws, kubernetes | **Introduction**
Automatic cluster scaling is an essential feature in cloud computing environments, especially when it comes to managing resources efficiently and economically.
In this context, the Cluster Autoscaler (CA) is a vital tool for dynamically adjusting the number of node instances in a Kubernetes cluster, ensuring that workloads have enough resources while minimizing costs.
This technical article walks through the process of configuring and using Amazon EKS and the Cluster Autoscaler with Terraform and Helm to implement automatic scaling.
---
**General information**
**_The configurations below are for test, workshop, and demo environments. Do not use them in production environments._**
If you already know the Cluster Autoscaler and want to run tests, click this [link](https://github.com/rodrigofrs13/cluster-autoscaler-terraform-helm) and use the complete repository.
If you want to follow the step-by-step guide to understand the details, follow the instructions below.
---
**Cluster Setup**
For the cluster setup, we will use a Terraform repository with the code for a basic, ready-made cluster.
Access the repository by clicking [here](https://github.com/rodrigofrs13/basic-cluster-eks-workshop); the readme contains the step-by-step instructions for the complete cluster setup.
After executing the steps, wait for completion; the output will look like the image below:

Done, the cluster setup is complete. Let's access the cluster and run some initial tests to check its health.
---
**Accessing the Cluster**
To access the cluster we will use AWS Cloud9; for its configuration, follow the article **Boosting AWS Cloud9 to Simplify Amazon EKS Administration** by clicking [here](https://medium.com/@rodrigofrs13/boosting-aws-cloud9-to-simplify-amazon-eks-administration-4c0044dff017).
After following the steps in that article, you will have Cloud9 and the Kubernetes tooling script configured.
Copy the command below, change the region and the cluster name, and run it to access the EKS cluster.
`$ aws eks --region <your-region> update-kubeconfig --name <cluster-name>`
Let's run some initial tests to verify the health of the cluster.
Collecting some information:
`kubectl cluster-info`

Checking the worker nodes:
`kubectl get nodes -o wide`

Reviewing all the resources created:
`kubectl get all -A`

With this, we can conclude that our cluster is working correctly.

---
**With the cluster configured, let's move on to the Cluster Autoscaler.**
**What is the Cluster Autoscaler**
The Cluster Autoscaler is a tool for automatic resource management in Kubernetes clusters.
It automatically adjusts the size of a Kubernetes cluster, increasing or decreasing the number of worker nodes according to the needs of the running workloads.
The Cluster Autoscaler makes its decisions based on the number of running pods and their respective resource requirements.
To learn more about the Cluster Autoscaler, see the official documentation by clicking [here](https://github.com/kubernetes/autoscaler).
---
**Installing the Cluster Autoscaler**
We will split the Cluster Autoscaler installation and configuration files into 3 parts:
- _cluster_autoscaler_iam.tf_
- _cluster_autoscaler_chart.tf_
- _cluster_autoscaler_values.yaml_
Let's start by configuring the permissions.
First, we have to get the AWS account ID and the ID of the OIDC provider created by the EKS cluster.
To get the OIDC provider ID, run the command below, changing the _cluster_name_ value.
`aws eks describe-cluster --name <cluster_name> --query "cluster.identity.oidc.issuer" --output text`
With the AWS account ID and the OIDC provider ID in hand, create the file _cluster_autoscaler_iam.tf_ and paste the code snippet below.
Remember to replace the _aws-account-id_ and _oidc-id_ placeholders.
```
# Create the IAM policy for the Cluster Autoscaler
resource "aws_iam_policy" "cluster_autoscaler_policy" {
name = "ClusterAutoscalerPolicy"
description = "Policy for Kubernetes Cluster Autoscaler"
policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Effect = "Allow",
Action = [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:DescribeLaunchConfigurations",
"autoscaling:DescribeTags",
"autoscaling:SetDesiredCapacity",
"autoscaling:TerminateInstanceInAutoScalingGroup",
"ec2:DescribeInstances",
"ec2:DescribeLaunchTemplateVersions",
"ec2:DescribeTags"
],
Resource = "*"
}
]
})
}
# Create the IAM role
resource "aws_iam_role" "cluster_autoscaler" {
name = "eks-cluster-autoscaler-role"
assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Effect = "Allow",
Principal = {
Federated = "arn:aws:iam::<aws-account-id>:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/<oidc-id>"
},
Action = "sts:AssumeRoleWithWebIdentity",
Condition = {
StringEquals = {
"oidc.eks.${var.region}.amazonaws.com/id/<oidc-id>:aud" = "sts.amazonaws.com"
"oidc.eks.${var.region}.amazonaws.com/id/<oidc-id>:sub" = "system:serviceaccount:kube-system:cluster-autoscaler"
}
}
},
],
})
}
# Create the service account
resource "kubernetes_service_account" "cluster_autoscaler" {
metadata {
name = "cluster-autoscaler"
namespace = "kube-system"
annotations = {
"eks.amazonaws.com/role-arn" = aws_iam_role.cluster_autoscaler.arn
}
}
}
# Attach the policy to the role
resource "aws_iam_role_policy_attachment" "cluster_autoscaler_policy_attachment" {
policy_arn = aws_iam_policy.cluster_autoscaler_policy.arn #"arn:aws:iam::${data.aws_caller_identity.current.account_id}:policy/ClusterAutoscalerPolicy"
role = aws_iam_role.cluster_autoscaler.name
}
# (Optional) If you are using an EC2 instance to run the Cluster Autoscaler, create a profile for the instance
resource "aws_iam_instance_profile" "cluster_autoscaler_instance_profile" {
name = "ClusterAutoscalerInstanceProfile"
role = aws_iam_role.cluster_autoscaler.name
}
```
We created an IAM policy named ClusterAutoscalerPolicy with the permissions the Cluster Autoscaler needs to work.
We created an IAM role with the trust relationship required for the OIDC provider.
We created a service account and attached the role to it.
Optionally, if you are using an EC2 instance to run the Cluster Autoscaler, create an instance profile.
Now let's configure the Helm chart. To do so, create a file named _cluster_autoscaler_chart.tf_ and paste the code snippet below:
```
resource "helm_release" "cluster_autoscaler" {
name = "cluster-autoscaler"
repository = "https://kubernetes.github.io/autoscaler"
chart = "cluster-autoscaler"
namespace = "kube-system"
timeout = 300
version = "9.34.1"
values = [
"${file("cluster_autoscaler_values.yaml")}"
]
set {
name = "autoDiscovery.clusterName"
value = data.aws_eks_cluster.cluster.name
}
set {
name = "awsRegion"
value = var.region
}
set {
name = "rbac.serviceAccount.create"
value = "false"
}
set {
name = "rbac.serviceAccount.name"
value = "cluster-autoscaler"
}
}
```
To configure the Cluster Autoscaler with advanced Helm chart options, you can adjust several parameters that control the autoscaler's behavior.
The _values.yaml_ file lets you configure options such as the minimum and maximum number of worker nodes, toleration handling, metrics, check intervals, and much more.
Now create the file _cluster_autoscaler_values.yaml_ and paste the snippet below.
We have to adjust a few parameters:
- _clusterName_ - insert the name of the EKS cluster
- _awsRegion_ - insert the AWS region
```
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# affinity -- Affinity for pod assignment
affinity: {}
# additionalLabels -- Labels to add to each object of the chart.
additionalLabels: {}
autoDiscovery:
# cloudProviders `aws`, `gce`, `azure`, `magnum`, `clusterapi` and `oci` are supported by auto-discovery at this time
# AWS: Set tags as described in https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#auto-discovery-setup
# autoDiscovery.clusterName -- Enable autodiscovery for `cloudProvider=aws`, for groups matching `autoDiscovery.tags`.
# autoDiscovery.clusterName -- Enable autodiscovery for `cloudProvider=azure`, using tags defined in https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/azure/README.md#auto-discovery-setup.
# Enable autodiscovery for `cloudProvider=clusterapi`, for groups matching `autoDiscovery.labels`.
# Enable autodiscovery for `cloudProvider=gce`, but no MIG tagging required.
# Enable autodiscovery for `cloudProvider=magnum`, for groups matching `autoDiscovery.roles`.
clusterName: cluster-workshop
# autoDiscovery.namespace -- Enable autodiscovery via cluster namespace for for `cloudProvider=clusterapi`
namespace: # default
# autoDiscovery.tags -- ASG tags to match, run through `tpl`.
tags:
- k8s.io/cluster-autoscaler/enabled
- k8s.io/cluster-autoscaler/{{ .Values.autoDiscovery.clusterName }}
# - kubernetes.io/cluster/{{ .Values.autoDiscovery.clusterName }}
# autoDiscovery.roles -- Magnum node group roles to match.
roles:
- worker
# autoDiscovery.labels -- Cluster-API labels to match https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/clusterapi/README.md#configuring-node-group-auto-discovery
labels: []
# - color: green
# - shape: circle
# autoscalingGroups -- For AWS, Azure AKS or Magnum. At least one element is required if not using `autoDiscovery`. For example:
# <pre>
# - name: asg1<br />
# maxSize: 2<br />
# minSize: 1
# </pre>
# For Hetzner Cloud, the `instanceType` and `region` keys are also required.
# <pre>
# - name: mypool<br />
# maxSize: 2<br />
# minSize: 1<br />
# instanceType: CPX21<br />
# region: FSN1
# </pre>
autoscalingGroups: []
# - name: asg1
# maxSize: 2
# minSize: 1
# - name: asg2
# maxSize: 2
# minSize: 1
# autoscalingGroupsnamePrefix -- For GCE. At least one element is required if not using `autoDiscovery`. For example:
# <pre>
# - name: ig01<br />
# maxSize: 10<br />
# minSize: 0
# </pre>
autoscalingGroupsnamePrefix: []
# - name: ig01
# maxSize: 10
# minSize: 0
# - name: ig02
# maxSize: 10
# minSize: 0
# awsAccessKeyID -- AWS access key ID ([if AWS user keys used](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#using-aws-credentials))
awsAccessKeyID: ""
# awsRegion -- AWS region (required if `cloudProvider=aws`)
awsRegion: us-east-1
# awsSecretAccessKey -- AWS access secret key ([if AWS user keys used](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#using-aws-credentials))
awsSecretAccessKey: ""
# azureClientID -- Service Principal ClientID with contributor permission to Cluster and Node ResourceGroup.
# Required if `cloudProvider=azure`
azureClientID: ""
# azureClientSecret -- Service Principal ClientSecret with contributor permission to Cluster and Node ResourceGroup.
# Required if `cloudProvider=azure`
azureClientSecret: ""
# azureResourceGroup -- Azure resource group that the cluster is located.
# Required if `cloudProvider=azure`
azureResourceGroup: ""
# azureSubscriptionID -- Azure subscription where the resources are located.
# Required if `cloudProvider=azure`
azureSubscriptionID: ""
# azureTenantID -- Azure tenant where the resources are located.
# Required if `cloudProvider=azure`
azureTenantID: ""
# azureUseManagedIdentityExtension -- Whether to use Azure's managed identity extension for credentials. If using MSI, ensure subscription ID, resource group, and azure AKS cluster name are set. You can only use one authentication method at a time, either azureUseWorkloadIdentityExtension or azureUseManagedIdentityExtension should be set.
azureUseManagedIdentityExtension: false
# azureUseWorkloadIdentityExtension -- Whether to use Azure's workload identity extension for credentials. See the project here: https://github.com/Azure/azure-workload-identity for more details. You can only use one authentication method at a time, either azureUseWorkloadIdentityExtension or azureUseManagedIdentityExtension should be set.
azureUseWorkloadIdentityExtension: false
# azureVMType -- Azure VM type.
azureVMType: "vmss"
# azureEnableForceDelete -- Whether to force delete VMs or VMSS instances when scaling down.
azureEnableForceDelete: false
# cloudConfigPath -- Configuration file for cloud provider.
cloudConfigPath: ""
# cloudProvider -- The cloud provider where the autoscaler runs.
# Currently only `gce`, `aws`, `azure`, `magnum` and `clusterapi` are supported.
# `aws` supported for AWS. `gce` for GCE. `azure` for Azure AKS.
# `magnum` for OpenStack Magnum, `clusterapi` for Cluster API.
cloudProvider: aws
# clusterAPICloudConfigPath -- Path to kubeconfig for connecting to Cluster API Management Cluster, only used if `clusterAPIMode=kubeconfig-kubeconfig or incluster-kubeconfig`
clusterAPICloudConfigPath: /etc/kubernetes/mgmt-kubeconfig
# clusterAPIConfigMapsNamespace -- Namespace on the workload cluster to store Leader election and status configmaps
clusterAPIConfigMapsNamespace: ""
# clusterAPIKubeconfigSecret -- Secret containing kubeconfig for connecting to Cluster API managed workloadcluster
# Required if `cloudProvider=clusterapi` and `clusterAPIMode=kubeconfig-kubeconfig,kubeconfig-incluster or incluster-kubeconfig`
clusterAPIKubeconfigSecret: ""
# clusterAPIMode -- Cluster API mode, see https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/clusterapi/README.md#connecting-cluster-autoscaler-to-cluster-api-management-and-workload-clusters
# Syntax: workloadClusterMode-ManagementClusterMode
# for `kubeconfig-kubeconfig`, `incluster-kubeconfig` and `single-kubeconfig` you always must mount the external kubeconfig using either `extraVolumeSecrets` or `extraMounts` and `extraVolumes`
# if you dont set `clusterAPIKubeconfigSecret`and thus use an in-cluster config or want to use a non capi generated kubeconfig you must do so for the workload kubeconfig as well
clusterAPIMode: incluster-incluster # incluster-incluster, incluster-kubeconfig, kubeconfig-incluster, kubeconfig-kubeconfig, single-kubeconfig
# clusterAPIWorkloadKubeconfigPath -- Path to kubeconfig for connecting to Cluster API managed workloadcluster, only used if `clusterAPIMode=kubeconfig-kubeconfig or kubeconfig-incluster`
clusterAPIWorkloadKubeconfigPath: /etc/kubernetes/value
# containerSecurityContext -- [Security context for container](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/)
containerSecurityContext: {}
# capabilities:
# drop:
# - ALL
deployment:
# deployment.annotations -- Annotations to add to the Deployment object.
annotations: {}
# dnsPolicy -- Defaults to `ClusterFirst`. Valid values are:
# `ClusterFirstWithHostNet`, `ClusterFirst`, `Default` or `None`.
# If autoscaler does not depend on cluster DNS, recommended to set this to `Default`.
dnsPolicy: ClusterFirst
# envFromConfigMap -- ConfigMap name to use as envFrom.
envFromConfigMap: ""
# envFromSecret -- Secret name to use as envFrom.
envFromSecret: ""
## Priorities Expander
# expanderPriorities -- The expanderPriorities is used if `extraArgs.expander` contains `priority` and expanderPriorities is also set with the priorities.
# If `extraArgs.expander` contains `priority`, then expanderPriorities is used to define cluster-autoscaler-priority-expander priorities.
# See: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/expander/priority/readme.md
expanderPriorities: {}
# extraArgs -- Additional container arguments.
# Refer to https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca for the full list of cluster autoscaler
# parameters and their default values.
# Everything after the first _ will be ignored allowing the use of multi-string arguments.
extraArgs:
logtostderr: true
stderrthreshold: info
v: 4
# write-status-configmap: true
# status-config-map-name: cluster-autoscaler-status
# leader-elect: true
# leader-elect-resource-lock: endpoints
# skip-nodes-with-local-storage: true
# expander: random
# scale-down-enabled: true
# balance-similar-node-groups: true
# min-replica-count: 0
# scale-down-utilization-threshold: 0.5
# scale-down-non-empty-candidates-count: 30
# max-node-provision-time: 15m0s
# scan-interval: 10s
# scale-down-delay-after-add: 10m
# scale-down-delay-after-delete: 0s
# scale-down-delay-after-failure: 3m
# scale-down-unneeded-time: 10m
# skip-nodes-with-system-pods: true
# balancing-ignore-label_1: first-label-to-ignore
# balancing-ignore-label_2: second-label-to-ignore
# extraEnv -- Additional container environment variables.
extraEnv: {}
# extraEnvConfigMaps -- Additional container environment variables from ConfigMaps.
extraEnvConfigMaps: {}
# extraEnvSecrets -- Additional container environment variables from Secrets.
extraEnvSecrets: {}
# extraVolumeMounts -- Additional volumes to mount.
extraVolumeMounts: []
# - name: ssl-certs
# mountPath: /etc/ssl/certs/ca-certificates.crt
# readOnly: true
# extraVolumes -- Additional volumes.
extraVolumes: []
# - name: ssl-certs
# hostPath:
# path: /etc/ssl/certs/ca-bundle.crt
# extraVolumeSecrets -- Additional volumes to mount from Secrets.
extraVolumeSecrets: {}
# autoscaler-vol:
# mountPath: /data/autoscaler/
# custom-vol:
# name: custom-secret
# mountPath: /data/custom/
# items:
# - key: subkey
# path: mypath
# fullnameOverride -- String to fully override `cluster-autoscaler.fullname` template.
fullnameOverride: ""
# hostNetwork -- Whether to expose network interfaces of the host machine to pods.
hostNetwork: false
image:
# image.repository -- Image repository
repository: registry.k8s.io/autoscaling/cluster-autoscaler
# image.tag -- Image tag
tag: v1.30.0
# image.pullPolicy -- Image pull policy
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# image.pullSecrets -- Image pull secrets
pullSecrets: []
# - myRegistrKeySecretName
# kubeTargetVersionOverride -- Allow overriding the `.Capabilities.KubeVersion.GitVersion` check. Useful for `helm template` commands.
kubeTargetVersionOverride: ""
# kwokConfigMapName -- configmap for configuring kwok provider
kwokConfigMapName: "kwok-provider-config"
# magnumCABundlePath -- Path to the host's CA bundle, from `ca-file` in the cloud-config file.
magnumCABundlePath: "/etc/kubernetes/ca-bundle.crt"
# magnumClusterName -- Cluster name or ID in Magnum.
# Required if `cloudProvider=magnum` and not setting `autoDiscovery.clusterName`.
magnumClusterName: ""
# nameOverride -- String to partially override `cluster-autoscaler.fullname` template (will maintain the release name)
nameOverride: ""
# nodeSelector -- Node labels for pod assignment. Ref: https://kubernetes.io/docs/user-guide/node-selection/.
nodeSelector: {}
# podAnnotations -- Annotations to add to each pod.
podAnnotations:
cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
# podDisruptionBudget -- Pod disruption budget.
podDisruptionBudget:
maxUnavailable: 1
# minAvailable: 2
# podLabels -- Labels to add to each pod.
podLabels: {}
# priorityClassName -- priorityClassName
priorityClassName: "system-cluster-critical"
# priorityConfigMapAnnotations -- Annotations to add to `cluster-autoscaler-priority-expander` ConfigMap.
priorityConfigMapAnnotations: {}
# key1: "value1"
# key2: "value2"
## Custom PrometheusRule to be defined
## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart
## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions
prometheusRule:
# prometheusRule.enabled -- If true, creates a Prometheus Operator PrometheusRule.
enabled: false
# prometheusRule.additionalLabels -- Additional labels to be set in metadata.
additionalLabels: {}
# prometheusRule.namespace -- Namespace which Prometheus is running in.
namespace: monitoring
# prometheusRule.interval -- How often rules in the group are evaluated (falls back to `global.evaluation_interval` if not set).
interval: null
# prometheusRule.rules -- Rules spec template (see https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#rule).
rules: []
rbac:
# rbac.create -- If `true`, create and use RBAC resources.
create: true
# rbac.pspEnabled -- If `true`, creates and uses RBAC resources required in the cluster with [Pod Security Policies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/) enabled.
# Must be used with `rbac.create` set to `true`.
pspEnabled: false
# rbac.clusterScoped -- if set to false will only provision RBAC to alter resources in the current namespace. Most useful for Cluster-API
clusterScoped: true
serviceAccount:
# rbac.serviceAccount.annotations -- Additional Service Account annotations.
annotations: {}
# rbac.serviceAccount.create -- If `true` and `rbac.create` is also true, a Service Account will be created.
create: true
# rbac.serviceAccount.name -- The name of the ServiceAccount to use. If not set and create is `true`, a name is generated using the fullname template.
name: ""
# rbac.serviceAccount.automountServiceAccountToken -- Automount API credentials for a Service Account.
automountServiceAccountToken: true
# replicaCount -- Desired number of pods
replicaCount: 1
# resources -- Pod resource requests and limits.
resources: {}
# limits:
# cpu: 100m
# memory: 300Mi
# requests:
# cpu: 100m
# memory: 300Mi
# revisionHistoryLimit -- The number of revisions to keep.
revisionHistoryLimit: 10
# securityContext -- [Security context for pod](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/)
securityContext: {}
# runAsNonRoot: true
# runAsUser: 1001
# runAsGroup: 1001
service:
# service.create -- If `true`, a Service will be created.
create: true
# service.annotations -- Annotations to add to service
annotations: {}
# service.labels -- Labels to add to service
labels: {}
# service.externalIPs -- List of IP addresses at which the service is available. Ref: https://kubernetes.io/docs/user-guide/services/#external-ips.
externalIPs: []
# service.loadBalancerIP -- IP address to assign to load balancer (if supported).
loadBalancerIP: ""
# service.loadBalancerSourceRanges -- List of IP CIDRs allowed access to load balancer (if supported).
loadBalancerSourceRanges: []
# service.servicePort -- Service port to expose.
servicePort: 8085
# service.portName -- Name for service port.
portName: http
# service.type -- Type of service to create.
type: ClusterIP
## Are you using Prometheus Operator?
serviceMonitor:
# serviceMonitor.enabled -- If true, creates a Prometheus Operator ServiceMonitor.
enabled: false
# serviceMonitor.interval -- Interval that Prometheus scrapes Cluster Autoscaler metrics.
interval: 10s
# serviceMonitor.namespace -- Namespace which Prometheus is running in.
namespace: monitoring
## [Prometheus Selector Label](https://github.com/helm/charts/tree/master/stable/prometheus-operator#prometheus-operator-1)
## [Kube Prometheus Selector Label](https://github.com/helm/charts/tree/master/stable/prometheus-operator#exporters)
# serviceMonitor.selector -- Default to kube-prometheus install (CoreOS recommended), but should be set according to Prometheus install.
selector:
release: prometheus-operator
# serviceMonitor.path -- The path to scrape for metrics; autoscaler exposes `/metrics` (this is standard)
path: /metrics
# serviceMonitor.annotations -- Annotations to add to service monitor
annotations: {}
## [RelabelConfig](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.RelabelConfig)
# serviceMonitor.metricRelabelings -- MetricRelabelConfigs to apply to samples before ingestion.
metricRelabelings: {}
# tolerations -- List of node taints to tolerate (requires Kubernetes >= 1.6).
tolerations: []
# topologySpreadConstraints -- You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. (requires Kubernetes >= 1.19).
topologySpreadConstraints: []
# - maxSkew: 1
# topologyKey: topology.kubernetes.io/zone
# whenUnsatisfiable: DoNotSchedule
# labelSelector:
# matchLabels:
# app.kubernetes.io/instance: cluster-autoscaler
# updateStrategy -- [Deployment update strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy)
updateStrategy: {}
# rollingUpdate:
# maxSurge: 1
# maxUnavailable: 0
# type: RollingUpdate
# vpa -- Configure a VerticalPodAutoscaler for the cluster-autoscaler Deployment.
vpa:
# vpa.enabled -- If true, creates a VerticalPodAutoscaler.
enabled: false
# vpa.updateMode -- [UpdateMode](https://github.com/kubernetes/autoscaler/blob/vertical-pod-autoscaler/v0.13.0/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go#L124)
updateMode: "Auto"
# vpa.containerPolicy -- [ContainerResourcePolicy](https://github.com/kubernetes/autoscaler/blob/vertical-pod-autoscaler/v0.13.0/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go#L159). The containerName is always et to the deployment's container name. This value is required if VPA is enabled.
containerPolicy: {}
# secretKeyRefNameOverride -- Overrides the name of the Secret to use when loading the secretKeyRef for AWS and Azure env variables
secretKeyRefNameOverride: ""
```
Some settings that can be customized in the values file:
- _autoDiscovery_: Sets the cluster name for auto-discovery of Auto Scaling groups.
- _extraArgs_: Defines additional arguments for the Cluster Autoscaler, such as scaling policies and thresholds.
- _rbac_: Configures the service account and the RBAC permissions.
- _image_: Sets the version of the Cluster Autoscaler image.
- _resources_: Specifies the resource requests and limits for the Cluster Autoscaler pod.
- _nodeSelector, tolerations, affinity_: Settings that control where the Cluster Autoscaler pods can be scheduled.
- _replicaCount_: Sets the number of Cluster Autoscaler replicas.
- _podAnnotations_: Adds annotations to the Cluster Autoscaler pod.
After creating all the files above, apply them with Terraform by running the command below:
`terraform apply --auto-approve`
Watch the Cluster Autoscaler logs to check whether the deployment succeeded.
`kubectl -n kube-system logs -f deployment/cluster-autoscaler-aws-cluster-autoscaler`
If everything is fine, the Cluster Autoscaler is operational and ready for scaling tests.
---
**Testing automatic scaling**
Let's start the automatic scaling tests. For this we will gather some information, create a few resources, and follow the results.
Check the current number of worker nodes with the command below:
`kubectl get nodes`

Notice that at this moment we have only 1 worker node available.
Let's create a deployment for the stress tests.
Create a file named _cpu-stress-deployment.yaml_ and paste the code below:
```
apiVersion: apps/v1
kind: Deployment
metadata:
name: cpu-stress
spec:
replicas: 5
selector:
matchLabels:
app: cpu-stress
template:
metadata:
labels:
app: cpu-stress
spec:
containers:
- name: cpu-stress
image: vish/stress
resources:
requests:
cpu: "1"
args:
- -cpus
- "1"
```
Apply the deployment with the command:
`kubectl apply -f cpu-stress-deployment.yaml`
Watch the behavior of the Cluster Autoscaler, which should increase the number of worker nodes to accommodate the additional workload.
Follow the Cluster Autoscaler logs.
`kubectl -n kube-system logs -f deployment/cluster-autoscaler-aws-cluster-autoscaler`
Watch the worker nodes scaling out; note that several worker nodes were provisioned to accommodate the new workload.
`kubectl get nodes`
Let's simulate the reduction of the excess workload, returning the environment to normal.
Scale the deployment's pod count down to zero and watch the worker nodes being deprovisioned from the infrastructure, returning it to its original state.
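For example, scale the stress deployment back to zero replicas with:
`kubectl scale deployment cpu-stress --replicas=0`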
---
**Conclusão**
Using Terraform and Helm to set up an EKS cluster and the Cluster Autoscaler provides a robust, automated solution for managing the scalability of Kubernetes clusters.
This detailed article provides the steps needed to implement and manage automatic scaling, ensuring that resources are used efficiently and economically.
With these tools, you can optimize costs and improve the performance of your applications in an AWS-managed Kubernetes environment. | rodrigofrs13 |
1,894,886 | Day1-100 series - Learning System Design for Interviews. | Day2 -> Learning Databases Remember 90% of battle is won, when you select correct... | 0 | 2024-06-20T14:57:41 | https://dev.to/taniskannpurna/day1-100-series-learning-system-design-for-interviews-13f3 | database, sql, nosql, systemdesign | ## **Day2 -> Learning Databases**
- Remember, 90% of the battle is won when you select the correct database for your system.
There are two types of databases: Relational (SQL) and Non-Relational (NoSQL). In today's post we will learn about relational databases, ACID properties, and scaling them.
**RELATIONAL DBs**
- Relational DBs were originally created for financial purposes. In the '90s, financial companies had ledgers that were filled in by people to maintain financial records, which was tiresome as well as error-prone.
- So relational DBs were inspired by those ledgers, with their rows and columns. They had to uphold all the properties those ledgers had.
- All the properties provided by relational DBs can be generalised as the ACID properties:
1. A -> Atomicity
2. C -> Consistency
3. I -> Isolation
4. D -> Durability
**ATOMICITY**
- This simply means that if you have multiple queries under a transaction, either all of them will be executed or none of them will be.
Eg. Let's say Person A is transferring ₹50 to Person B. Now we have two statements to execute: one deducts ₹50 from Person A, and the other adds ₹50 to Person B. Either both of them must be executed or neither should be, to keep the data consistent.
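A minimal sketch in SQL, assuming a hypothetical `accounts(id, balance)` table:
```sql
BEGIN;

-- deduct 50 from Person A
UPDATE accounts SET balance = balance - 50 WHERE id = 'A';

-- add 50 to Person B
UPDATE accounts SET balance = balance + 50 WHERE id = 'B';

-- both updates become visible together; if anything fails, a ROLLBACK undoes both
COMMIT;
```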
**CONSISTENCY**
- This simply means that relational DBs provide lots of tools and features to make sure the data stays consistent. Some of these features are _foreign keys_, _constraints_, _cascades_, _triggers_, and many more.
**ISOLATION**
- This simply means that relational DBs provide multiple levels of visibility between two concurrent transactions.
Eg. Let's say Person A and Person B are running transactions at the same time on the same data. Depending on our requirements, we can decide when, where, and how much visibility each transaction has into the other's changes.
Isolation has 4 levels:
1. **<u>READ UNCOMMITTED</u>**
- This isolation level allows dirty reads, i.e., a transaction may see uncommitted changes made by some other transaction. This means values in the data can change, and rows can appear or disappear in the data set, before the transaction completes.
- In general, don’t use READ UNCOMMITTED. The one and only time it ever makes sense is when you are reading data that will never be modified in any way. For example, if you need to scan a large volume of data to generate high-level analytics or summaries, and absolute moment-of-query accuracy is not critical.
2. **<u>READ COMMITTED</u>**
- It is the default isolation level in Postgres, and in older SQL databases in general.
- It restricts the reader from seeing any intermediate, uncommitted, ‘dirty’ read during a transaction and guarantees that any data read was committed at the moment it was read.
3. **<u>REPEATABLE READ</u>**
- REPEATABLE READ is the ideal isolation level for read-only transactions.
- REPEATABLE READ would be a good choice for a financial app that calculates the total balance of a user’s accounts. This isolation level ensures that if a row is read twice in the same transaction, it will return the same value each time, preventing the nonrepeatable read anomaly mentioned above. In this case, when calculating a user’s total balance, REPEATABLE READ guarantees that the balance does not change during the calculation process due to concurrent updates.
4. **<u>SERIALIZABLE</u>**
- SERIALIZABLE is the highest level of isolation (and the default isolation level in CockroachDB).
- Transactions are completely isolated from each other, effectively serializing access to the database to prevent dirty reads, non-repeatable reads, and phantom reads.
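To opt into one of these levels explicitly, most SQL databases accept a statement like the following at the start of a transaction (PostgreSQL/MySQL-style syntax; the table is hypothetical):
```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- every read inside this transaction now sees the same snapshot of the data
SELECT SUM(balance) FROM accounts;

COMMIT;
```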
**SCALING DBs**
- There are 2 types of scaling - VERTICAL & HORIZONTAL.
**VERTICAL SCALING**
- Vertical scaling is simpler and should always be the first option in terms of scaling.
- Vertical scaling means adding more power, such as increasing RAM, CPU/processor power, memory, etc.
- Vertical scaling is limited by hardware; whatever machine is used will have a certain hardware limit.
**HORIZONTAL SCALING**
- We know that roughly 90% of the time we perform read operations. So, for these read operations, we can have a separate DB that handles only reads.
- We can also do sharding, partitioning, and many more things for scaling.
We will cover SHARDING, REPLICATION, and PARTITIONING in the next blog.
If you love posts like this and think it's beneficial, do follow, and if anything is confusing, post your questions in the comments.
| taniskannpurna |
1,894,885 | Transactional Email vs Marketing Email (+ Examples) | Transactional emails and marketing emails — you have probably heard these terms before. However, the... | 0 | 2024-06-20T14:56:34 | https://sidemail.io/articles/transactional-email-vs-marketing-email/ | webdev, saas, email, beginners | Transactional emails and marketing emails — you have probably heard these terms before. However, the definitions are not always very clear, which might leave you unsure about the differences and why you should take the time to differentiate them. In this article, I’ll simplify these terms as much as possible, help you cover the basics of transactional vs. marketing emails, show you some examples of each, and answer frequently asked questions.
### What is a transactional email?
**Transactional emails are emails that are sent as a direct response to a user’s actions in an application, service, or website.** Single sign-on, password reset, welcome email, order confirmation, receipt — these all are examples of transactional emails.
### What is a marketing email?
**Marketing emails** (also known as promotional emails, commercial emails, newsletters, broadcast emails, or bulk emails) **are emails that a company member, usually a marketing person, sends to a large group of contacts who have agreed to receive promotional messages from the company.** Examples of marketing emails include product updates, weekly/monthly newsletters, announcements, and sale or promo offers.
### What is the difference between transactional email vs marketing email?
Now that we know the definitions of transactional and marketing emails, let’s go through their differences.

*Transactional vs Marketing email differences (source:* [*Sidemail.io*](https://sidemail.io/articles/transactional-email-vs-marketing-email/)*)*
The main differences between transactional emails and marketing emails are:
* **Trigger ⚡** — Transactional emails are triggered by a specific user action in your application, service, website, or similar. Marketing emails are sent to a group of contacts, usually by a marketing employee of the company.
* **Opt-in 🤝** — Marketing emails, in contrast to transactional emails, require explicit opt-in by a recipient. Marketing emails are also regulated by GDPR, CAN-SPAM, and other regulations.
* **Email content 💌** — The content of a transactional email is customized to the specific user and usually contains unique information, links, or data for the user. It’s direct one-on-one communication. On the other hand, the content of a marketing email is rather general and promotional, often as part of a marketing campaign.
* **Unsubscribe link 📫** — All marketing emails must contain an unsubscribe link, giving the recipient the right to opt-out from receiving further promotional emails from the company at any time. Transactional emails do not need unsubscribe links.
* **Delivery speed and time ⌛** — Transactional emails are part of the critical architecture of an application, service, or website. Therefore, deliverability rates and speed are crucial. A [good transactional email provider](https://sidemail.io/articles/best-transactional-email-platform/) should deliver emails reliably and within a few milliseconds. For marketing emails, the delivery speed is generally less critical. Marketing emails are rather optimized to reach the recipient at the most convenient time, often using timezone-based delivery for optimization.
* **Engagement and open rates 📈** — Transactional emails generally have higher engagement and open rates than marketing emails. On average, transactional emails tend to have an open rate between 40% and 60% due to their nature of being triggered by user actions and containing important information that the recipient is expecting or needs to access. In contrast, marketing emails usually have an open rate ranging from 20% to 30%, though this can vary widely based on factors such as industry, target audience, email content, and the quality of the email list.
### Why do I need to differentiate between transactional and marketing emails?
As introduced in the previous section, transactional emails and marketing emails have different specifics, are sent with different purposes, and need to comply with different regulations. Therefore, companies must have knowledge regarding the specifics of transactional and marketing emails to comply with email regulations and ensure the best possible experience for their users.
By separating transactional and marketing emails, you can **achieve better deliverability results**. For transactional emails, deliverability speed is crucial, so they should always have the highest priority for email sending. However, marketing emails also need to be reliably delivered.
Also, when you separate transactional and marketing emails, **email service providers can better identify and categorize your emails** in users’ inboxes.
Additionally, separating transactional and marketing emails **simplifies troubleshooting and potential debugging**, while also saving time for your support team.
#### And how to separate transactional and marketing email sending?
Generally, the best practice is to separate your emails by using different “from” names, “from” addresses, and IP addresses. Today, there are email sending providers that can help you easily manage this. For example, Sidemail manages all the separation, unsubscribe link handling, and email delivery for you, and provides you with industry best practices. You can check out [Sidemail and start a free trial here](https://sidemail.io/).
### Do transactional emails need to contain an unsubscribe link?
**No, transactional emails do not need to contain an unsubscribe link.** As explained earlier, transactional emails are sent in response to a user’s action in your application, service, or website. Therefore, if you’re sending transactional emails such as password resets, single sign-on confirmations, receipts, or similar messages, they do not need to include an unsubscribe link.
### Do marketing emails need to contain an unsubscribe link?
**Yes, all marketing emails must contain an unsubscribe link.** Therefore, if you’re sending newsletters, product updates, discount offers, or similar promotional emails, you are required to include an unsubscribe link. Also, it’s important to keep in mind that to send marketing emails to a recipient, you need to have their explicit opt-in to receive such marketing messages from your company.
### Transactional email examples
Now that we understand the definition and specifics of transactional emails, let’s explore some examples.
#### Password reset email
Perhaps the most recognized transactional email, the password reset email is sent to a user after they request to change their password. This email verifies the user’s action and allows them to proceed with updating their password securely.
#### Single-sign-on email (SSO)
For applications or services offering passwordless login, the single sign-on email facilitates user login. Users enter their email, request an SSO email, and receive a login link via your transactional email provider. Clicking the link logs them into your application or service.
#### Receipt, Order confirmation, Subscription activation emails
After a successful payment, it’s good practice to send your user an email with a receipt, order confirmation, or subscription activation. You can also use these emails to thank users, build trust, and foster good relationships.
#### Registration & Welcome email
Registration and welcome emails are crucial for making a good first impression. They can help you establish a great relationship with users, guide them through their first steps in your application or service, or offer assistance if they need it.
#### Account activation email & Email address verification
If you have a SaaS platform, you’ll probably want to verify your users’ email addresses before they get started. To do this, you can use an [email API](https://sidemail.io/articles/what-is-email-api/) to send an activation email. Users verify their email address by clicking the link inside this email.
#### Dunning emails
Dunning emails notify users about issues with payments. These emails are typical for SaaS and play an important role in churn prevention, building trust, and managing subscriptions. Examples of dunning emails include:
* Payment attempt failed
* Subscription renewal issue
* Expired credit card information
* Authorization issues
### Marketing email examples
And now, let’s look at some examples of marketing emails.
#### Newsletters & Product updates
Have you introduced a new feature, improved your application, or recorded a new tutorial video? These are amazing opportunities to reach out to your users and audience, sharing what you’re up to. You can also use newsletters to ask for their thoughts and collect feedback.
#### Discounts & special offers
Companies often offer special discounts for celebrations and holidays like Black Friday, New Year, and Summer holidays. Email marketing is a great way to spread the word about your offers. You can also combine it with discount codes, buy-more-save-more offers, and other strategies depending on your resources and creativity.
### Need to help with transactional and marketing emails?
Do you need help with your transactional and marketing emails? We’re here for you! We’re experts on email delivery and enjoy working with clients directly. Whether you need help separating transactional and marketing emails, managing and delivering emails for your service, integrating transactional emails into your application, or anything else related to emails, we’ve got you covered. You can reach out to me directly at [kristyna@sidemail.io](mailto:kristyna@sidemail.io) or check [Sidemail](https://sidemail.io/), our email platform.
### 👉 Try Sidemail today
Dealing with emails and choosing the right email API service is not easy. We will help you to simplify it as much as possible. Create your account now and **start sending your emails in under 30 minutes**.
[**Start 7-day free trial →**](https://sidemail.io/) | k_vrbova |
1,894,857 | 먹튀로얄 | Meogtwi Royale Toto Site: the best toto community covering casino, baccarat, and slot sites Meogtwi Royale Toto Site The Meogtwi Royale toto site is a popular... | 0 | 2024-06-20T14:12:35 | https://dev.to/alemtroy08/meogtwiroyal-2dg3 | | Meogtwi Royale Toto Site: The Best Toto Community Covering Casino, Baccarat, and Slot Sites
Meogtwi Royale Toto Site (먹투로얄토토사이트)
The Meogtwi Royale toto site is a popular platform offering a variety of casino games and a toto (sports betting) community. The site gives users a safe and reliable gaming environment, which has made it very popular among players. In particular, it has earned users' trust through a thorough security system and fair game operation.
**_[먹튀로얄](https://mtroyale.com/)_**
Casino Sites
The casino section of the Meogtwi Royale toto site offers a wide range of game options so users can freely enjoy the games they want. Traditional casino games such as slot machines, blackjack, and roulette are included, along with the latest releases, satisfying every player's taste. The casino provides a high level of security and fast transaction processing so users can play with peace of mind.
Baccarat Sites
The site also offers a variety of baccarat games loved by many fans. Baccarat is a simple yet thrilling game that attracts a lot of interest, and the site delivers it in full. Through interaction with live dealers, users can enjoy a real-casino experience at home, and the wide range of betting options keeps satisfaction high.
Slot Sites
Slot games are among the most popular games on the Meogtwi Royale toto site. Slot machines with a variety of themes and features provide endless fun. The site continuously adds the latest slot games so users always have something new to experience, and generous bonuses and jackpot opportunities add to the appeal.
Toto Community
Beyond the games themselves, the Meogtwi Royale toto site provides a strong toto community where users can interact and share information. The community offers a variety of toto information and tips to help users build better betting strategies. Thorough management to prevent 'meogtwi' (scam sites) and the provision of reliable information are the community's major strengths.
먹튀로얄 (Meogtwi Royale)
The Meogtwi Royale toto site runs a thorough management system to prevent scams. It provides a safe transaction environment to protect users' valuable assets and maintains a trustworthy operating policy. Meogtwi Royale continuously improves its systems based on user feedback to keep service quality at its best.
Conclusion
The Meogtwi Royale toto site covers a variety of games, including casino, baccarat, and slots, and provides a safe, reliable environment. Its strong toto community and thorough scam-prevention system make it a top platform that users can enjoy with confidence. With continuous updates and a high level of security, the Meogtwi Royale toto site will remain a platform loved by many users. | alemtroy08 | |
1,894,884 | Deep Dive into Caching: Techniques for High-Performance Web Apps | Normal Developers: Wait Caching I know what's that, just saving information locally. I mean you are... | 0 | 2024-06-20T14:55:27 | https://dev.to/nayanraj-adhikary/deep-dive-into-caching-techniques-for-high-performance-web-apps-56kb | webdev, javascript, programming | Normal Developers: "Wait, caching? I know what that is, just saving information locally."
I mean, you are correct, in a way, but...
In today's fast-paced digital world, users expect web applications to be fast and responsive. One of the key techniques to achieve high performance in web applications is caching. Caching can drastically reduce load times, decrease server load, and enhance the overall user experience.
Let's start with the basics.
## Caching
Caching is the process of storing copies of files or data in a temporary storage location so that they can be accessed more quickly. When a request is made, the system first checks the cache; if the requested data is present, it can be served immediately without needing to retrieve it from the original source. This process reduces the time and resources required to deliver the data.
### When to Cache?
- Data that doesn't change frequently can be cached.
### Types of Caching
#### 1. Client-Side Caching:
- **Browser Caching** : Browsers store static assets like HTML, CSS, JavaScript, and images locally. Using cache control headers, you can define how long assets should be cached.
```
Cache-Control: max-age=3600
```
The browser already applies multiple caching techniques on its own; most `GET` responses are cached by default.
#### 2. Server-Side Caching:
- **HTTP Caching** : Utilize HTTP headers such as ETag, Cache-Control, Expires, and Last-Modified to control caching behavior.
- **Content Delivery Network (CDN)**: CDNs cache your content at various geographically distributed servers, reducing latency and improving load times for users around the globe.
> Here's a bit of background on CDNs:
[Jio Cinema](https://www.jiocinema.com) is a streaming platform. Whenever an IPL (Indian Premier League) match is on and the servers are under heavy load, the response for a user's home screen is cached on both the CDN and the client.
- **Reverse Proxy Cache** : Tools like Nginx can act as intermediaries that cache responses from your server, reducing the load on your web server.
### 3. Database Caching
- **Query Caching**: Store the results of expensive database queries to speed up subsequent requests. Most database systems, like MySQL and PostgreSQL, offer built-in query caching. A related idea is indexing the primary key, which builds a lookup structure (typically a B-tree or hash map) pointing to the location of the data.
- **Object Caching**: Use in-memory data stores like Redis or Memcached to cache objects retrieved from a database, reducing the need to perform repetitive and expensive queries.
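As an illustration, here is a minimal cache-aside sketch in Node.js; it assumes the `ioredis` client and uses a stand-in function for the expensive query:
```typescript
import Redis from "ioredis";

const redis = new Redis(); // assumes a Redis instance on localhost:6379

// stand-in for an expensive database query
async function fetchUserFromDb(id: string) {
  return { id, name: "example user" };
}

async function getUser(id: string) {
  const key = `user:${id}`;

  const cached = await redis.get(key); // 1. check the cache first
  if (cached) return JSON.parse(cached); // cache hit: skip the database

  const user = await fetchUserFromDb(id); // 2. cache miss: query the database
  await redis.set(key, JSON.stringify(user), "EX", 300); // 3. store for 5 minutes
  return user;
}
```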
### 4. Application-Level Caching:
- **Page Caching**: Store the entire rendered HTML of pages that don't change frequently.
- **Fragment/Component Caching**: Cache parts of a page (like sidebar widgets) that change less frequently than the main content.
- **Data Caching**: Cache expensive data computations or API calls.
There are many more techniques for implementing these caching strategies.
Thanks for reading this simple and short blog about caching in web applications. Follow to learn the real magic of programming and keep me motivated.
| nayanraj-adhikary |
1,895,093 | Free Figma Workshop: Create Your Prototype From Scratch | Join the "Figma in 2 Days: From Zero to Creating Your Prototype" workshop and learn to use the... | 0 | 2024-06-23T13:50:06 | https://guiadeti.com.br/workshop-figma-gratuito-crise-seu-prototipo/ | semcategoria, cursosgratuitos, design, designgrafico | ---
title: Free Figma Workshop: Create Your Prototype From Scratch
published: true
date: 2024-06-20 14:54:01 UTC
tags: Semcategoria,cursosgratuitos,design,designgrafico
canonical_url: https://guiadeti.com.br/workshop-figma-gratuito-crise-seu-prototipo/
---
Join the "Figma in 2 Days: From Zero to Creating Your Prototype" workshop and learn to use the most famous tool in UX/UI Design.
From your first login to the development of a complete prototype, you will learn the essential commands and understand the usefulness of this software, indispensable for anyone who works with digital products.
Get to know Figma's main features and resources. Download the free version and put into practice the valuable tips from William de Almeida, Senior UX Architect and tutor at EBAC. There will be two days of free, live classes offered by EBAC.
## Figma in 2 Days Workshop: From Zero to Creating Your Prototype
Join the "Figma in 2 Days: From Zero to Creating Your Prototype" workshop and learn the essential commands of the most famous UX/UI Design tool.

_Image from the event page_
From your first login to the development of a complete prototype, this workshop covers everything you need to know to start using Figma.
### Who This Is For
This workshop is ideal for:
- Anyone interested in UX, UI, and Digital Product Design
- Students or those just starting out in Design
- Anyone who wants to work on creating apps, websites, and online platforms
- Humanities professionals looking for paths in the digital era
### Key Features and Resources
Understand what Figma is for: the software of choice for those who work with digital products. Explore its main features and resources, and learn how to use them effectively. Check out the syllabus:
- Tool context: Understand what it is, what it is for, and how the software is used.
- Market tool references: Learn which tools are available on the market and why Figma is the most used.
- Practical use: Try out practical applications of the tool.
### Practical Tips
Download the free version of Figma and put into practice the valuable tips from William de Almeida, Senior UX Architect and tutor at EBAC.
The classes are fully live, and you can ask questions and send them to the instructor via real-time chat. Don't miss this opportunity to learn from an expert!
#### William de Almeida
William de Almeida has a degree in Graphic Design with a certification in UX Design, specializing in Research, Accessibility, and UI.
He holds an extension degree from PUC-SP in Artificial Intelligence and Social Impact, as well as courses in Facilitation and Innovation Processes, with extensive knowledge of data-driven decision making.
With more than 20 years of experience in design, William has broad expertise across all branches of design (graphic, digital, and UX). He has worked with companies such as Bradesco Seguros, CNN Brasil, Banco Santander, Banco Original, and GPA and, in addition to EBAC, currently works as a Senior UX Architect at CCEE.
### Free, Live Classes
There will be two unmissable days of free, live classes offered by EBAC. The workshop will take place on June 25 and 26, 2024, at 19:00. Check out the schedule:
#### June 25, 19:00 DAY 1 – Introducing Figma
- About the tool
- What are the similar tools?
- Why is Figma the most used in the market?
- Preparing the ground for hands-on work.
#### June 26, 19:00 DAY 2 – Starting the prototyping process
- A tour of the basic tools;
- How documents are organized;
- Reproducing from a reference;
- Sharing your prototype.
### Exclusively for You
#### EBAC Certificate of Participation
Participants who submit their details in the form shared in the live chat during the sessions will receive an EBAC certificate of participation containing information about the workshop.
#### Exclusive Discounts
Beyond the knowledge gained, event participants will receive an exclusive discount on the "Profissão: UX/UI Designer" course.
## Figma
Figma is a cloud-based design tool that allows designers to create user interfaces and user experiences collaboratively and efficiently.
Widely used by UX/UI professionals, the platform offers a broad range of features that streamline the design, prototyping, and collaboration process.
### Real-Time Collaborative Design
One of Figma's most notable characteristics is its support for real-time collaborative design.
Multiple users can work simultaneously on the same file, which promotes efficient collaboration and eliminates the need to send design files back and forth.
### Interactive Prototyping
Figma allows interactive prototypes to be created directly on the platform. Designers can add transitions, animations, and interactions to simulate the end-user experience, making it easier to visualize and test the navigation flow.
### Flexible Design Tools
The platform offers a wide variety of design tools, including vector shapes, text, images, and reusable components, enabling designers to create detailed and complex interfaces with ease.
### Design Systems
Figma makes it easy to create and maintain consistent design systems. Designers can build component libraries that can be reused across different projects, ensuring visual consistency and development efficiency.
### Advantages of Figma
#### Cloud-Based
As a cloud-based tool, Figma eliminates the need for local installations and allows easy access from anywhere. This is particularly useful for distributed teams and remote work.
#### Integrations and Plugins
Figma offers a variety of integrations with other popular design and development tools, such as Slack, Jira, and Zeplin. The platform supports a variety of plugins that extend its functionality and customize the design experience.
#### Versatility
Figma is versatile and can be used for many types of design projects, including web, mobile, and digital product design. Its flexible tools serve both beginner and experienced designers.
### Use Cases
Web and Mobile Interface Design
Figma is widely used for designing web and mobile interfaces. Its ability to create interactive prototypes lets designers quickly test and iterate on their ideas.
## EBAC
The British School of Creative Arts and Technology (EBAC) is an educational institution offering courses in the areas of arts, design, and technology.
The institution's goal is to prepare its students to become outstanding professionals in their respective fields.
### Teaching Methodology
EBAC's teaching methodology is student-centered, promoting active learning and the practical application of acquired knowledge.
Courses are taught by experienced professionals who are recognized in the market and bring a realistic, up-to-date view of the creative industries into the classroom.
### Programs and Partnerships
EBAC offers undergraduate programs, postgraduate programs, and short courses covering various disciplines such as graphic design, animation, photography, game development, and more.
The institution also maintains strategic partnerships with international universities, companies, and leading industry organizations, providing students with networking opportunities and practical experiences that enrich their education.
## Registration link ⬇️
[Registrations for the Figma in 2 Days: From Zero to Creating Your Prototype workshop](https://ebaconline.com.br/webinars/product-workshop-2024-06-25-26) must be completed on the EBAC website.
## Share this opportunity to learn with EBAC!
Did you like this content about the Figma event? Then share it with everyone!
The post [Free Figma Workshop: Create Your Prototype From Scratch](https://guiadeti.com.br/workshop-figma-gratuito-crise-seu-prototipo/) appeared first on [Guia de TI](https://guiadeti.com.br). | guiadeti |
1,894,883 | Java Message Service | 1. Anatomy of JMS message | 27,794 | 2024-06-20T14:49:31 | https://dev.to/bhardwajsameer7/java-message-service-4hf6 | jms, jakarta, j2ee, jee | **1. Anatomy of JMS message** | bhardwajsameer7 |
1,894,881 | JavaFX Basics | JavaFX is an excellent pedagogical tool for learning object-oriented programming. JavaFX is a new... | 0 | 2024-06-20T14:47:59 | https://dev.to/paulike/javafx-basics-2lck | java, programming, learning, beginners | JavaFX is an excellent pedagogical tool for learning object-oriented programming. JavaFX is a new framework for developing Java GUI programs. The JavaFX API is an excellent example of how the object-oriented principles are applied. We'll present the basics of JavaFX programming and use JavaFX to demonstrate object-oriented design and programming. Specifically, we'll introduce the framework of JavaFX and discuss JavaFX GUI components and their relationships. You will learn how to develop simple GUI programs using layout panes, buttons, labels, text fields, colors, fonts, images, image views, and shapes.
## JavaFX vs Swing and AWT
Swing and AWT are replaced by the JavaFX platform for developing rich Internet applications. When Java was introduced, the GUI classes were bundled in a library known as the _Abstract Windows Toolkit (AWT)_. AWT is fine for developing simple graphical user interfaces, but not for developing comprehensive GUI projects. In addition, AWT is prone to platform-specific bugs. The AWT user-interface components were replaced by a more robust, versatile, and flexible library known as _Swing components_. Swing components are painted directly on canvases using Java code. Swing components depend less on the target platform and use less of the native GUI resources. Swing is designed for developing desktop GUI applications. It is now replaced by a completely new GUI platform known as _JavaFX_. JavaFX incorporates modern GUI technologies to enable you to develop rich Internet applications. A rich Internet application (RIA) is a Web application designed to deliver the same features and functions normally associated with desktop applications. A JavaFX application can run seamlessly on a desktop and from a Web browser. Additionally, JavaFX provides multi-touch support for touch-enabled devices such as tablets and smart phones. JavaFX has built-in 2D, 3D, and animation support, video and audio playback, and runs as a stand-alone application or from a browser.
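To give a feel for what is ahead, here is a minimal sketch of a JavaFX program (the class name and window title are illustrative): it opens a window containing a single button.

```java
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.layout.StackPane;
import javafx.stage.Stage;

public class HelloFX extends Application {
  @Override // the start method is the entry point of every JavaFX application
  public void start(Stage primaryStage) {
    Button button = new Button("Click me"); // a GUI component
    StackPane root = new StackPane(button); // a layout pane holding the button
    primaryStage.setScene(new Scene(root, 300, 200)); // place the scene in the stage
    primaryStage.setTitle("Hello JavaFX");
    primaryStage.show(); // display the stage
  }

  public static void main(String[] args) {
    launch(args); // launches the JavaFX runtime
  }
}
```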
JavaFX is much simpler to learn and use for new Java programmers. Swing is essentially dead, because it will not receive any further enhancement. JavaFX is the new GUI tool for developing cross-platform-rich Internet applications on desktop computers, on hand-held devices, and on the Web. | paulike |
1,894,880 | "Embarking on My Backend Development Journey: From C# to Future Challenges" | Hello, Dev community! 👋 I'm Emmanuel Michael, an aspiring backend developer who recently completed a... | 0 | 2024-06-20T14:47:10 | https://dev.to/emmanuelmichael05/embarking-on-my-backend-development-journey-from-c-to-future-challenges-20lg | webdev, beginners, programming, react |
Hello, Dev community! 👋 I'm Emmanuel Michael, an aspiring backend developer who recently completed a comprehensive C# course and is eager to dive deeper into the world of software development. I'm passionate about leveraging technology to solve real-world problems and am excited to embark on this continuous learning and growth journey.
Proficient in C# and currently exploring other languages.
I am familiar with basic backend concepts and eager to expand my knowledge in frameworks like .NET and tools like Visual Studio.
I am particularly interested in backend development, API design, and database management.
I recently completed a rigorous C# course where I gained hands-on experience in object-oriented programming, database integration with SQL Server. I'm enthusiastic about applying my newfound skills to real-world projects and contributing to the developer community.
As I continue my journey, I'm committed to exploring new technologies and best practices in backend development. I look forward to connecting with fellow developers, sharing insights, and collaborating on exciting projects that push the boundaries of what's possible in software development.
Let's connect to discuss all things backend development, share resources, and support each other's growth in this dynamic field. I'm eager to learn from your experiences and contribute to our shared journey of becoming proficient backend developers.
| emmanuelmichael05 |
1,894,878 | Building a Custom Chatbot with Next.js, Langchain, OpenAI, and Supabase. | A chatbot system that can be trained with custom data from PDF files. ... | 0 | 2024-06-20T14:43:29 | https://dev.to/nassermaronie/building-a-custom-chatbot-with-nextjs-langchain-openai-and-supabase-4idp | llm, langchain, openai, nextjs | ## A chatbot system that can be trained with custom data from PDF files.
{% embed https://www.linkedin.com/embed/feed/update/urn:li:ugcPost:7208975610667311104?source=post_page-----a770e3fa9163-------------------------------- %}
---
In this tutorial, we will create a chatbot system that can be trained with custom data from PDF files. The chatbot will utilize [Next.js](https://nextjs.org/) for the frontend, [MaterialUI](https://mui.com/) for the UI components, [Langchain](https://www.npmjs.com/package/langchain) and [OpenAI](https://openai.com/) for working with language models, and [Supabase](https://supabase.com/) to store the data and embeddings. By the end, you will have a fully functional chatbot that can answer questions based on the contents of uploaded PDF files.
### Next.js
Next.js is a powerful and flexible React framework developed by Vercel that enables developers to build server-side rendering (SSR) and static web applications with ease. It combines the best features of React with additional capabilities to create optimized and scalable web applications.
### OpenAI
The OpenAI module in Node.js provides a way to interact with OpenAI’s API, allowing developers to leverage powerful language models like GPT-3 and GPT-4. This module enables you to integrate advanced AI functionalities into your Node.js applications.
### LangChain.js
LangChain is a powerful framework designed for developing applications with language models. Originally developed for Python, it has since been adapted for other languages, including Node.js. Here’s an overview of LangChain in the context of Node.js:
#### What is LangChain?
LangChain is a library that simplifies the creation of applications using [large language models (LLMs)](https://www.ibm.com/topics/large-language-models). It provides tools to manage and integrate LLMs into your applications, handle chaining of calls to these models, and enable complex workflows with ease.
#### Key Features
- Model Integration: Connect and interact with various language models, including those from OpenAI, Hugging Face, and more.
- Prompt Management: Manage, optimize, and format prompts effectively.
- Chain Building: Create sequences of model interactions, enabling more sophisticated workflows.
- Memory Management: Maintain context between interactions, making the models’ responses more coherent and contextually aware.
- Tool Use: Integrate external tools and APIs to augment the capabilities of language models.
- Streaming: Support for streaming responses, which is useful for real-time applications.
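As a quick illustration, invoking a chat model through LangChain.js can be as short as this (a sketch assuming the `@langchain/openai` package and an `OPENAI_API_KEY` set in the environment):
```typescript
import { ChatOpenAI } from "@langchain/openai";

// the client reads OPENAI_API_KEY from the environment by default
const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo" });

const response = await model.invoke("Summarize what LangChain does in one sentence.");
console.log(response.content);
```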
#### How Do Large Language Models (LLMs) Work?
Large Language Models (LLMs) like [OpenAI’s GPT-3.5](https://openai.com/index/gpt-3-5-turbo-fine-tuning-and-api-updates/) are trained on vast amounts of text data to understand and generate human-like text. They can generate responses, translate languages, and perform many other natural language processing tasks.
### Supabase
Supabase is an open-source backend-as-a-service (BaaS) platform designed to help developers quickly build and deploy scalable applications. It offers a suite of tools and services that simplify database management, authentication, storage, and real-time capabilities, all built on top of [PostgreSQL](https://www.postgresql.org/).
---
### Our Goals
#### Training the Model
- PDF File Conversion: The uploaded PDF files are converted into vectors. Vectors are numerical representations that the AI can understand.
- Embedding: The vectors are embedded into a vector store for efficient querying.

#### Chatting with the Trained Model
- User Input: The user provides an input query.
- Prompt Conversion: The input is converted into a standalone question.
- Vector Conversion: The question is converted into a vector.
- Nearest Match Search: The system searches for the nearest match in the vector store.
- Response Generation: The system generates an answer based on the closest match.

---
### Prerequisites
Before we start, ensure you have the following:
- Node.js and npm installed
- A Supabase account
- API key for OpenAI
---
### Step 1: Setting Up Supabase
#### Creating Tables and Functions
First, create an extension if it doesn’t already exist for our vector store:
```sql
create extension if not exists vector;
```
Next, create a table named “documents”. This table will be used to store and embed the content of our uploaded PDF files in vector format:
```sql
create table if not exists documents (
id bigint primary key generated always as identity,
content text,
metadata jsonb,
embedding vector(1536)
);
```
Now, we need a function to query our embedded data:
```sql
create or replace function match_documents (
query_embedding vector(1536),
match_count int default null,
filter jsonb default '{}'
) returns table (
id bigint,
content text,
metadata jsonb,
similarity float
) language plpgsql as $$
begin
return query
select
id,
content,
metadata,
1 - (documents.embedding <=> query_embedding) as similarity
from documents
where metadata @> filter
order by documents.embedding <=> query_embedding
limit match_count;
end;
$$;
```
The “match_documents” function performs the task of querying the embedded data. We will call this function in our Next.js app via Supabase Vector Store.
Next, we need to set up our tables for the chatbot system:
```sql
create table if not exists files (
id bigint primary key generated always as identity,
name text not null,
created_at timestamp with time zone default timezone('utc'::text, now()) not null
);
create table if not exists rooms (
id bigint primary key generated always as identity,
created_at timestamp with time zone default timezone('utc'::text, now()) not null
);
create table if not exists chats (
id bigint primary key generated always as identity,
room bigint references rooms(id) on delete cascade,
role text not null,
message text not null,
created_at timestamp with time zone default timezone('utc'::text, now()) not null
);
```
The “files” table will store details of the uploaded PDF files. This allows us to reference and filter the files in the “documents” table. Our chatbot system will query embedding data with the given “file id” selected in our app. This way, our chatbot system can manage multiple PDF files and focus on the context of a specific file.
The “rooms” table will store all the chat sessions, allowing users to have multiple chat sessions within our app.
Finally, the “chats” table will store all the chats from a particular chat session (room). The role will differentiate whether it’s a user or a bot. If it’s a user, the role will be “user”.
---
### Step 2: Setting Up Next.js
#### Create Next.js app
```bash
$ npx create-next-app chatbot
$ cd ./chatbot
```
Install the required dependencies:
```bash
npm install @langchain/community @langchain/core @langchain/openai @supabase/supabase-js langchain openai pdf-parse pdfjs-dist
```
Then we will install Material UI for building our interface; feel free to use another library:
```bash
npm install @mui/material @emotion/react @emotion/styled
```
#### Connecting to Supabase
Create a file to connect your Next.js app to Supabase:
```typescript
// src/libs/supabaseClient.ts
import { createClient, SupabaseClient } from "@supabase/supabase-js";
const supabaseUrl: string = process.env.NEXT_PUBLIC_SUPABASE_URL || "";
const supabaseAnonKey: string = process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY || "";
if (!supabaseUrl) throw new Error("Supabase URL not found.");
if (!supabaseAnonKey) throw new Error("Supabase Anon key not found.");
export const supabaseClient: SupabaseClient = createClient(supabaseUrl, supabaseAnonKey);
```
#### Setting Up LLM clients
Create a file to set up LangChain and Embeddings:
```typescript
// src/libs/openAI.ts
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
const openAIApiKey: string = process.env.NEXT_PUBLIC_OPENAI_API_KEY || "";
if (!openAIApiKey) throw new Error("OpenAI API key not found.");
export const llm = new ChatOpenAI({
openAIApiKey,
modelName: "gpt-3.5-turbo",
temperature: 0.9,
});
export const embeddings = new OpenAIEmbeddings(
{
openAIApiKey,
},
{ maxRetries: 0 }
);
```
#### Next.js Config
Lastly, we need to update our Next.js config file. Since we will be using the Web PDF Loader from Langchain, which depends on the `fs` module, it would throw an error if used in the browser as-is. Update your config file following this snippet:
```typescript
/** @type {import('next').NextConfig} */
const nextConfig = {
reactStrictMode: true,
output: "export",
webpack: (config, { isServer }) => {
// See https://webpack.js.org/configuration/resolve/#resolvealias
config.resolve.alias = {
...config.resolve.alias,
sharp$: false,
"onnxruntime-node$": false,
};
config.experiments = {
...config.experiments,
topLevelAwait: true,
asyncWebAssembly: true,
};
config.module.rules.push({
test: /\.md$/i,
use: "raw-loader",
});
// Fixes npm packages that depend on `fs` module
if (!isServer) {
config.resolve.fallback = {
...config.resolve.fallback, // if you miss it, all the other options in fallback, specified
// by next.js will be dropped. Doesn't make much sense, but how it is
fs: false, // the solution
"node:fs/promises": false,
module: false,
perf_hooks: false,
};
}
return config;
},
};
export default nextConfig;
```
Now our Next.js app is ready! Let's continue building the chatbot system.
---
### Step 3: Prepare the services that communicate with our database
We will use these services/methods to communicate with Supabase from our React components.
#### File Service
The file service handles file-related operations, such as fetching the list of files and saving a new file to the database.
```typescript
// src/services/file.ts
import { embeddings } from "@/libs/openAI";
import { supabaseClient } from "@/libs/supabaseClient";
import { WebPDFLoader } from "@langchain/community/document_loaders/web/pdf";
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";
export interface IFile {
id?: number | undefined;
name: string;
created_at?: Date | undefined;
}
// Fetch the list of uploaded files from the Supabase database.
export async function fetchFiles(): Promise<IFile[]> {
const { data, error } = await supabaseClient
.from("files")
.select()
.order("created_at", { ascending: false })
.returns<IFile[]>();
if (error) throw error;
return data;
}
// Save a new file to the database, convert it to vectors, and store the vectors.
export async function saveFile(file: File): Promise<IFile> {
const { data, error } = await supabaseClient
.from("files")
.insert({ name: file.name })
.select()
.single<IFile>();
if (error) throw error;
const loader = new WebPDFLoader(file);
const output = await loader.load();
const docs = output.map((d) => ({
...d,
metadata: { ...d.metadata, file_id: data.id },
}));
await SupabaseVectorStore.fromDocuments(docs, embeddings, {
client: supabaseClient,
tableName: "documents",
queryName: "match_documents",
});
return data;
}
```
- fetchFiles: Fetches the list of uploaded files from the Supabase database, ordered by creation date.
- saveFile: Saves a new file to the database, converts the PDF content to vectors using the Langchain library, and stores the vectors in the Supabase vector store.
#### Room Service
The room service handles operations related to chat rooms, such as fetching the list of rooms and creating a new room.
```typescript
// src/services/room.ts
import { supabaseClient } from "@/libs/supabaseClient";
export interface IRoom {
id?: number | undefined;
created_at?: Date | undefined;
}
// Fetch the list of chat rooms from the Supabase database.
export async function fetchRooms(): Promise<IRoom[]> {
const { data, error } = await supabaseClient
.from("rooms")
.select()
.order("created_at", { ascending: false })
.returns<IRoom[]>();
if (error) throw error;
return data;
}
// Create a new chat room in the database.
export async function createRoom(): Promise<IRoom> {
const { data, error } = await supabaseClient
.from("rooms")
.insert({})
.select()
.single<IRoom>();
if (error) throw error;
return data;
}
```
- fetchRooms: Fetches the list of chat rooms from the Supabase database, ordered by creation date.
- createRoom: Creates a new chat room in the database and returns the created room.
#### Chat Service
The chat service handles operations related to chats, such as fetching the list of chats, posting a new chat, and getting an answer from the chatbot.
```typescript
// src/services/chat.ts
import { embeddings, llm } from "@/libs/openAI";
import { supabaseClient } from "@/libs/supabaseClient";
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";
import { StringOutputParser } from "@langchain/core/output_parsers";
import {
ChatPromptTemplate,
HumanMessagePromptTemplate,
SystemMessagePromptTemplate,
} from "@langchain/core/prompts";
import {
RunnablePassthrough,
RunnableSequence,
} from "@langchain/core/runnables";
import { formatDocumentsAsString } from "langchain/util/document";
export interface IChat {
id?: number | undefined;
room: number;
role: string;
message: string;
created_at?: Date | undefined;
}
// Fetch the list of chats for a given room from the Supabase database.
export async function fetchChats(roomId: number): Promise<IChat[]> {
const { data, error } = await supabaseClient
.from("chats")
.select()
.eq("room", roomId)
.order("created_at", { ascending: true })
.returns<IChat[]>();
if (error) throw error;
return data;
}
// Post a new chat message to the database.
export async function postChat(chat: IChat): Promise<IChat> {
const { data, error } = await supabaseClient
.from("chats")
.insert(chat)
.select()
.single<IChat>();
if (error) throw error;
return data;
}
// Get an answer from the chatbot based on the user's chat message.
export async function getAnswer(chat: IChat, fileId: number): Promise<IChat> {
const vectorStore = await SupabaseVectorStore.fromExistingIndex(embeddings, {
client: supabaseClient,
tableName: "documents",
queryName: "match_documents",
});
const retriever = vectorStore.asRetriever({
filter: (rpc) => rpc.filter("metadata->>file_id", "eq", fileId),
k: 2,
});
const SYSTEM_TEMPLATE = `Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----------------
{context}`;
const messages = [
SystemMessagePromptTemplate.fromTemplate(SYSTEM_TEMPLATE),
HumanMessagePromptTemplate.fromTemplate("{question}"),
];
const prompt = ChatPromptTemplate.fromMessages(messages);
const chain = RunnableSequence.from([
{
context: retriever.pipe(formatDocumentsAsString),
question: new RunnablePassthrough(),
},
prompt,
llm,
new StringOutputParser(),
]);
const answer = await chain.invoke(chat.message);
const { data, error } = await supabaseClient
.from("chats")
.insert({
role: "bot",
room: chat.room,
message: answer,
})
.select()
.single<IChat>();
if (error) throw error;
return data;
}
```
- fetchChats: Fetches the list of chats for a given room from the Supabase database, ordered by creation date.
- postChat: Posts a new chat message to the database.
- getAnswer: Gets an answer from the chatbot based on the user’s chat message. It uses the Langchain library to retrieve the most relevant documents from the vector store and generates a response using OpenAI’s language model.
---
### Step 4: Building the UI
#### Chat Room Component
The ChatRoom component handles the display and interaction of the chat interface.
```typescript
// src/components/ChatRoom.tsx
import {
Box,
Button,
LinearProgress,
Stack,
TextField,
Typography,
} from "@mui/material";
import { ChangeEvent, MouseEvent, useEffect, useState } from "react";
import { IChat, fetchChats, getAnswer, postChat } from "@/services/chat";
export default function ChatRoom({
roomId,
fileId,
}: {
roomId: number;
fileId: number;
}) {
const [message, setMessage] = useState<string>("");
const [chats, setChats] = useState<IChat[]>([]);
const [submitting, setSubmitting] = useState(false);
const onChangeInput = (e: ChangeEvent<HTMLInputElement>) =>
setMessage(e.target.value);
const onSubmitInput = async (e: MouseEvent<HTMLElement>) => {
e.preventDefault();
if (!message) return;
let currChats = [...chats];
try {
setSubmitting(true);
const chat = await postChat({
role: "user",
room: roomId,
message,
});
setMessage("");
currChats.push(chat);
const answer = await getAnswer(chat, fileId);
currChats.push(answer);
setChats(currChats);
} catch (err) {
console.error(err);
} finally {
setSubmitting(false);
}
};
useEffect(() => {
(async () => {
try {
if (typeof roomId !== "undefined") {
const chats = await fetchChats(roomId);
setChats(chats);
}
} catch (err) {
console.error(err);
}
})();
}, [roomId]);
return (
<>
<Stack sx={{ gap: 2, mb: 2 }}>
{chats.map((chat, i) => (
<Box
key={i}
sx={{
display: "flex",
justifyContent: chat.role === "user" ? "flex-end" : "flex-start",
}}
>
<Box
sx={{
minWidth: "250px",
maxWidth: "1000px",
p: 2,
border: "1px solid #555",
borderRadius: (theme) => theme.spacing(2),
}}
>
<Typography
sx={{
whiteSpace: "pre-line",
wordBreak: "break-word",
mb: 2,
display: "block",
}}
>
{chat.message}
</Typography>
</Box>
</Box>
))}
</Stack>
{submitting && <LinearProgress />}
<TextField
fullWidth
multiline
minRows={2}
maxRows={10}
value={message}
label="Write Something ..."
onChange={onChangeInput}
sx={{ mb: 2 }}
/>
<Button
fullWidth
type="submit"
variant="contained"
onClick={onSubmitInput}
disabled={submitting}
>
<Typography>Send</Typography>
</Button>
</>
);
}
```
- useEffect: Fetches the list of chats for the room when the component mounts or the roomId changes.
- onSubmitInput: Handles sending a new chat message, posting it to the database, and getting a response from the chatbot.
#### File Uploader Component
The FileUploader component handles uploading files.
```typescript
// src/components/FileUploader.tsx
import { ChangeEvent, MouseEvent, useState } from "react";
import { Box, Button, Typography } from "@mui/material";
import { IFile, saveFile } from "@/services/file";
export default function FileUploader({
onSave,
}: {
onSave: (file: IFile) => void;
}) {
const [inputFile, setInputFile] = useState<File | undefined>(undefined);
const [uploading, setUploading] = useState<boolean>(false);
const onChangeFile = (e: ChangeEvent<HTMLInputElement>) => {
const file = e?.target?.files?.[0];
setInputFile(file);
};
const handleSaveFile = async (e: MouseEvent<HTMLElement>) => {
e.preventDefault();
if (!inputFile) return;
try {
setUploading(true);
const file = await saveFile(inputFile);
onSave(file);
} catch (err) {
console.error(err);
} finally {
setUploading(false);
}
};
return (
<>
<Box
component="label"
htmlFor="file-uploader"
sx={{ mb: 2, display: "block" }}
>
<input
accept="application/pdf"
id="file-uploader"
type="file"
style={{ display: "none" }}
onChange={onChangeFile}
/>
<Button variant="outlined" fullWidth component="span">
<Typography>{inputFile ? inputFile.name : "Select File"}</Typography>
</Button>
</Box>
<Button
fullWidth
variant="contained"
color="primary"
disabled={!inputFile || uploading}
onClick={handleSaveFile}
>
<Typography>Upload</Typography>
</Button>
</>
);
}
```
- handleSaveFile: Handles file upload, saving the file to the database, and updating the list of files.
#### Home Page
The Home component is the main page that allows the user to create or select chat room, upload a file, and selecting a file to chat about.
```typescript
// src/pages/index.tsx
import ChatRoom from "@/components/ChatRoom";
import FileUploader from "@/components/FileUploader";
import { IFile, fetchFiles } from "@/services/file";
import { IRoom, createRoom, fetchRooms } from "@/services/room";
import {
Button,
Divider,
Grid,
List,
ListItemButton,
Typography,
} from "@mui/material";
import { MouseEvent, useEffect, useMemo, useState } from "react";
export default function Home() {
const [rooms, setRooms] = useState<IRoom[]>([]);
const [files, setFiles] = useState<IFile[]>([]);
const [roomId, setRoomId] = useState<number | undefined>(undefined);
const [fileId, setFileId] = useState<number | undefined>(undefined);
const onSaveFile = (file: IFile) => setFiles((v) => [file, ...v]);
const handleCreateRoom = async (e: MouseEvent<HTMLElement>) => {
e.preventDefault();
try {
const newRoom = await createRoom();
setRooms((v) => [newRoom, ...v]);
setRoomId(newRoom.id);
} catch (err) {
console.error(err);
}
};
const handleSelectRoom =
(id: number | undefined) => (e: MouseEvent<HTMLElement>) => {
e.preventDefault();
setRoomId(id);
};
const handleSelectFile =
(id: number | undefined) => (e: MouseEvent<HTMLElement>) => {
e.preventDefault();
setFileId(id);
};
useEffect(() => {
(async () => {
try {
const rooms = await fetchRooms();
setRooms(rooms);
const files = await fetchFiles();
setFiles(files);
} catch (err) {
console.error(err);
}
})();
}, []);
return (
<Grid container>
<Grid item xs={2} sx={{ p: 2 }}>
<Button fullWidth variant="contained" onClick={handleCreateRoom}>
New Chat
</Button>
<Divider sx={{ my: 2 }} />
<List>
{rooms.map((room, i) => (
<ListItemButton
selected={roomId === room.id}
key={i}
onClick={handleSelectRoom(room.id)}
>
{room.created_at?.toString()}
</ListItemButton>
))}
</List>
</Grid>
<Grid item xs={2} sx={{ p: 2 }}>
<FileUploader onSave={onSaveFile} />
<Divider sx={{ my: 2 }} />
<List>
{files.map((file, i) => (
<ListItemButton
selected={fileId === file.id}
key={i}
onClick={handleSelectFile(file.id)}
>
{file.name}
</ListItemButton>
))}
</List>
</Grid>
<Grid item xs sx={{ p: 2 }}>
{roomId && fileId ? (
<ChatRoom roomId={roomId as number} fileId={fileId as number} />
) : (
<Typography>Select one room and one file</Typography>
)}
</Grid>
</Grid>
);
}
```
- onSaveFile: Callback to becalled once our FileUploader component save the file into database successfully, this way we can update the “files” state with the new file.
- handleCreateRoom: Handles on creating a new chat room
- handleSelectRoom: Handles selecting a room
- handleSelectFile: Handles selecting a file.
- Conditional Rendering: Renders the helper text when no room or file is selected, and the ChatRoom component once both a room and a file are selected.
---
### Step 5: Running the Application
Create a .env file in the root of your project to store your environment variables:
```
NEXT_PUBLIC_SUPABASE_URL=your-supabase-url
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-supabase-anon-key
NEXT_PUBLIC_OPENAI_API_KEY=your-openai-api-key
```
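If you followed the earlier steps, the client modules already consume these variables; as a quick recap, they look roughly like this minimal sketch (the file path and export names here are assumptions, not necessarily the guide's exact code):

```typescript
// src/lib/clients.ts (hypothetical path -- adjust to your project layout)
import { createClient } from "@supabase/supabase-js";
import OpenAI from "openai";

// Supabase client built from the public env vars above
export const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL as string,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY as string
);

// OpenAI client; dangerouslyAllowBrowser is required because this demo
// calls the OpenAI API from client-side code
export const openai = new OpenAI({
  apiKey: process.env.NEXT_PUBLIC_OPENAI_API_KEY,
  dangerouslyAllowBrowser: true,
});
```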
Finally, start your Next.js application:
```bash
npm run dev
```
Now, you should have a running application where you can upload PDF files, chat with a bot trained on your data, and receive relevant responses based on the uploaded content.
---
### Conclusion
This guide provided a comprehensive overview of building a custom chatbot that can answer questions based on uploaded PDF files. You learned how to set up your project, configure Supabase and OpenAI, create the necessary services, and build the frontend components with React and MaterialUI. With this foundation, you can extend and customize the chatbot to fit your specific needs.
#### Check the source code in this repo:
[https://github.com/firstpersoncode/chatbot](https://github.com/firstpersoncode/chatbot)
Happy coding! | nassermaronie |
1,894,021 | amber: writing bash scripts in amber instead. pt. 2: loops and ifs | in this series we're looking at using the amber language to write scripts that transpile into bash.... | 27,793 | 2024-06-20T14:43:22 | https://dev.to/gbhorwood/amber-writing-bash-scripts-in-amber-instead-pt-2-loops-and-ifs-1694 | linux, bash | in this series we're looking at using [the amber language](https://amber-lang.com/) to write scripts that transpile into bash. so far we've covered [using shell commands and handling error cases](https://gbh.fruitbat.io/2024/06/18/amber-writing-bash-scripts-in-amber-instead-pt-1-commands-and-error-handling/). in this installment, we'll be looking at control structures: loops and `if`s, basically.
*a developer writing an 'if' statement in bash*
## the basic `if` statement (with `else`)
writing an `if` statement in bash is an aggressively user-hostile experience. there are developers out there who have been writing bash for thirty years who still have to refer to a cheat sheet of all the various condition statements (ahem).
by comparison, amber's syntax is gloriously mundane. if you have any experience in php or python or javascript, `if` statements look exactly like you expect them to.
```dart
let dir = unsafe $pwd$
if(dir == "/home/ghorwood") {
echo "you're home"
}
else {
echo "you're lost"
}
```
and all of the comparison operators are the standard ones: `!=`, `<`, and so on.
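for instance, a couple of them combined in one condition (a quick sketch):

```dart
let age = 42
if(age >= 18 and age != 65) {
    echo "working age"
}
```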
### a shorter, terser, `if` syntax
amber also offers a terser syntax, allowing us to get that three-line `if` statement down to one by forgoing the braces in favour of a colon.
```dart
if(dir == "/home/ghorwood") : echo "you're home"
else : echo "you're lost"
```
### ternary expressions: terser still.
if we're _really_ committed to lowering the line count, there's also [ternary expressions](https://en.wikipedia.org/wiki/Ternary_conditional_operator). i'm a huge fan of ternaries and it's my preference to construct conditionals this way, however public opinion on this does vary. a lot.
the basic template for an amber ternary is:
```
<some condition> then <some value if true> else <some value if false>
```
note here that the `then` and `else` blocks hold _values_. we can't put execution statements in here. for instance, this will not run:
```dart
// this will not work!
dir == "/home/ghorwood" then echo "you're home" else echo "you're lost"
```
instead, we can use the ternary to return a value that we pass to `echo`, like so:
```dart
echo dir == "/home/ghorwood" then "you're home" else "you're lost"
```
ternaries can also be used to assign default values to variables. for instance, if we want to create a variable called `is_root` that is set to `true` if the user is root and `false` if the user is anyone else, we can write it as a ternary.
```dart
let me = unsafe $whoami$
let is_root = me == "root" then true else false
```
one thing to note when doing this is that the values in the `then` and `else` blocks must both be of the same type. we can't return a string in `then` and a number in `else`; amber will complain. and rightfully so.
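here's that rule in practice (a quick sketch):

```dart
let count = 2

// both branches are text, so this compiles fine
let label = count > 1 then "items" else "item"

// this would not compile: `then` is text but `else` is a number
// let bad = count > 1 then "many" else 0
```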
### the `if` chain. sort of like `switch`.
amber doesn't do `switch` statements, but it _does_ allow us to chain a stack of conditionals in one `if` block.
```dart
if {
birth_year >= 1981 and birth_year <= 1996 {
echo "millennial"
}
birth_year >= 1946 and birth_year <= 1964 {
echo "boomer"
}
else {
echo "whatever"
}
}
```
only the block of the first condition that evaluates to true in an `if` chain will execute. the following example will output 'greater than 3', but _not_ 'greater than 4'.
```dart
let somenumber = 7
if {
somenumber > 3 {
echo "greater than 3"
}
somenumber > 4 {
echo "greater than 4"
}
else {
echo "the else block"
}
}
```
### using `if` and `status`
in the installment that covered [commands and error handling](https://gbh.fruitbat.io/2024/06/18/amber-writing-bash-scripts-in-amber-instead-pt-1-commands-and-error-handling/), we went over how amber puts the exit code of a command in a special, global variable called `status`. most people who write bash scripts never check exit codes; they just let the shell handle any errors. but with amber, we can leverage `status` to improve our error handling.
for instance, say we want a script that runs three shell commands. if any command fails, we want to terminate the script (we'll cover terminating the script in the next installment). however, we want to show _all_ the error messages. if two commands fail, we want to show both failures to the user and _then_ quit.
we can do this by tracking the `status` of each command after it executes, like so:
```dart
let stop_execution = 0
// whoami
silent $whoami$ failed {
echo "error: cannot whoami"
}
stop_execution += status
// touch /etc/passwd (this fails)
silent $touch /etc/passwd$ failed {
echo "error: cannot touch"
}
stop_execution += status
// pwd
silent $pwd$ failed {
echo "error: cannot pwd"
}
stop_execution += status
// test if any command failed
if(stop_execution) {
echo "fail case"
}
```
here, we declared a variable called `stop_execution` that holds the running sum of the `status` values after each command is run. if all commands pass they all have an exit code of 0, and their sum in `stop_execution` is 0. we proceed.
however, if one command (or more) has a non-zero exit code in `status`, the sum in `stop_execution` is greater than zero and we execute our error-handling `if` statement.
## loops
loops in bash aren't actually _that_ bad. there's a standard `for` and a standard `while` and they work mostly like we expect.
amber takes a slightly different approach to looping, dividing them into two broad categories:
* **infinite loops**: that run forever until `break` is called.
* **iterator loops**: that iterate over an iterable data structure. you know, an array.
let's look at both of them.
### infinite loops
infinite loops run until something stops them. this makes them dangerous, but it also makes them useful.
in this example, we poll the user for some text input until we get the single character 'q'; then we quit.
```dart
let user_input = ""
loop {
user_input = unsafe $read input && echo \$input$
if(user_input == "q") {
break
}
}
```
the key here is the `break` statement that terminates our loop block when the condition is met.
### loops that iterate
other programming languages are overflowing with ways to loop over things like arrays: there are `for`s and `foreach`es, various `while` implementations, and all manner of `map`s. in amber, we use the same `loop` command as we do with infinite loops, but with modifications to make it work a bit like a mixture of a traditional `for` that tracks the array's index and a `foreach` that assigns the current element to a variable. here's an example:
```dart
let users = ["ghorwood", "mnle", "mflewitt"]
loop index, user in users {
echo index
echo user
}
```
if we've used languages like php or python or javascript, this construction should look familiar. it's basically php's `foreach($collection as $item)` or python's `for item in collection:`, but with the addition of `index`. if we run the above code, we get what we expect:
```
0
ghorwood
1
mnle
2
mflewitt
```
### a short note on arrays
we saw an array in the example above, and we should probably address that.
amber does arrays. and while the syntax and their behaviour is pretty much what we would expect coming from other modern languages, there are some notable restrictions:
* **type consistency is required:** every element of an array has to be of the same type. you can have an array of strings or an array of numbers, but you cannot mix the two (see the sketch after this list).
* **there are no associative arrays:** associative arrays or dictionaries or whatever we want to call them do not exist in amber. keys are numbers starting at zero.
* **you cannot nest arrays:** an array cannot have another array as an element.
* **there is no subscripting:** when we get to functions later on, we will have functions that return arrays and we may be tempted to do something like `get_all_users()[0]`. this doesn't work. at least not yet.
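here's the type rule in practice (a quick sketch):

```dart
// an array of text values; every element shares the same type
let names = ["ghorwood", "mnle", "mflewitt"]

// this would not compile: it mixes text and a number
// let mixed = ["ghorwood", 42]
```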
lastly, using hard-coded arrays, like we did above, is probably what we're least interested in. what we _want_ is to be able to get a directory listing or lines from a file as an array. we'll cover that in the next installment that goes over the standard library.
## what's next
amber has a bunch of commands in its 'standard library' for doing things like string manipulation and file access. useful things we will probably want to do. none of this is covered in the official documentation (as of yet!). in the next installment, we will go over all of these commands and how to use them.
> 🔎 this post was originally written in the [grant horwood technical blog](https://gbh.fruitbat.io/2024/06/20/amber-writing-bash-scripts-in-amber-instead-pt-2-loops-and-ifs/) | gbhorwood |
1,894,876 | What mathematics does the big model involve? | Large models like GPT-4 involve a wide array of mathematical knowledge that supports their structure,... | 0 | 2024-06-20T14:40:14 | https://dev.to/fridaymeng/what-mathematics-does-the-big-model-involve-1co9 | data, math | Large models like GPT-4 involve a wide array of mathematical knowledge that supports their structure, training, and applications. Here's an overview of the key areas:

[Demo](https://addgraph.com/biglargemodelmath) | fridaymeng |
1,894,874 | The <map> Tag in HTML | The <map> Tag in HTML The <map> tag in HTML is used to define an image map. An image map is a way of... | 0 | 2024-06-20T14:39:37 | https://dev.to/mhmd-salah/the-tag-in-html-2691 | html, webdev, website | ## The `<map>` Tag in HTML
The `<map>` tag in HTML is used to define an image map. An image map is a way of defining multiple clickable areas (regions) within a single image. Each of these clickable areas can be associated with different links or actions.
### How It Works
The `<map>` tag is used in conjunction with the `<area>` tag to define these clickable regions. The `<map>` tag itself does not display any content; it simply provides a container for the `<area>` tags that define the clickable regions.
Here’s a step-by-step breakdown of how to use the `<map>` tag:
- Define the Image: You first need an image to apply the map to. Use the `<img>` tag for this.
- Define the Map: Use the `<map>` tag to define the map and give it a unique name using the `name` attribute.
- Define the Clickable Areas: Inside the `<map>` tag, use `<area>` tags to define the clickable regions. Each `<area>` tag uses attributes to specify the shape, coordinates, and link for each region.

### Attributes
- `name` (required): Specifies the name of the map. This name is referenced by the `usemap` attribute in the `<img>` tag.
- `shape` (in `<area>`): Defines the shape of the clickable area. Possible values are `rect` (rectangle), `circle` (circle), and `poly` (polygon).
- `coords` (in `<area>`): Specifies the coordinates that define the shape of the clickable area. The format of the coordinates depends on the shape:
  - `rect`: "x1,y1,x2,y2"
  - `circle`: "x,y,radius"
  - `poly`: "x1,y1,x2,y2,x3,y3,..."
- `href` (in `<area>`): Defines the URL to which the user will be redirected when they click the area.
- `alt` (in `<area>`): Provides alternative text for the clickable area, useful for accessibility.
### Use Cases
Image maps are useful when you want to create interactive images, such as:
- Geographic maps where each region links to a different page.
- Interactive diagrams where each part of the diagram links to more detailed information.
- Any scenario where multiple links are needed within a single image. | mhmd-salah |
1,894,872 | Copado Robotic Testing Certification | The Copado Robotic Testing Certification exam questions cover a range of key concepts related to... | 0 | 2024-06-20T14:36:59 | https://dev.to/copadorobotictesting/copado-robotic-testing-certification-n78 | education, exam, dumps, webdev | The [Copado Robotic Testing Certification](https://exam4future.com/) exam questions cover a range of key concepts related to robotic testing and the Copado Robotic Testing platform:
- Robotic Testing Fundamentals: Principles and concepts of robotic testing, its benefits, challenges, and best practices.
- Copado Robotic Testing Platform: Features and functionality of the Copado Robotic Testing platform, including its architecture, components, and capabilities.
- Test Case Design and Development: Techniques for designing and developing effective test cases for robotic testing, including data-driven testing and keyword-driven testing.
- Test Automation Execution: Methods for executing automated tests using the Copado Robotic Testing platform, including scheduling, monitoring, and reporting.
- Test Result Analysis and Debugging: Techniques for analyzing test results, identifying defects, and debugging automated tests.
- Integration with Salesforce: Best practices for integrating Copado Robotic Testing with Salesforce, including data synchronization and test data management.
By mastering these key concepts, candidates can demonstrate their proficiency in using Copado Robotic Testing to automate Salesforce testing processes, ensuring efficiency and accuracy in their organizations. [https://exam4future.com/](https://exam4future.com/)
| copadorobotictesting |
1,894,871 | Started learning HTML and CSS :) | A few days ago, I started learning the building blocks of Web development. So I made this landing page... | 0 | 2024-06-20T14:35:48 | https://dev.to/krsna_11/started-learning-html-and-css--343n | webdev, html, css, landingpage | A few days ago, I started learning the building blocks of Web development. So I made this landing page to reinforce what I learned :)

Writing pure CSS is fun for these kinds of designs.
| krsna_11 |