---
title: The Power of Patience and Persistence: How to Thrive in the Ever-Evolving Tech Landscape
published: true
date: 2024-06-21 00:00:00 UTC
tags: career
canonical_url: https://ionixjunior.dev/en/the-power-of-patience-and-persistence-how-to-thrive-in-the-everevolving-tech-landscape/
cover_image: https://ionixjuniordevthumbnail.azurewebsites.net/api/Generate?title=The+Power+of+Patience+and+Persistence%3A+How+to+Thrive+in+the+Ever-Evolving+Tech+Landscape
---
There's a lot to study if you choose any Software Engineering position. You can develop desktop, web, mobile, or IoT applications, and each of these options has huge possibilities for you to choose from or specialize in. A common question, especially for a newbie, is: when will I be a master of this technology? This is a difficult question to answer, isn't it? Technology evolves fast, and we constantly face things changing. This can cause some nervousness and stress, because it's a little bit difficult to feel in control. So, in today's post, I'll share some of my thoughts about it. Grab a coffee or any drink you prefer, have a seat, and let's get started.
I've been working with software development since 2008. I've worked at six different companies, and I felt that my colleagues and I were pursuing the same objective: to be a master of some technology. Everybody who wants to learn usually wants to master some subject, but how fast can we acquire the knowledge we want?
Well, all of us IT folks love to say: it depends! That's the absolute truth for this question. It depends, because each of us has different ways to learn. Some people like to learn by watching videos, others by reading blog posts, and some like to roll up their sleeves and dive into the code. So, it depends. Do you agree?
Another thing that I believe is very important to understand: learning is different from knowing. You can watch a bunch of videos about some specific content, and that's okay. You learn about it. But did you really understand it? Do you really know it? You watched some course at 2x speed, but are you prepared for the fight? If you need to get your hands dirty, can you put into practice what you've learned? Did you really learn the concepts?
And last, but not least: time. This is another factor in the equation. Most of us want to learn very fast, and a lot of things. There are many abilities we can learn and practice, but what do we really need to focus on first to feel comfortable? Thinking about this is important so you understand when to stop or when to go easy on yourself.
## My Point Of View
When we want to learn something or achieve an objective, we create expectations. We want to change, to make a career transition, to learn a new technology, or just to start in the field. Sometimes we know what we want, but we don't know the path to achieve it. So, in this case, it's very common to immerse yourself in studying everything you come across. You want to achieve your objective, right? How much time do you believe you'll need to get there? Three or six months? One year? And what's your plan B? Do you have one? It's not difficult to start trying to learn everything, but if you don't have a clear path, you may end up with nothing but anxiety and fear.
This usually happens when you try to learn too much at once. Learning a lot of things doesn't mean knowing those things. You need to practice, and you need time to develop the new abilities. This can cause anxiety: without a clear path, you see there's always more to learn. You start learning just one new thing, and you discover there are five or ten new concepts to look into. That's okay! You're not alone. But if you don't have a clear path to focus on what matters most, you can feel anxious, because you'll never stop chasing new content. Nothing you see will feel like enough, and you won't enjoy your journey. This is terrible.
It can cause fear too. If you understand that there are always a lot of things to learn, and you don't feel confident in your knowledge, how can you work properly while constantly worrying about your own knowledge?
## You Can Learn From Your Mistakes
I've faced this situation at different stages of my career. The first was in 2008, when I started in the field as a Web Developer. That year, I was almost a webmaster, because I needed to learn about web applications and databases, manage Linux servers, and do everything an IT guy does, including fixing printers. At the beginning of my career, I didn't have a clear path to follow; I just learned a lot of things, fast, and practiced a lot. I learned a great deal by practicing, and I worked all the time. It was very tiring, but I evolved a lot.
The second stage where I faced a change was in 2014, when I started as a Mobile Developer working with Xamarin Forms. I had the challenge of developing a sales representative application for Android, iOS, and Windows, but as a Web Developer I had no idea what I needed to learn, so once again I started learning all the content I could find and practicing a lot. In this period, I was always worried about learning mobile development. It was a period when I grew a lot again, but it cost me my mental health. Some people may think this is bullshit, but only I know what I felt.
In 2019, I made a new move and started as a Mobile Engineer. Despite continuing to work with mobile development, there was something different: I started to work more closely with the native Android and iOS APIs, using the Xamarin Traditional approach all day. This may seem like nothing, but it was a big change for me. I felt I didn't have good enough knowledge to stay there. Honestly, I knew I had good knowledge of software development, but not of mobile development, because I had been working with Xamarin Forms for a long time. So, I needed to start learning again. Tiring, isn't it? Well, this time was different for me, because I was (and still am) in a good place, and that finally allowed me to enjoy my learning path.
I was determined to try something different in 2019. I didn’t want to repeat the same cycle.
## Calm Down And Enjoy The Learning Path
To keep calm and enjoy the learning path, I had to accept that I will never be a master of everything. This was very difficult for me, because I put a lot of pressure on myself. I want to be the best, so accepting this was not easy.
What is working for me is to think about the following questions:
1. What knowledge do I need to develop to make myself confident and allow me to move on?
2. Where and how can I get this new knowledge?
3. How much time will I invest to achieve this?
Is it difficult to answer these questions? If you don't have the habit or ability to reflect on yourself, this can be a big challenge. You need to know yourself, what you want, and how much you are willing to sacrifice. This can be difficult, but understanding it is very important. Let's explore these questions.
### 1. What knowledge do I need to develop to make myself confident and allow me to move on?
This is very particular to each one. You need to think about what knowledge you’ll focus on learning and practicing that will make you more confident and prepared for the challenges in your career. So, think: what do you want to learn?
Do you want to learn about backend development? Maybe you should start learning about the basics of databases, HTTP protocol, and RESTful APIs.
Do you want to learn about frontend development? Maybe you should start learning about HTML, CSS, JavaScript, and responsive design.
Do you want to learn about mobile development? Maybe you should start learning about view controllers / activities / fragments, UI components, and life cycle.
You need to choose what path you want to follow, and understand what knowledge you need to learn that allows you to move on without problems.
### 2. Where and how can I get this new knowledge?
Today there are a lot of resources that you can use to learn about anything. YouTube videos, blog posts, courses at [LinkedIn Learning](https://www.linkedin.com/learning/), [Pluralsight](https://www.pluralsight.com), [Udemy](https://www.udemy.com), [Alura](https://www.alura.com.br), [BackFront Plus](https://backfront.com.br/backfront-plus), or even free content on YouTube.
I think this topic is interesting, because I was born in 1987 and grew up without the technologies we use today. I got my first computer when I was 15 years old, and the internet was dial-up. Today, we turn on our computers and smartphones, and we're connected all the time, with a lot of content in the palm of our hands. But even with so much content to consume, I think sometimes we don't know where to find good content or where to start. So, the most important thing is to just start. It's better than doing nothing.
### 3. How much time will I invest to achieve this?
Now that you know what knowledge you want to develop and where the resources are, you need to think about how much time you'll invest to achieve this. How much time? One hour per day? Two, maybe? Between three and six hours per week? You need to organize your agenda and build your learning path into your routine. Don't have time at the moment? No problem, but you shouldn't feel frustrated about it, and this situation needs to be very clear to you.
It's not a problem to lack time now. It's a problem to want to make a change and not dedicate yourself to it. No pain, no gain.
### Things to care about
Focus means setting a target and sticking to it. So, if you plan to study a lot of things, maybe you’re starting wrong, or you won’t achieve your plans properly. I’m telling you this because it’s humanly impossible to start learning many things at the same time and be productive in all of them. There are some exceptions, but focus means starting something and ensuring you evolve it until the end. Think about it.
## Sharing My Experiences
I've dedicated this section to sharing my latest experiences. Maybe something here can help you clarify your learning path.
### Experience 1: Start Working With iOS And Android Native APIs
As I said previously, in 2019 I started as a Mobile Engineer working with Xamarin Traditional, more closely with the iOS and Android native APIs. This was a great challenge for me, because I was used to working only with Xamarin Forms. At that time, Xamarin Forms had evolved considerably, and each new application no longer needed a lot of custom renderers to work properly. This meant that native knowledge was less crucial, allowing developers to focus on the shared code for most of their work.
Well, my scenario changed, and I needed a plan to study. I knew I needed to study a lot of things, and I tried to answer the three questions.
#### 1. What knowledge do I need to develop to make myself confident and allow me to move on?
This is very strange, but for me, to feel comfortable, I didn’t need much:
- Master how to create screens with dynamic lists and content.
- Master how to navigate between screens.
- Master how to communicate between screens.
Very simple, right? Yeah, simple, but I needed to master these things to feel more comfortable in my new job. I knew all the concepts, but I hadn't worked with them until that moment. There's a big difference between only learning about something and truly understanding it in practice. I needed to develop all of this knowledge on both platforms, but more on iOS.
#### 2. Where and how can I get this new knowledge?
Well, I don’t remember exactly where I studied, but I remember looking for courses at Udemy, Pluralsight, and some content on YouTube. All very practical training. On Pluralsight, I remember taking the following courses:
- [Swift Fundamentals](https://app.pluralsight.com/library/courses/swift3-fundamentals/table-of-contents)
- [iOS 11 Fundamentals](https://app.pluralsight.com/library/courses/ios-11-fundamentals/table-of-contents)
- [iOS Collection Views: Getting Started](https://app.pluralsight.com/library/courses/ios-collection-views-getting-started/table-of-contents) - very good
- [iOS Data Persistence: The Big Picture](https://app.pluralsight.com/library/courses/ios-data-persistence-big-picture/table-of-contents)
- [iOS Networking with REST APIs](https://app.pluralsight.com/library/courses/ios-networking-rest-apis/table-of-contents)
- [iOS Auto Layout Fundamentals](https://app.pluralsight.com/library/courses/ios-auto-layout-fundamentals/table-of-contents) - 36%
- [Developing Android Applications with Kotlin: Getting Started](https://app.pluralsight.com/library/courses/android-apps-kotlin-build-first-app/table-of-contents)
- [Android Apps with Kotlin: Tools and Testing](https://app.pluralsight.com/library/courses/android-apps-kotlin-tools-testing/table-of-contents)
- [Android Apps with Kotlin: RecyclerView and Navigation Drawer](https://app.pluralsight.com/library/courses/android-apps-kotlin-recyclerview-navigation-drawer/table-of-contents) - 67%
#### 3. How much time will I invest to achieve this?
I usually dedicate many hours to studying. This is how it works for me. When I face a new challenge, I like to dive deep into it. I have the privilege of being able to find time for this. My wife supports me and helps me a lot. I usually dedicate between one and two hours a day, including weekends, because my wife loves to wake up late, and I wake up early to get more study time when we stay home.
I hoped to see progress in about a year, and it came. Over time, the tasks became easier. This was a sign to me that I was achieving my objective. I continue studying and evolving, but with less pressure on myself.
This basic knowledge helped me keep growing in my position. Maybe you're thinking: why did you study the iOS and Android native approaches with Swift and Kotlin instead of C# with Xamarin? Well, I believe that if you master native development and understand how each platform works, you reduce doubt when facing problems in mobile development with Xamarin. Something goes wrong? What's wrong? Is it Xamarin, or is it iOS / Android? Understanding the core concepts in the native language helped me many times not to blame Xamarin when I faced problems. I've seen people blame Xamarin Forms when the real problem lay within the platforms themselves.
### Experience 2: Start To Focus On Mastering iOS Development With Swift
Last year I wrote about it, and you can find this post [here](/ionixjunior/my-journey-in-mobile-development-from-c-to-swift-3mjm). So, here I am thinking about the questions again.
#### 1. What knowledge do I need to develop to make myself confident and allow me to move on?
Over the last few years working with Xamarin Traditional, I've learned a lot about iOS, but now I have some specific needs to feel comfortable becoming an iOS Engineer in the future. For example:
- Master memory management
- Master parallelism and concurrency
- Master closures in Swift
- Master View Code
- Learn how to build the same things in SwiftUI that I know how to build with UIKit
#### 2. Where and how can I get this new knowledge?
To my surprise, I found a lot of content on LinkedIn Learning, but I’ve started this journey using [BackFront Plus](https://backfront.com.br/backfront-plus). They have great paid content, and the videos about iOS development are fantastic, very well explained. If you have the opportunity, I highly recommend it to you.
Also, I like the content of [Paul Hudson](https://www.hackingwithswift.com). He has a lot of great free content. The same is true for [Sean Allen](https://www.youtube.com/@seanallen), and [CodeWithChris](https://www.youtube.com/@CodeWithChris).
Another interesting thing that I’m trying now is to practice in a real project. I love to create a project from scratch and test concepts on it. But there’s a problem: when you’re in a “hello world” project, learning tends to be somewhat limited. So I decided to practice trying to contribute to a real open-source project. I shared about it in [this post](/ionixjunior/you-dont-need-to-be-a-senior-to-contribute-to-open-source-projects-48k4).
#### 3. How much time will I invest to achieve this?
Today, I’m investing at least one hour from Monday to Friday, and some hours on Saturday. My intention here is to study and practice until I feel confident to contribute to open-source projects.
Being confident is subjective, right? Well, in my context, feeling confident means contributing and creating solutions without too much difficulty. That's it.
## Need Tips To Find Where Or What To Study?
There's a site called [roadmap.sh](https://roadmap.sh), a community effort to create roadmaps, guides, and other educational content to help developers pick a path and guide their learning. Check it out!
Another great resource is [a study guide for software development with Swift](https://medium.com/@ronanrodrigo/follow-this-path-a-study-guide-for-software-development-with-swift-180ba093a752). I didn’t finish it, but I found great content there.
## What Will You Learn Now?
The path to mastery in tech is a journey, not a destination. It’s about finding the right balance between learning and practicing, focusing on what matters most to you, and understanding that you don’t need to know everything to make progress.
Embrace the power of patience and persistence — it’s your key to conquering the ever-evolving tech landscape. Start by identifying what you want to learn, find the resources that suit your learning style, and dedicate the time you can to reach your goals.
Don’t be afraid to make mistakes, learn from them, and keep moving forward. Remember, every new skill you acquire adds value to your career and empowers you to build incredible things. So, take that first step, stay curious, and enjoy the learning journey!
---
title: DevX Status Update
published: true
date: 2024-06-21 00:00:00 UTC
tags: puppet,community
canonical_url: https://puppetlabs.github.io/content-and-tooling-team/blog/updates/2024-06-21-devx-status-update/
---
## Slow Week
Hey all, it's been a rather slow week on our side, with half the team off on holiday and the rest starting to prepare for our next quarter. Hopefully it's been more interesting for all of you; maybe some hidden krakens were discovered within your code bases and slain!
Anyway, with the weather slowly getting better and a heat wave coming in locally, I hope you all have a great weekend and get some needed sun. I'm hoping to actually get a tan this year.
👋
---
title: Rapid Innovation: Leading the Way in AI and Blockchain Consulting
published: true
date: 2024-06-20 23:48:19 UTC
canonical_url: https://dev.to/rapidinnovation/rapid-innovation-leading-the-way-in-ai-and-blockchain-consulting-1g67
---
## Demystifying Rapid Innovation's Services
Rapid Innovation offers a comprehensive suite of services designed to cater to
the diverse needs of businesses. Let's delve deeper into their core offerings:
## Custom AI Solutions
Rapid Innovation's team of AI experts works closely with clients to understand
their specific challenges and objectives. They then design and develop bespoke
AI solutions that leverage cutting-edge technologies like natural language
processing (NLP), computer vision, and machine learning (ML) to automate
tasks, improve decision-making, and gain valuable insights from data.
## Natural Language Processing (NLP)
NLP empowers machines to understand and process human language. Rapid
Innovation can leverage NLP to develop chatbots, sentiment analysis tools, and
intelligent document processing systems, transforming the way businesses
interact with customers and extract knowledge from unstructured data.
## Computer Vision
Computer vision enables machines to interpret and analyze visual information.
Rapid Innovation can utilize computer vision to create intelligent video
surveillance systems, automate visual inspections, and develop image
recognition applications, fostering enhanced security, quality control, and
product identification.
## Machine Learning (ML)
ML algorithms learn from data to make predictions and improve performance over
time. Rapid Innovation can harness ML to build predictive maintenance systems,
optimize marketing campaigns, and personalize customer experiences, driving
proactive maintenance, data-driven marketing strategies, and increased
customer engagement.
## Conversational AI Development
Conversational AI, sometimes referred to as chatbot development, is a rapidly
evolving technology that enables companies to build interactive, AI-powered
chatbots that mimic human conversations. Rapid Innovation's domain knowledge
enables organisations to:
## Blockchain Consulting
Blockchain is a cutting-edge technology that makes record-keeping safe, open,
and unchangeable. The blockchain consulting services offered by Rapid
Innovation help companies with:
## The Rapid Innovation Advantage
Several factors differentiate Rapid Innovation from other AI and blockchain
consulting firms:
## Unveiling the Impact of Rapid Innovation
Rapid Innovation's solutions empower businesses to achieve a multitude of
benefits, including:
## A Glimpse into the Future of Rapid Innovation
Rapid Innovation is positioned to be at the forefront of this fascinating
adventure as blockchain and AI technologies continue to advance. Here's what
this innovative firm has in store for the future:
Rapid Innovation is ideally positioned to make waves in both blockchain and
artificial intelligence. Their focus on innovation, data-driven strategy, and
client success makes them an invaluable partner for companies looking to take
advantage of the revolutionary potential of these technologies. Rapid
Innovation is poised to sustainably propel corporate expansion, stimulate
creativity, and leave a positive mark on the world as they navigate the
ever-changing technological terrain.
📣📣Drive innovation with intelligent AI and secure blockchain technology! Check
out how we can help your business grow!
[Blockchain Consulting Services](https://www.rapidinnovation.io/service-development/blockchain-app-development-company-in-usa)
[AI Solutions](https://www.rapidinnovation.io/ai-software-development-company-in-usa)
## URLs
* <https://www.rapidinnovation.io/post/why-choose-rapid-innovation>
## Hashtags
#AIConsulting
#BlockchainSolutions
#TechInnovation
#DataDriven
#FutureOfBusiness
---
title: What’s the difference between RBAC and ABAC in Fauna?
published: true
date: 2024-06-20 23:41:47 UTC
tags: security,database,nosql,webdev
canonical_url: https://dev.to/nosqlknowhow/whats-the-difference-between-rbac-and-abac-in-fauna-53ih
---
ABAC (Attribute-Based Access Control) is not an extension of RBAC (Role-Based Access Control), but rather a distinct model that can be considered a superset in terms of flexibility and granularity. Both answer the question, "Does this operation have access?", but they use very different mechanisms to determine the answer. Here's how they compare and relate:
## Role-Based Access Control (RBAC):
- Role-centric: Access decisions are primarily based on the static roles assigned to users. Each role has predefined permissions that determine what the bearer of that role can access.
- Simplicity and manageability: RBAC is generally simpler to implement and manage because it categorizes permissions by broad roles, which can be easily assigned to users.
- Static: The rules are static and don't typically consider the context of a request or the attributes of the resources being accessed.
## Attribute-Based Access Control (ABAC) in Fauna:
- Attribute-centric: ABAC uses a variety of attributes (user attributes, resource attributes, action attributes, and contextual attributes) to make access decisions. These attributes can encompass various data points pertinent to enforcing access control policies. This includes personal user information such as age and location, organizational roles assigned to the user, and broader system-level conditions like the time of day or the device being used for access. Each attribute can be dynamically assessed to make real-time decisions about the user’s permissions within the system.
- Dynamic and granular: Policies in ABAC can be very granular and context-sensitive, allowing for more precise control over who can access what, when, and under what conditions.
- Flexibility: Due to its reliance on multiple attributes for making decisions, ABAC can accommodate more complex scenarios than RBAC. It can adapt to a range of changing conditions, which would be more difficult or cumbersome to manage in a purely role-based model.
## Relationship between RBAC and ABAC:
- While RBAC is focused on user roles, ABAC uses roles as just one of the many attributes it uses for access control. This means ABAC can implement all the policies that RBAC can, plus additional policies that are too specific or dynamic for RBAC to handle effectively.
- Thus, ABAC can be seen as a superset of RBAC in terms of capability. It offers everything RBAC does, with additional flexibility to incorporate a broader range of criteria into access decisions.
In summary, ABAC offers a more flexible and comprehensive approach to access control compared to RBAC, capable of handling complex, dynamic environments by leveraging a wide range of attributes, whereas RBAC offers a simpler, more straightforward approach that might be sufficient for environments with fixed access control requirements based on well-defined roles.
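The contrast can be sketched with a small, illustrative example. This is plain Java, not Fauna syntax, and the attribute names and policy ("editors may write, but only during business hours") are made up for illustration: an RBAC check consults only the static role assignment, while an ABAC check evaluates the role as just one attribute among several evaluated at request time.

```java
import java.util.Map;
import java.util.Set;

public class AccessControl {

    // RBAC: the decision depends only on the static role assignment.
    static boolean rbacAllows(Set<String> userRoles, String requiredRole) {
        return userRoles.contains(requiredRole);
    }

    // ABAC: the role is one attribute among several, combined with
    // contextual attributes (here, the hour of the request) at decision time.
    static boolean abacAllows(Map<String, Object> attrs) {
        boolean isEditor = "editor".equals(attrs.get("role"));
        Object hour = attrs.get("hourOfDay");
        boolean businessHours = hour instanceof Integer
                && (Integer) hour >= 9 && (Integer) hour <= 17;
        return isEditor && businessHours;
    }

    public static void main(String[] args) {
        System.out.println(rbacAllows(Set.of("editor"), "editor"));                    // true
        System.out.println(abacAllows(Map.of("role", "editor", "hourOfDay", 23)));     // false
    }
}
```

Under RBAC, adding a "business hours only" restriction would force you to invent new roles or bolt on external logic; under ABAC, it is just one more predicate over the request's attributes.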
---
title: Unlocking the Secret: How to Implement File Upload with Spring Boot and Amazon S3
published: true
date: 2024-06-20 23:29:16 UTC
tags: aws,spring,java,amazons3
canonical_url: https://dev.to/jordihofc/desvendando-o-segredo-como-implementar-file-upload-com-spring-boot-e-amazon-s3-1jd1
---
The need for companies, and people too, to store and manage files securely and efficiently is nothing new. A good example is the continued high demand for products such as SSDs, flash drives, and external HDDs, not to mention that companies like Amazon, Google, Apple, and Microsoft sell cloud storage at different scales and price points.
File storage can serve customers in many categories, from everyday users through products like Google Drive, OneDrive, and iCloud, to companies of all sizes that need custom systems to manage their files.
Customers of the Amazon Web Services cloud can write their applications to integrate with [Amazon Simple Storage Service (S3)](https://aws.amazon.com/pt/s3/), the secure and scalable distributed storage service that can be accessed from different applications and platforms through APIs.
Today it is possible to write applications in the most diverse languages and platforms, using Amazon's SDK to manage your files. And for us Java developers it is no different: we can build directly on the SDK or use a framework such as Micronaut, Quarkus, or Spring to help us in this process.
## And the Question That Won't Go Away: How Do I Implement a File Uploader with Spring Boot and Amazon S3?
The solution to this problem is divided into two steps. In the first step, we will configure the application to use Amazon S3 in a Spring project. Then we will build the service responsible for uploading files to the bucket.
### 1 - Configuring the Application to Use Amazon S3
So, let's go! [Spring Cloud AWS](https://docs.awspring.io/spring-cloud-aws/docs/3.0.0/reference/html/index.html#using-amazon-web-services) is a module built to abstract the use of AWS services in Spring Boot applications. There is even a starter for applications that will use only the Amazon S3 module: [Spring Cloud AWS S3](https://docs.awspring.io/spring-cloud-aws/docs/3.0.0/reference/html/index.html#spring-cloud-aws-s3), which offers different ways to access and use a bucket in your system and also provides the `S3Template` abstraction, which makes it simple and intuitive to perform query, insert, and delete operations in S3. So the first step is to add it to the project's dependencies.
Next, we will work on configuring Spring Cloud AWS S3 so that it can access the bucket in S3. To do so, we must provide the bucket access endpoint, the region where the bucket is available, and the AWS account access credentials.
```properties
spring.cloud.aws.endpoint=${AWS_ENDPOINT:http://s3.localhost.localstack.cloud:4566}
spring.cloud.aws.region.static=${AWS_REGION:us-east-1}
spring.cloud.aws.credentials.access-key=${AWS_ACCESS_KEY:localstackAccessKeyId}
spring.cloud.aws.credentials.secret-key=${AWS_SECRET_KEY:localstackSecretAccessKey}
```
In the code above, we start by defining the bucket access endpoint through the `spring.cloud.aws.endpoint` property. Next, we define the region where the bucket is located via the `spring.cloud.aws.region.static` property. Finally, we define the account access credentials through the `spring.cloud.aws.credentials.access-key` and `spring.cloud.aws.credentials.secret-key` properties.
### 2 - Writing the File Upload Service
In this step we will define the code responsible for receiving a file and uploading it to Amazon S3. Before starting to build it, we need to understand how posting a file works. In Amazon S3 you can define a bucket. Did I hear that right? A bucket? Yes, in S3 a bucket is a kind of directory that stores your objects, which can be files, images, music, and any other kind of object regardless of its media type. Each object must have a unique access key (Key), which will be used to generate download links and to update and delete the object.
So let's start the process by defining the abstraction responsible for representing a file. Next, we will define the `FileStorageService` bean, which will be responsible for managing operations on files. Finally, we will build the method responsible for uploading to the bucket.
```java
import org.springframework.web.multipart.MultipartFile;

public record FileUpload(MultipartFile data) {}
```
The code shown above defines the abstraction that represents a file to be persisted in the file storage. `FileUpload` has a single field of type [`MultipartFile`](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/web/multipart/MultipartFile.html), a Spring Web abstraction that represents a file submitted through a web request.
With the abstraction defined, let's move on to the upload service.
```java
@Service
public class FileStorageService {

    @Autowired
    private S3Template template;

    private String bucket = "myBucketName";

    public String upload(FileUpload fileUpload) {
        try (var file = fileUpload.data().getInputStream()) {
            String key = UUID.randomUUID().toString();
            S3Resource uploaded = template.upload(bucket, key, file);
            return key;
        } catch (IOException ex) {
            throw new RuntimeException("Could not upload the document");
        }
    }
}
```
The code can be summarized as follows: first we declare the **FileStorageService** class as a service `bean` via the `@Service` annotation; next, we inject the `S3Template` through the `template` field, annotated with `@Autowired`. Then the `upload` method is defined: it receives the `FileUpload` to be persisted and returns the String key used to look the file up in the bucket.
The upload logic is a bit delicate, since handling files in Java requires the file to be closed — something that is very easy to forget. Small, subtle mistakes like this lead to resource leaks that can degrade the application.
Because of this requirement, we used the [try-with-resources](https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) statement: it opens the file inside the try clause and, on success, makes it available within the statement's scope and closes it automatically when execution finishes. Otherwise, we can do custom exception handling in the catch clauses. Inside the try block is where the magic happens: we generate a unique access key for the file and call the `S3Template` upload method, passing the destination bucket, the access key, and the file's InputStream; if no IOException is thrown, we return the access key.
### Conclusion
Storing and managing files is a need shared by many people and companies; it is no wonder that the big market players such as Amazon, Google, Apple, and Microsoft profit from selling their storage products.
It is common for companies of all sizes to need custom processes and logic for storing and using their files. A good solution is to develop software that enforces the rules and behaviors for managing those files, which are then stored in a distributed storage service such as Amazon S3.
Spring Cloud AWS is a module focused on abstracting the use of AWS cloud services for Spring applications, and it includes a starter that makes integrating with S3 buckets easy. A service that manipulates files must be built carefully: every file that is opened must also be closed, to avoid resource leaks. We can mitigate this with try-with-resources, which abstracts away the file-closing step.

*by jordihofc*
---
title: File Upload with Google Cloud Storage and Node.js
published: true
date: 2024-06-20 23:26:02 UTC
tags: cloud, node, javascript, googlecloud
canonical_url: https://dev.to/kalashin1/file-upload-with-google-cloud-storage-and-nodejs-17bh
---
With the data collection rate currently going through the roof, the chances that you will build an app requiring users to upload one or multiple files will also be through the roof. There are many solutions to this problem: tons of services out there are focused on making this process as smooth as possible, with names like Firebase Storage, Supabase Storage, and Cloudinary.
However, in today's post we will consider Google Cloud Storage (GCS), a cloud storage service provided by Google via GCP. Spoiler alert: this is what powers Firebase Storage under the hood. Google Cloud Storage is a cloud-managed service for storing unstructured data: you can store any amount of data and retrieve it as often as you like.
In today's post, we will see how we can handle file uploads to Google Cloud Storage using Node.js. Here are the talking points:

- Project setup
- Building an Express server
- Processing uploaded files with Multer
- Handling file uploads with GCS
### Project Setup
The first thing you need to do is head over to Google Cloud and register a new account if you do not already have one. (If you do, then you're part of the bandwagon of developers with unfinished projects.) Once you're signed in, navigate to Cloud Storage in your Google Cloud console and create a new bucket; take note of its name, as we'll use it later. Now we need to set up a Node.js project: open up your terminal, navigate into your projects directory, create a folder to serve as the project directory, say `gc_node`, and navigate into the newly created folder.
Now we need to generate a package.json file.
```bash
npm init -y
```
Now we are going to install some dependencies for the project with the following command.
```bash
npm i express multer @google-cloud/storage cors nodemon
```
These are the dependencies we'll need to get our server up and running. Now let's create a basic Express server, which will serve as the entry point for processing files destined for GCS. Inside the `gc_node` directory, let's make a new folder, `src`, which will house all of our working files.
```tree
gc_node
├── src
│ ├── index.ts
```
Now let's open up our index file and edit it accordingly:
```typescript
const express = require('express');
const cors = require('cors');

const app = express();
app.use(cors());

app.listen(3000, () => {
  console.log('server running on port 3000');
});
```
Let's add the following script to our package.json file.
```json
{
  "scripts": {
    "dev": "nodemon ./src/index.js",
    "start": "node ./src/index.js"
  }
}
```
From the terminal, we need to run the dev command to serve our project locally:
```bash
npm run dev
```
You'll see that our project is running on port 3000 if everything is done correctly. Now let's set up a route for processing uploaded files.
```typescript
// gc_node/index.ts
const multer = require('multer');
const upload = multer({ storage: multer.memoryStorage() });
// cont'd
app.post('/upload', upload.array("images"), (req, res) => {
  const imageMimeTypes = ["image/png", "image/jpeg", "image/jpg", "image/svg"];
  const files = req.files as any[];
  // Note: returning from inside a forEach callback does not exit the route
  // handler, so a plain loop is used for the early return.
  for (const file of files) {
    const mimeType = imageMimeTypes.find((mT) => mT === file.mimetype);
    if (!mimeType) {
      return res.status(400).json({ message: "Only images allowed!" });
    }
  }
  // more on here later
})
```
We have used the multer middleware to parse any files sent along with the request and we are expecting to receive an array of files. We'll define a helper function that will process all uploaded files before we send them off to Google.
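The image filter in that handler is just an allow-list lookup. Pulled out of Express, the same check can be sketched (and sanity-checked) on its own:

```javascript
// Allowed image MIME types, mirroring the list used in the route handler.
const imageMimeTypes = ["image/png", "image/jpeg", "image/jpg", "image/svg"];

// True only when the reported MIME type is on the allow-list.
function isAllowedImage(mimetype) {
  return imageMimeTypes.includes(mimetype);
}

console.log(isAllowedImage("image/png"));       // true
console.log(isAllowedImage("application/pdf")); // false
```

Keep in mind that the MIME type is reported by the client, so this is a convenience filter, not a security boundary.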
```javascript
// gc_node/src/index.ts
const crypto = require('crypto');

function boostrapFile(file) {
  const key = crypto.randomBytes(32).toString("hex");
  const [, extension] = file.originalname.split(".");
  const uploadParams = {
    fileName: `${key}.${extension}`,
    Body: file.buffer,
  };
  return { uploadParams, key, extension };
}
// .... continued
```
In summary, the function above takes a file object, generates a unique key, creates a new filename with the key and original extension, prepares the file data for upload, and returns all the necessary information for upload processing. Let's upload our file now;
```typescript
// gc_node/index.ts
const path = require("path");
const { Storage } = require("@google-cloud/storage");

const storage = new Storage({
  projectId: process.env.GOOGLE_PROJECT_ID,
  keyFilename: path.join(__dirname, "../key.json"),
});

async function uploadFile(
  bucketName: string,
  buffer: Buffer | string,
  destFileName: string
) {
  const file = storage.bucket(bucketName).file(destFileName);
  await file.save(buffer);
  const message = `file uploaded to ${bucketName}!`;
  const [publicUrl] = await file.getSignedUrl({
    expires: new Date("12/12/3020"),
    action: "read",
  });
  return { message, publicUrl };
}
// cont'd
```
The code snippet above defines an asynchronous function called `uploadFile` that uploads a file to Google Cloud Storage (GCS) and returns a publicly accessible URL. First, we bring in the Storage class from the @google-cloud/storage library, which provides the functionality to interact with GCS. Then we create a new Storage instance with two configuration options: it reads the project ID from the environment variable `process.env.GOOGLE_PROJECT_ID`, and it uses `path.join(__dirname, "../key.json")` to locate the service account key file. This key file is required for GCS authentication.
The `uploadFile` function accepts three parameters: the name of the bucket we want to upload to, the buffer holding the contents of the file, and the destination name for the file. Inside the function, we create a reference to the file object within the specified bucket using `destFileName`. Then `await file.save(buffer)` asynchronously uploads the file data to the GCS file object. Next, we build a success message indicating the file was uploaded to the specified bucket. The following line retrieves a publicly accessible URL for the uploaded file using the `getSignedUrl` method with two options: `expires` sets an expiration date for the URL (here, the very distant "12/12/3020"), and `action` specifies the allowed action on the URL, set to "read" for read access.
Finally, we return an object containing the success message and the public URL. We will use this function inside our route handler to upload each file to GCS.
```typescript
// gc_node/src/index.ts
// cont'd
app.post('/upload', upload.array("images"), async (req, res) => {
  // cont'd
  const files = req.files as any[];
  const responses = [];
  // cont'd
  for (const file of files) {
    const {
      uploadParams: { Body },
      extension,
      key,
    } = boostrapFile(file);
    const response = await uploadFile(
      process.env.BUCKET_NAME,
      Body,
      `photos/${key}.${extension}`
    );
    responses.push(response);
  }
  console.log(responses);
  return res.json(responses);
})
```
This route handler allows uploading multiple files, prepares data for each file using the boostrapFile function, and then uploads each file to GCS using the uploadFile function. Finally, it returns a JSON response containing information about each uploaded file.
You can go ahead and use this next time you want to set up uploads to GCS. What are your thoughts on the post? Do you personally use GCS for handling file storage, or have you used it at any point in time? What was the experience like, and what are your thoughts on using GCS? I would love to hear all of this and more, so leave your thoughts below in the comment section. I hope you found this useful, and I will see you in the next one.

*by kalashin1*
---
title: From Messy Data to Super Mario Pipeline: My First Adventure in Data Engineering
published: true
date: 2024-06-20 23:24:21 UTC
tags: dataengineering, python, automation, sql
canonical_url: https://dev.to/jampamatos/from-messy-data-to-super-mario-pipeline-my-first-adventure-in-data-engineering-1apo
---
**Welcome to the thrilling tale of my very first automated data pipeline!**
Imagine you’ve just been handed a database that looks like it’s been through Bowser’s castle and back. Yes, it’s that messy. The mission? To transform this chaos into a clean, analytics-ready dataset with as little human intervention as possible. Sounds like a job for Mario himself, right? Well, buckle up, because this is the story of how I tackled the subscriber data cleanup project, dodged fireballs, and came out victorious.
When I first opened the `cademycode.db` file, I felt like Mario entering a warp pipe into an unknown world. It was a mess. Missing values, inconsistent data types, and duplicates everywhere! But hey, every great adventure starts with a challenge, and this was mine.
In this post, I’ll take you through my journey of building my very first automated data pipeline. We’ll dive deep into the nitty-gritty details and share the ups and downs along the way. Whether you’re a fellow data plumber or just curious about the magical world of data cleaning, you’re in for a treat.
So grab a 1-Up mushroom, get comfy, and let’s embark on this data adventure together! Spoiler alert: There might be some gold coins and hidden blocks of wisdom along the way.
Ready to jump into the pipe? Let’s start with how I set up the project and the initial hurdles I had to overcome. Spoiler alert: There were quite a few!
## Entering the Warp Pipe
Setting up this project was like entering a warp pipe into an unknown world. I knew the journey ahead would be filled with challenges, but I was ready to tackle them head-on.
### Getting Started
First things first, I needed to clone the repository and set up my working environment. I created a directory called `subscriber-pipeline` and jumped right into it.
```bash
mkdir -p /home/jampamatos/workspace/codecademy/Data/subscriber-pipeline
cd /home/jampamatos/workspace/codecademy/Data/subscriber-pipeline
```
Next, I set up a virtual environment to keep my project dependencies isolated. If there's one thing Mario taught me, it's to always be prepared!
```bash
python3 -m venv venv
source venv/bin/activate
```
### Tools and Technologies
Since there were no Fire Flowers or mushrooms to collect, here's a list of the tools and technologies I used for this project instead:
- **Python:** The hero of our story. I used Python for data manipulation and scripting.
- **SQLite:** Our trusty sidekick. This lightweight database was perfect for managing the data.
- **Pandas:** The power-up we needed to handle data manipulation with ease.
- **Jupyter Notebook:** My go-to tool for exploring and experimenting with data.
- **Bash:** The magical spell that automated our pipeline.
### Initial Hurdles
Setting up the environment was smooth sailing until I encountered my first Goomba: installing the required Python packages. After a few head bumps, I finally managed to get everything installed. (Note that `sqlite3` ships with Python's standard library, so there is no need to pip-install it.)
```bash
pip install pandas jupyter
```
But wait, there's more! I also needed testing and logging for the pipeline. Luckily, `unittest` and `logging` are also part of Python's standard library, so no extra installs were required.
### Facing the First Boss: Database Connection
With everything set up, it was time to connect to the database. This felt like facing the first boss. I opened the `cademycode.db` file, unsure of what awaited me inside. Using SQLite, I established a connection and was ready to explore the data.
```python
import sqlite3
con = sqlite3.connect('dev/cademycode.db')
print('Database connection established successfully.')
```
Suffice it to say, the database was indeed as messy as Bowser's castle. But that's a story for the next section.
In the next part of our adventure, we'll dive into inspecting and cleaning the data. Get ready to battle missing values, inconsistent data types, and duplicates galore!
## Battling the Data Monsters
With the setup complete and the database connection established, it was time to dive into the data. This part of the journey felt like battling hordes of Koopa Troopas. Every step revealed new challenges, but with determination (and some Italian pasta), I tackled them head-on.
### Data Inspection
The first step was to inspect the data and understand the lay of the land. Using Pandas, I loaded the tables from `cademycode.db` into DataFrames and took a peek at what I was dealing with.
```python
import pandas as pd

tables = pd.read_sql_query("SELECT name FROM sqlite_master WHERE type='table';", con)
table_names = tables['name'].tolist()

df = {table: pd.read_sql_query(f"SELECT * FROM {table}", con) for table in table_names}

for table, data in df.items():
    print(f"Table: {table}")
    print(data.head())
```
The output revealed the initial state of the data – missing values, inconsistent data types, and duplicates galore. It was like entering a haunted house in Luigi's Mansion!
(If you're curious about the output messages, you can check them at the project's [Jupyter Notebook](https://github.com/jampamatos/codecademy_stuff/blob/main/Data/subscriber-pipeline/Subscriber%20Pipeline.ipynb).)
### Handling Missing Values
Next, I identified and handled the missing values. This was akin to collecting power-ups to boost my chances of success. For some columns, I filled the missing values with zeros, while for others, I used the median value.
```python
students_df = df['cademycode_students'].copy()
# Fill missing values
# Assign back instead of calling inplace=True on a column slice
# (avoids pandas chained-assignment pitfalls)
students_df['job_id'] = students_df['job_id'].fillna(0)
students_df['current_career_path_id'] = students_df['current_career_path_id'].fillna(0)
students_df['num_course_taken'] = students_df['num_course_taken'].fillna(students_df['num_course_taken'].median())
students_df['time_spent_hrs'] = students_df['time_spent_hrs'].fillna(students_df['time_spent_hrs'].median())
```
Again, the choice of whether to fill data with zeroes or with median values is explained in the project's [Jupyter Notebook](https://github.com/jampamatos/codecademy_stuff/blob/main/Data/subscriber-pipeline/Subscriber%20Pipeline.ipynb), but in a nutshell: `job_id` and `current_career_path_id` were given `0` to indicate 'unemployed' and 'not enrolled' status, while the truly numeric columns got the median.
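To make the zero-versus-median distinction concrete, here is a tiny standalone illustration with toy data (the values here are made up for the sketch):

```python
import pandas as pd

# Categorical-style id column: 0 is a sentinel meaning 'none' / 'not enrolled'.
ids = pd.Series([3.0, None, 7.0])
# Truly numeric column: filling with the median keeps the distribution sensible.
hours = pd.Series([2.0, None, 10.0])

print(ids.fillna(0).tolist())                     # [3.0, 0.0, 7.0]
print(hours.fillna(hours.median()).tolist())      # [2.0, 6.0, 10.0]
```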
### Correcting Data Types
The next challenge was correcting inconsistent data types. This felt like trying to fit puzzle pieces together. With a bit of Pandas magic, I converted columns to their appropriate data types.
```python
# Convert data types
students_df['dob'] = pd.to_datetime(students_df['dob'], errors='coerce')
students_df['job_id'] = pd.to_numeric(students_df['job_id'], errors='coerce')
students_df['num_course_taken'] = pd.to_numeric(students_df['num_course_taken'], errors='coerce')
students_df['current_career_path_id'] = pd.to_numeric(students_df['current_career_path_id'], errors='coerce')
students_df['time_spent_hrs'] = pd.to_numeric(students_df['time_spent_hrs'], errors='coerce')
```
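As a quick illustration of what `errors='coerce'` buys us: values that can't be parsed become `NaT`/`NaN` instead of raising, so they can be handled by the same missing-value logic as before (toy data for the sketch):

```python
import pandas as pd

# One unparseable date and one unparseable number become NaT / NaN.
dobs = pd.to_datetime(pd.Series(["1990-05-01", "not-a-date"]), errors="coerce")
hours = pd.to_numeric(pd.Series(["12.5", "oops"]), errors="coerce")

print(dobs.isna().tolist())   # [False, True]
print(hours.isna().tolist())  # [False, True]
```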
### Dealing with Duplicates
No Mario adventure is complete without encountering duplicates – like those pesky Bullet Bills that keep reappearing! I identified and removed duplicate records to ensure the data was clean.
```python
# Remove duplicates
students_df.drop_duplicates(inplace=True)
jobs_df_cleaned = df['cademycode_student_jobs'].drop_duplicates()
```
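On toy data, `drop_duplicates` keeps exactly one copy of each fully identical row:

```python
import pandas as pd

# Two identical Mario rows; only one survives deduplication.
dup_df = pd.DataFrame({
    "uuid": [1, 1, 2],
    "name": ["Mario", "Mario", "Luigi"],
})

deduped = dup_df.drop_duplicates()
print(len(deduped))  # 2
```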
### Extracting Nested Data
One of the trickiest parts was dealing with nested data in the `contact_info` column. It was like the water stage. To nail it, I had to write a function to extract the nested information and split it into separate columns.
Before we continue, note that the data in `contact_info` was a JSON object containing a mailing address and an email, such as:
```json
{"mailing_address": "470 Essex Curve, Copan, Mississippi, 86309", "email": "cleopatra_singleton7791@inlook.com"}
```
(This is a good place to mention that the data used in this project is fictional, so don't worry!)
So, to extract it, we can treat it as a JSON object after all. And that's what we did:
```python
import json

def extract_contact_info(contact_info):
    try:
        info = json.loads(contact_info.replace("'", '"'))
        return pd.Series([info.get('mailing_address'), info.get('email')])
    except json.JSONDecodeError:
        return pd.Series([None, None])

students_df[['mailing_address', 'email']] = students_df['contact_info'].apply(extract_contact_info)
students_df.drop(columns=['contact_info'], inplace=True)
```
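Running the helper against the sample record from above (reproduced standalone here, without the DataFrame) shows the two fields coming apart cleanly:

```python
import json

import pandas as pd

def extract_contact_info(contact_info):
    # Normalize single quotes, then parse the JSON payload.
    try:
        info = json.loads(contact_info.replace("'", '"'))
        return pd.Series([info.get('mailing_address'), info.get('email')])
    except json.JSONDecodeError:
        return pd.Series([None, None])

sample = '{"mailing_address": "470 Essex Curve, Copan, Mississippi, 86309", "email": "cleopatra_singleton7791@inlook.com"}'
address, email = extract_contact_info(sample)
print(address)  # 470 Essex Curve, Copan, Mississippi, 86309
print(email)    # cleopatra_singleton7791@inlook.com
```

Malformed input degrades gracefully: anything that fails to parse simply yields a pair of `None`s, which the missing-value handling can then pick up.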
With the data cleaned and ready, I felt like I had just collected a Super Star power-up. But the adventure was far from over. Next up, I had to create the output CSV and ensure it was analytics-ready.
## The Final Battle for Clean Data
With the data cleaned and ready, it was time for the final showdown: combining the data into a single, analytics-ready CSV. You know, like grabbing giant Bowser by the tail and throwing him around in Super Mario 64. There were obstacles to overcome, but I was determined to save the day (or in this case, the data).
### Combining the Data
First, I needed to combine the cleaned data from multiple tables into a single DataFrame. Using Pandas, I performed the necessary joins to bring everything together:
```python
# Merge dataframes
merged_df_cleaned = pd.merge(students_df, jobs_df_cleaned, how='left', left_on='job_id', right_on='job_id')
final_df_cleaned = pd.merge(merged_df_cleaned, df['cademycode_courses'], how='left', left_on='current_career_path_id', right_on='career_path_id')
```
### Validating the Final Dataset
Once the data was combined, it was time for the flagpole at the end of the level: validating the final dataset. I checked for any inconsistencies or missing values that might have slipped through.
```python
# Fill remaining missing values
final_df_cleaned = final_df_cleaned.assign(
    career_path_id=final_df_cleaned['career_path_id'].fillna(0),
    career_path_name=final_df_cleaned['career_path_name'].fillna('Unknown'),
    hours_to_complete=final_df_cleaned['hours_to_complete'].fillna(0)
)
```
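The same `assign` + `fillna` pattern can be spot-checked on toy data; after the fills, no nulls remain:

```python
import pandas as pd

# Toy frame with one fully-missing row for the career-path columns.
toy = pd.DataFrame({
    "career_path_id": [1.0, None],
    "career_path_name": ["Data Engineering", None],
    "hours_to_complete": [20.0, None],
})

toy = toy.assign(
    career_path_id=toy['career_path_id'].fillna(0),
    career_path_name=toy['career_path_name'].fillna('Unknown'),
    hours_to_complete=toy['hours_to_complete'].fillna(0),
)

print(toy.isnull().values.any())  # False
```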
### Generating the Output CSV
With the data validated, it was time to generate the final CSV. I mean, even Super Mario Bros. had a save feature, right?
```python
# Save final DataFrame to CSV
final_df_cleaned.to_csv('dev/final_output.csv', index=False)
```
### Overcoming Challenges
Of course, no epic battle is without its challenges. One of the biggest hurdles was ensuring that the final DataFrame retained all the original rows and that no data was lost during the merges. After some debugging (and a few extra lives), I successfully retained the integrity of the data.
### Celebrating the Victory
Finally, with the output CSV generated and validated, it was a triumphant moment. I could rest knowing that the data was now clean and ready for analysis.
Or could I?
With the final CSV in hand, the next step was to ensure that the pipeline could run automatically with minimal intervention. This meant developing unit tests and logs to keep everything in check.
## Our Princess Is in Another Castle
After all the hard work of cleaning and combining the data, it might feel like the job is done. But as any Mario fan knows, "our princess is in another castle!" The journey isn't complete until the pipeline is foolproof and can run automatically without constant supervision. This meant developing unit tests and logging to ensure everything runs smoothly.
### The Importance of Unit Tests
Unit tests are like Mario's power-ups—they help you tackle challenges and keep you safe from unexpected pitfalls. I implemented unit tests to ensure the data integrity and functionality of the pipeline. These tests checked for things like schema consistency, the presence of null values, and the correct number of rows.
```python
import unittest

class TestDataCleaning(unittest.TestCase):

    def test_no_null_values(self):
        self.assertFalse(final_df_cleaned.isnull().values.any(), "There are null values in the final table")

    def test_correct_number_of_rows(self):
        original_length = len(df['cademycode_students'])
        final_length = len(final_df_cleaned)
        self.assertEqual(original_length, final_length, "The number of rows differs after the merges")

    def test_schema_consistency(self):
        original_schema = set(df['cademycode_students'].columns)
        final_schema = set(final_df_cleaned.columns)
        original_schema.discard('contact_info')
        original_schema.update(['mailing_address', 'email'])
        self.assertTrue(original_schema.issubset(final_schema), "The final table schema does not include all original columns")

if __name__ == '__main__':
    unittest.main(argv=['first-arg-is-ignored'], exit=False)
```
### Implementing Logging
Logging is essential for tracking the pipeline's execution and troubleshooting issues. Think of it as Mario's map—it helps you see where you've been and identify any trouble spots. I implemented logging to record each step of the pipeline, including updates and errors.
```python
import logging

logging.basicConfig(filename='logs/data_pipeline.log', level=logging.INFO,
                    format='%(asctime)s:%(levelname)s:%(message)s')

def log_update(message):
    logging.info(message)

def log_error(message):
    logging.error(message)

try:
    # Pipeline code...
    log_update("Pipeline executed successfully.")
except Exception as e:
    log_error(f"Error running the pipeline: {e}")
    raise
```
### Creating the Changelog
To keep track of updates, I created a changelog that records version numbers, new rows added, and missing data counts.
```python
def write_changelog(version, new_rows_count, missing_data_count):
    with open('logs/changelog.txt', 'a') as f:
        f.write(f"Version: {version}\n")
        f.write(f"New rows added: {new_rows_count}\n")
        f.write(f"Missing data count: {missing_data_count}\n")
        f.write("\n")
```
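To see what an entry ends up looking like, here is the same function with the path made a parameter (so this sketch doesn't touch the real `logs/changelog.txt`; the version and counts are made up):

```python
import os
import tempfile

def write_changelog_to(path, version, new_rows_count, missing_data_count):
    # Same body as write_changelog, with an explicit path for the sketch.
    with open(path, 'a') as f:
        f.write(f"Version: {version}\n")
        f.write(f"New rows added: {new_rows_count}\n")
        f.write(f"Missing data count: {missing_data_count}\n")
        f.write("\n")

demo_path = os.path.join(tempfile.mkdtemp(), "changelog_demo.txt")
write_changelog_to(demo_path, "1.0.2", 15, 0)
print(open(demo_path).read())
```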
### Challenges and Learnings
One of the biggest challenges was ensuring the unit tests covered all edge cases. Oh, that feeling when the test suite prints 'OK' is unbeatable! With thorough testing and logging in place, I ensured the pipeline was robust and reliable.
### Conclusion
With unit tests and logging in place, I felt confident that my pipeline could handle anything thrown its way. It was a moment of triumph, like finally rescuing Princess Peach after a long adventure.
Now, all that was left was to create a Bash script to automate the pipeline and move the updated files to the production directory. Talk about more data plumbing fun!
## Automating the Pipeline—Mario Kart Style
With the data cleaned, combined, and validated, and with unit tests and logging in place, it was time to put the pipeline on autopilot. Like Mario's kart before a race, everything needed to be race-ready: running smoothly and efficiently, with no banana peels or red shells in sight.
### Purpose of the Bash Script
The Bash script propelled the pipeline forward with speed and precision, you know, like shooting turtle shells at other racers (but a little less fun. Just a little!). It was designed to:
1. Execute the Python script that runs the data pipeline.
2. Check the changelog to determine if an update occurred.
3. Move the updated files from the working directory to the production directory.
### The Script
Here's the Bash script that made it all possible:
```bash
#!/bin/bash

# Path to the Python script
PYTHON_SCRIPT="/home/jampamatos/workspace/codecademy/Data/subscriber-pipeline/main.py"

# Path to the production directory
PROD_DIR="/home/jampamatos/workspace/codecademy/Data/subscriber-pipeline/prod"

# Path to the changelog (same logs/ directory the Python script writes to)
CHANGELOG="/home/jampamatos/workspace/codecademy/Data/subscriber-pipeline/logs/changelog.txt"

# Current version from the changelog
CURRENT_VERSION=$(grep -oP 'Version: \K.*' $CHANGELOG | tail -1)

# Execute the Python script
python3 $PYTHON_SCRIPT

# Check if the script executed successfully
if [ $? -eq 0 ]; then
    echo "Pipeline executed successfully."

    # New version from the changelog
    NEW_VERSION=$(grep -oP 'Version: \K.*' $CHANGELOG | tail -1)

    # Check if there was an update
    if [ "$CURRENT_VERSION" != "$NEW_VERSION" ]; then
        echo "Update detected. Moving files to production."

        # Move updated files to the production directory
        mv /home/jampamatos/workspace/codecademy/Data/subscriber-pipeline/dev/clean_cademycode.db $PROD_DIR/
        mv /home/jampamatos/workspace/codecademy/Data/subscriber-pipeline/dev/final_output.csv $PROD_DIR/

        echo "Files moved to production."
    else
        echo "No updates detected. No files moved to production."
    fi
else
    echo "Pipeline execution failed. Check logs for details."
fi
```
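The version-detection trick deserves a quick aside: `grep -oP 'Version: \K.*'` (the `-P` PCRE flag needs GNU grep) prints only what follows `Version: `, and `tail -1` keeps the most recent entry. A throwaway sketch against a fake changelog:

```shell
# Build a fake changelog with two entries, then extract the latest version.
printf 'Version: 1.0.1\nNew rows added: 10\n\nVersion: 1.0.2\nNew rows added: 5\n' > /tmp/changelog_demo.txt
LATEST=$(grep -oP 'Version: \K.*' /tmp/changelog_demo.txt | tail -1)
echo "$LATEST"
```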
### Challenges on the Track
Like any Mario Kart race, there were more than a few obstacles along the way. One of the trickiest parts was ensuring the script correctly identified updates and moved the files only when necessary. After a few laps of testing and tweaking, I had the script running smoothly.
### Creating an Alias
To make running the script as easy as throwing a green shell, I created an alias. This allowed me to execute the script with a simple command, no matter where I was in the terminal.
```bash
alias run_pipeline="/home/jampamatos/workspace/codecademy/Data/subscriber-pipeline/run_pipeline.sh"
```
By adding this line to my ~/.bashrc file and reloading the shell, I could start the pipeline with a single command:
```bash
run_pipeline
```
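To take automation one lap further, the script could also be scheduled with cron instead of run by hand. This is just a sketch of a crontab entry, assuming the same paths as above; the schedule (2 AM daily here) is an arbitrary choice:

```shell
# m h dom mon dow  command -- run the pipeline daily at 02:00, logging all output
0 2 * * * /home/jampamatos/workspace/codecademy/Data/subscriber-pipeline/run_pipeline.sh >> /home/jampamatos/workspace/codecademy/Data/subscriber-pipeline/logs/cron.log 2>&1
```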
With the Bash script in place, my pipeline was ready to zoom along the track with minimal human intervention. It was a satisfying moment, like crossing the finish line in first place.
## Wrap-up
After navigating through a maze of messy data, cleaning and validating it, automating the process, and ensuring everything runs smoothly with unit tests and logging, we've finally crossed the finish line. It’s been a wild ride, but let's take a moment to reflect on our adventure.
### Summary of Tasks
Throughout this project, we aimed to build a data engineering pipeline to transform a messy database of long-term canceled subscribers into a clean, analytics-ready dataset. Here's a summary of the key tasks we accomplished:
1. **Setting Up the Project:** We began by setting up our working directory and ensuring all necessary files and tools were in place.
2. **Inspecting and Cleaning the Data:** We imported the tables from `cademycode.db` into dataframes, inspected them for missing or invalid data, and performed various data cleaning operations. This included handling null values, correcting data types, and dealing with duplicates.
3. **Creating the Output CSV:** Using the cleaned data, we produced an analytics-ready SQLite database and a flat CSV file. We validated the final table to ensure no data was lost or duplicated during the joins.
4. **Developing Unit Tests and Logs:** We converted our Jupyter Notebook into a Python script. The script includes unit tests to check for updates to the database and to protect the update process. It also includes logging to track updates and errors.
5. **Creating the Bash Script:** We created a Bash script to handle running the Python script and moving updated files from the working directory to the production directory. The script checks the changelog to determine if an update occurred before moving the files.
### Final Thoughts
Building my first automated data pipeline was an exciting and challenging journey. It felt like a typical Super Mario stage, filled with obstacles and power-ups. Along the way, I learned valuable lessons about data cleaning, automation, and the importance of thorough testing and logging.
### Conclusion
In conclusion, this project successfully demonstrates how to build a robust data engineering pipeline that automates the transformation of raw data into a clean and usable format. By following a structured approach, we ensured that the pipeline is reliable, maintainable, and easy to understand. The inclusion of unit tests and logging provides additional safeguards and transparency, making it easier to monitor and debug the process.
This project not only serves as a valuable addition to my portfolio but also equips me with practical experience in handling real-world data engineering challenges. The skills and methodologies applied here are transferable to a wide range of data engineering tasks, ensuring I am well-prepared for future projects and roles in the field.
Thank you for joining me on this adventure! If you have any questions or comments, feel free to leave them below. I’d love to hear about your own data engineering experiences and any tips you might have. Until next time, keep racing towards your data goals and may your pipelines always be free of banana peels!

*by jampamatos*
---
title: Use Modern C++ std::any in your projects
published: true
date: 2024-06-20 23:16:37 UTC
tags: cpp, cpp17, moderncpp
canonical_url: https://dev.to/marcosplusplus/use-modern-c-stdany-in-your-projects-363l
---
### Say goodbye to `void*` once and for all.
---
`std::any` is a feature of the C++ standard library that was introduced in [C++17](https://terminalroot.com/tags#cppdaily).
This component belongs to the set of type-safe container classes, providing a safe means to store and manipulate values of any type.
It is especially useful when you need to deal with situations where the type of the variable can vary! 😃
Then you say:
> **- Oh man! Good. For these cases I use `void *`.**
Yes, you're right, but have you seen how much the new generation cares about *memory safety*???
Not to mention that `void*` is really dangerous!
If you do this, it works:
```cpp
void * some_data; // Bad idea
std::string str = "Hi";
int x = 3;
decltype(x) y = 6;
some_data = &str;
std::cout << *(std::string*)some_data << '\n';
some_data = &x;
std::cout << *(int*)some_data << '\n';
some_data = &y;
std::cout << "Type of y: " << typeid(y).name() << '\n'; // include typeinfo
```
But the odds of this blowing up are high! After you finish using these variables, `some_data` will keep existing, that is, an indefinite lifetime!
And it is precisely to replace `void*` that `std::any` was created in Modern C++, and it is, of course, totally **Safe**!
In other words, it is a *wrapper* that safely encapsulates your variable, much in the spirit of a `shared_ptr`([smart pointers](https://en.cppreference.com/book/intro/smart_pointers))! Yes, and there is even a `std::make_any`!!!
---
## How to use `std::any`
First you need to include its header:
> Logically, it only works from C++17 as was said at the beginning!
```cpp
#include <any>
```
And now the same code that was presented above, but using `std::any`:
```cpp
#include <iostream>
#include <any>
int main(){
std::any some_data;
std::string str = "Hi";
int x = 3;
auto y = std::make_any<decltype(x)>(6);
some_data = str;
std::cout << std::any_cast<std::string>(some_data) << '\n';
some_data = x;
std::cout << std::any_cast<int>(some_data) << '\n';
some_data = y;
std::cout << "Type of y: " << some_data.type().name() << '\n';
}
```
In the code above we saw that:
+ `std::any some_data;` - Declares the variable;
+ `std::any_cast<T>(some_data)` - Converts to the desired type;
+ `std::make_any<T>` - Another way to create objects;
+ `some_data.type().name()` - Gets the data type without needing `typeinfo`.
And you can use it for absolutely everything: `std::vector`, [Lambda](https://terminalroot.com/10-examples-of-using-lambda-functions-in-cpp/) and all existing data types!
And then someone asks:
> **- OK! What if I want to end the lifetime of `std::any` manually?**
Just use the `reset()` member function, or simply re-initialize it:
```cpp
some_data.reset();
// Or
some_data = {};
```
> **— And to check if `std::any` is empty?**
Use `has_value()`:
```cpp
std::cout << (some_data.has_value() ? "Full!" : "Empty.") << '\n';
```
`type()` on its own, without `name()`, can be used to compare types:
```cpp
std::cout << (some_data.type() == typeid(void)) << '\n'; // 0 for false
std::cout << (some_data.type() == typeid(int)) << '\n'; // 1 for true
```
> To print the *Boolean* names, use: `std::cout << std::boolalpha << (some_data.type() == typeid(int)) << '\n';`.
To handle the exception thrown by a failed cast, use `std::bad_any_cast`:
```cpp
try {
std::any any_str("Hiii");
auto my_any{ std::make_any<std::string>(any_str.type().name()) };
std::cout << std::any_cast<std::string>(my_any) << '\n';
}catch (const std::bad_any_cast& e) {
std::cerr << "Error: " << e.what() << std::endl;
}
```
To make sure everything really is in order, never forget to use these flags with your compiler: `-Wall -Wextra -pedantic -g -fsanitize=address`.
---
In addition to being completely **SAFE**, `std::any` is very practical and a great help!
There was a company project I was working on that passed a function argument that could be of any type, but the function's return was a `std::string` concatenated with the name of the received object.
And someone had written a monster of a `switch case` to convert to `std::string` (*bizarre!*); I changed the parameter to `std::any`, converted it with `std::any_cast<std::string>`, and solved it in a way that was: Modern, Safe and Like a Boss! Exactly what `std::any` is!!! 😃
For more information visit: <https://en.cppreference.com/w/cpp/utility/any> | marcosplusplus |
1,895,317 | Adobe Pro Script - Color | I am trying to write a script to change text boxes to red when a certain range of numbers is used.... | 0 | 2024-06-20T23:15:32 | https://dev.to/dashirvine/adobe-pro-script-color-55i0 | help | I am trying to write a script to change text boxes to red when a certain range of numbers is used.
If a number is 5 or more, it turns red; if it is -5 or less, it also turns red. This alert on an aviation form will show a pilot that his plane is not level.
I have tried this but not working:
```javascript
if(event.value >= 5){
event.target.fillColor= color.red;
} else if(event.value = > -5){
event.target.fillColor= color.red;
} else if(event.value = <5) event.target.fillColor-color.transparent;
```
Any help out there? I was doing good until I had to add the negative number to the equation.
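For reference: the snippet has two syntax errors (`= >` and `= <` are not valid operators), and because the second branch matches every value above -5, the negative case can never turn red. A sketch of the intended logic, assuming Acrobat's field-level JavaScript context (where `event.value`, `event.target.fillColor`, `color.red`, and `color.transparent` are available):

```javascript
// True when the entered value is out of level tolerance
// (5 or more in either direction).
function isOutOfTolerance(v) {
  return v >= 5 || v <= -5;
}

// In the field's custom Validate script, something like:
//   var v = Number(event.value);
//   event.target.fillColor = isOutOfTolerance(v) ? color.red : color.transparent;
```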
| dashirvine |
1,895,316 | Use Modern C++'s std::any in your projects | Say goodbye to void* once and for all. std::any is a feature of the C++ standard... | 0 | 2024-06-20T23:10:20 | https://dev.to/marcosplusplus/utilize-stdany-do-c-moderno-nos-seus-projetos-3heh | cpp, cpp17, moderncpp |
### Say goodbye to `void*` once and for all.
---
`std::any` is a feature of the C++ standard library that was introduced in [C++17](https://terminalroot.com.br/tags#cppdaily).
This component belongs to the set of type-safe container classes, providing a safe means to store and manipulate values of any type.
It is especially useful when you need to deal with situations where the type of the variable can vary! 😃
Then you say:
> **- Oh, man! No problem. For these cases I use `void *`.**
Yes, you're right, but have you seen how much the new generation cares about *memory safety*???
> A note from the original Portuguese: the term *segurança* is used there because no Portuguese word quite matches `Safe`, that is: `Safe` **≠** `Seguro`! 😛
Not to mention that `void *` is genuinely dangerous!
If you do this, it works:
```cpp
void * some_data; // Terrible idea
std::string str = "Hi";
int x = 3;
decltype(x) y = 6;
some_data = &str;
std::cout << *(std::string*)some_data << '\n';
some_data = &x;
std::cout << *(int*)some_data << '\n';
some_data = &y;
std::cout << "Type of y: " << typeid(y).name() << '\n'; // include typeinfo
```
But the odds of this blowing up are high! After you finish using these variables, `some_data` will keep existing, that is, an indefinite lifetime!
And it is precisely to replace `void*` that `std::any` was created in Modern C++, and it is, of course, completely **Safe**!
In other words, it is a *wrapper* that safely encapsulates your variable, much in the spirit of a `shared_ptr`([smart pointers](https://terminalroot.com.br/2022/08/entenda-ponteiros-inteligentes-em-cpp-smart-pointers.html))! Yes, and there is even a `std::make_any`!!!
---
## How to use `std::any`
First, you need to include its header:
> Naturally, it only works from C++17 onward, as mentioned at the beginning!
```cpp
#include <any>
```
And now the same code presented above, but using `std::any`:
```cpp
#include <iostream>
#include <any>
int main(){
std::any some_data;
std::string str = "Hi";
int x = 3;
auto y = std::make_any<decltype(x)>(6);
some_data = str;
std::cout << std::any_cast<std::string>(some_data) << '\n';
some_data = x;
std::cout << std::any_cast<int>(some_data) << '\n';
some_data = y;
std::cout << "Type of y: " << some_data.type().name() << '\n';
}
```
In the code above we saw that:
+ `std::any some_data;` - Declares the variable;
+ `std::any_cast<T>(some_data)` - Converts to the desired type;
+ `std::make_any<T>` - Another way to create objects;
+ `some_data.type().name()` - Gets the data type without needing `typeinfo`.
And you can use it for absolutely everything: `std::vector`, [Lambda](https://terminalroot.com.br/2021/04/10-exemplos-de-uso-de-funcoes-lambda-em-cpp.html) and every data type there is!
And then someone asks:
> **- OK! What if I want to end the lifetime of a `std::any` manually?**
Just use the `reset()` member function, or simply re-initialize it:
```cpp
some_data.reset();
// Or
some_data = {};
```
> **- And how do I check whether a `std::any` is empty?**
Use `has_value()`:
```cpp
std::cout << (some_data.has_value() ? "Full!" : "Empty.") << '\n';
```
`type()` on its own, without `name()`, can be used to compare types:
```cpp
std::cout << (some_data.type() == typeid(void)) << '\n'; // 0 for false
std::cout << (some_data.type() == typeid(int)) << '\n'; // 1 for true
```
> To print the *Boolean* names, use: `std::cout << std::boolalpha << (some_data.type() == typeid(int)) << '\n';`.
To handle the exception thrown by a failed cast, use `std::bad_any_cast`:
```cpp
try {
std::any any_str("Hiii");
auto my_any{ std::make_any<std::string>(any_str.type().name()) };
std::cout << std::any_cast<std::string>(my_any) << '\n';
}catch (const std::bad_any_cast& e) {
std::cerr << "Error: " << e.what() << std::endl;
}
```
To make sure everything really is in order, never forget to use these flags with your compiler: `-Wall -Wextra -pedantic -g -fsanitize=address`.
---
Besides being completely **SAFE**, `std::any` is very practical and a great helper!
There was a company project I was working on that passed a function argument that could be of any type, but the function's return was a `std::string` concatenated with the name of the received object.
And someone had written a monster of a `switch case` to convert to `std::string` (*bizarre!*); I changed the parameter to `std::any`, converted it with `std::any_cast<std::string>`, and solved it in a way that was: Modern, Safe and Like a Boss! That is exactly what `std::any` is!!! 😃
For more information, visit: <https://en.cppreference.com/w/cpp/utility/any>
| marcosplusplus |
1,895,315 | My Ansible Learning Journey: Exploring Essential Modules | Introduction to Ansible Ansible is an open-source automation tool that simplifies tasks like... | 0 | 2024-06-20T23:10:20 | https://dev.to/faruq2991/my-ansible-learning-journey-exploring-essential-modules-2e6c | devops, learning, automation, tooling | **Introduction to Ansible**
Ansible is an open-source automation tool that simplifies tasks like configuration management, application deployment, and task automation. It uses a simple, human-readable language called YAML (YAML Ain't Markup Language) to describe automation jobs, making it accessible for beginners and powerful for experts.
### Why Use Ansible Modules?
Ansible modules are the building blocks for creating tasks in Ansible. Each module is a standalone script that Ansible runs on your behalf, performing specific tasks like managing files, installing packages, or configuring services. Modules help you automate tasks efficiently and consistently, reducing the risk of manual errors.
### Important Ansible Modules for Beginners
Here are some essential Ansible modules that every beginner should know:
1. **ping**: Checks connectivity with the target hosts.
2. **shell**: Executes shell commands on remote hosts.
3. **file**: Manages files and directories.
4. **copy**: Copies files to remote locations.
5. **apt**: Manages packages on Debian-based systems.
6. **yum**: Manages packages on Red Hat-based systems.
7. **service**: Manages services on remote hosts.
8. **user**: Manages user accounts.
### Using Ansible Modules: Code Examples
#### 1. **ping** Module
The `ping` module checks for connectivity to your host machines.
```yaml
- name: Check connectivity
hosts: all
tasks:
- name: Ping all hosts
ansible.builtin.ping:
```
#### 2. **shell** Module
The `shell` module allows you to run shell commands on remote hosts.
```yaml
- name: Run a shell command
hosts: all
tasks:
- name: Print date on remote hosts
ansible.builtin.shell: date
```
#### 3. **file** Module
The `file` module manages file and directory properties.
```yaml
- name: Create a directory
hosts: all
tasks:
- name: Ensure /tmp/mydir exists
ansible.builtin.file:
path: /tmp/mydir
state: directory
mode: '0755'
```
#### 4. **copy** Module
The `copy` module copies files from the local machine to remote hosts.
```yaml
- name: Copy a file
hosts: all
tasks:
- name: Copy file to remote hosts
ansible.builtin.copy:
src: /path/to/local/file
dest: /path/to/remote/file
mode: '0644'
```
#### 5. **apt** Module
The `apt` module manages packages on Debian-based systems.
```yaml
- name: Install a package on Debian/Ubuntu
hosts: all
tasks:
- name: Install htop
ansible.builtin.apt:
name: htop
state: present
```
#### 6. **yum** Module
The `yum` module manages packages on Red Hat-based systems.
```yaml
- name: Install a package on CentOS/RHEL
hosts: all
tasks:
- name: Install htop
ansible.builtin.yum:
name: htop
state: present
```
#### 7. **service** Module
The `service` module manages services on remote hosts.
```yaml
- name: Start and enable a service
hosts: all
tasks:
- name: Ensure nginx is running and enabled
ansible.builtin.service:
name: nginx
state: started
enabled: yes
```
#### 8. **user** Module
The `user` module manages user accounts on remote hosts.
```yaml
- name: Create a user account
hosts: all
tasks:
- name: Ensure user 'john' exists
ansible.builtin.user:
name: john
state: present
groups: 'wheel'
```
### Conclusion
Learning Ansible modules is a fundamental step in mastering automation with Ansible. These essential modules will help me automate repetitive tasks within my workflow, making it more efficient and reliable. As I continue on my Ansible journey, I believe I will discover many more modules that will cater to some specific needs, but starting with the basics gives me a solid foundation.
I love automating!
I love Ansible!! | faruq2991 |
1,873,920 | Should you really Roll your own auth? | Hey guys, In this article, I want to discuss whether it's better to build your own authentication... | 23,487 | 2024-06-20T23:00:00 | https://dev.to/devlawrence/should-you-really-roll-your-own-auth-4dj | webdev, authjs, javascript, programming | Hey guys, In this article, I want to discuss whether it's better to build your own authentication system or to use a third-party service provider. Let’s dive right in 😃
## Why Consider Building Your Own Authentication?
First, let's consider the **"WHY"** behind building your own authentication system from scratch. The decision depends significantly on your role and the kind of application you are developing. For instance, if you are a backend developer working for a company, they likely already have their own authentication system in place. However, the situation might be different if you're a freelancer (hello🙂), or frontend developer working for clients or personal projects.
Some people advocate for building your own authentication, while others recommend using third-party services like Clerk, Auth0, Kinde, and others. I'll outline the pros and cons of each approach and share my perspective on both solutions.
## 👨🏽💻 The Freelancer's Perspective
As a freelancer, if I have projects to deliver to clients, creating authentication from scratch is not the best solution. Here’s why:
- **Time Constraints**: Freelancers often work with tight deadlines. Building a robust authentication system from scratch is time-consuming and complex, which can delay project delivery.
- **Resource Management**: Freelancers usually handle multiple aspects of a project. Using a third-party service for authentication allows them to focus on other important tasks, enhancing overall productivity.
- **Cost**: While third-party services can be expensive, many offer generous free tiers that are sufficient for small to medium projects. This can be a cost-effective solution for freelancers working on budget-constrained projects.
**Cons**:
- **Dependency**: Relying on an external provider might raise concerns about reliability and data security.
- **Cost at Scale**: While initial costs might be low, they can increase significantly as the project scales.
_💡 But at the end of the day, the client does not really care what you use. They just want to see results._
## 👨🏽💻 The Frontend Developer's Perspective
As a frontend developer, or more broadly, as a software engineer focused on building frontend applications, the scenario is slightly different:
- **Ease of Integration**: Frontend developers can easily integrate third-party authentication services without delving into the complexities of backend systems.
- **Time Efficiency**: Using third-party services allows frontend developers to concentrate on the UI/UX aspects of the project, ensuring a better user experience.
- **Learning Opportunity**: While it’s beneficial to understand how authentication works, building it from scratch isn’t always necessary for frontend-focused projects. However, gaining some knowledge can help when integrating third-party services securely.
**Cons**:
- **Limited Control**: Depending on third-party services means you have less control over the authentication process and data management.
- **Potential Integration Issues**: There can be occasional compatibility issues with other parts of the application.
## 👨🏽💻 The Backend Developer's Perspective
For backend developers, the line becomes blurry, and here's why:
- **Control and Customization**: Building your own authentication system offers greater control over the implementation, allowing customization to meet specific security and business requirements.
- **Security Considerations**: Backend developers often need to ensure high security standards. While third-party services are secure, having control over the authentication process allows for more tailored security measures.
- **Scalability and Maintenance**: Maintaining your own system can be challenging but rewarding. Backend developers need to weigh the benefits of customization against the overhead of maintaining and scaling the system.
**Cons**:
- **Time and Resources**: Developing and maintaining a custom authentication system requires significant time and resources.
- **Complexity**: Ensuring that the system is secure and scalable adds to the complexity of the project.
_💡 Here's the thing: you are a backend dev, which means you chose the path of long suffering, so you don't need to take the cons into consideration_ 🙂.
## Common Questions
Here are some common questions I was hoping you’d ask as well the answers in mind 👇🏽
- **What if my project scales quickly?**
Using a third-party service can be beneficial as they often provide scalable solutions that can handle increased loads without significant changes.
- **Are third-party services secure enough?**
Most third-party services invest heavily in security and compliance, often exceeding what a small team can implement. However, always review their security policies and practices to ensure they meet your requirements.
_💡 But you should try out_ [clerk.dev](https://clerk.dev/) _though 🙂_
- **Can I switch from a third-party service to my own system later?**
Yes, but it can be complex. Plan for such transitions by abstracting the authentication layer in your application to make future changes easier.
## Conclusion
Alright guys, Thanks for getting to this part of the article 🎊 🎊. The decision to build your own authentication system or use a third-party service depends on your role, project requirements, and constraints. Freelancers and frontend developers might find third-party services more practical due to time constraints and workload, while backend developers might benefit from the flexibility and control of building their own systems. Regardless of your choice, it's crucial to weigh the pros and cons and make an informed decision that best suits your needs.
Have an amazing weekend and see you next week 🙂
| devlawrence |
1,895,311 | Understanding the getBy..., findBy..., and queryBy... Naming Conventions in Jest | In the context of testing with Jest, especially when testing React components using... | 27,693 | 2024-06-20T22:52:13 | https://dev.to/vitorrios1001/entendendo-as-nomenclaturas-getby-findby-e-queryby-no-jest-2ni4 | jest, testing, javascript, typescript | In the context of testing with Jest, especially when testing React components using `@testing-library/react`, you will come across several query functions with different prefixes, such as `getBy...`, `findBy...`, and `queryBy...`. Each of these functions serves a specific purpose, and understanding their differences can help you write more effective and robust tests.
## **getBy...**
### Usage
The `getBy...` function is used to select elements that are expected to be present in the DOM.
### Behavior
If the element is not found, `getBy...` throws an error immediately. This is useful when you expect the element to be in the DOM right after the initial render.
### Examples
```javascript
const button = screen.getByText('Submit');
const input = screen.getByPlaceholderText('Enter your name');
```
### When to Use
Use `getBy...` when you expect the element to be in the DOM immediately after the component's initial render.
### Example Test with `getBy...`
```javascript
import { render, screen } from '@testing-library/react';
import MyComponent from './MyComponent';
test('renders submit button', () => {
render(<MyComponent />);
const button = screen.getByText('Submit');
expect(button).toBeInTheDocument();
});
```
## **findBy...**
### Usage
The `findBy...` function is used to select elements that may appear in the DOM asynchronously.
### Behavior
Returns a `Promise` that resolves when the element is found. If the element is not found within the default timeout (1 second), the `Promise` is rejected.
### Examples
```javascript
const button = await screen.findByText('Submit');
const input = await screen.findByPlaceholderText('Enter your name');
```
### When to Use
Use `findBy...` when the element may not be present immediately and may appear after some asynchronous operation, such as an API call or an animation.
### Example Test with `findBy...`
```javascript
import { render, screen } from '@testing-library/react';
import MyComponent from './MyComponent';
test('renders submit button asynchronously', async () => {
render(<MyComponent />);
const button = await screen.findByText('Submit');
expect(button).toBeInTheDocument();
});
```
## **queryBy...**
### Usage
The `queryBy...` function is used to select elements that may or may not be present in the DOM.
### Behavior
Returns `null` if the element is not found, instead of throwing an error.
### Examples
```javascript
const button = screen.queryByText('Submit');
const input = screen.queryByPlaceholderText('Enter your name');
```
### When to Use
Use `queryBy...` when you want to verify the absence of an element in the DOM. It is useful for testing negative conditions.
### Example Test with `queryBy...`
```javascript
import { render, screen } from '@testing-library/react';
import MyComponent from './MyComponent';
test('does not render submit button initially', () => {
render(<MyComponent />);
const button = screen.queryByText('Submit');
expect(button).not.toBeInTheDocument();
});
```
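To make the contrast concrete, here is a toy model of the three behaviors (a sketch of the semantics only, not the real Testing Library implementation):

```javascript
// getBy: throws immediately when the element is absent.
function getBy(el) {
  if (el == null) throw new Error('Unable to find element');
  return el;
}

// queryBy: absence is a valid answer, so it returns null.
function queryBy(el) {
  return el == null ? null : el;
}

// findBy: polls until the element shows up, rejecting after a timeout.
function findBy(lookup, timeout = 1000) {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    (function poll() {
      const el = lookup();
      if (el != null) return resolve(el);
      if (Date.now() - start > timeout) return reject(new Error('timed out'));
      setTimeout(poll, 50);
    })();
  });
}
```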
## Conclusion
The choice between `getBy...`, `findBy...`, and `queryBy...` depends on the expected behavior of the component you are testing. Using the appropriate function can make your tests more robust and clear, as well as avoid false positives or unnecessary failures. Understanding these differences is crucial for writing effective tests and maintaining the quality of your code.
Now that you understand how these query functions work, you can apply them correctly in your tests, ensuring your React components are tested thoroughly and efficiently. | vitorrios1001 |
1,895,312 | Backend Application Design Model | Introduction Layers Infrastructure Business Good practices Unit Tests... | 0 | 2024-06-20T22:52:06 | https://dev.to/brunobrolesi/modelo-de-desing-de-aplicacoes-backend-47jp | | 1. [Introduction](#intro)
2. [Layers](#layers)
- [Infrastructure](#infra)
- [Business](#business)
3. [Good practices](#good-practices)
- [Unit Tests](#unit-tests)
- [Integration Tests](#integration-tests)
4. [Development flow](#developing)
5. [Point of attention](#atention)
# Introduction <a name="intro"></a>
After a few years working on applications that used different design models, and after collecting positive and negative feedback on how those implementations behaved over time, I wrote this article proposing a code design model for backend application development. It brings together what worked best in the teams I have worked with, and I consider it a model with an ideal balance between abstractions and separation of responsibilities. This model mixes concepts from [Hexagonal Architecture](https://medium.com/bemobi-tech/ports-adapters-architecture-ou-arquitetura-hexagonal-b4b9904dad1a), [Clean Arch](https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-architecture.html) and [DDD](https://fullcycle.com.br/domain-driven-design/).
The image below contains an overview of the model, which will be broken down throughout this article.

Before we dive into the details of each module, let's take a brief journey through the layers of our application:
1. As a client, I make a call to the application. It is received by the `delivery` layer, where all the data included in the call is extracted and properly validated. If the data is invalid, the request is returned to the user at this layer.
2. Since the data is valid, the application forwards the handling of the request to the `usecase` layer, where all the business rules needed to process the request are applied.
3. If the `usecase` needs an external service to process the request, for example a database, it turns to the `gateway` layer, which contains all the interfaces defining the contracts of the implementations that communicate with external services.
4. Since the `usecase` sees only an interface, this is the moment when the actual implementation of the external service is called. After this step, the reverse path is taken to return the response to the client: the data is returned to the `usecase`, which may or may not call other services; the `usecase` hands the result, success or failure, back to the `delivery` layer; and the result is returned to the client.
# Layers <a name="layers"></a>
Now we will go through each component of the layers in the image to explain its responsibilities.
Looking at the outermost part of this design model, we can see two packages: `infrastructure` and `business`. Let's go deeper into each of them individually.
## Infrastructure <a name="infra"></a>
`infrastructure:` this package is responsible for everything that is unrelated to our application's domain, that is, everything that is not part of our business rules: communication protocols, frameworks, database connections, etc. Next, let's look at the internal packages of the `infrastructure` module.
- `delivery:` this package is responsible for everything related to communication with the `client` that consumes our application, be it a web app, a CLI, etc. It contains sub-packages to keep each responsibility organized.
- `webapp:` this package holds our web server, the routes the application exposes, and everything related to handling HTTP requests to our application. It contains further sub-packages to keep each responsibility organized.
- `middlewares:` this package holds the application's middlewares.
- `handlers:` this package holds the application's HTTP handlers. At this stage, all the data needed from the request is extracted and validated, and later used when calling the use cases.
- `consumers:` this package holds the handlers for the application's consumers. At this stage, all the data needed from the message is extracted and validated, and later used when calling the use cases.
- `requests:` this package holds the structures that define the body of the requests received by the handlers, as well as validators for the body, headers, pathParams and queryParams.
- `responses:` this package holds the structures that define the bodies returned by the handlers, and also standardizes success and error responses.
- `messages:` this package holds the structures and validations of the messages received by the consumers.
- `dependencies:` this package holds all the logic related to the application's dependency injection.
- `repository:` this package holds the implementation of the communication with a given database. This implementation must always follow the contract established in a gateway.
- `config:` this package holds the configuration of the database clients used by the repository layer.
- `service:` this package holds the implementation of the communication with external services. This implementation must always follow the contract established in a gateway.
- `config:` this package holds the configuration of the HTTP clients, SDKs, etc. used by the service layer.
- `publisher:` this package holds the implementation of publishing a message to a given topic. This implementation must always follow the contract established in a gateway.
- `config:` this package holds the configuration of the clients used by the publisher layer.
## Business <a name="business"></a>
`business:` this package is responsible for everything that belongs to our application's domain; it is divided into the following sub-packages.
- `usecase:` this package holds the implementation of our application's use cases, that is, everything related to the business rules needed to fulfill requests. Some requests may need external services to validate information tied to a business rule, for example, checking whether an item belongs to a given user before making a change. To perform this kind of operation, the `use case` must "see" only the `gateway` that defines the contract of that external service; in other words, always depend on the abstraction, never on the implementation.
- `logic:` this is an optional layer; in many applications it does not need to exist. We use it when we need more complex handling inside a given `usecase`, for example, a use case that must run completely different flows depending on the type of resource it receives. In that case, to avoid piling complexity onto the usecase layer, we create it in the logic layer, where we can use the [strategy](https://refactoring.guru/pt-br/design-patterns/strategy) pattern to manage that complexity.
- `gateway:` this package holds the contracts (interfaces) that define the communication with external services, which are implemented in the `repository`, `service` and `publisher` layers. In other words, it is the bridge between the business layer and the infrastructure layer.
- `domain:` this package holds our application's domain, that is, all the structures that define core entities. For example: user, userID, item, invoice, purchase, etc. **IMPORTANT:** the domain layer can be used by the other layers of the project, but it must **NEVER** use other layers of the project. Another important point of this layer is a preference for avoiding [anemic domain models](https://dev.to/wsantosdev/design-modelos-anemicos-e-modelos-ricos-4k8f).
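To illustrate the dependency direction (the use case seeing only the `gateway` abstraction, with the concrete implementation living in `infrastructure`), here is a small TypeScript sketch; all names are illustrative and not part of the proposed structure:

```typescript
// business/domain
interface User { id: string; name: string; }

// business/gateway: the contract the use case depends on
interface UserRepository {
  findById(id: string): User | undefined;
}

// business/usecase: sees only the abstraction, never the implementation
class GetUserGreeting {
  constructor(private readonly users: UserRepository) {}

  execute(id: string): string {
    const user = this.users.findById(id);
    if (!user) throw new Error('user not found');
    return `Hello, ${user.name}`;
  }
}

// infrastructure/repository: a concrete implementation of the gateway
class InMemoryUserRepository implements UserRepository {
  private readonly data = new Map<string, User>([['1', { id: '1', name: 'Ada' }]]);

  findById(id: string): User | undefined {
    return this.data.get(id);
  }
}

// infrastructure/dependencies: wiring the concrete pieces together
const usecase = new GetUserGreeting(new InMemoryUserRepository());
console.log(usecase.execute('1')); // prints "Hello, Ada"
```

Swapping the in-memory repository for a real database client only requires another `UserRepository` implementation; the use case does not change.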
#Best practices <a name="good-practices"></a>
This section aims to list good practices for developing this type of application.
##Unit Tests <a name="unit-tests"></a>
Another very important point is writing unit tests, but I don't believe it is worthwhile to write tests for every layer. Based on past experience, I would not test the `infrastructure` layer packages that implement the `gateways` (`repository`, `service`, and `publisher`), since most of the code in those packages consists of third-party libraries that have already been tested, and we often need to mock behaviors in ways that make the test useless beyond increasing "coverage".
Another important point is writing the unit tests correctly to avoid excess complexity. When we talk about unit tests, we mean isolating a unit of code and testing it: if we are testing a `handler`, the `use case` called by that `handler` during the test **MUST** be a `mock`, not the real `use case`. Likewise, when testing a `use case`, all calls to external services provided by the `gateways` **MUST** go to `mocks` during the test, not to the real implementations.
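The same rule, sketched in TypeScript with illustrative names: the handler under test talks to a hand-rolled mock of the use case, which both stubs the result and records the call.

```typescript
// Illustrative handler and use case contract.
type UseCase = { execute(input: string): string };

class Handler {
  constructor(private readonly useCase: UseCase) {}

  handle(input: string): string {
    return this.useCase.execute(input);
  }
}

// Unit test setup: the use case is a hand-rolled mock, never the real one.
const calls: string[] = [];
const mockUseCase: UseCase = {
  execute(input: string): string {
    calls.push(input);
    return "mocked-result";
  },
};

const handler = new Handler(mockUseCase);
const result = handler.handle("payload"); // → "mocked-result"
```

The assertions in such a test check two things: the handler returned what the use case produced, and the use case was called with the right input.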
##Integration Tests <a name="integration-tests"></a>
Writing integration tests is very important to ensure the reliability of our code base, preventing errors in the communication between layers from going unnoticed. These tests are also important to verify the layers for which we do not write unit tests, as mentioned above.
These tests have fewer cases than unit tests and are generally written to cover the success path. To write them, we start the application and mock only the external dependencies, for example:
- If we use a SQL database, we can spin up an in-memory database or something similar to use during the test.
- If the flow under test makes HTTP calls during execution, we mock those HTTP calls.
This way we can simulate the behavior of a client and ensure that the communication between all our layers is working correctly.
#Development flow <a name="developing"></a>
A development suggestion for this application structure is to avoid starting by creating the whole structure. Start with the `usecase`; that way we keep the focus on what really matters, and the rest of the application grows around that focus. With that in mind, we can follow these steps:
1. Create the business layer and the usecase sub-layer.
2. Start writing the usecase and create the other layers as the usecase needs them. This way we avoid leaving unused code in the application or structuring it incorrectly.
**TIP:** One tip to ease development is to use flow diagrams: we can draw one diagram per usecase and visualize exactly what we need to implement. An excellent tool for drawing diagrams is [Mermaid](https://mermaid.js.org/).
#Point of attention <a name="atention"></a>
This design model completely separates the business rules from third-party technologies, but what if a business rule needs a SQL transaction, for example? We cannot be radical: sometimes we have to "dirty" the application's design to serve the business rules efficiently, but we should avoid that whenever possible.
| brunobrolesi | |
1,895,310 | MICROSOFT APPLIED SKILL. Guided Project: | This is exercise 2b of the Microsoft Applied skill guided project. A. CREATE A STORAGE ACCOUNT AND... | 0 | 2024-06-20T22:47:56 | https://dev.to/sethgiddy/microsoft-applied-skill-guided-project-4ci9 | This is exercise 2b of the Microsoft Applied skill guided project.
A. **CREATE A STORAGE ACCOUNT AND CONFIGURE HIGH AVAILABILITY**.
1.**Create a storage account for the internal private company documents.**
- In the portal, search for and select Storage accounts.
- Select + Create.
- Select the Resource group created in the previous lab.
- Set the Storage account name to private. Add an identifier to the name to ensure the name is unique.
- Select Review, and then Create the storage account.
- Wait for the storage account to deploy, and then select Go to resource.
2.**This storage requires high availability if there’s a regional outage. Read access in the secondary region is not required. Configure the appropriate level of redundancy**.
- In the storage account, in the Data management section, select the Redundancy blade.
- Ensure Geo-redundant storage (GRS) is selected.
- Refresh the page.
- Review the primary and secondary location information.
- Save your changes.
B. **CREATE A STORAGE CONTAINER, UPLOAD A FILE, AND RESTRICT ACCESS TO THE FILE**.
1.**Create a private storage container for the corporate data**.
- In the storage account, in the Data storage section, select the Containers blade.
- Select + Container.
- Ensure the Name of the container is private.
- Ensure the Public access level is Private (no anonymous access).
- As you have time, review the Advanced settings, but take the defaults.
- Select Create.

2. For testing, upload a file to the private container. The type of file doesn’t matter. A small image or text file is a good choice. Test to ensure the file isn’t publicly accessible.
- Select the container.
- Select Upload.
- Browse to files and select a file.
- Upload the file.
- Select the uploaded file.
- On the Overview tab, copy the URL.
- Paste the URL into a new browser tab.
- Verify the file doesn’t display and you receive an error.
3. An external partner requires read and write access to the file for at least the next 24 hours. Configure and test a shared access signature (SAS). Learn more about Shared Access Signatures.
- Select your uploaded blob file and move to the Generate SAS tab.
- In the Permissions drop-down, ensure the partner has only Read permissions.
- Verify the Start and expiry date/time is for the next 24 hours.
- Select Generate SAS token and URL.
- Copy the Blob SAS URL to a new browser tab.
- Verify you can access the file. If you have uploaded an image file it will display in the browser. Other file types will be downloaded.

C. **CONFIGURE STORAGE ACCESS TIERS AND CONTENT REPLICATION.**
1. To save on costs, after 30 days, move blobs from the hot tier to the cool tier. Learn more about how to manage the Azure Blob storage lifecycle.
- Return to the storage account.
- In the Overview section, notice the Default access tier is set to Hot.
- In the Data management section, select the Lifecycle management blade.
- Select Add rule.
- Set the Rule name to move-to-cool.
- Set the Rule scope to Apply rule to all blobs in the storage account.
- Select Next.
- Ensure Last modified is selected.
- Set More than (days ago) to 30.
- In the Then drop-down select Move to cool storage.
- As you have time, review other lifecycle options in the drop-down.
- Add the rule.
2. The public website files need to be backed up to another storage account.[Learn more about object replication.
- In your storage account, create a new container called backup. Use the default values. Refer back to Lab 02a if you need detailed instructions.

- Navigate to your publicwebsite storage account. This storage account was created in the previous exercise.
- In the Data management section, select the Object replication blade.
- Select Create replication rules.
- Set the Destination storage account to the private storage account.
- Set the Source container to public and the Destination container to backup.
- Create the replication rule.
- Optionally, as you have time, upload a file to the public container.
- Return to the private storage account and refresh the backup container. Within a few minutes your public website file will appear in the backup folder.
interface IDocument {
  open(): void;
  save(): void;
  close(): void;
}
| sethgiddy | |
1,894,473 | Understanding the Factory Method Design Pattern | Hello everyone, السلام عليكم و رحمة الله و بركاته The Factory Method is a creational design pattern... | 0 | 2024-06-20T22:42:39 | https://dev.to/bilelsalemdev/understanding-the-factory-method-design-pattern-45gk | javascript, typescript, designpatterns, oop |
Hello everyone, السلام عليكم و رحمة الله و بركاته
The Factory Method is a creational design pattern that provides an interface for creating objects in a superclass but allows subclasses to alter the type of objects that will be created. It helps in dealing with the problem of creating objects without having to specify the exact class of the object that will be created. This is particularly useful in scenarios where the creation process is complex or involves multiple steps.
### What Problem Does the Factory Method Solve?
1. **Class Instantiation Issues**: Directly instantiating objects using `new` can lead to code that is tightly coupled with specific classes. This makes the code difficult to maintain and extend.
2. **Complex Object Creation**: When object creation involves several steps or configurations, using a constructor can be cumbersome and hard to read.
3. **Subclasses Control**: It provides a way for subclasses to decide which class to instantiate, allowing for more flexible and reusable code.
### Key Concepts of the Factory Method
- **Product**: The interface or abstract class defining the objects that the factory method will create.
- **Concrete Product**: The implementation of the product interface.
- **Creator**: The abstract class or interface that declares the factory method.
- **Concrete Creator**: The class that implements the factory method to create an object of the Concrete Product class.
### Real-World Example: Document Creation
Consider a scenario where we have an application that can create different types of documents such as Word Documents, PDF Documents, and Excel Sheets. The application should be able to generate these documents without knowing their specific implementation details.
#### Step-by-Step Implementation in TypeScript
1. **Define the Product Interface**
```typescript
interface IDocument {
open(): void;
save(): void;
close(): void;
}
```
2. **Concrete Products**
```typescript
class WordDocument implements IDocument {
  open(): void {
    console.log("Opening Word Document");
  }

  save(): void {
    console.log("Saving Word Document");
  }

  close(): void {
    console.log("Closing Word Document");
  }
}

class PDFDocument implements IDocument {
  open(): void {
    console.log("Opening PDF Document");
  }

  save(): void {
    console.log("Saving PDF Document");
  }

  close(): void {
    console.log("Closing PDF Document");
  }
}

class ExcelDocument implements IDocument {
  open(): void {
    console.log("Opening Excel Document");
  }

  save(): void {
    console.log("Saving Excel Document");
  }

  close(): void {
    console.log("Closing Excel Document");
  }
}
```
3. **Creator Abstract Class**
```typescript
abstract class DocumentCreator {
  public abstract createDocument(): IDocument;

  public newDocument(): void {
    const doc = this.createDocument();
    doc.open();
    doc.save();
    doc.close();
  }
}
```
4. **Concrete Creators**
```typescript
class WordDocumentCreator extends DocumentCreator {
  public createDocument(): IDocument {
    return new WordDocument();
  }
}

class PDFDocumentCreator extends DocumentCreator {
  public createDocument(): IDocument {
    return new PDFDocument();
  }
}

class ExcelDocumentCreator extends DocumentCreator {
  public createDocument(): IDocument {
    return new ExcelDocument();
  }
}
```
5. **Client Code**
```typescript
class Application {
  public static main(): void {
    let creator: DocumentCreator;

    creator = new WordDocumentCreator();
    creator.newDocument();

    creator = new PDFDocumentCreator();
    creator.newDocument();

    creator = new ExcelDocumentCreator();
    creator.newDocument();
  }
}

Application.main();
```
### Explanation
1. **IDocument Interface**: This interface defines the methods `open()`, `save()`, and `close()` that all document types must implement.
2. **Concrete Products**: `WordDocument`, `PDFDocument`, and `ExcelDocument` are classes that implement the `IDocument` interface. Each class provides its own implementation for the methods defined in the interface.
3. **DocumentCreator Abstract Class**: This abstract class declares the factory method `createDocument()`. It also provides a `newDocument()` method that calls the factory method to create a document and then performs a series of operations on it.
4. **Concrete Creators**: `WordDocumentCreator`, `PDFDocumentCreator`, and `ExcelDocumentCreator` are subclasses of `DocumentCreator`. Each subclass implements the `createDocument()` method to instantiate a specific type of document.
5. **Client Code**: The `Application` class demonstrates how to use the factory method pattern. It creates instances of different document creators and calls the `newDocument()` method to generate and operate on documents.
### Benefits of Using the Factory Method Pattern in TypeScript
- **Decoupling**: The client code (`Application`) does not need to know the exact class of the document it works with. It only interacts with the `IDocument` interface and the `DocumentCreator` abstract class.
- **Single Responsibility**: Each creator class is responsible for instantiating a specific type of document. The document classes handle their own specific behaviors.
- **Flexibility and Extensibility**: Adding a new type of document is straightforward. You can create a new class that implements the `IDocument` interface and a new creator class that extends `DocumentCreator`.
- **Maintainability**: Changes in the creation process of specific documents do not affect the client code or other document types.
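As a sketch of that extensibility, adding a hypothetical `MarkdownDocument` type requires only a new product and a new creator; the existing client code is untouched. (The interface and a trimmed abstract class, without `newDocument()`, are restated here only to keep the snippet self-contained.)

```typescript
interface IDocument {
  open(): void;
  save(): void;
  close(): void;
}

abstract class DocumentCreator {
  public abstract createDocument(): IDocument;
}

// The only additions: a new concrete product...
class MarkdownDocument implements IDocument {
  open(): void { console.log("Opening Markdown Document"); }
  save(): void { console.log("Saving Markdown Document"); }
  close(): void { console.log("Closing Markdown Document"); }
}

// ...and its concrete creator. No existing class changes.
class MarkdownDocumentCreator extends DocumentCreator {
  public createDocument(): IDocument {
    return new MarkdownDocument();
  }
}
```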
By using the Factory Method pattern, you can create complex applications with different features without complicating the code, adhering to principles of clean code and design patterns.
| bilelsalemdev |
1,895,308 | Keyword Prominence and Proximity for SEO Success | Introduction In the ever-evolving world of SEO, keyword strategy remains a cornerstone for... | 0 | 2024-06-20T22:38:20 | https://dev.to/gohil1401/keyword-prominence-and-proximity-for-seo-success-2k7h | webdev, beginners, tutorial, seo | ## Introduction
In the ever-evolving world of SEO, keyword strategy remains a cornerstone for achieving high search engine rankings. Two critical aspects of this strategy are keyword prominence and keyword proximity. Understanding and effectively utilizing these elements can significantly enhance your website's visibility and relevance, driving more organic traffic and improving user engagement.
## Keyword Prominence
Keyword prominence refers to the strategic placement of keywords within your content. When keywords are positioned prominently, they are given more weight by search engines, helping them understand the main topics of your page. This practice is crucial for signaling relevance and improving search rankings.
**Key Areas for Keyword Prominence**
1. **Title Tag:** Keywords in the title tag are essential as they help search engines comprehend the page's content. For example, a title tag like "Discover the Best Italian Restaurants in New York" immediately signals the page's focus.
2. **Headings (H1, H2, H3, etc.):** Including keywords in headings emphasizes their importance. For instance, using "Best Italian Restaurants in New York" in an H1 tag helps highlight the primary topic.
3. **First 100 Words:** Placing keywords early in the content signals relevance. Starting your article with "If you’re searching for the best Italian restaurants in New York, look no further" establishes the main subject right away.
4. **URL:** A keyword-rich URL enhances the perceived relevance of the page. For example, www.example.com/best-italian-restaurants-nyc directly indicates the content's focus.
5. **Meta Description:** While not a direct ranking factor, a well-crafted meta description with keywords can improve click-through rates. An example could be, "Explore the best Italian restaurants in New York with our comprehensive guide."
6. **Image Alt Text:** Using keywords in alt text helps with image search optimization. For instance, "Alt text: Best Italian Restaurants in New York" ensures images are relevantly indexed.
**Best Practices for Keyword Prominence**
- **Natural Placement:** Ensure keywords are placed naturally and logically within the content to maintain readability and user experience.
- **Avoid Keyword Stuffing:** Overuse of keywords can lead to penalties. Aim for a balanced approach.
- **Contextual Use:** Use keywords in a way that fits the content's context. This enhances both prominence and relevance.
- **Synonyms and Variations:** Use synonyms and variations to avoid repetition and provide a richer content experience.
## Keyword Proximity
Keyword proximity refers to the closeness of keywords to each other within a text. This is particularly important for phrases and long-tail keywords. The closer the keywords are to each other, the more likely the content is relevant for those search terms.
**Impact on Long-Tail Keywords**
For example, in a phrase like “best Italian restaurant,” having these words together in the same order will be more effective than having them spread out across the text. Ensuring keyword proximity helps search engines better understand the context and relevance of your content.
**Examples of Effective Keyword Proximity**
Using "Some of the best Italian restaurants in New York include..." maintains the proximity of the keywords, enhancing the phrase's effectiveness.
**Best Practices for Keyword Proximity**
- **Natural and Logical Placement:** Keywords should be placed where they naturally fit within the content to maintain readability.
- **Avoid Overstuffing:** Excessive use of keywords in close proximity can be seen as spammy and may lead to penalties.
- **Using Keywords in Context:** Ensure keywords are used in a way that makes sense contextually. This helps with both prominence and proximity.
## Example: Optimizing for "Best Italian Restaurants in New York"
**Title Tag Optimization:** "Discover the Best Italian Restaurants in New York."
**Heading Optimization (H1, H2, H3):**
- **H1:** Best Italian Restaurants in New York
- **H2:** Top Picks for Italian Cuisine in NYC
- **H3:** Hidden Gems in New York’s Italian Food Scene
- **Early Content Placement:** "If you’re searching for the best Italian restaurants in New York, look no further. Our guide highlights top spots for authentic Italian cuisine."
**Maintaining Keyword Proximity:** "Some of the best Italian restaurants in New York include..."
**Using Variations and Related Terms:** "Top-rated Italian eateries in NYC," "New York’s finest Italian dining spots."
## Conclusion
Understanding and implementing keyword prominence and proximity are essential for effective SEO. By strategically placing keywords in prominent positions and ensuring their proximity, you can enhance the relevance and visibility of your content, driving more organic traffic and improving user engagement. Remember to maintain natural placement, avoid overstuffing, and use contextual keywords for the best results. | gohil1401 |
1,894,110 | Mock Class Constructor in Jest Test with Mocking Partials | I've been implementing tests on server-side code using Jest. One thing I initially struggled with was... | 0 | 2024-06-20T22:27:43 | https://dev.to/c0xxxtv/mock-class-constructor-in-jest-test-with-mocking-partials-1dd5 | jest, testing, mock, react | I've been implementing tests on server-side code using Jest. One thing I initially struggled with was mocking class constructors. Here’s a guide on how to do it :)
## Example Code
Here is the code for the function you want to test:
```javascript
export const abc = (arg) => {
  return new ClassA(arg);
};
```
In the test, you want to verify that the function `abc` returns a new instance of `ClassA` and that the `ClassA` constructor is called with `arg`. The first step is to mock the `ClassA` constructor.
## How to mock the Class Constructor?
You can mock the Class Constructor and even its implementation with **mock function from Jest** .
Here is how you do it.
```javascript
jest.mock('path for ClassA', () => ({
  ClassA: jest.fn().mockImplementation(() => ({ ClassA: 'dummyClassAResult' })),
}));
```
In the code above, I am using a partial mock, meaning I’m only mocking part of the module where ClassA is defined.
The module is mocked so that ClassA is a Jest function. Additionally, I completely replace the implementation of the mock function to return an object `{ ClassA: 'dummyClassAResult' }`.
Note that **you cannot use mockReturnValue** to mock the return value of a class constructor. Instead, you should use `mockImplementation` to achieve this.
Here is the incorrect approach:
```javascript
jest.mock('ClassA module path', () => ({
  ClassA: jest.fn().mockReturnValue({ ClassA: 'dummyClassAResult' })
}));
```
Using `mockReturnValue` does not work for class constructors because it is intended for mocking the return values of functions. When mocking class constructors, you need to replace the implementation of the constructor itself to ensure the class instance is created correctly.
[Official Documentation for Jest mock](https://jestjs.io/docs/mock-functions)
## How to Keep Some Parts of a Mocked Module
If you mock a module as shown above, only ClassA is mocked. This means that other functions and objects exported from the module will not be mocked and will show as undefined when imported. So, what if you want to keep some of them intact? You can use the `jest.requireActual` function for such cases.
```javascript
import { originalExportedObject, ClassA } from 'ClassA module path';
jest.mock('ClassA module path', () => {
  const actualModule = jest.requireActual('ClassA module path');

  return {
    ...actualModule,
    ClassA: jest.fn().mockImplementation(() => ({ ClassA: 'dummyClassAResult' })),
  };
});
```
In the code above, `jest.requireActual` is used to extract the original exports from the module. This allows you to keep originalExportedObject and other parts of the module unchanged, while only ClassA is mocked.
## Time to write a test
```javascript
//import ClassA
import {ClassA} from 'ClassA module path'
//mock the module
jest.mock('ClassA module path', () => ({
  ClassA: jest.fn().mockImplementation(() => ({ ClassA: 'dummyClassAResult' })),
}));

//test
it('returns a new ClassA instance', () => {
  const arg = 'dummyArg';
  const result = abc(arg);

  expect(ClassA).toHaveBeenCalledWith(arg); // Since ClassA is now a Jest function, you can use assertions on it!
  expect(result).toStrictEqual({ ClassA: 'dummyClassAResult' });
});
```
I hope this helps my fellow developers to write test on their code!
| c0xxxtv |
1,895,291 | How Bitcoin work ? | Imagine a ledger like #Splitwise, where Alice, Bob, and Charlie pay each other, and at the end of the... | 0 | 2024-06-20T22:18:47 | https://dev.to/beingwizard/how-bitcoin-work--1ech | bitcoin, cryptocurrency, blockchain, magic | Imagine a ledger like #Splitwise, where Alice, Bob, and Charlie pay each other, and at the end of the month, everyone settles up in cash. In the digital world, this concept evolves into the idea of cryptocurrency: Ledger + Trust + Cryptography = Cryptocurrency. Instead of a bank verifying every transaction, we rely on mathematical calculations rooted in cryptography.
###The Digital Ledger
A ledger records all transactions. In our analogy, anyone can add a line to the ledger, and everyone settles up at the end of the month. The challenge is verifying each transaction. For instance, if Alice pays Bob $100, how can we be sure it actually happened?
###Enter Digital Signatures
To tackle this, digital signatures come into play. Alice adds a signature to the transaction, proving she approved it and ensuring no one can forge the signature. The formula is simple:
```
Sign(Message, Secret Key) = Signature
Verify(Message, 256-bit Signature, Public Key) = True/False
```
```
With 2^256 possibilities, the security is immense: imagine multiplying 4 billion by itself 8 times.
```
This cryptographic assurance ensures that only valid signatures are recognized.
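A runnable sketch of this sign/verify flow using Node's built-in `crypto` module (Ed25519 is used here for brevity; Bitcoin itself uses ECDSA over the secp256k1 curve):

```typescript
import { generateKeyPairSync, sign, verify } from "crypto";

// Alice's key pair: the secret key signs, the public key lets anyone verify.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const message = Buffer.from("Alice pays Bob $100");

// Sign(Message, Secret Key) = Signature
const signature = sign(null, message, privateKey);

// Verify(Message, Signature, Public Key) = True/False
const valid = verify(null, message, publicKey, signature);

// Tampering with the message invalidates the signature.
const tampered = verify(null, Buffer.from("Alice pays Bob $9999"), publicKey, signature);
```

Anyone holding the public key can check the signature, but only the holder of the secret key could have produced it.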
###Addressing the "Charlie Problem"
What if Charlie takes on $1000 in debt and refuses to show up? The solution lies in prepayment or ensuring transactions are only accepted based on what one can afford to lose. This way, bypassing the system is nearly impossible.
###The Ledger: A History of Transactions
In essence, the ledger is a history of transactions, forming the currency. Traditionally, everyone trusts a centralized system. But who owns this system? Who adds the lines? The solution? Everyone keeps their own copy of the ledger, broadcasting transactions to the network.
###Synchronizing Ledgers
The challenge is ensuring all ledgers stay synchronized and receive the same information. Here’s where SHA256 comes in. SHA256("Lakshit") yields a specific hash, like 1010101010101010101. This hash is deterministic, providing the same output for the same input, though it looks random.
###Proof of Work: The Computation Race
Proof of Work simplifies the process. Each ledger has a unique code, SHA256 encrypted. Suppose it needs the first 30 digits to be zeros. Miners then guess and check billions of numbers to find the correct pattern. This computational effort is what proves the work.
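A toy version of this guess-and-check loop, using Node's built-in `crypto` module (the 3-hex-zero difficulty is tiny compared to real Bitcoin targets, which keeps the sketch fast):

```typescript
import { createHash } from "crypto";

// Hash the block data: deterministic, but the output looks random.
function sha256Hex(data: string): string {
  return createHash("sha256").update(data).digest("hex");
}

// Brute-force a nonce until the hash starts with `difficulty` zero hex digits.
function mine(blockData: string, difficulty: number): { nonce: number; hash: string } {
  const target = "0".repeat(difficulty);
  let nonce = 0;
  while (true) {
    const hash = sha256Hex(blockData + nonce);
    if (hash.startsWith(target)) return { nonce, hash };
    nonce++;
  }
}
```

With difficulty 3, the loop usually needs a few thousand guesses; each extra required zero multiplies the expected work by 16.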
###Blocks and Blockchain
Successful cryptocurrency relies on decentralized ledgers. Each ledger has a block containing a proof of work and a previous hash, creating a chain. This structure is the blockchain. Miners, who solve complex puzzles, create new blocks, broadcast them, and get rewarded. This reward mechanism, called the block reward, isn’t from anyone but is intrinsic to the protocol.
###Mining: The Lottery of Computation
Mining is the process of creating these blocks. Miners listen to transactions, create blocks, and broadcast them. They are rewarded for their computational work, akin to a lottery. The key is that the more computational work put into finding a block, the higher the chance of being rewarded.
###The Longest Chain Consensus
Our protocol defines which chain is valid: the longest chain, i.e., the one with the most computational work. In case of a tie, we wait for a new block. Alice trying to defraud Bob by keeping a fraudulent ledger to herself is futile: she would need more computation power than the rest of the network combined, making it nearly impossible.
###The Bitcoin Reward System
The block explorer shows that miners used to get 50 bitcoins as a reward, but this amount halves over time, capped at 21 million bitcoins. This gradual reduction ensures scarcity and value preservation.
This article synthesizes the workings of cryptocurrency, inspired by **[3blue1brown](https://www.youtube.com/watch?v=bBC-nXj3Ng4)**. From digital ledgers and signatures to the intricate dance of proof of work and mining, cryptocurrency is a marvel of modern cryptography, ensuring trust and security without central authority. Welcome to the future of finance! | beingwizard |
1,895,065 | What is Ledger and why does it need Idempotence? | PTBR Version What is Ledger Series What is a Ledger and why you need to learn about... | 0 | 2024-06-20T22:14:43 | https://dev.to/woovi/what-is-ledger-and-why-does-it-need-idempotence-18n9 | javascript, webdev |
[PTBR Version](https://daniloab.substack.com/p/o-que-e-ledger-e-por-que-precisa)
## What is Ledger Series
1. [What is a Ledger and why you need to learn about it?](https://dev.to/woovi/what-is-ledger-and-why-does-it-need-idempotence-18n9)
2. [What is Ledger and why does it need Idempotence?](https://dev.to/woovi/what-is-ledger-and-why-does-it-need-idempotence-18n9)
3. [What is a Ledger and Why Floating Points Are Not Recommended?](https://dev.to/woovi/what-is-a-ledger-and-why-floating-points-are-not-recommended-1f4l)
In our previous [blog post](https://dev.to/woovi/what-is-a-ledger-and-why-you-need-to-learn-about-it-4d0g), we discussed what a ledger is and why it is essential to learn about it. We explored its origins, how it works, and its importance in financial institutions. Now, we will delve into a critical concept in building robust and reliable ledgers: idempotency.
## What is Idempotency?
Idempotency is the property of certain operations that can be applied multiple times without changing the result beyond the initial application. In other words, if an operation is idempotent, performing it once or multiple times has the same effect. This concept is crucial in distributed systems, APIs, and financial transactions to prevent duplicate processing.
## Why is Idempotency Important in a Ledger?
1. **Preventing Duplicate Transactions**: In a ledger, idempotency ensures that if a transaction is accidentally submitted more than once, it does not get recorded multiple times, which could lead to incorrect balances.
2. **Consistency**: Idempotency helps maintain the consistency and integrity of financial records, which is vital for audits and regulatory compliance.
3. **Error Handling**: Systems can safely retry operations without the risk of applying the same transaction multiple times, making the system more robust and fault-tolerant.
## Implementing Idempotency in a Ledger
To illustrate how to implement idempotency in a ledger, we will update our previous example to include a transaction ID. This transaction ID will be used to ensure that each transaction is only recorded once.
Define the structure of the ledger in MongoDB:
```ts
{
  _id: ObjectId("60c72b2f9b1d8e4d2f507d3a"),
  date: ISODate("2023-06-13T12:00:00Z"),
  description: "Deposit",
  amount: 1000.00,
  balance: 1000.00,
  transactionId: "abc123"
}
```
Function to add a new entry to the ledger and calculate the balance with idempotency:
```ts
const { MongoClient } = require('mongodb');
async function addTransaction(description, amount, transactionId) {
  const url = 'mongodb://localhost:27017';
  const client = new MongoClient(url);

  try {
    await client.connect();
    const database = client.db('finance');
    const ledger = database.collection('ledger');

    // Check if the transaction already exists
    const existingTransaction = await ledger.findOne({ transactionId: transactionId });
    if (existingTransaction) {
      console.log('Transaction already exists:', existingTransaction);
      return;
    }

    // Get the last entry in the ledger
    const lastEntry = await ledger.find().sort({ date: -1 }).limit(1).toArray();
    const lastBalance = lastEntry.length > 0 ? lastEntry[0].balance : 0;

    // Calculate the new balance
    const newBalance = lastBalance + amount;

    // Create a new entry in the ledger
    const newEntry = {
      date: new Date(),
      description: description,
      amount: amount,
      balance: newBalance,
      transactionId: transactionId
    };

    // Insert the new entry into the ledger
    await ledger.insertOne(newEntry);
    console.log('Transaction successfully added:', newEntry);
  } finally {
    await client.close();
  }
}
// Example usage
addTransaction('Deposit', 500.00, 'unique-transaction-id-001');
```
## How to keep the same transactionId?
**Important**: Your system needs to generate the transactionId deterministically, so that each logical transaction always maps to the same unique ID. How can you do this?
Let's assume you have an employee payment system and you cannot pay an employee twice. To handle this, you can have a helper model that initiates a payment: a PaymentIntent. This PaymentIntent has the following model:
```ts
{
  _id: ObjectId("60c72b2f9b1d8e4d2f508d82"),
  taxID: '12345678990', // employee's CPF (Brazilian tax ID)
  description: "Salary",
  amount: 3000.00,
  status: 'PENDING',
  transactionId: '456',
  companyId: '123',
}
```
When you call the ledger entry creation function, you can create the transactionId by combining the company id with the PaymentIntent transactionId. That way, every time you process the payment for that PaymentIntent, the same transactionId is created, ensuring the ledger can identify that entry.
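A minimal sketch of that composition (the function name and the `:` separator are assumptions): because the inputs are stable, the output is stable, and the ledger's duplicate check can do its job.

```typescript
// Deterministic transaction id for a PaymentIntent: the same company id and
// PaymentIntent transactionId always produce the same ledger transactionId,
// so retries or double submissions are caught by the idempotency check.
function buildTransactionId(companyId: string, paymentIntentTransactionId: string): string {
  return `${companyId}:${paymentIntentTransactionId}`;
}
```

For the PaymentIntent above, `buildTransactionId('123', '456')` always yields `'123:456'`, no matter how many times the payment is processed.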
## Conclusion
Implementing idempotency in your ledger system is crucial for maintaining accurate and reliable financial records. By ensuring that each transaction is only recorded once, you can prevent duplicate entries and maintain the integrity of your data.
As we have seen, idempotency is not just a technical detail but a fundamental principle that helps build robust and fault-tolerant systems. In our next blog post, we will explore more advanced topics in ledger management and how to handle other challenges such as concurrency and eventual consistency.
Stay tuned for more insights into building reliable financial systems!
---
Visit us at [Woovi](https://woovi.com/)!
---
Follow me on [Twitter](https://x.com/daniloab_)
If you like and want to support my work, become my [Patreon](https://www.patreon.com/daniloab)
Want to boost your career? Start now with my mentorship through the link
https://mentor.daniloassis.dev
See more at https://linktr.ee/daniloab
Photo of <a href="https://unsplash.com/pt-br/@thecreativv?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">The Creativv</a> in <a href="https://unsplash.com/pt-br/fotografias/edificio-de-concreto-marrom-e-branco-qLEKtlHvGfo?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>
| daniloab |
1,895,076 | CSS page styling structures | CSS styling: tools that shape the page content Width: width } auto/... | 0 | 2024-06-20T22:06:22 | https://dev.to/marimnz/estruturas-de-estilizacao-de-pagina-css-3844 | css, beginners, frontend | ## CSS Styling:
Tools that shape the page content
- `width`: width } auto / initial
- `height`: height } min / max
- `inherit`: keeps the value already defined on the parent element
- `margin`: top / left / right / bottom
- `padding`: space between the element's inner content and its outer edge
- `box-sizing`: defines how width and height are calculated (e.g. `border-box` includes padding and border in the declared size)
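A hypothetical card rule combining the box-model properties above (class name and values are illustrative):

```css
/* Hypothetical card using the box-model properties above */
.card {
  width: 300px;
  max-height: 400px;
  margin: 16px auto;       /* 16px vertically, centered horizontally */
  padding: 12px;
  box-sizing: border-box;  /* padding and border count inside the 300px */
}
```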
## Colors in CSS
- **RGB**: values between 0 and 255 define the amounts of red, green, and blue, separated by commas. Example:
```
#rgb{
color: rgb(250, 30, 70);
}
```
The value 250 represents red, 30 represents green, and 70 blue, which in this case would result in something like:

- **RGBA**: very similar to RGB, but with an added transparency (alpha) factor that ranges from 0 to 1;
- **HEX**: hexadecimal digits from 0 to 9 and A to F, where F is the highest value, following a pattern similar to RGB. Example:
00FF00 -> Green
FF0000 -> Red
0000FF -> Blue
```
#hex{
color: #03BB76;
}
```
This would result in something like:

- **HSL** (hue, saturation, lightness): defines the color by its hue (0 red, 120 green, 240 blue), saturation (0% gray tone, 100% full color), and lightness (0% black, 100% white). There is also HSLA, which adds an alpha factor (0 to 1) to control the level of transparency. Example:
```
#hsl{
color: hsla(120, 100%, 50%, 1.0);
}
```
This declaration would result in a fully green color, but you can find other tones using the HSL color wheel.

---
## Backgrounds
- `background-color`: solid background color
- `background-image`: references an image for the background
- `linear-gradient`: linear gradient
- `radial-gradient`: circular (radial) gradient
- `repeating`: repeats the effect (as in `repeating-linear-gradient`)
**background-size**: sets the size of the element's background, with the following options:
- `auto`: automatic sizing
- `cover`: covers the element's entire area
- `contain`: resizes the content so the whole image appears, with no cropping
- `value`: sets a specific size for the image inside the element
Repetition — **background-repeat**: defines the axis along which the image repeats:
- `repeat`: as many repetitions as possible
- `repeat-x`: repeats only on the x axis (horizontal)
- `repeat-y`: repeats only on the y axis (vertical)
- `space`: repeats on both axes without cropping, leaving gaps between copies
- `round`: repeats in every direction without cropping, resizing the image instead
- `no-repeat`: no repetition
**background-position**: positioning of the background images
`center`, `left`, `right`, `x%, y%`
**background-attachment**: how the image behaves relative to the browser window
- `fixed`: does not move
- `scroll`: fixed to the element, not to its content
- `local`: "scrolls" along with the content
**background-origin**: defines the positioning area of the image
- `padding-box`: origin corner at the padding edge
- `border-box`: the image starts at the outer edge of the border
- `content-box`: inside the padding, aligned with the element's content
**background-clip**: defines whether or not the element's background covers the borders
- `padding-box`: clipped to the padding
- `border-box`: clipped to the border
- `content-box`: fills the content area
- `text`: background applied to the text itself (the text color must be transparent)
**background-blend-mode**: blending effects on the element's background
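A hypothetical rule combining several of the background properties above (class and file names are illustrative):

```css
/* Hypothetical hero section combining the background properties above */
.hero {
  background-color: #03bb76;          /* fallback solid color */
  background-image: url("hero.png");  /* illustrative file name */
  background-size: cover;
  background-repeat: no-repeat;
  background-position: center;
  background-attachment: fixed;
}
```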
---
## Borders
- `border-width`: thickness of the outline
- `border-style`: style of the outline
- `border-color`: color of the outline
- `border-radius`: rounds the corners
**border-image**
- `source`: sets the path to the image
- `width`: width of the border image
- `repeat`: controls whether or not the image repeats
- `outset`: how far the border image extends beyond the element
- `slice`: divides the image into regions
---
## Content (image or video)
**object-fit**: how an element's content behaves inside the established box
- `fill`: fills the whole space, distorting if necessary
- `contain`: not distorted, but fits within the established dimensions
- `cover`: fills the whole space without distortion
- `none`: ignores the parent box's dimensions and keeps the content's original size
- `scale-down`: the smallest rendering of the image without distortion
**object-position**: positions the image within the box
- x axis and y axis
- `left`, `right`, `center`, `top`, `bottom`
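A hypothetical thumbnail rule using the properties above (dimensions are illustrative):

```css
/* Hypothetical thumbnail: fills a 300×200 box without distortion */
.thumb {
  width: 300px;
  height: 200px;
  object-fit: cover;
  object-position: center top;
}
```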
| marimnz |
1,895,289 | Scaffolding API projects easily with oak-routing-ctrl | Greetings to Deno / TypeScript / JavaScript community! Today I'd love to introduce a DevTool:... | 27,800 | 2024-06-20T22:06:19 | https://dev.to/thesephi/scaffolding-api-projects-easily-with-oak-routing-ctrl-1pj | deno, typescript, api, tooling | Greetings to Deno / TypeScript / JavaScript community!
Today I'd love to introduce a DevTool: [oak-routing-ctrl](https://jsr.io/@dklab/oak-routing-ctrl)
Now if you read the link above and are still unsure what this tool does, then I hope this article may help 😀
Let's say you:
1. wanna build a microservice that provides a set of API to your consumers
2. chose the [oak framework](https://oakserver.org) as your HTTP middleware library
3. do not want to write repeated routing code for all your API endpoints
Then `oak-routing-ctrl` might be worth a try.
Here is what's in the box:
## 1. TypeScript Decorators
First we have a set of TypeScript Decorators (conforming to [TC39 proposal](https://github.com/tc39/proposal-decorators)) that makes it straightforward to declare API endpoints & assign it to a handler function in a few lines of code:
```ts
// MyController.ts
import { Controller, Get } from "@dklab/oak-routing-ctrl@0.7.4";
@Controller()
export class MyController {
@Get()
doSomething() {
return "hello, dev.to";
}
}
```
## 2. An initiation helper function
Now to actually spin up an HTTP server, we'll need to initiate the server instance e.g. with the middleware [oak](https://jsr.io/@oak/oak), and glue it up with the API route handler function we wrote above. Of course it can be in the same file, or a different file, up to your flavour. Here we use a different file:
```ts
// main.ts
import { Application } from "@oak/oak@16.1.0";
import { useOakServer } from "@dklab/oak-routing-ctrl@0.7.4";
import { MyController } from "./MyController.ts";
const app = new Application();
useOakServer(app, [MyController]);
await app.listen({ port: 1993 });
```
The example code so far assumes we're on `Deno` runtime. But heads up: both `oak` and `oak-routing-ctrl` [fully support other runtimes](https://github.com/Thesephi/oak-routing-ctrl?tab=readme-ov-file#other-runtimes) (Bun, Cloudflare Workers, and Node.js).
With Deno assumed, let's start our server up:
```bash
deno run --allow-env --allow-net main.ts
```
And send a test cURL:
```bash
curl localhost:1993
# prints: hello, dev.to
```
That was the most simple form of it. More complex examples are available in the [README](https://github.com/Thesephi/oak-routing-ctrl?tab=readme-ov-file#example-retrieving-path-parameters).
## Looking forward...
Up to this point, you may wonder what's so special about this library. I, as the author, am of course biased, but if you must know: nope, it's indeed "_just another DevTool_". I don't take pride in superb performance or any underlying rocket science, but I do appreciate the level of care and seriousness I put in while developing it. Before it reached the "relatively stable" version 0.7.4 (at the time I write this blog post), it went through a valley of alpha releases:

I don't intend to put much more logic in it from this point on. After using it on production for a while, I'll bump it to 1.0.0. It's released under MIT & anyone can [contribute](https://github.com/Thesephi/oak-routing-ctrl/blob/main/CONTRIBUTING.md) or even fork their own development directions.
What I commit to is the **continuous maintenance** of this library. It is used on production for my own and my employer's web-scale products, thereby giving me "unfair incentives" to keep it sane & performant.
If you use it for your work, I would really love to hear your thoughts and suggestions, i.e. what you like, what you don't like. A GitHub <a class="github-button" href="https://github.com/Thesephi/oak-routing-ctrl" data-color-scheme="no-preference: light; light: light; dark: dark;" data-icon="octicon-star" data-size="large" aria-label="Star Thesephi/oak-routing-ctrl on GitHub">Star</a>, or a constructive feedback issue, to me is equally valuable.
If I still have you on this line, it means my English skill is not any more rusty than my coding skill ;) and it means the world to me (😀). If you feel I missed something, or said something contrary to your knowledge, please feel free to place your questions in the comment section down below. Discussing tech topics is the best way for everyone to learn!
Thank you for your time on this blog post. Have a blast in what you're doing, wherever you are 🍻
| thesephi |
1,854,425 | Dev: Automation | An Automation Developer is a professional responsible for designing, developing, and implementing... | 27,373 | 2024-06-20T22:00:00 | https://dev.to/r4nd3l/dev-automation-2233 | automation, developer | An **Automation Developer** is a professional responsible for designing, developing, and implementing automated solutions to streamline processes, increase efficiency, and reduce manual intervention across various domains such as software development, testing, infrastructure management, and business operations. Here's a detailed description of the role:
1. **Understanding of Automation Concepts:**
- Automation Developers possess a strong understanding of automation principles, methodologies, and best practices.
- They are familiar with automation frameworks, tools, and technologies used for automating repetitive tasks, workflows, and processes.
2. **Programming and Scripting Skills:**
- Automation Developers are proficient in programming languages such as Python, Java, C#, JavaScript, and scripting languages like Bash, PowerShell, and Shell Scripting.
- They use programming and scripting languages to write automation scripts, code automation workflows, and develop custom automation solutions tailored to specific requirements.
3. **Automation Frameworks and Tools:**
- Automation Developers have expertise in using automation frameworks and tools such as Selenium, Appium, Robot Framework, Puppet, Chef, Ansible, Jenkins, Travis CI, and GitLab CI/CD.
- They leverage automation frameworks and tools to build, deploy, and manage automated tests, deployments, configurations, and infrastructure as code (IaC) processes.
4. **Continuous Integration and Continuous Deployment (CI/CD):**
- Automation Developers implement CI/CD pipelines and workflows to automate the build, test, and deployment processes of software applications and infrastructure changes.
- They integrate automated testing, code analysis, code quality checks, and deployment automation into CI/CD pipelines to achieve faster and more reliable software delivery.
5. **Test Automation:**
- Automation Developers specialize in test automation by creating automated test scripts, test suites, and test frameworks for functional testing, regression testing, performance testing, and load testing.
- They use test automation tools and libraries to automate the execution of test cases, validate software functionality, and detect defects early in the development lifecycle.
6. **Infrastructure Automation:**
- Automation Developers automate infrastructure provisioning, configuration, deployment, and management using infrastructure as code (IaC) practices.
- They define infrastructure components, environments, and configurations as code using tools like Terraform, CloudFormation, and Azure Resource Manager (ARM) templates for automated infrastructure deployment and scaling.
7. **Process Automation:**
- Automation Developers automate business processes, workflows, and tasks using robotic process automation (RPA) tools, workflow automation platforms, and business process management (BPM) software.
- They identify repetitive manual tasks, analyze process dependencies, and design automated solutions to optimize resource utilization, reduce errors, and improve productivity.
8. **Monitoring and Orchestration:**
- Automation Developers implement automated monitoring, alerting, and orchestration solutions to manage and control automated processes, systems, and workflows.
- They integrate monitoring tools, event-driven automation, and orchestration engines to monitor system health, trigger automated responses, and ensure system reliability and performance.
9. **Security and Compliance Automation:**
- Automation Developers incorporate security and compliance checks into automated workflows and processes to enforce security policies, standards, and regulations.
- They automate security assessments, vulnerability scanning, access controls, and compliance audits using security automation tools and scripting techniques to mitigate risks and ensure regulatory compliance.
10. **Collaboration and Communication:**
- Automation Developers collaborate with cross-functional teams, including developers, testers, operations engineers, and business stakeholders, to identify automation opportunities, gather requirements, and implement automation solutions.
- They communicate effectively, document automation workflows, provide training and support, and promote knowledge sharing to ensure successful adoption and utilization of automation capabilities within the organization.
In summary, an Automation Developer plays a crucial role in driving digital transformation, improving operational efficiency, and accelerating innovation by leveraging automation technologies to automate processes, tasks, and workflows across software development, testing, infrastructure management, and business operations domains. By combining technical expertise, problem-solving skills, and domain knowledge, they empower organizations to achieve agility, scalability, and competitiveness in today's dynamic and fast-paced digital landscape. | r4nd3l |
1,895,286 | Create call center transcript summary using AWS Bedrock Converse API and Lambda - Anthropic Haiku | Generative AI - Has Generative AI captured your imagination to the extent it has for me? Generative... | 0 | 2024-06-20T21:52:42 | https://dev.to/bhatiagirish/create-call-center-transcript-summary-using-aws-bedrock-converse-api-and-lambda-anthropic-haiku-20cj | aws, generativeai, bedrockconverseapi, amazonbedrock | Generative AI - Has Generative AI captured your imagination to the extent it has for me?
Generative AI is indeed fascinating! The advancements in foundation models have opened up incredible possibilities. Who would have imagined that technology would evolve to the point where you can generate content summaries from transcripts, have chatbots that can answer questions on any subject without requiring any coding on your part, or even create custom images based solely on your imagination by simply providing a prompt to a Generative AI service and foundation model? It's truly remarkable to witness the power and potential of Generative AI unfold.
In this article, I am going to show you how to build a serverless GenAI solution that creates call center transcript summaries via a REST API, Lambda, and the AWS Bedrock service. **I have posted a similar article before using the Amazon Bedrock InvokeModel API; however, as of May 2024, AWS has announced the new Bedrock Converse API, and I have updated the API and Lambda function code to call the foundation model using this Converse API.**
Amazon Bedrock is a fully managed service that integrates with multiple popular foundation models from Anthropic, AI21 Labs, Meta, Cohere, Stability AI, and Amazon's own Titan foundation model.
Recently, Amazon announced support for the latest Anthropic model - Anthropic Claude 3 Haiku, the fastest model that allows the creation of near-instant generative AI applications. By integrating with Amazon Bedrock Converse API, it provides a powerful combination for building generative AI applications with little effort, enabling enterprises to serve business needs and customers with speed-to-market and faster software delivery in an agile manner.
Let's consider an example of how to create a REST API that integrates with Amazon Bedrock Converse API and Anthropic Claude 3 Haiku. This API takes a call center transcript containing interactions between a call center employee and a customer, and then summarizes it into simple text for further analysis and training purposes.
I am going to show the steps to build a serverless Generative AI solution to create call center transcript summary via a rest API, Lambda and AWS Bedrock Converse API. I will use recently released Anthropic Haiku foundation model and will invoke it via Amazon Bedrock Converse API.
Let's review our **use cases**:
• There is a transcript available for a case resolution and conversation between customer and support/call center team member.
• A call summary needs to be created based on this resolution/conversation transcript.
• An automated solution is required to create call summary.
• An automated solution will provide a repeatable way to create these call summary notes.
• Increase in productivity as team members usually work on documenting these notes can focus on other tasks.
I am generating my Lambda function using AWS SAM; however, the same can be created using the AWS Console. I like to use AWS SAM wherever possible, as it gives me the flexibility to test the function locally without first deploying it to the AWS cloud.
Here is the **architecture diagram** for our use case.

Let's see the steps to create this automated Generative AI solution for call center transcript summaries.
**Review Bedrock Service**
Amazon Bedrock is a fully managed service that offers a choice of many foundation models, such as Anthropic Claude, AI21 Jurassic-2, Stability AI, Amazon Titan, and others.
Since Bedrock is another serverless offering from Amazon, it integrates with popular FMs and also lets you privately customize FMs with your own data using AWS tools, without having to manage any infrastructure.
**Why Bedrock Converse API?**
You might be wondering why there's a need for another API when Bedrock already supports invoking models for large language models (LLMs). The challenge that the Converse API aims to address is the varying parameters required to invoke different LLMs. It offers a consistent API that can call underlying Amazon Bedrock foundation models without requiring changes in your code. For example, your code can call Anthropic Haiku, Anthropic Sonnet, or Amazon Titan just by changing the model ID without needing any other modifications!
While the API specification provides a standardized set of inference parameters, it also allows for the inclusion of unique parameters when needed.
As of May 2024, the newly introduced Converse API does not support embedding or image generation models.
**Request Model Access**
Before a model can be used, you need to request access to it.


**Temperature, Top P, Max Token Count**
Let's review few key terms before we use the parameters for the model.
• Temperature - A value of 0 produces less random generation; a value of 1 produces more random generation.
• Top P - A value below 1 restricts sampling to the smallest set of most probable tokens whose probabilities sum to that value.
• Max Token Count - The maximum number of tokens to generate. Responses are not guaranteed to fill up to the maximum desired length. Tokens factor into the overall cost, so it is worthwhile to pay attention to the maximum tokens you want to use.
Among a few others, these are some key parameters that need to be passed, along with the prompt, when invoking the Amazon Bedrock API to get the desired response.
**Create a SAM template**
I will create a SAM template for the lambda function that will contain the code to invoke Bedrock Converse API along with required parameters and a prompt. Lambda function can be created without the SAM template however, I prefer to use Infra as Code approach since that allow for easy recreation of cloud resources. Here is the SAM template for the lambda function.

**Create a Lambda Function**
The Lambda function serves as the core of this automated solution. It contains the code necessary to fulfill the business requirement of creating a summary of the call center transcript using the Amazon Bedrock Converse API. This Lambda function accepts a prompt, which is then forwarded to the Bedrock Converse API to generate a response using the Anthropic Haiku foundation model. Now, let's look at the code behind it.

**Build function locally using AWS SAM**
Next, build and validate the function locally using AWS SAM before deploying the Lambda function to the AWS cloud. A few SAM commands used are:
• `sam build`
• `sam local invoke`
• `sam deploy`
**Bedrock Invoke Model Vs. Bedrock Converse API**
**Bedrock InvokeModel**

**Bedrock Converse API**

**Validate the GenAI Model response using a prompt**
Prompt engineering is an essential component of any Generative AI solution. It is both art and science, as crafting an effective prompt is crucial for obtaining the desired response from the foundation model. Often, it requires multiple attempts and adjustments to the prompt to achieve the desired outcome from the Generative AI model.
Given that I'm deploying the solution to AWS API Gateway, I'll have an API endpoint post-deployment. I plan to utilize Postman for passing the prompt in the request and reviewing the response. Additionally, I can opt to post the response to an AWS S3 bucket for later review.

I am using Postman to pass a transcript file as the prompt.
This transcript file contains a conversation between a call center employee (John) and a customer (Girish) about a request to reset a password due to a locked account.
• John: Hello, thank you for calling technical support. My name is John and I will be your technical support representative. Can I have your account number, please?
• Girish: Yes, my account number is {ACCOUNT_NUMBER-1}.
• John: Thank you. I see that you have locked your account due to multiple failed attempts to enter your password. To reset your password, I will need to ask you a few security questions. Can you please provide me with the answers to your security questions?
• Girish: Sure, my security questions are: What is your favorite color? and What is your favorite food?
• John: Great, thank you. I will now reset your password and send you an email with instructions on how to log in to your account. Please check your email in a few minutes.
• Girish: Thank you so much for your help.
• John: You're welcome. Is there anything else I can assist you with today?
• Girish: No, that's all for now. Thank you again for your help.
• John: You're welcome. Have a great day!
**Review the response returned by Generative AI Foundation Model**

The above response is generated using Anthropic Haiku LLM, however since one of the strengths of Bedrock Converse API is to support multiple LLMs without requiring much change in the code, let's see below responses returned by Anthropic Sonnet and Amazon Titan by just passing a different model id to the Lambda function.
**Response using Anthropic Sonnet**

**Response using Amazon Titan**

With these steps, a serverless GenAI solution to create call center transcript summary via a Rest API, Lambda and AWS Bedrock Converse API has been successfully completed. Python/Boto3 were used to invoke the Bedrock API with Anthropic Haiku.
As was demonstrated, the Converse API's consistent invocation approach makes changing the underlying foundation model a much less code-intensive effort, and we were able to get responses from different models just by updating the model id via an environment variable, without requiring any code change!
As GenAI solutions continue to evolve, they are set to reshape traditional workflows and drive tangible benefits across industries. This workshop serves as a compelling example of the transformative power of AI technologies in addressing real-world challenges and unlocking new opportunities for innovation.
Thanks for reading!
Click below to get to the YouTube video for this solution.
{% embed https://www.youtube.com/watch?v=elFTO4TAcvk %}
𝒢𝒾𝓇𝒾𝓈𝒽 ℬ𝒽𝒶𝓉𝒾𝒶
𝘈𝘞𝘚 𝘊𝘦𝘳𝘵𝘪𝘧𝘪𝘦𝘥 𝘚𝘰𝘭𝘶𝘵𝘪𝘰𝘯 𝘈𝘳𝘤𝘩𝘪𝘵𝘦𝘤𝘵 & 𝘋𝘦𝘷𝘦𝘭𝘰𝘱𝘦𝘳 𝘈𝘴𝘴𝘰𝘤𝘪𝘢𝘵𝘦
𝘊𝘭𝘰𝘶𝘥 𝘛𝘦𝘤𝘩𝘯𝘰𝘭𝘰𝘨𝘺 𝘌𝘯𝘵𝘩𝘶𝘴𝘪𝘢𝘴𝘵
| bhatiagirish |
1,895,287 | Debouncing vs Throttling | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-20T21:47:49 | https://dev.to/pabloyeverino/debouncing-vs-throttling-3cm6 | devchallenge, cschallenge, computerscience, beginners | ---
title: Debouncing vs Throttling
published: true
tags: devchallenge, cschallenge, computerscience, beginners
---
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
Debouncing is like me waiting for you to finish your question to be able to answer it. Throttling is me answering just 1 question every 5 minutes.
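In code, the two analogies translate to something like this minimal JavaScript sketch (the `wait` values are illustrative):

```javascript
// Debouncing: wait until the caller has "finished the question" —
// the function runs only after `wait` ms pass with no new calls.
function debounce(fn, wait) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

// Throttling: answer at most one "question" per `wait` ms window.
function throttle(fn, wait) {
  let last = 0;
  return function (...args) {
    const now = Date.now();
    if (now - last >= wait) {
      last = now;
      fn.apply(this, args);
    }
  };
}

// Typical use: debounce a search box, throttle a scroll handler.
const onInput = debounce((q) => console.log("search:", q), 300);
const onScroll = throttle(() => console.log("scroll tick"), 5 * 60 * 1000);
```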
| pabloyeverino |
1,895,285 | Mastering Recursion: A Byte-Sized Explanation | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-20T21:42:46 | https://dev.to/josmel/mastering-recursion-a-byte-sized-explanation-33h | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
Recursion: A function calls itself to solve smaller instances of a problem. Useful for tasks like tree traversal and factorial calculation. Efficient but can lead to stack overflow if not handled properly.
## Additional Context
Recursion simplifies complex problems by breaking them down into manageable parts, making code easier to write and understand. However, it requires careful handling of base cases to prevent infinite loops.
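A minimal JavaScript sketch of the factorial example mentioned above:

```javascript
// factorial(n) solves the problem by calling itself on a smaller
// instance (n - 1) until it reaches the base case.
function factorial(n) {
  if (n <= 1) return 1;        // base case: stops the recursion
  return n * factorial(n - 1); // recursive case: smaller problem
}

console.log(factorial(5)); // → 120 (5 * 4 * 3 * 2 * 1)
```

The base case is what prevents the infinite calls (and eventual stack overflow) mentioned above.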
| josmel |
1,895,284 | Day 975 : Alright | liner notes: Professional : Got up MAD early for a couple of meetings concerning a trip that I'm... | 0 | 2024-06-20T21:36:47 | https://dev.to/dwane/day-975-alright-5e7f | hiphop, code, coding, lifelongdev | _liner notes_:
- Professional : Got up MAD early for a couple of meetings concerning a trip that I'm supposed to take in a couple of weeks. We'll see. Went back to sleep. Got back up for a couple of more meetings and got to work refactoring an application to use a new SDK. Got quite a bit done. Had to refactor the UI for some changes. I want to get a majority of it done, so that when a coworker comes back, I'll just have a couple of small things that I have questions about so I can just finish it up. Responded to some community questions and other small tasks. Not a bad day. I'm tired, but I'll be alright. haha.
- Personal : Last night, I picked up some projects on Bandcamp and started putting together the social media posts. Went through a few tracks. Did a little work on the logo for my side project. Ordered a package for a listener of the radio show. Looked at some land. Ended the night watching a live stream of a Kendrick Lamar concert.

Going to finish up the social media posts for tomorrow. I'll also start putting together the playlist for the radio show and maybe look up the social media handles for the artists. Work some more on the logo. Set up a starter project for the upgrade of a previous project. Maybe watch some "Demon Slayer". We'll see how far I get.
Have a great night!
peace piece
Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com
{% youtube Z-48u_uWMHY %} | dwane |
1,895,282 | Mastering UI Design Principles: Day 3 of My UI/UX Learning Journey | Day 3: Learning UI/UX Design 👋 Hello, Dev Community! I'm Prince Chouhan, a B.Tech CSE student with... | 0 | 2024-06-20T21:24:34 | https://dev.to/prince_chouhan/mastering-ui-design-principles-day-3-of-my-uiux-learning-journey-1pmh | ui, ux, uidesign, design | Day 3: Learning UI/UX Design
👋 Hello, Dev Community!
I'm Prince Chouhan, a B.Tech CSE student with a passion for UI/UX design. Today, I'm excited to share my learnings on UI Design Principles.
---
🗓️ Day 3 Topic: UI Design Principles
---
### 📚 Today's Learning Highlights:
1. Concept Overview:
UI design principles guide the creation of intuitive, visually appealing, and easy-to-use interfaces. They encompass concepts such as balance, contrast, alignment, hierarchy, consistency, simplicity, feedback, accessibility, usability, and emotional design.
2. Key Takeaways:
- Balance: Achieve visual stability by distributing elements evenly.
- Symmetrical balance: Equal weight on both sides of a central axis.
- Asymmetrical balance: Unequal elements balanced by visual weight.
- Contrast: Enhances readability and hierarchy by highlighting differences.
- Contrast in color, size, shape, and texture.
- Use high contrast for important elements, low contrast for secondary ones.
- Alignment:** Create order and organization by aligning elements.
- Align based on edges or a grid system for consistency.
- Maintain alignment across different screen sizes for responsiveness.
3. Tools & Resources:
- Tool: Figma – Practiced creating balanced and aligned UI elements.
- Resource: UI UX DESIGN COURSE
4. Practical Application:
- Analyzed a popular website's UI for examples of balance, contrast,
and alignment.
- Created a basic mockup in Figma focusing on these principles.
---
🚀 Future Learning Goals:
Next, I'll explore visual hierarchy, consistency, and simplicity in UI design.
---
📢 Community Engagement:
- What are your favorite examples of well-balanced UI designs?
- Any tips for achieving asymmetrical balance in design?
---
💬 Quote of the Day:
_"Design is not just what it looks like and feels like. Design is how it works." – Steve Jobs_
---
Thank you for reading! Stay tuned for more updates as I continue my journey in UI/UX design.
#UIUXDesign #LearningJourney #DesignThinking #PrinceChouhan
 | prince_chouhan |
1,895,281 | [Game of Purpose] Day 33 | Today I played around with collisions. I want to turn off the drone when its propellers hit... | 27,434 | 2024-06-20T21:23:49 | https://dev.to/humberd/game-of-purpose-day-33-5bng | gamedev | Today I played around with collisions. I want to turn off the drone when its propellers hit something. I made it detect hitting physics objects, but I can't figure out how to do it when colliding with static objects that have physics simulation disabled.
Each propeller is a separate Blueprint that has a custom box collider and I emit events when the collision occurs. However, now success so far :/ | humberd |
1,895,280 | A Guide to Better Understand Props in React.js! | Props are essential in any React.js application as they simplify the flow of data! They allow you to... | 0 | 2024-06-20T21:21:31 | https://dev.to/gianni_cast/a-guide-to-better-understand-props-in-reactjs-40of | Props are essential in any React.js application as they simplify the flow of data! They allow you to pass data from parent to child, thus making your code more dynamic as well as reusable.
## What are they?
As mentioned briefly in the intro, props are how data is passed from one component to another. It is crucial to understand that they are uni-directional, meaning they can only be passed down from a parent component to a child component. They **cannot** be passed from sibling to sibling.
Here is a basic example:
```
function ParentComponent() {
const data = "Hello from Parent!";
return <ChildComponent greeting={data} />;
}
function ChildComponent(props) {
return <p>{props.greeting}</p>;
}
```
These functions are two different components in a React app. The parent component has data that the other component can't access unless it is passed down; the props parameter gives the child component access to that data. But what if you had another component that wanted to use that data? Simple! You pass it down to that component as well.
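React aside, the one-way flow shown above can be simulated with plain JavaScript functions. This is an illustrative sketch, not how React actually renders, but it shows why the child can only see what the parent hands down:

```javascript
// Hypothetical simulation of React's parent -> child props flow using plain
// functions. Each "component" is a function that takes a props object and
// returns a string of markup.
function ChildComponent(props) {
  // The child only sees the data it was handed; it cannot reach into the parent.
  return `<p>${props.greeting}</p>`;
}

function ParentComponent() {
  const data = "Hello from Parent!";
  // "Rendering" the child means calling it with a props object.
  return ChildComponent({ greeting: data });
}

console.log(ParentComponent()); // -> <p>Hello from Parent!</p>
```

The key point mirrors React: ChildComponent never fetches data itself; it receives everything through its single props parameter.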
You might be wondering, "What else can I do with props?" and the answer to that is a lot. You can even pass down functions as props which allows child components to communicate with parent components by invoking the passed down functions.
Here is an example from my personal project I did this week:
```
function App() {
const [characters, setCharacters] = useState([]);
useEffect(() => {
fetch(API)
.then(resp => resp.json())
.then(data => setCharacters(data))
}, [])
function addCharacter(character) {
fetch(API, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(character)
})
.then(response => response.json())
.then(json => setCharacters([...characters, json]))}
return (
<Router>
<div className="container">
<Header />
<Routes>
<Route path="/" element={<CharacterPage characters={characters} />} />
<Route path="/create" element={<CreateCharacter addCharacter={addCharacter} />} />
<Route path="/search" element={<Search />} />
<Route path="/search/:searchTerm" element={<Results />} />
</Routes>
</div>
</Router>
)
}
function CreateCharacter ({ addCharacter }) {
const [characterInfo, setCharacterInfo] = useState(initialCharcterInfo)
function handleSubmit(e) {
e.preventDefault()
addCharacter(characterInfo)
setCharacterInfo(initialCharcterInfo)}
function handleChange(e) {
const key = e.target.name
const newCharacterInfo = {
...characterInfo,
[key]: e.target.value
}
setCharacterInfo(newCharacterInfo)}
return (
<div className="form-container">
<form onSubmit={handleSubmit}>
<input type="text" name="fullName" placeholder="Enter Full Name" value={characterInfo.fullName} onChange={handleChange}/>
<input type="text" name="title" placeholder="Enter Title" value={characterInfo.title} onChange={handleChange}/>
<select name="family" value={characterInfo.family} onChange={handleChange}>
<option value="House Stark">House Stark</option>
<option value="House Lannister">House Lannister</option>
<option value="House Baratheon">House Baratheon</option>
<option value="House Greyjoy">House Greyjoy</option>
<option value="House Tyrell">House Tyrell</option>
<option value="House Bolton">House Bolton</option>
<option value="Free Folk">Free Folk</option>
<option value="House Targaryen">House Targaryen</option>
<option value="House Mormont">House Mormont</option>
<option value="misc">misc</option>
</select>
<input type="text" name="imageUrl" placeholder="Enter Image URL" value={characterInfo.imageUrl} onChange={handleChange}/>
<input type="text" name="bio" placeholder="Enter Bio" value={characterInfo.bio} onChange={handleChange}/>
<button type="submit" className="character-submit">Create Character</button>
</form>
</div>
)}
export default CreateCharacter;
```
As you can see, the addCharacter function used to post data to my db.json file was passed down from my App.jsx (parent) component to my CreateCharacter.jsx (child) via some {} curly brackets. And if I wanted to use it again elsewhere in another child component, I could! That's the beauty of props in React.js.
Hope you enjoyed my beginner guide to props in React.js!
| gianni_cast | |
1,895,279 | uieiureuire | A post by James Gordon | 0 | 2024-06-20T21:17:43 | https://dev.to/james_gordon_9e2ff993b44b/httpsipfsioipfsbafkreigb2j2jfxjhsfvqszioh34qlybrnyctgfyl3coah3thcech2i63ji-3op7 | james_gordon_9e2ff993b44b | ||
1,895,276 | Navigating the Future of Mobile App Development | In an era where smartphones have become an extension of ourselves, mobile app development stands at... | 0 | 2024-06-20T21:10:57 | https://dev.to/john_robinson_0a5ad1e5620/navigating-the-future-of-mobile-app-development-b1g | In an era where smartphones have become an extension of ourselves, mobile app development stands at the forefront of technological innovation. Whether it's for entertainment, productivity, or social interaction, mobile apps play a crucial role in our daily lives. This guest post delves into the essentials of [mobile app development](https://tresmind.com/mobile-app-development/), emerging trends, and best practices for creating apps that resonate with users.
## The Essentials of Mobile App Development
## 1. Understanding the Market
- Conduct thorough market research to identify user needs and gaps in the current offerings.
- Analyze competitors and define your unique value proposition.
## 2. Choosing the Right Platform
- Decide between native (iOS, Android), cross-platform (React Native, Flutter), or web-based apps.
- Consider your target audience and the specific features of each platform.
## 3. User-Centered Design
- Prioritize user experience (UX) with intuitive navigation and engaging interfaces.
- Implement user feedback loops to continuously improve the app.
## 4. Robust Development Process
- Follow agile methodologies to ensure flexibility and iterative progress.
- Use modern development tools and frameworks for efficient coding and debugging.
## Emerging Trends in Mobile App Development
## 1. Artificial Intelligence (AI) and Machine Learning
- Integrate AI for personalized user experiences, predictive analytics, and smart assistants.
- Utilize machine learning for improved app functionality and user engagement.
## 2. 5G Technology
- Leverage 5G's high speed and low latency to enhance app performance and enable advanced features like augmented reality (AR) and virtual reality (VR).
## 3. Internet of Things (IoT)
- Develop apps that can interact with IoT devices for smarter homes, health monitoring, and more.
- Ensure robust security measures to protect user data.
## 4. Blockchain
- Use blockchain technology for secure transactions, data integrity, and decentralized apps (dApps).
- Explore new possibilities in finance, supply chain, and digital identity verification.
## Best Practices for Mobile App Development
## 1. Focus on Performance
- Optimize app speed and responsiveness to provide a seamless user experience.
- Regularly test and refine the app to minimize crashes and bugs.
## 2. Prioritize Security
- Implement strong authentication mechanisms and data encryption.
- Stay updated with the latest security practices and address vulnerabilities promptly.
## 3. Engage Users Effectively
- Utilize push notifications, in-app messages, and gamification to keep users engaged.
- Provide regular updates with new features and improvements.
## 4. Monetization Strategies
- Choose the right monetization model—freemium, subscription, in-app purchases, or ads.
- Ensure that monetization efforts do not compromise user experience.
## Conclusion
Mobile app development is an ever-evolving field that requires a blend of creativity, technical expertise, and user empathy. By understanding the market, leveraging emerging trends, and adhering to best practices, developers can create apps that not only meet user needs but also stand out in a crowded marketplace. Embrace the future of mobile app development to create impactful and innovative digital experiences. | john_robinson_0a5ad1e5620 | |
1,895,275 | Create your own react library and JSX | Root Div: The serves as a container where the custom-rendered content will be inserted. ... | 0 | 2024-06-20T21:10:56 | https://dev.to/geetika_bajpai_a654bfd1e0/create-your-own-react-library-l2g |

Root Div: The `<div id="root">` element serves as a container where the custom-rendered content will be inserted.

## Custom Render Function:
The customRender function takes two arguments:
- reactElement: An object representing the element to be rendered.
- container: The DOM element where the rendered element will be appended.
## DOM Element Creation:
const domElement = document.createElement(reactElement.type); creates a new DOM element of the type specified in reactElement.type (in this case, an <a> element).
## Setting Inner HTML:
domElement.innerHTML = reactElement.children; sets the inner HTML of the newly created DOM element to the value of reactElement.children (which is the text "Click me to visit google").
## Setting Attributes:
- domElement.setAttribute('href', reactElement.props.href); sets the href attribute of the <a> element to the URL specified in reactElement.props.href ("http://google.com").
- domElement.setAttribute('target', reactElement.props.target); sets the target attribute of the <a> element to the value specified in reactElement.props.target ("_blank").
## Appending to Container:
container.appendChild(domElement); appends the newly created and configured DOM element to the specified container (mainContainer).
## React Element Object:
The reactElement object simulates a simplified React element with a type (the type of HTML element), props (an object containing attributes for the element), and children (the inner content of the element).
## Selecting Main Container:
const mainContainer = document.querySelector('#root'); selects the <div> element with the ID of "root" to serve as the container for the rendered content.
## Rendering the Element:
customRender(reactElement, mainContainer); calls the customRender function with the reactElement object and the mainContainer DOM element, rendering an anchor (<a>) element inside the root div.
## Result
When this code runs, it creates an anchor (<a>) element with the text "Click me to visit google", which links to http://google.com and opens in a new tab (target="_blank"), and appends it to the div with the ID root in the HTML document.
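Since the snippets above appear only as screenshots, here is a reconstructed sketch of the customRender flow just described (reconstructed from the prose, so details may differ from the original images). A tiny stand-in for the browser document is included so the sketch can run outside a browser; in the article this code targets the real DOM:

```javascript
// Stand-in for the browser DOM (assumption: in the article this runs in a
// browser against the real `document` and a real <div id="root">).
function makeFakeDocument() {
  return {
    createElement(type) {
      return {
        type,
        innerHTML: "",
        attributes: {},
        setAttribute(name, value) { this.attributes[name] = value; },
      };
    },
  };
}
const document = makeFakeDocument();
const mainContainer = { children: [], appendChild(el) { this.children.push(el); } };

// customRender as described: create the element, set its inner HTML from
// `children`, copy the props as attributes, then append it to the container.
function customRender(reactElement, container) {
  const domElement = document.createElement(reactElement.type);
  domElement.innerHTML = reactElement.children;
  domElement.setAttribute("href", reactElement.props.href);
  domElement.setAttribute("target", reactElement.props.target);
  container.appendChild(domElement);
}

// The simplified "React element" object from the article.
const reactElement = {
  type: "a",
  props: { href: "http://google.com", target: "_blank" },
  children: "Click me to visit google",
};

customRender(reactElement, mainContainer);
console.log(mainContainer.children[0].attributes.href); // -> http://google.com
```

The "modulated" version mentioned next simply loops over the keys of reactElement.props (for example with a for...in loop) instead of hard-coding each setAttribute call.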
## Now we modulate above code by using loops

Now let's examine how work is done in React.

Ques:-What is an App?
Ans:-An App is a function.
Given that App is a function, can we declare it here as well?

So, it's working, which means App is a function.
However, if App is a function, why is the syntax <App/> used? This syntax is JSX.
Ques:-Where does this come from?
Ans:-Every React application uses a bundler, such as Babel or Vite, to handle JSX.
Ques:-What is the Role of a Bundler?
Ans:-A bundler's role is to correct and update the syntax.

HTML Syntax
A bundler's role is to convert this syntax to another form. HTML syntax is obviously easier to understand, but React doesn't natively understand HTML syntax. This is why JSX is used—it mixes JavaScript with HTML. However, the actual syntax should look like this:

The HTML syntax is parsed and converted into a tree structure similar to the syntax above.
An important point is that MyApp is a function, yet we write it as JSX. Therefore, Babel or another transpiler must also be converting MyApp behind the scenes. This works because, in the end, it is just a function. Although we can call it like a plain function, we typically do not use this approach.

Can we write more React elements in main.jsx?
As we mentioned, what we write in the MyApp function will be parsed into a React element object. So can't we pass that object directly? Not quite: React expects elements created through its own methods and syntax, because that syntax is what converts the code into a tree structure.

Now we create React elements according to React's specifications, so we import React. The syntax here is predefined.

Previously, we created React elements manually using ReactElement, but now React does not allow us to do this directly. Although React is a library and not as comprehensive as a framework, it provides specific methods for rendering.
In our custom implementation, we used our own render method in customReact, but here we use the render method provided by React.
Now let's explore how variables or JavaScript should be injected.

Note: {username} is called an expression. This means it's an evaluated expression, indicating that we don't write JavaScript here directly but rather the outcome or result of its evaluation.
Ques:-Why can't we write JavaScript directly within curly brackets?
Ans:-Variable injection occurs once the entire tree is constructed.

| geetika_bajpai_a654bfd1e0 | |
1,895,274 | Monitoring Underutilized Storage Resources on AWS | When cloud professionals embark on a journey to fish out underutilized resources that may be driving... | 0 | 2024-06-20T21:08:56 | https://dev.to/aws-builders/monitoring-underutilized-storage-resources-on-aws-1gnf | aws, storage, cloudcomputing, awscommunity | When cloud professionals embark on a journey to fish out underutilized resources that may be driving costs up, they rarely pay attention to doing some cost optimization in the direction of storage resources and often focus solely on optimizing their compute resources. In this article, we will go through some tools and strategies you can leverage in monitoring your AWS storage resources. Before moving on to that main event, let’s start by talking briefly about the different storage types available on AWS. If you are ready to roll, let’s go!!
## Storage Types
When it comes to storage, AWS has a wide array of services you can choose from. You will agree with me that having this many options can add some confusion to your decision-making process, especially when you don't understand what the options are and which ones are suitable for which use cases. To give you some guidance for when you have to pick a storage service on AWS, let's talk about some of the storage types available.
On AWS, storage is primarily divided into three categories depending on the type of data you intend to store. These categories are: Block storage, Object storage and File storage. We will go over them one after the other, exploring examples of each as we go.
## Block Storage
To put it simply, a block storage device is a type of storage device that stores data in fixed-size chunks called blocks. The size of each block depends on the amount of data the device can read or write in a single input/output (I/O) request. So when the data you want to store surpasses the size of a single block, the data is broken down into equal-sized chunks before it is stored on the underlying storage device. As it is always important to understand the why behind actions, let me tell you about the performance benefit of block storage devices handling data this way.
When data is broken down into blocks, it allows for fast access and retrieval of the data. In addition to fast access, when data is on a block storage device and changes are made to the data, only the blocks affected by the change are re-written. All other blocks remain unchanged which helps to further enhance performance and speed. In AWS, the block storage options include Elastic Block Storage (EBS) volumes and Instance Store volumes. Check out [this article](https://medium.com/aws-in-plain-english/exploring-ec2-instance-storage-understand-your-options-425186bf0974) I wrote to learn more about EBS and Instance Store Volumes.
## Object Storage
With object storage, data is not broken down into fixed-sized chunks as is the case with block storage. In object storage, data (files) are stored as single objects no matter their size. This kind of storage is suitable for huge amounts of unstructured data. The object storage service of AWS is S3. With all data being stored as single objects, when some part of that object is updated, the entire object has to be rewritten. You can access data stored in S3 via HTTP, HTTPS or APIs through the AWS CLI or SDK. Some pros that come with using S3 are: it is highly available, tremendously durable, low cost and can scale infinitely not forgetting the fact that you can replicate your data in the same or across regions for disaster recovery purposes. Check out [this article](https://medium.com/@dbrandonbawe/exploring-the-basics-of-amazon-simple-storage-service-s3-f8ad2af0a6f9) I wrote on S3 to learn more about object storage.
## File Storage
File Storage is fundamentally an abstraction of block storage using a file system such as NFS (Network File System) and SMB (Server Message Block). With File storage, the hierarchy of files is maintained with the use of folders and subfolders. The main file storage services of AWS are Amazon EFS and Amazon FSx. File storage is the most commonly used storage type for network shared file systems.
## Monitoring Underutilized Storage Resources
The opening of this article was, so to speak, a lament about how storage resources are seldom considered when organizations and individuals take cost optimization actions. It is just as important to pick the right storage option for your use case and to provision it appropriately. You can right-size your storage resources by monitoring, modifying and even deleting those that are underutilized. Let's examine some of the ways in which you can monitor your storage resources.
### Amazon Cloudwatch
CloudWatch provides out-of-the-box metrics for monitoring storage services such as S3, DynamoDB, EBS and more. For EBS volumes, you can use a metric such as VolumeIdleTime, which specifies the number of seconds during which there were no read or write requests to the volume within a given time period. With the information that CloudWatch provides through this metric, you can decide on the action you want to take to manage the underutilized volume. In addition to the metrics that CloudWatch ships with for EBS volumes, you can create custom metrics to do things like find under-provisioned or over-provisioned volumes.
For S3 buckets, you can use the BucketSizeBytes CloudWatch metric, which gives you the size of your bucket in bytes. This comes in handy if you have stray S3 buckets that aren't holding much data. Using this metric, you can quickly find and clean up those buckets.
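As an illustration, assuming the AWS CLI is configured for your account (the bucket name and dates below are placeholders), the BucketSizeBytes metric can be pulled like this:

```shell
# Fetch the average size of a bucket (in bytes) over one day.
# BucketSizeBytes is a daily metric, reported per storage class via the
# StorageType dimension.
aws cloudwatch get-metric-statistics \
  --namespace AWS/S3 \
  --metric-name BucketSizeBytes \
  --dimensions Name=BucketName,Value=my-example-bucket Name=StorageType,Value=StandardStorage \
  --start-time 2024-06-19T00:00:00Z \
  --end-time 2024-06-20T00:00:00Z \
  --period 86400 \
  --statistics Average
```

A bucket that reports a tiny or flat BucketSizeBytes value over time is a good candidate for cleanup or tiering.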
### S3 Object Logs & S3 Analytics
With S3, you can use S3 server access logs as well. These will help you track requests that are made to your bucket. Using this, you can find buckets that aren't accessed frequently, and then determine if you still need the data in that bucket, or if you can move it to a lower-cost storage tier or delete it. This is a manual process of determining access patterns. You can make use of S3 Analytics if you are interested in a service that provides an automated procedure.
S3 Analytics can help you determine when to transition data to a different storage class. Using the analytics provided by this service, you can then leverage S3 lifecycle configurations to move data to lower cost storage tiers or delete it, ultimately reducing your spend over time. You can also optionally use the S3 Intelligent-tiering class to analyze when to move your data and automate the movement of the data for you. This is best for data that has unpredictable storage patterns.
### Compute Optimizer and Trusted Advisor
To monitor for situations such as under-provisioned or over-provisioned EBS volumes, you can also make use of Compute Optimizer and Trusted Advisor for an easier and more automated experience. Compute Optimizer will make throughput and IOPS recommendations for General Purpose SSD volumes and IOPS recommendations for Provisioned IOPS volumes. In doing so, it identifies a list of optimal EBS volume configurations that provide cost savings and potentially better performance. With Trusted Advisor, you can identify a list of underutilized EBS volumes. Trusted Advisor also ingests data from Compute Optimizer to identify volumes that may be over-provisioned as well.
## Conclusion
As a self appointed disciple preaching the gospel of optimizing AWS resources for better cost saving and performance, I hope you have taken a lesson or two from this article to implement in your resources monitoring and optimizing strategies. There are services such as CloudWatch, Trusted Advisor, Compute Optimizer, S3 Analytics and much more for you to add to your bag of tools. To make sure you don’t overwhelm yourself, learn more about each service you intend to make use of, start small and then move up from there. Good luck in your cloud endeavors. | brandondamue |
1,895,273 | Parse OpenAI answers as JSON | Working with OpenAI can be tricky when it comes to parsing JSON responses, especially when they... | 0 | 2024-06-20T21:07:24 | https://dev.to/mehrandvd/parse-openai-result-as-json-p2a | csharp, openai, json, dotnet |
Working with OpenAI can be tricky when it comes to parsing JSON responses, especially when they include extra characters like `'''` or a leading `json:`. To tackle this, I've developed `PowerParseJson<T>()`, a handy tool that simplifies the process.
```csharp
var result = await client.GetChatCompletionsAsync(chatCompletionsOptions);
string answer = result.Value.Choices.FirstOrDefault()?.Message.Content;
// No more exceptions!
var json = SemanticUtils.PowerParseJson<JsonObject>(answer);
```
You can find `PowerParseJson<T>()` on my GitHub: [mehrandvd/SemanticValidation](https://github.com/mehrandvd/SemanticValidation/). It is also available here as a Nuget package: [SemanticValidation Nuget](https://www.nuget.org/packages/SemanticValidation/)
While newer GPT models offer settings for cleaner JSON outputs, many models still lack this feature. `PowerParseJson<T>()` is here to bridge that gap and make your OpenAI experience smoother. | mehrandvd |
1,895,272 | Exploring Stacks with Python: Developing the Tower of Hanoi Game | My latest endeavor involves recreating the classic Tower of Hanoi puzzle, offering a fantastic... | 0 | 2024-06-20T21:07:10 | https://dev.to/codecounsel/exploring-stacks-with-python-developing-the-tower-of-hanoi-game-25em | beginners | My latest endeavor involves recreating the classic Tower of Hanoi puzzle, offering a fantastic opportunity to delve into data structures, specifically stacks, in Python. This post outlines my journey in developing the Tower of Hanoi game and reflects on the valuable learning experiences it provided.
The "Tower of Hanoi Game" challenges players to move a stack of disks from one rod to another, following specific rules: only one disk can be moved at a time, no disk may be placed on top of a smaller disk, and all disks must eventually be moved to the rightmost rod. This game was developed using Python, employing the concept of stacks to manage the disks and ensure valid moves.
To bring this project to life, I relied on Python for its clear syntax and efficient handling of data structures. Implementing stacks allowed me to deepen my understanding of their functionality and practical applications.
During the development process, I faced several challenges, particularly around managing user inputs and ensuring the validity of moves. One significant hurdle was implementing the logic to check for valid moves and updating the game state accordingly. I overcame this by carefully structuring the stack operations and incorporating input validation, which was crucial for maintaining the game's integrity.
Key Features
- Stack-Based Disk Management: The game uses stacks to handle disk operations, ensuring that moves adhere to the rules of the Tower of Hanoi puzzle.
- Input Validation: The game checks user inputs to ensure only valid moves are made, enhancing the user experience.
- Move Counter: A counter tracks the number of moves made by the player, providing feedback on performance and encouraging optimization.
- Clear State Display: The current state of the rods is displayed after each move, allowing players to easily track their progress.
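To make the stack-based disk management above concrete, here is a minimal illustrative sketch in Python. The names are hypothetical and not taken from the project's actual code:

```python
# Illustrative sketch of stack-based Tower of Hanoi moves.

def make_rods(num_disks):
    """Three rods as stacks (lists); the largest disk sits at the bottom of rod 0."""
    return [list(range(num_disks, 0, -1)), [], []]

def move(rods, src, dst):
    """Move the top disk from src to dst if the move is legal."""
    if not rods[src]:
        return False  # nothing to move
    if rods[dst] and rods[dst][-1] < rods[src][-1]:
        return False  # cannot place a larger disk on a smaller one
    rods[dst].append(rods[src].pop())
    return True

def is_solved(rods, num_disks):
    """The puzzle is solved when every disk sits on the rightmost rod."""
    return len(rods[2]) == num_disks

rods = make_rods(2)
move(rods, 0, 1)   # small disk to the middle rod
move(rods, 0, 2)   # large disk to the right rod
move(rods, 1, 2)   # small disk on top of it
print(is_solved(rods, 2))  # -> True
```

Because each rod is a plain list used only through append and pop, the last-in-first-out discipline of a stack falls out naturally, and the single comparison against the destination's top element is all the validation the rules require.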
This project enriched my understanding of Python programming and the practical use of data structures like stacks. I learned the importance of validating user inputs and managing game states in real-time, which are essential skills in software development.
The "Tower of Hanoi Game" is in its initial phase, and there are numerous enhancements that could be implemented, such as:
- Increasing Complexity: Adding more disks or varying the rules to create different levels of difficulty.
- Enhanced User Interface: Introducing a graphical interface to make the game visually appealing and easier to interact with.
- Performance Metrics: Providing detailed feedback on player performance, including optimal move counts and time taken.
I invite everyone to check out the code on my GitHub repository (https://github.com/codecounsel/towerofhanoigame) and contribute suggestions for improvements or new features. Your feedback is crucial in helping this project evolve!
Developing the "Tower of Hanoi Game" was a rewarding experience that combined my passion for programming with the challenge of game development. I am eager to continue enhancing the game, adding features, and improving the design. I look forward to any feedback that can help take this project to the next level!
| codecounsel |
1,895,271 | shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 4 | In this article, I discuss how Blocks page is built on ui.shadcn.com. Blocks page has a lot of... | 0 | 2024-06-20T21:06:22 | https://dev.to/ramunarasinga/shadcn-uiui-codebase-analysis-how-is-blocks-page-built-part-4-41f9 | typescript, javascript, opensource, nextjs | In this article, I discuss how [Blocks page](https://ui.shadcn.com/blocks) is built on [ui.shadcn.com](http://ui.shadcn.com). [Blocks page](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/blocks/page.tsx) has a lot of utilities used, hence I broke down this Blocks page analysis into 5 parts.
1. [shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 1](https://medium.com/@ramu.narasinga_61050/shadcn-ui-ui-codebase-analysis-how-is-blocks-page-built-part-1-ac4472388f0a)
2. [shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 2](https://medium.com/@ramu.narasinga_61050/shadcn-ui-ui-codebase-analysis-how-is-blocks-page-built-part-2-7714c8f36a43)
3. [shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 3](https://medium.com/@ramu.narasinga_61050/shadcn-ui-ui-codebase-analysis-how-is-blocks-page-built-part-3-991c423b2ea3)
4. shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 4
5. shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 5 (Coming soon)
In part 3, I mentioned that I would provide an example that demonstrates ts-morph usage in shadcn-ui/ui. I created this [ts-morph-usage-in-shadcn-ui](https://github.com/Ramu-Narasinga/ts-morph-usage-in-shadcn-ui) repository and put in the minimum files required to log the execution flow when \_getBlockContent is called.
In this article, I will discuss the functions \_extractVariable and project.createSourceFile.
\_extractVariable
-----------------
\_extractVariable function contains the below code:
```js
function _extractVariable(sourceFile: SourceFile, name: string) {
  const variable = sourceFile.getVariableDeclaration(name)
  if (!variable) {
    return null
  }

  const value = variable
    .getInitializerIfKindOrThrow(SyntaxKind.StringLiteral)
    .getLiteralValue()
  variable.remove()
  return value
}
```
In order to understand this API, I had to read the [ts-morph documentation for variables](https://ts-morph.com/details/variables). To explain in simple terms, ts-morph can be used to manipulate JavaScript and TypeScript code, and it comes in handy when you are trying to refactor a large codebase. This [Refactoring-Typescript-code-with-ts-morph](https://blog.kaleidos.net/Refactoring-Typescript-code-with-ts-morph/) post has some great examples that demonstrate ts-morph's power.
```js
// Extract meta.
const description = _extractVariable(sourceFile, "description")
const iframeHeight = _extractVariable(sourceFile, "iframeHeight")
const containerClassName = _extractVariable(sourceFile, "containerClassName")
```
These variables can be found in the [\_\_registry\_\_/new-york/block/authentication-04.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/__registry__/new-york/block/authentication-04.tsx) example. This code basically removes the declared variables from authentication-04.tsx but keeps the rest of the TypeScript code.
Since there are a lot of files in \_\_registry\_\_ and they could all involve this sort of extraction (aka TypeScript code manipulation), using ts-morph sounds like the right choice here.
[The image below points to ts-ast-viewer, uploaded with authentication-04.tsx code](https://ts-ast-viewer.com/#code/JYWwDg9gTgLgBASRAQwOYFM4DMoRHAIgDt0APGAelDXQIChRJY4AZYIga2132LMoA27DvQbho8AN5wAQgFcYMCETgBfbnkIABClHSpgAZxhQAnhRIB3ALSnoHCnOAUARgqVF6jCXGkIiYApqGrw6egbGZhboNnZQDk5UAQpe4szSLMgu6ALBOJoEYfpGJuZWtvaOzgJZOaJkTPAAxsrGcAAm6IZNUMBgMMDKcAC8dHCEAIJwAhAGKmA0cJbAMAAWcDCWEHAtAnIgRIYAdHAAKquYWMBQbbv7KqvIhhsX07Ps2ND4y2tw6CjAXLIIjtOALQyGLZQdonc7oPQAcmeyDgADFoKgIPA7HIoGCnpD7NNhHBgaCUUJOBttoZgKgVHIwHBgFg4DiOtsiFi4I8AG6YYGkpotOREGCw16GdAtEE7CB7A48p6kuX8vHUDBHeqkRpyw7wFlQZAgdAACXQdNW8GGhAAHAAGe1gUja3Uy27KGDIdjwgDCNQhADljZgbQQbFg5AJcqtrJHo2DrAAWaaoABcYGs9tdPk6WGQUfgkaITQGQwAIk9Vi4IMhoQAKACUvjGcD0MFxKnrrfGAB52sBeTsA4ZgybhuG41HcgJ06heqDZ2mQOxrLGANoANkdzoAuqm0-PgO1rLtDNYAExwUgCZerjcOp2kXcEAB8PfGcH7g+HBLH6AnLABDIZkYH+c8mnQMV4TgAArORjBZUxTygsC8TAZCAEYLzfD9Py-AchyaEd-wnEBSGsAslDgI9QRsdcAGYAFYn33VBkEzTdcPwniCJ-Yi-xDCdaJojjLw2fgUOgqBuN4nje1WTDfyDISCDA8hrAYm9PjFawawEdo3xYd4iF7ChFPfOT5KZASVPHNTJJcZAahLTB1JgawQAUdATywaB9FwUVDMsqyeIAUWktkIFxP4AVybIZksak3jmZKcTxZBhWisU8LksywBCqyzMIwq8sI5TR1UkT2MzJNZNCviiJIqqF1EzMcNKhre0yBKeRgEABHRKAJ3+b0BDfMK4rMnqck60Le38QIYFyhrjxGuL6Aa3iYFMMAAIIUbAU2rb8LAGpINWeVOmGggQC0MhjTO9AjhaEBjpO8Y9AARycPR2hWuSKDm3jisHYH5PK2zKvs6qxI6gGQch5r7KAkCVnAqS0Pqj7utqGN+sG6AJ3BQloTfAAFAkoXaaa8fBoq2E4BHQtWPQsAnCg-KgTEPJJ6n3o+8YodI26BEohRtnYSl0D0mYmi4dzrEMfAgvhaWBZO+nQqGnmopivnoHaAB+ZmQYoRmOC1-DQd5K3PwW5IDXaYmqcNggNl2-aDbJtt0B+64fLgIHTa-CgSpD3t5EUIYdr2idDDkFwVxgd3hdUiNp2xhrjLmCOKCjjw7cj9whl5OtgGBGAJ2imB1YqkWM+jLPQpzj4fnWABxCBZmAvOC+ULWbcHpHBPskAPJTRXIMixXleb3jy2UAAyDiIEMABueA+QFFRMpFMUjckd2CFUCOLZ5NmJwAYlT5H9tVqB1bt8YAGU6QZMA84twew7BgGh9ygAxG-E74TlWMeToKgXCoE8t5Rc6YXByxEODBaKAMDM0MFAJoHMzqZXQJdAy8IjiGF5KgDWPFnJVwIEgGg5D8LLHaGsCcmEACcF5szMwuJaKhmF7QOjoZ+NO9lYzxlyI3XIEAXCwWlB5FoaoOh1g4GmFwvRUBWhIBCaw657RHAvPudoijDxGlMN0ZytAAbBzNuHa2v9batkbHQVQQA)

In order to get access to the TypeScript AST API, it is now apparent that we need the SourceFile API provided by ts-morph.
project.createSourceFile
------------------------
I found this [createSourceFile API reference in the ts-morph documentation](https://ts-morph.com/setup/adding-source-files#by-structure). It all makes sense now.
```js
const raw = await _getBlockCode(name, style)
const tempFile = await createTempSourceFile(`${name}.tsx`)
const sourceFile = project.createSourceFile(tempFile, raw, {
  scriptKind: ScriptKind.TSX,
})
```

To get access to the AST API, we create a project with a source file and code so we can perform additional operations such as removing variables (like we saw in `_extractVariableNames`). The block examples contain additional variables that need to be removed from the original file. For example, [authentication-01.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/registry/new-york/block/authentication-01.tsx) has variables that are used on the “Blocks” page to set the iframe height and parent class name. This is a very useful technique when dealing with multiple files that require some form of code formatting and variable extraction.
This is what `_getBlockContent` responds with when called in [getBlock](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L27). We can now move on to understanding the code inside getBlock, which is used in the BlockDisplay component by providing the getBlock result to BlockPreview.

In the final part 5, I will discuss the getBlock function and the [BlockDisplay](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/block-display.tsx#L5) and [BlockPreview](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L27) components, and that brings this journey of understanding how the “Blocks” page is built to an end.
Conclusion:
-----------
In order to understand the API used in the `_extractVariable` function in shadcn-ui/ui, I had to read the [ts-morph documentation for variables](https://ts-morph.com/details/variables). In simple terms, ts-morph can be used to manipulate JavaScript and TypeScript code, and it comes in handy when you are trying to refactor a large codebase. [Refactoring Typescript code with ts-morph](https://blog.kaleidos.net/Refactoring-Typescript-code-with-ts-morph/) has some great examples that demonstrate ts-morph's power.
To understand project.createSourceFile, I also had to set up a GitHub repository ([https://github.com/Ramu-Narasinga/ts-morph-usage-in-shadcn-ui](https://github.com/Ramu-Narasinga/ts-morph-usage-in-shadcn-ui)) and write the minimal code required to set up the ts-morph related functions used in lib/blocks.ts. To access the AST API, we create a project with a source file and some code extracted from a file so we can perform additional operations such as removing variables (like we saw in `_extractVariableNames`). The block examples have additional variables that need to be removed from the original file. For example, [authentication-01.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/registry/new-york/block/authentication-01.tsx) has variables that are used on the “Blocks” page to set the iframe height and parent class names. This is a very useful technique when dealing with multiple files that require some form of code formatting and variable extraction. I have to admit, this is an advanced form of TypeScript usage that I have never seen or used before.
> _Want to learn how to build shadcn-ui/ui from scratch? Check out_ [_build-from-scratch_](https://tthroo.com/)
About me:
---------
Website: [https://ramunarasinga.com/](https://ramunarasinga.com/)
Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/)
Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga)
Email: [ramu.narasinga@gmail.com](mailto:ramu.narasinga@gmail.com)
References:
-----------
1. [https://github.com/shadcn-ui/ui/blob/main/apps/www/components/block-display.tsx#L5](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/block-display.tsx#L5)
2. [https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L27](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L27)
3. [https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L107](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L107)
4. [https://blog.kaleidos.net/Refactoring-Typescript-code-with-ts-morph/](https://blog.kaleidos.net/Refactoring-Typescript-code-with-ts-morph/)
5. [https://ts-ast-viewer.com/](https://ts-ast-viewer.com/)
6. [https://mariodante.medium.com/unveiling-the-power-of-abstract-syntax-trees-ast-and-typescript-973321947bbc](https://mariodante.medium.com/unveiling-the-power-of-abstract-syntax-trees-ast-and-typescript-973321947bbc)
7. [https://www.satellytes.com/blog/post/typescript-ast-type-checker/](https://www.satellytes.com/blog/post/typescript-ast-type-checker/) | ramunarasinga |
1,895,270 | How-to Guide: Building Your First ASP.NET Core Web Application | Introduction ASP.NET Core is a powerful, open-source framework for building modern, cloud-based, and... | 0 | 2024-06-20T21:04:02 | https://dev.to/a-class/how-to-guide-building-your-first-aspnet-core-web-application-1h59 | **Introduction**
ASP.NET Core is a powerful, open-source framework for building modern, cloud-based, and internet-connected applications. Whether you're a beginner or an experienced developer, creating your first ASP.NET Core web application is a crucial step in understanding how to leverage the full potential of the .NET ecosystem. This guide will walk you through the process of building a basic web application using ASP.NET Core.
**Table of Contents**
1. Setting Up Your Development Environment
   - Installing .NET SDK
   - Installing Visual Studio
   - Creating a New Project
2. Understanding the Project Structure
   - Key Files and Folders
   - Configuration Files
3. Creating Your First Web Page
   - Adding a Controller
   - Creating a View
4. Implementing a Basic Model
   - Adding a Model Class
   - Using the Model in a Controller and View
5. Connecting to a Database
   - Configuring Entity Framework Core
   - Creating and Applying Migrations
6. Adding User Authentication
   - Setting Up Identity
   - Registering and Logging In Users
7. Deploying Your Application
   - Publishing Your Application
   - Deploying to IIS or Azure
8. Conclusion
**Setting Up Your Development Environment**
_Installing .NET SDK_
First, you need to install the .NET SDK. Visit the official .NET download page and download the SDK for your operating system. Follow the installation instructions provided on the website.
_Installing Visual Studio_
Next, install Visual Studio, which is a powerful integrated development environment (IDE) for .NET development. Download it from the Visual Studio website and select the ASP.NET and web development workload during installation.
_Creating a New Project_
1. Open Visual Studio.
2. Click on "Create a new project."
3. Select "ASP.NET Core Web Application" and click "Next."
4. Name your project, choose a location to save it, and click "Create."
5. Select ".NET Core" and "ASP.NET Core 5.0" (or the latest version), then choose the "Web Application (Model-View-Controller)" template and click "Create."
**Understanding the Project Structure**
_Key Files and Folders_
- wwwroot: Contains static files like CSS, JavaScript, and images.
- Controllers: Holds controller classes responsible for handling incoming requests and returning responses.
- Models: Contains classes that represent the data and business logic of the application.
- Views: Holds Razor view files that define the UI of the application.
- appsettings.json: Configuration file for application settings.
**Creating Your First Web Page**
_Adding a Controller_
1. Right-click on the "Controllers" folder.
2. Select "Add" > "Controller..."
3. Choose "MVC Controller - Empty" and click "Add."
4. Name your controller "HomeController."
_Creating a View_
1. Right-click inside the "Index" action method in HomeController.
2. Select "Add View..."
3. Name the view "Index" and click "Add."
4. Edit the generated Index.cshtml file to include some HTML content.
**Implementing a Basic Model**
_Adding a Model Class_
1. Right-click on the "Models" folder.
2. Select "Add" > "Class..."
3. Name the class "Product" and click "Add."
4. Define properties for the Product class, such as Id, Name, and Price.
_Using the Model in a Controller and View_
1. Modify HomeController to include a list of products.
2. Pass the product list to the Index view.
3. Update the Index.cshtml to display the products.
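A rough sketch of what the Product model and the HomeController change could look like (the sample product data here is illustrative, not part of the guide):

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

// The Product model with the properties named above.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

// HomeController builds a product list and passes it to the Index view.
public class HomeController : Controller
{
    public IActionResult Index()
    {
        var products = new List<Product>
        {
            new Product { Id = 1, Name = "Keyboard", Price = 49.99m },
            new Product { Id = 2, Name = "Mouse", Price = 19.99m }
        };
        return View(products);
    }
}
```

In the Index.cshtml view, declaring `@model List<Product>` at the top lets you loop over the products with `@foreach` and render each Name and Price.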
**Connecting to a Database**
_Configuring Entity Framework Core_
1. Install the Entity Framework Core NuGet packages.
2. Configure the database context in Startup.cs.
_Creating and Applying Migrations_
1. Create a new DbContext class.
2. Add a connection string to appsettings.json.
3. Use the Package Manager Console to add and apply migrations.
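The connection string added to appsettings.json lives under a `ConnectionStrings` section. A sketch (the LocalDB server and database name are placeholders you would adapt):

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=(localdb)\\mssqllocaldb;Database=MyFirstAppDb;Trusted_Connection=True;"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information"
    }
  }
}
```

With that in place, running `Add-Migration InitialCreate` followed by `Update-Database` in the Package Manager Console creates and applies the first migration.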
**Adding User Authentication**
_Setting Up Identity_
1. Install the ASP.NET Core Identity NuGet package.
2. Configure Identity services in Startup.cs.
_Registering and Logging In Users_
1. Scaffold Identity pages.
2. Update the layout to include login/logout links.
**Deploying Your Application**
_Publishing Your Application_
1. Right-click on the project and select "Publish."
2. Choose a publish target (e.g., Folder, IIS, Azure).
_Deploying to IIS or Azure_
1. Follow the instructions to deploy to your chosen target.
2. Verify that your application is running correctly in the deployed environment.
**Conclusion**
Building your first ASP.NET Core web application is an exciting journey that equips you with essential skills for web development. By following this guide, you have learned how to set up your development environment, create a basic web page, implement a model, connect to a database, add user authentication, and deploy your application. Continue exploring the vast capabilities of ASP.NET Core to build robust, high-performance web applications.
| a-class | |
1,895,269 | Generators in JavaScript | Generators in JavaScript are a powerful feature introduced in ECMAScript 6 (ES6) that allow you to... | 0 | 2024-06-20T21:02:24 | https://dev.to/francescoagati/generators-in-javascript-116 | Generators in JavaScript are a powerful feature introduced in ECMAScript 6 (ES6) that allow you to define iterative algorithms by pausing and resuming execution at defined points. They provide a flexible way to control iteration and asynchronous operations.
#### How Generators Work
Generators are defined using the `function*` syntax (note the asterisk). They use the `yield` keyword to pause execution and return values sequentially. When called, a generator function returns an iterator object that can be used to control the iteration.
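The pause-and-resume behavior can be seen in a minimal example before moving on to more practical ones:

```javascript
// Each yield pauses the generator; next() resumes it and
// returns an object of the form { value, done }.
function* counter() {
  yield 1;
  yield 2;
}

const it = counter();
console.log(it.next()); // { value: 1, done: false }
console.log(it.next()); // { value: 2, done: false }
console.log(it.next()); // { value: undefined, done: true }
```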
Let's explore some practical examples to understand how generators are used:
#### Example 1: Fibonacci Sequence Generator
```javascript
function* fibonacciGenerator() {
  let prev = 0;
  let curr = 1;
  yield prev;
  yield curr;
  while (true) {
    let next = prev + curr;
    yield next;
    prev = curr;
    curr = next;
  }
}

// Usage
const fibonacciSequence = fibonacciGenerator();
for (let i = 0; i < 10; i++) {
  console.log(fibonacciSequence.next().value); // Outputs: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34
}
```
In this example, `fibonacciGenerator` generates an infinite sequence of Fibonacci numbers, pausing after each `yield` statement until `next()` is called again.
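Because the sequence is infinite, a common companion is a small `take` helper (a name introduced here for illustration) that consumes only the first few values:

```javascript
function* fibonacciGenerator() {
  let prev = 0;
  let curr = 1;
  yield prev;
  yield curr;
  while (true) {
    const next = prev + curr;
    yield next;
    prev = curr;
    curr = next;
  }
}

// take: yield only the first n values of any iterable, then stop.
function* take(iterable, n) {
  let i = 0;
  for (const value of iterable) {
    if (i++ >= n) return;
    yield value;
  }
}

console.log([...take(fibonacciGenerator(), 6)]); // [0, 1, 1, 2, 3, 5]
```

Returning from `take` closes the inner generator, so the infinite loop never runs away.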
#### Example 2: Inorder Traversal of a Binary Tree
```javascript
class Node {
  constructor(value) {
    this.value = value;
    this.left = null;
    this.right = null;
  }
}

function* inorderTraversal(node) {
  if (node !== null) {
    yield* inorderTraversal(node.left); // Traverse left subtree
    yield node.value; // Yield current node's value
    yield* inorderTraversal(node.right); // Traverse right subtree
  }
}

// Usage
const rootNode = new Node(10);
rootNode.left = new Node(5);
rootNode.right = new Node(15);
rootNode.left.left = new Node(3);
rootNode.left.right = new Node(7);
rootNode.right.right = new Node(18);

const iterator = inorderTraversal(rootNode);
const result = [];
for (let value of iterator) {
  result.push(value);
}
console.log(result); // Outputs: [3, 5, 7, 10, 15, 18]
```
This generator function performs an inorder traversal of a binary tree, yielding values in the correct order.
#### Example 3: Custom Iterator Using Generators
```javascript
class Range {
  constructor(start, end) {
    this.start = start;
    this.end = end;
  }

  *[Symbol.iterator]() {
    for (let i = this.start; i <= this.end; i++) {
      yield i;
    }
  }
}

// Usage
const range = new Range(1, 5);
for (let num of range) {
  console.log(num); // Outputs: 1, 2, 3, 4, 5
}
```
Here, the `Range` class uses a generator to create an iterable range of numbers from `start` to `end`.
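Because `Range` implements `Symbol.iterator`, it also works anywhere iterables are accepted, such as the spread operator and array destructuring:

```javascript
class Range {
  constructor(start, end) {
    this.start = start;
    this.end = end;
  }

  *[Symbol.iterator]() {
    for (let i = this.start; i <= this.end; i++) {
      yield i;
    }
  }
}

// Spread collects every yielded value into an array.
console.log([...new Range(1, 5)]); // [1, 2, 3, 4, 5]

// Destructuring pulls values lazily, stopping after the second one.
const [first, second] = new Range(10, 20);
console.log(first, second); // 10 11
```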
#### Example 4: Asynchronous Control Flow
```javascript
function fetchDataFromAPI(endpoint) {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      const data = { endpoint, results: [1, 2, 3, 4, 5] };
      resolve(data);
    }, Math.random() * 1000);
  });
}

function* fetchAndProcessData() {
  try {
    const data1 = yield fetchDataFromAPI('/api/data1');
    console.log('Processed data1:', data1);
    const data2 = yield fetchDataFromAPI('/api/data2');
    console.log('Processed data2:', data2);
    const data3 = yield fetchDataFromAPI('/api/data3');
    console.log('Processed data3:', data3);
    console.log('All data processed!');
  } catch (error) {
    console.error('Error:', error);
  }
}

function runGenerator(generator) {
  const iterator = generator();

  function iterate(iteration) {
    if (iteration.done) {
      return iteration.value;
    }
    const promise = iteration.value;
    return promise.then((value) => iterate(iterator.next(value)))
      .catch((error) => iterator.throw(error));
  }

  return iterate(iterator.next());
}

// Usage
runGenerator(fetchAndProcessData);
```
This example demonstrates using a generator to manage asynchronous operations sequentially, yielding promises and processing their results.
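The two-way communication that `runGenerator` relies on, where the value passed to `next(value)` becomes the result of the paused `yield` expression, can be seen in isolation:

```javascript
function* greeter() {
  // Execution pauses here; the value passed to the next next() call
  // is assigned to `name` when the generator resumes.
  const name = yield "What is your name?";
  yield `Hello, ${name}!`;
}

const it = greeter();
console.log(it.next().value);      // "What is your name?"
console.log(it.next("Ada").value); // "Hello, Ada!"
```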
### Conclusion
Generators offer a flexible way to control iteration and manage asynchronous flows in JavaScript. By pausing execution with `yield`, they allow complex tasks to be handled in a more readable and sequential manner. Understanding generators enhances your ability to write efficient and expressive JavaScript code. | francescoagati | |
1,895,189 | Cellular Automata | I love procedural generation. As a hobbyist game developer, it is the concept and technique that I... | 0 | 2024-06-20T21:01:53 | https://excaliburjs.com/blog/Cellular%20Automata | gamedev, typescript, tutorial, opensource |
I love procedural generation. As a hobbyist game developer, it is the concept and technique that I keep reaching for in my games. This article is about Cellular Automata, which follows suit with my previous articles on other procedural generation strategies for game development. In my last article, we studied the [Wave Function Collapse](https://dev.to/excaliburjs/wave-function-collapse-d3c) algorithm. Staying within that topical thread of procedural algorithms which can be leveraged in game development, let's turn our focus to Cellular Automata.
## What is Cellular Automata
Cellular Automata, or CA for short, is an algorithm with some key potential benefits within the field of game development. You may have seen, in games such as Dwarf Fortress or Terraria, organic-looking generated caves or map patterns that look naturally grown. Essentially, the algorithm uses a grid-based data set and, for each discrete unit (cell) in that grid, uses the state of all its neighbors to determine that cell's end state in the simulation result.
## History of Cellular Automata
### Background
The early beginnings of the algorithm originated in the 1940s while scientists were studying crystal growth. That study, plus others including self-replicating robot experiments, led to the method of treating a system as a collection of discrete units (cells) and calculating their behavior based on the influence of each cell's neighbors. For more details on this: [Cellular Automata](https://en.wikipedia.org/wiki/Cellular_automaton)
### The Game of Life

In the 1970s, John Conway famously created a simulation called the [Game of Life](https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life). This very simple simulation, which had only four rules, created a very dynamic and varied set of results that bounced between apparent randomness and controlled order. The rules determined each cell's future state: dying due to underpopulation or overpopulation, creating a new living unit due to reproduction, or continuing to exist with the correct balance of population around that unit.
## Uses in Game Development
There are some common implementations of using Cellular Automata in game development. The classic trope is using the CA algorithm for generating tilemaps of organic looking areas or cave systems.
<img src="https://i.pinimg.com/564x/c5/af/69/c5af690b061e7de21ac002d78dbaeaf8.jpg" alt="cave system" style={{width: '250px', height: '250px'}}/>
Another application is simulating the spread of fire across an area. Brogue is a good example of how this can be used.
<img src="https://static.wikia.nocookie.net/procedural-content-generation/images/2/25/Brimstone.png" alt="cave system" style={{width: '450px', height: '250px'}}/>
Other applications include simulating gas expansion in an area, the spread of a virus, or enemy reproduction for generating new enemies.
## The Algorithm
To explain the CA algorithm, we will demonstrate with code snippets written in TypeScript using Excalibur.js, but this can be done in any language and framework of your choice.
### Initialization
We start with a grid of tiles that are randomly filled with ones and zeroes.
```ts
let tiles: number[] = new Array(49);

// define the blue and white tiles for the TileMap
export const blueTile = new Rectangle({ width: 16, height: 16, color: Color.fromRGB(0, 0, 255, 1) });
export const whiteTile = new Rectangle({ width: 16, height: 16, color: Color.fromRGB(255, 255, 255, 1) });

// Utilizing PerlinNoise plug-in for Excalibur
const generator = new PerlinGenerator({
  seed: Date.now(), // random seed
  octaves: 2,
  frequency: 24,
  amplitude: 0.91,
  persistance: 0.95,
});

// This uses the TileMap object from Excalibur
export const tmap = new TileMap({
  tileWidth: 16,
  tileHeight: 16,
  columns: 7,
  rows: 7,
});

// Using the Perlin Noise Field, fill the Tilemap and tiles array with data
let tileIndex = 0;
for (const tile of tmap.tiles) {
  const noise = generator.noise(tile.x / tmap.columns, tile.y / tmap.rows);
  if (noise > 0.5) {
    tiles[tileIndex] = 1;
    tile.addGraphic(blueTile);
  } else {
    tiles[tileIndex] = 0;
    tile.addGraphic(whiteTile);
  }
  tileIndex++;
}
```
The algorithm will have us walk through the grid tile by tile, and we will either leave the one or zero in place, or we will flip that value to the opposite, meaning a zero will become a one, and vice versa. The results of this assessment need to be kept in a new or cloned array, so as not to overwrite the starting array's values as you iterate over the tiles.
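If you don't have the Perlin plugin handy, a plain random fill also works to seed the grid (a minimal sketch using `Math.random()` in place of noise):

```ts
// Seed a width x height grid with random walls (1) and floors (0).
function randomGrid(width: number, height: number, wallChance = 0.5): number[] {
  return Array.from({ length: width * height }, () =>
    Math.random() < wallChance ? 1 : 0
  );
}

const seeded = randomGrid(7, 7);
console.log(seeded.length); // 49
```

Noise-based seeding tends to produce smoother starting clumps than a uniform random fill, which changes how quickly regions coalesce over CA iterations.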
### The Rules
The rules for flipping the values in each cell will depend on each implementation of the CA algorithm. These rules can vary; each implementation can be unique in this regard. This gives you some agency and control over how you want your simulation to run. I've tailored this function with the flexibility to pass in the rules on each iteration. The rules govern how to handle out-of-bounds indexes and what cutoff points are being used.
```ts
// Defining our CA function, passing in the grid, dimensions, and rules for OOB indexes and cutoff points
export function applyCellularAutomataRules(
  map: number[],
  width: number,
  height: number,
  oob: string | undefined,
  cutoff0: number | undefined,
  cutoff1: number | undefined
): number[] {
  const newMap = new Array(width * height).fill(0);
  let zeroLimit = 4;
  if (cutoff0) zeroLimit = cutoff0 + 1; // wallCount < cutoff0 + 1 is the same as wallCount <= cutoff0
  let oneLimit = 5;
  if (cutoff1) oneLimit = cutoff1; // this creates the greater-than-or-equal-to effect
  for (let i = 0; i < height * width; i++) {
    const wallCount = countAdjacentWalls(map, width, height, i, oob); // counts walls in neighbors
    if (map[i] === 1) {
      if (wallCount < zeroLimit) {
        newMap[i] = 0; // Change to floor if there are cutoff0 or fewer adjacent walls
      } else {
        newMap[i] = 1; // Remain wall
      }
    } else {
      if (wallCount >= oneLimit) {
        newMap[i] = 1; // Change to wall if there are cutoff1 or more adjacent walls
      } else {
        newMap[i] = 0; // Remain floor
      }
    }
  }
  return newMap;
}
```
To note, this approach to the CA algorithm is just one option, chosen for the sake of THIS article; other approaches can be implemented. Let's define our rules for the scope of this article.
- If the starting value for a tile is a zero, then to flip it to a one, the neighbors must have five or more ones surrounding the starting tile.
- If the starting value for a tile is a one, then to flip it to a zero, the neighbors must have three or fewer ones surrounding the starting tile.
- For tiles on the edges of the grid, which will not have 8 neighbors, out-of-bound regions will be treated as ones or 'walls'.
```ts
tiles = applyCellularAutomataRules(tiles, 7, 7, 'walls', 3, 5);
```
With these rules in place, which can be modified and tailored to your liking, we can use them to determine the next iteration of the grid by going tile by tile and setting the new grid's values based on each tile's neighbors.
### Counting Walls
For the rule on out-of-bound neighbors, you can use a variety of different approaches. You can treat them as constants; in this instance, we treat them as walls. You can have them treated as floors, which will change how your simulation runs, producing a more 'open' result. You can also have the out-of-bound tiles mirror the value of the starting tile, i.e., if your starting tile on the edge is a one, then out-of-bound tiles are all ones, and vice versa.
```ts
// This function takes in the grid and dims, which index is being inspected, and the rules on OOB tiles
function countAdjacentWalls(map: number[], width: number, height: number, index: number, oob: string | undefined): number {
  let count = 0;
  const y = Math.floor(index / width);
  const x = index % width;
  for (let i = -1; i <= 1; i++) {
    for (let j = -1; j <= 1; j++) {
      if (i === 0 && j === 0) continue;
      const newY = y + i;
      const newX = x + j;
      if (newY >= 0 && newY < height && newX >= 0 && newX < width) {
        const adjacentIndex = newY * width + newX;
        if (map[adjacentIndex] === 1) count++;
      } else {
        switch (oob) {
          // The 4 types of rules provided are for constant values (floor and wall), random, and mirror
          case "floor":
            break;
          case "wall":
            count++;
            break;
          case "random":
            let coinflip = Math.random();
            if (coinflip > 0.5) count++;
            break;
          case "mirror":
            if (map[index] === 1) count++;
            break;
          default:
            count++; // Perceive out of bounds as wall
            break;
        }
      }
    }
  }
  return count;
}
```
So, starting at the first tile of the grid, you will look at the eight neighbors of the tile; in this instance, five of them are out-of-bound indexes. You add up all the walls among the neighbors. Since the starting value is a zero, if the count is greater than or equal to five, you will place a one at index zero in the new grid/array. This is how you flip the values. If, for instance, there were fewer than five walls among the neighbors of this index, the value would have remained zero. You repeat this process for each tile in the grid/array.
### Redraw your tiles
At the end, when you have completely iterated over each tile, you will have a new grid of tiles that are now set to zeroes or ones, based on that starting array. You can use this new grid as a completed result, or you can re-run the same simulation using this new grid as your 'new' starting array of data.
```ts
// function that clears out the existing tilemap and redraws it based on the new returned tile array
function redrawTilemap(map: number[], tilemap: TileMap, game: Engine) {
  game.remove(game.currentScene.tileMaps[0]);
  let tileIndex = 0;
  for (const tile of tilemap.tiles) {
    const value = map[tileIndex];
    if (value === 1) {
      tile.addGraphic(blueTile);
    } else {
      tile.addGraphic(whiteTile);
    }
    tileIndex++;
  }
  game.add(tilemap);
}
```
## Walkthrough of the Algorithm
This walkthrough will simply use an array of numbers. With this array of numbers we will use a noise field, to represent random starting values, and then we will utilize the CA algorithm over multiple steps to highlight how it can be utilized.
### Starting Point
Let's start with an empty array of numbers. We will represent the flat array as a two-dimensional grid, with x and y coordinates. This is a 7 x 7 grid, which will be an array of forty-nine cells. As we process through the CA algorithm, we will be recording our results into a new array, so as not to overwrite the input array while we are iterating over the indexes.

For the CA algorithm, it is suggested to fill the initial array with random ones and zeroes. You can use a [Perlin noise](https://en.wikipedia.org/wiki/Perlin_noise) field, a [Simplex noise](https://en.wikipedia.org/wiki/Simplex_noise) field, or just your language's built-in random function to fill the field. Here is ours:

Now we start the process of looping through each index and either leaving the value alone or flipping it between 0 and 1 based on the values of the neighbors. For this simulation we treat out-of-bound indexes as walls.
### The first index

The first index of the array is the top left corner of the grid. This position is unique in the sense that this index only has three real neighbors. But as we mentioned before, out of bound (OOB) indexes will be treated as walls. If we count up each neighbor index, plus the OOB indexes, we get a value of seven. Since this count is higher than four, we will flip this index's value to one in the new array we are creating.

### Iterating
The second index of the array is a one. Now this index only has three OOB indexes that will count as walls.

This index only has one additional one among its neighbors, and when that's added to the three OOB index values, that puts our count at four. In the algorithm we are using today, a one is changed to a zero only if it has fewer than four walls as neighbors. With that, we will leave this one in place and insert this value in the new array.

We will follow this process for each index with the given rules below:
- If the original value is one in the starting index, to be set to zero in the new array, the neighbor values have to be less than four.
- If the original value is zero in the starting index, to be set to one in the new array, the neighbor values need to be five or higher.
Let's speed this process along a bit.

Finishing the first row.

Generating the 2nd row.

Generating the 3rd row.

Generating the 4th row.

Generating the 5th row.

Generating the 6th row.

Generating the Final row.
Now we have a completed array of new values. A favorable property of the CA algorithm is that you can reuse the algorithm on the new set of values to generate deeper levels of generation from the initial data set.
Let's run the simulation on this new data and see how it turns out.

So you see how numbers start to collect together to create natural, organic-looking regions of walls and floors. This is a particularly handy technique for generating cave shapes for tilemaps.
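The whole walkthrough can be condensed into a small, framework-free sketch of a single CA step, with out-of-bounds neighbors treated as walls and the 3/5 cutoffs from above hard-coded:

```ts
// One CA step: out-of-bounds neighbors count as walls;
// a wall (1) survives with 4+ wall neighbors, a floor (0) becomes a wall with 5+.
function caStep(map: number[], width: number, height: number): number[] {
  const next = new Array(width * height).fill(0);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let walls = 0;
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++) {
          if (dx === 0 && dy === 0) continue;
          const nx = x + dx;
          const ny = y + dy;
          if (nx < 0 || ny < 0 || nx >= width || ny >= height) walls++; // OOB = wall
          else if (map[ny * width + nx] === 1) walls++;
        }
      }
      const i = y * width + x;
      next[i] = map[i] === 1 ? (walls >= 4 ? 1 : 0) : (walls >= 5 ? 1 : 0);
    }
  }
  return next;
}

// A lone wall surrounded by floor erodes away, while the corners
// (hemmed in by out-of-bounds "walls") fill in.
console.log(caStep([0, 0, 0, 0, 1, 0, 0, 0, 0], 3, 3)); // [1, 0, 1, 0, 0, 0, 1, 0, 1]
```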
## Demo Application

[Link to Demo](https://mookie4242.itch.io/cellular-automata)
[Link to Repo](https://github.com/jyoung4242/CA-itchdemo)
The demo simply consists of a 36x36 tilemap of blue and white tiles. Blue tiles represent the walls, and white tiles represent the floor tiles. There are two buttons, one that resets the simulation, and the other button triggers the CA algorithm and uses the tilemap to demonstrate the results.
Also added to the demo is access to some of the variables that manipulate the simulation. We can now modify the behavior of the OOB indexes. For instance, instead of the default 'walls', you can change the sim to use a random setting, mirror the edge tile, or hold it constant at 'wall' or 'floor'.
You can also see what happens when you unbalance the trigger points. Above, we defined 3 and 5 as the trigger points for flipping a tile's state. You can modify those values and see the effect they have on the simulation.
The demo starts with a noise field generated by a plugin for Excalibur. Using a numbered array of ones and zeroes representing the 36x36 tilemap, we can feed this array into the CA function. You can repeatedly press the 'CA Generation Step' button to re-feed the same array into the algorithm and see the step-by-step iteration, and then reset to a new noise field to start over.
## Why Excalibur

Small Plug...
[ExcaliburJS](https://excaliburjs.com/) is a friendly, TypeScript 2D game engine that can produce games for the web. It is free and open source (FOSS), well documented, and has a growing, healthy community of gamedevs working with it and supporting each other. There is a great discord channel for it [HERE](https://discord.gg/ScX52wD4eM), for questions and inquiries. Check it out!!!
## Conclusions
So, what did we cover? We discussed the history of Cellular Automata and some generalized use cases for CA within the context of game development. We covered the implementation of the steps to take to perform the simulation on a grid of data, and then we conducted a walk through example of using the algorithm. Finally, we introduced a demo application hosted on itch, and shared the repository in case one is interested in the implementation of it.
This algorithm is one of the easier ones to implement, as the steps are not that complicated either in cognitive depth or in mathematical processing. It is one of my favorite simple tools, one I reach for especially for tilemap generation when I create levels. I urge you to give it a try and see what you can generate for yourself!
| jyoung4242 |
1,895,266 | t3twilio: Never Forget Again! | This is a submission for Twilio Challenge v24.06.12 What We Built Do you have someone in... | 0 | 2024-06-20T20:50:04 | https://dev.to/kanav_gupta/t3twilio-never-forget-again-20bl | devchallenge, twiliochallenge, ai, twilio | *This is a submission for [Twilio Challenge v24.06.12](https://dev.to/challenges/twilio)*
## What We Built
Do you have someone in your family who suffers from dementia? Or are you just someone who forgets to do their tasks? My teammate, Dhruv Bansal, and I developed a **Notion** extension designed to enhance task management through the integration of Twilio and AI. Dhruv’s grandpa suffers from dementia and often forgets to take his meds. So, we thought this would be the perfect tool for someone like him. Now, Dhruv can set reminders from his own phone for his grandpa!
This tool not only reminds you to complete your tasks via calls and emails but also allows you to set tasks by simply calling a designated number. Our AI capabilities streamline the user experience by generating call prompts, extracting task details from user calls, and predicting the time required for tasks in order to schedule follow-up calls. This project was designed with people who suffer from dementia in mind. They often forget to keep track of tasks, and now anyone in their family can set reminders for them!
## Demo
{% embed https://youtu.be/ouK9KJt6ai8 %}
Our Website



## Twilio and AI
We leveraged Twilio's powerful communication APIs to seamlessly integrate calling and email functionalities into our Notion extension. Here's how Twilio and AI (via the Cloudflare Workers AI API) were utilized:
* **Call Prompts**: AI-generated prompts ensure that the calls made to users are engaging and clear, improving user interaction and ensuring that reminders are effective.
* **Task Extraction**: When a user calls to set a task, our AI extracts relevant information from the call, such as task details, due dates, and descriptions. This makes task creation hands-free and efficient.
* **Time Prediction**: Based on the nature of the task, our AI predicts how much time it should take the user to complete it. This information is used to schedule follow-up calls, ensuring timely reminders and better task management.
Wondering what we used to build this?
- [Twilio Voice and Email (Sendgrid) API](https://twilio.com/)
- [Cloudflare Workers AI](https://developers.cloudflare.com/workers-ai/)
- [Notion Javascript and Python SDK](https://developers.notion.com/)
- [Create T3 App](https://create.t3.gg/)
- [Python FastAPI](https://fastapi.tiangolo.com/)
{% embed https://github.com/dhruvbansal26/t3twilio %}
## Additional Prize Categories
Our submission qualifies for the following additional prize categories:
* **Twilio Times Two**: We utilized both Twilio's calling and email APIs to enhance our Notion extension.
* **Impactful Innovators**: This project is particularly beneficial for people suffering from dementia, and even the elderly, by providing them and their families with a simple and effective way to manage their tasks.
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
Teammate that helped me build this > @dhruvb26
## Updates
We plan on further expanding the integration capabilities using all that Twilio has to offer. Check out our roadmap on the GitHub page and consider leaving a ⭐️ on the repo if you liked the project and our idea.
<!-- Don't forget to add a cover image (if you want). -->
| kanav_gupta |
1,895,265 | Good Mornning em1fg | this onhg | 0 | 2024-06-20T20:40:08 | https://dev.to/ishaan_singhal_f3b6b687f3/good-mornning-em1fg-3m0b | this onh<img src="https://res.cloudinary.com/dlnuvrqki/image/upload/v1718914019/ix3ffqo20mrw34uasdsd.png" alt="Editor Media" class="mt-4 max-w-xs h-auto mx-auto" style="max-width: 30%;"><div>g</div> | ishaan_singhal_f3b6b687f3 | |
1,895,264 | 🧠 Neural Networks Explained | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. Neural... | 0 | 2024-06-20T20:37:28 | https://dev.to/aviralgarg05/neural-networks-explained-2c0p | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
Neural Networks 🧠 are like brain-inspired systems! They have layers of nodes (neurons) 🔗 that connect and learn patterns from data 📊. By adjusting weights ⚖️ during training, they get better at tasks like recognizing images 🖼️ or understanding language 🗣️.
| aviralgarg05 |
1,895,263 | Python Decorators: Simplified Explanation | Python decorators are a powerful feature that allows you to modify or extend the behavior of... | 0 | 2024-06-20T20:36:25 | https://dev.to/francescoagati/python-decorators-simplified-explanation-hm1 | python, decorators | Python decorators are a powerful feature that allows you to modify or extend the behavior of functions or methods without changing their actual code. Let’s explore how decorators work with some simple examples.
#### Example 1: Logging Decorator
The logging decorator adds functionality to log information about when a function is called and what it returns.
```python
def logger(func):
def wrapper(*args, **kwargs):
print(f'Calling {func.__name__} with args={args}, kwargs={kwargs}')
result = func(*args, **kwargs)
print(f'{func.__name__} returned {result}')
return result
return wrapper
```
When you decorate a function with `@logger`, it prints messages before and after calling the function, showing its arguments and return value.
#### Example 2: Timing Decorator
The timing decorator measures how much time a function takes to execute.
```python
import time
def timer(func):
def wrapper(*args, **kwargs):
start_time = time.time()
result = func(*args, **kwargs)
end_time = time.time()
print(f'{func.__name__} took {end_time - start_time} seconds to execute')
return result
return wrapper
```
With `@timer` applied to a function, it calculates and prints the execution time in seconds.
#### Example 3: Authentication Decorator
The authentication decorator restricts access to a function based on user login status.
```python
logged_in_users = ['alice', 'bob']
def authenticate(func):
def wrapper(username, *args, **kwargs):
if username in logged_in_users:
return func(username, *args, **kwargs)
else:
raise PermissionError(f'User {username} is not logged in')
return wrapper
```
When decorated with `@authenticate`, the function can only be accessed by users in `logged_in_users`.
#### Example 4: Memoization Decorator
The memoization decorator caches results of a function to optimize performance for repeated calls with the same arguments.
```python
def memoize(func):
cache = {}
def wrapper(*args):
if args in cache:
return cache[args]
else:
result = func(*args)
cache[args] = result
return result
return wrapper
```
Functions decorated with `@memoize` store computed results, returning cached results for identical arguments to avoid redundant calculations.
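One refinement worth knowing (not used in the examples above): because each decorator replaces the original function with `wrapper`, attributes like `__name__` and `__doc__` end up pointing at the wrapper. The standard library's `functools.wraps` fixes this — a minimal sketch:

```python
import functools

def logger(func):
    @functools.wraps(func)  # copies __name__, __doc__, etc. from func onto wrapper
    def wrapper(*args, **kwargs):
        print(f'Calling {func.__name__}')
        return func(*args, **kwargs)
    return wrapper

@logger
def add(a, b):
    """Add two numbers."""
    return a + b

print(add.__name__)  # add  (would print "wrapper" without functools.wraps)
print(add.__doc__)   # Add two numbers.
```

This matters whenever you introspect or stack decorated functions: without `functools.wraps`, every decorated function reports itself as `wrapper`, which makes debugging harder.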
#### Using Decorators
```python
@logger
def add(a, b):
return a + b
@timer
def fibonacci(n):
if n <= 1:
return n
else:
return fibonacci(n-1) + fibonacci(n-2)
@authenticate
def protected_function(username, message):
return f'{username}: {message}'
@memoize
def factorial(n):
if n == 0 or n == 1:
return 1
else:
return n * factorial(n-1)
# Testing each decorated function
print(add(3, 5)) # Output: Calling add with args=(3, 5), kwargs={}, add returned 8, 8
print(fibonacci(10)) # Note: prints a timing line for every recursive call, since fibonacci now refers to the timed wrapper; final result is 55
print(protected_function('alice', 'Hello!')) # Output: alice: Hello!
try:
print(protected_function('eve', 'Hi!')) # Raises PermissionError
except PermissionError as e:
print(e)
print(factorial(5)) # Output: 120
print(factorial(3)) # Output: 6
```
Each decorator adds specific functionality to the decorated functions:
- `@logger` logs function calls and returns.
- `@timer` measures execution time.
- `@authenticate` restricts function access based on user login.
- `@memoize` caches function results to enhance performance.
Python decorators are versatile tools for adding cross-cutting concerns to functions, promoting code reuse and enhancing readability. They are widely used in frameworks and libraries to simplify and extend functionality without modifying core code. | francescoagati |
1,895,262 | Dapper Stored Procedure tip | Introduction Dapper is a simple object mapper for .NET data access which uses Microsoft... | 25,270 | 2024-06-20T20:35:03 | https://dev.to/karenpayneoregon/dapper-stored-procedure-tip-13j4 | csharp, database, codenewbie | ## Introduction
Dapper is a simple object mapper for .NET data access that uses Microsoft classes under the covers; it has been covered in the following article [Using Dapper - C# Part 1](https://dev.to/karenpayneoregon/working-with-dapper-in-c-5kd), which is part of a series on Dapper.
Recently there has been a minor change in how stored procedures are called.
Previously, calling a stored procedure with Dapper required specifying the command type, as shown below with **commandType: CommandType.StoredProcedure**.
```csharp
private static async Task GetAllEmployees()
{
await using SqlConnection cn = new(DataConnections.Instance.MainConnection);
// get employees via a stored procedure
var employees =
(
await cn.QueryAsync<Employee>("usp_GetAllEmployees",
commandType: CommandType.StoredProcedure)
)
.AsList();
}
```
Now a developer has a little less code to write, as the command type is no longer required.
```csharp
private static async Task GetAllEmployees()
{
await using SqlConnection cn = new(DataConnections.Instance.MainConnection);
// get employees via a stored procedure
var employees =
(
await cn.QueryAsync<Employee>("usp_GetAllEmployees")
)
.AsList();
}
```
## Code
To try out the above clone the following repository.
{% cta https://github.com/karenpayneoregon/sql-basics/tree/master/DapperStoredProcedures1 %} Sample project {% endcta %}
1. Under LocalDb create a database named DapperStoredProcedures
1. Run Scripts\populate.sql
1. Run the project
- GetAllEmployees method returns all records
- GetEmployeeByGender method returns records by gender using an enum.
> **Note**
> Since Dapper does not handle DateOnly the following package [kp.Dapper.Handlers](https://www.nuget.org/packages/kp.Dapper.Handlers/1.0.0?_src=template) is used.
```csharp
using Dapper;
using DapperStoredProcedures1.Classes;
using DapperStoredProcedures1.Models;
using Dumpify;
using kp.Dapper.Handlers;
using Microsoft.Data.SqlClient;
namespace DapperStoredProcedures1;
internal partial class Program
{
static async Task Main(string[] args)
{
await Setup();
// Allows Dapper to handle DateOnly types
SqlMapper.AddTypeHandler(new SqlDateOnlyTypeHandler());
await GetAllEmployees();
Console.WriteLine();
await GetEmployeeByGender();
ExitPrompt();
}
private static async Task GetEmployeeByGender()
{
AnsiConsole.MarkupLine("[cyan]Female employees[/]");
await using SqlConnection cn = new(DataConnections.Instance.MainConnection);
// get employees via a stored procedure
var employees =
(
await cn.QueryAsync<Employee>("usp_GetEmployeeByGender",
param: new { GenderId = Genders.Female })
)
.AsList();
// Nicely display the results from the stored procedure
employees.Dump();
}
private static async Task GetAllEmployees()
{
AnsiConsole.MarkupLine("[cyan]All employees[/]");
await using SqlConnection cn = new(DataConnections.Instance.MainConnection);
// get employees via a stored procedure
var employees =
(
await cn.QueryAsync<Employee>("usp_GetAllEmployees")
)
.AsList();
// Nicely display the results from the stored procedure
employees.Dump();
}
}
```
## Summary
Now a developer has just a little less code to write when working with Dapper and stored procedures. If for some reason this does not work, report this to the Dapper team [here](https://github.com/DapperLib/Dapper/issues).
Also, although the code provided uses SQL Server, this will work with any data provider that supports stored procedures.
| karenpayneoregon |
1,895,170 | State in React | React has many great features to it that help create javascript. One way to impact the DOM is through... | 0 | 2024-06-20T20:25:29 | https://dev.to/spencer_adler_880da14d230/state-in-react-4pmg | react, state | React has many great features that help you build dynamic applications with JavaScript. One way to impact the DOM is through state. React makes what would take many steps in plain JavaScript to update the DOM much easier, with fewer steps and simpler syntax. A unique feature of state is that every time it is updated, the page re-renders.
State is dynamic data in a component. It can change over time as the user interacts with the application, and all of those changes are reflected in the webpage as they happen.
When you need to update something shown on the page, update state instead of a plain variable. Otherwise the variable may update, but the page will not re-render, and the new value will never be displayed.
Using state in React is a multi-step process. First, import the React hook:

```jsx
import { useState } from "react";
```

This gives you access to React's internals and everything required to update state.
Then you need to create a state variable. An example of this is:

```jsx
const [count, setCount] = useState(0);
```

Here you are destructuring the state array into the `count` variable and its setter `setCount`, and setting the initial value of `count` to 0, since that is what is passed into the parentheses of `useState`.
You can then create a function to update state; every time it runs, the `count` variable is updated and the page re-renders to show the new count. See the full example below — the `increment` function is the last piece of the puzzle discussed above.

```jsx
import { useState } from "react";

function Counter() {
  const [count, setCount] = useState(0);

  function increment() {
    setCount(count + 1);
  }

  return <button onClick={increment}>{count}</button>;
}
```
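One follow-up worth knowing: `setCount` also accepts a function. Below is a plain-JavaScript sketch (the `simulate` helper is hypothetical, NOT real React internals) of why `setCount(c => c + 1)` is safer than `setCount(count + 1)` when you update state more than once before a re-render:

```javascript
// Hypothetical mini-model of how queued state updates are applied.
function simulate(initial, updates) {
  let state = initial;
  for (const u of updates) {
    // Functional updates receive the latest value;
    // plain values simply overwrite whatever came before.
    state = typeof u === "function" ? u(state) : u;
  }
  return state;
}

const count = 0; // the stale value the component "sees" during this render

// Two plain updates, both computed from the same stale count:
const plain = simulate(count, [count + 1, count + 1]);

// Two functional updates, each receiving the latest value:
const fnForm = simulate(count, [(c) => c + 1, (c) => c + 1]);

console.log(plain); // 1
console.log(fnForm); // 2
```

With the updater form, `increment` would read `setCount(c => c + 1)`.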
| spencer_adler_880da14d230 |
1,895,261 | Top 5 Resources to Get Started in Web Developing | Are you ready to dive into web development but not sure where to start? I'll guide you through the... | 0 | 2024-06-20T20:24:41 | https://dev.to/codebyten/top-5-resources-to-get-started-in-web-developing-4o7d | webdev, beginners, tips | Are you ready to dive into web development but not sure where to start? I'll guide you through the top 5 essential resources that will kickstart your journey in web development. Whether you're a beginner or looking to enhance your skills, these resources are indispensable.
1.) Mozilla Web Docs: Start with Mozilla Web Docs for foundational knowledge and reference material on essential web development concepts like HTML, CSS, and JavaScript. [https://developer.mozilla.org/en-US/]
2.) Codecademy: Explore Codecademy, an interactive platform offering courses and hands-on practice to sharpen your coding skills in a fun and engaging way. [https://www.codecademy.com/]
3.) r/webdev Subreddit: Join the r/webdev subreddit to connect with a vibrant community of developers. Share ideas, ask questions, and learn from fellow enthusiasts in the field. [https://www.reddit.com/r/webdev/]
4.) CodePen: Use CodePen as your coding playground to experiment with HTML, CSS, and JavaScript in a real-time environment. Test your ideas and showcase your projects effortlessly. [https://codepen.io/]
5.) "The Pragmatic Programmer" Book: Dive deeper into programming principles with "The Pragmatic Programmer" book. Gain insights beyond coding exercises and elevate your understanding of software development. [https://www.amazon.com/Pragmatic-Programmer-journey-mastery-Anniversary/dp/0135957052/ref=pd_lpo_sccl_1/133-8745734-4655528?pd_rd_w=XEuNQ&content-id=amzn1.sym.4c8c52db-06f8-4e42-8e56-912796f2ea6c&pf_rd_p=4c8c52db-06f8-4e42-8e56-912796f2ea6c&pf_rd_r=8Z455CC4AA3JJKYFTVYG&pd_rd_wg=XkrTN&pd_rd_r=66a945e4-f53d-4697-a752-a0763fd8be37&pd_rd_i=0135957052&psc=1]
Whether you're aiming to build websites, apps, or simply expand your coding knowledge, these resources will equip you with the tools and community support needed to succeed. Start your web development journey today with these top 5 resources and discover the endless possibilities of coding. Happy coding!
Follow me on Instagram, X, TikTok, and Youtube @CodeByTEN | codebyten |
1,895,236 | Simplifying Python Code with List Comprehensions | Python's list comprehensions provide a concise and readable way to perform operations that... | 0 | 2024-06-20T20:21:36 | https://dev.to/francescoagati/simplifying-python-code-with-list-comprehensions-2ci0 | python | Python's list comprehensions provide a concise and readable way to perform operations that traditionally use `map`, `filter`, and `zip`. These functional programming tools streamline common tasks on arrays (or lists) while maintaining clarity and efficiency.
#### Using List Comprehensions Instead of `map`
The `map` function applies a specified function to each item in an iterable and returns a new iterable with the transformed items. Here's how you can achieve the same result with list comprehensions:
```python
# Original using map
arr_map = [1, 2, 3, 4, 5]
doubled_arr_map = list(map(lambda x: x * 2, arr_map))
# Equivalent list comprehension
doubled_arr_lc = [x * 2 for x in arr_map]
# Output and comparison
print("Using map:")
print(" Map:", doubled_arr_map) # Output: [2, 4, 6, 8, 10]
print(" LC: ", doubled_arr_lc) # Output: [2, 4, 6, 8, 10]
print(" Equal?", doubled_arr_map == doubled_arr_lc) # Output: True
```
#### Using List Comprehensions Instead of `filter`
The `filter` function constructs an iterator from elements of an iterable for which a function returns true. List comprehensions provide a cleaner alternative:
```python
# Original using filter
arr_filter = [1, 2, 3, 4, 5, 6, 7, 8, 9]
even_arr_filter = list(filter(lambda x: x % 2 == 0, arr_filter))
# Equivalent list comprehension
even_arr_lc = [x for x in arr_filter if x % 2 == 0]
# Output and comparison
print("\nUsing filter:")
print(" Filter:", even_arr_filter) # Output: [2, 4, 6, 8]
print(" LC: ", even_arr_lc) # Output: [2, 4, 6, 8]
print(" Equal?", even_arr_filter == even_arr_lc) # Output: True
```
#### Using List Comprehensions Instead of `zip`
The `zip` function pairs elements from multiple iterables (arrays) into tuples. List comprehensions simplify this operation as well:
```python
# Original using zip
arr1_zip = [1, 2, 3]
arr2_zip = ['a', 'b', 'c']
zipped_arr_zip = list(zip(arr1_zip, arr2_zip))
# Equivalent list comprehension
zipped_arr_lc = [(x, y) for x, y in zip(arr1_zip, arr2_zip)]
# Output and comparison
print("\nUsing zip:")
print(" Zip:", zipped_arr_zip) # Output: [(1, 'a'), (2, 'b'), (3, 'c')]
print(" LC: ", zipped_arr_lc) # Output: [(1, 'a'), (2, 'b'), (3, 'c')]
print(" Equal?", zipped_arr_zip == zipped_arr_lc) # Output: True
```
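The two approaches also compose: a single comprehension can replace a `map` nested over a `filter`, which often reads more clearly:

```python
arr = [1, 2, 3, 4, 5, 6, 7, 8, 9]

# map over filter, functional style
combined_functional = list(map(lambda x: x * 2, filter(lambda x: x % 2 == 0, arr)))

# the same operation as one comprehension
combined_lc = [x * 2 for x in arr if x % 2 == 0]

# Output and comparison
print("Combined:", combined_functional)  # Output: [4, 8, 12, 16]
print("LC:      ", combined_lc)          # Output: [4, 8, 12, 16]
print("Equal?", combined_functional == combined_lc)  # Output: True
```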
### Conclusion
List comprehensions in Python offer a more Pythonic and concise approach to manipulating lists compared to traditional functional programming methods like `map`, `filter`, and `zip`. They enhance code readability and maintainability while reducing the need for lambda functions and explicit looping constructs. By embracing list comprehensions, developers can write cleaner and more efficient Python code for array manipulation. | francescoagati |
1,900,493 | How to Build a Supergraph using Snowflake, Neon PostgreSQL, and Hasura in Five Steps | By combining Hasura Cloud, Snowflake DB, and PostgreSQL on Neon, you can create a powerful supergraph backend to handle complex data tasks such as joining data across multiple data sources and filtering data on model relations. | 0 | 2024-06-20T20:17:00 | https://hasura.io/blog/building-a-supergraph-backend-with-hasura | graphql, snowflake, postgres, hasura | ---
title: How to Build a Supergraph using Snowflake, Neon PostgreSQL, and Hasura in Five Steps
published: true
description: By combining Hasura Cloud, Snowflake DB, and PostgreSQL on Neon, you can create a powerful supergraph backend to handle complex data tasks such as joining data across multiple data sources and filtering data on model relations.
tags: graphql,snowflake,postgresql,hasura
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fkv89aae6oemdh5oquz2.png
# Use a ratio of 100:42 for best results.
published_at: 2024-06-20 20:17 +0000
canonical_url: https://hasura.io/blog/building-a-supergraph-backend-with-hasura
---
By combining Hasura Cloud, Snowflake DB, and PostgreSQL on Neon, you can create a powerful supergraph backend to handle complex data tasks such as joining data across multiple data sources and filtering data on model relations. It's a supergraph framework with modular, unified layers that is continuously evolving and deployed by many companies worldwide.

This tutorial will guide you through the creation of an application backend that leverages the powerful features of Snowflake and PostgreSQL databases. I will show how to integrate tables from both Snowflake and PostgreSQL into your supergraphs, allowing you to perform sophisticated, nested queries that connect models from diverse databases. This approach not only increases your backend's capabilities but also streamlines the data handling process for more complex data relationship management across different sources.
## 1. Setting up Hasura Cloud
Start with Hasura Cloud to quickly create GraphQL APIs.
- Sign up for a free [Hasura Cloud account](https://cloud.hasura.io/signup?pg=product&plcmt=body&cta=get-started-for-free&tech=default).
- Create a new project and connect it to your version control system (e.g., GitHub).
- Define your data model using Hasura's metadata.
- Run some exciting queries

## 2. Integrating Snowflake DB
Snowflake is a cloud-based data warehousing platform that [seamlessly integrates with Hasura Cloud](https://hasura.io/graphql/database/snowflake). To integrate Snowflake:
- Obtain your Snowflake credentials (account, username, password).

- In Hasura Cloud, navigate to the "Data" tab and add a new data source.
- Select "Snowflake" and input your credentials.
- “Import all tables” to add all the tables as models to the supergraph

## 3. Incorporate PostgreSQL on Neon
[Neon](https://hasura.io/docs/latest/databases/postgres/neon/) is a managed platform for PostgreSQL databases. To incorporate PostgreSQL on Neon into your supergraph backend:
- Sign up for Neon and create a new PostgreSQL database instance.

- Obtain the connection details for your Neon PostgreSQL database.
- In Hasura Cloud, add another data source, but this time, select "Neon Database" and authorize the Neon database connection details.

- Add the models and relationships you want to track in the data tab.
## 4. Executing individual queries
Now that you have Snowflake and PostgreSQL on Neon integrated into your Hasura Cloud project, you can execute queries individually.
### Querying Snowflake
To query data from Snowflake, you can use Hasura's GraphQL API:
```graphql
query MyQuery {
SNOWFLAKETABLE {
COLUMN1
COLUMN2
}
}
```
Replace snowflakeTable, column1, and column2 with your Snowflake table and columns.

### Querying PostgreSQL on Neon
Similarly, to query data from your Neon PostgreSQL database:
```graphql
query MyQuery {
neontable {
column1
column2
column3
}
}
```
Replace neonTable, column1, and column2 with your Neon table and columns.

## 5. Nested queries for complex data retrieval
GraphQL has the ability to handle complex data retrieval through nested queries. Here’s an example:
```graphql
query {
getUser(userId: 123) {
username
email
posts {
title
content
}
}
}
```
In this query:
- We retrieve user details by their ID from either Snowflake or PostgreSQL.
- Then, we fetch the user's posts with their titles and content.
By defining appropriate relationships and foreign keys in Hasura's metadata for both Snowflake and PostgreSQL, you can perform nested queries like the one above, aggregating data from multiple sources into a single response.
## Conclusion
Continue to explore and experiment with Hasura Cloud, Snowflake DB, and PostgreSQL on Neon. Discover how to efficiently integrate multiple data sources and execute both individual and nested queries. This will increase performance and scalability across a wide range of applications.
| praveenweb |
1,895,175 | FunnyQuotes | This is a submission for the Twilio Challenge What I Built FunnyQuotes is a site hosted... | 0 | 2024-06-20T20:16:58 | https://dev.to/diegocardoso93/funnyquotes-3gko | devchallenge, twiliochallenge, ai, twilio | *This is a submission for the [Twilio Challenge ](https://dev.to/challenges/twilio)*
## What I Built
FunnyQuotes is a site hosted on Cloudflare. Every day, five quotes and five images are generated by AI on the edge by Cloudflare Workers. The AI models used are `@cf/meta/llama-3-8b-instruct` to generate the text and `@cf/stabilityai/stable-diffusion-xl-base-1.0` to generate the images.
## Demo
https://funnyquotes.pages.dev/
## Twilio and AI
FunnyQuotes uses the Twilio API to send SMS messages to subscribed phones directly from a Cloudflare Worker. The user phones, generated images, and quotes are saved in Cloudflare KV storage.
## Additional Prize Categories
<!-- Does your submission qualify for any additional prize categories (Twilio Times Two, Impactful Innovators, Entertaining Endeavors)? Please list all that apply. -->
Entertaining Endeavors category.
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
Team Submissions:
Diego Cardoso (diegocardoso93)
Take a look at the full project code `_worker.js`:
```
import { Buffer } from 'node:buffer';
var CLOUDFLARE_ACCOUNT_ID;
var CLOUDFLARE_BEARER_TOKEN;
var TWILIO_ACCOUNT_SID;
var TWILIO_AUTH_TOKEN;
var TWILIO_PHONE_NUMBER;
export default {
async fetch(request, env) {
CLOUDFLARE_ACCOUNT_ID = env.CLOUDFLARE_ACCOUNT_ID;
CLOUDFLARE_BEARER_TOKEN = env.CLOUDFLARE_BEARER_TOKEN;
TWILIO_ACCOUNT_SID = env.TWILIO_ACCOUNT_SID;
TWILIO_AUTH_TOKEN = env.TWILIO_AUTH_TOKEN;
TWILIO_PHONE_NUMBER = env.TWILIO_PHONE_NUMBER;
const url = new URL(request.url);
if (url.pathname.startsWith('/generate')) {
const phrases = (await generatePhrases()).split('\n').filter(s => s[0]>0 && s[0]<10);
for (const i in phrases) {
await env.FUNNYQUOTES.put(`IMG${i}`, `data:image/png;base64,${new Buffer(await generateImage(phrases[i])).toString('base64')}`);
await env.FUNNYQUOTES.put(`QUOTE${i}`, phrases[i]);
}
return new Response('OK');
}
if (url.pathname.startsWith('/notify')) {
const phones = JSON.parse(await env.FUNNYQUOTES.get(`PHONES`) || '[]');
let ret = [];
for (const phone of phones) {
ret.push(await sendSMS(`Funny Quotes: daily feed. https://funnyquotes.pages.dev`, phone));
}
return new Response(JSON.stringify(ret));
}
if (url.pathname.startsWith('/subscribe')) {
const phone = url.searchParams.get('phone') || '';
if (phone.length < 6) { return; }
const phones = JSON.parse(await env.FUNNYQUOTES.get(`PHONES`) || '[]');
if (!phones.includes(phone)) {
phones.push(phone);
}
await env.FUNNYQUOTES.put(`PHONES`, JSON.stringify(phones));
return new Response(JSON.stringify({success: 1}));
}
const img = url.searchParams.get('img') || '';
if (img) {
return new Response(JSON.stringify({
img: await env.FUNNYQUOTES.get(`IMG${img}`),
quote: await env.FUNNYQUOTES.get(`QUOTE${img}`)
}), { headers: { 'Content-Type': 'text/json' } });
}
return new Response(getTemplate(), { headers: { 'Content-Type': 'text/html' } });
},
}
async function sendSMS(message, phone) {
const url = `https://api.twilio.com/2010-04-01/Accounts/${TWILIO_ACCOUNT_SID}/Messages.json`;
const params = new URLSearchParams({
To: phone,
From: TWILIO_PHONE_NUMBER,
Body: message,
});
const response = await fetch(url, {
method: 'POST',
headers: {
Authorization: `Basic ${Buffer.from(`${TWILIO_ACCOUNT_SID}:${TWILIO_AUTH_TOKEN}`).toString('base64')}`,
'Content-Type': 'application/x-www-form-urlencoded',
},
body: params,
});
return response.json();
}
async function generatePhrases(q) {
const { result } = await run("@cf/meta/llama-3-8b-instruct", {
messages: [
{ role: "system", content: "language: en-us" },
{
role: "user",
content: q || `generate 5 funny motivational phrases for life`,
},
],
});
return result.response;
}
async function generateImage(q) {
return await run("@cf/stabilityai/stable-diffusion-xl-base-1.0", {
prompt: `no text. oil painted style. "${q || 'random'}"`
}, 'arrayBuffer');
}
async function run(model, input, returnType) {
const response = await fetch(
`https://api.cloudflare.com/client/v4/accounts/${CLOUDFLARE_ACCOUNT_ID}/ai/run/${model}`,
{
headers: { Authorization: `Bearer ${CLOUDFLARE_BEARER_TOKEN}` },
method: "POST",
body: JSON.stringify(input),
}
);
let result;
if (returnType == 'arrayBuffer') {
result = await response.arrayBuffer();
} else {
result = await response.json();
}
return result;
}
function getTemplate() {
const TEMPLATE = `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Funny Quotes</title>
<style>
html, body, * {
margin: 0;
box-sizing: content-box;
}
.top-fixed {
position: fixed;
background: white;
padding: 16px 0;
width: 100%;
margin: auto;
text-align: center;
border-bottom: 1px solid #ccc;
}
.top-fixed h1 {
font-size: 26px;
}
.bottom-fixed {
position: fixed;
bottom: 0;
width: 100%;
margin: auto;
text-align: center;
background: white;
padding: 6px 0 10px 0;
border-top: 1px solid #ccc;
}
.bottom-fixed p {
padding-bottom: 4px;
}
.bottom-fixed input, .bottom-fixed button {
height: 26px;
}
.container {
display: flex;
flex-direction: column;
align-items: center;
padding: 70px 0;
}
.card {
border-radius: 3px;
border: 1px solid #ccc;
margin: auto;
max-width: 512px;
padding: 10px;
margin: 10px;
}
.card img {
width: 100%;
}
.card p {
padding-top: 10px;
font-size: 24px;
text-align: center;
}
</style>
</head>
<body>
<div class="top-fixed">
<h1 id="title"></h1>
</div>
<div class="container">
<div id="card0" class="card">Loading...</div>
<div id="card1" class="card">Loading...</div>
<div id="card2" class="card">Loading...</div>
<div id="card3" class="card">Loading...</div>
<div id="card4" class="card">Loading...</div>
</div>
<div class="bottom-fixed">
<p>Subscribe to our daily news quotes notification</p>
<input id="phone" type="text" placeholder="Enter your phone number..."></input>
<button onclick="subscribe()">subscribe</button>
</div>
<script>
window.addEventListener('load', async function (event) {
document.querySelector('#title').innerText = 'Funny Quotes - ' + new Date().toLocaleDateString('en-US');
for (let i=0;i<5;i++){
const response = await fetch('?img='+i);
const json = await response.json();
document.querySelector('#card'+i).innerHTML = '<img src="'+json.img+'" /><p>'+json.quote+'</p>';
}
});
async function subscribe() {
const phone = document.querySelector('#phone');
if (!phone.value || phone.value.length < 6) {
alert('please enter your phone number');
return;
}
const response = await fetch('/subscribe/?phone='+encodeURIComponent(phone.value));
const json = await response.json();
if (json.success) {
alert('You are now subscribed.');
phone.value = '';
}
}
</script>
</body>
</html>
`;
return TEMPLATE;
}
``` | diegocardoso93 |
1,886,441 | DEV Computer Science Challenge | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 27,182 | 2024-06-20T20:15:09 | https://dev.to/jarvisscript/dev-computer-science-challenge-2n8g | devchallenge, cschallenge, computerscience, beginners | _This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer._
## Explainer
Git is version control software that lets developers save versions of their code, share code, and collaborate with others. It's a sort of time travel for going back and recovering previous work: Git lets devs restore saved work and return to coding.
## Additional Context
<!-- Please share any additional context you think the judges should take into consideration as it relates to your One Byte Explainer. -->
The header image shows why you need to use Git.
<!-- Thanks for participating! --> | jarvisscript |
1,895,178 | Hadoop FS Shell mv | Imagine you are in the ancient empire of Naruda, where Emperor Jason has ordered the relocation of ancient scrolls containing valuable knowledge from one library to another. Your task is to simulate this scenario in the context of Hadoop Distributed File System (HDFS) using the Hadoop FS Shell mv command. Your goal is to successfully move the scrolls from one directory to another without losing any data. | 27,774 | 2024-06-20T20:12:10 | https://labex.io/tutorials/hadoop-hadoop-fs-shell-mv-271874 | hadoop, coding, programming, tutorial |
## Introduction
Imagine you are in the ancient empire of Naruda, where Emperor Jason has ordered the relocation of ancient scrolls containing valuable knowledge from one library to another. Your task is to simulate this scenario in the context of Hadoop Distributed File System (HDFS) using the Hadoop FS Shell `mv` command. Your goal is to successfully move the scrolls from one directory to another without losing any data.
## Move Ancient Scroll
In this step, you will move an ancient scroll named `ancient_scroll.txt` from the `/documents` directory to the `/archives` directory using the Hadoop FS Shell `mv` command.
1. First, use the `su - hadoop` command to switch to the hadoop user, and then explore the `ancient_scroll.txt` file in the `/documents` directory.
```hadoop
hdfs dfs -ls /
hdfs dfs -ls /documents
hdfs dfs -cat /documents/ancient_scroll.txt
```
2. Next, move the `ancient_scroll.txt` file to the `/archives` directory.
```hadoop
hdfs dfs -mv /documents/ancient_scroll.txt /archives
```
Here's an explanation of the command and its components:
- `hdfs dfs`: This is the prefix of the command that invokes the Hadoop file system client, and is used to perform operations that interact with HDFS.
- `mv`: This parameter specifies that the operation to be performed is move, which is similar to the `mv` command in Unix/Linux, and can be used to rename a file or move a file from one location to another.
- `/documents/ancient_scroll.txt`: This part specifies the HDFS path and name of the source file. It tells Hadoop which file you want to move. In this example, the source file is `ancient_scroll.txt` located in the `/documents` directory of HDFS.
- `/archives/`: This part specifies the HDFS path to the destination directory. It tells Hadoop which directory you want to move the source file to. In this example, the target directory is the `/archives` directory of HDFS.
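Since the explanation above notes that HDFS `mv` behaves like the Unix/Linux `mv`, you can get a feel for the semantics locally with plain POSIX commands (a local-filesystem analogy, not HDFS itself; the file and directory names are just illustrative):

```shell
# Local analogy of `hdfs dfs -mv`: POSIX mv is also a rename.
mkdir -p documents archives
echo "ancient wisdom" > documents/ancient_scroll.txt

# Move (rename) the file into the archives directory
mv documents/ancient_scroll.txt archives/

# The file now exists only under archives/
ls archives/
```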
## Update Scroll Location
In this step, you will move the scroll to a new path. In HDFS, `mv` is a rename recorded in the NameNode's metadata; the file's blocks are not copied or physically relocated on the DataNodes.
1. Check the current location of the `ancient_scroll.txt` file.
```hadoop
hdfs dfs -ls /archives/ancient_scroll.txt
```
2. Update the file's location to reflect a new path (the parent directory `/library/archives` must already exist, or the command will fail).
```hadoop
hdfs dfs -mv /archives/ancient_scroll.txt /library/archives/ancient_scroll.txt
```
## Summary
In this lab, the focus was on practicing the Hadoop FS Shell `mv` command within the HDFS environment. By simulating the movement of ancient scrolls in a fictional empire setting, users can grasp the concept of transferring files in Hadoop effectively. The step-by-step guidance ensures that learners can understand the process clearly and apply the knowledge gained in similar scenarios.
---
## Want to learn more?
- 🚀 Practice [Hadoop FS Shell mv](https://labex.io/tutorials/hadoop-hadoop-fs-shell-mv-271874)
- 🌳 Learn the latest [Hadoop Skill Trees](https://labex.io/skilltrees/hadoop)
- 📖 Read More [Hadoop Tutorials](https://labex.io/tutorials/category/hadoop)
Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx) ! 😄 | labby |
1,895,177 | Good Mornning em1fg | this one is an image this one is an image but that ain't thatathis one is an image this one is an... | 0 | 2024-06-20T20:08:44 | https://dev.to/ishaan_singhal_f3b6b687f3/good-mornning-em1fg-3kpk | this one is an image this one is an image but that ain't thatathis one is an image this one is an image but that ain't thatathis one is an image this one is an image but that ain't thatathis one is an image this one is an image but that ain't that h<img src="https://res.cloudinary.com/dlnuvrqki/image/upload/v1718914019/ix3ffqo20mrw34uasdsd.png" alt="Editor Media" class="mt-4 max-w-xs h-auto mx-auto" style="max-width: 30%;"><div>g</div> | ishaan_singhal_f3b6b687f3 | |
1,895,176 | Good Mornning em1 | this one is an image this one is an image but that ain't thatathis one is an image this one is an... | 0 | 2024-06-20T20:07:09 | https://dev.to/ishaan_singhal_f3b6b687f3/good-mornning-em1-37l2 | this one is an image this one is an image but that ain't thatathis one is an image this one is an image but that ain't thatathis one is an image this one is an image but that ain't thatathis one is an image this one is an image but that ain't that <div class="skeleton-image mt-4 w-48 h-32 bg-gray-700 animate-pulse mx-auto"></div> | ishaan_singhal_f3b6b687f3 | |
1,895,161 | Hello World em Elixir | Elixir is a dynamic, functional programming language built on the Erlang virtual machine... | 0 | 2024-06-20T20:06:46 | https://dev.to/abreujp/hello-world-em-elixir-23mn | elixir | Elixir is a dynamic, functional programming language built on the Erlang virtual machine (BEAM). It was created to be scalable and to keep high-availability systems running, making it a popular choice for web applications, distributed systems, and telecommunications.
## What is Elixir?
Elixir is a language designed to be productive, with an elegant, modern syntax, while taking advantage of the robustness and concurrency capabilities of the Erlang virtual machine. Created by [José Valim](https://x.com/josevalim), one of the main contributors to the Ruby on Rails framework, Elixir combines the best of two worlds: the simplicity of Ruby and the power of Erlang.
## Why use Elixir?
- **Concurrency**: Elixir makes it easy to write concurrent code by taking advantage of the BEAM's lightweight processes.
- **Scalability**: Ideal for applications that need to handle a large number of simultaneous connections.
- **High Availability**: Designed for systems that need to stay in constant operation, with fault tolerance.
- **Performance**: Leverages the efficiency of the Erlang VM, known for its low latency and high throughput.
- **Active Community**: A growing, welcoming community with many resources and libraries.
To install Elixir, I recommend reading the procedure on the language's website: [Install](https://elixir-lang.org/install.html). If you want to install it on Fedora/Linux, I wrote an article explaining how to install it in that environment: [Guia Completo: Instalando Elixir no Fedora/Linux](https://dev.to/jpstudioweb/guia-completo-instalando-elixir-no-fedoralinux-40-100f).
## Running IEx (Interactive Elixir)
After installing Elixir, we can use IEx (Interactive Elixir), an interactive REPL (Read-Eval-Print Loop) that lets you run Elixir commands in real time.
To start IEx, open your terminal and type:
```bash
iex
```
You will see an interactive prompt where you can start writing Elixir commands.
### Exiting IEx
To exit IEx, you can:
- Press `Ctrl + C` twice.
- Type `Ctrl + G`, followed by `q`, and press `Enter`.
## Examples
Let's explore some examples to get familiar with Elixir's syntax.
### Basic Operations
#### Addition
Elixir lets you perform basic arithmetic operations directly. Here is an addition example:
```elixir
IO.puts(1 + 2)
```
Expected output:
```
3
```
#### Subtraction, Multiplication, and Division
Likewise, you can perform the other arithmetic operations:
```elixir
IO.puts(5 - 3) # Subtraction
IO.puts(4 * 2) # Multiplication
IO.puts(8 / 2) # Division
```
Expected output:
```
2
8
4.0
```
#### Concatenating Strings
In Elixir, you can concatenate strings using the `<>` operator:
```elixir
IO.puts("Elixir " <> "is fun!")
```
Expected output:
```
Elixir is fun!
```
#### Pattern Matching
Pattern matching is a powerful Elixir feature that lets you extract values from data structures. Here is a simple example:
```elixir
{a, b, c} = {1, 2, 3}
IO.puts(a) # 1
IO.puts(b) # 2
IO.puts(c) # 3
```
Pattern matching can be used on lists, tuples, and other structures:
```elixir
# Lists
[head | tail] = [1, 2, 3, 4]
IO.puts(head) # 1
IO.inspect(tail) # [2, 3, 4]
# Tuples
{:ok, result} = {:ok, 42}
IO.puts(result) # 42
```
Expected output:
```
1
2
3
1
[2, 3, 4]
42
```
## Your First Elixir Program
Let's create a simple program that prints "Hello, World!" to the console.
Create a file called `hello.exs` with the following content:
```elixir
IO.puts("Hello, World!")
```
To run the program, use the command:
```bash
elixir hello.exs
```
You should see the message "Hello, World!" printed to the console.
## Key Features of Elixir
- **Immutability**: Values in Elixir are immutable, meaning they cannot be changed after they are created. This helps prevent errors and makes concurrency easier.
- **Pattern Matching**: Lets you extract values from data structures in a concise, powerful way.
- **First-Class Functions**: Functions are first-class citizens in Elixir, so you can pass them as arguments, return them from other functions, and store them in variables.
- **Lightweight Processes**: The concurrency model based on lightweight processes lets you create millions of simultaneous processes without overloading the system.
- **Supervisors**: Structures that monitor and manage processes, restarting them on failure and ensuring the system's high availability.
Elixir is a powerful, modern language that offers a unique combination of simplicity, productivity, and robustness. With its elegant syntax and concurrency capabilities, it is an excellent choice for a wide range of applications. In this article, we covered the first steps with Elixir, from installation to running a simple program.
In upcoming articles, we will dig deeper into Elixir's concepts and features, helping you become a proficient developer in this amazing language.
| abreujp |
1,895,173 | Understanding the Subject-Observer Pattern with RxDart in Dart | Reactive programming has gained popularity in modern software development due to its ability to... | 0 | 2024-06-20T20:01:44 | https://dev.to/francescoagati/understanding-the-subject-observer-pattern-with-rxdart-in-dart-gjd | Reactive programming has gained popularity in modern software development due to its ability to handle asynchronous data streams efficiently. In Dart, developers can leverage the power of RxDart, an implementation of reactive extensions (Rx), to implement the Subject-Observer pattern seamlessly. This pattern is fundamental in managing streams of data and responding to changes dynamically.
#### What is the Subject-Observer Pattern?
The Subject-Observer pattern, also known as the Publish-Subscribe pattern, establishes a one-to-many dependency between objects. In this pattern:
- **Subject**: Acts as the source of data or events. It maintains a list of observers (subscribers) and notifies them of any changes or updates.
- **Observer**: Listens to changes or events emitted by the subject. It reacts to these events or updates accordingly.
### Implementing the Pattern with RxDart
Let's explore how to implement the Subject-Observer pattern using RxDart with a practical example.
#### Example Code Walkthrough
```dart
import 'package:rxdart/rxdart.dart';
void main() {
// 1. Creating a PublishSubject
var subject = PublishSubject<String>();
// 2. Creating Observers (Subscribers)
var subscription1 = subject.stream.listen((value) {
print("Observer 1 received: $value");
});
var subscription2 = subject.stream.listen((value) {
print("Observer 2 received: $value");
});
// 3. Adding Events to the Subject
subject.add("Event 1");
subject.add("Event 2");
// 4. Disposing Subscriptions
Future.delayed(Duration(seconds: 1), () {
subscription1.cancel();
subscription2.cancel();
});
// 5. Closing the Subject
Future.delayed(Duration(seconds: 2), () {
subject.close();
});
}
```
#### Explanation
1. **Creating a PublishSubject**: We create a `PublishSubject` named `subject`, which will act as our source of events.
2. **Creating Observers (Subscribers)**: Two observers (`subscription1` and `subscription2`) are created by subscribing to the stream of events emitted by the `subject`. Each observer listens to events and prints the received values.
3. **Adding Events to the Subject**: We add two events ("Event 1" and "Event 2") to the `subject` using the `add` method. These events are immediately emitted and received by both observers.
4. **Disposing Subscriptions**: After one second, we cancel both subscriptions (`subscription1` and `subscription2`) using the `cancel` method. This step ensures that resources are released and prevents memory leaks.
5. **Closing the Subject**: Two seconds after the events are added, we close the `subject` using the `close` method. Closing a subject indicates that it will no longer accept new events, and it notifies all its subscribers that it has completed.
#### Benefits of Using RxDart
- **Efficient Data Handling**: RxDart provides powerful operators and tools to manipulate data streams efficiently.
- **Cleaner Code**: The declarative style of RxDart reduces boilerplate code and enhances readability.
- **Flexibility**: Subjects in RxDart can emit multiple events and handle different types of data streams (e.g., single value, error, completion).
#### Conclusion
The Subject-Observer pattern with RxDart enables developers to build responsive and scalable applications in Dart. By leveraging subjects like `PublishSubject`, developers can manage asynchronous data streams effectively, react to changes dynamically, and ensure efficient resource management. As you explore more advanced features and operators provided by RxDart, you'll discover even greater possibilities for building robust applications that respond to real-time data changes seamlessly. | francescoagati | |
821,078 | Build a website with Next.js using next/images | Creating a website with Next.js and using the next/image component involves several steps. Here’s a... | 0 | 2024-06-20T19:58:04 | https://dev.to/malvinjay/build-a-website-with-nextjs-using-nextimages-2d2d | Creating a website with Next.js and using the next/image component involves several steps. Here’s a step-by-step guide to get you started:
## 1. Setting Up the Project
First, ensure you have Node.js installed on your system. Then, follow these steps:
Step 1: Create a New Next.js Project
```
npx create-next-app@latest my-nextjs-site
cd my-nextjs-site
```
Step 2: Start the Development Server
```
npm run dev
```
## 2. Basic Project Structure
Your project will have a structure similar to this:
```
my-nextjs-site/
├── public/
│ ├── images/
│ │ └── example.jpg
├── pages/
│ ├── _app.js
│ ├── index.js
├── styles/
│ ├── globals.css
├── package.json
├── next.config.js
```
## 3. Using next/image Component
The `next/image` component optimizes images automatically. Here’s how you can use it:
Step 1: Import `next/image` in Your Page
Open pages/index.js and modify it to include the Image component.
```
import Image from 'next/image'
export default function Home() {
return (
<div>
<h1>Welcome to My Next.js Site</h1>
<Image
src="/images/example.jpg"
alt="Example Image"
width={500}
height={300}
/>
</div>
)
}
```
Step 2: Add an Image to the Public Directory
Place your image (e.g., example.jpg) in the public/images directory. Next.js will serve images from this directory.
## 4. Configuring next.config.js for External Images (Optional)
If you plan to use external images, you need to configure the next.config.js file.
Step 1: Create next.config.js (if not existing)
```
module.exports = {
images: {
domains: ['example.com'], // Replace with your image source domain
},
}
```
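As a side note, and assuming a Next.js release of 12.3 or newer (the article does not state a version): `images.remotePatterns` is the finer-grained successor to `domains`; it can restrict protocol and path as well. The hostname and pathname below are placeholders:

```javascript
// next.config.js — sketch using remotePatterns instead of domains
module.exports = {
  images: {
    remotePatterns: [
      {
        protocol: 'https',
        hostname: 'example.com', // replace with your image source domain
        pathname: '/photos/**', // optional: restrict which paths are allowed
      },
    ],
  },
}
```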
## 5. Adding Styles
You can style your components using CSS, Sass, or any other styling method supported by Next.js.
Step 1: Modify `styles/globals.css`
```
body {
font-family: Arial, sans-serif;
margin: 0;
padding: 0;
}
h1 {
text-align: center;
margin-top: 50px;
}
div {
text-align: center;
}
```
## 6. Final Directory Structure
Your directory structure should look like this:
```
my-nextjs-site/
├── public/
│ ├── images/
│ │ └── example.jpg
├── pages/
│ ├── _app.js
│ ├── index.js
├── styles/
│ ├── globals.css
├── package.json
├── next.config.js
```
## 7. Running Your Project
Start the development server again:
```
npm run dev
```
Open your browser and navigate to http://localhost:3000 to see your Next.js site with the optimized image.
## Additional Tips
- Dynamic Imports: Use dynamic imports for components that are heavy or only needed under certain conditions.
- API Routes: Next.js supports API routes which can be used to create backend functionality within the same project.
- Static Site Generation (SSG) and Server-Side Rendering (SSR): Leverage these features for better performance and SEO.
## Conclusion
This guide provides a basic setup for a Next.js project using the next/image component. Depending on your project requirements, you can further expand this setup with more pages, components, styles, and functionality. | malvinjay | |
1,895,172 | Asynchronous Streams in Dart with RxDart | In Dart programming, handling asynchronous data streams effectively is crucial, and RxDart simplifies... | 0 | 2024-06-20T19:57:19 | https://dev.to/francescoagati/mastering-asynchronous-streams-in-dart-with-rxdart-1pid | dart, rxdart, merge | In Dart programming, handling asynchronous data streams effectively is crucial, and RxDart simplifies this task with powerful tools. Let's explore a straightforward example using RxDart to merge and manage multiple streams.
### The Example Explained
```dart
import 'package:rxdart/rxdart.dart';
void main() {
// Creating two periodic streams
var stream1 = Stream.periodic(Duration(seconds: 2), (n) => 'Stream 1: $n').take(3);
var stream2 = Stream.periodic(Duration(seconds: 3), (n) => 'Stream 2: $n').take(3);
// Merging streams using RxDart's MergeStream
var mergedStream = MergeStream([stream1, stream2]);
// Subscribing to the merged stream
var subscription = mergedStream.listen((value) {
print("Merged Value: $value");
});
// Cancelling the subscription after 10 seconds
Future.delayed(Duration(seconds: 10), () {
subscription.cancel();
});
}
```
### What's Happening Here?
1. **Imports and Setup**:
- `import 'package:rxdart/rxdart.dart';` brings in RxDart library for reactive programming.
2. **Creating Streams**:
- `Stream.periodic(Duration(seconds: 2), (n) => 'Stream 1: $n').take(3);` defines `stream1` to emit "Stream 1: 0", "Stream 1: 1", "Stream 1: 2" every 2 seconds.
- `Stream.periodic(Duration(seconds: 3), (n) => 'Stream 2: $n').take(3);` defines `stream2` to emit "Stream 2: 0", "Stream 2: 1", "Stream 2: 2" every 3 seconds.
3. **Merging Streams**:
- `MergeStream([stream1, stream2]);` combines `stream1` and `stream2` into `mergedStream`, ensuring all values are processed together.
4. **Subscribing to the Merged Stream**:
- `mergedStream.listen((value) { print("Merged Value: $value"); });` sets up a listener to print each merged value prefixed with "Merged Value:".
5. **Cancellation**:
- `Future.delayed(Duration(seconds: 10), () { subscription.cancel(); });` ensures that after 10 seconds, the subscription to `mergedStream` is canceled, managing resources effectively.
### Why It Matters
Using RxDart simplifies managing asynchronous data streams in Dart. By merging streams and handling subscriptions efficiently, developers can build responsive applications that handle real-time data updates seamlessly. Whether you're dealing with user interactions, network responses, or periodic updates, RxDart's intuitive API provides the tools needed for robust stream management.
### Conclusion
This example highlights how RxDart empowers Dart developers to harness the power of reactive programming. By merging and managing asynchronous streams effectively, developers can create more responsive and scalable Dart applications. Whether you're new to reactive programming or looking to enhance your asynchronous data handling, RxDart offers a straightforward yet powerful solution. | francescoagati |
1,895,169 | Case Study: The ClockPane Class | This case study develops a class that displays a clock on a pane. The contract of the ClockPane class... | 0 | 2024-06-20T19:52:22 | https://dev.to/paulike/case-study-the-clockpane-class-4bpg | java, programming, learning, beginners | This case study develops a class that displays a clock on a pane. The contract of the **ClockPane** class is shown in Figure below.

Assume **ClockPane** is available; the test program below displays an analog clock and uses a label to show the hour, minute, and second, as shown in the figure below.


The rest of this section explains how to implement the **ClockPane** class. Since you can use the class without knowing how it is implemented, you may skip the implementation if you wish.
To draw a clock, you need to draw a circle and three hands for the second, minute, and hour. To draw a hand, you need to specify the two ends of the line. As shown in Figure 14.42b, one end is the center of the clock at **(centerX, centerY)**; the other end, at **(endX, endY)**, is
determined by the following formula:
```
endX = centerX + handLength × sin(θ)
endY = centerY - handLength × cos(θ)
```
Since there are 60 seconds in one minute, the angle for the second hand is
`second × (2π/60)`
The position of the minute hand is determined by the minute and second. The exact minute value combined with seconds is **minute + second/60**. For example, if the time is 3 minutes and 30 seconds, the total minutes are 3.5. Since there are 60 minutes in one hour, the angle for the minute hand is
`(minute + second/60) × (2π/60)`
Since one circle is divided into 12 hours, the angle for the hour hand is
`(hour + minute/60 + second/(60 × 60)) × (2π/12)`
For simplicity in computing the angles of the minute hand and hour hand, you can omit the seconds, because they are negligibly small. Therefore, the endpoints for the second hand, minute hand, and hour hand can be computed as:
```
secondX = centerX + secondHandLength × sin(second × (2π/60))
secondY = centerY - secondHandLength × cos(second × (2π/60))
minuteX = centerX + minuteHandLength × sin(minute × (2π/60))
minuteY = centerY - minuteHandLength × cos(minute × (2π/60))
hourX = centerX + hourHandLength × sin((hour + minute/60) × (2π/12))
hourY = centerY - hourHandLength × cos((hour + minute/60) × (2π/12))
```
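As a quick numeric check of these formulas, here is a standalone sketch (a hypothetical `HandMath` helper, not part of the ClockPane class) that computes a hand endpoint. At second = 15 the angle is π/2, so the second hand points due east of the center:

```java
// HandMath: evaluates the hand-endpoint formulas from the text.
public class HandMath {
    // x-coordinate of a hand endpoint for `unit` out of `unitsPerTurn`
    public static double endX(double centerX, double length,
                              double unit, double unitsPerTurn) {
        return centerX + length * Math.sin(unit * (2 * Math.PI / unitsPerTurn));
    }

    // y-coordinate; screen y grows downward, hence the minus sign
    public static double endY(double centerY, double length,
                              double unit, double unitsPerTurn) {
        return centerY - length * Math.cos(unit * (2 * Math.PI / unitsPerTurn));
    }

    public static void main(String[] args) {
        // Second hand at 15 s, clock centered at (125, 125), hand length 80
        System.out.println("secondX = " + endX(125, 80, 15, 60));
        System.out.println("secondY = " + endY(125, 80, 15, 60));
    }
}
```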
The **ClockPane** class is implemented in the program below.
```
package application;
import java.util.Calendar;
import java.util.GregorianCalendar;
import javafx.scene.layout.Pane;
import javafx.scene.paint.Color;
import javafx.scene.shape.Circle;
import javafx.scene.shape.Line;
import javafx.scene.text.Text;
public class ClockPane extends Pane {
private int hour;
private int minute;
private int second;
// Clock pane's width and height
private double w = 250, h = 250;
/** Construct a default clock with the current time */
public ClockPane() {
setCurrentTime();
}
/** Construct a clock with specified hour, minute, and second */
public ClockPane(int hour, int minute, int second) {
this.hour = hour;
this.minute = minute;
this.second = second;
paintClock();
}
/** Return hour */
public int getHour() {
return hour;
}
/** Set a new hour */
public void setHour(int hour) {
this.hour = hour;
paintClock();
}
/** return minute */
public int getMinute() {
return minute;
}
/** Set a new minute */
public void setMinute(int minute) {
this.minute = minute;
paintClock();
}
/** Return second */
public int getSecond() {
return second;
}
/** Set a new second */
public void setSecond(int second) {
this.second = second;
paintClock();
}
/** Return clock pane's width */
public double getW() {
return w;
}
/** Set clock pane's width */
public void setW(double w) {
this.w = w;
paintClock();
}
/** Return clock pane's height */
public double getH() {
return h;
}
/** Set clock pane's height */
public void setH(double h) {
this.h = h;
paintClock();
}
/** Set the current time for the clock */
public void setCurrentTime() {
// Construct a calendar for the current date and time
Calendar calendar = new GregorianCalendar();
// Set current hour, minute and second
this.hour = calendar.get(Calendar.HOUR_OF_DAY);
this.minute = calendar.get(Calendar.MINUTE);
this.second = calendar.get(Calendar.SECOND);
paintClock(); // Repaint the clock
}
/** Paint the clock */
protected void paintClock() {
// Initialize clock parameters
double clockRadius = Math.min(w, h) * 0.8 * 0.5;
double centerX = w / 2;
double centerY = h / 2;
// Draw circle
Circle circle = new Circle(centerX, centerY, clockRadius);
circle.setFill(Color.WHITE);
circle.setStroke(Color.BLACK);
Text t1 = new Text(centerX - 5, centerY - clockRadius + 12, "12");
Text t2 = new Text(centerX - clockRadius + 3, centerY + 5, "9");
Text t3 = new Text(centerX + clockRadius - 10, centerY + 3, "3");
Text t4 = new Text(centerX - 3, centerY + clockRadius - 3, "6");
// Draw second hand
double sLength = clockRadius * 0.8;
double secondX = centerX + sLength * Math.sin(second * (2 * Math.PI / 60));
double secondY = centerY - sLength * Math.cos(second * (2 * Math.PI / 60));
Line sLine = new Line(centerX, centerY, secondX, secondY);
sLine.setStroke(Color.RED);
// Draw minute hand
double mLength = clockRadius * 0.65;
double xMinute = centerX + mLength * Math.sin(minute * (2 * Math.PI / 60));
double minuteY = centerY - mLength * Math.cos(minute * (2 * Math.PI / 60));
Line mLine = new Line(centerX, centerY, xMinute, minuteY);
mLine.setStroke(Color.BLUE);
// Draw hour hand
double hLength = clockRadius * 0.5;
double hourX = centerX + hLength * Math.sin((hour % 12 + minute / 60.0) * (2 * Math.PI / 12));
double hourY = centerY - hLength * Math.cos((hour % 12 + minute / 60.0) * (2 * Math.PI / 12));
Line hLine = new Line(centerX, centerY, hourX, hourY);
hLine.setStroke(Color.GREEN);
getChildren().clear();
getChildren().addAll(circle, t1, t2, t3, t4, sLine, mLine, hLine);
}
}
```
The program displays a clock for the current time using the no-arg constructor (lines 19–21) and displays a clock for the specified hour, minute, and second using the other constructor (lines 24–29). The current hour, minute, and second is obtained by using the **GregorianCalendar** class (lines 87–97). The **GregorianCalendar** class in the Java API enables you to create a **Calendar** instance for the current time using its no-arg constructor. You can then use its methods **get(Calendar.HOUR)**, **get(Calendar.MINUTE)**, and **get(Calendar.SECOND)** to return the hour, minute, and second from a **Calendar** object.
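The **Calendar** calls described here can also be exercised on their own. This small standalone sketch (a hypothetical `TimeProbe` class, not part of the book's code) reads the current time fields the same way **setCurrentTime()** does:

```java
import java.util.Calendar;
import java.util.GregorianCalendar;

// TimeProbe: reads the current hour/minute/second via GregorianCalendar.
public class TimeProbe {
    public static int[] now() {
        Calendar calendar = new GregorianCalendar(); // current date and time
        return new int[] {
            calendar.get(Calendar.HOUR_OF_DAY), // 0..23
            calendar.get(Calendar.MINUTE),      // 0..59
            calendar.get(Calendar.SECOND)       // 0..59
        };
    }

    public static void main(String[] args) {
        int[] t = now();
        System.out.println(t[0] + ":" + t[1] + ":" + t[2]);
    }
}
```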
The class defines the properties **hour**, **minute**, and **second** to store the time represented in the clock (lines 11–13) and uses the **w** and **h** properties to represent the width and height of the clock pane (line 16). The initial values of **w** and **h** are set to 250. The **w** and **h** values can be reset using the **setW** and **setH** methods (lines 70, 81). These values are used to draw a clock in the pane in the **paintClock()** method.
The **paintClock()** method paints the clock (lines 100–138). The clock radius is proportional to the width and height of the pane (line 102). A circle for the clock is created at the center of the pane (line 107). The text for showing the hours 12, 3, 6, 9 are created in lines 110–113. The second hand, minute hand, and hour hand are the lines created in lines 115–134. The **paintClock()** method places all these shapes in the pane using the **addAll** method in a list (line 137). Because the **paintClock()** method is invoked whenever a new property (**hour**, **minute**, **second**, **w**, and **h**) is set (lines 28, 39, 50, 61, 72, 83, 96), before adding new contents into the pane, the old contents are cleared from the pane (line 136). | paulike |
1,894,879 | What are DTOs and their significance? | What's up amazing people 👋 I just wanted to talk a lil bit about the concept of DTOs in programming... | 0 | 2024-06-20T19:49:51 | https://dev.to/prathamjagga/what-are-dtos-and-their-significance-2e72 | What's up amazing people 👋
I just wanted to talk a lil bit about the concept of DTOs in programming and how they might be useful and significant in your code.
So, what is a DTO? Well, it stands for Data Transfer Object. It basically defines an interface for the various types of data transferred within a system. Some examples are TypeScript interfaces, which we use in our TypeScript code, and Mongoose schemas.
DTOs make it easier to identify the flow of data between functions, services, modules and components of our application. Having the structure of data clear makes it easier to understand code, apply validations on data, identify unintentional flow of sensitive information and easier to document the code.
Let's take a simple Node.js application and implement a simple API controller, first without a DTO and then with one. Then we will compare both strategies.
### Basic API example without using DTO
```js
const express = require("express");
const app = express();
// controller without using DTO
async function getUserData(req, res) {
const username = req.query.username;
const key = req.query.key;
if (!username || !key) {
return res.json({ success: false, info: "Invalid parameters!" });
}
// process user data
// may be from your database
// like db.get(users, {username})
const userData = {
name: "XYZ",
age: "23",
// ... more data
};
return res.json({ success: true, data: userData });
}
// route
app.get("/user", getUserData);
app.listen(5000, () => console.log("SERVER LISTENING ON 5000"));
```
This is a very basic API implementation in Nodejs using express, right?
### Let's try implementing the same with a DTO object.
```js
const express = require("express");
const app = express();
class UsernameQueryDTO {
username;
key;
constructor(query) {
this.username = query.username;
this.key = query.key;
}
isValid() {
if (
!this.username ||
this.username.length < 5 ||
!this.key ||
this.key != "your_secret_key"
) {
return false;
}
return true;
}
get() {
return { username: this.username };
}
}
// controller using our DTO object
async function getUserData(req, res) {
const dto = new UsernameQueryDTO(req.query);
if (!dto.isValid())
return res.json({ success: false, info: "Invalid parameters!" });
const query = dto.get(); // only the validated, non-sensitive fields
// process user data
// may be from your database
// like db.get(users, {username: query.username})
const userData = {
name: "XYZ",
age: "23",
// ... more data
};
return res.json({ success: true, data: userData });
}
// route
app.get("/user", getUserData);
app.listen(5000, () => console.log("SERVER LISTENING ON 5000"));
```
You can clearly see now we are easily able to:
- define validations. 🤨
- see the structure of incoming data. 📊
- determine sensitive data such as secret key is filtered beforehand (you can see our get function only returns the username). 🔒
- understand the flow of data and document the APIs more effectively as we already have the data defined. 📏
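As a quick sanity check, the DTO's validation and filtering can be exercised outside of Express. This snippet re-declares the same `UsernameQueryDTO` class so it runs standalone (the key value is the same placeholder as above):

```javascript
// Same DTO as in the controller example, repeated so this runs standalone.
class UsernameQueryDTO {
  constructor(query) {
    this.username = query.username;
    this.key = query.key;
  }
  isValid() {
    if (
      !this.username ||
      this.username.length < 5 ||
      !this.key ||
      this.key != "your_secret_key"
    ) {
      return false;
    }
    return true;
  }
  get() {
    // only non-sensitive fields are exposed; `key` is filtered out
    return { username: this.username };
  }
}

const good = new UsernameQueryDTO({ username: "alice123", key: "your_secret_key" });
const bad = new UsernameQueryDTO({ username: "al" }); // too short, no key

console.log(good.isValid()); // true
console.log(bad.isValid()); // false
console.log(good.get()); // { username: 'alice123' }
```

Only `username` survives `get()`, which is exactly how the secret key is kept out of downstream code.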
I hope you got the point now !!
Now I leave it up to you as a task to define a DTO class for the response object as well and post it in the comments 💬.
That was it, guys. Thank you for reading the blog, and if you like my work you can follow me here 😃
See you next week with a new blog :) | prathamjagga | |
1,547,476 | Bespoke is shutting down. We pivoted! | Please note: This article has not been parsed through ChatGPT. I wrote it with individual... | 0 | 2023-07-24T15:33:01 | https://dev.to/zifahm/bespoke-is-shutting-down-we-pivoted-2g1p | webdev, remix, nestjs | Please note: This article has not been parsed through ChatGPT. I wrote it with individual keystrokes.
## Why we pivoted [Bespoke](https://github.com/bespoke-surf/bespoke/)
Nineteen days ago, as of July 24th, we open sourced Bespoke and launched it on [HN](https://news.ycombinator.com/item?id=36586490). We had been building Bespoke for the past eight months.
We intended to create a Personalised Marketing Platform which contained the best parts of Mailchimp, Substack, Klaviyo and Typeform.
We reached around 228 Github stars, whilst we were building Bespoke every day, we changed our marketing speak from the Open Source Mailchimp alternative to "Capture, Engage and Target" customers.
As the marketing speak evolved, this resonated with a lot of users and SaaS operators. Thus based on the market feedback. We completely Pivoted!
We will launch our pivoted product soon. Most of the codebase is around 80% reusable.
## A little bit about Bespoke's architecture and what we learned.
The stack contained:
- Remix
- NestJS
- Monorepo with Turbo and Yarn
The architecture that we followed was BFF (Backend for frontend) architecture
The biggest mistake we made was using cookie-based authentication directly from our second backend layer. As the product evolved, we realized this could have been designed better; a dual authentication layer is presumably the best approach. We should have read this [article](https://dev.to/damikun/web-app-security-understanding-the-meaning-of-the-bff-pattern-i85) from the start.
We also implemented simple CRUD in the beginning, and as the product evolved, things got complicated with events, listeners, queues, etc. We should have started with a CQRS-based system.
Also, in the UI layer, because users signed up with a business name, we always used the business name as the subdomain. Our flow would redirect logged-in users from our landing page to their app page. This queried the database on every load, even when an anonymous user visited the landing page, and occasionally, if the back end was updating, the front end would hang. This was a pain.
Note: this can be solved by changing the user flow and not redirecting.
We also needed to maintain a reserved-subdomain list in case we created any business-centric subdomains for our main website.
## The tech that we chose right
- Remix with flat-routes
- The BFF architecture was great for product velocity.
- Using Nestjs rather than implementing the back end entirely in Remix
- Using a monorepo
- JSONB in Postgres was very reliable for the different kinds of data we saved to the database, especially Shopify data. It was fast and easy to query.
- GraphQL, TypeORM with TypeGraphQL code-first approach.
This made TypeGraphQL types, entities, and type assertions closely interlinked with each other. Velocity was fast because of this.
Another thing to note about querying the database here is using TypeORM. Creating relation entities, where clause [bracket](https://github.com/typeorm/typeorm/blob/master/docs/select-query-builder.md#adding-where-expression:~:text=You%20can%20add%20a%20complex%20WHERE%20expression%20into%20an%20existing%20WHERE%20using%20Brackets) assertions, and subqueries are very easily done with TypeORM. Love using it.
In the pivoted product, which we hopefully will release ASAP, we did consider choosing SQLite with LiteFS, but because of the reliability of JSONB (and because we will probably need to query JSON in the future), we are choosing Postgres again. CockroachDB looks promising here.
Give us your best wishes and a smile; hopefully we will launch the pivoted product soon. A waitlist will presumably go live within a week.
Thank you for reading
Afi
[Twitter](https://twitter.com/zifahm1)
[Threads](https://www.threads.net/@afiiiiiiiiiiiiiiiiiiiiiiiiiii) | zifahm |
1,895,168 | Introducing Identity Server 7.0 - The Most Powerful and Developer-Friendly Release Yet | Refreshing Look and Feel for the Console UI The console has received a major upgrade with our... | 0 | 2024-06-20T19:39:33 | https://dev.to/harsha_thirimanna_39edfd6/introducing-identity-server-70-the-most-powerful-and-developer-friendly-release-yet-5dk5 | **Refreshing Look and Feel for the Console UI**
The console has received a major upgrade with our brand-new, lightning-fast Oxygen UI! The beta console UI, accessible via https://<hostname>:<port>/console, introduced in version 5.11.0, is now available for production usage for administrative and developer tasks.
With this upgrade, concepts such as service providers, identity providers, inbound/outbound authentication, previously utilized in the Carbon-based management console, have evolved into 'applications' and 'connections', respectively. WSO2 Identity Server 7.0.0 introduces application templates for Single Page Applications (SPAs), web applications with server-side rendering, mobile applications, and machine-to-machine (M2M) applications. It also offers a variety of authentication options, including social login, multi-factor authentication (MFA), passwordless authentication, etc., which can be selected from the available connections.
**Productized Support for B2B CIAM Use Cases**
WSO2 Identity Server now enables secure access for your B2B business customers with flexible organization management capabilities. B2B CIAM is the identity foundation that helps organizations that work with business customers, franchises, distributors and suppliers get their apps and services to market quickly and securely.
_Key Highlights:_
Onboard enterprise IDP, or invite users to register at organizations
Configure varied login options for organizations
Hierarchical organization management
Delegated administration
Different branding for organizations
Resolve organization at login as the user inputs the organization name, based on the user’s email domain mapped for a particular organization or based on a query or path parameter in the URL
**Authentication API for App-Native Authentication**
This release introduces an API-based authentication capability, allowing developers to implement complete authentication workflows within their applications, focusing on enhanced user experience.
_Key Highlights:_
A flexible API containing all necessary details to render UIs inside the application itself
Support for handling authentication orchestration logic at the WSO2 Identity Server without taking that overhead to the application (e.g: Based on the device the user logs in to the app, prompt the second factor)
APIs based on OAuth 2.0/Open ID Connect standards, requiring no browser support
Ensures identity and proof of possession of the client in handling authentication credentials
**Compliance with FAPI 1.0 Profiles**
WSO2 IS is now compliant with FAPI 1.0 Baseline and Advanced profiles, ensuring secure and compliant financial services operations.
_Key Highlights:_
Create FAPI-compliant applications from DCR. This validates the enforcements a FAPI-compliant application should have, such as Software Statement Assertion (SSA) validation, which ensures the third party is trusted by the regulatory body of the region
Support for certificate bound access tokens.
Support for pairwise subject identifiers
Enforcing request object validations for FAPI compliance
Mandate sending a request object in the authorization request passed via the request or request_uri parameter.
Mandatory request object parameter validations (scope, redirect_uri, nonce)
Request object signing algorithm restriction (PS256, ES256)
Mandate PKCE for PAR
Enforce nbf & exp claim validations
Enforcing FAPI allowed client authentication methods and signature algorithms
**First-Class Support for Securing API Resources**
Comprehensive support for API Authorization via RBAC is now available, allowing easy representation, subscription, and role-based access control for API resources.
_Key Highlights:_
Easily represent API Resources and scopes associated with your applications.
Seamlessly subscribe API Resources to applications.
Define roles collecting API scopes.
Enable RBAC when authorizing APIs.
Role assignment for users and groups connected from various sources (from user stores, from external IdPs)
Role-Based scope validation during token issuing. | harsha_thirimanna_39edfd6 | |
1,895,166 | Build RESTful APIs with Express.js 🚀 | Hey your instructor #KOToka here.... Setup: Install Node.js & npm. Initialize: npm... | 0 | 2024-06-20T19:28:01 | https://dev.to/erasmuskotoka/build-restful-apis-with-expressjs-2fo7 | Hey your instructor #KOToka here....
1. Setup:
- Install Node.js & npm.
- Initialize: `npm init`.
- Install Express: `npm install express`.
2. Basic Server:
```javascript
const express = require('express');
const app = express();
const port = 3000;
app.get('/', (req, res) => res.send('Hello World!'));
app.listen(port, () => console.log(`Server running at http://localhost:${port}`));
```
3. Define Routes:
- Use `app.get()`, `app.post()`, `app.put()`, `app.delete()`.
4. Middleware:
```javascript
app.use(express.json());
```
5. CRUD Operations:
- Implement Create, Read, Update, Delete endpoints.
6. Test:
- Use Postman or Insomnia to test endpoints.
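The CRUD step can be sketched as plain functions over an in-memory store that Express routes would call into. The `books` resource and function names below are illustrative, not part of the tutorial; the store logic is kept framework-free so it runs with or without Express installed:

```javascript
// In-memory store for a hypothetical "books" resource.
const books = [];
let nextId = 1;

// Plain CRUD functions; in Express these would back
// app.post/app.get/app.put/app.delete handlers.
function createBook(title) {
  const book = { id: nextId++, title };
  books.push(book);
  return book;
}

function readBook(id) {
  return books.find(b => b.id === id) || null;
}

function updateBook(id, title) {
  const book = readBook(id);
  if (book) book.title = title;
  return book;
}

function deleteBook(id) {
  const i = books.findIndex(b => b.id === id);
  return i !== -1 ? books.splice(i, 1)[0] : null;
}

// Wiring sketch (requires `npm install express` to actually serve HTTP):
// app.post('/books', (req, res) => res.status(201).json(createBook(req.body.title)));
// app.get('/books/:id', (req, res) => res.json(readBook(Number(req.params.id))));

const b = createBook('Node basics');
console.log(readBook(b.id).title); // Node basics
updateBook(b.id, 'Express basics');
console.log(readBook(b.id).title); // Express basics
deleteBook(b.id);
console.log(readBook(b.id)); // null
```

Keeping the data logic separate from the route handlers also makes it easy to unit-test without spinning up a server.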
Create robust, scalable APIs with Express.js! Happy coding! 💻✨ #ExpressJS #RESTfulAPI #NodeJS #WebDev | erasmuskotoka | |
1,895,164 | Shapes | JavaFX provides many shape classes for drawing texts, lines, circles, rectangles, ellipses, arcs,... | 0 | 2024-06-20T19:26:22 | https://dev.to/paulike/shapes-9gl | java, programming, learning, beginners | JavaFX provides many shape classes for drawing texts, lines, circles, rectangles, ellipses, arcs, polygons, and polylines. The **Shape** class is the abstract base class that defines the common properties for all shapes. Among them are the **fill**, **stroke**, and **strokeWidth** properties. The **fill** property specifies a color that fills the interior of a shape. The **stroke** property specifies a color that is used to draw the outline of a shape. The **strokeWidth** property specifies the width of the outline of a shape. This section introduces the classes **Text**, **Line**, **Rectangle**, **Circle**, **Ellipse**, **Arc**, **Polygon**, and **Polyline** for drawing texts and simple shapes. All these are subclasses of **Shape**, as shown in Figure below.

## Text
The **Text** class defines a node that displays a string at a starting point (**x**, **y**), as shown in Figure below (a). A **Text** object is usually placed in a pane. The pane’s upper-left corner point is (**0**, **0**) and the bottom-right point is (**pane.getWidth()**, **pane.getHeight()**). A string may be displayed in multiple lines separated by **\n**. The UML diagram for the **Text** class is
shown in Figure below. The program below gives an example that demonstrates text, as shown in Figure below (b).


```
package application;
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.layout.Pane;
import javafx.scene.paint.Color;
import javafx.geometry.Insets;
import javafx.stage.Stage;
import javafx.scene.text.Text;
import javafx.scene.text.Font;
import javafx.scene.text.FontWeight;
import javafx.scene.text.FontPosture;
public class ShowText extends Application {
@Override // Override the start method in the Application class
public void start(Stage primaryStage) {
// Create a pane to hold the texts
Pane pane = new Pane();
pane.setPadding(new Insets(5, 5, 5, 5));
Text text1 = new Text(20, 20, "Programming is fun");
text1.setFont(Font.font("Courier", FontWeight.BOLD, FontPosture.ITALIC, 15));
pane.getChildren().add(text1);
Text text2 = new Text(60, 60, "Programming is fun\nDisplay text");
pane.getChildren().add(text2);
Text text3 = new Text(10, 100, "Programming is fun\nDisplay text");
text3.setFill(Color.RED);
text3.setUnderline(true);
text3.setStrikethrough(true);
pane.getChildren().add(text3);
// Create a scene and place it in the stage
Scene scene = new Scene(pane);
primaryStage.setTitle("ShowText"); // Set the stage title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
}
public static void main(String[] args) {
Application.launch(args);
}
}
```
The program creates a **Text** (line 19), sets its font (line 20), and places it in the pane (line 21). The program creates another **Text** with multiple lines (line 23) and places it in the pane (line 24). The program creates the third **Text** (line 26), sets its color (line 27), sets an underline and a strikethrough (lines 28–29), and places it in the pane (line 30).
## Line
A line connects two points with four parameters **startX**, **startY**, **endX**, and **endY**, as shown in Figure below (a). The **Line** class defines a line. The UML diagram for the **Line** class is shown in Figure below. The program below gives an example that demonstrates lines, as shown in Figure below (b).
```
package application;
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.layout.Pane;
import javafx.scene.paint.Color;
import javafx.stage.Stage;
import javafx.scene.shape.Line;
public class ShowLine extends Application {
@Override // Override the start method in the Application class
public void start(Stage primaryStage) {
// Create a scene and place it in the stage
Scene scene = new Scene(new LinePane(), 200, 200);
primaryStage.setTitle("ShowLine"); // Set the stage title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
}
public static void main(String[] args) {
Application.launch(args);
}
}
class LinePane extends Pane{
public LinePane() {
Line line1 = new Line(10, 10, 10, 10);
line1.endXProperty().bind(widthProperty().subtract(10));
line1.endYProperty().bind(heightProperty().subtract(10));
line1.setStrokeWidth(5);
line1.setStroke(Color.GREEN);
getChildren().add(line1);
Line line2 = new Line(10, 10, 10, 10);
line2.startXProperty().bind(widthProperty().subtract(10));
line2.endYProperty().bind(heightProperty().subtract(10));
line2.setStrokeWidth(5);
line2.setStroke(Color.GREEN);
getChildren().add(line2);
}
}
```


The program defines a custom pane class named **LinePane** (line 24). The custom pane class creates two lines and binds the starting and ending points of the line with the width and height of the pane (lines 27–28, 34–35) so that the two points of the lines are changed as the pane is resized.
## Rectangle
A rectangle is defined by the parameters **x**, **y**, **width**, **height**, **arcWidth**, and **arcHeight**, as shown in Figure below (a). The rectangle’s upper-left corner point is at (**x**, **y**) and parameter **aw** (**arcWidth**) is the horizontal diameter of the arcs at the corner, and **ah** (**arcHeight**) is the vertical diameter of the arcs at the corner.

The **Rectangle** class defines a rectangle. The UML diagram for the **Rectangle** class is shown in Figure below. The program below gives an example that demonstrates rectangles, as shown in Figure above (b).

```
package application;
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.layout.Pane;
import javafx.scene.paint.Color;
import javafx.stage.Stage;
import javafx.scene.text.Text;
import javafx.scene.shape.Rectangle;
public class ShowRectangle extends Application {
@Override // Override the start method in the Application class
public void start(Stage primaryStage) {
// Create a pane
Pane pane = new Pane();
// Create rectangles and add to pane
Rectangle r1 = new Rectangle(25, 10, 60, 30);
r1.setStroke(Color.BLACK);
r1.setFill(Color.WHITE);
pane.getChildren().add(new Text(10, 27, "r1"));
pane.getChildren().add(r1);
Rectangle r2 = new Rectangle(25, 50, 60, 30);
pane.getChildren().add(new Text(10, 67, "r2"));
pane.getChildren().add(r2);
Rectangle r3 = new Rectangle(25, 90, 60, 30);
r3.setArcWidth(15);
r3.setArcHeight(25);
pane.getChildren().add(new Text(10, 107, "r3"));
pane.getChildren().add(r3);
for(int i = 0; i < 4; i++) {
Rectangle r = new Rectangle(100, 50, 100, 30);
r.setRotate(i * 360 / 8);
r.setStroke(Color.color(Math.random(), Math.random(), Math.random()));
r.setFill(Color.WHITE);
pane.getChildren().add(r);
}
// Create a scene and place it in the stage
Scene scene = new Scene(pane, 250, 150);
primaryStage.setTitle("ShowRectangle"); // Set the stage title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
}
public static void main(String[] args) {
Application.launch(args);
}
}
```
The program creates multiple rectangles. By default, the fill color is black, so a rectangle is filled with black. The stroke color is white by default. Line 18 sets the stroke color of rectangle **r1** to black. The program creates rectangle **r3** (line 27) and sets its arc width and arc height (lines 28–29), so **r3** is displayed as a rounded rectangle.
The program repeatedly creates a rectangle (line 34), rotates it (line 35), sets a random stroke color (line 36), sets its fill color to white (line 37), and adds the rectangle to the pane (line 38).
If line 37 is replaced by the following line
`r.setFill(null);`
the rectangle is not filled with a color. So they are displayed as shown in Figure above (c).
## Circle and Ellipse
You have used circles in several examples early in the previous posts. A circle is defined by its parameters **centerX**, **centerY**, and **radius**. The **Circle** class defines a circle. The UML diagram for the **Circle** class is shown in Figure below.

An ellipse is defined by its parameters **centerX**, **centerY**, **radiusX**, and **radiusY**, as shown in Figure below (a). The **Ellipse** class defines an ellipse. The UML diagram for the **Ellipse** class is shown in Figure below. The program below gives an example that demonstrates ellipses, as shown in Figure below (b).



The program repeatedly creates an ellipse (line 17), sets a random stroke color (line 18), sets its fill color to white (line 19), rotates it (line 20), and adds the ellipse to the pane (line 21).
## Arc
An arc is conceived as part of an ellipse, defined by the parameters **centerX**, **centerY**, **radiusX**, **radiusY**, **startAngle**, **length**, and an arc type (**ArcType.OPEN**, **ArcType.CHORD**, or **ArcType.ROUND**). The parameter **startAngle** is the starting angle; and length is the spanning angle (i.e., the angle covered by the arc). Angles are measured in degrees and follow the usual mathematical conventions (i.e., 0 degrees is in the easterly direction, and positive angles indicate counterclockwise rotation from the easterly direction), as shown in Figure below (a).

The **Arc** class defines an arc. The UML diagram for the **Arc** class is shown in Figure below. The program below gives an example that demonstrates arcs, as shown in Figure above (b).

```
package application;
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.layout.Pane;
import javafx.scene.paint.Color;
import javafx.stage.Stage;
import javafx.scene.shape.Arc;
import javafx.scene.shape.ArcType;
import javafx.scene.text.Text;
public class ShowArc extends Application {
@Override // Override the start method in the Application class
public void start(Stage primaryStage) {
// Create a pane
Pane pane = new Pane();
Arc arc1 = new Arc(150, 100, 80, 80, 30, 35); // Create an arc
arc1.setFill(Color.RED); // Set fill color
arc1.setType(ArcType.ROUND); // Set arc type
pane.getChildren().add(new Text(210, 40, "arc1: round"));
pane.getChildren().add(arc1); // Add arc to pane
Arc arc2 = new Arc(150, 100, 80, 80, 30 + 90, 35);
arc2.setFill(Color.WHITE);
arc2.setType(ArcType.OPEN);
arc2.setStroke(Color.BLACK);
pane.getChildren().add(new Text(20, 40, "arc2: open"));
pane.getChildren().add(arc2);
Arc arc3 = new Arc(150, 100, 80, 80, 30 + 180, 35);
arc3.setFill(Color.WHITE);
arc3.setType(ArcType.CHORD);
arc3.setStroke(Color.BLACK);
pane.getChildren().add(new Text(20, 170, "arc3: chord"));
pane.getChildren().add(arc3);
Arc arc4 = new Arc(150, 100, 80, 80, 30 + 270, 35);
arc4.setFill(Color.GREEN);
arc4.setType(ArcType.CHORD);
arc4.setStroke(Color.BLACK);
pane.getChildren().add(new Text(210, 170, "arc4: chord"));
pane.getChildren().add(arc4);
// Create a scene and place it in the stage
Scene scene = new Scene(pane, 300, 200);
primaryStage.setTitle("ShowArc"); // Set the stage title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
}
public static void main(String[] args) {
Application.launch(args);
}
}
```
The program creates an arc **arc1** centered at (**150**, **100**) with **radiusX 80** and **radiusY 80**. The starting angle is **30** with **length 35** (line 15). **arc1**'s arc type is set to **ArcType.ROUND** (line 18). Since **arc1**'s fill color is red, **arc1** is displayed as a round arc filled with red.
The program creates an arc **arc3** centered at (**150**, **100**) with **radiusX 80** and **radiusY 80**. The starting angle is **30+180** with **length 35** (line 29). **Arc3**’s arc type is set to **ArcType.CHORD** (line 31). Since **arc3**’s fill color is white and stroke color is black, **arc3** is displayed with black outline as a chord.
Angles may be negative. A negative starting angle sweeps clockwise from the easterly direction, as shown in Figure below. A negative spanning angle sweeps clockwise from the starting angle. The following two statements define the same arc:
```
new Arc(x, y, radiusX, radiusY, -30, -20);
new Arc(x, y, radiusX, radiusY, -50, 20);
```
The first statement uses negative starting angle **-30** and negative spanning angle **-20**, as shown in Figure below (a). The second statement uses negative starting angle **-50** and positive spanning angle **20**, as shown in Figure below (b).

Note that the trigonometric methods in the **Math** class use the angles in radians, but the angles in the **Arc** class are in degrees.
## Polygon and Polyline
The **Polygon** class defines a polygon that connects a sequence of points, as shown in Figure below (a). The **Polyline** class is similar to the **Polygon** class except that the **Polyline** class is not automatically closed, as shown in Figure below (b).

The UML diagram for the **Polygon** class is shown in Figure below.

The program below gives an example that creates a hexagon, as shown in Figure below.

```
package application;
import javafx.application.Application;
import javafx.collections.ObservableList;
import javafx.scene.Scene;
import javafx.scene.layout.Pane;
import javafx.scene.paint.Color;
import javafx.stage.Stage;
import javafx.scene.shape.Polygon;
//import javafx.scene.shape.Polyline;
public class ShowPolygon extends Application {
@Override // Override the start method in the Application class
public void start(Stage primaryStage) {
// Create a pane, a polygon, and place polygon to pane
Pane pane = new Pane();
Polygon polygon= new Polygon();
pane.getChildren().add(polygon);
polygon.setFill(Color.WHITE);
polygon.setStroke(Color.BLACK);
ObservableList<Double> list = polygon.getPoints();
final double WIDTH = 200, HEIGHT = 200;
double centerX = WIDTH / 2, centerY = HEIGHT / 2;
double radius = Math.min(WIDTH, HEIGHT) * 0.4;
// Add points to the polygon list
for(int i = 0; i < 6; i++) {
list.add(centerX + radius * Math.cos(2 * i * Math.PI / 6));
list.add(centerY - radius * Math.sin(2 * i * Math.PI / 6));
}
// Create a scene and place it in the stage
Scene scene = new Scene(pane, WIDTH, HEIGHT);
primaryStage.setTitle("ShowPolygon"); // Set the stage title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
}
public static void main(String[] args) {
Application.launch(args);
}
}
```
The program creates a polygon (line 16) and adds it to a pane (line 17). The **polygon.getPoints()** method returns an **ObservableList<Double>** (line 20), which contains the **add** method for adding an element to the list (lines 28–29). Note that the value passed to **add(value)** must be a **double** value. If an **int** value is passed, the **int** value would be automatically boxed into an **Integer**. This would cause an error because the **ObservableList<Double>** consists of **Double** elements.
The loop adds six points to the polygon (lines 27–30). Each point is represented by its x- and y-coordinates. For each point, its x-coordinate is added to the polygon’s list (line 28) and then its y-coordinate is added to the list (line 29). The formula for computing the x- and
y-coordinates for a point in the hexagon is illustrated in Figure above (a).
If you replace **Polygon** by **Polyline**, the program displays a polyline as shown in Figure above (b). The **Polyline** class is used in the same way as **Polygon** except that the starting and ending points are not connected in **Polyline**.
| paulike |
1,895,163 | Dart Streams with RxDart: Debounce, Throttle, and Distinct | Dart, a versatile language for building applications, offers powerful tools for managing asynchronous... | 0 | 2024-06-20T19:24:52 | https://dev.to/francescoagati/exploring-dart-streams-with-rxdart-debounce-throttle-and-distinct-4023 | rxdart | Dart, a versatile language for building applications, offers powerful tools for managing asynchronous data streams. When coupled with `rxdart`, a reactive programming library for Dart, developers can leverage advanced stream handling techniques like debouncing, throttling, and distinct emission to efficiently manage data flow.
### Understanding Reactive Streams
In reactive programming, streams are a fundamental concept used to handle sequences of asynchronous events or data. Each event in a stream represents a discrete piece of data. Dart provides robust support for streams through its `dart:async` library, and `rxdart` extends this capability with additional operators and utilities.
### Key Concepts and Operators
Let's delve into the code snippet to explore how `rxdart` can enhance stream processing:
```dart
import 'dart:async';
import 'package:rxdart/rxdart.dart';
void main() {
// Create a BehaviorSubject for the text input
final textSubject = BehaviorSubject<String>();
// Debounce: Emit only after a specified duration of silence
final debouncedStream = textSubject.stream.debounceTime(Duration(milliseconds: 300));
// Throttle: Emit at most one value per specified duration
final throttledStream = textSubject.stream.throttleTime(Duration(milliseconds: 300));
// Skip duplicates: Emit only if the value is different from the previous one
final distinctStream = textSubject.stream.distinct();
// Listen to the debounced stream
debouncedStream.listen((value) {
print('Debounced: $value');
});
// Listen to the throttled stream
throttledStream.listen((value) {
print('Throttled: $value');
});
// Listen to the distinct stream
distinctStream.listen((value) {
print('Distinct: $value');
});
// Simulate text input
simulateTextInput(textSubject);
}
void simulateTextInput(BehaviorSubject<String> subject) {
const inputValues = [
'hello',
'hello', // Duplicate value
'hell',
'hello world',
'flutter',
'flutter', // Duplicate value
'dart',
];
// Emit values with a slight delay between them
var delay = 0;
for (final value in inputValues) {
Future.delayed(Duration(milliseconds: delay), () {
subject.add(value);
});
delay += 100;
}
// Close the subject after all values have been emitted
Future.delayed(Duration(milliseconds: delay + 500), () {
subject.close();
});
}
```
### Detailed Explanation
1. **BehaviorSubject**: This is a special type of `StreamController` from `rxdart` that allows subscribers to access the most recently emitted item and all subsequent items of the stream.
2. **Debounce**: The `debounceTime` operator emits an item from the source stream only after a specified duration of silence (i.e., no new items emitted). In our example, `debouncedStream` will print 'Debounced' values after 300 milliseconds of inactivity.
3. **Throttle**: In contrast, `throttleTime` ensures that at most one value is emitted per specified duration, ignoring values emitted in quick succession. Here, `throttledStream` will print 'Throttled' values every 300 milliseconds.
4. **Distinct**: The `distinct` operator filters out consecutive duplicate values from the stream. The `distinctStream` in our example will print 'Distinct' values only when the current value differs from the previous one.
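The `distinct` semantics in particular are easy to sketch outside Dart. The following plain JavaScript (purely illustrative, not rxdart) keeps a value only when it differs from the previous one, which is exactly what `distinct` does with consecutive duplicates:

```javascript
// Emulates rxdart's distinct(): drop consecutive duplicate values.
function distinct(values) {
  const out = [];
  let prev; // undefined sentinel: the first value always passes
  for (const v of values) {
    if (v !== prev) {
      out.push(v);
      prev = v;
    }
  }
  return out;
}

// Same input values as the simulateTextInput example above.
const input = ['hello', 'hello', 'hell', 'hello world', 'flutter', 'flutter', 'dart'];
console.log(distinct(input));
// ['hello', 'hell', 'hello world', 'flutter', 'dart']
```

Note that only *consecutive* duplicates are dropped; a value that reappears later, after other values, is emitted again.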
### Simulating Text Input
The `simulateTextInput` function mimics user input by emitting predefined values to `textSubject`. Each value is added with a slight delay, simulating realistic user interaction.
### Conclusion
By utilizing `rxdart` operators like `debounceTime`, `throttleTime`, and `distinct`, Dart developers can effectively manage and manipulate asynchronous data streams. These operators empower applications to handle user input, network responses, and other asynchronous events with precision and efficiency. Understanding and leveraging reactive programming concepts and tools like `rxdart` opens up new possibilities for creating responsive and reactive Dart applications.
| francescoagati |
1,895,162 | Practical Steps to Enhance Your Security Today | Introduction In the ever-evolving landscape of cybersecurity, numerous frameworks are... | 0 | 2024-06-20T19:21:47 | https://dev.to/jwtiller_c47bdfa134adf302/practical-steps-to-enhance-your-security-today-91d | security, sdlc | ## Introduction
In the ever-evolving landscape of cybersecurity, numerous frameworks are available to measure maturity and guide improvements, such as OWASP SAMM, Microsoft's SDL (Security Development Lifecycle), and the NIST Cybersecurity Framework (CSF). These frameworks offer comprehensive guidelines but can sometimes be overwhelming for organizations looking for a quick, pragmatic approach to enhance their security posture.
Having worked with complex government solutions that require high standards for confidentiality, integrity, and availability, I understand the importance of robust security measures. If you don't have a systematic approach today, following these steps can take your security light years ahead, setting the stage for adopting a more formal framework in the future.
## A Hands-On Approach to Jump-Start Your Security Today
If you're looking to take immediate, pragmatic steps towards improving your security, here’s a streamlined approach:
1. **Identify Your Assets**: List your assets, including data, systems, networks, and personnel.
2. **Conduct a Risk Analysis**:
- **Probability**: Estimate the likelihood of threats exploiting vulnerabilities.
- **Consequence**: Determine the potential impact of these threats.
3. **Implement Measures to Mitigate Risks**: Apply security controls, update software, train employees, and establish policies to reduce risks.
4. **Prioritize by Cost and Effectiveness**:
- Order the cost of implementing measures.
- Define the effectiveness of each measure to prioritize actions with the greatest return on investment.
5. **Focus on Quick Wins**: Target measures that can be implemented quickly and at a low cost but have a significant impact. Often, 80% of your desired improvements can be achieved with 20% of the effort.
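Step 4's ordering can be made concrete with a simple score. The measures, costs, and effectiveness values below are invented for illustration; a real analysis would draw on your own risk register:

```javascript
// Hypothetical measures: effectiveness = risk reduction (0-10),
// cost in arbitrary relative units.
const measures = [
  { name: 'Enable MFA',            cost: 2, effectiveness: 9 },
  { name: 'Patch management',      cost: 3, effectiveness: 8 },
  { name: 'Security awareness',    cost: 4, effectiveness: 6 },
  { name: 'Full network redesign', cost: 9, effectiveness: 7 },
];

// Quick wins first: highest effectiveness per unit of cost.
const prioritized = [...measures].sort(
  (a, b) => b.effectiveness / b.cost - a.effectiveness / a.cost
);

console.log(prioritized.map(m => m.name));
// ['Enable MFA', 'Patch management', 'Security awareness', 'Full network redesign']
```

Even this crude ratio surfaces the 80/20 point: the cheap, high-impact measures rise to the top, while expensive projects with similar impact drop to the bottom of the queue.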
### Conclusion
By following this pragmatic, hands-on approach, you can quickly and effectively enhance your organization's security posture. Comprehensive frameworks like OWASP SAMM, SDL, and NIST CSF provide extensive guidance, but focusing on immediate, practical steps allows you to make meaningful improvements without getting bogged down in complexity. Remember, the goal is to make significant strides in security with manageable effort, setting a strong foundation for future enhancements.
| jwtiller_c47bdfa134adf302 |
1,895,159 | Vasily Mesheryakov, main promoter of Freewallet scam | Alvin Hagg, the co-founder and CEO of Freewallet.org, has long shunned the spotlight. However, our... | 0 | 2024-06-20T19:18:52 | https://dev.to/feofhan/vasily-mesheryakov-main-promoter-of-freewallet-scam-39fi |

Alvin Hagg, the co-founder and CEO of Freewallet.org, has long shunned the spotlight. However, our recent investigation has unveiled a shocking truth: the real masterminds behind the Freewallet scam are two Russian immigrants, deeply involved in another fraudulent cryptocurrency venture, Cryptopay. Among them, Vasily Mesheryakov stands out as the primary PR manager orchestrating these scams.
Freewallet is promoted as a secure multi-currency wallet, but numerous reviews expose it as a major scam. Many users have lost their savings due to blocked or failed transactions. Similarly, Cryptopay, a payment and exchange service, entices victims with low fees, only for them to discover that they cannot withdraw their assets. Our findings reveal that both Freewallet and Cryptopay share the same ownership and employ identical fraudulent tactics.
Dmitry Gunyashov, another key figure, is believed to be at the helm of these operations, despite lacking experience in legitimate business management. His expertise lies in orchestrating scam operations while evading legal repercussions. The scheme is simple: clients create accounts, deposit funds, and then find their accounts blocked, leaving them unable to access their assets.
Mesheryakov's role is critical in maintaining the façade of legitimacy through aggressive marketing and buying positive reviews. His activities extend beyond Cryptopay and Freewallet, involving other ventures like Dioram.
For more information on Vasily Mesheryakov and to find contact details, visit https://vasily-mesheryakov.com/. Join us in holding these scammers accountable and help bring justice to the victims of these fraudulent schemes. If you have any additional information, please contact us at Freewallet-report@tutanota.com.
| feofhan | |
1,895,158 | Differences Between Slice and Splice : javascript | In Javascript slice and splice are used to manipulate Array, both sound a little bit the same but... | 0 | 2024-06-20T19:17:20 | https://dev.to/sagar7170/differences-between-slice-and-splice-javascript-2bfa | javascript, webdev, frontend, programming |
In JavaScript, slice and splice are used to manipulate arrays. Both sound similar, but they work differently. First, let's talk about slice.

The **slice** method cuts out a portion of an array, extracting a sub-array, but it does not change the original array. Instead, it returns a new array containing the extracted elements.
It takes two arguments: the starting index and the ending index (the element at the ending index itself is excluded). If you don't pass the ending index, it defaults to the length of the original array.
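For instance, a short example (with illustrative values) showing that `slice` returns a new array and leaves the original untouched:

```javascript
const fruits = ["apple", "banana", "cherry", "mango", "kiwi"];

// Extract elements from index 1 up to (but not including) index 3.
const sliced = fruits.slice(1, 3);
console.log(sliced); // → ["banana", "cherry"]

// Omitting the ending index extracts through the end of the array.
console.log(fruits.slice(2)); // → ["cherry", "mango", "kiwi"]

// The original array is unchanged.
console.log(fruits); // → ["apple", "banana", "cherry", "mango", "kiwi"]
```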

The **splice** method is used to remove and add values in an array. Unlike `slice`, it does not create a new array; it changes the original array in place (and returns an array of the removed elements).
It takes three kinds of arguments. First: the starting index. Second: the number of values to remove. A point to note here is that this second argument is not an ending index but the count of elements you want to delete from the array. Third (and onward): the values to add in place of the removed ones.
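A short example (again with illustrative values) showing both the returned removed elements and the in-place modification:

```javascript
const colors = ["red", "green", "blue", "yellow"];

// Starting at index 1, remove 2 elements and insert "purple" in their place.
const removed = colors.splice(1, 2, "purple");
console.log(removed); // → ["green", "blue"]

// The original array has been modified in place.
console.log(colors); // → ["red", "purple", "yellow"]
```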
**Conclusion**: both are used to manipulate arrays. `slice` extracts a sub-array into a new array and does not change the original, while `splice` adds and removes values and does change the original array. | sagar7170 |
1,895,157 | firenoms - The most🔥domains on the internet | Introducing firenoms; quality domains for your business/startup. https://firenoms.com | 0 | 2024-06-20T19:17:02 | https://dev.to/richbowen/firenoms-the-mostdomains-on-the-internet-4i67 | Introducing **firenoms**; quality domains for your business/startup.
https://firenoms.com | richbowen | |
1,895,156 | ANIXSOFT | *Our domain goes like: IOS mobile apps, Android apps , HTML5, PHP5, MySQL, postgresql, CakePHP,... | 0 | 2024-06-20T19:14:22 | https://dev.to/siddhartha_ghosh_2f69af08/anixsoft-2dk9 | webdev, programming, react, ai | **Our domain goes like:
IOS mobile apps, Android apps
, HTML5, PHP5, MySQL, postgresql, CakePHP, CodeIgnitor, Laravel, React Js, Angular Js & Node Js
Please visit our site https://www.anixsoft.co.in and visit our PORTFOLIO section, to view our latest works.
Our latest work (ongoing development) with angular 9 in front end and laravel in the service/API end is app.saaaitech.com & vuemotion.ai
Please visit our youtube channel
https://www.youtube.com/channel/UCnmKWHxC54oxPn7LWlS2SWg
To view our work in AI and mobile apps
** | siddhartha_ghosh_2f69af08 |
1,891,513 | Expo Router adoption guide: Overview, examples, and alternatives | Written by Marie Starck✏️ Expo announced the release of Expo Router v3 in January 2024, marking it... | 0 | 2024-06-20T19:09:00 | https://blog.logrocket.com/expo-router-adoption-guide | expo, react | **Written by [Marie Starck](https://blog.logrocket.com/author/marie-starck/)✏️**
Expo announced [the release of Expo Router v3](https://expo.dev/changelog/2024/01-23-router-3) in January 2024, marking it as the first universal, full-stack React framework. Expo launched about a decade ago, and this release reflects its attention to the continued changes in the mobile and frontend landscape.
A decade ago, mobile development was split up into iOS and Android. Then, React Native came on the scene and allowed for cross-platform development. Finally, Expo came and made mobile development easier than ever.
Today, Expo is a full suite of tools and services that allow developers to create truly cross-platform applications, mobile and web, and to do so completely in React. Together with Expo Router, Expo makes it easier than ever to implement navigation that will work on both mobile devices and the web seamlessly.
In this article, we’ll discuss Expo Router, its key features, and how it measures up to React Navigation to help you assess whether adopting this tool is the right decision for your next project.
## What is Expo Router?
Before we talk about Expo Router, it’s important to learn about the framework it’s based on: React Native.
As we mentioned earlier, before React Native was created in 2015, iOS and Android development was split. This split doubled development time and required companies to hire two types of developers if they wanted their apps in both the Apple Store and the Google Store.
During an internal hackathon, Meta invented React Native, a framework based on JavaScript that supported cross-platform development. Thanks to this framework, JavaScript developers specializing in web development could also work on mobile development for Android and iOS applications.
Expo is an open source platform built on React Native that allows developers to create applications that run on Android, iOS, and the web. It includes a suite of tools and services to build, run, and test native applications. Expo Router is probably one of the most popular Expo tools and is incredibly practical for handling navigation in a mobile application.
At first, Expo Router used the standard concept of stacks to create pages and navigate between them. The idea was that screens would be placed on top of each other in a stack, similar to the first-in-last-out principle.
Since then, Expo Router has moved away from this navigation philosophy and now uses a file-based router to implement the navigation. If you’ve worked with Next.js, you’re likely familiar with this concept already.
In file-based routing, anytime a file is added to the app directory, a new route is created:
```plaintext
/app
--> index.js matches '/'
--> about.js matches '/about'
--> /dashboard
--> index.js matches '/dashboard'
  --> login.js matches '/dashboard/login'
```
This type of routing makes it truly cross-platform, as it can also be used on the web, which isn’t possible with the mobile-only stack approach.
#### _Further reading:_
* [Getting started with React Native and Expo SDK](https://blog.logrocket.com/getting-started-with-react-native-and-expo-sdk/)
* [Building cross-platform apps with Expo instead of React Native](https://blog.logrocket.com/building-cross-platform-apps-expo-instead-of-react-native/)
* [A guide to native routing in Expo for React Native](https://blog.logrocket.com/native-routing-expo-react-native/)
* * *
## Why choose Expo Router?
Now that we have gone over what Expo Router is, let’s go over some reasons you might consider using it:
* **Performance**: There are no performance issues when it comes to Expo Router. Even though it adds an extra layer on top of React Navigation, the experience is fluid and smooth, with no flickering between the screens
* **Ease of use**: Expo Router’s popularity has a lot to do with its excellent DX. Like Create React App, you can create an Expo app with one command (`npx create-expo-app@latest`) and run it with another (`npx expo start`). There are no external libraries to install, and the famous Metro bundler compiles and creates the build for you to test and debug your native application
* **Bundle size**: Considering that Expo Router is an additional layer on top of React Navigation, there’s reason for concern about the bundle size due to running Expo. However, to mitigate this concern, Expo Router comes with the [Metro bundler](https://docs.expo.dev/guides/customizing-metro/) tool for compiling and building your native application, even for production. It’s built and optimized for React Native and comes with bundle splitting, modification, and web support
* **Community & ecosystem**: The Expo community is thriving. On top of a Discord group of over 33k members, they have GitHub repositories with over 30k stars and loads of discussions. On top of that, the Expo team is working hard to release new features. They released Expo Router v2 in July 2023 with a [Medium blog post](https://blog.expo.dev/introducing-expo-router-v2-3850fd5c3ca1) and released v3 just half a year later in January 2024 with [game-changing new features](https://expo.dev/changelog/2024/01-23-router-3) such as bundle splitting, a testing library, and more
* **Learning curve**: The learning curve for Expo is fairly low, especially for JavaScript developers. If you have experience coding for the web, you will likely find it easy to pick up Expo, since file-based routing is already used in other popular frameworks
* **Documentation**: Expo’s docs are complete, thorough, and detailed. The team has even provided [a repository with example projects](https://github.com/expo/examples) for people to experiment with
* **Integrations**: Expo works pretty well with other frameworks. Whether you want to integrate a library such as NativeWind to implement Tailwind CSS styling or a third-party application like Firebase to handle authentication, the setup is straightforward.
* **Ecosystem**: The great thing about Expo Router is that it doesn’t come by itself. The Expo team is currently developing additional tools to assist with application development. Recently, the team created Expo Application Services (EAS), which integrates cloud services to test, build, and deploy your applications
Of course, it’s also important to weigh Expo Router’s drawbacks as you consider whether you should choose it. A few cons to keep in mind include:
* **Confusing routing**: While file-based routing brings working with Expo Router closer to a web-like DX, it’s a step away from the stack-based routing that mobile developers are used to. Some developers recommend learning React Navigation basics before moving on to Expo Router
* **Navigation options**: An alternative like React Navigation offers more types of navigation, including Drawer, Tab, Stack, and more
Overall, Expo Router is a great choice for developers creating mobile applications. It’s both simple and yet filled with features and tools to get up to speed quickly. Its thriving community also ensures that there will be tons of resources to help you out if you’re just getting started with Expo Router.
### Getting started with Expo Router
Creating an Expo application is very easy. Like Create React App, there is a command that will create a sample application:
```bash
npx create-expo-app@latest --template tabs@50
```
This will generate a simple application with Expo Router already set up using a `tabs` template at version `50`. You can then run the project with `npx expo start`. Once done, you should see this: *(screenshot of the generated tabs app)*
#### _Further reading:_
* [Getting started with NativeWind: Tailwind for React Native](https://blog.logrocket.com/getting-started-nativewind-tailwind-react-native/)
* [Integrating Firebase authentication into an Expo mobile app](https://blog.logrocket.com/integrating-firebase-authentication-expo-mobile-app/)
* [Best CI/CD tools for React Native](https://blog.logrocket.com/best-ci-cd-tools-react-native/)
* * *
## Key Expo Router features to know
Expo Router has some key features that are helpful to understand as you evaluate this framework. Let’s go through them now.
### Navigation logic
As mentioned above, Expo Router uses file-based routing logic, meaning that every file added to the app directory becomes a new route. You can also create HTML-specific routes along with dynamic routes.
Headers and tab bars are created using layout files. By adding a `_layout.js` file, you can specify your layout, which will wrap around the routes to persist the layout across different pages.
Navigation between pages is done through links, either with the `<Link />` React component or the `<a>` element. This navigation approach comes closer to how navigation is done on the web.
Expo’s sample project, for example, has this structure:
```plaintext
/app
--> modal.tsx // modal component
--> _layout.tsx // specifies the layout (headers, tabs, ...)
--> +html.tsx // HTML-specific to configure root HTML
--> +not-found.tsx // 404 pages
  --> (tabs) // a route group: the '(tabs)' segment does not appear in the URL
--> index.tsx // matches '/'
--> two.tsx // matches '/two'
--> _layout.tsx // layout for the tab pages
```
### Deep linking
Deep linking is the process by which a user will click on a link outside of your app and be redirected to either a specific part of the app or the App Store to download the app. It’s helpful when you need to direct a user to your app from elsewhere, such as from a webpage.
There isn’t any configuration needed, as deep linking comes out-of-the-box with Expo Router. Since Expo Router uses file-based routing, you already have the URLs by default — i.e., `/auth/login`. No prefix is required, as Expo Router handles it.
You do, however, need to set up your scheme in your `app.json` config file, like so:
```json
{
"scheme": "your-app-scheme"
}
```
You can test your links with `uri-scheme`. See [Expo Router’s documentation](https://docs.expo.dev/guides/linking/#testing-urls) on the subject to help you.
### Native support
Expo applications can work with third-party React Native libraries or custom native code. Expo Router uses [autolinking](https://docs.expo.dev/modules/autolinking/) to link native dependencies to the Expo project.
Autolinking enables developers to integrate third-party libraries easily and without additional configuration. You simply have to run `npm install` and rerun `pod install`.
#### _Further reading:_
* [Implementing in-app updates for React Native apps](https://blog.logrocket.com/implementing-in-app-updates-react-native/)
* [React Native push notifications: A complete how-to guide](https://blog.logrocket.com/react-native-push-notifications-complete-guide/)
* * *
## Use cases for Expo Router
Many developers have used Expo Router simply because it’s easy to use, but it’s important to note whether a framework is scalable. Realizing the technology you used in your MVP doesn't scale with your growth can be frustrating, especially if you have to discard and rebuild the entire project.
This is why it’s interesting to see which customers use Expo and why. Thankfully, Expo has a [list of customers](https://expo.dev/customers) on their website, many of whom explain their particular use case and why Expo Router is the best fit for their needs.
Insider, for example, explains that they moved to Expo to move away from the development split between iOS, Android, and web that required them to hire three developers to support what should essentially be the same product. The team had a proof of concept in a week and a fully redesigned, multi-platform app nine months later.
Goody, on the other hand, used Expo from the start. What they liked were the out-of-the-box tools that came with Expo. Push notifications, for example, work as-is, which was a vast improvement over React Native, which requires an SDK or a push notification server. This ease of use made development much faster than if they had chosen plain React Native.
From these example customer stories, you can see that the list of Expo Router’s out-of-the-box features and its quick setup can allow your team to reduce development-related costs and time by maintaining only one application. Your team could also become more efficient, as Expo Router can help you create new features more quickly.
## Expo Router vs. React Navigation
The main alternative to Expo Router is React Navigation. Both frameworks were developed by the Expo team. For developers who don't want the Expo SDK or are unfamiliar with file-based routing, using React Native with React Navigation would be a good choice.
Let’s go over their main differences:
* **Features**: The main difference between the two is their navigation logic. Expo uses file-based routing and React Navigation uses stacks. If you have more web experience, you will find Expo easier to use. For seasoned mobile developers, React Navigation will make sense to you. Apart from that, Expo comes with lots of out-of-the-box features
* **Performance**: Today’s users expect apps to have a smooth and fluid UI that works quickly without errors. As a result, frameworks are putting all the work into making their apps performant. Expo Router and React Navigation are no exception. Both are near-native when it comes to performance
* **Community**: Both tools have a thriving Discord community. React Navigation has more than 200k members compared to over 33k members for Expo. But for either framework, you’ll find an active group of people to connect and share resources, questions, and answers with
* **Documentation**: There is extensive documentation, along with articles and tutorials, available for both tools. Fortunately for new developers, many experienced developers have written articles or answered questions on Stack Overflow about both tools
* **Use cases**: It is important to note that you can use React Navigation in an Expo application — you’re not required to use Expo Router. However, Expo Router is tied to Expo, which means you can access a lot of features, but you’re stuck in that ecosystem. So, despite the fact that Expo Router is built on React Native, you’re dependent on Expo for updates and features. One reason to use React Native and React Navigation instead is to have more freedom for customization
Let’s summarize this information in a table so you can compare the two options at a glance:
<table>
<thead>
<tr>
<th></th>
<th>Expo Router</th>
<th>React Navigation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Features</td>
<td>File-based navigation logic; cross-platform (iOS, Android, web); deep linking; Expo tools (Metro bundler, …)</td>
<td>Stack and drawer navigation logic; web support; deep linking; server rendering</td>
</tr>
<tr>
<td>Performance</td>
<td>Near native</td>
<td>Near native</td>
</tr>
<tr>
<td>Community</td>
<td>Strong community</td>
<td>Strong community</td>
</tr>
<tr>
<td>Documentation</td>
<td>Extensive</td>
<td>Extensive</td>
</tr>
<tr>
<td>Use Cases</td>
<td>Great for developers wanting something out of the box with lots of tools and services.</td>
<td>Great for developers wanting more flexibility when customizing their navigation.</td>
</tr>
</tbody>
</table>
* * *
## Conclusion
Expo Router provides a file-based routing solution offered by the Expo team with crucial features such as deep linking and native support.
Expo has been widely popular for the last ten years, and it’s easy to see why. The Expo Router documentation is extensive and includes example repositories, and there’s a thriving community of developers you can seek help from and engage with.
Whether you are trying to learn mobile development or working on a production application, Expo Router is a good choice for you.
---
## Get set up with LogRocket's modern error tracking in minutes:
1. Visit https://logrocket.com/signup/ to get an app ID.
2. Install LogRocket via NPM or script tag. `LogRocket.init()` must be called client-side, not server-side.
NPM:
```bash
$ npm i --save logrocket
```

```javascript
// Code:
import LogRocket from 'logrocket';
LogRocket.init('app/id');
```
Script Tag:
Add to your HTML:
```html
<script src="https://cdn.lr-ingest.com/LogRocket.min.js"></script>
<script>window.LogRocket && window.LogRocket.init('app/id');</script>
```
3. (Optional) Install plugins for deeper integrations with your stack:
* Redux middleware
* ngrx middleware
* Vuex plugin
[Get started now](https://lp.logrocket.com/blg/signup) | leemeganj |
1,895,132 | Journey into new Web Tech | Today's web offers many new technologies, the core of which is to employ a Content Management tool... | 0 | 2024-06-20T19:07:18 | https://dev.to/msanders5/journey-into-new-web-tech-2o5o | Today's web offers many new technologies, the core of which is to employ a Content Management tool that the site owner can edit their content. From a development perspective, the next steps would be to pull that content, code the client side scripts and then finally to serve it as a web application.
While there are many tools to accomplish the above, the remainder of this post will outline the tools I chose to develop a site. Another goal is to re-use such work for creating any web applications for prospective clients. For content management, I decided upon Ghost CMS and for Static Site generation, I decided upon Astro.
## Ghost CMS
Besides the installation covered at https://ghost.org/docs/install/, the next step in using Ghost CMS is to create or use a theme. This has a learning curve if you are creating a new theme. Ghost themes are written using [Handlebars](https://handlebarsjs.com/), another templating language to learn if you have not already done so. Most of the existing themes I have looked at also use gulp to concatenate the CSS files. Ghost has some pretty good [documentation on creating themes](https://ghost.org/docs/themes/).
I have a working theme that I uploaded to [GitHub](https://github.com/devdog66/meikos-ghost-theme) so that the reader can follow along for the rest of the post.
A good practice that I have seen many Ghost theme authors follow is to separate concerns, meaning to have separate CSS files for certain areas or components of a site. An example of how I did this is below.

One CSS file, I chose to name it main.css, imports the other CSS files that gulp will concatenate and minify. Here is how my main.css file looks.

Some may find limitations with Ghost theming. One such limitation is with navigation links: out of the box, there are no multi-level capabilities, nor are there ways to assign icons to links if desired. Therefore, I made the choice to hard-code the navigation for a site using some placeholders. Another limitation is the seeming lack of an ability to preview the theme locally. I have tried the [express-handlebars](https://www.npmjs.com/package/express-handlebars) package to do this, but ran into roadblocks with Ghost-specific variables and statements. I may revisit that attempt later. Right now, the only way I see to preview is to zip up the theme files and upload the archive to the Ghost admin area. Then you can preview it on Ghost.
The project is fairly simple with the following npm steps outlined below.

The first step is a utility from Ghost called gscan. This allows you to check the syntax of your .hbs files and the theme overall to make sure it adheres to a valid Ghost theme. Next is the build step that gulp will use to pack the main.css file. Now to create the theme zip file with the npm zip script. It uses gulp again to do this. I chose not to build with this step as sometimes the errors that gscan produces are not detailed enough. If one uploads the zipped theme even with errors, the Ghost admin site will provide more details as to what specifically needs to be fixed.
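As a rough sketch, the scripts section of the theme's package.json looks something like this (the script names and gulp task names here are illustrative, not copied from the repo):

```json
{
  "scripts": {
    "test": "gscan .",
    "build": "gulp build",
    "zip": "gulp zip"
  }
}
```

With a layout like this, `npm run test` invokes gscan against the theme directory, `npm run build` has gulp pack main.css, and `npm run zip` produces the theme archive to upload to the Ghost admin.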
A final optional step is to copy the theme or assets folder to the next project in Astro. Here is the example of the main site template, default.hbs.

## Astro SSG
As of now, I have a Ghost instance with a theme of my liking, and I have created the content on my Ghost instance. My next goal is to pull the content from Ghost CMS and create the static assets needed for a web site. This is where Astro comes in. I have uploaded another repo to [GitHub](https://github.com/devdog66/meikos-astro) for the reader to follow along with.
The project structure looks like the following.

The public folder mostly contains the assets sub folder copied from the ghost theme. The src folder is broken down into the following:
- components = *Reusable parts to be included in layouts or pages.*
- content = *The folder where blogs and pages will be imported to. This follows Astro's content collections instructions.*
- importer = *This has the script used for importing content.*
- layouts = *This has the main layout for the site. Can include multiple layouts if desired.*
- pages = *This contains slugs to render pages or posts.*
The meat of the project is the importer/index.ts script. It uses an API that needs to be set up in the Ghost instance with an API key. In order to get Astro and Ghost to work together nicely, some workarounds were needed for this step.
Ghost exports its content as HTML, while Astro is built around Markdown. Therefore, the exported content should be stored as .mdx files to work. The next workaround deals with the fact that Ghost does not always export well-formed HTML, so I was forced to use a tool like sanitize-html to take care of that. I had to use it like below.

The layout is pretty simple and mirrors what was used with the default layout for the Ghost theme.

The page slug looks like the following and shows how to use the 'page' content collection.

All of this results in a beautiful cover page for a new web site.

And there is a list of recent blog posts, including this one. Talk about eating your own dog food.

| msanders5 | |
1,895,145 | OpenTelemetry | What is OpenTelemetry? OpenTelemetry is an open-source project that provides a set of... | 0 | 2024-06-20T19:06:40 | https://dev.to/lakshanwd/opentelemetry-k4o | ## What is OpenTelemetry?
OpenTelemetry is an open-source project that provides a set of APIs, libraries, agents, and instrumentation to enable observability in software applications. Tracing is one of the core features of OpenTelemetry, which helps in tracking the execution of operations within and across services.
## What is OpenTelemetry Tracing?
OpenTelemetry tracing is a way to capture and visualise the flow of requests as they traverse through different components of a distributed system. It allows developers to understand how different parts of an application interact and identify performance bottlenecks, errors, and latency issues.
### Key Concepts
* #### Trace
A trace represents the entire journey of a request as it moves through a system. It consists of a series of spans that are linked together.
* #### Span
A span represents a single operation within a trace. It contains information such as the operation name, start and end timestamps, attributes (metadata), and references to other spans (parent/child relationships).
* #### Context Propagation
Context propagation allows trace context to be passed along with requests as they move through different services and components. This ensures that spans are correctly linked together to form a complete trace.
* #### Sampling
Sampling determines which traces are collected and reported. It helps control the volume of trace data, balancing the need for observability with resource constraints.
### How OpenTelemetry Tracing Works
* #### Instrumentation
OpenTelemetry provides instrumentation libraries for various programming languages and frameworks. These libraries automatically capture trace data for common operations like HTTP requests, database queries, and more.
* #### Context Injection
When a request enters a service, OpenTelemetry injects trace context into the request headers. This context is propagated through downstream services, ensuring that all operations related to the initial request are included in the same trace.
* #### Span Creation
Each operation within a service creates a new span. Developers can create custom spans to capture additional operations or specific code blocks.
* #### Trace Exporting
Collected trace data is exported to a backend for analysis. OpenTelemetry supports various backends, including Jaeger, Zipkin, Prometheus, and more. The exported data can be visualised to understand request flows and identify issues.
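To make the trace/span relationship concrete, here is a minimal toy sketch in plain JavaScript. It only illustrates the concepts above (named spans, parent/child links, and a shared trace ID); it is not the real OpenTelemetry API:

```javascript
// A toy span: records a name, timing, and its parent, mimicking how
// tracing libraries link individual operations into a single trace.
class Span {
  constructor(name, parent = null) {
    this.name = name;
    this.parent = parent;
    // A child inherits its parent's trace ID; a root span starts a new trace.
    this.traceId = parent ? parent.traceId : Math.random().toString(16).slice(2);
    this.start = Date.now();
  }
  end() {
    this.duration = Date.now() - this.start;
  }
}

// "Context propagation" in miniature: the database span is created with the
// request span as its parent, so both belong to the same trace.
const requestSpan = new Span("GET /checkout");
const dbSpan = new Span("db.query", requestSpan);
dbSpan.end();
requestSpan.end();

console.log(dbSpan.traceId === requestSpan.traceId); // → true
console.log(dbSpan.parent.name); // → "GET /checkout"
```

In a real system, the trace context travels across service boundaries in request headers rather than via an in-memory parent reference, but the linkage idea is the same.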
---
### Benefits of OpenTelemetry Tracing
* #### End-to-End Visibility
Provides a comprehensive view of how requests are processed across different services, helping to identify bottlenecks and improve performance.
* #### Error Detection
Helps in identifying and diagnosing errors by showing where they occur in the request flow.
* #### Performance Optimisation
By measuring the latency of different operations, developers can pinpoint and optimise slow parts of the system.
* #### Improved Debugging
Detailed trace data makes it easier to debug complex issues that involve multiple services. | lakshanwd | |
1,895,154 | 1552. Magnetic Force Between Two Balls | 1552. Magnetic Force Between Two Balls Medium In the universe Earth C-137, Rick discovered a... | 27,523 | 2024-06-20T19:04:13 | https://dev.to/mdarifulhaque/1552-magnetic-force-between-two-balls-485i | php, leetcode, algorithms, programming | 1552\. Magnetic Force Between Two Balls
Medium
In the universe Earth C-137, Rick discovered a special form of magnetic force between two balls if they are put in his new invented basket. Rick has `n` empty baskets, the <code>i<sup>th</sup></code> basket is at `position[i]`, Morty has `m` balls and needs to distribute the balls into the baskets such that the **minimum magnetic force** between any two balls is **maximum**.
Rick stated that magnetic force between two different balls at positions `x` and `y` is `|x - y|`.
Given the integer array `position` and the integer `m`. Return _the required force_.
**Example 1:**

- **Input:** position = [1,2,3,4,7], m = 3
- **Output:** 3
- **Explanation:** Distributing the 3 balls into baskets 1, 4 and 7 will make the magnetic force between ball pairs [3, 3, 6]. The minimum magnetic force is 3. We cannot achieve a larger minimum magnetic force than 3.
**Example 2:**
- **Input:** position = [5,4,3,2,1,1000000000], m = 2
- **Output:** 999999999
- **Explanation:** We can use baskets 1 and 1000000000.
**Constraints:**
- `n == position.length`
- <code>2 <= n <= 10<sup>5</sup></code>
- <code>1 <= position[i] <= 10<sup>9</sup></code>
- All integers in `position` are **distinct**.
- `2 <= m <= position.length`
**Solution:**
```php
class Solution {
/**
* @param Integer[] $position
* @param Integer $m
* @return Integer
*/
function maxDistance($position, $m) {
sort($position);
$left = 1;
$right = $position[count($position) - 1];
while ($left < $right) {
$mid = ($left + $right + 1) >> 1;
if ($this->check($position, $mid, $m)) {
$left = $mid;
} else {
$right = $mid - 1;
}
}
return $left;
}
public function check($position, $f, $m) {
$prev = $position[0];
$cnt = 1;
for ($i = 1; $i < count($position); ++$i) {
$curr = $position[$i];
if ($curr - $prev >= $f) {
$prev = $curr;
++$cnt;
}
}
return $cnt >= $m;
}
}
```
**Contact Links**
- **[LinkedIn](https://www.linkedin.com/in/arifulhaque/)**
- **[GitHub](https://github.com/mah-shamim)**
| mdarifulhaque |
1,895,153 | Extending Iterable for Custom Aggregations in Dart | Dart is a powerful programming language that allows developers to extend the functionality of... | 0 | 2024-06-20T19:03:02 | https://dev.to/francescoagati/extending-iterable-for-custom-aggregations-in-dart-3id9 | Dart is a powerful programming language that allows developers to extend the functionality of existing classes using extension methods. This feature can be incredibly useful when you want to add custom aggregations and operations to the `Iterable` class. In this article, we'll explore how to extend `Iterable` with custom aggregation methods for summing, finding the maximum value, and finding the minimum value.
## Adding Custom Aggregations to Iterable
We will create an extension called `IterableExtensions` that adds three methods to `Iterable`:
1. **sum**: Computes the sum of the elements based on a provided function.
2. **maxBy**: Finds the maximum element based on a provided comparison function.
3. **minBy**: Finds the minimum element based on a provided comparison function.
Here is the Dart code for the extension:
```dart
extension IterableExtensions<T> on Iterable<T> {
num sum(num Function(T) f) {
return fold(0, (previous, element) => previous + f(element));
}
T? maxBy(Comparable Function(T) f) {
return isEmpty ? null : reduce((a, b) => f(a).compareTo(f(b)) >= 0 ? a : b);
}
T? minBy(Comparable Function(T) f) {
return isEmpty ? null : reduce((a, b) => f(a).compareTo(f(b)) <= 0 ? a : b);
}
}
```
### Method Breakdown
- **sum**: This method takes a function `f` that maps each element to a `num` value. It uses the `fold` method to iterate through the elements, accumulating their sum.
- **maxBy**: This method takes a function `f` that returns a `Comparable` value for each element. It uses the `reduce` method to find the element with the maximum value based on the comparison function.
- **minBy**: This method is similar to `maxBy`, but it finds the element with the minimum value.
## Usage Examples
Let's see how these extension methods can be used in practice:
```dart
void main() {
  var items = [1, 2, 3, 4, 5];

  print(items.sum((x) => x)); // Output: 15
  print(items.maxBy((x) => x)); // Output: 5
  print(items.minBy((x) => x)); // Output: 1

  var strings = ["apple", "banana", "cherry"];

  print(strings.maxBy((s) => s.length)); // Output: "banana"
  print(strings.minBy((s) => s.length)); // Output: "apple"
}
```
### Explanation
- **Summing Elements**: The `sum` method is used on a list of integers to calculate the total sum. In this case, it sums up to 15.
- **Finding Maximum and Minimum Values**: The `maxBy` and `minBy` methods are used to find the maximum and minimum elements in a list based on the value itself for integers and based on the string length for strings.
## Benefits of Extension Methods
Using Dart's extension methods, you can add functionality to existing classes and types in a way that is both expressive and easy to read. This approach allows you to write cleaner and more maintainable code by encapsulating common operations and making them available directly on the `Iterable` interface.
In conclusion, extending `Iterable` with custom aggregation methods can greatly enhance the expressiveness of your Dart code. By using the provided `sum`, `maxBy`, and `minBy` methods, you can perform complex operations in a concise and readable manner. This makes your code not only easier to write but also easier to understand and maintain. | francescoagati | |
1,895,152 | Clash Of Clans Mod APK | Clash of Clans Mod APK offers an exciting twist to the original game, providing players with... | 0 | 2024-06-20T19:02:38 | https://dev.to/clashof_clansfun/clash-of-clans-mod-apk-2ek9 | gamedev |

<a href="https://clashofclans.fun/">Clash of Clans Mod APK offers an exciting twist to the original game</a>, providing players with enhanced features and unlimited resources. This modified version allows gamers to access unlimited gems, gold, elixir, and dark elixir from the start, speeding up game progression and enabling them to build and upgrade their villages swiftly. With these resources readily available, players can experiment with various strategies, construct powerful defenses, and train armies quickly without the usual resource constraints.
Moreover, the mod APK often includes additional features such as custom heroes, troops, and buildings not found in the standard version, offering a unique gameplay experience. Players can engage in more challenging battles, participate in clan wars with strengthened armies, and explore new levels of creativity in base design.
However, it's essential to download Clash of Clans Mod APK from trusted sources to avoid potential security risks or issues with gameplay stability. Due to its modified nature, it may not always receive updates or support from the official game developers, affecting compatibility and functionality over time.
In conclusion, Clash of Clans Mod APK provides an exhilarating alternative for players seeking a faster-paced and resource-abundant gameplay experience, albeit with considerations for security and long-term support.
| clashof_clansfun |
1,894,813 | Maximizing Natural Light in Modular Kitchen Layouts | Natural light plays a crucial role in enhancing the ambiance and functionality of any space, and the... | 0 | 2024-06-20T13:08:21 | https://dev.to/beta_new_03fe0b223c4d3801/maximizing-natural-light-in-modular-kitchen-layouts-3dlf | kitchen | Natural light plays a crucial role in enhancing the ambiance and functionality of any space, and the kitchen is no exception. A well-lit kitchen not only feels more spacious and inviting but also contributes to a healthier and more energy-efficient environment. When designing a modular kitchen, maximizing natural light should be a priority to create a bright and airy atmosphere. Here’s how you can effectively harness natural light in your modular kitchen layout.
1. Strategic Window Placement
The placement and size of windows are fundamental in maximizing natural light. When designing your modular kitchen, identify key areas where windows can be strategically placed to allow maximum sunlight penetration. Consider installing larger windows or multiple windows in areas where the kitchen receives the most sunlight during the day.
2. Utilize Glass Doors and Skylights
In addition to windows, incorporating glass doors and skylights can significantly increase natural light intake. Glass doors leading to outdoor spaces or adjacent rooms with natural light sources can help distribute light throughout the kitchen. Skylights are another excellent option, especially in kitchens located in buildings where windows are limited or small.
3. Opt for Light-Reflective Surfaces
Choose materials and finishes that reflect light rather than absorb it. Opt for glossy or semi-glossy surfaces for cabinets, countertops, and backsplashes. Light-colored cabinetry, such as whites, creams, or light wood tones, can also help bounce light around the space, making the kitchen feel brighter and more open.
4. Minimize Window Coverings
While privacy is important, heavy or opaque window coverings can block natural light. Instead, opt for sheer or light-filtering curtains, blinds, or shades that allow sunlight to penetrate while still providing necessary privacy. Consider installing window treatments that can be easily opened during the day to maximize natural light and closed for privacy in the evenings.
5. Open Floor Plans and Layout Considerations
If possible, design your modular kitchen in an open floor plan layout that allows natural light to flow seamlessly from adjacent rooms. Avoid blocking natural light sources with large appliances or bulky furniture. Keep the kitchen layout open and spacious to maintain a bright and airy feel throughout the day.
6. Mirror and Glass Accents
Integrating mirrors and glass accents into your kitchen design can help amplify natural light by reflecting it throughout the space. Consider incorporating mirrored backsplashes, glass-fronted cabinets, or mirrored panels strategically placed opposite windows or light sources to enhance brightness.
7. Maximize Daylight Hours with Timeless Design
A timeless and efficient kitchen design will ensure that you maximize natural light throughout the day. Keep functionality in mind when planning the layout to optimize workflow and maintain an unobstructed path for sunlight. This approach not only enhances the kitchen’s aesthetic appeal but also improves energy efficiency by reducing reliance on artificial lighting during daylight hours.
8. Regular Maintenance and Cleaning
Finally, ensure that windows, glass doors, and skylights are kept clean and free of obstructions such as dust or dirt buildup. Regular maintenance will ensure that natural light can enter your kitchen unimpeded, maintaining a bright and cheerful atmosphere year-round.
In conclusion, maximizing natural light in your modular kitchen layout involves thoughtful planning and design considerations. By strategically placing windows, incorporating glass doors and skylights, choosing light-reflective surfaces, and maintaining an open layout, you can create a kitchen that is not only functional but also bright, inviting, and energy-efficient. Embrace the beauty of natural light in your kitchen design to enhance your cooking experience and create a space where you love to spend time.
https://tuskerkitchens.in/ | beta_new_03fe0b223c4d3801 |
1,895,151 | Should we or shouldn't we? Create a project management mobile app | If you're new to OpenProject-we're a leading open source project management tool that helps teams... | 0 | 2024-06-20T19:01:49 | https://dev.to/openproject/should-we-or-shouldnt-we-create-a-project-management-mobile-app-31a9 | productivity, feedback, discuss, opensource | If you're new to OpenProject-we're a leading open source project management tool that helps teams make progress in a busy world.
The OpenProject team is close with the community and regularly asks questions, takes requests, and sends new updates.
This summer the team is asking: Do you want an OpenProject mobile app?
👉 [Take our quick survey](https://openproject.limequery.com/116122?lang=en)
🔒 Your responses are completely anonymous!
🗓️ The survey will be open until July 15, 2024.
In the comments, tell us: do you already use a project management mobile app? What features do you love about it? What do you miss? | jenwikehuger |
1,895,150 | Cost versus Effort, an important lesson for self-employed | Early on in my career I worked for a company that developed bespoke websites and software. Almost all... | 0 | 2024-06-20T19:01:26 | https://mardy.dev/2022/06/cost-versus-effort-an-important-lesson-for-self-employed/ | webdev, workplace, productivity, career | Early on in my career I worked for a company that developed bespoke websites and software. Almost all the internal systems and processes were bespoke developments. In each instance the process complimented the solution and vice versa. When I was asked to create a solution that would generate a word document for new projects from our bespoke CRM, I naturally started building a bespoke method.
For the first time, I was about to learn the important balance of cost versus effort.
## Cost & Effort
There’s nothing wrong with putting effort into building a bespoke solution. There’s nothing wrong with spending money on the cost of ready made products or outsourcing a task. However, not weighing up the cost versus effort for different tasks and budgets can lead to regret, ineffective performance and lost earnings.
It’s a simple formula, really: “Will this cost more in paid man-hours than buying a ready-made solution?”
Thinking this way was unnatural for me when I worked in my first job, because costs aren’t really the responsibility of a Junior Developer. In fact, my responsibility was to develop bespoke C# ASP.NET websites and software, so I would always try to solve a solution with code.
## Taking too much time
Trying to create a bespoke solution for my company’s CRM took an awfully long time. In fact, it probably took about 2-3 days, which, to the frustration of my boss, was a little too long.
I also made the mistake of getting to work and starting early, then leaving on time. Which for some reason always appears worse than arriving on time and leaving late, despite being in the office for the same amount of time!
## The Cost was cheaper than the Effort
The worst part was that the particular bit of code I was struggling with could have been massively simplified with a paid code library. I think it was about £75, which to me sounded like a lot at the time. There was an alternative method which required coding, so I thought this would be the better route.
What I should have done was look at my hourly rate and the required time to achieve the coded method, then weigh this up against the cost of the paid product.
From the company’s perspective, they were paying something like £10 an hour for my development (it was a long time ago).
`product price / hourly rate = 75 / 10 = 7.5`
For the bespoke method to be a viable option I would have had to fully develop the solution within 7.5 hours. However, as I’ve said, it actually took me 2-3 days and involved the end user (my boss), who kept feeding back issues.
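That break-even check generalizes into a tiny helper. The numbers below are the article's own example (£75 product, £10/hour rate); the function names are just illustrative:

```javascript
// Hours a bespoke build can take before the ready-made product
// becomes the cheaper option.
function breakEvenHours(productPrice, hourlyRate) {
  return productPrice / hourlyRate;
}

// Pick the cheaper path given an honest estimate of the build time.
function cheaperOption(productPrice, hourlyRate, estimatedHours) {
  return estimatedHours > breakEvenHours(productPrice, hourlyRate)
    ? "buy"
    : "build";
}

console.log(breakEvenHours(75, 10)); // 7.5
console.log(cheaperOption(75, 10, 20)); // "buy" — 2-3 days blows past 7.5 hours
```

The hard part in practice isn't the arithmetic, it's being honest about `estimatedHours` before you start coding.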
## A lesson in career performance
At the time I didn’t look at this equation as a cost efficiency equation, but more of a career performance equation. My boss didn’t look favourably on my position as a result of my performance on this occasion and I could sense that I’d made a big cock up (in fact, much later in a conversation he implied that he was considering letting me go!).
I continued to look at jobs and tasks this way going forward, but now that I’m self-employed it has a new meaning. There are no points for developing a beautiful piece of code when you could have paid for a solution and saved both time and money. The client will only see the result of the work done, not the method of how it’s done. With the client’s happiness being equal, the best thing to do is to stop, weigh up the cost versus effort, and take the easiest and most viable path.
That boss of mine sure gave me some stick and although it didn’t feel very positive at the time, it was a valuable lesson, one that still serves me to this day. | mardydev |
1,895,149 | Extending String for Validation in Dart 3 | In Dart 3, one of the powerful features you can leverage is extensions. Extensions allow you to add... | 0 | 2024-06-20T19:00:27 | https://dev.to/francescoagati/extending-string-for-validation-in-dart-3-4jp0 | dart, extensions, string | In Dart 3, one of the powerful features you can leverage is extensions. Extensions allow you to add new functionality to existing libraries. In this article, we will demonstrate how to extend the `String` class to include common validation checks such as email validation, numeric check, and phone number validation.
## Creating the String Extension
Let's start by creating an extension on the `String` class. We will define three validation methods:
1. `isValidEmail` - Checks if the string is a valid email format.
2. `isNumeric` - Checks if the string consists only of numeric characters.
3. `isPhoneNumber` - Checks if the string is a valid phone number.
Here's the code for the `StringValidation` extension:
```dart
extension StringValidation on String {
  bool get isValidEmail {
    final regex = RegExp(r'^[^@]+@[^@]+\.[^@]+');
    return regex.hasMatch(this);
  }

  bool get isNumeric {
    final regex = RegExp(r'^-?[0-9]+$');
    return regex.hasMatch(this);
  }

  bool get isPhoneNumber {
    final regex = RegExp(r'^\+?[0-9]{10,13}$');
    return regex.hasMatch(this);
  }
}
```
### Explanation
- **isValidEmail**: This method uses a regular expression to check if the string follows a basic email pattern (e.g., `example@test.com`).
- **isNumeric**: This method checks if the string contains only numeric characters, including an optional leading minus sign for negative numbers.
- **isPhoneNumber**: This method validates if the string is a phone number. It allows an optional leading plus sign and checks for a length between 10 to 13 digits.
## Usage
Using these extensions is straightforward. You simply call the new methods on any string instance. Below is an example demonstrating how to use these validation methods:
```dart
void main() {
  var email = "example@test.com";
  var phone = "+1234567890";
  var number = "12345";

  print(email.isValidEmail); // Output: true
  print(phone.isPhoneNumber); // Output: true
  print(number.isNumeric); // Output: true
}
```
### Explanation
- **email.isValidEmail**: Checks if the `email` string is a valid email. The output will be `true` if the string matches the email pattern.
- **phone.isPhoneNumber**: Validates if the `phone` string is a valid phone number. The output will be `true` if the string matches the phone number pattern.
- **number.isNumeric**: Checks if the `number` string consists of numeric characters. The output will be `true` if the string matches the numeric pattern.
## Conclusion
By using Dart's extension methods, you can add reusable validation logic to the `String` class. This approach makes your code cleaner and more modular. You can extend this pattern to include more validations as needed, enhancing the robustness of your applications.
Try integrating these extensions into your Dart projects and see how they can simplify your validation logic. Happy coding! | francescoagati |
1,895,148 | Enhancing List Functionality in Dart with Custom Extensions | Dart, a language known for its flexibility and robust features, allows developers to extend the... | 0 | 2024-06-20T18:57:58 | https://dev.to/francescoagati/enhancing-list-functionality-in-dart-with-custom-extensions-4c7 | dart, extensions, list | Dart, a language known for its flexibility and robust features, allows developers to extend the functionality of existing classes without modifying them. One powerful way to achieve this is through extensions. In this article, we'll explore how to create a custom extension for the `List` class in Dart to add some useful utility methods.
## Introduction to Dart Extensions
Extensions in Dart enable you to add new functionalities to existing libraries. By creating an extension, you can introduce new methods and properties to a class without altering its source code. This is particularly useful when you want to augment the capabilities of widely used classes like `List`.
## Creating the `ListUtils` Extension
Let's dive into the code for our custom extension `ListUtils` which adds three handy methods to the `List` class:
1. `firstOrNull`: Returns the first element of the list, or `null` if the list is empty.
2. `lastOrNull`: Returns the last element of the list, or `null` if the list is empty.
3. `takeLast`: Returns the last `n` elements of the list.
Here's the code for our extension:
```dart
extension ListUtils<T> on List<T> {
  T? firstOrNull() {
    return isEmpty ? null : this[0];
  }

  T? lastOrNull() {
    return isEmpty ? null : this[length - 1];
  }

  List<T> takeLast(int n) {
    return skip(length - n).toList();
  }
}
```
### Method Breakdown
#### `firstOrNull`
This method returns the first element of the list if it exists; otherwise, it returns `null`.
```dart
T? firstOrNull() {
  return isEmpty ? null : this[0];
}
```
#### `lastOrNull`
Similar to `firstOrNull`, this method returns the last element of the list or `null` if the list is empty.
```dart
T? lastOrNull() {
  return isEmpty ? null : this[length - 1];
}
```
#### `takeLast`
This method returns the last `n` elements of the list. It uses the `skip` method to skip the first `length - n` elements and then converts the remaining elements to a list.
```dart
List<T> takeLast(int n) {
  return skip(length - n).toList();
}
```
## Using the Extension
To see our extension in action, we can use the following example in the `main` function:
```dart
void main() {
  var list = [1, 2, 3];

  print(list.firstOrNull()); // Output: 1
  print(list.lastOrNull()); // Output: 3
  print(list.takeLast(2)); // Output: [2, 3]
}
```
### Explanation
- `list.firstOrNull()`: Since the list is not empty, this will return the first element, `1`.
- `list.lastOrNull()`: This will return the last element, `3`.
- `list.takeLast(2)`: This will return the last two elements of the list, `[2, 3]`.
## Conclusion
Extensions in Dart provide a powerful mechanism to enhance the functionality of existing classes in a clean and maintainable way. By creating the `ListUtils` extension, we have added methods to the `List` class that can simplify common operations. This approach keeps our codebase clean and reusable.
By understanding and utilizing Dart extensions, you can significantly improve your productivity and code quality. Happy coding! | francescoagati |
1,895,147 | Enjoy react key takeaways from a new developer. | Welcome back to my second blog on DEV. I have completed phase 2 of FlatIron school and this phase... | 0 | 2024-06-20T18:56:56 | https://dev.to/killerfox007/enjoy-react-key-takeaways-from-a-new-developer-5gdi | webdev, react, beginners, programming | Welcome back to my second blog on DEV. I have completed phase 2 of FlatIron school and this phase covered react. Phase 1 was JavaScript, phase 2 was React with phase 3 being Python and SQL. React was difficult for me due to a lot of different concepts and learning it a lot faster than javascript.
**Key Topics Covered in This Blog:**
The Impact React had on me as a developer
The useState Hook
Lastly Props
Starting with the impact React had on my understanding of web development: it's hard to put into words how much it changed my perception. Now, when I scroll through Facebook or click a link, I know that it triggers an onClick event, followed by a GET request. Understanding this process has been extremely impactful on my day-to-day, and I enjoyed working with React.
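That click-then-request cycle can be sketched in plain JavaScript. The endpoint and handler names below are made up for illustration, and the fetch function is injected only so the sketch is easy to run without a browser:

```javascript
// Sketch of the flow above: a click handler fires a GET request and
// hands the parsed response to the UI. `fetchFn` is injected so the
// network can be faked; "/api/posts" is a made-up endpoint.
function makeClickHandler(fetchFn, render) {
  return async function onClick() {
    const response = await fetchFn("/api/posts"); // the GET request
    const data = await response.json();
    render(data); // update the page with the result
  };
}
```

In a real app, `fetchFn` would be the browser's `fetch` and `render` would be whatever updates the DOM (or React state).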
**Understanding and Using useState:**
Initially, useState was nothing short of a challenge. Its syntax and the concept of hooks were completely new to us as students. However, useState was very helpful and was the backbone of our project, and of React projects in general. It allows us to update the page without refreshing it. To explain the syntax, useState is written as `const [state, setState] = useState(initialValue)`. When `setState(anything)` is called, it triggers a re-render and updates the page. The `state` variable then holds the value of `anything`, which could be data from a fetch request, a boolean for a loading feature, and so on.
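One way to demystify the hook is a toy mental model. This is not how React actually implements useState — just a sketch of the "value plus updater that triggers a re-render" idea:

```javascript
// Toy model of useState (NOT React's real implementation): the hook
// hands back the current value and a setter; calling the setter stores
// the new value and notifies the framework to re-render.
function createUseState(onRerender) {
  let state;
  let initialized = false;
  return function useState(initialValue) {
    if (!initialized) {
      state = initialValue; // the initial value only matters on first render
      initialized = true;
    }
    const setState = (next) => {
      state = next;
      onRerender(); // React would schedule a re-render here
    };
    return [state, setState];
  };
}
```

On the next "render", useState returns the updated value, which is why the page changes without a refresh.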
**Props:**
Props are similar to function arguments in JavaScript, but with a twist: they let you pass functions, state or setState, variables, and data from one component to another. Props allow you to pass whatever another file will need. You pass props down from a parent component and accept them in the child. Props can be destructured and go hand in hand with state. | killerfox007 |
1,895,146 | Test | Test | 0 | 2024-06-20T18:56:41 | https://dev.to/dkumar08/test-4pnf |
Test | dkumar08 | |
1,889,047 | Sail:onLagoon! | Laravel users, we’re so excited to announce the launch of Sail:onLagoon! If you’re familiar with... | 0 | 2024-06-20T18:53:40 | https://dev.to/uselagoon/sailonlagoon-2o3c | laravel, sail | Laravel users, we’re so excited to announce the launch of Sail:onLagoon!
If you’re familiar with Sail, you know it’s a quick and easy way to spin up a Laravel site. And now it’s fully integrated with Lagoon, allowing you to spin up a Laravel site configured for Lagoon quickly and easily.
Check out our demo video to see how quick and seamless Sail:onLagoon is:
{% youtube O2FcUz4Rt9E %}
And read more in the documentation here: https://github.com/uselagoon/sailonlagoon
This is also just the beginning of Lagoon’s relationship with Laravel! Historically we have given a lot of focus to Drupal, and now we are working to grow our support and expertise in the Laravel space. We’re reaching out to you because we want to become a part of the Laravel ecosystem. We want Laravel users to help us drive our efforts, so we’d appreciate it if you could take a few minutes to fill out our Laravel user survey: https://forms.gle/96cQHNQL6gKF3gZ78
You can also check out our Laravel example: https://github.com/lagoon-examples/laravel-example-simple
Questions? Want to learn more? Join us on the Lagoon Discord: https://discord.gg/te5hHe95JE | alannaburke |
1,895,114 | 5. Where to Go Now: Putting Unit Testing into Action | Now that you've grasped the fundamentals of unit testing, it's time to apply this valuable skillset... | 27,796 | 2024-06-20T18:50:38 | https://dev.to/sandheep_kumarpatro_1c48/5-where-to-go-now-putting-unit-testing-into-action-58el | react, vitest, unittest, javascri | Now that you've grasped the fundamentals of unit testing, it's time to apply this valuable skillset to real-world projects. Here's how you can take the next steps:
**1\. Dive into Hands-on Practice:**
- **Build Mini-Apps with TDD:** Experiment with Test-Driven Development (TDD) by creating small, focused applications like:
- **Todo List:** Manage tasks effectively with features like adding, marking, and filtering todos.
- **Expense Tracker:** Track your finances by recording expenses, categorizing them, and visualizing spending patterns.
- **Calculator:** Build a basic calculator to test various arithmetic operations.
- **Weather App:** Fetch weather data (using a free API) and display it in a user-friendly format.
- **Simple E-commerce Shop:** Simulate adding items to a cart, calculating totals, and handling basic checkout logic.
- **Start Small, Scale Up:** Begin with a straightforward app and gradually add complexity to refine your unit testing skills.
- **Embrace TDD:** Practice the TDD cycle (red, green, refactor) to ensure your code is well-tested and maintainable.
**2\. Deepen Your Knowledge with Advanced Topics:**
- **Vitest's Hidden Gems:** While you've covered core principles, consider exploring Vitest's advanced capabilities:
- **Mocking Libraries:** Learn to mock external dependencies (like APIs or databases) to isolate your components and focus on their internal behavior. In Vitest this is done with `vi.mock`; Jest's equivalent is `jest.mock`.
- **Snapshot Testing:** Utilize snapshot testing to ensure UI components render consistently as expected. Vitest supports snapshots out of the box via `toMatchSnapshot`, and tools like `@testing-library/jest-dom` can simplify assertions on rendered output.
- **Parallel Testing:** Leverage Vitest's built-in parallel test execution to speed up your test suite, especially in large projects. This can significantly reduce test runtime.
- **Error Handling in Unit Tests:** Craft tests to handle various error scenarios gracefully, ensuring your app behaves predictably even under unexpected conditions. This includes testing proper error messages, logging, or fallback mechanisms.
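As a concrete illustration of that last point, here is a small function with an error path worth testing. The function and its error messages are invented for the example; in Vitest you would assert on them with `expect(...).toThrow()` or by checking the returned fallback:

```javascript
// A function with explicit error handling: bad input falls back to a
// predictable error object instead of crashing the app.
function parseConfig(json) {
  try {
    const cfg = JSON.parse(json);
    if (typeof cfg.port !== "number") {
      throw new Error("port must be a number");
    }
    return cfg;
  } catch (err) {
    return { error: err.message }; // graceful fallback for tests to assert on
  }
}
```

Unit tests would then cover both branches: a valid config returns the parsed object, while malformed JSON or a bad `port` returns the error fallback.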
**3\. Integrate Unit Testing into Your Workflow:**
- **Continuous Integration (CI):** Set up a CI pipeline (like CircleCI, Jenkins, or GitHub Actions) to automatically run your unit tests whenever you push code changes. This provides continuous feedback on code quality and catches regressions early.
- **Code Coverage Reports:** Utilize tools like `jest --coverage` or `vitest --coverage` to generate code coverage reports. Aim for high code coverage (generally 80% or more) to ensure most of your application's logic is tested.
**Beyond Vitest:**
While Vitest is a fantastic testing framework, feel free to explore other options that might suit your project or preferences, such as Jest (also popular with React) or Mocha. The core concepts of unit testing remain largely consistent across frameworks.
| sandheep_kumarpatro_1c48 |
1,895,113 | 4. Some More Examples and Explanations | Example 1 :- Mocking Classes //returnNameOrAge.js export const returnNameOrAge =... | 27,796 | 2024-06-20T18:50:25 | https://dev.to/sandheep_kumarpatro_1c48/4-some-more-examples-and-explanations-38a3 | react, vitest, unittest, javascript | # Example 1 :- Mocking Classes
```javascript
//returnNameOrAge.js
export const returnNameOrAge = (isName) => {
  if (isName) {
    return "My Name is XYZ"
  }
  return 25
}
```
```javascript
//someOtherFunctions.js
class SomeModule {
  returnTrueOrFalse() {
    return false;
  }
}

export { SomeModule };
```
```javascript
//index.js
import { returnNameOrAge } from "./returnNameOrAge";
import { SomeModule } from "./someOtherFunctions";

const someModule = new SomeModule();

export const testVitest = () => {
  const dataFromSomeModule = someModule.returnTrueOrFalse();
  const dataFromReturnNameOrAge = returnNameOrAge(dataFromSomeModule);
  return dataFromReturnNameOrAge;
};
```
```javascript
//index.test.js
import { it, expect, describe, vi } from "vitest";
import { testVitest } from ".";
import { SomeModule } from "./someOtherFunctions";

vi.mock("./someOtherFunctions.js", async (importActual) => {
  const mod = await importActual();

  const SomeModule = vi.fn().mockReturnValue({
    returnTrueOrFalse: vi.fn(),
  });

  return {
    ...mod,
    SomeModule,
  };
});

describe("testVitest", () => {
  it("should return name", () => {
    const mockedSomeModule = new SomeModule();
    vi.mocked(mockedSomeModule.returnTrueOrFalse).mockReturnValue(true);

    const data = testVitest();

    expect(data).toBe("My Name is XYZ");
  });

  it("should return age", () => {
    const mockedSomeModule = new SomeModule();
    vi.mocked(mockedSomeModule.returnTrueOrFalse).mockReturnValue(false);

    const data = testVitest();

    expect(data).toBe(25);
  });
});
```
Let's break down the modules and the test file.
**Modules:**
1. **returnNameOrAge.js:** This module defines a simple function named `returnNameOrAge` that takes a boolean (`isName`) as input. If `isName` is true, it returns a string "My Name is XYZ". Otherwise, it returns the number 25.
2. **someOtherFunctions.js:** This module defines a class named `SomeModule`. The class has a method called `returnTrueOrFalse` that simply returns `false`. This class is likely intended to be mocked in the test.
3. **index.js:** This is the main module where everything comes together. It imports the `returnNameOrAge` function and the `SomeModule` class from the other modules. It then creates a new instance of `SomeModule` and defines a function called `testVitest`. The `testVitest` function calls the `returnTrueOrFalse` method of the `someModule` instance and then uses the return value to call the `returnNameOrAge` function. Finally, it returns the result from `returnNameOrAge`.
**Test File (index.test.js):**
This file uses the Vitest testing framework to test the `testVitest` function from `index.js`.
1. **Imports:** It imports the necessary functions from Vitest (`it`, `expect`, `describe`, and `vi`) for writing tests. It also imports the `testVitest` function from `index.js` and the `SomeModule` class from `someOtherFunctions.js`.
2. **Mocking:** It uses the `vi.mock` function to mock the `someOtherFunctions.js` module. This means that when the test runs, it won't use the actual implementation of the `SomeModule` class, but a mocked version instead. The mocked version is defined using a function that returns an object with a mocked `SomeModule` class. The mocked `SomeModule` class has a mocked `returnTrueOrFalse` method that can be controlled by the test.
3. **Test Cases:** The file defines two test cases using `describe` and `it`.
- The first test case (`it("should return name")`) mocks the `returnTrueOrFalse` method of the mocked `SomeModule` to return `true`. It then calls the `testVitest` function and asserts that the returned value is "My Name is XYZ" using `expect`.
- The second test case (`it("should return age")`) mocks the `returnTrueOrFalse` method of the mocked `SomeModule` to return `false`. It then calls the `testVitest` function and asserts that the returned value is 25 using `expect`.
**MAY I HAVE YOUR ATTENTION, PLEASE!!!**
Even though `index.js` depends on both `returnNameOrAge` and `SomeModule`, only `SomeModule` (not `returnNameOrAge`) is mocked in the test. Here are the reasons:
- **Focus of the Test:** The test aims to isolate the logic of `testVitest` in `index.js`. This function relies on `returnNameOrAge` to format the output based on the value received from `SomeModule`. Since `returnNameOrAge` is a simple function with clear logic, mocking it might be unnecessary for this specific test.
- **Mocking Complexity:** Mocking a class like `SomeModule` allows for more control over its behavior. You can define what its methods return in different scenarios. In this case, the test wants to control the output of `returnTrueOrFalse` to verify how `testVitest` handles different inputs. Mocking `returnNameOrAge` wouldn't provide the same level of control.
- **Testing vs. Implementation:** Ideally, the functionality of `returnNameOrAge` should be tested in its own dedicated test file. This keeps the tests focused and avoids redundancy. Mocking it in this test would be testing an implementation detail rather than the overall logic of `testVitest`.
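To make that last point concrete, the helper's own test needs no mocking at all. The function is inlined below (in the project it lives in `returnNameOrAge.js`) so the sketch runs anywhere; in a real `returnNameOrAge.test.js` you would import it and use Vitest's `expect` instead of `console.assert`:

```javascript
// returnNameOrAge has self-contained logic, so it can be verified
// directly with no mocks. Inlined here from returnNameOrAge.js:
const returnNameOrAge = (isName) => {
  if (isName) {
    return "My Name is XYZ";
  }
  return 25;
};

// In Vitest: expect(returnNameOrAge(true)).toBe("My Name is XYZ");
console.assert(returnNameOrAge(true) === "My Name is XYZ");
// In Vitest: expect(returnNameOrAge(false)).toBe(25);
console.assert(returnNameOrAge(false) === 25);
```

Keeping this check in its own file keeps the `testVitest` tests focused on orchestration rather than formatting details.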
# Example 2 :- Mocking Functions
```javascript
//returnNameOrAge.js
export const returnNameOrAge = (isName) => {
  if (isName) {
    return "My Name is XYZ"
  }
  return 25
}
```
```javascript
//someOtherFunctions.js
export const someOtherFunction = () => {
  const testValue = true;
  return testValue;
};
```
```javascript
//index.js
import { returnNameOrAge } from "./returnNameOrAge";
import { someOtherFunction } from "./someOtherFunctions";

export const testVitest = () => {
  const dataFromSomeOtherFunction = someOtherFunction();
  const dataFromReturnNameOrAge = returnNameOrAge(dataFromSomeOtherFunction);
  return dataFromReturnNameOrAge;
};
```
```javascript
//index.test.js
import { it, expect, describe, vi } from "vitest";
import { testVitest } from ".";

const mocks = vi.hoisted(() => ({
  someOtherFunction: vi.fn(),
}));

vi.mock("./someOtherFunctions.js", () => ({
  someOtherFunction: mocks.someOtherFunction,
}));

describe("testVitest", () => {
  it("should return name", () => {
    mocks.someOtherFunction.mockReturnValue(true);

    const data = testVitest();

    expect(data).toBe("My Name is XYZ");
  });

  it("should return age", () => {
    mocks.someOtherFunction.mockReturnValue(false);

    const data = testVitest();

    expect(data).toBe(25);
  });
});
```
Let's break down the modules and the test file.
**Modules:**
1. **returnNameOrAge.js:**
- This module defines a function named `returnNameOrAge` that takes a boolean (`isName`) as input.
- If `isName` is true, it returns the string "My Name is XYZ".
- Otherwise, it returns the number 25.
2. **someOtherFunctions.js:**
- This module defines a function named `someOtherFunction`.
- This function simply creates a constant `testValue` with the value `true` and returns it.
3. **index.js:**
- This is the main module.
- It imports `returnNameOrAge` from `returnNameOrAge.js` and `someOtherFunction` from `someOtherFunctions.js`.
- It defines a function called `testVitest`.
- `testVitest` calls `someOtherFunction` to get a value.
- It then calls `returnNameOrAge` with the value from `someOtherFunction` and returns the result.
**Test File (index.test.js):**
This file uses the Vitest testing framework to test the `testVitest` function from `index.js`.
1. **Imports:**
- It imports necessary functions from Vitest for writing tests (`it`, `expect`, `describe`, and `vi`).
- It imports the `testVitest` function from `index.js`.
2. **Mocking:**
- It uses the `vi.mock` and `vi.hoisted` functions together to mock the `someOtherFunction`. Here's a breakdown:
- `vi.hoisted` runs its factory before the module's imports are evaluated and returns the object that holds the mocks, so the mocks already exist when the hoisted `vi.mock` call references them.
- Inside `vi.mock`, it replaces the `someOtherFunction` from `someOtherFunctions.js` with the mocked version from the `mocks` object.
3. **Test Cases:**
- The file defines two test cases using `describe` and `it`.
- The first test (`"should return name"`) mocks `someOtherFunction` to return `true` (using `mockReturnValue`). It then calls `testVitest` and asserts the returned value is "My Name is XYZ" using `expect`.
- The second test (`"should return age"`) mocks `someOtherFunction` to return `false`. It then calls `testVitest` and asserts the returned value is 25 using `expect`.
# Example 3 :- Actual Import Mocking and/or Hoisting+Mocking
Now we'll write two unit tests for the same function, each leveraging a different mocking approach. The first test case will demonstrate mocking the actual import, while the second will explore the concept of hoisting combined with mocking. By comparing these techniques, you'll gain valuable insights into their strengths and how to choose the most suitable method for your specific testing needs.
```javascript
// userService.js
import axios from "axios";
const userService = async () => {
const { data } = await axios.get(
"https://jsonplaceholder.typicode.com/users/1",
);
return data;
};
export default userService;
```
```javascript
// approach involving hoisting and mocking (this approach doesn't involve importing axios in the test file)
import { test, expect, vi } from "vitest";
import userService from "./userService";
const mockedAxios = vi.hoisted(() => ({
default: {
get: vi.fn(),
},
}));
vi.mock("axios", () => ({
default: mockedAxios.default,
}));
test("testing", async () => {
mockedAxios.default.get.mockResolvedValue({
data: {
id: 1,
},
});
const data = await userService();
expect(data.id).toBe(1);
});
```
```javascript
// approach involving mocking using importActual (this approach involve importing axios in the test file)
import { test, expect, vi } from "vitest";
import axios from "axios";
import userService from "./userService";
vi.mock("axios", async () => {
const axios = await vi.importActual("axios");
const get = vi.fn();
return {
default: {
...axios,
get,
},
};
});
test("testing", async () => {
vi.mocked(axios.get).mockResolvedValue({
data: {
id: 1,
},
});
const data = await userService();
expect(data.id).toBe(1);
});
```
`importActual` can also be used as follows:
```javascript
vi.mock("axios", async (importActual) => {
const axios = await importActual<typeof import("axios")>();
const get = vi.fn();
return {
default: {
...axios,
get,
},
};
});
```
**KEY TAKEAWAYS**

1. General use cases of `importActual`: class mocking, and function mocking, especially when those functions are spied on with `vi.spyOn()` (a good use case for `spyOn` is unit testing how many times a function has been executed).
2. Use case of hoisting + mocking: general test cases.
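As a concrete illustration of the call-counting use case mentioned above for `vi.spyOn()`, here is a toy spy in plain JavaScript. This is a simplified sketch of the idea, not Vitest's actual implementation; `spyOn` and `mathUtils` are illustrative names:

```javascript
// Toy spy: wraps an existing method, counts invocations, and delegates to
// the original implementation. A simplified sketch of what vi.spyOn() does,
// not Vitest's actual code.
function spyOn(obj, method) {
  const original = obj[method];
  const spy = (...args) => {
    spy.callCount += 1;             // record the call
    return original.apply(obj, args); // delegate to the real method
  };
  spy.callCount = 0;
  spy.restore = () => { obj[method] = original; };
  obj[method] = spy;
  return spy;
}

const mathUtils = { square: (n) => n * n };
const squareSpy = spyOn(mathUtils, 'square');

mathUtils.square(3);
mathUtils.square(4);
console.log(squareSpy.callCount); // → 2

squareSpy.restore(); // mathUtils.square is the original function again
```

After `restore()`, further calls go straight to the original method and are no longer counted, mirroring `vi.restoreAllMocks()`.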
# 3. Unit testing concept

Topics covered in this section :-
1. Synchronous Testing
2. Asynchronous Testing
3. Data mocking
4. Event testing
**Note:** An interesting point to know here is that the above topics are related, and I have explained this relationship in detail at the end.
# Synchronous Testing
In JavaScript, synchronous code executes line by line in a sequential manner. It doesn't involve asynchronous operations like waiting for network requests or timers to complete. This makes synchronous code predictable and easier to test as the outcome is determined by the input and the code's logic alone.
**Advantages of Synchronous Testing:**
- **Faster Execution:** Synchronous tests generally run faster compared to asynchronous tests because there's no waiting for external factors. This improves test suite execution speed.
- **Simpler Logic:** Synchronous tests often involve straightforward assertions about the expected behavior of functions without complex asynchronous handling. This makes them easier to write and maintain.
- **Deterministic Results:** Since synchronous code execution is sequential, you can be confident about the order in which code is executed and the values it produces. This simplifies debugging and ensures consistent test results.
**When to Use Synchronous Testing:**
- **Unit Testing Pure Functions:** Synchronous tests are ideal for testing pure functions that don't rely on external factors and produce the same output for the same input. These functions are typically used for data manipulation or calculations within your application.
- **Testing Utility Functions:** You can effectively test utility functions that perform simple tasks like string manipulations or date formatting using synchronous tests.
- **Initial Unit Tests:** When starting with unit testing, synchronous tests are a great way to get started as they require less setup and reasoning compared to asynchronous tests.
### Practical Code Example with Unit Test
```javascript
//Code (dateUtils.js):
export function formatDate(date, formatString = 'YYYY-MM-DD') {
if (!(date instanceof Date)) {
throw new TypeError('formatDate() expects a Date object');
}
const year = date.getFullYear();
const month = String(date.getMonth() + 1).padStart(2, '0'); // Zero-pad month
const day = String(date.getDate()).padStart(2, '0');
return formatString.replace('YYYY', year).replace('MM', month).replace('DD', day);
}
```
**Explanation:**
1. **Function Definition:** The `formatDate` function takes two arguments:
- `date`: This is expected to be a valid JavaScript `Date` object representing a specific date and time.
- `formatString` (optional): This is a string that defines the desired output format for the date. It defaults to 'YYYY-MM-DD' (year-month-day) but can be customized to include other parts like hours, minutes, or seconds.
2. **Input Validation:** The function starts by checking if the `date` argument is indeed a `Date` object using the `instanceof` operator. If not, it throws a `TypeError` to indicate an invalid input. This ensures that the function only works with valid dates.
3. **Date Component Extraction:** Inside the function, we extract the individual components of the date object:
- `year`: Retrieved using `date.getFullYear()`.
- `month`: Obtained using `date.getMonth()`. However, this returns a zero-based index (0 for January, 11 for December). To get the month in the usual 1-based format, we add 1 and then convert it to a string using `String()`.
- `day`: Similar to `month`, we get the day using `date.getDate()` and convert it to a string.
4. **Zero-Padding:** Months and days are typically represented with two digits (01 for January 1st). The `padStart()` method ensures this format by adding leading zeros if the value is less than two digits. Here, we pad both `month` and `day` with a leading '0' if necessary.
5. **String Formatting:** The `formatString` is used as a template for the final output. We use the `replace()` method repeatedly to substitute placeholders with the actual date components:
- `YYYY` gets replaced with the extracted `year`.
- `MM` gets replaced with the zero-padded `month`.
- `DD` gets replaced with the zero-padded `day`.
6. **Returning the Formatted Date:** Finally, the function returns the formatted date string that combines the year, month, and day according to the specified format.
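As a quick sanity check, the extract-pad-replace steps described above can be run directly in Node; the sample date below is an arbitrary choice for illustration:

```javascript
// Walk through the same steps manually for January 5, 2024
const date = new Date(2024, 0, 5); // Month is zero-based: 0 = January
const year = date.getFullYear();                             // 2024
const month = String(date.getMonth() + 1).padStart(2, "0");  // "01"
const day = String(date.getDate()).padStart(2, "0");         // "05"
const formatted = "YYYY-MM-DD"
  .replace("YYYY", year)
  .replace("MM", month)
  .replace("DD", day);
console.log(formatted); // → "2024-01-05"
```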
```javascript
//Unit Test (dateUtils.test.js):
import { test, expect } from 'vitest';
import { formatDate } from './dateUtils';
test('formatDate() formats date correctly', () => {
const date = new Date(2024, 5, 13); // June 13, 2024
expect(formatDate(date)).toBe('2024-06-13');
expect(formatDate(date, 'DD/MM/YYYY')).toBe('13/06/2024');
});
test('formatDate() throws for non-Date arguments', () => {
expect(() => formatDate('invalid date')).toThrowError(TypeError);
expect(() => formatDate(123)).toThrowError(TypeError);
});
```
**Explanation:**
**1\. Imports:**
- `test` and `expect` are imported from `vitest` to define test cases and make assertions.
- `formatDate` is imported from the `./dateUtils` file, assuming it's in the same directory.
**2\. Test Case 1: `formatDate() formats date correctly`:**
- **Description:** This test verifies that the function formats a valid `Date` object according to the expected output.
- **`const date = new Date(2024, 5, 13);`:** This line creates a new `Date` object representing June 13, 2024.
- **`expect(formatDate(date)).toBe('2024-06-13');`:** This assertion uses `expect` to check the output of `formatDate(date)`. It expects the formatted date to be a string equal to "2024-06-13" (default format).
- **`expect(formatDate(date, 'DD/MM/YYYY')).toBe('13/06/2024');`:** This additional assertion tests the custom format. It calls `formatDate` with the same `date` object but provides a different `formatString` ("DD/MM/YYYY") as the second argument. The assertion expects the output to be "13/06/2024" based on the provided format.
**3\. Test Case 2: `formatDate() throws for non-Date arguments`:**
- **Description:** This test ensures that the function throws an error when provided with invalid input (anything other than a `Date` object).
- **`expect(() => formatDate('invalid date')).toThrowError(TypeError);`:** This line uses an arrow function to wrap the call to `formatDate` with an invalid string argument ("invalid date"). The `expect` statement then checks if the function throws a `TypeError`.
- **`expect(() => formatDate(123)).toThrowError(TypeError);`:** This assertion follows a similar approach, testing if the function throws a `TypeError` when given a number (123) as input.
# Asynchronous Testing
JavaScript, being an event-driven language, heavily relies on asynchronous operations for tasks like network requests, file I/O, timeouts, and more. These operations don't block the main thread, allowing other code to execute while waiting for results. However, this asynchronous nature can introduce challenges when writing unit tests.
**Challenges of Asynchronous Testing:**
- **Callback Hell:** Nested callbacks can lead to difficult-to-read and maintain code.
- **Promises Anti-Patterns:** Common pitfalls include forgetting to handle errors or using `then` chaining excessively.
- **Test Completion:** Tests that run asynchronous code need a way to ensure they finish before moving on or making assertions.
**Vitest's Approach to Asynchronous Testing:**
Vitest leverages the native `async/await` syntax for a more readable and synchronous-like testing experience. Here's how it works:
1. **Test Functions as Async:** Vitest is flexible and works with both synchronous and asynchronous test functions. You only need to make your test function asynchronous if your tests involve waiting for asynchronous operations.
2. **`await` for Asynchronous Operations:** You can freely use `await` within test functions to pause execution until asynchronous operations resolve (e.g., promises, timers).
3. **Awaited Test Functions:** When a test function is `async`, Vitest waits for the promise it returns to settle before marking the test as finished, so any operations you `await` inside the test complete before assertions are evaluated and the test ends.
**Practical Code Example with Unit Test**
```javascript
import { test, expect, vi } from 'vitest';

function scheduleTimer(callback, delay = 1000) {
return setTimeout(callback, delay);
}
test('scheduleTimer calls callback after delay', async () => {
  const mockCallback = vi.fn();
  const timerId = scheduleTimer(mockCallback);
  // The timer's delay has not elapsed yet, so the callback should not have fired
expect(mockCallback).not.toHaveBeenCalled();
await new Promise((resolve) => setTimeout(resolve, 1200)); // Wait slightly longer than delay
expect(mockCallback).toHaveBeenCalledTimes(1);
clearTimeout(timerId); // Clean up timer
});
```
**Understanding the Asynchronous Timer Function:**
The function `scheduleTimer` takes two arguments:
1. **callback:** A function to be executed after the delay.
2. **delay (optional):** The delay in milliseconds before calling the callback (defaults to 1000ms).
It uses `setTimeout` from the browser's Web APIs to schedule the callback's execution after the specified delay. `setTimeout` returns a timer ID that can be used to cancel the timer if needed (shown in the test).
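Outside the test runner, the same function and its cancellation can be exercised in plain Node. This standalone sketch repeats the function so it is self-contained; the 20ms/50ms values are arbitrary:

```javascript
function scheduleTimer(callback, delay = 1000) {
  return setTimeout(callback, delay);
}

let fired = false;

// This timer is allowed to fire after 20ms
scheduleTimer(() => { fired = true; }, 20);

// This one is cancelled before its delay elapses, so it never runs
const cancelledId = scheduleTimer(() => { fired = "never"; }, 20);
clearTimeout(cancelledId);

// Check the outcome after both delays have passed
setTimeout(() => {
  console.log(fired); // → true
}, 50);
```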
**Explanation of the Test:**
1. **Mock Callback:** We create a mock function (`mockCallback`) using `vi.fn()` to track whether the callback is called and with what arguments.
2. **Schedule Timer:** We call `scheduleTimer` with the mock callback, using the default delay (1000ms). The returned timer ID (`timerId`) is stored so the timer can be cleaned up at the end of the test.
3. **Awaiting Completion:** Because the test function is asynchronous (`async`), Vitest waits for the promise it returns to settle before finishing the test. The timer itself is not awaited automatically, which is why the test waits explicitly in step 5.
4. **Expectation - No Call Before Delay:** We expect the `mockCallback` not to have been called before the delay (`expect(mockCallback).not.toHaveBeenCalled()`).
5. **Waiting for Callback:** We `await` a promise that resolves after 1200ms, slightly longer than the timer's 1000ms delay, ensuring the scheduled timer has had enough time to trigger the callback.
6. **Expectation - Callback Called:** After the wait, we expect the `mockCallback` to have been called exactly once (`expect(mockCallback).toHaveBeenCalledTimes(1)`).
7. **Cleanup (Optional):** Although not crucial for this test, we show how to clean up the timer by calling `clearTimeout` with the stored ID (`timerId`). This is good practice to prevent unused timers from lingering.
# Data mocking
**Data Mocking in JavaScript Unit Testing with Vitest**
Data mocking is a fundamental technique in unit testing that allows you to isolate and test the behavior of your code without relying on external dependencies or real-world data sources. By creating controlled, predictable test data, you can ensure that your code functions correctly under various scenarios.
**Theoretical Explanation**
- **Why Mocking?**
- Unit tests should focus on the specific logic of your code, not external factors. Mocking dependencies like databases, APIs, or file systems prevents unpredictable behavior or errors from these external sources during testing.
- It enables testing edge cases or error conditions without relying on real-world data that might be unavailable or time-consuming to set up.
- **How Mocking Works**
- Mocking frameworks like Vitest provide utilities to create mock objects or functions that simulate the behavior of the original dependencies.
- You can define the expected inputs, outputs, and side effects (actions performed by the mock) for your tests.
**Benefits of Data Mocking**
- **Isolation:** Tests focus solely on your code's logic, improving test reliability and maintainability.
- **Repeatability:** Predictable mock data ensures consistent test results across runs.
- **Control:** Define specific data scenarios to test different code paths.
- **Speed:** Avoids the overhead of interacting with real external systems.
**Vitest Mocking Utilities**
Vitest offers the `vi` object for mocking:
- `vi.fn()`: Creates a mock function for complete control over its behavior.
- `vi.spyOn(object, methodName)`: Spies on an existing method within an object, allowing you to track its calls and modify its behavior.
- `mockImplementation(fn)`: Sets a custom implementation function for the mock to define its return value or behavior.
- `mockResolvedValue(value)`: For mocks that simulate promises, specifies the resolved value for successful outcomes.
- `mockRejectedValue(value)`: Similar to `mockResolvedValue`, but defines the rejected value for promise errors.
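To make these utilities less magical, here is a toy re-implementation of what a mock function conceptually does. This is a simplified sketch for intuition only, not Vitest's actual code; `createMockFn` is an illustrative name:

```javascript
// Toy mock function, loosely modeled on vi.fn(). Records every call's
// arguments and lets the caller swap in a custom implementation.
function createMockFn() {
  const mock = (...args) => {
    mock.calls.push(args); // record the call
    return mock.impl ? mock.impl(...args) : undefined;
  };
  mock.calls = [];
  mock.impl = null;
  mock.mockImplementation = (fn) => { mock.impl = fn; return mock; };
  mock.mockReturnValue = (value) => mock.mockImplementation(() => value);
  mock.mockResolvedValue = (value) =>
    mock.mockImplementation(() => Promise.resolve(value));
  return mock;
}

// Usage: a controllable stand-in for a real dependency
const fetchUser = createMockFn().mockResolvedValue({ id: 1 });
fetchUser("some-arg");
console.log(fetchUser.calls.length); // → 1
console.log(fetchUser.calls[0][0]);  // → "some-arg"
```

The real `vi.fn()` additionally integrates with matchers like `toHaveBeenCalledTimes`, but the core idea of recording calls and controlling return values is the same.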
**Practical Code Example and Unit Test**
```javascript
// passwordGenerator.js
export function generatePassword(length, includeNumbers, includeSymbols) {
const characters = [];
if (includeNumbers) {
    characters.push(...[...Array(10).keys()].map(String)); // Add numbers (0-9)
}
if (includeSymbols) {
characters.push(..."!@#$%^&*()".split("")); // Add symbols
}
  characters.push(...[...Array(26).keys()].map((i) => String.fromCharCode(97 + i))); // Add lowercase letters
  characters.push(...[...Array(26).keys()].map((i) => String.fromCharCode(65 + i))); // Add uppercase letters
let password = "";
for (let i = 0; i < length; i++) {
const randomIndex = Math.floor(Math.random() * characters.length);
password += characters[randomIndex];
}
return password;
}
```
**Function Breakdown:**
1. **Character Set Construction:**
- It starts by building an array of characters (`characters`) based on the `includeNumbers` and `includeSymbols` flags:
- If `includeNumbers` is true, numbers (0-9) are added.
- If `includeSymbols` is true, common symbols are added.
- Lowercase and uppercase letters are always included.
2. **Password Generation Loop:**
- An empty string (`password`) is initialized to hold the generated password.
- The loop iterates `length` times (desired password length).
- Inside the loop:
- `Math.random()` is called to generate a random decimal value between 0 (inclusive) and 1 (exclusive).
- This value is multiplied by the length of the `characters` array.
- The result, after applying `Math.floor`, gives an index within the `characters` array.
- The character at the calculated index is retrieved from `characters` and appended to the `password` string.
```javascript
// passwordGenerator.test.js
import { describe, test, expect, vi, afterEach } from 'vitest';
import { generatePassword } from './passwordGenerator';

describe('generatePassword', () => {
  afterEach(() => {
    vi.restoreAllMocks(); // Undo the Math.random spy between tests
  });

  // Test different password lengths
  test('should create a password of length 1', () => {
    vi.spyOn(Math, 'random').mockReturnValueOnce(0.2); // Mock random value for index selection
    const password = generatePassword(1, true, false);
    expect(password.length).toBe(1);
  });

  test('should create a password of length 8', () => {
    // Eight mocked draws, one per generated character
    vi.spyOn(Math, 'random')
      .mockReturnValueOnce(0.1)
      .mockReturnValueOnce(0.6)
      .mockReturnValueOnce(0.3)
      .mockReturnValueOnce(0.8)
      .mockReturnValueOnce(0.4)
      .mockReturnValueOnce(0.9)
      .mockReturnValueOnce(0.2)
      .mockReturnValueOnce(0.7);
    const password = generatePassword(8, true, true);
    expect(password.length).toBe(8);
  });

  // Test password with different character inclusion options
  test('should create a password with only lowercase letters', () => {
    // With numbers and symbols disabled, the first half of the set is lowercase
    vi.spyOn(Math, 'random')
      .mockReturnValueOnce(0.1)
      .mockReturnValueOnce(0.3)
      .mockReturnValueOnce(0.4);
    const password = generatePassword(3, false, false);
    expect(password.match(/[a-z]/g).length).toBe(3); // Check for only lowercase letters
  });

  test('should create a password with numbers and lowercase letters', () => {
    // Character set is 10 numbers + 26 lowercase + 26 uppercase = 62 entries,
    // so values below ~0.16 select numbers and values up to ~0.58 select lowercase
    vi.spyOn(Math, 'random')
      .mockReturnValueOnce(0.05) // Number
      .mockReturnValueOnce(0.3)  // Lowercase letter
      .mockReturnValueOnce(0.1); // Number
    const password = generatePassword(3, true, false);
    expect(password.match(/[0-9]/g).length).toBe(2); // Check for two numbers
    expect(password.match(/[a-z]/g).length).toBe(1); // Check for one lowercase letter
  });

  // Test edge cases
  test('should still generate letters when numbers and symbols are disabled', () => {
    vi.spyOn(Math, 'random').mockReturnValueOnce(0.1); // Lowercase letter
    const password = generatePassword(1, false, false);
    expect(password.length).toBe(1); // Still generates a single character
    expect(password).toMatch(/[a-zA-Z]/);
  });

  test('should generate a password of the requested length even when longer than the character set', () => {
    vi.spyOn(Math, 'random').mockReturnValue(0); // Always select the first character
    const password = generatePassword(100, true, true);
    expect(password.length).toBe(100); // Characters repeat; length is not limited by the set
  });
});
```
This test suite, `passwordGenerator.test.js`, meticulously tests the `generatePassword` function from `passwordGenerator.js`. The focus here is on mocking the `Math.random` function, the sole dependency that governs the randomness within the password generation process.
# Event Testing
Event-driven programming is a fundamental paradigm in JavaScript, where code execution is triggered in response to user interactions or system events. Unit testing for event-driven code ensures that components react correctly to expected events and produce the desired outcomes.
**Theoretical Explanation**
- **Event Listeners:** Components register event listeners (functions) to be invoked when specific events occur (e.g., clicks, key presses).
- **Testing Objectives:** Event testing focuses on verifying:
- **Event Attachment:** Listeners are attached to the correct elements with appropriate event types.
- **Event Handling:** Listeners execute the intended logic when events are triggered.
- **State Updates:** Event handlers update component state or emit signals as expected.
- **Side Effects:** Any side effects (e.g., network requests, DOM manipulations) occur correctly.
- **Importance of Mocking:** When testing event handling, it's often desirable to isolate the component under test from external dependencies. Mocking comes into play here by creating "fake" versions of external functions or objects that the event listener might interact with. This allows you to focus on testing the component's internal logic without introducing side effects or relying on external systems.
**Types of Mocking for Event Testing:**
- **Function Mocking:** This is the most common approach, where you replace the actual event handler with a mock function using Vitest's `vi.fn()`. This allows you to verify if the function was called, how many times, and with what arguments.
- **Module Mocking:** If your event listener interacts with another module that's not under test, you can use Vitest's built-in mocking capabilities (provided by the `vi` object) to mock the entire module or specific exports. This can be useful for testing interactions with data fetching or asynchronous operations.
**Practical Code Example**
Consider a `ProductCard` component that displays a product name and dispatches an action to add the product to the cart when the "Add to Cart" button is clicked:
```javascript
// ProductCard.js
import { addToCart } from './shoppingCartActions'; // Assuming you have a shoppingCartActions module
export default function ProductCard({ name }) {
  return (
    <div>
      {name}
      <button onClick={() => addToCart(name)}>Add to Cart</button>
    </div>
  );
}
```
Unit Test with Mocking:
```javascript
// ProductCard.test.js
import { test, expect, vi } from 'vitest';
import { render, fireEvent } from '@testing-library/react';
import ProductCard from './ProductCard';
import { addToCart } from './shoppingCartActions'; // Assuming you've imported the mocked action
vi.mock('./shoppingCartActions', () => ({
addToCart: vi.fn(),
})); // Mock the addToCart function
test('ProductCard dispatches addToCart action on click', () => {
const productName = 'Product X';
const { getByText } = render(<ProductCard name={productName} />);
const addToCartButton = getByText('Add to Cart');
fireEvent.click(addToCartButton);
expect(addToCart).toHaveBeenCalledTimes(1);
expect(addToCart).toHaveBeenCalledWith(productName); // Verify argument passed to the action
});
```
**Explanation:**
1. **Mock `addToCart` Action:** We use `vi.mock` to create a mock version of the `addToCart` function from `shoppingCartActions`. This isolates the `ProductCard` component from the actual implementation of the action.
2. **Test Execution:**
- The test renders the `ProductCard` with a product name.
- A click event is simulated on the "Add to Cart" button using `fireEvent.click`.
3. **Verify Mock Call:** We use `expect` from Vitest to assert that the mocked `addToCart` function was called once (`toHaveBeenCalledTimes(1)`) and with the correct product name as an argument (`toHaveBeenCalledWith(productName)`).
Remember, mocking empowers you to test event-driven components in isolation, ensuring predictable behavior and comprehensive test coverage. By strategically incorporating mocking into your event testing practices, you can build more reliable and maintainable JavaScript applications.
# The relationship :-
1. **Asynchronous Testing: A Response to Synchronous Limitations**
- Synchronous testing, while valuable, has limitations:
- **Slowness:** It can be slow for applications that handle multiple tasks concurrently. Each test case needs to wait for the previous action to complete, leading to longer test execution times.
- **Limited Scope:** Simulating real-world interactions becomes challenging. Synchronous tests can't accurately reflect scenarios where the application waits for external responses (network calls, database queries).
- Asynchronous testing arose to address these limitations. It allows tests to initiate actions without waiting for immediate responses, mimicking how users interact with applications. This results in:
- **Faster Tests:** Tests run concurrently, improving efficiency.
- **More Realistic Scenarios:** Asynchronous testing can better simulate how users interact with applications that handle multiple tasks concurrently.
2. **Mocking: The Hero of Asynchronous Testing**
Mocking plays a crucial role in asynchronous testing by providing controlled environments:
- **Isolating Dependencies:** Asynchronous operations often rely on external systems (databases, APIs). Mocking allows us to create simulated versions of these dependencies, ensuring the test focuses on the functionality being tested and not external factors.
- **Predictable Behavior:** Mocks return pre-defined responses, eliminating the potential for unexpected delays or errors from external systems. This makes tests more reliable and repeatable.
- **Faster Execution:** Mocks bypass external calls, speeding up test execution by avoiding network delays or database interactions.
3. **Event Testing: A Breeze with Mocking**
Mocking simplifies event testing by:
- **Controlling Event Data:** Testers can define specific event data (e.g., user input, system triggers) that trigger the event. This allows for testing various scenarios without relying on actual user interactions or external events.
- **Predictable Outcomes:** Mocks respond to events consistently, eliminating randomness caused by external systems. This makes it easier to verify the application's behavior in response to specific events.
- **Improved Isolation:** Mocks isolate the event handling logic from other system parts. This allows for focused testing of how the application reacts to specific events.
# 2. Basics of Vitest

## Let's start with What is Vitest...
Vitest is a modern unit testing framework designed for blazing-fast performance, especially when working with JavaScript projects. It leverages the power of Vite, a lightning-quick bundler, to streamline the testing process.
**Key Advantages:**
- **Exceptional Speed:** Vitest boasts exceptional execution times for your tests, thanks to its reliance on Vite's efficient bundling mechanisms. This translates to a significantly smoother development experience, as you won't have to wait long for tests to run after making changes to your code.
- **Seamless Integration with Vite:** If you're already using Vite for your project, Vitest integrates flawlessly. It automatically inherits your Vite configuration, including settings for resolving aliases and plugins, eliminating the need for redundant setup. You can also create a dedicated `vitest.config.ts` file for test-specific configurations.
- **Familiarity for Jest Users:** Coming from Jest, a popular testing framework? Vitest offers a similar syntax, making the transition comfortable. You'll likely find yourself writing tests in a familiar way, leveraging features like `expect`, snapshots, and code coverage.
- **Modern JavaScript Support:** Vitest embraces modern JavaScript features out of the box, providing built-in support for ES modules, TypeScript, and JSX. This aligns perfectly with the current web development landscape.
- **Smart Watch Mode:** Inspired by Hot Module Replacement (HMR), Vitest's watch mode is incredibly intelligent. It only re-runs tests that are directly affected by your code changes, significantly reducing the time it takes to get feedback.
## Installing Vitest: Simple Setup and Next.js 14 Considerations
Vitest boasts a refreshingly straightforward installation process. To get started with basic unit testing, all you need is a single command:
```bash
npm install --save-dev vitest
```
```bash
pnpm add -D vitest
```
```bash
yarn add -D vitest
```
This installs Vitest as a development dependency in your project. With that done, you're ready to write your first tests!
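Once installed, you can run your suite straight from the command line. Typical invocations look like this (`npx` is used here so no global install is assumed):

```bash
npx vitest        # starts in watch mode by default
npx vitest run    # single pass, useful for CI
```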
However, if you're working with a Next.js 14 application, there are a few additional considerations to ensure smooth integration with Vitest.
```javascript
import { coverageConfigDefaults, defineConfig } from "vitest/config";
import react from "@vitejs/plugin-react";
import path from "path";
export default defineConfig({
plugins: [react()],
test: {
environment: "jsdom",
globals: true,
coverage: {
exclude: [
"next.config.mjs",
"middleware.ts",
"auth.config.ts",
"tailwind.config.ts",
"postcss.config.js",
// other files which dont require unit testing
...coverageConfigDefaults.exclude,
],
},
    env: {
      // Execute the command `openssl rand -base64 32` in the terminal and use the returned value here
      AUTH_SECRET: "YOUR_SECRET", // Placeholder: replace with your generated secret; never commit a real one
},
server: {
deps: {
inline: ["next-auth"],
},
},
},
resolve: {
alias: {
"@": path.resolve(__dirname, "./src"),
},
},
});
```
Here's a detailed explanation of the above Vitest configuration for Next.js 14:
**Imports:**
- `coverageConfigDefaults, defineConfig` are imported from `vitest/config`. These provide helper functions for configuring Vitest.
- `react` is imported from `@vitejs/plugin-react`. This plugin enables seamless testing of React components within Vitest.
- `path` is imported from the built-in Node.js `path` module. This provides utilities for working with file paths.
**Configuration:**
- **plugins:** This array specifies plugins to be used by Vitest. Currently, only the `react` plugin is included, enabling React component testing.
- **test:** This object configures various aspects of test execution:
- **environment:** Sets the test environment to "jsdom", which provides a simulated browser environment for running your tests.
- **globals:** This setting instructs Vitest to automatically make its built-in test utilities (like `describe`, `it`, `test`, `expect`) globally available within your test files. This eliminates the need to explicitly import them, simplifying your test code (also requires some changes in `tsconfig` file as well. It's mentioned below).
- **coverage:** This object configures code coverage reporting:
- **exclude:** This array specifies files to be excluded from code coverage calculations. This includes common Next.js configuration files, middleware, and potentially other files you don't need unit testing for. It also merges with default exclusions provided by Vitest.
- **env:** This object defines environment variables accessible within your tests. You'll need to replace `{YOUR_SECRET}` with an actual secret value (obtained securely, not shown here for security reasons). This is likely used for testing functionalities that rely on environment variables.
- **server:** This object configures the test server:
- **deps.inline:** This array specifies dependencies to be inlined by the test server. Currently, it includes "next-auth". Inlining is a process where Vite integrates the module directly into the test bundle instead of relying on external imports. This approach can be beneficial for several reasons:
- **Compatibility:** It helps handle dependencies that might not strictly follow Node.js ESM (ECMAScript Module) specifications. `next-auth` might be an example of such a dependency. By inlining, Vite ensures compatibility within the test environment.
- **Performance:** Inlining can sometimes improve test execution speed as the test server doesn't need to resolve and load the dependency separately. Including `next-auth` in the `deps.inline` array suggests that this dependency might be crucial for your tests, potentially because it interacts with functionalities you're testing.
- **resolve:** This object configures how Vitest resolves module imports:
- **alias:** This object defines an alias for the root of your source code. Here, `@` is mapped to the `./src` directory, simplifying imports within your tests.
Also modify the `tsconfig.json` file to include the following lines to get editor autocomplete support:
```json
{
"compilerOptions": {
// ...other options
"types": ["vitest/globals"],
}
}
```
## Writing the very first test case
**Unlocking Geometric Insights: Introducing calculateArea()**
Embark on a journey into the realm of geometric computations with the latest tool, `calculateArea()`. This versatile function empowers you to effortlessly determine the areas of fundamental shapes such as rectangles, squares, and circles. Proceed further into the blog to explore the inner workings of `calculateArea()`, accompanied by a suite of meticulously crafted test cases ensuring its accuracy and reliability.
```typescript
// calculate.ts
// Function to calculate the area of different shapes
export function calculateArea(shape: string, ...args: number[]): number {
switch (shape) {
case 'rectangle':
if (args.length !== 2) {
throw new Error('Invalid number of arguments for rectangle');
}
return args[0] * args[1];
case 'square':
if (args.length !== 1) {
throw new Error('Invalid number of arguments for square');
}
return args[0] * args[0];
case 'circle':
if (args.length !== 1) {
throw new Error('Invalid number of arguments for circle');
}
return Math.PI * args[0] * args[0];
default:
throw new Error('Invalid shape');
}
}
```
```typescript
// calculate.spec.ts
// Import the functions to be tested
import { describe, it, assert } from 'vitest';
import { calculateArea } from './calculate';
// Define meaningful test suite descriptions
describe('Area Calculation for Geometric Shapes', () => {
// Test Suite 1: Rectangle and Square
describe('Rectangles and Squares', () => {
// Test case 1: Rectangle area calculation
it('should calculate the area of a rectangle', () => {
assert.equal(calculateArea('rectangle', 3, 4), 12);
});
// Test case 2: Square area calculation
it('should calculate the area of a square', () => {
assert.equal(calculateArea('square', 5), 25);
});
// Test case 3: Rectangle area calculation with negative numbers
it('should handle negative numbers for rectangle area calculation', () => {
assert.equal(calculateArea('rectangle', -3, 4), -12);
});
// Test case 4: Square area calculation with zero
it('should calculate the area of a square with zero side length', () => {
assert.equal(calculateArea('square', 0), 0);
});
});
// Test Suite 2: Circle
describe('Circles', () => {
// Test case 1: Circle area calculation
it('should calculate the area of a circle', () => {
assert.approximately(calculateArea('circle', 3), 28.27, 0.01);
});
// Test case 2: Circle area calculation with larger radius
it('should calculate the area of a circle with larger radius', () => {
assert.approximately(calculateArea('circle', 5), 78.54, 0.01);
});
// Test case 3: Circle area calculation with negative radius
// Note: calculateArea only validates the argument count, not the sign,
// and the radius is squared, so -3 yields the same area as 3
it('should return a positive area for a negative radius (no sign validation)', () => {
assert.approximately(calculateArea('circle', -3), 28.27, 0.01);
});
// Test case 4: Circle area calculation with zero radius
it('should handle zero radius for circle area calculation', () => {
assert.equal(calculateArea('circle', 0), 0);
});
});
});
```
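The `assert.approximately` calls above exist because circle areas involve `Math.PI` and are non-terminating decimals; the comparison is done within a tolerance rather than for exact equality. The underlying arithmetic can be checked on its own:

```javascript
// Circle area for radius 3: Math.PI makes the result non-terminating,
// which is why the tests compare within a 0.01 tolerance
const radius = 3;
const area = Math.PI * radius * radius;

console.log(area.toFixed(2));               // "28.27"
console.log(Math.abs(area - 28.27) < 0.01); // true
```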
```bash
npx vitest
```
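For day-to-day use, many projects also expose Vitest through `package.json` scripts rather than invoking the binary directly (the script names here are just a common convention):

```json
{
  "scripts": {
    "test": "vitest run",
    "test:watch": "vitest"
  }
}
```

With this in place, `npm test` runs the suite once and `npm run test:watch` keeps Vitest watching for changes.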
| sandheep_kumarpatro_1c48 |
1,895,109 | 1. Understanding unit test basics | What is Unit Testing? Unit testing is a software development practice that involves... | 27,796 | 2024-06-20T18:49:33 | https://dev.to/sandheep_kumarpatro_1c48/1-understanding-unit-test-basics-3c0m | react, vitest, unittest, javascript | ### **What is Unit Testing?**
Unit testing is a software development practice that involves isolating individual units of code (functions, classes, modules) and verifying their correctness under various conditions. It ensures that each unit behaves as expected and produces the intended output for a given set of inputs.
### **Benefits of Unit Testing:**
- **Improved Code Quality:** Catches errors early in the development process, leading to more robust and reliable software.
- **Increased Confidence in Changes:** Makes developers more confident to modify code without introducing regressions since unit tests act as a safety net.
- **Better Maintainability:** Well-written unit tests document how code works, improving code comprehension for future maintainers.
Let's consider a simple function in TypeScript that calculates the area of a rectangle:
```typescript
// area.ts
export function calculateArea(width: number, height: number): number {
return width * height;
}
```
```javascript
import { expect, describe, it } from 'vitest';
import { calculateArea } from './area';
describe('calculateArea function', () => {
it('should calculate the area of a rectangle correctly', () => {
const width = 5;
const height = 4;
const expectedArea = 20;
const actualArea = calculateArea(width, height);
expect(actualArea).toEqual(expectedArea);
});
it('should return 0 for zero width or height', () => {
const testCases = [
{ width: 0, height: 5, expectedArea: 0 },
{ width: 5, height: 0, expectedArea: 0 },
];
for (const testCase of testCases) {
const { width, height, expectedArea } = testCase;
const actualArea = calculateArea(width, height);
expect(actualArea).toEqual(expectedArea);
}
});
});
```
**Explanation:**
- We import `expect` from Vitest for assertions.
- We import `calculateArea` from our area.ts file.
- We use `describe` to create a test suite for `calculateArea`.
- Within the `describe` block, we define two test cases using `it`:
- The first test verifies if the function calculates the area correctly for non-zero dimensions.
- We define `width`, `height`, and `expectedArea`.
- We call `calculateArea` with the defined values.
- We use `expect` to assert that the actual area (`actualArea`) matches the expected area.
- The second test covers scenarios with zero width or height.
- We create an array of test cases (`testCases`) with different input values.
- We iterate through each test case using a `for` loop.
- For each case, we extract `width`, `height`, and `expectedArea`.
- We call `calculateArea` with these values and assert the result using `expect`.
### **Elements of Unit testing:**
The elements that make up a unit test in Vitest (or any other unit testing framework) can be broken down into three main parts:
1. **Test Runner and Assertions:**
- **Test Runner:** This is the core functionality that executes your test cases and provides the framework for running them. Vitest leverages the power of Vite for fast test execution.
- **Assertions:** These are statements that verify the expected outcome of your tests. Vitest offers built-in assertions (like `expect`) or allows using libraries like Chai for more advanced assertions.
2. **Test Description:**
- **`describe` and `it` blocks:** These functions from Vitest (similar to other frameworks) structure your tests.
- `describe` defines a test suite that groups related tests for a specific functionality.
- Within `describe`, individual test cases are defined using `it` blocks. Each `it` block describes a specific scenario you want to test.
3. **Test Arrangements (Optional):**
- **Mocking and Stubbing:** In some cases, you might need to mock or stub external dependencies or functions to isolate your unit under test. Vitest offers ways to mock dependencies using functions like `vi.fn()`.
These elements come together to form a unit test. You write assertions within `it` blocks to verify the expected behavior of your unit (function, class, module) when the test runner executes the test with specific arrangements (mocking if needed).
Let's consider a simple function that calculates the area of a rectangle:
**Example 1:- Simple Explanation without mocking**
```javascript
function calculateArea(width, height) {
if (width <= 0 || height <= 0) {
throw new Error("Width and height must be positive numbers");
}
return width * height;
}
```
Here's a unit test for this function using Vitest, highlighting the elements mentioned earlier:
```javascript
// test file: rectangleArea.test.js
import { describe, it, expect } from 'vitest';
import { calculateArea } from './rectangleArea'; // assumes the function above is exported from rectangleArea.js
describe('calculateArea function', () => {
// Test case 1: Valid inputs
it('calculates the area correctly for valid dimensions', () => {
const width = 5;
const height = 3;
const expectedArea = 15;
// Test arrangement (no mocking needed here)
const actualArea = calculateArea(width, height);
// Assertions
expect(actualArea).toBe(expectedArea);
});
// Test case 2: Invalid inputs (edge case)
it('throws an error for non-positive width or height', () => {
const invalidWidth = 0;
const validHeight = 3;
// Test arrangement (no mocking needed here)
expect(() => calculateArea(invalidWidth, validHeight)).toThrow();
});
});
```
**Explanation of Elements:**
1. **Test Runner and Assertions:**
- Vitest acts as the test runner, executing the test cases defined within the `describe` and `it` blocks.
- The `expect` function from Vitest allows us to make assertions about the outcome of the test. Here, we use `expect(actualArea).toBe(expectedArea)` to verify the calculated area matches the expected value.
2. **Test Description:**
- The `describe` block groups related tests, in this case, all tests for the `calculateArea` function.
- Each `it` block defines a specific test case. Here, we have two test cases: one for valid inputs and another for invalid inputs (edge case).
3. **Test Arrangements (Optional):**
- In this example, we don't need mocking or stubbing as we're directly testing the function with its arguments. However, if the function relied on external dependencies (like file I/O or network calls), we might need to mock them to isolate the unit under test.
**Example 2:- An advanced example involving mocking and stubbing**
```javascript
// source file: fetchData.js
export async function fetchData() {
  // Simulate fetching data (could be a network call or file read)
  return { width: 5, height: 3 };
}
```
```javascript
// source file: rectangleArea.js
import { fetchData } from './fetchData';

export function calculateArea(width, height) {
  if (width <= 0 || height <= 0) {
    throw new Error("Width and height must be positive numbers");
  }
  return width * height;
}

// Fetches the dimensions, then delegates the calculation to calculateArea
export async function calculateAreaFromData() {
  const { width, height } = await fetchData();
  return calculateArea(width, height);
}
```
Now, we want to test `calculateAreaFromData` in isolation without actually making the external call in `fetchData`. Here's how mocking comes into play:
```javascript
// test file: rectangleArea.test.js
import { describe, it, expect, vi } from 'vitest';
import { calculateAreaFromData } from './rectangleArea';

// Test arrangement (mocking): vi.mock is hoisted to the top of the file,
// so the whole fetchData module is replaced before the code under test loads
vi.mock('./fetchData', () => ({
  fetchData: vi.fn(async () => ({ width: 5, height: 3 })),
}));

describe('calculateAreaFromData function', () => {
  // Test case 1: Valid dimensions from mocked data
  it('calculates the area correctly using mocked dimensions', async () => {
    const expectedArea = 15; // 5 * 3 from the mocked data

    // Call the function under test (the mocked dimensions will be used)
    const actualArea = await calculateAreaFromData();

    // Assertions
    expect(actualArea).toBe(expectedArea);

    // Restore mocks (optional, but good practice)
    vi.restoreAllMocks();
  });

  // Other test cases (can remain the same as the previous example)
});
```
**Explanation of Mocking:**
1. **Mocking `fetchData`:**
 - We use `vi.mock` from Vitest to mock the whole `fetchData` module.
 - Inside the mock factory we provide a fake `fetchData` that resolves to pre-defined values for `width` and `height`. This way, the function under test receives the mocked data instead of making the actual external call.
2. **Test Execution:**
 - The test case calls the function under test, which internally calls `fetchData`.
 - Since `fetchData` is mocked, the pre-defined dimensions are used for the calculation instead of a real external call.
3. **Assertions:**
- We assert that the `actualArea` matches the expected value based on the mocked data.
**Benefits of Mocking:**
- Isolates the unit under test (`calculateArea`) from external dependencies.
- Makes tests faster and more reliable (no external calls involved).
- Allows testing specific scenarios with controlled data.
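To build intuition for what a mocking helper like `vi.fn()` provides, here is a tiny hand-rolled stand-in — not the Vitest API, just an illustration of the core idea: recording every call's arguments while substituting a canned implementation:

```javascript
// Minimal mock-function factory: records arguments and returns canned values
function createMockFn(impl) {
  const mock = (...args) => {
    mock.calls.push(args);             // record every invocation's arguments
    return impl ? impl(...args) : undefined;
  };
  mock.calls = [];                     // inspection point, like vi.fn().mock.calls
  return mock;
}

// Substitute a canned data source for a real fetch
const fetchData = createMockFn(() => ({ width: 5, height: 3 }));

const { width, height } = fetchData('https://example.com/data');
console.log(width * height);          // 15
console.log(fetchData.calls.length);  // 1
```

Real mocking utilities add much more (call restoration, return-value queues, module replacement), but the recording-plus-substitution idea above is the heart of it.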
**Remember:** After each test, it's good practice to restore mocks using `vi.restoreAllMocks()` to avoid affecting subsequent tests. This ensures a clean slate for each test case.
| sandheep_kumarpatro_1c48 |
1,895,108 | 0. Introduction and Surface level Explanation | Let me start with a story Imagine you're building a big Lego castle. Unit testing is like... | 27,796 | 2024-06-20T18:49:19 | https://dev.to/sandheep_kumarpatro_1c48/0-introduction-and-surface-level-explanation-1gom | react, vitest, unittest, javascript | ## Let me start with a story
Imagine you're building a big Lego castle. Unit testing is like checking each individual Lego brick before you snap them together.
- **Small pieces:** Instead of testing the entire castle at once, you test each brick (the tiny building block). In software, these bricks are functions, small pieces of code that do one specific thing.
- **Working alone:** You check if each brick can connect properly on its own, without needing the whole castle built yet. In code, this means testing the function with different inputs (like numbers) to see if it gives the expected output (like the answer).
- **Catching problems early:** If a brick is broken or bent, you find it before wasting time building with it. In code, this helps catch errors early on, before they cause bigger problems later in the whole program.
So, unit testing is basically making sure the tiny building blocks of software work correctly before putting everything together. This helps catch mistakes early and make the final program run smoothly!
Small taste of code now,
```typescript
// Source code (to be unit tested)
import { fetchData } from './data_fetcher'; // Import the data fetching function

/**
 * Fetches data from a URL, parses it, and returns an array of strings.
 * This function depends on the `fetchData` function to retrieve data.
 */
async function processData(url: string): Promise<string[]> {
  const data = await fetchData(url);
  // Simulate parsing data (replace with your actual parsing logic)
  const processedData = data.split('\n').map(line => line.trim());
  return processedData;
}

export default processData;
```
```javascript
// Code for unit testing
import { describe, it, expect, vi } from 'vitest'; // Vitest's test helpers, expect, and vi for mocking
import processData from './data_processor';

// vi.mock calls are hoisted to the top of the file, so they must sit at module
// scope; values used inside the factory are declared with vi.hoisted
const mockData = vi.hoisted(() => 'Line 1\nLine 2\nLine 3');
vi.mock('./data_fetcher', () => ({ fetchData: async () => mockData })); // Mock using vi.mock

describe('processData function', () => {
  it('should process data from a URL', async () => {
    const url = 'https://example.com/data.txt';
    const processedData = await processData(url);
    expect(processedData).toEqual(['Line 1', 'Line 2', 'Line 3']);
  });
});
```
### Explanation by comparing
The provided code for `processData` and its test case can be compared to building a big Lego castle in the following ways:
**Castle Foundation (Imports):**
- **Castle:** Before building, you gather all the necessary bricks (Lego pieces) you'll need.
- **Code:** Similar to gathering bricks, the `import` statements (e.g., `import { fetchData } from './data_fetcher'`) bring in the required functionality from other files (like `data_fetcher.ts`) to build the logic in `processData`. These imported functions act as pre-built Lego components.
**Castle Walls (Function Definition):**
- **Castle:** The castle walls are the main structure, built with various Lego bricks.
- **Code:** The `processData` function is like the main structure of the code. It defines the steps to process data, similar to how instructions guide the castle assembly.
**Castle Details (Function Lines):**
- **Castle:** Each line of the building instructions specifies how to place individual bricks.
- **Code:** Each line of code within `processData` represents an action or step. Let's break them down:
1. `async function processData(url: string): Promise<string[]>`:
- This line defines the function named `processData`. It's `async` because it might involve waiting for data to be fetched. It takes a `url` (string) as input and promises to return an array of strings (`string[]`). This is like laying the foundation for the processing logic.
2. `const data = await fetchData(url);`:
- This line calls the imported `fetchData` function, passing the provided `url`. It uses `await` because `fetchData` might take time to retrieve data. This is like using a pre-built Lego wall component fetched from another box (the `data_fetcher` file).
3. `// Simulate parsing data (replace with your actual parsing logic)`:
- This line is a comment explaining that the following code simulates parsing data (splitting and trimming lines). You'd replace this with your actual logic for processing the fetched data. This is like the specific steps for building a unique part of the castle wall with different colored or shaped bricks.
4. `const processedData = data.split('\n').map(line => line.trim());`:
- This line performs the actual data processing. It splits the fetched `data` by newline characters (`\n`) and then uses `map` to iterate over each line, trimming any whitespace (`trim`). This is like assembling the fetched data wall component by splitting and connecting individual Lego pieces.
5. `return processedData;`:
- This line returns the final processed data (`processedData`) as an array of strings. This is like presenting the completed wall section that you built.
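The split-and-trim step above is ordinary string handling and can be run on its own to see exactly what it produces (the sample input here is made up):

```javascript
// Simulated fetched payload with stray whitespace around some lines
const data = ' Line 1 \nLine 2\n  Line 3';

// Split on newlines, then trim whitespace from every line
const processedData = data.split('\n').map(line => line.trim());

console.log(JSON.stringify(processedData)); // ["Line 1","Line 2","Line 3"]
```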
**Testing the Castle (Test Case):**
- **Castle:** After building, you might check the stability and functionality of different parts.
- **Code:** The test case (in a separate file) simulates checking the functionality of `processData`.
**Test Case Breakdown:**
1. **Mocking the Dependency (Mock Data):**
- In a real scenario, `fetchData` might fetch data from a server. Here, the test mocks `fetchData` using `vi.mock` (Vitest) to control the returned data (e.g., `mockData`). This is like creating a mock wall section without fetching real bricks, just to test how the main structure connects.
2. **Test Execution:**
- The test defines what data to process (`url`) and asserts (checks) if the returned `processedData` matches the expected outcome. This is like testing if the built wall section connects properly with the rest of the structure.
**Test Case In-depth Explanation:**
**Imports:**
1. **`import processData from './data_processor';`**: This line imports the `processData` function from the `data_processor.ts` file. This allows the test to access and test the function.
2. **`import { expect, vi } from 'vitest';`**: Here, we import two functionalities from Vitest:
- `expect`: This is used for making assertions about the test results.
- `vi`: This provides utilities for mocking in Vitest.
**Mocking the Dependency:**
1. **`vi.mock('./data_fetcher', () => ({ fetchData: () => mockData }));`**: This line uses Vitest's `vi.mock` function to mock the `fetchData` module from `data_fetcher.ts`. Mocking essentially creates a fake version of the module for testing purposes.
- `./data_fetcher`: This specifies the path to the module being mocked.
 - `() => ({ fetchData: () => mockData })`: This is an anonymous function that defines the mocked behavior of `fetchData`. Here, it simply returns a predefined string (`mockData`) instead of actually fetching data.
### **Why Mocking?**
Mocking is important in this scenario because the real `fetchData` function might involve network calls or interact with external systems. These can introduce external factors that could make the test unreliable or slow down execution. By mocking, we control the data returned by `fetchData` and isolate the test to focus solely on how `processData` handles the provided data.
**Test Description:**
1. **`describe('processData function', () => { ... });`**: This line defines a test suite using `describe` from Vitest. It groups related tests under the descriptive name "processData function". The code within the curly braces (`{...}`) will contain individual test cases.
**Individual Test Case:**
1. **`it('should process data from a URL', async () => { ... });`**: This line defines a specific test case using `it`. The string argument describes the test case ("should process data from a URL"). The `async` keyword indicates that the test involves asynchronous operations (waiting for the mocked `fetchData` to return data).
2. **`const mockData = 'Line 1\nLine 2\nLine 3';`**: This line defines a string variable `mockData` that holds the sample data used in the test. This data will be returned by the mocked `fetchData` function.
3. **`const url = 'https://example.com/data.txt';`**: This line defines a string variable `url` that represents the example URL used in the test case. This URL would normally be passed to the real `fetchData` function, but here it's just a placeholder since we're using mocked data.
4. **`const processedData = await processData(url);`**: This line calls the `processData` function with the defined `url`. The `await` keyword is necessary because `processData` is asynchronous due to the mocked `fetchData`. This line essentially simulates calling `processData` with a URL and waits for the processed data to be returned.
5. **`expect(processedData).toEqual(['Line 1', 'Line 2', 'Line 3']);`**: This line is the assertion using Vitest's `expect`. It checks if the `processedData` returned by `processData` is equal to the expected array containing the processed lines ("Line 1", "Line 2", "Line 3"). This verifies if `processData` correctly parses the mocked data.
**Running the Test:**
In your terminal, you can run the tests using `npx vitest`. Vitest will execute the test suite and report if all assertions pass (meaning the code works as expected) or fail (meaning there's an error in the `processData` function).
| sandheep_kumarpatro_1c48 |
1,895,144 | Building a Static Blog with Next.js | Next.js is a powerful framework for building server-side rendering (SSR) and static web applications... | 0 | 2024-06-20T18:48:16 | https://dev.to/malvinjay/building-a-static-blog-with-nextjs-of6 | Next.js is a powerful framework for building server-side rendering (SSR) and static web applications with React. One of the standout features of Next.js is its ability to generate static sites, which can offer improved performance and SEO benefits. In this article, we will explore how to build a simple static blog using Next.js. We’ll cover the setup, creating pages, fetching data, and deploying the static site.
## Prerequisites
To follow along with this tutorial, you should have:
- Basic knowledge of React.
- Node.js installed on your machine.
- A text editor or IDE like VSCode.
## Setting Up a Next.js Project
First, let's create a new Next.js project. Open your terminal and run the following command:
```
npx create-next-app my-static-blog
cd my-static-blog
```
This command will set up a new Next.js project in the my-static-blog directory.
## Creating the Blog Post Pages
Next.js uses a file-based routing system. Each file in the pages directory corresponds to a route in the application. We’ll create a posts directory inside the pages directory to hold our blog post files.
```
mkdir pages/posts
```
Let's create a sample blog post. Create a file named first-post.js inside the pages/posts directory with the following content:
```
// pages/posts/first-post.js
import Link from 'next/link';
export default function FirstPost() {
return (
<div>
<h1>First Post</h1>
<p>This is the content of the first post.</p>
{/* Since Next.js 13, <Link> renders its own anchor tag */}
<Link href="/">Back to Home</Link>
</div>
);
}
```
## Adding Dynamic Routes
For a blog, we often need dynamic routes to handle multiple posts. Next.js provides a powerful way to create dynamic routes using file names enclosed in square brackets. Let’s create a dynamic route for our blog posts.
Create a file named [id].js inside the pages/posts directory:
```
// pages/posts/[id].js
import { useRouter } from 'next/router';
import Link from 'next/link';
export default function Post() {
const router = useRouter();
const { id } = router.query;
return (
<div>
<h1>{id?.replace(/-/g, ' ')}</h1>
<p>This is the content of the post.</p>
{/* Since Next.js 13, <Link> renders its own anchor tag */}
<Link href="/">Back to Home</Link>
</div>
);
}
```
## Fetching Blog Post Data
To fetch and display the actual content of our blog posts, we can use getStaticProps and getStaticPaths. These functions allow us to fetch data at build time and generate static pages for each blog post.
Create a new directory named posts at the root of your project to store our blog post data:
```
mkdir posts
```
Create a file named first-post.md inside the posts directory with the following content:
```
---
title: "First Post"
date: "2024-06-20"
---
This is the content of the first post.
```
Update the [id].js file to fetch and display the content of the blog post:
```
// pages/posts/[id].js
import fs from 'fs';
import path from 'path';
import matter from 'gray-matter';
import { useRouter } from 'next/router';
import Link from 'next/link';
const postsDirectory = path.join(process.cwd(), 'posts');
export async function getStaticPaths() {
const filenames = fs.readdirSync(postsDirectory);
const paths = filenames.map((filename) => ({
params: { id: filename.replace(/\.md$/, '') },
}));
return {
paths,
fallback: false,
};
}
export async function getStaticProps({ params }) {
const fullPath = path.join(postsDirectory, `${params.id}.md`);
const fileContents = fs.readFileSync(fullPath, 'utf8');
const matterResult = matter(fileContents);
return {
props: {
postData: {
id: params.id,
...matterResult.data,
content: matterResult.content,
},
},
};
}
export default function Post({ postData }) {
return (
<div>
<h1>{postData.title}</h1>
<p>{postData.date}</p>
<div>{postData.content}</div>
{/* Since Next.js 13, <Link> renders its own anchor tag */}
<Link href="/">Back to Home</Link>
</div>
);
}
```
## Deploying the Static Site
Next.js makes it easy to deploy static sites. First, build the static files by running:
```
npm run build
npm run export
```
Note that the standalone `next export` command was deprecated in Next.js 13.3 and removed in Next.js 14 in favor of the `output: 'export'` setting in `next.config.js`; on newer versions, `npm run build` alone produces the static output.
This will generate an `out` directory containing the static files for your site. You can then deploy these files to any static site hosting service, such as Vercel, Netlify, or GitHub Pages.
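If you're on Next.js 13.3 or newer, where the standalone `next export` command has been superseded, the equivalent setup is a one-line addition to `next.config.js` (a sketch — your config may contain other options):

```
// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'export', // emit the static site into the `out` directory at build time
};

module.exports = nextConfig;
```

After this change, `npm run build` alone produces the static `out` directory.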
## Conclusion
In this tutorial, we've built a simple static blog using Next.js. We covered setting up a Next.js project, creating pages, adding dynamic routes, fetching blog post data, and deploying the static site. Next.js is a versatile framework that simplifies the process of building high-performance web applications.
For more information on Next.js, check out the [official documentation](https://nextjs.org/).
Happy coding! | malvinjay | |
1,895,143 | Laravel CKEditor Implementation | This documentation provides a step-by-step guide to implementing CKEditor in a Laravel project. The... | 0 | 2024-06-20T18:46:57 | https://dev.to/tahsin000/laravel-ckeditor-implementation-4aa8 |
This documentation provides a step-by-step guide to implementing CKEditor in a Laravel project. The example includes setting up the CKEditor in an admin panel for content creation and displaying the content on a frontend view.
## Prerequisites
- Laravel installed on your local machine
- Basic knowledge of Laravel framework and Blade templating engine
## Step 1: Set Up Routes
Add the following routes to your `web.php` file to handle displaying the CKEditor form, viewing the content, and uploading images.
```php
// frontend page
Route::get('ck-view', [CkController::class, 'ckView'])->name('ck-view');
// admin page
Route::get('ck-admin', [CkController::class, 'index'])->name('ck-admin');
Route::post('create', [CkController::class, 'store'])->name('create');
Route::post('update', [CkController::class, 'imageUpload'])->name('ck.upload');
```
## Step 2: Create the Blade Files
### Admin Blade File (admin.blade.php)
Create a Blade file `admin.blade.php` for the admin panel where the CKEditor will be integrated.
```html
<!DOCTYPE html>
<html lang="en">
<head>
<title>CkEditor</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/css/bootstrap.min.css" rel="stylesheet">
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/js/bootstrap.bundle.min.js"></script>
</head>
<body>
<div class="container mt-5">
<div class="row justify-content-center">
<div class="col-sm-8">
<form method="POST" action="{{ route('create') }}">
@csrf
<textarea name="editor" id="editor" class="form-control mt-3" placeholder="Description"></textarea>
<button class="btn-primary btn mt-3" type="submit">Submit</button>
</form>
</div>
</div>
</div>
<script src="https://cdn.ckeditor.com/ckeditor5/34.0.0/classic/ckeditor.js"></script>
<script>
ClassicEditor
.create(document.querySelector('#editor'), {
ckfinder: {
uploadUrl: '{{ route('ck.upload') . '?_token=' . csrf_token() }}',
}
})
.catch(error => {
console.error(error);
});
</script>
</body>
</html>
```
### View Blade File (view.blade.php)
Create a Blade file `view.blade.php` to display the content stored in the database.
```html
<!DOCTYPE html>
<html lang="en">
<head>
<title>CkEditor</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/css/bootstrap.min.css" rel="stylesheet">
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/js/bootstrap.bundle.min.js"></script>
</head>
<body class="post-content">
@foreach ($data as $note)
{!! $note->content !!}
@endforeach
</body>
</html>
```
## Step 3: Create the Controller
Create a `CkController.php` file and add the following code:
```php
<?php
namespace App\Http\Controllers;
use App\Models\Ck;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Str;
class CkController extends Controller
{
public function ckView()
{
$data = Ck::all();
return view('ck-view', compact('data'));
}
public function index()
{
return view('ck-admin');
}
public function store(Request $request)
{
$note = Ck::create([
'content' => $request->input('editor'),
]);
$request->session()->flash('success', 'Note created successfully.');
return redirect()->route('ck-admin');
}
public function imageUpload(Request $request)
{
if ($request->hasFile('upload')) {
try {
$file = $request->file('upload');
$extension = $file->getClientOriginalExtension();
$randomFileName = Str::random(40) . '.' . $extension;
$file->move(public_path('media'), $randomFileName);
$url = asset('media/' . $randomFileName);
return response()->json([
'fileName' => $randomFileName,
'uploaded' => 1,
'url' => $url,
]);
} catch (\Exception $e) {
Log::error('File upload error: ' . $e->getMessage(), [
'exception' => $e,
'trace' => $e->getTraceAsString(),
'request' => $request->all(),
]);
return response()->json(['error' => 'File upload failed. Please try again.'], 500);
}
}
return response()->json(['error' => 'No file uploaded.'], 400);
}
}
```
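Note that the Blade files and controller reference four named routes (`create`, `ck.upload`, `ck-admin`, `ck-view`) that are not defined in this snippet. A minimal sketch of matching `routes/web.php` entries might look like the following — the URI paths here are assumptions; only the route names are taken from the code above:

```php
<?php

use App\Http\Controllers\CkController;
use Illuminate\Support\Facades\Route;

// Hypothetical paths; the route names match those used in the Blade files and controller.
Route::get('/ck', [CkController::class, 'index'])->name('ck-admin');
Route::get('/ck/view', [CkController::class, 'ckView'])->name('ck-view');
Route::post('/ck/store', [CkController::class, 'store'])->name('create');
Route::post('/ck/upload', [CkController::class, 'imageUpload'])->name('ck.upload');
```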
## Step 4: Database Migration and Model
Create a migration and model for the `Ck` table.
```bash
php artisan make:model Ck -m
```
Update the migration file to include the necessary fields:
```php
public function up()
{
Schema::create('cks', function (Blueprint $table) {
$table->id();
$table->text('content');
$table->timestamps();
});
}
```
Run the migration:
```bash
php artisan migrate
```
Update the `Ck` model:
```php
<?php
namespace App\Models;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
class Ck extends Model
{
use HasFactory;
protected $fillable = ['content'];
}
```
| tahsin000 | |
341,337 | Why you should start a blog as a developer | Starting a blog is a wonderful idea, and it's a challenging one too. | 0 | 2020-05-21T23:13:58 | https://melbarch.com/blog/why-starting-a-blog/ | blog, writing, blogging | ---
title: "Why you should start a blog as a developer"
published: true
description: "Starting a blog is a wonderful idea, and it's a challenging one too."
tags: ["blog","writing","blogging"]
canonical_url: "https://melbarch.com/blog/why-starting-a-blog/"
---
Starting a blog is a wonderful idea, and it's a challenging one too. If you take a look around on the internet, you will find a lot of people write and publish often, and sometimes it seems like they do it effortlessly. But the truth is that you only need to have clear motivations and goals to guide you through the journey.
In order to motivate those considering starting a blog, and to remind myself why this is important, I would love to explore the main reasons why you should start blogging and carve out your place in this overwhelming virtual world.
Keep in mind that this is not an exhaustive list:
### Deepen your knowledge
By starting a blog, you deepen your knowledge. There is an excellent quote by Joseph Joubert about teaching others:
> "To teach is to learn twice."
Trying to share your knowledge and skills will motivate you to master the subject at hand and to refine your thoughts.
People might comment to correct some of your points; just keep in mind that they are helping you learn, not trying to bring you down.
### Give back to the community
Another great thing is that you help other people. As developers, we save a lot of time by googling and searching for answers on Stack Overflow.
Writing about your findings and how you solved your problems will help others, and probably your future self when you face the same issue again.
### Learn new skills
Writing code is not your only job as a developer. You also have to document your code, write emails, and do a lot of other things like that. Writing blog posts is a great way to improve those skills.
Blogging should be considered a learning experience, and when people learn, they grow and feel fulfilled and happier.
### Build your online brand
Blogging is an excellent way to market yourself. It means that you open the door for more opportunities.
On a personal level, it will help you build your career. On the business side, it helps build an audience for upcoming eBooks or apps.
### Increase in self confidence
One of the benefits of blogging is building self-confidence. By posting about topics that trigger engaging comments and exchanging thoughts about them, you will experience a real boost of positive emotions.
Blogging will make you realize that you do have something important to say and will encourage you to speak your mind more often.
#### Final words
If you already have a blog, your next challenge is to build the habit of publishing regularly; consistency is always the key.
If you don't have a blog yet, I hope this will motivate you to start this new challenge.
The blog doesn't have to be *good*, it can be just a random collection of thoughts, and that's still a valuable and powerful addition to your online presence.
Good luck!
*This article was originally published on [https://melbarch.com](https://melbarch.com/blog/why-starting-a-blog/) on May 10th, 2020.* | melbarch |
1,895,139 | Intersection of Two Sorted Arrays | Finding the intersection of two sorted arrays is a common problem in coding interviews and... | 27,580 | 2024-06-20T18:44:03 | https://blog.masum.dev/intersection-of-two-sorted-arrays | algorithms, computerscience, cpp, tutorial | Finding the intersection of two sorted arrays is a common problem in coding interviews and programming challenges. In this article, we'll discuss two approaches to solve this problem: one using a brute force approach and another using two-pointers technique.
### Solution 1: Brute Force Approach (using Nested Loop)
This method involves using a nested loop to find the intersection of two arrays.
**Implementation**:
```cpp
// Solution-1: Brute Force Approach
// Time Complexity: O(m * n)
// Space Complexity: O(m)
vector<int> intersectionOfSortedArrays(vector<int> &arr1, int n, vector<int> &arr2, int m)
{
vector<int> temp;
vector<int> visited(m, 0);
for (int i = 0; i < n; i++)
{
for (int j = 0; j < m; j++)
{
// If element matches and has not been matched with any element before
if (arr1[i] == arr2[j] && visited[j] == 0)
{
temp.push_back(arr2[j]);
visited[j] = 1;
break;
}
// Since the arrays are sorted, no match for arr1[i] exists beyond this point
else if (arr2[j] > arr1[i])
{
break;
}
}
}
return temp;
}
```
**Logic**:
**1. Track Visited Elements**:
- Initialize a `visited` vector of size `m` (length of `arr2`) with all elements set to `0`. This vector is used to track elements in `arr2` that have already been matched.
**2. Nested Loop**:
- Iterate through each element of `arr1` using the outer loop. For each element in `arr1`, iterate through each element of `arr2` using the inner loop.
**3. Check for Matches**:
- If the current element of `arr1` matches the current element of `arr2` and the `visited` marker for that element in `arr2` is `0` (indicating it has not been matched yet), add the element to the result vector `temp` and mark it as visited by setting the corresponding element in the `visited` vector to `1`.
- If the current element of `arr2` is greater than the current element of `arr1`, break out of the inner loop as the arrays are sorted and no further match will be found for the current element of `arr1`.
**Time Complexity**: O(m \* n)
* **Explanation**: Each element in `arr1` is compared with every element in `arr2`, resulting in a nested loop.
**Space Complexity**: O(m)
* **Explanation**: Additional space is used to store the `visited` vector and the result vector `temp`.
**Example**:
* **Input**: `arr1 = [1, 2, 2, 3, 4]`, `arr2 = [2, 2, 3, 5, 6]`
* **Output**: `intersection = [2, 2, 3]`
* **Explanation**: The element `2` appears twice in both arrays and `3` appears once, so the intersection keeps both occurrences of `2`.
---
### Solution 2: Optimal Approach (Two-Pointers Technique)
This method uses two pointers to find the intersection efficiently.
**Implementation**:
```cpp
// Solution-2: Optimal Approach (Using Two Pointers)
// Time Complexity: O(m + n)
// Space Complexity: O(min(n, m))
vector<int> intersectionOfSortedArrays(vector<int> &arr1, int n, vector<int> &arr2, int m)
{
int i = 0;
int j = 0;
vector<int> temp;
while (i < n && j < m)
{
if (arr1[i] < arr2[j])
{
i++;
}
else if (arr1[i] > arr2[j])
{
j++;
}
else
{
temp.push_back(arr1[i]);
i++;
j++;
}
}
return temp;
}
```
**Logic**:
**1. Initialize Pointers**:
- Initialize two pointers `i` and `j`, both set to `0`. These pointers will traverse `arr1` and `arr2`, respectively.
**2. Traverse Arrays**:
- Use a `while` loop to traverse both arrays as long as both pointers are within the bounds of their respective arrays.
**3. Compare Elements**:
- If the current element of `arr1` is less than the current element of `arr2`, increment pointer `i` to move to the next element in `arr1`.
- If the current element of `arr1` is greater than the current element of `arr2`, increment pointer `j` to move to the next element in `arr2`.
- If the current elements of both `arr1` and `arr2` are equal, add the element to the result vector `temp` and increment both pointers `i` and `j` to move to the next elements in both arrays.
**Time Complexity**: O(m + n)
* **Explanation**: Each array is traversed once using two pointers.
**Space Complexity**: O(min(n, m))
* **Explanation**: The result vector `temp` stores the intersection elements, which can be up to the size of the smaller array.
**Example**:
* **Input**: `arr1 = [1, 2, 2, 3, 4]`, `arr2 = [2, 2, 3, 5, 6]`
* **Output**: `intersection = [2, 2, 3]`
* **Explanation**: The element `2` appears twice in both arrays and `3` appears once, so the intersection keeps both occurrences of `2`.
---
### Comparison
* **Brute Force Approach**:
* **Pros**: Simple and straightforward.
* **Cons**: Inefficient for large arrays due to O(m \* n) time complexity and additional space usage.
* **Optimal Approach**:
* **Pros**: Efficient with O(m + n) time complexity and lower space complexity.
* **Cons**: Slightly more complex to implement but highly efficient for large arrays.
### Edge Cases
* **Empty Arrays**: Returns an empty array as there are no elements to intersect.
* **No Intersection**: Returns an empty array if there are no common elements.
* **Identical Arrays**: Returns the entire array as all elements are common.
### Additional Notes
* **Efficiency**: The two-pointers technique is more time-efficient, making it preferable for large arrays.
* **Practicality**: Both methods handle intersections efficiently but the choice depends on the size of the arrays and space constraints.
### Conclusion
Finding the intersection of two sorted arrays can be efficiently achieved using either a brute force approach or a two-pointers technique. The optimal choice depends on the specific constraints and requirements of the problem.
--- | masum-dev |
1,895,138 | Digital Nomadism in 2024 | Hi everyone! in last month i've been search (on web and offline) a way to talk with a digital nomad... | 0 | 2024-06-20T18:43:30 | https://dev.to/tommytortorelli/digital-nomadism-in-2024-29lp | webdev, beginners, programming, ai | Hi everyone!
Over the last month I've been searching (on the web and offline) for a way to talk with a digital nomad in 2024.
I've found many programmers and many designers, but... BUT... what is really the way to become a "Digital Nomad"?
What is the best way to work in nomad mode?
What is the best way to survive financially (if you don't have a fully remote job)?
**What do you think about that?**
| tommytortorelli |
1,895,135 | Debouncing- A Saver ! | Debouncing is a fundamental technique in JavaScript that improves performance by limiting the number... | 27,558 | 2024-06-20T18:42:08 | https://dev.to/imabhinavdev/debouncing-a-saver--21b3 | webdev, javascript, beginners, programming | Debouncing is a fundamental technique in JavaScript that improves performance by limiting the number of times a function is executed. It's particularly useful in scenarios where rapid events (like scrolling or typing) trigger callback functions that might cause performance issues if executed too frequently. In this guide, we'll explore what debouncing is, why it's important, how it works, and provide practical examples to help you master this essential concept.
## 1. Introduction to Debouncing
In the world of JavaScript programming, managing event handlers efficiently is crucial for maintaining smooth user experiences and optimizing performance. Debouncing is a technique used to control how many times a function gets called over time. It ensures that functions bound to events like scroll, resize, or input are only executed after a specified period of inactivity, thus preventing them from being called excessively.
## 2. Understanding Debouncing
To understand debouncing better, let's delve into its core concept:
### How Debouncing Works
Imagine you have a scenario where a user is typing into an input field, and you want to perform a search operation based on their input. Without debouncing, the search function would trigger with every keystroke, potentially overwhelming the server with requests or causing unnecessary recalculations.
Debouncing solves this problem by introducing a delay before executing the function. If another event of the same type occurs within the delay period, it resets the timer. This way, the function only fires once the user has stopped typing for the specified duration.
### Benefits of Debouncing
- **Performance Optimization:** Reduces the number of function executions, improving overall performance, especially in resource-intensive applications.
- **Control Over Event Handling:** Ensures that actions triggered by events like scrolling or resizing are handled in a controlled and predictable manner.
- **Preventing Unnecessary Operations:** Minimizes unnecessary computations or API requests that could occur from rapid, successive events.
## 3. Implementing Debouncing
### Simple Debouncing Example
Let's walk through a basic implementation of debouncing in JavaScript:
```javascript
function debounce(func, delay) {
let timer;
return function() {
clearTimeout(timer);
timer = setTimeout(() => {
func.apply(this, arguments);
}, delay);
};
}
// Usage example
const searchInput = document.getElementById('search-input');
const search = debounce(function() {
// Perform search operation here
console.log('Searching for:', searchInput.value);
}, 300);
searchInput.addEventListener('input', search);
```
#### Explanation:
- **debounce(func, delay):** This is a higher-order function that takes a function `func` and a delay `delay` as parameters.
- **clearTimeout(timer):** Resets the timer every time the event is triggered within the delay period.
- **setTimeout:** Invokes the function `func` after `delay` milliseconds of inactivity.
### Advanced Use Cases
Debouncing can be applied to various scenarios beyond simple input events. For instance, you might debounce window resize events to optimize layout recalculations or debounce API calls to prevent excessive requests during user interactions.
```javascript
// Debouncing window resize event
window.addEventListener('resize', debounce(function() {
console.log('Window resized');
}, 500));
// Debouncing API calls
const fetchData = debounce(async function(query) {
const response = await fetch(`https://api.example.com/search?q=${query}`);
const data = await response.json();
console.log('Fetched data:', data);
}, 1000);
```
## 4. Practical Applications
### Debouncing User Input
One of the most common applications of debouncing is in handling user input effectively, especially in search bars or form fields where immediate feedback or live search results are desired without overwhelming backend services.
```javascript
const inputField = document.getElementById('user-input');
const handleInput = debounce(function() {
  // Validate or process user input
  console.log('User input:', inputField.value);
}, 200);
inputField.addEventListener('input', handleInput);
```
### Handling Window Resize Events
Debouncing window resize events ensures that layout calculations or responsive adjustments are performed efficiently, avoiding multiple recalculations during rapid resizing.
```javascript
window.addEventListener('resize', debounce(function() {
console.log('Window resized');
}, 300));
```
### Optimizing Network Requests
Debouncing API requests is crucial for scenarios where user interactions trigger server-side operations, such as autocomplete suggestions or fetching data based on user input.
```javascript
const fetchData = debounce(async function(query) {
const response = await fetch(`https://api.example.com/search?q=${query}`);
const data = await response.json();
console.log('Fetched data:', data);
}, 500);
document.getElementById('search-input').addEventListener('input', function() {
fetchData(this.value);
});
```
## 5. Comparing Debouncing and Throttling
While debouncing and throttling are both techniques used to improve performance in event-driven scenarios, they serve different purposes:
- **Debouncing** delays invoking a function until after a certain period of inactivity.
- **Throttling** limits the number of times a function can be called within a specified time interval.
Choose debouncing when you want to ensure that a function is only executed after a pause in events, whereas throttling is preferable for limiting the rate of execution of a function to a fixed interval.
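For contrast, here is a minimal throttle sketch (not tied to any particular library); it invokes the wrapped function at most once per interval and drops the calls in between:

```javascript
function throttle(func, interval) {
  let lastCall = 0;
  return function (...args) {
    const now = Date.now();
    // Only invoke if enough time has passed since the last accepted call
    if (now - lastCall >= interval) {
      lastCall = now;
      func.apply(this, args);
    }
  };
}

// Example: a burst of 1000 synchronous "events" collapses to a single call
let count = 0;
const throttled = throttle(() => count++, 200);
for (let i = 0; i < 1000; i++) throttled();
console.log(count); // 1 — only the first call in the burst gets through
```

Note that this sketch only keeps the leading call; real-world throttle implementations (for example lodash's `_.throttle`) typically also support a trailing invocation.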
## 6. Conclusion
In conclusion, debouncing is a powerful technique in JavaScript for managing event handlers effectively, optimizing performance, and ensuring a smooth user experience. By understanding its principles and applying it to relevant scenarios like user input handling, window resizing, or network requests, you can significantly enhance the responsiveness and efficiency of your applications.
Mastering debouncing requires practice and experimentation to find the optimal delay for different use cases. Start integrating debouncing into your projects today to witness firsthand its impact on performance and user satisfaction.
Now that you have a solid understanding of debouncing, experiment with it in your own projects and explore how it can elevate the functionality and efficiency of your JavaScript applications! | imabhinavdev |
1,895,134 | Episode 24/24: Vertical Architectures, WebAssembly, Angular v9's Secret, NgRx | Brandon Roberts unveiled why Angular 9 has the highest download rates. Manfred Steyer gave a talk... | 0 | 2024-06-20T18:38:35 | https://dev.to/this-is-angular/episode-2424-vertical-architectures-webassembly-angular-v9s-secret-ngrx-1265 | webdev, javascript, angular, programming | Brandon Roberts unveiled why Angular 9 has the highest download rates. Manfred Steyer gave a talk about vertical architectures. Evgeniy Tuboltsev published a guide on how to integrate WebAssembly into Angular, and NgRx 18 was released.
{% embed https://youtu.be/mEAcIGUgluo %}
## Vertical Architectures
At the Angular Community Meetup, Manfred Steyer presented an upgraded version of his talk about DDD in Angular.
He mentioned the team topologies model, where we have four different types of teams responsible for different tasks:
1. Platform services team
2. Specialization team
3. Supportive teams
4. Value stream team
{% embed https://www.youtube.com/watch?v=AEMMyvFkx4c %}
## NgRx 18
NgRx, the most popular state-management library for Angular, was released in v18, making it compatible with Angular 18.
Be careful if you want to use the Signal Store: it **will** become stable later, but it is not stable yet.
To use it in Angular 18, run `npm i @ngrx/signals@next` or use the schematic (also with the `next` tag).
{% embed https://dev.to/ngrx/announcing-ngrx-18-ngrx-signals-is-almost-stable-eslint-v9-support-new-logo-and-redesign-workshops-and-more-17n2 %}
## The secret behind Angular 9
At the moment, Angular is downloaded around 3.5 million times per week, making it the third most downloaded framework after React and Vue.
Around 450,000 downloads come from Angular 9. Given that the current version is 18, that's a little bit strange.
Brandon Roberts discovered that Angular 9 is a dependency of Codelyzer, a linting library we used before typescript-eslint came out.
It is very likely that Codelyzer is still part of many applications even though it is no longer actively used, and developers should remove it.
According to the statistics, Codelyzer currently sits at around 600,000 downloads.
Without Codelyzer, Angular's download numbers would/will drop by 17%.
{% embed https://www.youtube.com/watch?v=fQGDZzrIPRg %}
## WebAssembly & Angular
WebAssembly makes it possible to run applications in the browser that were written in languages other than JavaScript, with near-native execution speed.
Evgeniy Tuboltsev wrote an article showing how to port an application written in Rust to WebAssembly and consume it in Angular. In his comparison, the example runs three times faster than its JavaScript counterpart.
{% embed https://medium.com/@eugeniyoz/powering-angular-with-rust-wasm-0eed1668a51c %}
| ng_news |
1,895,133 | Luxury Guess Watches Showroom in Ghaziabad | Sai Creations Watch | When it comes to blending modern fashion with timeless elegance, Guess Collection (GC) Watches stand... | 0 | 2024-06-20T18:31:12 | https://dev.to/saicreationswatches/luxury-guess-watches-showroom-in-ghaziabad-sai-creations-watch-2pbn | watches | When it comes to blending modern fashion with timeless elegance, Guess Collection (GC) Watches stand out as an epitome of style and sophistication. For those in Ghaziabad, the new Guess Collection Watches showroom is a treasure trove waiting to be explored. Here’s why a visit to this showroom should be on every watch enthusiast’s agenda.
Click: https://saicreationswatches.com/blogs/news/luxury-guess-watches-showroom-in-ghaziabad-sai-creations-watch

A Grand Welcome to Luxury
Located in the heart of Ghaziabad, the Guess Collection Watches showroom offers an inviting ambiance where luxury meets comfort. As you step inside, you are greeted by a meticulously designed space that mirrors the elegance and class of the GC brand. The showroom’s interior is a perfect blend of contemporary design and sophisticated charm, making it the ideal setting to discover the latest in horological fashion.
An Exquisite Collection
The showroom features an extensive range of Guess Collection watches, showcasing the brand's commitment to craftsmanship and innovation. From classic timepieces to modern marvels, there is something for every taste and occasion. Whether you are looking for a watch that exudes understated elegance or one that makes a bold fashion statement, the collection at the showroom will not disappoint.
Get Discount Here: https://saicreationswatches.com/collections/guess-collection
Highlights of the Collection:
Men’s Watches: Rugged yet refined, the men’s collection boasts watches that are perfect for both everyday wear and special occasions. With features like chronographs, tachymeters, and sleek designs, these watches are engineered for precision and style.
Women’s Watches: The women’s collection is a celebration of elegance and sophistication. From sparkling dials adorned with Swarovski crystals to minimalist designs, these watches are the perfect accessories to complement any outfit.
Unisex Options: For those who appreciate versatility, the unisex collection offers stylish and functional timepieces that transcend gender norms.
Personalized Service
What sets the Guess Collection Watches showroom in Ghaziabad apart is its dedication to providing personalized service. The knowledgeable and friendly staff are always on hand to assist you in finding the perfect watch. Whether you need help understanding the features of a particular model or advice on the best watch to match your style, their expertise ensures a seamless and enjoyable shopping experience.
Exclusive Offers and Events
To make your visit even more special, the showroom often hosts exclusive events and offers. From launch parties for new collections to special discounts and promotions, there is always something exciting happening. Be sure to sign up for their newsletter or follow them on social media to stay updated on the latest happenings and offers.
A Perfect Gift Destination
Guess Collection watches make for exceptional gifts. Whether you are celebrating a milestone, looking for a memorable birthday present, or simply want to treat a loved one, the showroom offers a variety of options to suit every occasion. The elegant packaging and the prestige associated with the GC brand ensure that your gift will be cherished for years to come.
Visit Us Today
Embark on a journey of elegance and style at the Guess Collection Watches showroom in Ghaziabad. Discover the perfect timepiece that not only complements your wardrobe but also reflects your personality. Visit us today and let us help you find a watch that is as unique and special as you are.
Contact Us
Shop No. F 232,
Indirapuram Habitat Center,
Opposite श्री रत्नम्,
Ahinsa Khand 1, Indirapuram, Ghaziabad
Website: https://saicreationswatches.com/ | saicreationswatches |
1,895,130 | 10 In-Demand Highest-Paying Python Jobs in 2024 | Below are the top 10 Best careers in Python according to ZipRecruiter and Indeed, along with their... | 0 | 2024-06-20T18:27:57 | https://dev.to/devella/10-in-demand-highest-paying-python-jobs-in-2024-4o5k | python, beginners, webdev, datascience | Below are the **top 10 Best careers** in Python according to **ZipRecruiter** and **Indeed**, along with their estimated average annual salaries:
1. **Machine Learning Engineer ==> _$110,500 - $164,500_:** Machine learning engineers use Python to build and deploy machine learning models that can learn from data and make predictions. They are in high demand across various industries, such as finance, healthcare, and technology.
2. **Data Scientist ==> _$100,500 - $150,000_:** Data scientists use Python to collect, clean, analyze, and interpret data. They use their findings to solve business problems and develop data-driven strategies.
3. **Python Developer ==> _$100,500 - $138,500_**: Python developers use Python to build a wide range of software applications, from web and mobile apps to backend systems and automation scripts.
4. **Full Stack Developer (Python) ==> _$109,000 - $151,000_:** Full stack developers use Python to develop both the front-end (user interface) and back-end (server-side) of web applications. While they may use other languages for the front-end, Python is a popular choice for the back-end.
5. **Automation Engineer ==> _$93,500 - $140,500_:** Automation engineers use Python to automate repetitive tasks, which can improve efficiency and reduce errors. They are in demand across various industries, such as manufacturing, IT, and healthcare.
6. **DevOps Engineer ==> _$91,500 - $143,500_:** DevOps engineers use Python to automate the process of building, testing, and deploying software applications. They work to bridge the gap between development and operations teams.
7. **Data Analyst ==> _$85,500 - $133,500_:** Data analysts use Python to clean, analyze, and visualize data to identify trends and insights. They communicate their findings to stakeholders to help them make data-driven decisions.
8. **Web Developer (Python) ==> _$81,500 - $129,500_:** Web developers use Python to build web applications. While Javascript is the most common language for front-end development, Python with frameworks like Django is a popular choice for back-end development.
9. **Software Engineer (Python) ==> _$79,500 - $127,500_:** Software engineers use Python to develop a wide range of software applications. This can be similar to Python developer roles, but may encompass a broader range of languages and technologies.
10. **Quant Analyst ==> $95,500 - $145,500:** Quant analysts use Python to develop and implement mathematical models to analyze financial markets and make investment decisions. While a strong background in finance is required, Python is becoming an increasingly important tool for quantitative analysts.
> **_Remember these are just estimated average salaries and can vary depending on factors such as experience, location, and the specific company._**
| devella |