id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,906,880 | PIPO AI TOKEN | PIPO AI TOKEN: The Future of Digital Currency and Artificial Intelligence In the ever-evolving world... | 0 | 2024-06-30T19:51:33 | https://dev.to/pi_po_2fe93a8de36e265a60d/pipo-ai-token-3pj6 | ai, cryptocurrency | PIPO AI TOKEN: The Future of Digital Currency and Artificial Intelligence
In the ever-evolving world of digital currencies, one token stands out as a beacon of innovation and potential: PIPO AI TOKEN. As we move towards an era where artificial intelligence (AI) and blockchain technology are increasingly intertwined, PIPO AI TOKEN emerges as a groundbreaking solution that promises to revolutionize both fields.
A Visionary Token for a Revolutionary Era
PIPO AI TOKEN is not just another digital currency; it’s a visionary project that integrates the power of AI with the robustness of blockchain technology. This unique combination positions PIPO AI TOKEN as a key player in shaping the future of digital economies and AI-driven applications.
Why PIPO AI TOKEN is the Future
1. Innovative Integration of AI and Blockchain: PIPO AI TOKEN leverages cutting-edge AI algorithms to enhance security, efficiency, and scalability of blockchain transactions. This fusion opens up new possibilities for smart contracts, automated processes, and intelligent decision-making systems.
2. Transformative Potential: The impact of PIPO AI TOKEN extends beyond traditional digital currency applications. By harnessing AI, this token can facilitate advancements in various sectors, including finance, healthcare, logistics, and more. The ability to process vast amounts of data and make intelligent predictions can lead to unprecedented improvements in these fields.
3. Enhanced Security and Transparency: One of the primary concerns with digital currencies is security. PIPO AI TOKEN addresses this by employing advanced AI-driven security protocols, ensuring that transactions are not only faster but also more secure. Additionally, the transparency offered by blockchain technology means that all transactions are verifiable and immutable.
Why You Should Invest in PIPO AI TOKEN
Investing in PIPO AI TOKEN is not just about potential financial gains; it’s about being part of a transformative movement. Here are a few reasons why you should consider adding PIPO AI TOKEN to your investment portfolio:
1. Pioneering Technology: As an early adopter, you’ll be at the forefront of a technological revolution that combines AI and blockchain. This pioneering technology has the potential to create new markets and disrupt existing ones.
2. High Growth Potential: With the increasing adoption of AI and blockchain technologies, the demand for innovative solutions like PIPO AI TOKEN is expected to soar. Early investors stand to benefit significantly as the token’s value appreciates over time.
3. Support a Future-Oriented Vision: By investing in PIPO AI TOKEN, you are supporting a project that aims to make a positive impact on the world. The advancements driven by this token can lead to more efficient systems, smarter applications, and a better future for all. | pi_po_2fe93a8de36e265a60d |
1,906,878 | React State Management: When & Where add your states? | When you start learning React, managing state can be challenging at first. It's crucial to understand... | 0 | 2024-06-30T19:50:55 | https://dev.to/atenajoon/react-state-management-when-where-add-your-states-3g61 | react, state, useref | When you start learning React, **managing state** can be challenging at first. It's crucial to understand when you really need a state for a variable and where to place that state to ensure your code is robust and efficient. Proper state management not only **optimizes performance** by minimizing unnecessary re-renders but also enhances **predictability** and **maintainability**, making your code easier to debug. It promotes component **reusability**, supports application **scalability**, and maintains a clear **separation of concerns**. Ultimately, effective state management leads to a smoother user experience and a high-quality, performant application.
There is a series of questions you can ask yourself to determine if your variables need separate states or if they can be simple constants. Additionally, these questions can help you decide where to keep the state if it is needed.
## When do I need a state variable?
**Do you need to store data?**
YES:
**Will data change at some point?**
NO: Regular "const" variable
YES:
**Can it be computed from existing state/props?**
YES: Derive state
NO:
**Should it re-render the component?**
NO: Ref (useRef)
YES:
**PLACE A NEW PIECE OF STATE IN COMPONENT!**
## Where to place your new state?
_Always start with a local state in the current component. Then ask yourself if it's:_
**Only used by this component?**
YES: Leave it in the component
NO: **Also used by a child component?**
YES: pass it to the child via props
NO: **Also used by one or a few sibling components?**
YES: Lift state up to the first common parent
NO: **Used all over the component tree by more than a few sibling components?**
YES: Then, you probably need a Global State!
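To make the decision tree concrete, here is a small sketch using a hypothetical shopping cart (the data and names are made up); the React-specific parts are left as comments so the derivation logic runs on its own in plain Node:

```javascript
// Hypothetical shopping-cart example walking the decision tree above.
// React-specific lines are left as comments so this file runs in plain Node.

// Q: store data that changes and should re-render? -> a real piece of state:
//   const [items, setItems] = useState([]);

// Q: can a value be computed from existing state/props? -> derive it;
// do NOT create a second piece of state for the total:
function deriveTotal(items) {
  return items.reduce((sum, item) => sum + item.price * item.qty, 0);
}

// Q: changes over time but should NOT trigger a re-render
// (e.g. a debug counter)? -> a ref:
//   const renderCount = useRef(0); renderCount.current += 1;

const cart = [
  { name: "pen", price: 2, qty: 3 },
  { name: "book", price: 10, qty: 1 },
];
console.log(deriveTotal(cart)); // 16
```

Keeping `totalPrice` derived instead of in its own `useState` means it can never drift out of sync with `items`, which is exactly the predictability benefit described above.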
| atenajoon |
1,906,877 | Top SQL Interview Questions | Welcome to Day 01 of our 21-day coding challenge! Today, we're diving into some of the most common... | 0 | 2024-06-30T19:47:39 | https://dev.to/shruti_maheshwari_/top-sql-interview-questions-3gp0 | Welcome to Day 01 of our 21-day coding challenge! Today, we're diving into some of the most common SQL interview questions. Whether you're prepping for an interview or just looking to sharpen your SQL skills, these questions will give you a solid foundation.
1. What is SQL?
2. What are the different types of SQL commands?
3. What is a primary key?
4. What is a foreign key?
5. What is a JOIN? Explain different types of JOINs.
6. What is normalization? Explain its types.
7. What are indexes?
8. What is a stored procedure?
9. What is a trigger?
10. Explain ACID properties.
**Practical Questions:**
1. How do you retrieve unique records from a table?
2. How do you find the second highest salary from an Employee table?
3. How do you update a record in a table?
4. How do you delete a record from a table?
5. What is a view and how do you create one?
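As a warm-up for practical question 2, here is a runnable sketch using Python's built-in `sqlite3` module; the table and salary values are made up for illustration:

```python
import sqlite3

# In-memory table for the classic "second highest salary" question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (name TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO Employee VALUES (?, ?)",
    [("Asha", 90000), ("Ben", 120000), ("Chen", 120000), ("Dana", 75000)],
)

# DISTINCT collapses the tied top salaries; OFFSET 1 skips the highest one.
row = conn.execute(
    "SELECT DISTINCT salary FROM Employee "
    "ORDER BY salary DESC LIMIT 1 OFFSET 1"
).fetchone()
print(row[0])  # 90000
```

A common alternative is `SELECT MAX(salary) FROM Employee WHERE salary < (SELECT MAX(salary) FROM Employee)`, which handles ties the same way.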
Happy coding! Stay tuned for Day 02 where we’ll cover some exciting topics to keep your coding skills sharp.
#21DaysOfChallenge #SQL #CodingInterview #TechInterviews
| shruti_maheshwari_ | |
1,906,876 | CVE-2024-27867- Eavesdropping vulnerability AirPods | On 26th of June, Apple announced CVE-2024-27867. If you are the (happy) owner of either: AirPods... | 0 | 2024-06-30T19:47:06 | https://dev.to/yowise/cve-2024-27867-eavesdropping-vulnerability-airpods-3c4j | cybersecurity, community, ios | On 26th of June, Apple announced CVE-2024-27867.
If you are the (happy) owner of either:
- AirPods (2nd generation and later),
- AirPods Pro (all models),
- AirPods Max,
- Powerbeats Pro,
- Beats Fit Pro
then you shall ensure your device(s)' firmware is up to date.
The good news: if your AirPods/Beats are charging and are connected to your iPhone, iPad, or Mac via Bluetooth, then the update is done automatically.
You can check the version of your AirPods/Beats using one of the earlier specified devices. Be wary that your iPhone/iPad/Mac should also be on the latest version! 💡
The bad news: Your conversations were at risk of being intercepted by a ~~curious~~ malicious actor using a Bluetooth sniffer.
## What is a Bluetooth sniffer?
It's a tool used to intercept and read (i.e. to sniff) Bluetooth Low Energy (also known as BLE) packets as they are transmitted.
Bluetooth sniffing is just one type of attack. You can read more about other types of Bluetooth attacks on [HTB Academy](https://academy.hackthebox.com/course/preview/brief-intro-to-hardware-attacks).
## Is the issue fixed?
Well, as mentioned earlier in the article, yes!
The issue is fixed on AirPods Firmware Update **6A326**, AirPods Firmware Update **6F8**, respectively Beats Firmware Update **6F8**.
## Instead of buh-bye
Always make sure that your devices are updated, because this is a simple way to protect yourself online.
| yowise |
1,906,875 | Kiran Infertility Centre | Kiran Infertility Centre, across Hyderabad, Chennai, and Bangalore, offers top-notch fertility... | 0 | 2024-06-30T19:44:54 | https://dev.to/kiraninfertilitycentre/kiran-infertility-centre-p0n | healthydebate | [Kiran Infertility Centre](https://www.kiranfertilityservices.com), across Hyderabad, Chennai, and Bangalore, offers top-notch fertility solutions with compassionate care. Renowned for its advanced facilities and expert specialists, it provides personalized treatments including IVF, IUI, and donor services, guiding individuals and couples on their fertility path with integrity and innovation. | kiraninfertilitycentre |
1,906,874 | Learning a new language/framework | I recently took up an internship role where I had to use a programming language and framework I was... | 0 | 2024-06-30T19:43:26 | https://dev.to/emmo00/learning-a-new-languageframework-1ef7 | beginners, codenewbie, learning, laravel | I recently took up an internship role where I had to use a programming language and framework I was unfamiliar with. The role involved using Laravel as the backend framework and I had to learn it. [Laravel](https://laravel.com) is a PHP framework for building web applications.
In this post, I'd like to outline some steps I took from being clueless to contributing to the codebase within 2 months.
Don't be alarmed if this timeframe seems too short, this is probably, as I'll talk about in a minute, because I have had a little experience in programming and backend development before then. If you don't fall into this category or you are a total beginner in your field, these tips should still be helpful to you as a developer. Hang tight, let's explore.
## Having a good programming background
A really smart guy once said, "People need to start learning programming instead of programming languages". I agree with this saying. Programming languages are just tools software developers use to implement their solutions. The underlying way of thinking and problem-solving are still required no matter the number of languages you know.
Knowing a lot of programming languages, although impressive, doesn't mean the person can apply these concepts to build cool and helpful software, especially in this age of rising use of AI tools and companions.
Understanding concepts like Variables, Data types, control flow, data structures, algorithms, programming paradigms like Object Oriented Programming (OOP) and Functional programming, Design patterns, Testing, concurrency and parallelism, and error handling is a good start. In the real world, or even now, we apply and implement most of these without consciously/intentionally thinking about them.
## Learn the language first
I've heard people say "I'm good with React, but I have issues with JavaScript". This shouldn't be the case. One should be very familiar with a language and how it works before learning any of its frameworks.
Some key questions to ask when learning a new language may include:
- What does the general **Syntax** look like?
- What are the provided/built-in **Data Types**?
- How do I declare **Variables**?
- What **Operators** are provided?
- How do I implement **Control Flow**?
- How do I declare a **Function**?
- Is **Memory Management** manual?
- How do I run my programs? Is the language **Compiled** or **Interpreted**?
- Is it a **Static or Dynamic Type** language?
- What **Paradigm** does it belong to?
Of course, You'll notice answers to these questions for any programming language you choose are not the same. You might also get a "Not Quite" or "It depends" answer from some of these in some languages, and understanding these "It depends" scenarios is key.
For example, answering the question "How do I run my Java program?": you first compile it to bytecode with the Java compiler, `javac MyCode.java`, then run the generated `.class` file with `java MyCode`. Despite the compilation step, Java is "Not Quite" a compiled language; it is considered both compiled and interpreted. It compiles to bytecode, and the bytecode is then executed by the Java Virtual Machine (JVM).
Most of the knowledge you gain in this stage is transferable to other programming languages.
## Build Projects
This is a very important step in the process. Building projects is a way for you to exercise the knowledge you've acquired. It also allows you to experiment with the language or framework, which gives you a more in-depth knowledge of the language.
It is okay to start small as no one expects you to build anything too complex as you start, you are only advised to take up challenges.
## Introducing Frameworks/Libraries
After getting a grasp of the language through the docs/tutorials and building projects, it's time to choose a framework.
Frameworks are pre-written blocks of code that programmers can incorporate into their projects to save time and effort. Frameworks and libraries make implementation of common tasks very easy because of the provided *abstraction*.
Choosing a framework to learn can be hard and confusing as you're new to the space, so here are some tips:
- Start with a Popular one: Learning a popular framework with a vast community and resources makes it easier to find help and learning materials.
- Experiment: Once you're comfortable with one framework, consider exploring others in different domains to broaden your skillset and knowledge.
- Focus on Core Programming Concepts: Regardless of the framework, a solid understanding of programming fundamentals is crucial. The ability to learn new frameworks becomes easier as your programming foundation strengthens.
## Start learning the framework
Before diving into the framework, ensure you have a good grasp of the programming language it's built on. This will make understanding the framework's syntax and concepts much smoother.
Finding the right resources to learn the chosen framework is not always easy, but I always recommend the Official framework documentation. Most frameworks have comprehensive documentation that covers everything from basic concepts to advanced features. Start here to get a solid understanding of the framework's core principles and functionalities.
Other sources for learning about a framework include Tutorials (e.g. YouTube Videos, Blogs, and Articles), Video Courses (like Udemy or Coursera), and Forums.
Do not waste time in this stage before moving to the next step so you don't get stuck in "Tutorial Hell".
## Notice the Key concepts
Don't try to learn everything at once. Prioritize understanding of the framework's core functionalities, architecture, and design patterns. This foundational knowledge will be essential as you build more complex projects.
## Build Projects
Yet again, Build projects. Don't wait until you're an expert to start building. Look for small project ideas that allow you to practice the framework's concepts. This will solidify your learning and boost your confidence.
Speaking of taking challenges, allow me to introduce the [HNG internship program](https://hng.tech/internship). HNG is a company with a mission — we work with the very best techies to help them enhance their skills through our HNG internship program and build their network. HNG Internship is a fast-paced boot camp for learning digital skills. It's focused on advanced learners and those with some pre-knowledge, and it gets people into shape for job offers. To register for this program, click [here](https://hng.tech/internship).
I hope these tips were helpful. Remember, growth takes time and dedication so set realistic goals, take breaks and you can also find a study buddy to help you in your journey. Stay consistent friends🙋♂️.
| emmo00 |
1,906,873 | Understanding Cloud Computing | In Amazon Web Services (AWS), "cloud" refers to a broad collection of services and infrastructure... | 0 | 2024-06-30T19:43:23 | https://dev.to/oladipuposamuelolayinka/understanding-cloud-computing-2lg8 | aws, computerscience, webdev, beginners | In **Amazon Web Services (AWS)**, "cloud" refers to a broad collection of services and infrastructure that allow users to build, deploy, and manage applications and services through the internet rather than on local servers or personal computers. AWS provides a wide range of cloud computing services, including:
1. **Compute Services**:
 - Amazon EC2 (Elastic Compute Cloud): Scalable virtual servers.
 - AWS Lambda: Serverless computing to run code without provisioning servers.
 - Amazon ECS (Elastic Container Service): Managed container service.
2. **Storage Services**:
 - Amazon S3 (Simple Storage Service): Scalable object storage.
 - Amazon EBS (Elastic Block Store): Block storage for use with EC2.
 - Amazon Glacier: Low-cost archival storage.
3. **Database Services**:
 - Amazon RDS (Relational Database Service): Managed relational databases.
 - Amazon DynamoDB: Managed NoSQL database service.
 - Amazon Redshift: Data warehousing service.
4. **Networking Services**:
 - Amazon VPC (Virtual Private Cloud): Provisioning of isolated networks.
 - AWS Direct Connect: Dedicated network connection to AWS.
 - Amazon Route 53: Scalable DNS web service.
5. **Developer Tools**:
 - AWS CodeCommit: Source control service.
 - AWS CodeBuild: Build and test code.
 - AWS CodeDeploy: Automate code deployment.
 - AWS CodePipeline: Continuous integration and delivery.
6. **Security and Identity**:
 - AWS IAM (Identity and Access Management): Manage user access and encryption keys.
 - AWS KMS (Key Management Service): Create and control encryption keys.
 - AWS Shield: DDoS protection.
7. **Management Tools**:
 - AWS CloudWatch: Monitoring and observability.
 - AWS CloudFormation: Infrastructure as code.
 - AWS Systems Manager: Operational data management.
8. **Machine Learning**:
 - Amazon SageMaker: Build, train, and deploy machine learning models.
 - AWS Deep Learning AMIs: Machine learning and AI.
9. **Analytics**:
 - Amazon Kinesis: Real-time data streaming.
 - Amazon EMR (Elastic MapReduce): Big data processing using Hadoop and Spark.
 - AWS Glue: ETL service.
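To give a taste of the "infrastructure as code" idea behind AWS CloudFormation (listed under Management Tools above): you describe resources in a template and AWS provisions them for you. A minimal hypothetical template declaring a single S3 bucket might look like this; the bucket name is a placeholder and would need to be globally unique:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example template declaring one S3 bucket.
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-bucket-12345   # placeholder; must be globally unique
```

Deploying the template (for example with `aws cloudformation deploy`) creates the bucket, and deleting the stack removes it, which keeps infrastructure changes versioned and repeatable.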
**Advantages and benefits of cloud computing**
1. **Go global in minutes**: You can deploy your applications around the world at the click of a button.
2. **Stop spending money running and maintaining data centers**: You can focus on building your applications instead of managing hardware.
3. **Benefit from massive economies of scale**: Volume discounts are passed on to you, which provides lower pay-as-you-go prices.
4. **Increase speed and agility**: The provided services allow you to innovate more quickly and deliver your applications faster, reducing time to market.
5. **Stop guessing capacity**: Your capacity is matched exactly to your demand.
6. **Trade capital expenses for variable expenses**: You pay for what you use instead of making huge upfront investments.
7. **High availability**: Highly available systems are designed to operate continuously without failure for a long time. These systems avoid loss of service by reducing or managing failures.
8. **Elasticity**: With elasticity, you don't have to plan ahead of time how much capacity you need. You can provision only what you need, and then grow and shrink based on demand.
9. **Durability**: Durability is all about long-term data protection. This means your data will remain intact without corruption.
**CapEx and OpEx**
**Capital Expenditures (CapEx)**
These are upfront purchases toward fixed assets, e.g. equipment, property, computers, and software.
**Operating Expenses (OpEx)**
These are funds used to run day-to-day operations, e.g. research and development, employee salaries, rent, and marketing.
**Cloud Computing Models**
1. **Infrastructure as a service (IaaS)**.
**Building Blocks**
Fundamental building blocks that can be rented.
**Web Hosting**
Monthly subscription to have a hosting company serve your website.
Examples: Amazon Web Services (AWS) EC2, Google Compute Engine, Microsoft Azure.
2. **Software as a service (SaaS).**
**Complete Application**
Using a complete application, on demand, that someone offers to users.
**Email Provider**
Your personal email that you access through a web browser is SaaS.
Examples: Google Workspace (formerly G Suite), Microsoft Office 365, Salesforce.
3. **Platform as a service (PaaS)**.
**Used by Developers**
Develop software using web-based tools without worrying about the underlying infrastructure.
**Storefront Website**
Tools provided to build a storefront application that runs on another company's server.
Examples: Google App Engine, Microsoft Azure App Services, Heroku.
**Cloud Deployment Models**
Cloud deployment models refer to the ways in which cloud computing resources are deployed and managed. There are three primary cloud deployment models:
1. **Public Cloud**:
 - Resources are owned and operated by a third-party cloud service provider.
 - Services are delivered over the internet.
 - Examples include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
 - Cost-effective and scalable.
 - Less control over security and data privacy compared to private clouds.
2. **Private Cloud**:
 - Resources are used exclusively by a single organization.
 - Can be hosted on-premises or by a third-party provider.
 - Offers greater control over security, compliance, and data privacy.
 - Typically more expensive than public clouds.
3. **Hybrid Cloud**:
 - Combines public and private clouds, allowing data and applications to be shared between them.
 - Offers greater flexibility and optimization of existing infrastructure.
 - Useful for balancing workloads and sensitive data handling.
**Leveraging the AWS Global Infrastructure**
**Region**: is a geographic area where AWS has multiple, isolated data centers called Availability Zones (AZs). Each Region is designed to be completely independent from the others to enhance fault tolerance and stability.
**Availability Zones (AZs)**: These consist of one or more physically separated data centers, each with redundant power, networking, and connectivity, housed in separate facilities.
Characteristics of AZs:
- Physically separated
- Connected through low-latency links
- Fault tolerant
- Allow for high availability
**Edge locations**: These edge locations are strategically positioned around the world to cache and deliver content to end-users with low latency, providing a faster and more reliable user experience.
**The AWS Management Console** :
The AWS Management Console allows you to access your AWS account and manage applications running in your account from a web browser.
The console makes it easy to search for and find services.
**The Root User**
The root user in AWS is the initial user account created when you first set up an AWS account. This user has unrestricted access to all AWS services and resources in the account.
The root user should be protected with MFA.
**AWS Command Line Interface (CLI)**
The AWS Command Line Interface (CLI) is a unified tool to manage AWS services from the command line. It allows you to interact with AWS services using commands in your terminal or command prompt, providing a powerful way to automate tasks and manage AWS resources programmatically.
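For example, after installing the CLI you typically run `aws configure` once; it stores your credentials and defaults in two small files like the following (the key values here are AWS's documented placeholders, not real credentials):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[default]
region = us-east-1
output = json
```

With these in place, commands such as `aws s3 ls` authenticate automatically using the `default` profile.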
**Programming Access**
Programming access in AWS refers to the ability to interact with AWS services using software development tools and libraries, rather than manually through the AWS Management Console. This allows developers to automate tasks, integrate AWS services into their applications, and manage resources programmatically. Here are the main components and tools for programming access in AWS:
1. **Command line interface**: The CLI allows you to manage AWS services from a terminal session on your laptop.
2. **Application code**: AWS services can be accessed from application code using SDKs and programmatic calls.
3. **Software Development Kits (SDKs)**:
AWS provides SDKs for various programming languages, including:
• Java
• Python
• JavaScript
• Ruby
• Go
• PHP
• C++
These SDKs include libraries and sample code to help you get started with integrating AWS services into your applications. They handle tasks such as authentication, request signing, and response parsing. | oladipuposamuelolayinka |
1,906,872 | SQL Database Migration in .NET with Entity Framework Core | Introduction Recently, I had an issue with migration in .NET using entity framework core.... | 0 | 2024-06-30T19:34:42 | https://dev.to/fredchuks/sql-database-migration-in-net-with-entity-framework-core-3117 | ## Introduction
Recently, I had an issue with migration in .NET using entity framework core. I kept getting error reports on the NuGet package manager console. This article would provide a detailed explanation of how I resolved this problem. I am Fredrick Chukwuma, a .NET Developer that is result and process-oriented.
## What Is Entity Framework?
Entity Framework is a modern object-relation mapper that lets you build a clean, portable, and high-level data access layer with .NET (C#) across a variety of databases. It makes programming easier and faster.
## Prerequisites
To follow along, ensure you have the following:
- Visual studio
- Familiarity with .NET and the Entity framework
- Installed .NET locally
- A SQL Server or Azure SQL database
## Step 1: Create a new ASP.NET Core Web API project
Open Visual Studio, click on "New Project", select "ASP.NET Core Web API", and configure the project.

## Step 2: Create a Model
Create a new folder named _"Models"_ in the root directory. Then create a new c# class named _"Student.cs"_ and add properties as done below:
```
public class Student
{
public Guid Id { get; set; }
public string? FirstName { get; set; }
public string? LastName { get; set; }
public string? Department { get; set; }
}
```
**Note**: The question mark at the end of each property data type is to make it nullable.
## Step 3: Include the connection string in the _appsettings.json_ file
Open the _appsettings.json_ file, then add the lines below. Replace the server name with the name of your own server.
```
"ConnectionStrings": {
  "DefaultConnection": "Server=Your-ServerName\\SQLEXPRESS;Database=StudentApp;Trusted_Connection=True;TrustServerCertificate=True"
}
```
## Step 4: Create and register your App DbContext
Create a folder named _Data_ in your root directory, then create an _ApplicationDbContext_ class in the Data folder and inherit from the _DbContext_ class (DbContext is a built-in Entity Framework Core class). Add a constructor and a _DbSet_ property in this class as shown below:
```
public class ApplicationDbContext : DbContext
{
public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options) : base(options)
{
}
public DbSet<Student> Students { get; set; }
}
```
Register the context as a service in the _Program.cs_ file as shown below:
```
builder.Services.AddDbContext<ApplicationDbContext>(options =>
options.UseSqlServer(builder.Configuration.GetConnectionString("DefaultConnection")));
```
## Step 5: Install Entity Framework Core packages
On the menu bar, click on _Tools_, select _NuGet Package Manager_, then select _Manage NuGet Packages for Solution_. Click _Browse_ and install the following packages:
- Microsoft.EntityFrameworkCore
- Microsoft.EntityFrameworkCore.SqlServer
- Microsoft.EntityFrameworkCore.Tools

**Note:** Ensure you install the version of each package that relates to the .NET Framework installed on your system.
## Step 6: Add a Migration
On the menu bar, click on _Tools_, select _NuGet Package Manager_, then select _Package Manager Console_.
In the console, type `add-migration AddStudentTable` (AddStudentTable is the name of the migration; you can use any name of your choice), then press Enter.

When the build is done, type `update-database` in the Package Manager Console, then press Enter.

Once you see this, you're done! Congrats! You understand migrations in .NET using Entity Framework core!
## Conclusion
As easy as these steps seem, I didn't find it funny a while ago. I encountered a serious issue because I couldn't get the accurate name of my local server, and hence my connection string was faulty. I also had little idea of the AppDbContext class I was supposed to create to enable a smooth migration. I kept seeing error messages on my Package Manager Console, but I figured it out in the end.
I'm loving the whole programming experience as I've seen myself transform from a "hello world" novice to where I am now.
I am also very excited to be a part of the **HNG Internship 11**. I've heard so much about how this internship program pushes individuals to explore their untapped potentials. I opted in for this internship when I came across the opportunity because I want to ace my programming skills, become a better developer, meet other developers and great minds. I'm really anticipating what it holds.
I will also advise individuals that wish to take their tech journey to the next level to apply for free via https://hng.tech/internship
There's also a premium package that gives you more opportunities. You can apply here https://hng.tech/premium
HNG also offers the best tech talents for individuals looking to hire for their firm https://hng.tech/hire
| fredchuks | |
1,906,870 | HTML Links and Navigation | HTML Links and Navigation: A Comprehensive Guide HTML (HyperText Markup Language) is the... | 0 | 2024-06-30T19:30:11 | https://dev.to/ridoy_hasan/html-links-and-navigation-529k | webdev, beginners, learning, html | ### HTML Links and Navigation: A Comprehensive Guide
HTML (HyperText Markup Language) is the standard language for creating web pages. One of the fundamental features of HTML is the ability to create links and navigation, which allows users to move between different pages and sections of a website. In this article, we'll explore how to create and use HTML links and navigation, along with code examples to illustrate each concept.
#### 1. Basic HTML Links
HTML links are created using the `<a>` (anchor) tag. The most important attribute of the `<a>` tag is the `href` attribute, which indicates the destination of the link.
**Example: Basic HTML Link**
```html
<!DOCTYPE html>
<html>
<head>
<title>Basic HTML Link</title>
</head>
<body>
<p>Click the link to visit ridoyweb:</p>
<a href="https://www.ridoyweb.com">Visit ridoyweb</a>
</body>
</html>
```
In this example, the text "Visit ridoyweb" is clickable and directs the user to ridoyweb's website when clicked.
#### 2. Internal Links
Internal links are used to navigate within the same webpage. This is useful for long pages with different sections.
**Example: Internal Link**
```html
<!DOCTYPE html>
<html>
<head>
<title>Internal Link Example</title>
</head>
<body>
<h1>Table of Contents</h1>
<ul>
<li><a href="#section1">Section 1</a></li>
<li><a href="#section2">Section 2</a></li>
<li><a href="#section3">Section 3</a></li>
</ul>
<h2 id="section1">Section 1</h2>
<p>This is Section 1.</p>
<h2 id="section2">Section 2</h2>
<p>This is Section 2.</p>
<h2 id="section3">Section 3</h2>
<p>This is Section 3.</p>
</body>
</html>
```
In this example, clicking on "Section 1" in the table of contents will scroll the page to the `<h2>` element with the `id="section1"`.
#### 3. External Links
External links are used to navigate to a different website. These links open in the same tab by default but can be configured to open in a new tab.
**Example: External Link**
```html
<!DOCTYPE html>
<html>
<head>
<title>External Link Example</title>
</head>
<body>
<p>Click the link to visit Google:</p>
<a href="https://www.google.com" target="_blank">Visit Google</a>
</body>
</html>
```
In this example, the `target="_blank"` attribute ensures that the link opens in a new tab.
#### 4. Navigation Menus
A navigation menu is a list of links that helps users navigate through the website. Navigation menus are typically implemented using `<nav>`, `<ul>`, `<li>`, and `<a>` elements.
**Example: Navigation Menu**
```html
<!DOCTYPE html>
<html>
<head>
<title>Navigation Menu Example</title>
<style>
nav ul {
list-style-type: none;
padding: 0;
}
nav ul li {
display: inline;
margin-right: 10px;
}
</style>
</head>
<body>
<nav>
<ul>
<li><a href="index.html">Home</a></li>
<li><a href="about.html">About</a></li>
<li><a href="services.html">Services</a></li>
<li><a href="contact.html">Contact</a></li>
</ul>
</nav>
</body>
</html>
```
In this example, the `<nav>` element contains an unordered list (`<ul>`) with list items (`<li>`). Each list item contains a link (`<a>`) to different pages of the website.
#### 5. Email Links
Email links allow users to send an email directly from the webpage. The `href` attribute for email links starts with `mailto:` followed by the email address.
**Example: Email Link**
```html
<!DOCTYPE html>
<html>
<head>
<title>Email Link Example</title>
</head>
<body>
<p>Click the link to send an email:</p>
<a href="mailto:example@example.com">Send Email</a>
</body>
</html>
```
In this example, clicking the "Send Email" link will open the user's default email client with the "To" field filled in with the provided email address.
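A `mailto:` URL can also pre-fill the subject and body via query parameters, as long as the values are percent-encoded. Here is a small JavaScript sketch that builds such a link (the helper name is illustrative):

```javascript
// Build a mailto: URL with an optional pre-filled subject and body.
// encodeURIComponent is needed so spaces and newlines don't break the link.
function mailtoLink(address, subject, body) {
  const params = [];
  if (subject) params.push('subject=' + encodeURIComponent(subject));
  if (body) params.push('body=' + encodeURIComponent(body));
  return 'mailto:' + address + (params.length ? '?' + params.join('&') : '');
}

console.log(mailtoLink('example@example.com', 'Hello there'));
// prints: mailto:example@example.com?subject=Hello%20there
```

The resulting string can be used directly as the `href` of an `<a>` tag.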
#### 6. Telephone Links
Telephone links allow users to make a phone call directly from the webpage, which is particularly useful for mobile users. The `href` attribute for telephone links starts with `tel:` followed by the phone number.
**Example: Telephone Link**
```html
<!DOCTYPE html>
<html>
<head>
<title>Telephone Link Example</title>
</head>
<body>
<p>Click the link to make a call:</p>
<a href="tel:+1234567890">Call Us</a>
</body>
</html>
```
In this example, clicking the "Call Us" link will initiate a phone call to the provided phone number.
#### 7. Download Links
Download links allow users to download a file directly from the webpage. The `href` attribute points to the file to be downloaded, and the `download` attribute can be used to suggest a filename.
**Example: Download Link**
```html
<!DOCTYPE html>
<html>
<head>
<title>Download Link Example</title>
</head>
<body>
<p>Click the link to download the file:</p>
<a href="files/example.pdf" download="ExampleFile.pdf">Download PDF</a>
</body>
</html>
```
In this example, clicking the "Download PDF" link will download the `example.pdf` file and save it as `ExampleFile.pdf` on the user's device.
### Conclusion
HTML links and navigation are essential for creating a user-friendly and interactive website. Understanding how to implement various types of links, such as internal, external, email, telephone, and download links, as well as creating navigation menus, will help you build a well-structured and navigable website.
Connect with me on LinkedIn for more-
https://www.linkedin.com/in/ridoy-hasan7 | ridoy_hasan |
1,906,869 | Access control models in cryptography | Access control models are used to allow access only to authentic users or resources. These allow... | 0 | 2024-06-30T19:29:56 | https://dev.to/himanshu_raj55/access-control-models-in-cryptography-dme | | Access control models are used to allow access only to authentic users or resources. They allow organizations to create protocols defining who can use a particular resource.
There are various access control models, some of which are described below:
1. Discretionary access control (DAC): The access rules are created by the resource owner. It is considered less secure because the owner can change or misconfigure the access protocols at their discretion.
2. Mandatory access control (MAC): The access control is defined by a central authority based on predefined security rules, and access is based on the classification of users and resources. It is used by military and government organizations and is more secure due to the strict rules enforced by the central authority.
3. Rule-based access control: Access is granted according to rules defined by the system administrator. It is similar to mandatory access control but more versatile, and it can be combined with other models.
4. Role-based access control (RBAC): Access is granted based on roles, which makes it a good fit for corporate organizations. It is simple and straightforward, and it simplifies administration by using roles as the basis for access.
5. Attribute-based access control (ABAC): Access is granted based on a variety of attributes, such as user attributes, resource attributes, and environmental factors. It is a very flexible model.
6. Context-based access control: Access is based on context, such as user type, location, and timezone. It is used in mobile and cloud communication.
7. Identity-based access control: Access is based on user identity. It can be used alongside DAC or other access control models.
8. Time-based access control: Access is based on time intervals. It can be used by organizations with strict shift-based policies.
9. Risk-adaptive access control: Access is based on risk factors. It is used by banks to decide whether to grant access after considering multiple risk factors. | himanshu_raj55 |
1,906,867 | 📚 How to Use a Template Repository on GitHub | A template repository on GitHub allows you to create new repositories with the same structure and... | 0 | 2024-06-30T19:23:41 | https://dev.to/marmariadev/how-to-use-a-template-repository-on-github-4o4j | webdev, programming, git, github | A template repository on GitHub allows you to create new repositories with the same structure and content as the template repository. It is useful for quickly starting projects with specific configurations without duplicating the commit history.
## 🎯 Objective
Create a template repository on GitHub to easily reuse configurations and code bases in new projects.
## 🛠️ Steps to Use a Template Repository
1. **Create a Template Repository**
- On GitHub, go to the repository you want to turn into a template.
- Go to the "Settings" tab.
- In the "Repository template" section, check the "Template repository" box.
2. **Create a New Repository from the Template**
- Go to the main page of the template repository.
- Click the "Use this template" button.
- Fill in the details for the new repository (name, description, visibility).
- Click "Create repository from template".
## 📌 Example
Suppose you have a repository with an initial setup for a basic backend and frontend that you want to use as a template:
1. **Create a Template Repository**
- Go to your repository: `https://github.com/your-username/backend-frontend-template`.
- Click on "Settings".
- Check the "Template repository" option.
2. **Create a New Repository from the Template**
- Navigate to `https://github.com/your-username/backend-frontend-template`.
- Click on "Use this template".
- Fill in the new repository details:
- Name: `new-client-project`
- Description: `Initial setup for the client's project`
- Click on "Create repository from template".
Using a template repository on GitHub is an efficient way to start new projects with a predefined setup, saving time and effort by avoiding code and configuration duplication. This practice allows you to keep the original code intact and provides a solid foundation to customize each project according to the specific needs of the client.
I hope this guide is helpful. Thank you for your comments and questions! | marmariadev |
1,906,866 | 📚 How to Use a Template Repository on GitHub | 🚀 Introduction A template repository on GitHub allows you to create the... | 0 | 2024-06-30T19:21:11 | https://dev.to/marmariadev/como-usar-un-template-repository-en-github-4h75 | webdev, programming, github, git | #### 🚀 Introduction
A template repository on GitHub allows you to create new repositories with the same structure and content as the template repository. It is useful for quickly starting projects with specific configurations without duplicating the commit history.
#### 🎯 Objective
Create a template repository on GitHub to easily reuse configurations and code bases in new projects.
#### 🛠️ Steps to Use a Template Repository
1. **Create a Template Repository**
- On GitHub, go to the repository you want to turn into a template.
- Go to the "Settings" tab.
- In the "Repository template" section, check the "Template repository" box.
2. **Create a New Repository from the Template**
- Go to the main page of the template repository.
- Click the "Use this template" button.
- Fill in the details for the new repository (name, description, visibility).
- Click "Create repository from template".
#### 📌 Example
Suppose you have a repository with an initial setup for a basic backend and frontend that you want to use as a template:
1. **Create a Template Repository**
- Go to your repository: `https://github.com/tu-usuario/backend-frontend-template`.
- Click on "Settings".
- Check the "Template repository" option.
2. **Create a New Repository from the Template**
- Navigate to `https://github.com/tu-usuario/backend-frontend-template`.
- Click on "Use this template".
- Fill in the new repository details:
- Name: `nuevo-proyecto-cliente`
- Description: `Configuración inicial para el proyecto del cliente`
- Click on "Create repository from template".
#### 📝 Conclusion
Using a template repository on GitHub is an efficient way to start new projects with a predefined setup, saving time and effort by avoiding code and configuration duplication. This practice lets you keep the original code intact and provides a solid foundation to customize each project according to the specific needs of the client.
I hope this guide is helpful. Thank you for your comments and questions! | marmariadev |
1,906,848 | Differentiating between Next js & React js | Frontend is a crucial aspect of website development and there seem to be two major rising ... | 0 | 2024-06-30T19:18:44 | https://dev.to/sirwes/differentiating-between-next-js-react-js-52a9 |
Frontend development is a crucial aspect of building websites, and two major technologies stand out: React and Next.js, a library and a framework respectively, both built on JavaScript. I will be discussing their differences, strengths, and weaknesses. I'll also share my expectations for the HNG11 Internship and my thoughts on working with React.
> **What is React?**
React is a JavaScript library developed by Facebook for building user interfaces. It allows developers to create reusable UI (user interface) components and manage the state of their applications efficiently.
Pros of React
* **Declarative**: React makes it easy to create interactive UIs by designing simple views for each state in your application. It allows you to efficiently update and render the right components when your data changes.
* **Component-Based**: React uses a component-based architecture where UIs are composed of small, isolated pieces of code called components. This makes it easier to build and maintain complex user interfaces.
* **Virtual DOM**: React uses a virtual DOM to improve performance. Instead of updating the DOM directly, React creates a virtual representation of it in memory and updates only the necessary parts.
* **Ecosystem and Community**: Large Community: React has a vast and active community, providing a wealth of resources, libraries, and tools
**Cons of React**
* **Complex configurations and setup**: Setting up a webpack configuration for a React project can be complex and time-consuming, especially for beginners. It involves managing multiple loaders, plugins, and configurations to handle different types of files and optimizations.
* **JSX syntax**: JSX combines HTML-like markup with JavaScript, which can take some getting used to for developers who know HTML and JavaScript as separate languages.
* **Frequent Updates**: The React ecosystem evolves rapidly, with frequent updates and new features. Keeping up with these changes can be challenging and may require constant learning and refactoring of existing code; a React project can also break or produce bugs when the versions of its installed packages conflict.
**What is Next.js?**
Next.js is a React framework developed by Vercel. It extends React’s capabilities by providing a robust solution for building production-ready applications with server-side rendering (SSR), static site generation (SSG), and other powerful features.
**Pros of Next.js**
* **Server-Side Rendering**: SSR improves performance by rendering pages on the server, leading to faster initial load times and better SEO (search engine optimization).
* **Static Site Generation**: SSG generates static HTML at build time, providing optimal performance and scalability.
* **Built-in Routing**: Next.js provides a file-based routing system out of the box, simplifying the creation of dynamic routes.
**Cons of Next.js**
* **Complexity**: The framework introduces additional complexity compared to using plain React. Developers need to understand concepts like SSR, static generation, and API routes, which adds to the number of concepts to learn compared to using React on its own.
* **SSR and SSG**: Understanding and effectively utilizing SSR and SSG can require additional learning, especially for developers new to these concepts.
**My Expectations for the HNG11 Internship**
As I embark on the HNG11 Internship, I am very enthusiastic about what I will be able to accomplish during it, such as building highly scalable, user-friendly, and well-optimized websites. I also look forward to meeting highly capable programmers and developers from across the globe and to challenging myself both physically and mentally.
**Working with React at HNG**
React is a powerful and widely-used library for building user interfaces. I am thrilled to deepen my understanding of React during the internship.
I am looking forward to the great possibilities that React opens up and to putting its immense capabilities to use in web development.
> **Conclusion**
Both React and Next.js bring unique advantages to the table. React’s flexibility and extensive ecosystem make it a popular choice for building dynamic user interfaces, while Next.js’s production-ready features, like server-side rendering and static site generation, offer enhanced performance and developer experience.
As I embark on this journey with the HNG Internship, I am excited to leverage these technologies, enhance my skills, and contribute to innovative solutions. The future of frontend development is bright, and I am eager to be a part of it.
Visit any of the links below to learn more about the HNG11 Internship,
Internship Website: https://hng.tech/internship
Check out HNG Hire https://hng.tech/hire
| sirwes | |
1,906,846 | How can I upload images through the API? | I have written an Obsidian plugin that can publish notes from Obsidian as articles on DEV.to, which... | 0 | 2024-06-30T19:14:21 | https://dev.to/stroiman/how-can-i-upload-images-through-the-api-bp6 | devto, question | I have written an [Obsidian](https://obsidian.md/) plugin that can publish notes from Obsidian as articles on DEV.to, which also deals with some Obsidian specific stuff, e.g. converting Obsidian medialinks to markdown links, separating title from content, and convert MathJax syntax to proper `{% katex %}` expressions; and it can handle subsequent updates, by storing the article id as metadata after the article is created.
But what the plugin really lacks right now is support for uploading images. [The API documentation](https://developers.forem.com/api/v1) doesn't document any option for image uploads.
Is that possible?
Btw, [the plugin can be found here](https://github.com/stroiman/obsidian-dev-publish). It's waiting approval before it's available in the community plugin list, but can be installed through [BRAT](https://github.com/TfTHacker/obsidian42-brat). | stroiman |
1,906,845 | Traceability in Software Testing | Have you ever played a game of connect-the-dots? That’s essentially what TRACEABILITY in software... | 0 | 2024-06-30T19:12:57 | https://dev.to/lanr3waju/traceability-in-software-testing-1i8i | webdev, softwaretesting, learning, productivity | Have you ever played a game of connect-the-dots? That’s essentially what TRACEABILITY in software development is all about! It’s the art of ensuring every requirement is linked to its corresponding test cases, and it’s as crucial as your morning coffee ☕️.
But why is traceability such a big deal? Let’s break it down:
🔍**Why Traceability Matters:**
- **Clear Connections:** Imagine a roadmap where every destination (requirement) is directly connected to checkpoints (test cases). This clarity ensures that nothing is overlooked!
- **Accountability:** Traceability holds everyone accountable. You can easily track if all requirements are tested and verify that every feature works as intended.
- **Risk Management:** Early detection of gaps or issues becomes easier. A requirement that isn’t linked to a test case raises a red flag 🚩.
- **Streamlined Audits:** During audits, having a clear trail between requirements and test cases makes the process smooth and stress-free.
- **Enhanced Quality:** With every requirement traced to test cases, the end product’s quality is significantly improved, leading to happier users and stakeholders 🎉.
**🎯 How to Achieve Traceability:**
- **Use the Right Tools:** Leverage tools like Zephyr Scale, JIRA, or other test management tools that support traceability. They make linking requirements and test cases a breeze.
- **Maintain Detailed Documentation:** Document every requirement and corresponding test case meticulously. This documentation acts as your traceability matrix.
- **Regular Reviews:** Schedule regular reviews to ensure all requirements have corresponding test cases. Catching issues early saves time and resources down the line.
- **Collaborative Effort:** Foster collaboration between developers, testers, and business analysts. Traceability is a team sport! 🤝
- **Automation is Your Friend:** Where possible, automate the traceability process. Automation tools can maintain links between requirements and test cases, making your job easier.
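To make this concrete, a traceability matrix can be as simple as a map from requirements to their linked test cases; any requirement mapped to an empty list is exactly the red flag described above. A minimal JavaScript sketch (all names are illustrative):

```javascript
// Minimal traceability matrix: requirement IDs mapped to the test cases
// that cover them. A requirement with no linked test case is a gap.
const matrix = {
  'REQ-1': ['TC-101', 'TC-102'],
  'REQ-2': ['TC-201'],
  'REQ-3': [], // no coverage -- red flag!
};

function uncoveredRequirements(m) {
  return Object.keys(m).filter((req) => m[req].length === 0);
}

console.log(uncoveredRequirements(matrix)); // prints: [ 'REQ-3' ]
```

Tools like Zephyr Scale or JIRA maintain this mapping for you, but the underlying idea is just this lookup.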
**💡 In a Nutshell:**
_Traceability isn’t just a buzzword;_ it’s the backbone of robust and reliable software development. By linking test cases to requirements, you ensure clarity, accountability, and quality every step of the way.
So, next time you embark on a project, think of it as your personal connect-the-dots adventure. Happy tracing! 🌟
If you have questions, please leave them in the comments.
| lanr3waju |
1,906,844 | My goal for the next two years. | I have four goals I wish to achieve for the next two years, and they are: 1. Be a better product... | 0 | 2024-06-30T19:11:51 | https://dev.to/victor_88/my-goal-for-the-next-two-years-52de | I have four goals I wish to achieve for the next two years, and they are:
**1. Be a better product designer and frontend developer**: Making quality user interfaces that guarantee users a satisfactory experience is my goal for the next two years, and I plan on achieving this through the HNG internship- https://hng.tech/internship.
**2. Be an HNG finalist**: My goal is to complete this internship successfully, as it will be beneficial to my career.
**3. Get a remote job**
**4. Constantly improve**: I have always loved learning and will continue to learn to advance my career.
Here is a diagram illustrating my goals- https://www.figma.com/design/B4Qi4X48RYqxRkByfrRR99/Untitled?node-id=0-1&t=FVMKwp6i4mLEJElB-0 | victor_88 | |
1,906,787 | FluentValidation inline validate | Learn FluentValidation inline validation One of the most important tasks in software... | 22,765 | 2024-06-30T19:08:46 | https://dev.to/karenpayneoregon/fluentvalidation-inline-validate-1ajh | csharp, dotnetcore, coding | ## Learn FluentValidation inline validation
One of the most important tasks in software development is ensuring that saved data meets the requirements of the business solution. Learn how to use the [FluentValidation](https://docs.fluentvalidation.net/en/latest/) library with inline validation for basic solutions; for complex validation, see [the following article](https://dev.to/karenpayneoregon/fluentvalidation-tips-c-3olf), which demonstrates validation in both desktop and web projects.
> **Note**
> There is no formal documentation for inline validation. See [source code](https://github.com/FluentValidation/FluentValidation/blob/main/src/FluentValidation/InlineValidator.cs).
Secondary items, using a json file to dynamically read value for validating class/model properties and using custom extensions methods for extending FluentValidation.

For those who are used to conventional validation, inline validation does not change this, as shown below.
{% cta https://github.com/karenpayneoregon/fluent-validation-tips/tree/master/InlineValidationSample1 %} Sample project {% endcta %}
```csharp
var validator = Person.Validator.Validate(person);
```
## Why use Windows Forms?
First off, the code presented will work in other project types: cross-platform, console, and web. The reason for using Windows Forms is that it is easy to learn from and work with, except on a Mac.
## Background
The task in this case is to validate a Person class: ensure that the Title property is valid against a list, FirstName and LastName are not empty, Gender is valid against a list, and BirthDate falls within a specific range.
When a developer writes code against business specifications, there would seem to be no reason to validate Title and Gender against a list, but in the real world the human factor means code is prone to mistakes, or developers may not read and adhere to specifications, which is where validation is needed.
## Models
```csharp
public enum Gender
{
Male,
Female,
NotSet
}
```
- **Validator** is for inline validation
- The use of an interface, with one model it is not helpful but when dealing with more models and countless projects it becomes more important.
- Change notification is not always needed yet for some this may be new.
```csharp
public class Person : IHuman, INotifyPropertyChanged
{
#region Properties
private int _id;
private string _firstName;
private string _lastName;
private string _title;
private DateOnly _birthDate;
private Gender _gender;
public int Id
{
get => _id;
set
{
if (value == _id) return;
_id = value;
OnPropertyChanged();
}
}
public string FirstName
{
get => _firstName;
set
{
if (value == _firstName) return;
_firstName = value;
OnPropertyChanged();
}
}
public string LastName
{
get => _lastName;
set
{
if (value == _lastName) return;
_lastName = value;
OnPropertyChanged();
}
}
public string Title
{
get => _title;
set
{
if (value == _title) return;
_title = value;
OnPropertyChanged();
}
}
public DateOnly BirthDate
{
get => _birthDate;
set
{
if (value.Equals(_birthDate)) return;
_birthDate = value;
OnPropertyChanged();
}
}
public Gender Gender
{
get => _gender;
set
{
if (value == _gender) return;
_gender = value;
OnPropertyChanged();
}
}
#endregion
public static readonly InlineValidator<Person> Validator = new()
{
// validate against the Titles from Validation.json using ValidatorExtensions.In extension method
v => v.RuleFor(x => x.Title)
.In(Titles),
v => v.RuleFor(x => x.FirstName)
.NotEmpty()
.WithMessage("{PropertyName} is required"),
v => v.RuleFor(x => x.LastName)
.NotEmpty()
.WithMessage("{PropertyName} is required"),
// validate against the genders from Validation.json
v => v.RuleFor(x => x.Gender)
.NotNull()
.In(GenderTypes),
// validate against the BirthDateRule extension method
v => v.RuleFor(x => x.BirthDate)
.BirthDateRule()
};
public event PropertyChangedEventHandler PropertyChanged;
protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null)
{
PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
}
```
## Inline validation
Normally validation is written in a class separate from the business class/model for separation of concerns, and in some cases the validation code is placed in a class library project for reuse.
The only reason for using inline validation is for smaller projects and for learning purposes.
Usually this is how full validation is configured. The code below was used in this [article](https://dev.to/karenpayneoregon/fluentvalidation-tips-c-3olf) and is intended for normal-style validation, while inline validation is quicker and better suited to smaller projects.
```csharp
public class Person
{
public string UserName { get; set; }
public string EmailAddress { get; set; }
public string Password { get; set; }
public string PasswordConfirmation { get; set; }
public string PhoneNumber { get; set; }
}
```
```csharp
public class UserNameValidator : AbstractValidator<Person>
{
public UserNameValidator()
{
RuleFor(person => person.UserName)
.NotEmpty()
.MinimumLength(3);
}
}
public class PasswordValidator : AbstractValidator<Person>
{
public PasswordValidator()
{
RuleFor(person => person.Password.Length)
.GreaterThan(7);
RuleFor(person => person.Password)
.Equal(p => p.PasswordConfirmation)
.WithState(x => StatusCodes.PasswordsMisMatch);
}
}
public class EmailAddressValidator : AbstractValidator<Person>
{
public EmailAddressValidator()
{
RuleFor(person => person.EmailAddress)
.Must((person, b) => new EmailAddressAttribute().IsValid(person.EmailAddress));
}
}
public class PhoneNumberValidator : AbstractValidator<Person>
{
public PhoneNumberValidator()
{
RuleFor(person => person.PhoneNumber)
.MatchPhoneNumber();
}
}
public class PersonValidator : AbstractValidator<Person>
{
public PersonValidator()
{
Include(new UserNameValidator());
Include(new PasswordValidator());
Include(new EmailAddressValidator());
Include(new PhoneNumberValidator());
}
}
```
## Implement in a project
Create a model/class
```csharp
public class Customers
{
public int CustomerIdentifier { get; set; }
public string CompanyName { get; set; }
public string Street { get; set; }
public string City { get; set; }
public string PostalCode { get; set; }
public string Phone { get; set; }
}
```
Add the InlineValidator as follows.
```csharp
public class Customers
{
public int CustomerIdentifier { get; set; }
public string CompanyName { get; set; }
public string Street { get; set; }
public string City { get; set; }
public string PostalCode { get; set; }
public static readonly InlineValidator<Customers> Validator = new()
{
v => v.RuleFor(x => x.CompanyName)
.NotEmpty()
.WithMessage("{PropertyName} is required"),
v => v.RuleFor(x => x.Street)
.NotEmpty()
.WithMessage("{PropertyName} is required"),
v => v.RuleFor(x => x.City)
.NotEmpty()
.WithMessage("{PropertyName} is required"),
v => v.RuleFor(x => x.PostalCode)
.NotEmpty()
.WithMessage("{PropertyName} is required"),
};
}
```
### Validate the model.
With an instance, in this case Customers, call the validator.
```csharp
Customers customers = new Customers();
var customerValidator = Customers.Validator.Validate(customers);
if (customerValidator.IsValid)
{
// good to go
}
else
{
// validation failed
}
```
In an ASP.NET Core project (see the example in the source repository), if validation fails, return the page.
```csharp
public async Task<IActionResult> OnPostAsync()
{
    if (!ModelState.IsValid)
    {
        return Page();
    }

    // validation passed: save the data, then redirect
    return RedirectToPage("./Index");
}
```
In Windows Forms, use the error provider.

### Blazor
See the following [documentation](https://docs.fluentvalidation.net/en/latest/blazor.html) for how to validate.
| karenpayneoregon |
1,906,843 | How do the modern build tools work? (vite, webpack) | What is a build tool? In simple words, a build tool is software that automates the... | 0 | 2024-06-30T19:07:26 | https://dev.to/aman2221/how-do-the-modern-build-tools-work-vite-webpack-37e0 | webdev, javascript, programming, beginners | ## What is a build tool?
1. In simple words, a build tool is software that automates the process of converting the code into the final product.
2. This process can include the Compilation of code, Minification, Building, Code splitting, Transpilation, Testing, Assets optimization, and Providing a development environment.
Let's understand the process one by one :
- **Compilation of the code**: In this process, the build tool compiles the code into browser-understandable languages eg. Typescript to Javascript, SASS to CSS.
- **Minification**: This process includes minifying the files(removing the unnecessary code, comments, removing the white spaces, etc).
- **Code splitting**: Breaking the code into smaller chunks that are loaded on demand.
- **Transpilation**: Converting modern ES6/ES7 module code into the older ES5 version for older browsers.
- **Testing**: Running tests to check whether code changes introduce bugs.
- **Assets optimization**: Compressing images and other files to reduce their size for faster loading.
- **Development Server**: Build tools provide a local server with live reloading for development.
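To give a feel for what minification does, here is a deliberately tiny JavaScript sketch. Real minifiers such as Terser or esbuild parse the code and do much more (renaming variables, removing dead code); this toy version only drops line comments and collapses whitespace:

```javascript
// Toy "minifier": strips // line comments and collapses whitespace.
// Real minifiers work on the parsed syntax tree -- this only shows the idea.
function toyMinify(source) {
  return source
    .replace(/\/\/[^\n]*/g, '') // drop // line comments
    .replace(/\s+/g, ' ')       // collapse runs of whitespace
    .trim();
}

const input = `
  // add two numbers
  function add(a, b) {
    return a + b;
  }
`;

console.log(toyMinify(input));
// prints: function add(a, b) { return a + b; }
```

Even this naive version makes the file smaller, which is why minification is a standard build step.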
## How do build tools work?
- **Configuration**: Build tools are configured using config files,
e.g. webpack.config.js or vite.config.js. These config files define the tasks to be performed on the source code.
- **File processing**: This process includes transpiling files, such as TypeScript to JavaScript and SCSS to CSS.
- **Output Generation**: The final processed and optimized files are generated and stored in a defined output folder, e.g. `build` or `dist`.
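Putting the pieces above together, a webpack-style config file might look roughly like the object below. Treat it as a sketch: the loader names (`ts-loader`, `sass-loader`, etc.) are the commonly used ones, but they must be installed separately and the details depend on your project.

```javascript
// A minimal webpack-style configuration, shown as a plain object.
// In a real webpack.config.js you would end with `module.exports = config;`.
const config = {
  entry: './src/index.js',          // where bundling starts
  output: {
    path: '/absolute/path/to/dist', // the "defined folder" for build output
    filename: '[name].[contenthash].js', // cache-busting file names
  },
  module: {
    rules: [
      { test: /\.ts$/, use: 'ts-loader' }, // TypeScript -> JavaScript
      { test: /\.scss$/, use: ['style-loader', 'css-loader', 'sass-loader'] }, // SCSS -> CSS
    ],
  },
  optimization: {
    splitChunks: { chunks: 'all' }, // enable code splitting
  },
};

console.log(config.entry); // prints: ./src/index.js
```

Each section maps back to the steps described earlier: `module.rules` is file processing, `optimization.splitChunks` is code splitting, and `output` is output generation.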
Thank you for reading.
| aman2221 |
1,906,842 | Is Social Media Dead for Digital Marketing ? | "Unlocking the Power of Social Media in Digital Marketing: A Trending Guide Featuring Neonage... | 0 | 2024-06-30T19:06:41 | https://dev.to/ashik_k_ec1d7b46a45890b23/is-social-media-dead-for-digital-marketing--543 | socialmedias, ads |
"Unlocking the Power of Social Media in Digital Marketing: A Trending Guide Featuring [Neonage Solutions](https://neonage.co.in)"

In today's fast-paced digital landscape, staying ahead of the curve is not just a goal; it's a necessity. As businesses navigate through the ever-evolving realm of digital marketing, one question continues to echo through boardrooms and marketing departments alike: Is social media still a viable tool?
Let's debunk the myth once and for all. Social media isn't just alive; it's thriving. With its unparalleled reach and ability to connect brands with billions of users worldwide, social platforms have become the cornerstone of modern digital marketing strategies.
**Embracing the Evolution**
Gone are the days when simply having a presence on Facebook or Twitter sufficed. Today, success lies in embracing the dynamic evolution of social media. From Instagram's visual storytelling prowess to TikTok's viral potential and LinkedIn's professional networking capabilities, each platform offers unique avenues for brands to engage and convert their audiences.
**Leveraging Trends**
To truly harness the power of social media, it's crucial to ride the wave of trending keywords and topics. Whether it's leveraging hashtags that are buzzing or tapping into viral challenges, staying relevant is key to maintaining visibility in an increasingly crowded digital space.
**Data-Driven Strategies**
In the age of analytics, understanding your audience is more important than ever. Social media platforms provide a wealth of data insights that can inform targeted marketing strategies. By analyzing user behavior, demographics, and engagement metrics, brands can refine their approach and deliver content that resonates on a deeper level.
**The Rise of Influencer Marketing**
Influencers have emerged as a formidable force in the realm of social media marketing. Collaborating with influencers who align with your brand values can amplify reach and credibility, driving authentic connections with consumers in ways that traditional advertising cannot.
**Video Dominance**
Video content continues to reign supreme across social platforms. From live streams to short-form videos and interactive stories, visual content captivates audiences and fosters genuine engagement. Incorporating video into your digital strategy can enhance brand storytelling and leave a lasting impression on viewers.
### Neonage Solutions: Leading Innovation
At [Neonage Solutions](neonage.co.in), we're at the forefront of digital innovation. Specializing in cutting-edge technology solutions, our commitment to excellence is matched only by our passion for driving results. Whether it's developing bespoke software, crafting intuitive user experiences, or implementing robust cybersecurity measures, Neonage Solutions empowers businesses to thrive in a digital-first world.
**Conclusion: A Bright Future Ahead**
In conclusion, social media remains a cornerstone of effective digital marketing strategies. By embracing the latest trends, leveraging data-driven insights, and harnessing the power of influencers and compelling visual content, brands can navigate the digital landscape with confidence and achieve measurable success.
So, is social media dead for [digital marketing](neonage.co.in)? Far from it. It's alive, thriving, and evolving—ready to propel your brand, like Neonage Solutions, to new heights in the digital age. | ashikk |
1,906,813 | Will AI make software engineers obsolete? | Is GitHub Copilot worth the money? Some months ago I tried out GitHub Copilot for free. At... | 27,942 | 2024-06-30T18:53:13 | https://medium.com/@kinneko-de/5756fc147022 | ai, go, githubcopilot, mongodb | ## Is GitHub Copilot worth the money?
Some months ago I tried out [GitHub Copilot](https://github.com/features/copilot) for free. At this time I started with [Go](https://go.dev/) and I was too lazy to read a book. I am a software engineer and normally use C# for programming. Copilot helped me to get started with the basics of Go. There are some stumbling blocks when you come from C#.

The initial euphoria soon began to fade. The code examples did not always compile and were full of errors. My corrections were always accepted, but more than once I found myself going in circles: the next time I asked for a fix, previously corrected errors would be reintroduced.
Did I waste money by buying GitHub Copilot?
***
## Does GitHub Copilot make me smarter?
I am writing a microservice for storing files. I have decided to use [MongoDB](https://www.mongodb.com/) to persist the file metadata. I have no experience with this specific database, and I have never used a database with Golang. In my opinion, this is a perfect showcase for the use of AI.
**Task 1: Create a docker-compose file**
First of all, I need a test database. I use docker-compose for that. Let’s ask the AI for help.

```yaml
version: '3.8'
services:
  mongodb:
    image: mongo:latest
    container_name: mongodb_container
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    volumes:
      - mongodb_data:/data/db
    ports:
      - 27017:27017
volumes:
  mongodb_data:
```
The AI answers with an example and a lot of text about how docker-compose works. Apart from the fact that the newest version is 3.9 ( [the source is probably the outdated docker documentation](https://forums.docker.com/t/docker-compose-version-3-8-or-3-9-for-latest/102439) ), this is very useful. But one explanation makes me suspicious:

Previously I only used [PostgreSQL](https://www.postgresql.org/) and I have never declared a volume for the database in the docker-compose file. For use as a test system, I want all data to be removed when I delete the Docker container. Let’s ask the AI how to achieve this instead of using Google:

I didn’t know that but I expected this behaviour. Let’s teach the AI exactly what I want:

```
mongodb:
  image: mongo:latest
  container_name: mongodb_container
  environment:
    MONGO_INITDB_ROOT_USERNAME: root
    MONGO_INITDB_ROOT_PASSWORD: example
  volumes:
    - /data/db
  ports:
    - 27017:27017
```
Nice one. My existing docker-compose file uses a network to allow communication between the database and the service. GitHub Copilot did not detect this. The container name does not follow my naming scheme. I fixed both problems manually. Overall good work.
**Task 2: Create a repository**
The use of a [repository to encapsulate the database](https://medium.com/@pantaanish/understanding-repository-pattern-with-implementation-a-step-by-step-guide-ca1bf36be3b4) is quite common. In my experience, Go is a bit different and I prefer to follow standards. So let’s ask GitHub Copilot if I have the right design:

"Never trust just one source" is a good principle on the Internet. So I ask about the sources the AI used:

GitHub Copilot’s last sentence sounds like he is talking to a child. But I am not a child anymore. I am an experienced software developer. Two of the links are broken because the authors have changed the names of the Git repositories on GitHub. But as a human, I can find the correct links.


GitHub Copilot accepts my corrections without objection. Who is the child now? 🤪 As a good practice, I have to ask the child if it understood what I explained. Listen and repeat:

The third source was removed. I have provided the corrected link and invested my time in correcting this. Let’s remind the AI.

Let’s try to convince them to add the link again. The link was quite useful and I do not want to make GitHub Copilot any dumber.

This time, I give it a thumbs up (highlighted in yellow) to increase the chances that the answer will be used later. At least I hope it works that way.
Now that I know I want a repository, let’s create one. Of course, I use AI to do this, because programming is a lazy job.

```golang
package repository

import (
    "context"
    "time"

    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

type Repository struct {
    client *mongo.Client
    db     *mongo.Database
}

func NewMongoDBRepository(uri string, dbName string) (*Repository, error) {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
    if err != nil {
        return nil, err
    }

    db := client.Database(dbName)

    return &Repository{
        client: client,
        db:     db,
    }, nil
}

func (r *Repository) Close() error {
    return r.client.Disconnect(context.Background())
}
```
In general, that is ok. The code compiles, which was not the case in some other examples. Small modifications are needed. I prefer to use the [_context_ cancellation support](https://pkg.go.dev/context#WithCancel). Oops… I missed that there is a _WithTimeout_ function on the _context_. I always used _WithDeadline_. I have learned something from the AI. Today is a good day.

So I can create the repository once, store it in a package-level variable, and initialize it from the main.go file. I manually add an initialization function for this.
```golang
var File file.FileRepository // package-level variable

func InitializeMongoDBRepository(uri string, databaseName string) error {
    var err error
    File, err = NewMongoDBRepository(uri, databaseName)
    return err
}
```
Phew… that one’s tough. I was typing code by myself. I want to rest now, so the AI should take over.

```golang
package main

import (
    "log"

    "github.com/kinneko-de/restaurant-file-store-svc/internal/app/repository"
)

func main() {
    // The URI should be in the format: mongodb://<username>:<password>@<host>:<port>
    uri := "mongodb://root:example@localhost:27017"
    databaseName := "mydatabase"

    err := repository.InitializeMongoDBRepository(uri, databaseName)
    if err != nil {
        log.Fatalf("Failed to initialize MongoDB repository: %v", err)
    }

    // The rest of your main function...
}
```
Stop… these are not the variables I used in my docker-compose file. Here is my final docker-compose file:
```
mongodb:
  image: mongo:latest
  networks:
    - project-network
  container_name: restaurant-file-store-db
  environment:
    MONGO_INITDB_ROOT_USERNAME: rootuser
    MONGO_INITDB_ROOT_PASSWORD: rootpassword
  volumes:
    - /data/db
  ports:
    - 27017:27017
```
I did not show the docker-compose file to GitHub Copilot as a reference. So I show it the file so it can fix the variables itself.

```golang
package main

import (
    "log"

    "github.com/kinneko-de/restaurant-file-store-svc/internal/app/repository"
)

func main() {
    // The URI should be in the format: mongodb://<username>:<password>@<host>:<port>
    uri := "mongodb://rootuser:rootpassword@mongodb:27017"
    databaseName := "mydatabase"

    err := repository.InitializeMongoDBRepository(uri, databaseName)
    if err != nil {
        log.Fatalf("Failed to initialize MongoDB repository: %v", err)
    }

    // The rest of your main function...
}
```

Now the user and password are correct. But I think _‘mongodb’_ is the wrong value for the database host. I use _‘restaurant-file-store-db’_ as the container name in my docker-compose declaration. But humans are fallible, so let’s ask GitHub Copilot:

```golang
package main

import (
    "log"

    "github.com/kinneko-de/restaurant-file-store-svc/internal/app/repository"
)

func main() {
    // The URI should be in the format: mongodb://<username>:<password>@<host>:<port>
    uri := "mongodb://rootuser:rootpassword@restaurant-file-store-db:27017"
    databaseName := "mydatabase"

    err := repository.InitializeMongoDBRepository(uri, databaseName)
    if err != nil {
        log.Fatalf("Failed to initialize MongoDB repository: %v", err)
    }

    // The rest of your main function...
}
```
Now I am a little bit confused. In its answer, the AI told me to use the _‘service name’_ which is _‘mongodb’_ and not _‘restaurant-file-store-db’_. But it still replaced my suggestion in the example. Does it just want to follow what I say even if it is wrong? I need to test this.
Testing shows that both versions work. Using [Stackoverflow as a resource for human knowledge](https://stackoverflow.com/questions/71829173/does-docker-use-service-name-or-container-name-to-resolve-service-reference-in-d), it turns out that using the service name is preferable. It works even if you have multiple instances. Not that it matters for my tests here, but I change the host back to the service name _‘mongodb’_.

Overall, GitHub Copilot gives me a lot of good suggestions. The quality of the examples varies. It also cannot provide code examples for more advanced uses that a software engineer would normally have to take care of.
For example, I added health checks to the docker-compose file to ensure that the systems start in the right order and don’t crash non-deterministically. It also doesn’t realize that I need support for Kubernetes health checks and pod shutdown in the main.go file. The client, the database, and the collection of ‘mongodb’ are thread-safe and should not be recreated for each use. The examples are for demonstration and explanation only and are not production-ready.
***
## Conclusion
GitHub Copilot (and AI in general) is not (yet?) ready to replace me as a software engineer. It saves me some mindless typing and time-consuming internet research. It allows me to learn new things faster. I love this ❤.
At this stage, the AI is still a kid who needs my help to cross the street.

_Photo by <a href="https://unsplash.com/de/@benwhitephotography?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Ben White</a> on <a href="https://unsplash.com/de/fotos/mann-halt-die-hand-des-babys-JJ9irt1OZmI?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>_
I’m excited to see the kid grow up.
| kinneko-de |
1,906,837 | Creating an Image Thumbnail Generator Using AWS Lambda and S3 Event Notifications with Terraform | In this post, we'll explore how to use serverless Lambda functions to create an image thumbnail... | 0 | 2024-06-30T18:51:39 | https://dev.to/chinmay13/creating-an-image-thumbnail-generator-using-aws-lambda-and-s3-event-notifications-with-terraform-4e13 | aws, terraform, lambda, awscommunitybuilder | In this post, we'll explore how to use serverless Lambda functions to create an image thumbnail generator triggered by S3 event notifications, all orchestrated using Terraform.
## Architecture Overview
Before we get started, let's take a quick look at the architecture we'll be working with:

## Step 1: Create Source and Destination Buckets
First, we'll create two S3 buckets: one for the source images and another for the generated thumbnails.
```hcl
################################################################################
# S3 Source Image Bucket
################################################################################
resource "aws_s3_bucket" "source-image-bucket" {
  bucket = var.source_bucket_name
  tags = merge(local.common_tags, {
    Name = "${local.naming_prefix}-s3-source-bucket"
  })
}

################################################################################
# S3 Thumbnail Image Bucket
################################################################################
resource "aws_s3_bucket" "thumbnail-image-bucket" {
  bucket = var.thumbnail_bucket_name
  tags = merge(local.common_tags, {
    Name = "${local.naming_prefix}-s3-thumbnail-bucket"
  })
}
```
## Step 2: Create a Policy
Next, we create a policy that grants permissions for the Lambda function to read from the source bucket and write to the destination bucket.
```hcl
################################################################################
# S3 Policy to Get and Put objects
################################################################################
resource "aws_iam_policy" "lambda_s3_policy" {
  name = "LambdaS3Policy"
  policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [{
      "Effect" : "Allow",
      "Action" : "s3:GetObject",
      "Resource" : "${aws_s3_bucket.source-image-bucket.arn}/*"
    }, {
      "Effect" : "Allow",
      "Action" : "s3:PutObject",
      "Resource" : "${aws_s3_bucket.thumbnail-image-bucket.arn}/*"
    }]
  })
}
```
## Step 3: Create a Lambda AssumeRole
Attach the created policy along with the AWSLambdaBasicExecutionRole to a new IAM role.
```hcl
################################################################################
# Lambda IAM role to assume the role
################################################################################
resource "aws_iam_role" "lambda_s3_role" {
  name = "LambdaS3Role"
  assume_role_policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [{
      "Effect" : "Allow",
      "Principal" : {
        "Service" : "lambda.amazonaws.com"
      },
      "Action" : "sts:AssumeRole"
    }]
  })
}

################################################################################
# Assign policy to the role
################################################################################
resource "aws_iam_policy_attachment" "assigning_policy_to_role" {
  name       = "AssigingPolicyToRole"
  roles      = [aws_iam_role.lambda_s3_role.name]
  policy_arn = aws_iam_policy.lambda_s3_policy.arn
}

resource "aws_iam_policy_attachment" "assigning_lambda_execution_role" {
  name       = "AssigningLambdaExecutionRole"
  roles      = [aws_iam_role.lambda_s3_role.name]
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
```
## Step 4: Create a Lambda Function
First write the Python code for image processing and zip it. Then create a Lambda function using Python, incorporating Lambda layers from Klayers, and add the necessary permissions. We use the Python 3.12 runtime environment, and the environment variable DEST_BUCKET is used to read the destination bucket name in the code.
```hcl
################################################################################
# Compressing lambda_handler function code
################################################################################
data "archive_file" "thumbnail_lambda_source_archive" {
  type        = "zip"
  source_dir  = "${path.module}/lambda"
  output_path = "${path.module}/lambda_function.zip"
}

################################################################################
# Creating Lambda Function
################################################################################
resource "aws_lambda_function" "create_thumbnail_lambda_function" {
  function_name = "CreateThumbnailLambdaFunction"
  filename      = "${path.module}/lambda_function.zip"
  runtime       = "python3.12"
  handler       = "thumbnail_generator.lambda_handler"
  memory_size   = 256
  timeout       = 300

  environment {
    variables = {
      DEST_BUCKET = aws_s3_bucket.thumbnail-image-bucket.bucket
    }
  }

  source_code_hash = data.archive_file.thumbnail_lambda_source_archive.output_base64sha256

  role = aws_iam_role.lambda_s3_role.arn

  layers = [
    "arn:aws:lambda:us-east-1:770693421928:layer:Klayers-p312-Pillow:2"
  ]
}

################################################################################
# Lambda Function Permission to have S3 as a Trigger for Lambda Function
################################################################################
resource "aws_lambda_permission" "thumbnail_allow_bucket" {
  statement_id  = "AllowExecutionFromS3Bucket"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.create_thumbnail_lambda_function.arn
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.source-image-bucket.arn
}
```
## Step 5: Create S3 Event Notification
Set up an S3 event notification to trigger the Lambda function when a new image is uploaded.
```hcl
################################################################################
# Creating S3 Notification for Lambda when Object is uploaded in the Source Bucket
################################################################################
resource "aws_s3_bucket_notification" "thumbnail_notification" {
  bucket = aws_s3_bucket.source-image-bucket.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.create_thumbnail_lambda_function.arn
    events              = ["s3:ObjectCreated:*"]
  }

  depends_on = [
    aws_lambda_permission.thumbnail_allow_bucket
  ]
}
```
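When this notification fires, Lambda receives a JSON event whose records carry the bucket and key. As a quick illustration, here is a minimal, self-contained sketch of pulling those fields out of a sample payload (the record shape follows the S3 event format; the bucket and key values here are made up):

```python
import json

def extract_bucket_and_key(event: dict) -> tuple:
    """Return (bucket, key) from the first record of an S3 event."""
    record = event["Records"][0]["s3"]
    return record["bucket"]["name"], record["object"]["key"]

# A trimmed-down sample event, mimicking what S3 sends on ObjectCreated.
sample_event = json.loads("""
{
  "Records": [
    {
      "s3": {
        "bucket": {"name": "my-source-bucket"},
        "object": {"key": "photos/cat.jpg"}
      }
    }
  ]
}
""")

bucket, key = extract_bucket_and_key(sample_event)
print(bucket, key)  # my-source-bucket photos/cat.jpg
```

Real events contain many more fields (event name, timestamps, object size), but these two are all the thumbnail function needs.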
## Step 6: Create CloudWatch Log Group
Finally, create a CloudWatch log group to capture logs from the Lambda function.
```hcl
################################################################################
# Creating CloudWatch Log group for Lambda Function
################################################################################
resource "aws_cloudwatch_log_group" "create_thumbnail_lambda_function_cloudwatch" {
  name              = "/aws/lambda/${aws_lambda_function.create_thumbnail_lambda_function.function_name}"
  retention_in_days = 30
}
```
## Step 7: Write python code for lambda function
- **Dependencies:** The function uses boto3 to interact with AWS S3 and Pillow for image processing. We use an existing Pillow layer from Klayers, referenced by its ARN.
- **Event handling:** The function extracts the source bucket and object key from the event triggered by the S3 upload.
- **Environment variable:** The destination bucket name is read from the environment variable DEST_BUCKET.
- **Image processing:** The image is downloaded from the source bucket, a thumbnail is created using Pillow's thumbnail method, and the thumbnail is saved to a BytesIO object to prepare it for upload.
- **Uploading the thumbnail:** The thumbnail is uploaded to the destination bucket.
```python
import logging
import boto3
from io import BytesIO
from PIL import Image
import os

logger = logging.getLogger()
logger.setLevel(logging.INFO)

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    logger.info(f"event: {event}")
    logger.info(f"context: {context}")

    # Get the S3 bucket and object key from the event
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = event["Records"][0]["s3"]["object"]["key"]

    # Define the destination bucket and thumbnail key
    thumbnail_bucket = os.environ['DEST_BUCKET']
    thumbnail_name, thumbnail_ext = os.path.splitext(key)
    thumbnail_key = f"{thumbnail_name}_thumbnail{thumbnail_ext}"
    logger.info(f"Bucket name: {bucket}, file name: {key}, Thumbnail Bucket name: {thumbnail_bucket}, file name: {thumbnail_key}")

    # Open the image using Pillow
    file_byte_string = s3_client.get_object(Bucket=bucket, Key=key)['Body'].read()
    img = Image.open(BytesIO(file_byte_string))
    logger.info(f"Size before compression: {img.size}")

    # Create a thumbnail
    img.thumbnail((500, 500))
    logger.info(f"Size after compression: {img.size}")

    # Save the thumbnail to a BytesIO object
    buffer = BytesIO()
    img.save(buffer, "JPEG")
    buffer.seek(0)

    # Upload the thumbnail to the destination bucket
    sent_data = s3_client.put_object(Bucket=thumbnail_bucket, Key=thumbnail_key, Body=buffer)
    if sent_data['ResponseMetadata']['HTTPStatusCode'] != 200:
        raise Exception('Failed to upload image {} to bucket {}'.format(key, bucket))

    return event
```
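The thumbnail naming convention used by the handler (insert `_thumbnail` before the extension) is easy to verify in isolation with a stdlib-only check (the file names below are just examples):

```python
import os

def thumbnail_key_for(key: str) -> str:
    """Derive the destination key: insert '_thumbnail' before the extension."""
    name, ext = os.path.splitext(key)
    return f"{name}_thumbnail{ext}"

print(thumbnail_key_for("photos/cat.jpg"))  # photos/cat_thumbnail.jpg
print(thumbnail_key_for("report.png"))      # report_thumbnail.png
```

Because the path prefix is preserved, thumbnails land in the same "folder" structure in the destination bucket as the originals in the source bucket.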
If you don't want to use the Klayers Lambda layer, you can package the Python code along with its dependencies as follows.
```sh
mkdir package
pip install pillow -t package/
cp thumbnail_generator.py package/
cd package
zip -r ../lambda_function.zip .
cd ..
```
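Terraform's `source_code_hash` (from `output_base64sha256`) is simply the base64-encoded SHA-256 digest of the zip file. If you ever need to compare it by hand, it can be computed like this (the temporary file here stands in for `lambda_function.zip`):

```python
import base64
import hashlib
import tempfile

def base64_sha256(path: str) -> str:
    """Base64-encoded SHA-256 of a file, matching Terraform's output_base64sha256."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return base64.b64encode(digest).decode("ascii")

# Example: hash a small temporary file (in practice, point this at lambda_function.zip).
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")
    tmp_path = tmp.name

print(base64_sha256(tmp_path))
```

When this value changes between plans, Terraform knows the zip changed and redeploys the function.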
## Steps to Run Terraform
Follow these steps to execute the Terraform configuration:
```hcl
terraform init
terraform plan
terraform apply -auto-approve
```
Upon successful completion, Terraform will provide relevant outputs.
```hcl
Apply complete! Resources: 10 added, 0 changed, 0 destroyed.
```
## Testing
Source and Destination S3 buckets

Lambda S3 Role with attached policies



Lambda Function with runtime settings and layers


Uploading an image to source bucket with large size

Thumbnail created in destination bucket with small size

Cloudwatch Log group showing the lambda function logs

## Cleanup
Remember to stop AWS components to avoid large bills. Empty the buckets first.
```hcl
terraform destroy -auto-approve
```
## Conclusion
We have successfully used S3 Event notifications to trigger a Lambda function that generates image thumbnails. This serverless architecture ensures scalability and ease of maintenance.
Happy Coding!
## Resources
- [AWS S3 Event Notifications](https://docs.aws.amazon.com/AmazonS3/latest/userguide/EventNotifications.html)
- [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html)
- [Lambda Layers](https://docs.aws.amazon.com/lambda/latest/dg/chapter-layers.html)
- [Klayers](https://github.com/keithrozario/Klayers/tree/master)
- [Tutorial](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-tutorial.html)
- [GitHub link](https://github.com/chinmayto/terraform-aws-s3-event-image-thumbnail-generator) | chinmay13 |
1,906,836 | Talking about ReactJs and Nextjs. | Getting familiar with React React is an open source and flexible Javascript library that... | 0 | 2024-06-30T18:46:38 | https://dev.to/walerick/talking-about-reactjs-and-nextjs-32b3 | ## Getting familiar with React
React is an open-source, flexible JavaScript library that allows developers to build scalable, simple, and fast frontend interfaces for single-page applications. It supports a functional programming paradigm and a reactive approach.
It was developed by Facebook.
## What is React used for
- Social Media Platforms e.g Facebook, Instagram.
- Online Video streaming e.g Netflix
- SAAS tools e.g Zapier.
## Getting familiar with NextJs
Next.js is an open-source JavaScript framework that allows you to develop fast and user-friendly web applications and static websites using React.
## What is Next.js used for
- E-commerce
- Landing Pages
- Marketing Website
## Which should you use for your project?
Selecting a framework or library completely depends on your project's needs. React and Next.js are both emerging and beneficial tools, each suited to certain tasks.
Thanks for reading this far. I hope this article helps in some way. There is an internship program currently going on, [HNG11](https://hng.tech/internship).
This year is their 11th edition and it's completely free.
However, applicants have the opportunity to [subscribe](https://hng.tech/premium) to a yearly premium package.
| walerick | |
1,906,833 | Language Model Level of Truth | Many of us knows that -in rough words- LLMs "just" forecast next word. They have been fine tuned... | 0 | 2024-06-30T18:40:17 | https://dev.to/joex69/language-model-level-of-truth-p18 | Many of us knows that -in rough words- LLMs "just" forecast next word. They have been fine tuned after gathering a pretrained model unless you have a Big Tech to: have enough level of computation, then build your own pretrained model from a tokenized corpus, then fine tune it, then deploy it for finally serving it hot just right out from the ouven ready to eat.
One of the big problems now are hallucinations and let's say it, LIES. Because a model will never say you "I don't know". They know it all even if they do not. So I'm interested on finding a way to verify and guarantee a safe level of true of a LLM with a low rate of hallucinations. Them are good when brainstorming but bad in most of the cases.
The pros will be a lot. To mention just one is that, in an age when agentic programming is an incipient area, is very often when we might need a "true checker" so as in conventional programming is done for example, by the argument inside an If-instruction works.
Because when we develope, compile and execute code at any deterministic turing machine, we know that the processor will tell us the truth.
I think this area of research is as amazing as difficult. Some approaches over there? If you are interested on that Contact Me or post here.
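One cheap, well-known heuristic in this direction is self-consistency: sample several answers to the same question and only trust the majority. A toy sketch of that idea (the `answers` lists below stand in for repeated model samples; the threshold is an arbitrary choice):

```python
from collections import Counter

def consistency_check(answers, threshold=0.6):
    """Return (answer, trusted): trusted means the majority answer reached
    the agreement threshold; otherwise report "I don't know"."""
    top, count = Counter(answers).most_common(1)[0]
    agreement = count / len(answers)
    return (top, True) if agreement >= threshold else ("I don't know", False)

# Five hypothetical samples from the same prompt:
print(consistency_check(["Paris", "Paris", "Paris", "Lyon", "Paris"]))  # ('Paris', True)
print(consistency_check(["42", "41", "43", "44", "42"]))                # ("I don't know", False)
```

This does not guarantee truth, only consistency, but it gives an agent a cheap boolean signal where conventional code would use an if-condition.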
| joex69 | |
1,906,828 | Customize Filament Table Query | I am going to share how you can customize the query applying filters, this tip has a meilisearch... | 0 | 2024-06-30T18:32:44 | https://dev.to/arielmejiadev/customize-filament-table-query-2c2c | php, laravel, filamentphp, tailwindcss | I am going to share how you can customize the query applying filters, this tip has a meilisearch example borrowed as I found the tip in a laracast chat, the content was enriched by me.
To customize the table search to use some Laravel Scout Driver to get a fuzzyness search or a more powerful search in terms of speed:
## The Inline Way
```php
// filter criteria
$filter = '(status = Active) AND (type = big)';

// $this->query is a prop provided by extending from the Filament resource class
$company_ids = \App\Models\Company::search($this->query, function (\Meilisearch\Endpoints\Indexes $meilisearch, $query, $options) use ($filter) {
    // This is a custom configuration for Meilisearch only
    $options['facets'] = ['type', 'status'];
    $options['filter'] = $filter;
    $options['hitsPerPage'] = 100;

    return $meilisearch->search($query, $options);
})->get()->pluck('id');

return $table
    ->query(\App\Models\Company::whereIn('id', $company_ids))
    ->columns([
        TextColumn::make('name')->sortable(),
        TextColumn::make('status')->sortable(),
        TextColumn::make('type')->sortable(),
    ]);
```
## The Override Way
We can also override the `getEloquentQuery` method, like this example removing a global scope for soft deletes:
```php
public static function getEloquentQuery(): Builder
{
    return parent::getEloquentQuery()
        ->withoutGlobalScopes([SoftDeleted::class])
        ->with(['products']);
}
```
733,522 | HitBTC Review | HitBTC Review 2021 | Is it Safe or legit? Today we will review HitBTC, a cryptocurrency... | 0 | 2021-06-23T14:20:25 | https://medium.com/coinmonks/hitbtc-review-c5143c5d53c2 | trading, altcoins, cryptotrading, cryptocurrency | ---
title: HitBTC Review
published: True
date: 2021-06-20 06:36:19 UTC
tags: trading,altcoins,cryptotrading,cryptocurrency
canonical_url: https://medium.com/coinmonks/hitbtc-review-c5143c5d53c2
---
### HitBTC Review 2021 | Is it Safe or legit?

Today we will review [**HitBTC**](https://blog.coincodecap.com/go/hitbtc), a [cryptocurrency exchange](https://blog.coincodecap.com/go/crypto-exchange) that offers high liquidity and access to the largest cryptocurrency collection in the industry.
### What is HitBTC?
[**HitBTC**](https://blog.coincodecap.com/go/hitbtc) is a [cryptocurrency exchange](https://blog.coincodecap.com/go/crypto-exchange) platform that offers high liquidity and supports hundreds of cryptocurrencies. [HitBTC](https://blog.coincodecap.com/go/hitbtc) platform was developed in late 2013, making it one of the oldest [cryptocurrency exchanges](https://blog.coincodecap.com/crypto-exchange) in the industry. With this platform, users can buy, sell, and trade hundreds of different cryptocurrencies.
The primary concern with the [**HitBTC**](https://blog.coincodecap.com/go/hitbtc) exchange is accountability, because the location from which the company operates is not known. Although the mailing address is in Hong Kong, it claims to have offices in Chile and Estonia. This should act as a significant red flag, as it would be difficult to locate the company if, for any reason, something happens to the funds. [HitBTC](https://blog.coincodecap.com/go/hitbtc) has around six million in venture capital investment in Hong Kong, but it focuses more on the European crypto markets.
Nevertheless, on this platform users can deposit funds using real-world money through direct bank transfer. It does not support debit/credit cards or e-wallets. The main selling point of [Hitbtc](https://blog.coincodecap.com/go/hitbtc) is its highly competitive fees, among the lowest in the crypto market.
### Summary (TL;DR)
- It offers the most comprehensive selections of 380 cryptocurrencies in more than 800 cryptocurrency pairs.
- HitBTC allows users a Crypto-to-[Crypto exchange](https://blog.coincodecap.com/go/crypto-exchange).
- Verification is not mandatory to start trading.
- It is a secure platform, which has never been hacked till now.
- High competitive fees, as the trading fees are lower than the average industry trading fee.
- This exchange offers high liquidity.
- The customer support is multilingual.
- Users on HitBTC can make use of the [trading bots](https://blog.coincodecap.com/best-crypto-trading-bots), which work with the robust API.
- It also a demo feature for new cryptocurrency traders.
- It has a mobile app for both Android and iOS versions of smartphones.
### How to open an account at HitBTC and Trade?
This section is a step-by-step guide to register, deposit funds, and make your first trade.
#### Step 1: Hitbtc Signup Process
To sign up, users have to visit the [**HitBTC**](https://blog.coincodecap.com/go/hitbtc) homepage. On the top right-hand side of their home will be the “Sign up” button. Click on that to start the registration process.

After that, a Signup window will appear, which will ask users to fill few details. They will have to enter their email addresses and then choose a strong password. After filling, click on the “Sign up” button.

In the next step, you have to enter your residence, full name, and telephone number. Finally, you will have to verify your email address by clicking on the link that [**HitBTC**](https://blog.coincodecap.com/go/hitbtc) sends to your email address. Copy and paste the code when the new page opens.
#### Step 2: Fund your account
After completing the registration process, you have to deposit some funds into your [HitBTC account](https://blog.coincodecap.com/go/hitbtc). This process includes depositing using a cryptocurrency rather than through a bank account. Depositing using the bank account requires a complete verification process, which takes time. On the other hand, cryptocurrency as a deposit method will only take 20 minutes or so for the funds to show up in your account.
You will see the green “Deposit” button on the top of the page; click on it. A long list of coins will show up on your window, using which you can make deposits. Enter the name of the coin you prefer in the search box. When it appears, click on the “+” blue button.
After this, you will see the unique deposit address of the coin you selected. Click on the “Copy” blue button to copy the wallet address to your clipboard. Then, go to your private wallet, paste that address, and transfer your funds.

#### Step 3: Buy Crypto
As soon as the deposited funds show up in your [**HitBTC**](https://blog.coincodecap.com/go/hitbtc) account, you can make your first trade. On the top left side of the page, you will find an “Exchange” button. By clicking on it, you will enter into the main trading area. The page can look intimidating for users who have never done trading. Either enter the name of your coin on the right-hand side or scroll through the list and select the coin you want to buy. After selecting, click on the trading pair you picked.
#### Step 4: Complete the Trade
After selecting the coin you want to buy, move below the main chart. There you will find a "Buy" button. In the "Amount" box, enter the amount of cryptocurrency you want. After that, click on the green "Buy Limit" button to complete the trade. The trade will complete after a few seconds.

### HitBTC Review: Verification Process
After email verification during the signup process, users can raise their limits by completing the full verification process. To increase your limits and fund your account using FIAT, you will need to provide the following information to complete your **HitBTC KYC**.
- Personal information, including Full Name, Date-of-Birth, Nationality (as mentioned on the Government-issued ID).
- Residential Address.
- Bank account information to use FIAT.
- Proof of Identity in the form of a government-issued photo ID.
- Mobile number
- A photo of yourself holding your ID.
Users should note that the ID must be valid for at least the next three months, and the photo should be of the original document, not a photocopy. The image should also be clear, colored, and high-resolution.
Based on the type of account, that is: Starter, Trader, Pro, the following limits apply:
- **Starter:** Users can trade 1 BTC per day and 5 BTC per month. FIAT currency is not available for Starters.
- **Trader:** Free crypto deposits, and a maximum of 100 BTC worth of cryptocurrency withdrawals per day. Users can make payments using FIAT currency.
- **Pro:** **No fees for depositing cryptos** , and after providing additional documents like a source of your funds or opening an account for business, you can enjoy higher limits.
Pro account holders enjoy the same liquidity as on [**HitBTC**](https://blog.coincodecap.com/go/hitbtc), dedicated account management, and no limits on deposits and withdrawals. To qualify as a Pro, users should hold a minimum balance of 100 BTC or generate a turnover of 1,000 BTC per month.

### HitBTC Fees
#### Hitbtc Withdrawal and Deposit Fees
Transactions through debit and credit cards are not available on [**HitBTC**](https://blog.coincodecap.com/go/hitbtc), so there are no fees for those. However, users who deposit via European bank transfer ([SEPA](https://ec.europa.eu/info/business-economy-euro/banking-and-finance/consumer-finance-and-payments/payment-services/single-euro-payments-area-sepa_en)) pay a fee of 0.90 EUR. For UK bank transfers, the fee is £5. Finally, for international bank wires ([SWIFT](https://www.swift.com/)), users pay a fee of $9.
#### HitBTC Trading Fees
The trading fee structure of [**HitBTC**](https://blog.coincodecap.com/go/hitbtc) is straightforward: users pay **0.1%** in trading fees. However, this is charged at both ends of the transaction. For example, if you buy crypto worth $3,000 and later sell it for the same amount, you will pay two lots of $3.
Please note that verified users on HitBTC fall into a tiered fee system, in which the fee is determined by the user's trading volume during the past 30 days.
Once a user reaches tier 8, with 50,000 BTC per month or more, the platform pays a rebate of 0.01% per trade.
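As a quick sanity check, the round-trip cost in the example above can be computed directly. Here is a minimal Python sketch; the per-side rate is the one implied by "$3 on a $3,000 trade" in this review, so treat it as illustrative rather than a live HitBTC fee schedule:

```python
# Round-trip trading cost, per the example in this review.
# The rate below is implied by "$3 on a $3,000 trade"; it is an
# illustrative number, not a live fee schedule.
FEE_RATE = 0.001  # 0.1% per side, charged on both the buy and the sell

def round_trip_fee(trade_value_usd, fee_rate=FEE_RATE):
    """Return (fee per side, total fee) for buying and then selling the same amount."""
    per_side = trade_value_usd * fee_rate
    return per_side, 2 * per_side

per_side, total = round_trip_fee(3_000)
print(f"per side: ${per_side:.2f}, round trip: ${total:.2f}")
```

Running this prints `per side: $3.00, round trip: $6.00` — the two lots of $3 described above.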

### HitBTC Review: Payment Methods
Although [**HitBTC**](https://blog.coincodecap.com/go/hitbtc) used to support fiat currencies like USD, it no longer does. However, users can still buy any of the 800 supported crypto assets on the exchange. It also allows users to make transactions directly from their bank account, although this facility is only available to users who have completed their KYC.
The deposit and withdrawal process is straightforward: navigate to the “Account” tab and select the cryptocurrency. Withdrawals usually take several minutes, depending on the cryptocurrency and network speed.
### HitBTC Demo
[**HitBTC**](https://blog.coincodecap.com/go/hitbtc) offers a demo platform where beginners can try trading cryptocurrencies; to experience it, simply register and access it.
The Demo option is available at the bottom of the screen, or you can directly [**click here to enter the demo**](https://demo.hitbtc.com/). Please note that you have to create a separate account for the demo platform. In it, you will receive around 4,000 USDT worth of cryptocurrency in fake money, which you can use to experiment with strategies and portfolios.
### Hitbtc Review: OTC trading
HitBTC offers [Over The Counter](https://hitbtc.com/otc) (OTC) trading services for high-volume traders, the result of a partnership with [TrustedVolumes](https://trustedvolumes.com/). To use this feature, users must exchange over 100,000 USD per trade, and each trade incurs a 0.1% transaction fee.
### HitBTC Review: Customer Support
HitBTC has a [support center](https://support.hitbtc.com/en/support/home), which helps answer almost all of the queries raised by the users. This Support Center contains:
- Extensive knowledge-base articles about [**HitBTC**](https://blog.coincodecap.com/go/hitbtc) platform features and how users can profit from them.
- A Ticketing feature, which allows users to contact the HitBTC support team directly.
Users can ask questions related to Payments, KYC, and others. The team usually replies within 24 hours.
### Hitbtc Review: Security
HitBTC's security level is good: it has never been confirmed hacked. Some sources claimed that HitBTC was hacked along with BTER and Excoin, but that information was never confirmed.
[**HitBTC**](https://blog.coincodecap.com/go/hitbtc) supports two-factor authentication to add an extra layer of security to user accounts. The security section in the “Settings” menu also lets users review all logins to their account and look for any IP address or location they don't recognize. If there is an unrecognized IP address, they can terminate all sessions with one click. There is also an option to automatically log out of all sessions at regular intervals.
HitBTC also supports the following options for the users to protect their account:
- Session termination tool and activity log
- Cold storage custody
- Auto logouts after 30 minutes of inactivity.
- Universal 2nd factor (Yubikey)
- [Crypto wallet](https://blog.coincodecap.com/best-crypto-wallets-app) address whitelisting
After **HitBTC signup**, the exchange pushes users to activate 2FA, because some features are unavailable without it.

### HitBTC Review: Pros and Cons
#### Pros
- More than 300 different coins with 800 pairs.
- Trading volumes are among the largest in the industry.
- Simple and easy to navigate interface.
- KYC is not mandatory.
- Trading fees are very low and highly competitive.
- Demo mode is also available for users.
- Available in multiple languages.
#### Cons
- It does not support Debit and Credit cards.
- FIAT currency is not available.
- Uncertainty about the Hitbtc location.
- Transparency issues.
- Not suitable for beginners.
### HitBTC Review: Conclusion
In general terms, [**HitBTC**](https://blog.coincodecap.com/go/hitbtc) is an intriguing and capable platform for trading cryptocurrencies. It supports one of the highest numbers of cryptocurrencies in the industry, and the icing on the cake is that its trading fees are among the lowest too. The exchange is relatively secure and offers many settings and options for users to make their accounts safer.
However, it does have some transparency issues, which raise red flags. Therefore, it depends on your risk tolerance whether to trade on it after completing KYC and enabling the extra security options. Just remember the golden rule: never trade more money than you can afford to lose, and go with your gut.
### Frequently Asked Questions
**Is HitBTC safe?**
HitBTC has never been confirmed hacked, although some sources have claimed otherwise without providing proof. There are, however, a certain number of complaints from users about their individual accounts being hacked; such accusations may or may not be accurate.
**How do I Withdraw Money from HitBTC?**
Follow the steps to withdraw money from HitBTC:
– Go to the Hitbtc login page to log in to your account.
– Click on “Deposits.” Make sure the currency you want to withdraw has been moved from your trading account to your main account.
– Then click on the “Withdraw” section.
– Enter the amount you want to withdraw.
– Enter your wallet address and 2FA code for your account.
– Finally, click on the “Withdraw” button again.
**How to open support ticket HitBTC?**
You can directly click [here](https://support.hitbtc.com/en/support/tickets/new) to open a HitBTC support ticket. Alternatively, scroll to the bottom of the home page and select “Support Center.” There you will see a “Contact Us” button, which takes you to the support ticket form.
**What Country is HitBTC In?**
According to their website, Hitbtc is located in Hong Kong, with their offices in Chile too. The postal address of Hitbtc is Unit 19, 7/F., One Midtown №11 Hoi Shing Road, Tsuen Wan, New Territories, Hong Kong.
- [How to buy Ethereum in India? [Mobile and Website 2021]](https://blog.coincodecap.com/buy-ethereum-in-india)
- [How to buy Bitcoin on WazirX 2021? [Also works on Mobile ]](https://blog.coincodecap.com/buy-bitcoin-on-wazirx)
- [How to Transfer Funds from Binance to Coinbase? [2021]](https://blog.coincodecap.com/binance-to-coinbase)
- [MultiCharts Review 2021: Is it Worth Buying?](https://blog.coincodecap.com/multicharts-review)
- [Staking Crypto — An Ultimate Guide on Crypto Staking [2021]](https://blog.coincodecap.com/staking-crypto)
> Join [Coinmonks Telegram Channel](https://t.me/coincodecap) and learn about crypto trading and investing
#### Also, Read
- [YouHodler vs CoinLoan vs Hodlnaut](https://medium.com/coinmonks/youhodler-vs-coinloan-vs-hodlnaut-b1050acde55a) | [Cryptohopper vs HaasBot](https://blog.coincodecap.com/cryptohopper-vs-haasbot)
- [Binance vs Kraken](https://blog.coincodecap.com/binance-vs-kraken) | [Dollar-Cost Averaging Trading Bot](https://blog.coincodecap.com/pionex-dca-bot)
- [How to buy Bitcoin in India?](https://medium.com/coinmonks/buy-bitcoin-in-india-feb50ddfef94) | [WazirX Review](https://medium.com/coinmonks/wazirx-review-5c811b074f5b) | [BitMEX Review](https://blog.coincodecap.com/bitmex-review)
- [Crypto Copy Trading Platforms](https://medium.com/coinmonks/top-10-crypto-copy-trading-platforms-for-beginners-d0c37c7d698c) | [Top 5 BlockFi Alternatives](https://blog.coincodecap.com/blockfi-alternatives)
- [CoinLoan Review](https://medium.com/coinmonks/coinloan-review-18128b9badc4) | [Crypto.com Review](https://medium.com/coinmonks/crypto-com-review-f143dca1f74c) | [Huobi Margin Trading](https://medium.com/coinmonks/huobi-margin-trading-b3b06cdc1519)
- [Top paid cryptocurrency and blockchain courses](https://blog.coincodecap.com/blockchain-courses) | [Binance Review](https://medium.com/coinmonks/binance-review-ee10d3bf3b6e)
- [How to use BitMEX in the USA?](https://blog.coincodecap.com/use-bitmex-in-usa) | [BitMEX Review](https://blog.coincodecap.com/bitmex-review) | [Binance vs Bittrex](https://blog.coincodecap.com/binance-vs-bittrex)
- [Best Free Crypto Signals](https://blog.coincodecap.com/free-crypto-signals) | [YoBit Review](https://medium.com/coinmonks/yobit-review-175464162c62) | [Bitbns Review](https://medium.com/coinmonks/bitbns-review-38256a07e161) | [OKEx Review](https://medium.com/coinmonks/okex-review-6b369304110f)
- [Coinbase Staking](https://blog.coincodecap.com/coinbase-staking) | [Hotbit Review](https://medium.com/coinmonks/hotbit-review-cd5bec41dafb) | [KuCoin Review](https://blog.coincodecap.com/kucoin-review) | [Futures Trading Bots](https://medium.com/coinmonks/futures-trading-bots-5a282ccee3f5)
- [Best Crypto Trading Signals Telegram](https://medium.com/coinmonks/best-crypto-signals-telegram-5785cdbc4b2b) | [MoonXBT Review](https://medium.com/coinmonks/moonxbt-review-6e4ab26d037)
_Originally published at_ [_https://blog.coincodecap.com_](https://blog.coincodecap.com/hitbtc-review) _on April 26, 2021._
* * * | coinmonks |
1,907,171 | Enable Auto Shutdown Schedule in Windows 11! | Here are the key points from the page on scheduling an auto shutdown in Windows 11 using Task... | 0 | 2024-06-30T18:30:00 | https://winsides.com/schedule-auto-shutdown-in-windows-11/ | beginners, devops, windows, task | Here are the key points from the page on scheduling an auto shutdown in Windows 11 using Task Scheduler:
## Importance of Auto Shutdown:
- Saves electricity and reduces energy costs.
- Enhances security by preventing unauthorized access.
- Helps manage computer usage time.
## Setup Steps:
1. Open Task Scheduler (**taskschd.msc**).
2. Create a Basic Task named "**Auto Shutdown**."
3. Set the trigger (e.g., **daily at a specific time**).
4. Choose "Start a Program" action with `C:\Windows\System32\shutdown.exe` and arguments `/s /f /t 0`.
5. Finish and test the task.
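For those who prefer the command line, the same task can be created with Windows' built-in `schtasks` utility instead of the Task Scheduler GUI. The sketch below builds that command; the task name "Auto Shutdown" and the 23:00 start time are example values, and creating the task requires an elevated (administrator) prompt:

```python
import platform
import subprocess

# Build the same daily shutdown task the GUI steps above create.
# "Auto Shutdown" and "23:00" are example values - adjust as needed.
command = [
    "schtasks", "/Create",
    "/TN", "Auto Shutdown",
    "/TR", r"C:\Windows\System32\shutdown.exe /s /f /t 0",
    "/SC", "DAILY",
    "/ST", "23:00",
]

if platform.system() == "Windows":
    # Requires administrator rights.
    subprocess.run(command, check=True)
else:
    # Not on Windows: just show the command that would be run.
    print(" ".join(command))
```

The `/TN`, `/TR`, `/SC`, and `/ST` switches set the task name, the program to run, the schedule type, and the start time, mirroring steps 2 through 4 above.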
## Benefits:
- Reduces energy consumption and carbon footprint.
- Ensures systems shut down after updates or maintenance.
Find more information @ [Winsides - Schedule Auto Shutdown](https://winsides.com/schedule-auto-shutdown-in-windows-11/)
| vigneshwaran_vijayakumar |
1,906,832 | 1579. Remove Max Number of Edges to Keep Graph Fully Traversable | 1579. Remove Max Number of Edges to Keep Graph Fully Traversable Hard Alice and Bob have an... | 27,523 | 2024-06-30T18:27:16 | https://dev.to/mdarifulhaque/1579-remove-max-number-of-edges-to-keep-graph-fully-traversable-4h7 | php, leetcode, algorithms, programming | 1579\. Remove Max Number of Edges to Keep Graph Fully Traversable
Hard
Alice and Bob have an undirected graph of `n` nodes and three types of edges:
- Type 1: Can be traversed by Alice only.
- Type 2: Can be traversed by Bob only.
- Type 3: Can be traversed by both Alice and Bob.
Given an array `edges` where <code>edges[i] = [type<sub>i</sub>, u<sub>i</sub>, v<sub>i</sub>]</code> represents a bidirectional edge of type <code>type<sub>i</sub></code> between nodes <code>u<sub>i</sub></code> and <code>v<sub>i</sub></code>, find the maximum number of edges you can remove so that after removing the edges, the graph can still be fully traversed by both Alice and Bob. The graph is fully traversed by Alice and Bob if starting from any node, they can reach all other nodes.
Return _the maximum number of edges you can remove, or return `-1` if Alice and Bob cannot fully traverse the graph_.
**Example 1:**

- **Input:** n = 4, edges = [[3,1,2],[3,2,3],[1,1,3],[1,2,4],[1,1,2],[2,3,4]]
- **Output:** 2
- **Explanation:** If we remove the 2 edges [1,1,2] and [1,1,3]. The graph will still be fully traversable by Alice and Bob. Removing any additional edge will not make it so. So the maximum number of edges we can remove is 2.
**Example 2:**

- **Input:** n = 4, edges = [[3,1,2],[3,2,3],[1,1,4],[2,1,4]]
- **Output:** 0
- **Explanation:** Notice that removing any edge will not make the graph fully traversable by Alice and Bob.
**Example 3:**

- **Input:** n = 4, edges = [[3,2,3],[1,1,2],[2,3,4]]
- **Output:** -1
- **Explanation:** In the current graph, Alice cannot reach node 4 from the other nodes. Likewise, Bob cannot reach 1. Therefore it's impossible to make the graph fully traversable.
**Constraints:**
- <code>1 <= n <= 10<sup>5</sup></code>
- <code>1 <= edges.length <= min(10<sup>5</sup>, 3 * n * (n - 1) / 2)</code>
- <code>edges[i].length == 3</code>
- <code>1 <= type<sub>i</sub> <= 3</code>
- <code>1 <= u<sub>i</sub> < v<sub>i</sub> <= n</code>
- All tuples <code>(type<sub>i</sub>, u<sub>i</sub>, v<sub>i</sub>)</code> are distinct.
**Solution:**
```php
class UnionFind {
    private $parent;
    private $rank;

    public function __construct($n) {
        // Nodes are 1-indexed, so allocate n + 1 slots.
        $this->parent = range(0, $n);
        $this->rank = array_fill(0, $n + 1, 1);
    }

    public function find($x) {
        // Path compression: point every visited node straight at its root.
        if ($this->parent[$x] !== $x) {
            $this->parent[$x] = $this->find($this->parent[$x]);
        }
        return $this->parent[$x];
    }

    public function union($x, $y) {
        $rootX = $this->find($x);
        $rootY = $this->find($y);
        if ($rootX === $rootY) {
            return false; // already connected; the edge is redundant
        }
        // Union by rank: attach the shallower tree under the deeper one.
        if ($this->rank[$rootX] > $this->rank[$rootY]) {
            $this->parent[$rootY] = $rootX;
        } elseif ($this->rank[$rootX] < $this->rank[$rootY]) {
            $this->parent[$rootX] = $rootY;
        } else {
            $this->parent[$rootY] = $rootX;
            $this->rank[$rootX]++;
        }
        return true;
    }
}

class Solution {
    /**
     * @param Integer $n
     * @param Integer[][] $edges
     * @return Integer
     */
    function maxNumEdgesToRemove($n, $edges) {
        $ufAlice = new UnionFind($n);
        $ufBob = new UnionFind($n);

        // Process type-3 (shared) edges first: one shared edge can serve
        // both Alice and Bob, so it is never worse than two single-type edges.
        usort($edges, function($a, $b) {
            return $b[0] - $a[0];
        });

        $requiredEdges = 0;
        foreach ($edges as $edge) {
            $type = $edge[0];
            $u = $edge[1];
            $v = $edge[2];
            if ($type === 3) {
                // Keep the edge if it connects new components in either graph.
                $unionAlice = $ufAlice->union($u, $v);
                $unionBob = $ufBob->union($u, $v);
                if ($unionAlice || $unionBob) {
                    $requiredEdges++;
                }
            } elseif ($type === 1) {
                if ($ufAlice->union($u, $v)) {
                    $requiredEdges++;
                }
            } elseif ($type === 2) {
                if ($ufBob->union($u, $v)) {
                    $requiredEdges++;
                }
            }
        }

        // Both Alice's and Bob's graphs must end up fully connected.
        for ($i = 2; $i <= $n; $i++) {
            if ($ufAlice->find($i) !== $ufAlice->find(1) || $ufBob->find($i) !== $ufBob->find(1)) {
                return -1;
            }
        }

        // Every edge not needed for connectivity can be removed.
        return count($edges) - $requiredEdges;
    }
}
```
**Contact Links**
- **[LinkedIn](https://www.linkedin.com/in/arifulhaque/)**
- **[GitHub](https://github.com/mah-shamim)**
| mdarifulhaque |
1,906,831 | Self-Healing Test Automation: A Key Enabler for Agile and DevOps Teams | Test automation is essential for ensuring the quality of software products. However, test automation... | 0 | 2024-06-30T18:26:54 | https://dev.to/jamescantor38/self-healing-test-automation-a-key-enabler-for-agile-and-devops-teams-1e8i | selfhealing, testgrid, automation | Test automation is essential for ensuring the quality of software products. However, test automation can be challenging to maintain, especially as software applications evolve over time. Self-healing test automation is an emerging concept in software testing that uses artificial intelligence and machine learning techniques to enable automated tests to detect and self-correct issues. This makes test automation more reliable and cost-effective, and reduces the amount of time and resources required to maintain test scripts.
In this article, we will discuss the benefits of self-healing test automation, how it works, and how to implement it in your organization.
## What Is Self/Auto-Healing Test Automation
Self healing test automation is a new approach to test automation that uses artificial intelligence (AI) and machine learning (ML) to make test scripts more robust and adaptable. With self healing test automation, test scripts can automatically detect and repair themselves when changes are made to the application under test, including shifting layouts and broken selectors. This makes it possible to automate tests for complex applications with frequently changing user interfaces without having to constantly maintain and update the test scripts.
## Why is Self Healing Test Automation Necessary?
Test automation scripts can easily break when changes are made to the user interface. This is because test automation scripts are typically designed to interact with specific elements on the screen, such as buttons, text fields, and labels. When these elements change, the script may no longer be able to find them or interact with them correctly. This can lead to test failures and false positives, which can be time-consuming and frustrating to resolve.
User interfaces are also constantly evolving, with new features and bug fixes added frequently. This means test automation scripts need regular updates to keep up with these changes. However, updating test automation scripts is a manual, time-consuming process, and it can be challenging to keep pace with the rate of change.
Self-healing test automation addresses this fragility of traditional test automation scripts by adapting to changes in the user interface automatically to make test scripts more robust and adaptable. Self-healing test scripts can automatically detect and repair themselves when changes are made to the application under test, which can help to reduce test maintenance costs, improve test quality, and increase test coverage.
## How Does Self-healing Mechanism Work?
**Step 1**: The self-healing mechanism is triggered whenever a “NoSuchElement” or similar error occurs for an element mentioned in the automation script.
**Step 2**: The algorithm analyzes the test script to identify the root cause of the error.
**Step 3**: The algorithm uses AI-powered data analytics to identify the exact object in the test script that has changed. An object can be any interface item, like a webpage, navigation button, or text box.
**Step 4**: The algorithm updates the test script with the new identification parameters for the affected object(s).
**Step 5**: The updated test case is re-executed to verify that the remediation has been successful.
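The five steps above can be sketched in a few lines. The data model here is hypothetical — each page element is just a dict of captured attributes, and `find_with_healing` stands in for a real locator engine — but the match-score-and-update loop is the core idea:

```python
from difflib import SequenceMatcher

def similarity(recorded, candidate):
    """Average string similarity across the attributes of two elements."""
    keys = set(recorded) | set(candidate)
    if not keys:
        return 0.0
    scores = (SequenceMatcher(None, str(recorded.get(k, "")),
                              str(candidate.get(k, ""))).ratio() for k in keys)
    return sum(scores) / len(keys)

def find_with_healing(page_elements, locator, threshold=0.6):
    # Steps 1-2: try the recorded locator first.
    for el in page_elements:
        if el.get("id") == locator.get("id"):
            return el, locator  # locator still valid, nothing to heal
    # Step 3: the locator broke; score every element on the page.
    best = max(page_elements, key=lambda el: similarity(locator, el))
    if similarity(locator, best) < threshold:
        raise LookupError("no plausible replacement element found")
    # Step 4: update the locator so the re-run (step 5) passes directly.
    healed = dict(locator, id=best["id"])
    return best, healed

# The "Submit" button was re-rendered with a new id; its text and tag survive.
page = [{"id": "btn-submit-v2", "text": "Submit", "tag": "button"},
        {"id": "nav-home", "text": "Home", "tag": "a"}]
old_locator = {"id": "btn-submit", "text": "Submit", "tag": "button"}
element, healed = find_with_healing(page, old_locator)
print(element["id"], healed["id"])
```

Real self-healing engines work against live DOM attributes, XPath hierarchies, and historical run data rather than plain dicts, but the repair flow is the same: fall back to the closest-matching element and persist the updated locator.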
## How TestGrid’s self-healing test automation adds value to your software delivery process?
### Saves Time and Effort
Self-healing test automation can save organizations a significant amount of time and effort in software testing. Traditional test automation approaches require manual intervention to fix errors or failures that occur during test execution. This can be a time-consuming and error-prone process, especially when dealing with large and complex test suites. Self-healing test automation eliminates the need for manual intervention, allowing tests to recover automatically from failures or errors.
### Improves Test Coverage
Self-healing test automation can help to improve test coverage by allowing testers to focus on writing new tests rather than maintaining existing tests. This is because self-healing tests can automatically adapt to changes in the software under test, which means that testers do not need to spend time updating their tests every time the software changes.
As a result, testers can focus on writing new tests to cover new features and functionality. Self-healing automation can improve test coverage by 5-10% by eliminating unnecessary code, resulting in shorter delivery times and higher returns on investment.
### Prevents Object Flakiness
Object flakiness is a common problem in test automation, especially for [GUI testing](https://testgrid.io/blog/a-to-z-about-gui-testing-in-software/). Object flakiness occurs when a test fails because it is unable to locate an object on the page. This can happen for a variety of reasons, such as changes to the UI, changes to the underlying code, or network latency.
Self-healing test automation can detect and prevent object flakiness by analyzing test results and identifying patterns that indicate flaky tests. By preventing object flakiness, teams can reduce the number of false positives and negatives, improving the overall accuracy and reliability of test results.
### Faster Feedback Loop
Self-healing test automation also enables a faster feedback loop. With traditional test automation approaches, tests are often run manually or through a continuous integration pipeline. However, with self-healing test automation, tests can be run continuously, providing immediate feedback on the quality of the application under test. This enables teams to identify and fix issues faster, improving the overall quality and reliability of the application.
## Conclusion
In agile methodology, applications are continuously developed and tested in short cycles. This can make it difficult to maintain test cases, as the application is constantly changing. Self-healing test automation can help to overcome this challenge by automatically updating test cases when the application under test changes.
Source : This blog is originally published at [Testgrid](https://testgrid.io/blog/self-healing-test-automation/)
| jamescantor38 |
1,906,827 | Beyond File Explorer: Alternatives for Windows File Explorer | While the built-in Windows File Explorer handles basic file management, power users and those... | 0 | 2024-06-30T18:26:08 | https://dev.to/coffmans/beyond-file-explorer-alternatives-for-windows-file-explorer-46c0 | windows, explorer, alternatives, software | 
While the built-in Windows File Explorer handles basic file management, power users and those seeking a more efficient workflow often crave additional features. Luckily, there's a plethora of alternatives offering a significant upgrade.
## Here are several free options to consider:
### 1. Multi Commander
**Overview:** Multi Commander is a powerful, dual-pane file manager designed for efficiency and advanced functionality. It offers numerous features that surpass the standard Windows File Explorer.
[Read more details about Multi Commander in a story I published a few years ago.](https://medium.com/@coffmans/a-better-life-without-file-explorer-fbe61b22e069)

**Key Features:**
- **Dual-pane interface:** Simplifies file transfers and organization.
- **Customizable layout:** Tailor the interface to your workflow.
- **Extensive plugin support:** Extend functionality with plugins.
- **Advanced search capabilities:** Find files quickly and accurately.
- **Built-in file viewer:** Preview files without opening them.
**Why It's Better:** Multi Commander's dual-pane interface makes file management tasks much more efficient, particularly when moving or copying files between directories. Its extensive plugin support and advanced search capabilities provide functionality that goes beyond what Windows File Explorer offers.
### 2. FreeCommander
**Overview:** FreeCommander is a dual-pane file manager that offers a range of features to improve file management efficiency and organization.

**Key Features:**
- **Dual-pane interface:** Facilitates easy file transfers and comparisons.
- **Tabbed browsing:** Keep multiple folders open and easily switch between them.
- **Built-in archive handling:** Manage ZIP files and other archives.
- **Quick access toolbar:** Access frequently used commands quickly.
- **File synchronization:** Sync directories effortlessly.
**Why It's Better:** FreeCommander's dual-pane and tabbed browsing features enhance productivity by making it easy to manage multiple folders simultaneously. Its built-in archive handling and file synchronization capabilities provide additional tools for efficient file management that are not available in the standard Windows File Explorer.
### 3. Explorer++
**Overview:** Explorer++ is a lightweight, portable file manager that aims to offer a more feature-rich experience compared to Windows File Explorer.

**Key Features:**
- **Tabbed browsing:** Open multiple tabs and manage folders easily.
- **Customizable user interface:** Adjust the layout to suit your preferences.
- **Advanced file operations:** Perform operations like merging, splitting, and filtering files.
- **Fast search:** Quickly locate files within directories.
- **Portable:** Use without installation.
**Why It's Better:** Explorer++'s tabbed browsing feature allows users to keep multiple directories open at once, reducing the need for multiple windows. Its advanced file operations and customizable interface make it a versatile and powerful tool for file management, all while remaining lightweight and portable.
### 4. Q-Dir
**Overview:** Q-Dir, also known as the Quad Explorer, offers a unique multi-pane interface, allowing users to manage multiple directories simultaneously.

**Key Features:**
- **Quad-pane interface:** View and manage four directories at once.
- **Lightweight and portable:** Minimal resource usage and no installation required.
- **Color filters:** Improve file visibility with color-coded filters.
- **Quick access links:** Easily access frequently used folders.
- **Drag-and-drop support:** Simplifies file transfers.
**Why It's Better:** Q-Dir's quad-pane interface is ideal for users who need to manage several directories simultaneously. Its lightweight nature and portability make it a convenient alternative to the standard Windows File Explorer, with the added benefit of color filters for better file organization.
### 5. Double Commander
**Overview:** Double Commander is an open-source, cross-platform file manager with a dual-pane interface and a wide range of features.

**Key Features:**
- **Dual-pane interface:** Efficiently manage files between directories.
- **Customizable interface:** Adjust toolbars, panels, and keyboard shortcuts.
- **Built-in text editor:** Edit text files directly within the file manager.
- **Advanced search:** Powerful search capabilities with regular expression support.
- **Total Commander plugins:** Compatible with many Total Commander plugins.
**Why It's Better:** Double Commander's dual-pane interface and customizable features make it a highly efficient file manager. Its advanced search functionality and support for Total Commander plugins provide additional tools and flexibility, making it a powerful alternative to Windows File Explorer.
### 6. Files
**Overview:** Files is a modern file manager designed with a user-friendly interface and a focus on performance. It integrates well with Windows 10 and 11, providing a seamless user experience.

**Key Features:**
- **Modern design:** Clean and intuitive interface that aligns with Windows 10/11 aesthetics.
- **Tabbed browsing:** Manage multiple folders with ease.
- **Dark mode:** Supports dark mode for comfortable use in low-light conditions.
- **Cloud integration:** Easily access cloud storage services.
- **Git integration:** Easily manage your Git projects.
- **Quick access:** Pin frequently used files and folders for easy access.
**Why It's Better:** Files offers a modern and visually appealing interface that aligns with the latest Windows design guidelines. Its tabbed browsing and quick access features improve productivity, while cloud integration ensures easy access to files stored online, making it a more versatile option than the standard Windows File Explorer.
## Here are several paid options to consider:
### 1. Directory Opus
**Overview:** Directory Opus is a powerful and highly customizable file manager that has been around for a long time, known for its rich feature set and flexibility.

**Key Features:**
- **Dual-pane interface:** Allows for easy file transfers and comparisons.
- **Customizable layout:** Tailor the interface to your workflow.
- **Advanced search:** Enhanced search capabilities surpassing Windows File Explorer.
- **Scripting support:** Automate repetitive tasks with scripts.
- **FTP and archive support:** Integrated FTP client and archive management.
**Why It's Better:** Directory Opus offers a dual-pane interface, which makes file management much more efficient, especially when moving or copying files between directories. Its customization options allow users to create a workspace that fits their specific needs, and the advanced search functionality is significantly more powerful than what is offered by Windows File Explorer.
### 2. Total Commander
**Overview:** Total Commander is a classic file manager known for its robustness and a vast array of features designed for advanced file management.

**Key Features:**
- **Dual-pane interface:** Facilitates easy file transfers and organization.
- **Built-in FTP client:** Manage files on remote servers.
- **Archive handling:** Support for numerous archive formats.
- **File comparison:** Compare and synchronize directories.
- **Extensive plugin support:** Enhance functionality with plugins.
**Why It's Better:** Total Commander's dual-pane interface simplifies file management tasks, and its built-in FTP client is invaluable for users who regularly manage files on remote servers. The extensive plugin support allows users to extend its capabilities far beyond those of Windows File Explorer.
### 3. XYplorer
**Overview:** XYplorer is another highly versatile file manager, known for its speed and efficiency. It combines the best features of traditional file managers with innovative new ideas.

**Key Features:**
- **Tabbed browsing:** Keep multiple folders open and easily switch between them.
- **Tagging and coloring:** Organize files with tags and color coding.
- **Scriptable:** Enhance functionality with scripts.
- **Portable:** Can be run without installation.
- **Preview capabilities:** Extensive preview options for various file types.
**Why It's Better:** The tabbed browsing feature of XYplorer makes it easy to manage multiple folders simultaneously without cluttering your workspace. The tagging and coloring system helps in organizing files more intuitively, which is a big step up from the standard Windows File Explorer.
# Conclusion
While the standard Windows File Explorer is suitable for basic file management tasks, these alternatives offer enhanced features, greater efficiency, and more customization options. Whether you need a dual-pane interface, advanced search capabilities, or extensive plugin support, these file managers provide a range of options to improve your file management experience. Give these alternatives a try to see how they can enhance your workflow.
| coffmans |
1,906,830 | The Role of Technology in Modern Drywall Estimating | Technology plays a crucial role in modern drywall estimating by automating measurements,... | 0 | 2024-06-30T18:25:11 | https://dev.to/madisson_harry_2594faffea/the-role-of-technology-in-modern-drywall-estimating-ge1 | Technology plays a crucial role in modern drywall estimating by automating measurements, calculations, and data analysis, significantly reducing human error. Advanced software tools provide precise material estimates and allow for quick adjustments, enhancing project accuracy and efficiency. This technological integration streamlines the planning process, saves time, and helps manage costs effectively in construction projects. [DryWall Takeoff](https://remoteestimation.us/service/drywall-takeoff-and-estimating-service/) | madisson_harry_2594faffea | |
1,906,829 | Discover the Benefits of a Colorado Concealed Carry Class | Are you considering obtaining your concealed carry permit in Colorado? Enrolling in a Colorado... | 0 | 2024-06-30T18:23:26 | https://dev.to/ericryan3132/discover-the-benefits-of-a-colorado-concealed-carry-class-3bgc | Are you considering obtaining your concealed carry permit in Colorado? Enrolling in a Colorado concealed carry class is an essential step towards understanding firearm safety, legal responsibilities, and gaining the confidence to carry concealed. At Brighton Tactical, we offer comprehensive courses designed to equip you with the knowledge and skills necessary for responsible firearm ownership. Whether you're a beginner or looking to renew your permit, our classes cater to all levels of experience.
**Understanding Colorado Concealed Carry Laws**
Navigating Colorado's concealed carry laws can be complex without proper guidance. Our [colorado concealed carry class](https://www.brightontactical.com/) provides in-depth insights into state-specific regulations, ensuring you understand where and how you can legally carry a concealed firearm. From prohibited locations to self-defense scenarios, our instructors cover all aspects to help you stay informed and compliant.
**Hands-On Training and Safety Protocols**
Safety is paramount in firearm handling. Our classes emphasize practical training sessions that focus on safe firearm usage, storage, and handling techniques. Whether you're practicing at our state-of-the-art facility or participating in live-fire exercises, our certified instructors prioritize safety protocols to ensure a secure learning environment.
**Developing Proficiency and Confidence**
Confidence comes from competence. Our structured curriculum is designed to enhance your shooting proficiency and decision-making skills in various situations. Through interactive drills and simulations, you'll gain the confidence needed to handle firearms responsibly and effectively protect yourself and others when necessary.
**Expert Guidance and Personalized Instruction**
At Brighton Tactical, our experienced instructors are dedicated to your success. They provide personalized instruction tailored to your skill level, addressing any questions or concerns you may have along the way. Whether you're learning basic firearm mechanics or honing advanced marksmanship techniques, our team is committed to supporting your journey towards becoming a confident concealed carrier.
**Joining a Community of Responsible Gun Owners**
By enrolling in a Colorado concealed carry class at Brighton Tactical, you're not just learning skills—you're joining a community of like-minded individuals committed to responsible firearm ownership. Our classes foster camaraderie and mutual respect among participants, creating a supportive network that extends beyond the classroom.
**Take the First Step Today**
Ready to embark on your journey towards obtaining your Colorado concealed carry permit? Visit Brighton Tactical to learn more about our upcoming classes, schedules, and registration details. Whether you're interested in learning about firearms for personal protection or simply expanding your knowledge, our courses provide a solid foundation. Empower yourself with the knowledge and skills needed to responsibly carry concealed in Colorado.
Experience the difference with Brighton Tactical. Visit our website today and take the first step towards becoming a confident concealed carrier.
| ericryan3132 | |
1,906,825 | RAG Systems Simplified - IV | Welcome to the fourth installment of our series on Generative AI and Large Language Models (LLMs). In... | 0 | 2024-06-30T18:20:57 | https://dev.to/mahakfaheem/rag-systems-simplified-iv-1dbe | ai, aiops, learning, community | Welcome to the fourth installment of our series on Generative AI and Large Language Models (LLMs). In this blog, we will delve into Retrieval-Augmented Generation (RAG) methods, exploring why they are essential, how they work, when to choose RAG, the components of a RAG system, available frameworks, techniques, pipeline, and evaluation methods.
#### Understanding RAGs
Retrieval-Augmented Generation (RAG) is a method that enhances the capabilities of large language models (LLMs) by combining information retrieval techniques with generative text generation. In a RAG system, relevant information is first retrieved from an external knowledge base and then used to inform the text generation process. This approach ensures that the generated content is both contextually relevant and factually accurate, leveraging the strengths of both retrieval and generation.
#### Benefits of RAGs
Retrieval-Augmented Generation (RAG) enhances the capabilities of traditional text generation models by integrating information retrieval techniques. This approach is particularly beneficial for the following reasons:
- **`Enhanced Accuracy:`** Traditional LLMs, while powerful, often generate responses based solely on patterns learned during training. This can lead to inaccuracies, especially when dealing with specific or niche queries. RAG systems, however, incorporate real-time data retrieval, allowing them to pull in relevant and up-to-date information from external knowledge bases. This integration significantly boosts the accuracy of the generated responses.
- **`Grounded Information:`** One of the critical limitations of traditional LLMs is their propensity to generate plausible-sounding but factually incorrect information, a phenomenon known as "hallucination." RAG mitigates this by grounding responses in external, verified data sources. This grounding ensures that the information provided is not only contextually relevant but also factually accurate.
- **`Handling Rare Queries:`** LLMs are trained on vast datasets, but they can still struggle with rare or long-tail queries that are underrepresented in the training data. By retrieving information from specialized databases or documents, RAG systems can effectively handle such queries, providing detailed and accurate responses that would otherwise be difficult to generate.
#### Key Components of a RAG System
A typical RAG system consists of several key components, each playing a vital role in the overall functionality:
- **`Retriever:`** The retriever is responsible for fetching relevant documents or passages from a knowledge base. This component often employs advanced search algorithms and indexing techniques to efficiently locate the most relevant information. Techniques like dense retrieval using embeddings or traditional term-based methods like TF-IDF can be used, depending on the requirements.
- **`Ranker:`** Once the retriever identifies a set of potentially relevant documents, the ranker sorts and prioritizes these documents based on their relevance to the query. This ensures that the most useful and accurate information is utilized in the generation process.
- **`Generator:`** The generator uses the retrieved and ranked information to produce a coherent response. This component is typically a large language model fine-tuned to generate text based on provided context. The integration of retrieval results into the generation process ensures that the output is both contextually relevant and factually accurate.
- **`Knowledge Base:`** The knowledge base serves as the external source of information. This can range from structured databases to collections of documents, web pages, or even real-time search engine results. The quality and comprehensiveness of the knowledge base are critical for the effectiveness of the RAG system.
- **`Integration Layer:`** This component ensures seamless interaction between the retriever and the generator. It handles the contextualization and formatting of retrieved information, preparing it for the generative model. The integration layer plays a crucial role in maintaining the coherence and relevance of the final output.
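To make these roles concrete, here is a minimal, illustrative Python sketch of the processing components wired together. Everything here is an assumption for illustration only — the class names, the toy word-overlap scoring, and the stub generator are not taken from any specific framework:

```python
import re
from dataclasses import dataclass

# Hypothetical knowledge base: a small list of text passages.
KNOWLEDGE_BASE = [
    "The Eiffel Tower is located in Paris, France.",
    "Python is a popular programming language for machine learning.",
    "RAG combines retrieval with text generation.",
]

def tokenize(text):
    return set(re.findall(r"\w+", text.lower()))

@dataclass
class ScoredDoc:
    text: str
    score: int

class Retriever:
    """Fetches candidates by word overlap (a toy stand-in for a real search index)."""
    def retrieve(self, query, docs):
        q = tokenize(query)
        return [ScoredDoc(d, len(q & tokenize(d))) for d in docs]

class Ranker:
    """Orders candidates so the most relevant passages come first."""
    def rank(self, candidates, top_k=2):
        return sorted(candidates, key=lambda c: c.score, reverse=True)[:top_k]

class Generator:
    """Stub for the generator: a real system would call an LLM here."""
    def generate(self, query, context):
        return f"Q: {query} | Based on: {context[0]}"

retriever, ranker, generator = Retriever(), Ranker(), Generator()
query = "How does RAG generation work?"
candidates = retriever.retrieve(query, KNOWLEDGE_BASE)
top = ranker.rank(candidates)
answer = generator.generate(query, [d.text for d in top])
print(answer)
```

The integration layer here is simply the glue code at the bottom; in a production system it would also handle prompt formatting and truncation.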
#### Working
Understanding the mechanics of RAG systems requires breaking down the process into its core components and workflow:
- **`Retrieval Mechanism:`** At the heart of RAG is the retrieval mechanism. When a query is received, the system first identifies and retrieves relevant documents or passages from an external knowledge base. This could be a database, a search engine, or a collection of indexed documents. The retrieval process often involves sophisticated search algorithms that can handle both structured and unstructured data.
- **`Generation Process:`** Once the relevant information is retrieved, it is fed into a generative model. This model, typically a large language model such as GPT-3, uses the contextual information provided by the retrieved documents to generate a coherent and contextually accurate response. The key here is that the generation process is informed by the specific content retrieved, ensuring that the output is not only contextually appropriate but also factually grounded.
- **`Integration:`** The seamless integration of retrieval and generation is crucial for the effectiveness of a RAG system. This integration involves sophisticated algorithms that ensure the retrieved information is appropriately contextualized and formatted for the generative model. The result is a response that leverages the strengths of both retrieval and generation.

[_Image Source: Oracle Corporation. OCI Generative AI Professional Course._](https://mylearn.oracle.com/ou/course/oci-generative-ai-professional/136035)
#### Situations for Implementing RAG
RAG systems are not always the best choice for every application. Here are specific scenarios where implementing RAG can be particularly beneficial:
- **`Information-Heavy Applications:`** Applications that require precise and up-to-date information, such as customer support systems, technical documentation, and research assistance, can greatly benefit from RAG. By pulling in the latest data from trusted sources, these systems can provide accurate and relevant information quickly and efficiently.
- **`Complex Queries:`** When dealing with complex or uncommon queries that require specialized knowledge, RAG systems excel. The ability to retrieve and integrate specific information from external sources ensures that even the most intricate queries are handled with accuracy and depth.
- **`Content Creation:`** For tasks that involve generating well-researched and factual content, such as writing articles, reports, or summaries, RAG systems are invaluable. By integrating real-time data retrieval, these systems can produce content that is not only engaging but also thoroughly researched and factually correct.
#### Techniques for Effective RAG
Implementing a RAG system involves choosing the right techniques to ensure optimal performance. Here are some common techniques used in RAG systems:
- **`Dense Retrieval:`** Utilizes dense vector representations (embeddings) to retrieve relevant passages. Dense retrieval methods often involve training a model to map queries and documents into a shared vector space, where similarity can be measured using metrics like cosine similarity. This approach is highly effective for capturing semantic similarities and retrieving contextually relevant information.
- **`Sparse Retrieval:`** Traditional term-based retrieval methods, such as TF-IDF and BM25, rely on keyword matching to find relevant documents. While less sophisticated than dense retrieval, sparse retrieval can be highly efficient and effective for certain types of queries. Combining sparse and dense retrieval methods can often yield the best results.
- **`Hybrid Approaches:`** By combining dense and sparse retrieval techniques, hybrid approaches leverage the strengths of both methods. For instance, a hybrid system might use sparse retrieval to quickly narrow down a large corpus to a smaller set of relevant documents, followed by dense retrieval to refine the selection based on semantic similarity.
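The contrast between these retrieval styles can be sketched in a few lines of Python. Everything below is a toy illustration: the term-overlap score stands in for TF-IDF/BM25, and the hashed bag-of-words vector stands in for a trained embedding model:

```python
import math
import re
import zlib
from collections import Counter

def tokenize(text):
    return re.findall(r"\w+", text.lower())

def sparse_score(query, doc):
    """Term-overlap score: a toy stand-in for TF-IDF or BM25."""
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    return sum((q & d).values())

def embed(text, dim=64):
    """Toy 'embedding': a hashed bag-of-words vector. A real dense retriever
    would use a trained encoder mapping text into a shared vector space."""
    vec = [0.0] * dim
    for tok in tokenize(text):
        vec[zlib.crc32(tok.encode()) % dim] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query, docs, prefilter=2):
    """Hybrid approach: sparse retrieval narrows the corpus,
    then dense similarity reranks the shortlist."""
    shortlist = sorted(docs, key=lambda d: sparse_score(query, d), reverse=True)[:prefilter]
    qv = embed(query)
    return max(shortlist, key=lambda d: cosine(qv, embed(d)))

docs = [
    "The capital of France is Paris.",
    "Dense retrieval maps queries and documents into a shared vector space.",
    "BM25 is a classic sparse retrieval function.",
]
print(hybrid_search("How does dense retrieval use a vector space?", docs))
```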
#### Building a RAG Pipeline
Creating an effective RAG pipeline involves several steps, each contributing to the overall functionality and performance of the system:
- **`Query Processing:`** The input query is processed and transformed into a format suitable for retrieval. This step may involve tokenization, normalization, and embedding generation to ensure the query can be effectively matched against the knowledge base.
- **`Document Retrieval:`** The retriever fetches relevant documents or passages from the knowledge base. This step often involves searching through large volumes of data and selecting the most relevant pieces of information based on predefined criteria.
- **`Contextual Integration:`** The retrieved information is integrated and formatted for the generative model. This step ensures that the generative model receives a coherent and contextually appropriate input, facilitating the generation of accurate and relevant responses.
- **`Response Generation:`** The generator produces a response using the integrated context. This step leverages the generative capabilities of the language model to construct a fluent and contextually accurate response based on the retrieved information.
- **`Post-Processing:`** The generated response is refined and formatted for delivery. This step may involve additional processing to ensure the response meets specific quality and format requirements, such as removing redundancies, correcting grammatical errors, and ensuring coherence.
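Those five steps map naturally onto a small set of functions. The following Python sketch is illustrative only — `generate` is a stub where a real system would call an LLM, and the retrieval scoring is deliberately simplistic:

```python
import re

def tokenize(text):
    return set(re.findall(r"\w+", text.lower()))

def process_query(raw_query):
    """Step 1 - query processing: normalize the input (real systems may also embed it)."""
    return tokenize(raw_query)

def retrieve(query_tokens, knowledge_base, top_k=2):
    """Step 2 - document retrieval: fetch the passages sharing the most terms with the query."""
    return sorted(knowledge_base,
                  key=lambda p: len(query_tokens & tokenize(p)),
                  reverse=True)[:top_k]

def integrate(passages):
    """Step 3 - contextual integration: format retrieved passages for the generator."""
    return "\n".join(f"- {p}" for p in passages)

def generate(query, context):
    """Step 4 - response generation: stub standing in for a call to an LLM."""
    return f"Answer to '{query}', grounded in:\n{context}\n"

def post_process(response):
    """Step 5 - post-processing: final cleanup before delivery."""
    return response.strip()

def rag_pipeline(query, knowledge_base):
    tokens = process_query(query)
    passages = retrieve(tokens, knowledge_base)
    return post_process(generate(query, integrate(passages)))

kb = ["Paris is the capital of France.", "The Great Wall is in China."]
print(rag_pipeline("What is the capital of France?", kb))
```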
#### Evaluating RAG Systems
Evaluating the performance of a RAG system involves several key metrics and considerations:
- **`Relevance:`** Assessing how relevant the retrieved information is to the query. This metric evaluates the effectiveness of the retrieval component and its ability to find the most pertinent information.
- **`Accuracy:`** Measuring the factual accuracy of the generated responses. Ensuring that the information provided is correct and reliable is crucial for the credibility of the RAG system.
- **`Fluency:`** Evaluating the linguistic quality and coherence of the responses. This metric assesses the generative model's ability to produce fluent, natural-sounding text that reads well and makes sense.
- **`Efficiency:`** Considering the computational efficiency and response time of the system. A RAG system must balance performance with resource consumption, ensuring that it can deliver accurate and relevant responses in a timely manner.
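For the relevance dimension in particular, standard information-retrieval metrics such as recall@k and precision@k are straightforward to compute, whereas accuracy and fluency usually require human or model-based judgments. A minimal sketch (function names and the example data are illustrative):

```python
def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant documents that appear in the top-k results."""
    if not relevant:
        return 0.0
    return len(set(retrieved[:k]) & set(relevant)) / len(relevant)

def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k results that are relevant."""
    return sum(1 for doc in retrieved[:k] if doc in relevant) / k

retrieved = ["d1", "d3", "d2", "d5"]   # ranked output of the retriever
relevant = {"d1", "d2"}                # ground-truth relevant documents
print(recall_at_k(retrieved, relevant, 2))    # 0.5: only d1 of the two relevant docs is in the top 2
print(precision_at_k(retrieved, relevant, 4)) # 0.5: 2 of the top 4 results are relevant
```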
### Conclusion
Retrieval-Augmented Generation (RAG) systems represent a significant advancement in the field of text generation, offering enhanced accuracy, relevance, and contextual grounding. By understanding the why, how, and when of RAG, and by exploring its components, frameworks, techniques, and evaluation methods, we can effectively harness the power of RAG for various applications.
Stay tuned for the next installment in this series, where we'll dive into the security aspects of LLMs and explore how to protect and secure AI models and their outputs.
Thank you!
| mahakfaheem |
1,906,824 | Change Default Colors in FilamentPHP | Set Colors for a single Resources Using filament we are able to define the colors for all... | 0 | 2024-06-30T18:18:10 | https://dev.to/arielmejiadev/change-default-colors-in-filamentphp-10p2 | php, laravel, filament, tailwindcss | ## Set Colors for a Single Panel
Using Filament, we can define the colors for all the resources in the `AppPanelProvider.php` class. This class is typically created by default by the Filament project installation command, and many features can be configured in it. In this case, we are going to set the colors for all resources:
```php
public function panel(Panel $panel): Panel
{
return $panel
->default()
->id('app')
->path('/')
->login()
->colors([
'primary' => Color::Slate,
'gray' => Color::Gray
]);
}
```
Now all the generated resources will use these colors. Keep in mind that the color values are handled by a `Color` class provided by Filament in the `Filament\Support\Colors` namespace, which provides an identifier for every color in the TailwindCSS color palette.
## Set colors in a custom action
You can define custom actions using vanilla Livewire components; in these cases you need to explicitly define the color by chaining the `color()` method:
```php
->color(Color::Slate)
```
## Set Colors Globally
From a service provider's `boot()` method you can define the colors for your app globally:
```php
public function boot(): void
{
FilamentColor::register([
'danger' => Color::Red,
'gray' => Color::Zinc,
'info' => Color::Blue,
'primary' => Color::Indigo,
'success' => Color::Green,
'warning' => Color::Amber,
]);
}
```
In this example, the script replaces `Amber` with `Indigo` as the primary color. All of the Tailwind color palette is available through the `Color` class provided by Filament.
You can add more color options and even define custom colors from hex codes. To customize further, you can follow the docs for this section [here](https://filamentphp.com/docs/3.x/support/colors#registering-extra-colors)
Thanks for reading!
| arielmejiadev |
1,906,022 | GitHub Repositories Every Software Engineer Should Know | Finally, after a long time, I am realizing my desire to write articles to help other software... | 0 | 2024-06-30T18:17:18 | https://dev.to/jrmarcio_/github-repositories-every-software-engineer-should-know-2e80 | programming, softwareengineering, algorithms, systemdesign | Finally, after a long time, I am realizing my desire to write articles to help other software engineers advance their careers. With this, I intend to help them improve their knowledge while allowing myself to learn and grow during the process.
In my first article, I present to you a compilation of interesting repositories for all software engineers who seek to stay updated and improve their skills whenever possible, regardless of their level or position.
Let's get straight to it, organized by categories:
- RoadMaps
- Books, Blogs, and Websites
- Algorithms
- Design Patterns
- System Design
- Design Resources
- Projects, Tutorials, and APIs
- Interviews
## RoadMaps
In the RoadMaps category, we have two repositories that provide a pathway to follow when you are looking to learn about a language or tool, giving you a direction on the basic knowledge you should acquire or already have.
{% embed https://github.com/kamranahmedse/developer-roadmap %}{% embed https://github.com/liuchong/awesome-roadmaps %}
## Books, Blogs, and Websites
After understanding the path to follow through the RoadMap, you should delve into documentation, books, blogs, and websites. For this, we have several repositories with various books, blogs, and important sites for you to structure your knowledge base solidly.
{% embed https://github.com/EbookFoundation/free-programming-books %}{% embed https://github.com/kilimchoi/engineering-blogs %}{% embed https://github.com/sdmg15/Best-websites-a-programmer-should-visit %}{% embed https://github.com/freeCodeCamp/freeCodeCamp %}
## Algorithms
With a well-formed knowledge base, you can visit the repositories below and deepen your knowledge in algorithms, checking implementations of various algorithms in different programming languages so you always know the best approach to take when faced with a problem.
{% embed https://github.com/TheAlgorithms %}{% embed https://github.com/arpit20adlakha/Data-Structure-Algorithms-LLD-HLD %}{% embed https://github.com/tayllan/awesome-algorithms %}
## Design Patterns
Through design patterns repositories, you can deepen your knowledge in patterns used in service and project implementations, understanding how they work and how you can implement them.
{% embed https://github.com/kamranahmedse/design-patterns-for-humans %}{% embed https://github.com/DovAmir/awesome-design-patterns %}
## System Design
With the System Design repositories, you can deepen your understanding of building your applications, considering scalability, performance, data storage methods, gaining knowledge to contribute to the technical definition of the application, and always developing a quality project.
{% embed https://github.com/ByteByteGoHq/system-design-101 %}{% embed https://github.com/donnemartin/system-design-primer %}{% embed https://github.com/InterviewReady/system-design-resources %}{% embed https://github.com/karanpratapsingh/system-design %}
## Design Resources
With the repositories below, you can access various design resources such as style guides, web templates, CSS frameworks, and create the best designs and design patterns for your projects.
{% embed https://github.com/goabstract/Awesome-Design-Tools %}{% embed https://github.com/bradtraversy/design-resources-for-developers %}
## Projects, Tutorials, and APIs
To get hands-on and create your projects, the repositories below bring you ideas, already implemented projects, and provide public APIs giving you resources and tools to practice everything you have learned and solidify the acquired knowledge.
{% embed https://github.com/florinpop17/app-ideas %}{% embed https://github.com/practical-tutorials/project-based-learning %}{% embed https://github.com/public-apis/public-apis %}
## Interviews
Finally, in the repositories below, after all the preparation and project implementation, we have various tools and documents to help you improve your interview preparation and perform them in the best possible way, advancing in your career and contributing to others.
{% embed https://github.com/kdn251/interviews %}{% embed https://github.com/yangshun/tech-interview-handbook %}{% embed https://github.com/DopplerHQ/awesome-interview-questions %}
## Conclusion
That's it, folks. Feel free to comment, suggest other repositories, and follow me for the upcoming articles.
I hope you have enjoyed this post and learned something new.
Thanks ❤️
Linkedin: https://www.linkedin.com/in/marcio-mendes/
Github: https://github.com/marciojr
| jrmarcio_ |
1,906,802 | #4 Interface Segregation Principle ['I' in SOLID] | ISP - Interface Segregation Principle The Interface Segregation Principle is the fourth principle in... | 0 | 2024-06-30T18:15:55 | https://dev.to/vinaykumar0339/4-interface-segregation-principle-i-in-solid-3g97 | interfacesegregation, solidprinciples, designprinciples | **ISP - Interface Segregation Principle**
The Interface Segregation Principle is the fourth of the SOLID design principles.
1. Clients should not be forced to depend on interfaces they do not use.
**Violating ISP:**
```swift
protocol Worker {
func eat()
func work()
}
class HumanWorker: Worker {
func eat() {
print("Human is eating...")
}
func work() {
print("Human is working...")
}
}
class RobotWorker: Worker {
func work() {
print("Robot is working...")
}
func eat() {
fatalError("Robot can't eat...")
}
}
// usage
let robot = RobotWorker()
robot.eat() // This will cause a runtime error
```
**Issues with Violating ISP:**
1. Unnecessary Methods:
* Classes are forced to implement methods they do not use, leading to potential runtime errors.
2. Increased Complexity:
* Interfaces become bloated with methods irrelevant to some implementations.
3. Reduced Maintainability:
* Changes to interfaces affect all implementing classes, even if they do not use the changed methods.
**Adhering to ISP**
To adhere to ISP, separate the protocol into small pieces:
```swift
protocol Workable {
func work()
}
protocol Eatable {
func eat()
}
protocol WorkerISP: Workable, Eatable {}
class HumanWorkerISP: WorkerISP {
func eat() {
print("HumanWorkerISP is eating...")
}
func work() {
print("HumanWorkerISP is working...")
}
}
class RobotWorkerISP: Workable {
func work() {
        print("RobotWorkerISP is working...")
}
}
// usage
let humanWorker = HumanWorkerISP()
humanWorker.eat()
humanWorker.work()
let robotWorker = RobotWorkerISP()
robotWorker.work()
```
**Benefits of Adhering to ISP:**
1. Improved Maintainability:
* Smaller, specific interfaces are easier to understand and maintain.
2. Enhanced Flexibility:
* Changes to an interface do not affect clients that do not use that part of the interface.
3. Increased Reusability:
* Specific interfaces can be reused more easily across different parts of the application.
**Drawbacks of Adhering to ISP:**
1. More Interfaces:
* Can lead to an increase in the number of interfaces, making the codebase larger.
2. Initial Complexity:
* Designing multiple specific interfaces may require more detailed planning and initial effort.
**Mitigating Drawbacks:**
1. Balanced Approach:
* Apply ISP judiciously, balancing between simplicity and the need for specificity.
2. Clear Documentation:
* Maintain clear and concise documentation to help developers understand the purpose of each interface.
3. Use of Design Patterns:
* Employ design patterns that naturally adhere to ISP, such as the Adapter pattern, to manage complexity.
**Conclusion:**
By thoughtfully understanding and applying the Interface Segregation Principle, you can create more maintainable, understandable, and flexible software. Ensuring that clients only depend on the interfaces they use promotes better software design and enhances the overall quality of the codebase.
[Liskov Substitution Principle](https://dev.to/vinaykumar0339/3-liskov-substitution-principle-l-in-solid-1jo2)
[Dependency Inversion Principle](https://dev.to/vinaykumar0339/5-dependency-inversion-principle-d-in-solid-1ip2)
[Check My GitHub Swift Playground Repo.](https://github.com/vinaykumar0339/SolidDesignPrinciples) | vinaykumar0339 |
1,906,823 | Creating a Virtual Environment | Great! You can follow the steps below to create and use a virtual environment. To create the virtual... | 0 | 2024-06-30T18:14:51 | https://dev.to/mustafacam/sanal-ortam-olusturma-3i8o | Great! You can follow the steps below to create and use a virtual environment:
1. **Create the virtual environment:**
```bash
python -m venv myenv
```
2. **Activate the virtual environment:**
- **Windows**:
```bash
myenv\Scripts\activate
```
- **macOS/Linux**:
```bash
source myenv/bin/activate
```
Once the virtual environment has been activated successfully, you should see an indicator like `(myenv)` in your command prompt.
3. **Install the required packages:**
```bash
   pip install "required packages"
```
   This command installs the required packages into your virtual environment.
4. **Use the packages:**
   While the virtual environment is active, any Python commands or scripts you run will use the packages installed in this environment. For example:
```bash
python my_script.py
```
5. **Deactivate the virtual environment:**
   To exit the virtual environment and return to your global Python environment:
```bash
deactivate
```
By following these steps, you can install and use your packages smoothly in a virtual environment. Virtual environments are ideal for working independently of the global Python installation when package incompatibilities arise; that is, any new package you install must be compatible with the Python version used by that environment. | mustafacam |
1,906,814 | Load Balancers in AWS | Load balancers are servers that forward traffic to multiple servers downstream. They are crucial for... | 0 | 2024-06-30T18:12:23 | https://dev.to/vivekalhat/load-balancers-in-aws-5gbb | aws, cloud | Load balancers are servers that forward traffic to multiple servers downstream. They are crucial for distributing incoming traffic across different servers such as EC2 instances, in multiple Availability Zones. This increases high availability of your application. A load balancer ensures that no single server bears too much load, thus enhancing the performance and reliability of your application.
AWS Elastic Load Balancer (ELB) is a managed load balancer. It is integrated with many AWS services including EC2, ECS, Route53, and CloudWatch. While it might be costlier than setting up your own load balancer, the time and effort saved in managing and configuring ELB make it a preferred choice for many.
AWS Elastic Load Balancer has the following types of managed load balancers:
1. Classic Load Balancer (old generation)
2. Application Load Balancer
3. Network Load Balancer
4. Gateway Load Balancer
Classic Load Balancer (CLB) comes under the old generation in AWS. It is recommended to use the newer generation of load balancers as they provide more features.
Following are some of the key features of load balancers:
- They distribute traffic across multiple downstream instances, ensuring efficient handling of requests.
- They provide a single DNS point of access for your application.
- They can seamlessly manage failures in downstream instances.
- They perform regular health checks on your instances to ensure only healthy instances receive traffic.
- They can operate across multiple AZs for high availability.
- They can segment public and private traffic.
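Conceptually, the core job listed above — distributing requests across healthy targets — can be sketched in a few lines of Python. This is a deliberately simplified round-robin model; real ELBs use far more sophisticated routing and health-check machinery:

```python
import itertools

class ToyLoadBalancer:
    """Round-robin balancer that skips unhealthy targets,
    loosely mimicking ELB health checks."""
    def __init__(self, targets):
        self.targets = targets             # e.g. EC2 instance IDs
        self.healthy = set(targets)
        self._cycle = itertools.cycle(targets)

    def mark_unhealthy(self, target):
        """A failed health check removes the target from rotation."""
        self.healthy.discard(target)

    def route(self, request):
        for _ in range(len(self.targets)):
            target = next(self._cycle)
            if target in self.healthy:
                return f"{request} -> {target}"
        raise RuntimeError("no healthy targets available")

lb = ToyLoadBalancer(["i-0aaa", "i-0bbb", "i-0ccc"])
print(lb.route("GET /"))    # i-0aaa handles the first request
lb.mark_unhealthy("i-0bbb")
print(lb.route("GET /"))    # i-0bbb is skipped; i-0ccc handles the request
```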
In this article, we will explore Application Load Balancer (ALB) in detail.
### Application Load Balancer (ALB)
An Application Load Balancer (ALB) operates at the application layer (Layer 7) of the OSI model, making it ideal for HTTP/HTTPS traffic. It supports advanced routing mechanisms such as:
- Path based routing
- Hostname based routing
- Query string or header based routing
ALB can route traffic to multiple target groups, including EC2 instances, ECS tasks, Lambda functions, or private IP addresses. It supports modern protocols such as HTTP, HTTPS, HTTP/2, and WebSocket. ALB provides a single DNS name that clients can use to access your application, thus simplifying DNS management.
Let's create a simple Application Load Balancer that distributes traffic between two EC2 instances. To follow the steps below, you need at least two EC2 instances. You can refer to [this](https://dev.to/vivekalhat/beginners-guide-to-aws-ec2-2adk) article to learn about creating a new EC2 instance.
### Steps for creating a new Application Load Balancer
- On EC2 homepage, select `Load Balancers` option in the menu and click on `Create load balancer` option.

- Select `Application Load Balancer` as a type and click create.

- Give the load balancer a name, select `Internet-facing` as the scheme, and `IPv4` as the IP address type.

- Select the Availability Zone mapping in which the load balancer will route traffic.

- Select or create a new security group for the load balancer. You can add inbound rules specific to your use case to allow traffic to the EC2 instances.

- Create a new target group to which the load balancer will route the incoming traffic. In this example, we will be routing the traffic via the load balancer to two EC2 instances.

- On `Create target group` page, select `Instances` as a target type and add a name for the target group. You can keep other settings as default.


- On the register targets page, select the EC2 instances to which the load balancer will route the incoming traffic and click on the `Include as pending below` option. After registering the targets, click on the `Create target group` option.

- Select the newly created target group on the load balancer configuration page under the `Listeners and routing` section.

- Click on the `Create load balancer` option to create a new load balancer that routes traffic to the selected target group based on the configuration.
After creating a new load balancer, any incoming traffic to the EC2 instances will be handled by the rules defined in the load balancer. You can also specify custom rules inside the load balancer. Let's create a new custom rule to handle an error route in the application.
### Creating a custom rule in Load Balancer
- Select the load balancer and under `Listeners and rules` section, select the default `HTTP:80` listener.

- Click on `Add rule` option and add a name to the custom rule.

- In `Define rule conditions`, add a path based condition to match `/error` path.

- Under `Define rule actions`, select the `Return fixed response` option and add a response body to be displayed when the `/error` route is accessed.

- Set the rule priority to `1`, click next, and create the custom rule.
After creating a custom rule, if you access the `/error` path on the Load Balancer's DNS address, you will see the custom error response body as configured.
In this way, you can create a load balancer and custom rules using AWS Elastic Load Balancer. You can refer to the official user [guide](https://aws.amazon.com/elasticloadbalancing/) to learn more about load balancing in AWS. | vivekalhat |
1,906,822 | React.js or Angular? | In the ever-evolving world of technology, several technologies have emerged to streamline software... | 0 | 2024-06-30T18:12:22 | https://dev.to/nickndolo/reactjs-or-angular-1il6 | In the ever-evolving world of technology, several technologies have emerged to streamline the software development process. These technologies are essential tools for developers as they provide a predefined structure for building dynamic web pages and mobile apps.
In this article, we are going to dive directly into frontend development and compare the two giants of web development. But what is a framework? In web development, a framework is a term used to describe tools and libraries that simplify the process of building websites or web applications. A framework provides a structured way to develop by offering pre-written code, templates, and best practices.
Frontend frameworks focus on the user interfaces and client-side scripting.
At the present time, there are several frameworks available in the market, each with its own set of unique features and strengths. As a developer, the choice of one over the rest comes down to the specific requirements of the project at hand and personal preferences. Then, you might be tempted to ask, "And what should be my go-to framework?" Get ready, because I am going to take you on a tour of the two major frameworks that are currently in demand.
## **React vs Angular**
It is time to get to know the rivalry between React and Angular. Which one is worth investing time in learning? It is worth noting that these two are the most popular frontend frameworks, and each offers unique advantages, making them very powerful tools in web development.
## **Reactjs**
React, commonly referred to as the Library for Building UIs, is a popular JavaScript library for building fast and interactive user interfaces. It was developed by Facebook and open-sourced in 2013. It uses a component-based architecture and emphasizes declarative programming, making it easier to manage complex UIs.
React is unopinionated, meaning that it doesn't enforce any rules; rather, it lets you invent your own, thus giving developers freedom and flexibility. Just like every other technology, React has its own pros and cons.
**Pros of using React**
1. **Reusable components**: One of the major benefits of using React is its potential to reuse components. It saves time for developers as they don't have to write separate code for the same feature.
2. **Virtual DOM**: React uses a virtual DOM to efficiently update the user interface, enhancing performance.
3. **Flexibility**: As a library, React allows integration with various other libraries and frameworks, giving developers more control over their tech stack.
4. **Easy to learn**: React, compared to other popular frontend frameworks like Angular and Vue, is much easier to learn.
5. **Performance**: React was designed with high performance in mind. The core of the library offers a virtual DOM and server-side rendering, which makes complex apps run extremely fast.
**Cons of using React**
1. **JSX syntax as a barrier**: React uses JSX, a syntax extension that allows HTML to be mixed with JavaScript. Although it has advantages, inexperienced developers lament the difficulty of the learning curve.
2. **State management complexity**: As the application grows, state management can become complex, hence requiring additional libraries such as Redux.
3. **UI layer only**: React only covers the UI layer of the app, so there is always a need to choose other technologies for the rest of the development.
## **Angular**
If you need structure and like best practices to be enforced, Angular might be the best fit for you. This is because, unlike React, Angular plays by the rules.
Angular was created at Google in 2009 and first released in 2010 as AngularJS; the modern TypeScript-based Angular arrived in 2016. It particularly stands out in enterprise-level setups, especially when paired with TypeScript.
Here are some advantages and disadvantages of using angular.
**Pros**
1. **Custom and reusable components**: Just like React, Angular allows developers to create their own components. These components can be reused, combined, and nested, providing a construction kit for building the application.
2. **Complete framework**: Angular provides a full suite of tools out of the box, reducing the need for additional libraries.
3. **Use of TypeScript**: TypeScript improves code quality by enforcing type checks and structure, leading to more robust code.
4. **Dependency injection**: Angular's dependency injection system facilitates modular development and testing.
**Cons**
1. **High pace of development**: The framework's fast release cadence results in continuous environmental changes, making it hard to adapt to all of them; developers' skills must continually be updated along with the changes.
2. **Steep learning curve**: Angular's extensive feature set can be daunting for new developers, making it harder to learn.
3. **Complexity**: The framework can be complex and verbose, especially for smaller projects.
**Conclusion**
To conclude, both React and Angular have their own strengths and weaknesses. The optimal choice hinges on your project's specific requirements. If you prioritize structure, scalability, and maintainability for complex applications, Angular might be the way to go. If flexibility, rapid UI development, and a vibrant community are your top concerns, React could be the better fit.
Are you looking for an internship to put your skills into practice?
Check out https://hng.tech/internship, https://hng.tech/hire, or https://hng.tech/premium
Happy hacking!
| nickndolo | |
1,906,821 | Handling Concurrent Access to Shared Resources in Golang | Golang's concurrency model with goroutines is powerful but can lead to race conditions when accessing... | 0 | 2024-06-30T18:06:47 | https://dev.to/adeyinka_boluwatife_66b0e/handling-concurrent-access-to-shared-resources-in-golang-2h4l | Golang's concurrency model with goroutines is powerful but can lead to race conditions when accessing shared resources. Here's a brief guide on handling these issues effectively.
**Problem**
Race conditions occur when multiple goroutines access shared resources concurrently, causing unpredictable behavior.
**Solutions**
- **Mutexes:** Mutexes ensure only one goroutine can access a shared resource at a time.
Example:
```
package main
import (
"fmt"
"sync"
)
var (
counter int
mutex sync.Mutex
)
func increment(wg *sync.WaitGroup) {
defer wg.Done()
for i := 0; i < 1000; i++ {
mutex.Lock()
counter++
mutex.Unlock()
}
}
func main() {
var wg sync.WaitGroup
for i := 0; i < 10; i++ {
wg.Add(1)
go increment(&wg)
}
wg.Wait()
fmt.Println("Final counter value:", counter)
}
```
- **Channels:**
Channels can synchronize goroutines by controlling access to shared resources.
Example:
```
package main
import (
"fmt"
"sync"
)
func increment(counter chan int, wg *sync.WaitGroup) {
defer wg.Done()
for i := 0; i < 1000; i++ {
c := <-counter
c++
counter <- c
}
}
func main() {
var wg sync.WaitGroup
counter := make(chan int, 1)
counter <- 0
for i := 0; i < 10; i++ {
wg.Add(1)
go increment(counter, &wg)
}
wg.Wait()
fmt.Println("Final counter value:", <-counter)
}
```
- **Atomic Operations:**
The sync/atomic package provides atomic memory primitives for safe concurrent access.
Example:
```
package main
import (
"fmt"
"sync"
"sync/atomic"
)
var counter int64
func increment(wg *sync.WaitGroup) {
defer wg.Done()
for i := 0; i < 1000; i++ {
atomic.AddInt64(&counter, 1)
}
}
func main() {
var wg sync.WaitGroup
for i := 0; i < 10; i++ {
wg.Add(1)
go increment(&wg)
}
wg.Wait()
fmt.Println("Final counter value:", counter)
}
```
**Conclusion**
Prevent race conditions in Golang by using mutexes, channels, or atomic operations to synchronize access to shared resources. Choose the method that best fits your use case. HNG helped me groom my skills as a problem solver, and I would encourage anyone reading this to look into the internship and let it groom your skills as a developer: [HNG Internship](https://hng.tech/internship) and [premium subscription](https://hng.tech/premium). Click the links to learn more. | adeyinka_boluwatife_66b0e |
1,906,819 | [Python] Tool Hacking Plus | Herramienta para uso diario, scanea cualquier host obteniendo datos del sistema. Por ahora está en... | 0 | 2024-06-30T18:02:49 | https://dev.to/jkdevarg/python-tool-hacking-plus-11e | python, hacking, opensource, security | A tool for daily use that scans any host, gathering system information.
It is currently in a pre-alpha version, so more utilities are still missing and will be added later.
```
████████╗██╗ ██╗██████╗
╚══██╔══╝██║ ██║██╔══██╗
██║ ███████║██████╔╝
██║ ██╔══██║██╔═══╝
██║ ██║ ██║██║
╚═╝ ╚═╝ ╚═╝╚═╝
usage: thp [-h] [-a] [-p PORT] [-t HOST] [-v] [-o OUTPUT]
THP CLI tool for network analysis
options:
-h, --help show this help message and exit
-a, --about Show help information
-p PORT, --port PORT Port number
-t HOST, --host HOST Host to analyze
-v, --verbose Enable verbose output
-o OUTPUT, --output OUTPUT
Output file name
```
Repository:
https://github.com/JkDevArg/thp_python | jkdevarg |
1,906,000 | Immutable Object in C# | Immutable Object in C# -... | 0 | 2024-06-30T18:00:29 | https://dev.to/ipazooki/immutable-object-in-c-o0g | csharp, dotnet, tutorial, programming | {% embed https://youtu.be/Pg-wMNDqYok?si=FhE0lXlu85ly8H1s %}
👋 Welcome to the channel everyone! Today we are diving into the fascinating world of immutability in C#. It may sound a bit fancy, but it's simply a way of creating objects that cannot be changed after they are created. Think of them like historical documents - once written, they stay that way. We'll cover immutable objects, immutable collections, and finally, records. So without further ado, let's get started!
## Immutable Objects and Collections
In certain situations, you may require an object or collection that remains consistent throughout its lifecycle and cannot be altered after creation. This immutability ensures predictability and safety, as they can't be changed. Let's take a look at how they are implemented in code.
### Immutable Objects
Imagine a class representing a person. Traditionally, you can change a person's name, but let's see how to make it immutable.
One solution is to initialize a field in the constructor and return that field. In this case, the property is read-only because there is no setter for that property, but you could have a method to change the person's name internally, which is not ideal for immutability.
```csharp
public class Person
{
private string _name;
public Person(string name)
{
        _name = name;
}
public string Name => _name;
public void ChangeName(string newName)
{
_name = newName;
}
}
```
#### Using Read-Only Keyword
By using `readonly`, we can initialize the field only once, ensuring that it can't be changed afterwards. However, this approach may involve a lot of code. Is there a simpler solution? The answer is "yes."
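A minimal sketch of that approach, reusing the `Person` example from above:

```csharp
public class Person
{
    private readonly string _name;

    public Person(string name)
    {
        _name = name; // a readonly field may only be assigned here
    }

    public string Name => _name;

    // This would now be a compile-time error, so immutability is enforced:
    // public void ChangeName(string newName) { _name = newName; }
}
```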
#### Using Init Keyword
We can use the `init` keyword to initialize the property only once. This syntax is cleaner and more concise. Here’s a simple example:
```csharp
public class Person(string name)
{
public string Name { get; init; } = name;
}
```
### Records
Records, introduced in C# 9, are a special kind of class specifically designed for holding data. They come with immutability built right in, so you don’t have to worry about accidentally changing them. Plus, records automatically generate boilerplate code for things like equality checks, making your life a whole lot easier. They are perfect for scenarios like DTOs or configuration settings because they’re not supposed to change.
Here's how to create a record class for a person:
```csharp
public record Person(string Name);
```
As you can see, it is much slimmer than the traditional class with the same properties. The only difference is that a record is immutable by default.
By using a record, a lot of boilerplate code is generated for us. For instance, it inherits from the `IEquatable` interface, and we have a backing field for the `Name` property along with additional code like `Clone`.
### Immutable Collections
Now, let's discuss immutable collections. Suppose we have an immutable collection of a person represented by a record. On the other hand, we have a mutable collection of a person that can be changed. How can we make it immutable?
#### Using `IEnumerable`
One approach is to convert it to an `IEnumerable`, which does not have any add or edit methods, making it suitable for scenarios where only **one** iteration is needed. However, what can we do when we need to iterate multiple times or access items by index? In such cases, we can use `IReadOnlyList`.
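A small sketch of the `IEnumerable` approach and its limitation (this assumes `using System.Linq;` and the `Person` record from above):

```csharp
var people = new List<Person> { new Person("John"), new Person("Jane") };
IEnumerable<Person> view = people;

// The view exposes no Add or Remove methods, but the
// underlying list can still change behind it:
people.Add(new Person("Jake"));
Console.WriteLine(view.Count()); // 3: the "read-only" view sees the new item
```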
#### Using `ImmutableList`
`IReadOnlyList` is perfect when you need a read-only view of a collection, but it does not guarantee immutability: the underlying list can still change! To make the collection truly immutable, we should use `ImmutableList`. We can use the `ToImmutableList` method to make the collection immutable. For example:
```csharp
var list = new List<Person> { new Person("John"), new Person("Jane") };
var immutableList = list.ToImmutableList();
```
Even though an `Add` method is available, it doesn't modify the existing list but creates a new one. Because the collection is immutable, `Add` works much like the `with` keyword on a record and returns a new instance of the collection.
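For example (this sketch assumes `using System.Collections.Immutable;` and the `Person` record from above):

```csharp
var original = ImmutableList.Create(new Person("John"));
var extended = original.Add(new Person("Jane"));

Console.WriteLine(original.Count); // 1: the original list is unchanged
Console.WriteLine(extended.Count); // 2: Add returned a new list
```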
## Summary
That's the basic idea of immutable objects, records, and collections in C#. They might seem a bit different at first, but they offer a powerful way to write cleaner, safer, and more predictable code.
## Join the Conversation!
I'd love to hear your thoughts! Have you used immutability in your projects? Share your experiences and opinions in the comments below. | ipazooki |
1,906,817 | Overcoming Execution Policy Restrictions in PowerShell: My Journey with the HNG Internship | I decided to embark on django backend development through HNG intership. setting up my virtual... | 0 | 2024-06-30T18:00:13 | https://dev.to/ogunsolu007/overcoming-execution-policy-restrictions-in-powershell-my-journey-with-the-hng-internship-4j9f | backend, hng, django | I decided to embark on Django backend development through the HNG internship. Setting up my virtual environment was the first starting point, where I encountered issues with PowerShell, and I would like to share how I was able to find a solution to the issue. Below is the error message encountered.
```
PS C:\Users\qwerty\documents\learn-django> .\myvirenv\Scripts\Activate
.\myvirenv\Scripts\Activate : File C:\Users\qwerty\documents\learn-django\myvirenv\Scripts\Activate.ps1 cannot be
loaded because running scripts is disabled on this system. For more information, see about_Execution_Policies at
https:/go.microsoft.com/fwlink/?LinkID=135170.
At line:1 char:1
+ .\myvirenv\Scripts\Activate
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : SecurityError: (:) [], PSSecurityException
+ FullyQualifiedErrorId : UnauthorizedAccess
PS C:\Users\qwerty\documents\learn-django>
```
* First, I tried to understand the problem. The error message indicated that running scripts was disabled on my system due to PowerShell's execution policy settings. PowerShell's execution policy is a safety feature that controls the conditions under which PowerShell loads configuration files and runs scripts. By default, it is set to restrict the execution of scripts to prevent the running of potentially harmful scripts.
* I opened PowerShell as an administrator to confirm the current status of the execution policy with the following command:

```
Get-ExecutionPolicy
```
* It returned `Restricted`, which confirms that no scripts were allowed to run.
* To allow the activation script to run, I needed to change the execution policy. There are several options available, but I opted for the `RemoteSigned` policy, which allows scripts created on the local computer to run but requires that scripts downloaded from the internet be signed by a trusted publisher. Below is the command for this:

```
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
```
After these steps, I ran my virtual environment activation again, and it was successful.
Technical challenges are an inevitable part of a developer's journey. Here's to a successful journey with the HNG Internship, where I look forward to tackling more challenges, learning new skills, and growing as a backend developer! HNG provides a platform for growth and excellence. Follow these links for more: [Internship](https://hng.tech/internship) [hire](https://hng.tech/hire)
| ogunsolu007 |
1,906,816 | Intro to solidity | The block header includes several pieces of data: Previous Block Hash: The hash of the previous... | 0 | 2024-06-30T17:59:33 | https://dev.to/arsh_the_coder/intro-to-solidity-2dbi | The block header includes several pieces of data:
- Previous Block Hash: The hash of the previous block in the chain.
- Merkle Root: A hash representing all the transactions in the block.
- Timestamp: The current time when the block is mined.
- Difficulty Target: A value that determines the difficulty of the cryptographic puzzle.
- Nonce: A variable number that miners change to find a valid hash.
Nonce - number only used once
This is a number that miners vary until the hash of the current block meets the blockchain's current difficulty threshold.
A miner is a participant in a blockchain network who uses computational power to solve complex cryptographic puzzles. In return for their efforts, miners are rewarded with newly minted cryptocurrency coins (e.g., bitcoins) and transaction fees.
Since the blockchain is decentralized, every miner has enough data available to generate the hash, but it must be done by finding the right nonce.
Miners change the nonce value and hash the block header repeatedly.
Each hash is compared to the difficulty target.
**Process in a nutshell**
A miner collects transactions and forms a block.
They create a block header with all necessary information, including a nonce initially set to zero.
The miner hashes the block header.
If the resulting hash is below the difficulty target, the block is valid, and the miner broadcasts it to the network.
If the hash is not below the target, the miner increments the nonce and tries again.
This process repeats until a valid hash is found.
**Nodes**
They are just computers actively working on the blockchain network.
Types of nodes (ChatGPT)
**1. Full Nodes**
Full nodes store the entire blockchain history and validate all transactions and blocks according to the network's consensus rules.
Characteristics:
- Complete Ledger: Keeps a copy of the entire blockchain.
- Validation: Verifies and validates all transactions and blocks.
- Network Backbone: Provides data to other nodes and helps maintain the network's integrity.
Examples: Bitcoin Core in the Bitcoin network; Geth (Go Ethereum) in the Ethereum network.
**2. Light Nodes (Lightweight Nodes)**
Light nodes, also known as lightweight or simplified payment verification (SPV) nodes, store only a portion of the blockchain and rely on full nodes for transaction and block validation.
Characteristics:
- Partial Ledger: Stores only block headers or relevant parts of the blockchain.
- Dependence on Full Nodes: Requests data from full nodes to verify transactions.
- Resource Efficiency: Requires less storage and computational power.
Examples: Mobile wallets or light clients for Bitcoin and Ethereum.
**3. Mining Nodes**
Mining nodes participate in the mining process by solving cryptographic puzzles to create new blocks and add them to the blockchain.
Characteristics:
- Proof of Work (PoW): Commonly found in PoW networks like Bitcoin.
- High Computational Power: Uses significant computational resources to solve puzzles.
- Block Creation: Competes to find the correct nonce and create new blocks.
Examples: ASIC miners for Bitcoin; GPU miners for Ethereum (prior to Ethereum 2.0).
**4. Staking Nodes (Validators)**
Staking nodes, or validators, are used in Proof of Stake (PoS) networks. They validate transactions and create new blocks based on the amount of cryptocurrency they hold and "stake" as collateral.
Characteristics:
- Proof of Stake (PoS): Participates in block validation and creation based on staked cryptocurrency.
- Energy Efficiency: More energy-efficient than PoW mining.
- Rewards: Earns rewards through staking rather than mining.
Examples: Validators in Ethereum 2.0; validators in the Cardano and Tezos networks.
**5. Masternodes**
Masternodes perform specialized functions beyond simple transaction validation, such as facilitating instant transactions, privacy features, and governance in certain blockchain networks.
Characteristics:
- Collateral Requirement: Requires a significant amount of cryptocurrency to operate.
- Specialized Functions: Provides additional services like instant transactions and privacy.
- Governance: Often participates in network governance and decision-making.
Examples: Masternodes in the Dash network; masternodes in the PIVX network.
**6. Archival Nodes**
Archival nodes store the entire history of the blockchain, including all states and transactions. They provide historical data and are essential for developers and researchers.
Characteristics:
- Complete Storage: Keeps all historical data and states.
- Resource Intensive: Requires significant storage capacity.
- Data Availability: Provides comprehensive historical data for analysis.
Examples: Full archival nodes in Ethereum (maintaining all historical states).
**7. Relay Nodes**
Relay nodes facilitate communication between nodes in geographically dispersed regions, helping to propagate transactions and blocks more efficiently.
Characteristics:
- Network Bridging: Acts as intermediaries to reduce latency and improve connectivity.
- Propagation: Helps in faster dissemination of blocks and transactions.
Examples: Relay nodes in the Bitcoin and Ethereum networks.
**8. Oracles**
Oracles are specialized nodes that provide external data to smart contracts on the blockchain, allowing them to interact with real-world events.
Characteristics:
- Data Providers: Supplies real-world data to smart contracts.
- Trust and Security: Ensures the accuracy and reliability of external data.
Examples: Chainlink oracles; Oraclize (now Provable) oracles.
**Geth**
It is used to make your PC a node
Depending upon its installation, it makes your PC into a node.
After that, you can connect your PC to the mainnet, or even start mining if you would like to.
**Smart contracts**
These are just programs stored on the blockchain.
Assume that there is a transaction which should only happen when certain logic is satisfied. Where shall we code that logic? The answer is in a smart contract.
Some features
Smart contracts are immutable as they get stored on the blockchain
They have their own accounts where they can store cryptocurrency
No human intervention is required for cryptocurrency transfer or receiving
**Solidity**
It is an object-oriented language for implementing smart contracts on the Ethereum blockchain
High-level statically typed programming language
Case sensitive
With Solidity we can create contracts for voting, crowdfunding, blind auctions & multi-signature wallets.
**Solidity compilation process**
There is a `.sol` file
The Solidity compiler takes the file and produces two outputs: the ABI and the bytecode
The bytecode is deployed on the blockchain
The ABI is used to interact with the smart contract's variables and functions
The deployed bytecode is publicly readable
**SPDX**
To maintain trust in the blockchain world, code can be protected using a license.
Just add a comment containing `SPDX-License-Identifier: <license>` to each source file
(Refer link for more : https://etherscan.io/contract-license-types )
**Pragma**
It specifies the compiler version that the source file should use.
It is primarily of three types:

| Pragma | Meaning |
| --- | --- |
| `pragma solidity 0.8.0` | The exact specified version is required |
| `pragma solidity ^0.8.0` | The specified version or a newer backward-compatible one, but not 0.9.0 |
| `pragma solidity >=0.5.0 <0.9.0` | Any version from 0.5.0 up to but not including 0.9.0 |
**Variables**
These are of two types
**1. State variable**
Permanently stored in contract storage
Cost (gas expensive)
Reading such a variable is free, but writing to it is costly.
**2. local variable**
Declared inside functions and kept on the stack, not in storage
Don't cost gas
**Functions**
When we declare a public state variable, a getter function is automatically created.
We use certain keywords to describe a function's type based on what the function is expected to do. Two popular types are View & Pure.
**1. View:**
Allows reading data from blockchain state but doesn't modify it
Suitable for functions that need to fetch and return data stored in the contract.
**2. Pure:**
It neither reads nor modifies the blockchain state.
Suitable for utility functions that perform calculations or operations based only on the provided input parameters.
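To tie these pieces together, here is a minimal hypothetical contract showing an SPDX identifier, a pragma, a public state variable, and `view`/`pure` functions:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.5.0 <0.9.0;

contract Counter {
    // State variable: stored permanently in contract storage.
    // Declaring it public auto-generates a getter.
    uint256 public count;

    // Writing to storage costs gas.
    function increment() public {
        count += 1;
    }

    // view: reads state but does not modify it.
    function current() public view returns (uint256) {
        return count;
    }

    // pure: neither reads nor modifies state.
    function double(uint256 x) public pure returns (uint256) {
        return x * 2;
    }
}
```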
**Good to know**
The difficulty of a blockchain is adjusted periodically.
Contract doesn't have to be public
Bytecode is immutable | arsh_the_coder | |
1,906,815 | I was told it's not discrimination | About a month ago, I left the company that I’d worked for for almost seven years. A company that I... | 0 | 2024-06-30T17:58:00 | https://dev.to/sarah_bruce_83fc98defc6d5/i-was-told-its-not-discrimination-3ceg | womenintech, workplace | About a month ago, I left the company that I’d worked for for almost seven years. A company that I once considered my family. In a position that I loved, that both challenged me and fulfilled me. As a lead for a team of engineers that left me in awe daily with their passion, intelligence, and camaraderie.
I plan to share more of my story soon, but for now I’m going to share this small part that is the culmination of the last two years (although the word “small” is probably a disservice to the profound way that it’s affected me and will forever affect me).
I’m scared to share this. Of how I’ll be perceived. Of how other people will be perceived. Of how it may impact my career. But the fact that I’m so scared is just a reminder of why it’s so important that I share it. Countless other people, especially women, have been the object of hostile behavior and discrimination in the workplace. And they’re scared to talk about it because they’ve been dismissed and their concerns minimized.
This is what happened to me. I faced hostile behavior for two years, along with what I understand to be discrimination based on the law. My concerns were sometimes met with validation, but most often met with excuses and arguments.
One example is a peer review that I received from a male coworker during our annual reviews. My manager went over the review with me and told me that the coworker was glad our engineering team had a female lead (me). But that I’m too quiet and self-deprecating, and the coworker is concerned that it’s setting a bad example for our other female engineers, and that I should be a more outspoken and confident lead for our women. I told my manager that I disagreed with my coworker's assessments, and my manager told me he believed me and doubted that what my coworker said is accurate. But my manager went on to tell me that I still need to respect my coworker’s feedback and opinions, because that is his reality, even if it’s not accurate. What I took away from this is that I need to respect a person’s warped and sexist views of me. I don’t know any other way to take it. (There was a male lead on our team who actually self-deprecates often, and my guess is he didn’t receive a review saying he needs to be a better lead for our male engineers.)
After several mental breakdowns, I decided it was time to leave the company. I made a formal report of discrimination soon after I left. Two days ago, I received this letter from the company with their findings. They didn’t find information to support any discrimination.

Obviously, I disagree with their findings since I made the report in the first place. But I’m not writing this post and sharing this letter in hopes that it will change the outcome - I know it won’t. I’m writing this because too many people are scared to share their experiences and truths. I spent the last two years feeling isolated, doubting my experiences and perception of reality. And it will take time and therapy to undo those false narratives that play in my head.
My hope is that if someone reads this who is experiencing something similar, you know you’re not alone. That your experiences and concerns are valid. And that you have at least one person on your side, me.
Note: I know that discrimination is a serious allegation, and I didn’t make the report lightly. I’m also working with the EEOC to file a charge. | sarah_bruce_83fc98defc6d5 |
1,906,548 | Key Features of Functional Programming in C# | What is Functional Programming? Functional programming is a programming paradigm that emphasizes the... | 0 | 2024-06-30T17:57:16 | https://dev.to/waelhabbal/key-features-of-functional-programming-in-c-13ia | csharp, functional, programmingprinciples, softwaredevelopment | **What is Functional Programming?**
Functional programming is a programming paradigm that emphasizes the use of pure functions, immutability, and the avoidance of changing state. It's a way of writing code that focuses on what the code should do, rather than how it should do it.
**Key Features of Functional Programming in C#:**
1. **Immutability**: In functional programming, data is immutable, meaning it cannot be changed once it's created. This makes it easier to reason about the code and reduces the risk of bugs.
Example:
```csharp
// Immutable class
public class Person
{
private readonly string _name;
private readonly int _age;
public Person(string name, int age)
{
_name = name;
_age = age;
}
public string Name { get { return _name; } }
public int Age { get { return _age; } }
}
```
2. **Pure Functions**: Pure functions are functions that always return the same output given the same inputs and have no side effects.
Example:
```csharp
// Pure function
public int CalculateTotal(int[] numbers)
{
return numbers.Sum();
}
```
3. **Lambda Expressions**: Lambda expressions are small, anonymous functions that can be defined inline within a method or expression.
Example:
```csharp
// Lambda expression
List<int> numbers = new List<int> { 1, 2, 3, 4, 5 };
numbers.Sort((x, y) => x.CompareTo(y));
```
4. **Extension Methods**: Extension methods are methods that can be added to existing types without modifying their definition.
Example:
```csharp
// Extension method: adds Shout() to string without modifying the type.
// (Naming it ToUpper would be pointless: the built-in instance method
// always takes precedence, so the extension would never be called.)
public static class StringExtensions
{
    public static string Shout(this string str)
    {
        return str.ToUpper() + "!";
    }
}

string myString = "hello";
string shouted = myString.Shout(); // "HELLO!"
```
5. **Expression Trees**: Expression trees are a way to represent code as data, allowing you to manipulate and analyze code at runtime.
Example:
```csharp
// Expression tree: code represented as data
Expression<Func<int, int>> expression = x => x * x;

// Compile the expression tree into an executable delegate
Func<int, int> square = expression.Compile();
int result = square(5); // returns 25
```
6. **Pattern Matching**: Pattern matching is a way to match an object against a set of patterns and execute different blocks of code based on the result.
Example:
```csharp
// Pattern matching with a switch expression
string input = "hello";
string message = input switch
{
    "hello" => "Hello!",
    "goodbye" => "Goodbye!",
    _ => "Invalid input"
};
Console.WriteLine(message); // "Hello!"
```
**How to Use Functional Programming in C#:**
1. **Use Immutable Data Structures**: Use immutable data structures instead of mutable ones to reduce bugs and improve performance.
Example:
```csharp
// Immutable data structure: operations return a new instance
// instead of mutating the readonly field
public class BankAccount
{
    private readonly decimal _balance;

    public BankAccount(decimal balance)
    {
        _balance = balance;
    }

    public BankAccount Deposit(decimal amount)
    {
        return new BankAccount(_balance + amount);
    }

    public BankAccount Withdraw(decimal amount)
    {
        return new BankAccount(_balance - amount);
    }

    public decimal GetBalance()
    {
        return _balance;
    }
}
```
2. **Compose Functions**: Compose small, pure functions together to create more complex functionality.
Example:
```csharp
// C# has no built-in Compose for Func, so define it as an extension method:
// f.Compose(g) produces x => f(g(x)), i.e. g runs first.
public static class FuncExtensions
{
    public static Func<A, C> Compose<A, B, C>(this Func<B, C> f, Func<A, B> g)
    {
        return x => f(g(x));
    }
}

Func<int, int> doubleAndAddOne = x => x * 2 + 1;
Func<int, int> addThree = x => x + 3;
int result = doubleAndAddOne.Compose(addThree).Invoke(5); // (5 + 3) * 2 + 1 = 17
```
3. **Use Lambda Expressions**: Use lambda expressions to create small, anonymous functions that can be used inline.
Example:
```csharp
// Lambda expression
List<int> numbers = new List<int> { 1, 2, 3, 4, 5 };
numbers.Sort((x, y) => x.CompareTo(y));
```
4. **Use Extension Methods**: Use extension methods to add functionality to existing types without modifying their definition.
Example:
```csharp
// Extension method: adds WordCount() to string without modifying the type
public static class StringExtensions
{
    public static int WordCount(this string str)
    {
        return str.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries).Length;
    }
}

string myString = "functional programming in C#";
int words = myString.WordCount(); // 4
```
5. **Use Expression Trees**: Use expression trees to manipulate and analyze code at runtime.
Example:
```csharp
// Expression tree: inspect the code as data, then compile and run it
Expression<Func<int, int>> expression = x => x * x;
Console.WriteLine(expression.Body); // e.g. "(x * x)"

Func<int, int> square = expression.Compile();
int result = square(5); // returns 25
```
6. **Use Pattern Matching**: Use pattern matching to match an object against a set of patterns and execute different blocks of code based on the result.
Example:
```csharp
// Pattern matching
string input = "hello";
if (input switch
{
"hello" => Console.WriteLine("Hello!"),
"goodbye" => Console.WriteLine("Goodbye!"),
_ => Console.WriteLine("Invalid input")
})
{
Console.WriteLine("Pattern matched!");
}
```
Functional programming in C# is a paradigm that emphasizes immutability, pure functions, and avoiding changing state. This approach can help reduce bugs, improve performance, and make code more maintainable and scalable. By applying functional programming principles and techniques, you can write more robust and efficient C# code. | waelhabbal |
699,493 | Boolean algebra | Some time ago, when bug_elseif was still working through exercise lists in Python, a problem came up that... | 0 | 2021-05-16T01:46:39 | https://eduardoklosowski.github.io/blog/algebra-booliana/ | math, braziliandevs | Some time ago, when [bug_elseif](https://www.twitch.tv/bug_elseif) was still working through [exercise lists in Python](https://wiki.python.org.br/ListaDeExercicios), a problem came up that involved checking whether a year was a leap year or not. Although building an expression to check whether a year is a leap year is fairly intuitive, since we were using the inverted condition (checking whether the year was *not* a leap year), constructing it was proving difficult. However, a bit of math makes it possible to derive it.
## Building the expression
First, let's build an expression to check whether a year is a leap year. For that, it must be a multiple of 4; however, if the year ends in 00, it must also be a multiple of 400. To check whether a number ends in 00, it's enough to check whether it is a multiple of 100, and to check whether one number is a multiple of another, we can look at the remainder of the division, or modulo (maybe I'll write about modular arithmetic in another article): if the result of that operation is 0, the first number is divisible by the second; if it is any other value, the first number is not divisible by the second (there is no integer division).
Thus, the expression to check whether a year is a leap year can be built as:
```
(ano % 4 == 0 && ano % 100 != 0) || ano % 400 == 0
```
The first thing to notice is that there are two subexpressions joined by the disjunction operator ("or", `||`): for a year to be a leap year, it only needs to satisfy one of the two conditions (subexpressions). The first condition is itself split into two subexpressions, this time with the conjunction operator ("and", `&&`), so both conditions must be true for it to hold: the first checks whether the year is divisible by 4 (remainder of the division equals 0), and the second checks that it is not divisible by 100 (remainder of the division is different from 0). That is the first way a year can be a leap year. The other possibility is that it is divisible by 400 (remainder of the division equals zero).
Thus, this expression returns true if the year is a leap year, and false if it is not.
## Inverting the expression
However, on that occasion, the expression we were using had to return true when the year was not a leap year, and false when it was (the opposite of the expression above). This could be done either by negating the previous expression or by writing an expression that returns the opposite directly, and it was precisely this second option we were attempting.
That said, there is a mathematical way to work with the negation of the expression, transforming it until it gets close to, or exactly matches, the expression we were trying to build. This is possible through properties of Boolean operations, replacing part of the expression each time a property is applied. The most common ones for this kind of manipulation are double negation (`!!a = a`), distributivity (`a || (b && c) = (a || b) && (a || c)` and `a && (b || c) = (a && b) || (a && c)`, which resembles distributivity in arithmetic: `2 * (3 + 4) = (2 * 3) + (2 * 4)`), and De Morgan's laws (`!(a || b) = !a && !b` and `!(a && b) = !a || !b`). For more properties, see the [Wikipedia](https://pt.wikipedia.org/wiki/Álgebra_booliana) page on the subject.
For this case, only De Morgan's laws are needed. Starting from the negation of the expression and applying them step by step, we get:
```
!((ano % 4 == 0 && ano % 100 != 0) || ano % 400 == 0)
!(ano % 4 == 0 && ano % 100 != 0) && !(ano % 400 == 0)
(!(ano % 4 == 0) || !(ano % 100 != 0)) && !(ano % 400 == 0)
(ano % 4 != 0 || ano % 100 == 0) && ano % 400 != 0
```
This last expression is the one we needed for the code.
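We can also sanity-check the derivation by brute force. Here is a quick sketch in Python (the article's expressions are language-agnostic; `year` stands in for `ano`):

```python
# Original leap-year test and its De Morgan-simplified negation.
def is_leap(year: int) -> bool:
    return (year % 4 == 0 and year % 100 != 0) or year % 400 == 0

def is_not_leap(year: int) -> bool:
    # (ano % 4 != 0 || ano % 100 == 0) && ano % 400 != 0
    return (year % 4 != 0 or year % 100 == 0) and year % 400 != 0

# If the derivation is correct, the two must disagree on every year.
assert all(is_leap(y) != is_not_leap(y) for y in range(1, 4001))
print("ok")
```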
## Final remarks
Boolean algebra is useful for working with conditions in `if` statements and loops, whether to optimize them or to swap the `if` and `else` blocks, which can make the code easier to understand by putting the blocks in an order that reads more naturally. It can also make expressions easier to build, as in the case shown here: it is much easier and more intuitive to write an expression that checks whether a year is a leap year than one that checks whether it is not (the latter can even be counterintuitive), and Boolean algebra lets you start from the easier expression and derive the harder one.
And for anyone who wants to dig deeper into this subject, I recommend [RiverFount's classes](https://www.youtube.com/playlist?list=PL8iUCCJD339ezAJWqFaKriz_9tyBw6hE-); he is a philosophy professor. | eduardoklosowski |
1,286,949 | Browser hot-reloading for Python ASGI web apps using arel | One day I was just frustrated from this fact that while doing web development using Python ASGI... | 0 | 2024-06-30T17:51:49 | https://dev.to/ashleymavericks/browser-hot-reloading-for-python-asgi-web-apps-using-arel-1l19 | python, fastapi, webdev, programming | One day I was just frustrated from this fact that while doing web development using Python ASGI frameworks like FastAPI there is no browser hot-reloading functionality available. I dig deeper and found about [arel](https://github.com/florimondmanca/arel). This tutorial will you to implement arel in your development workflow for a better experience.
Browser hot reloading is a popular feature for developers that allows for rapid iteration and debugging of web applications. It allows for changes made to the codebase to be reflected instantly in the browser, eliminating the need for manual reloading or rebuilding.
One framework that benefits from hot reloading is FastAPI, a high-performance web framework for building APIs. FastAPI is built on top of ASGI (Asynchronous Server Gateway Interface) and utilizes the Uvicorn web server to provide a smooth and efficient development experience.
To enable browser hot reloading in FastAPI, we can use the arel library. Arel is a lightweight library that provides hot reloading functionality for Python web applications. It works by watching your files for changes and telling the connected browser to reload the page whenever a change is detected.
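Under the hood, this kind of tool boils down to watching file modification times. A minimal, dependency-free sketch of the detection step (not arel's actual implementation):

```python
import os

def snapshot(paths):
    """Map each existing watched file to its last-modified time."""
    return {p: os.path.getmtime(p) for p in paths if os.path.exists(p)}

def changed(before, after):
    """Return files that were added, removed, or modified between snapshots."""
    return sorted(p for p in set(before) | set(after) if before.get(p) != after.get(p))
```

A real watcher would call `snapshot` in a loop (or use OS-level file events) and push a reload message to the browser whenever `changed` returns anything.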
```shell
pip install arel
```
## Package Requirements
- arel
- fastapi
- jinja2
- uvicorn[standard]
or
- websockets
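For reference, wiring arel into a FastAPI app looks roughly like the following. This is a sketch based on arel's documented API; the directory names are placeholders, and it requires the packages above to be installed:

```python
import arel
from fastapi import FastAPI
from fastapi.templating import Jinja2Templates

DEBUG = True  # enable hot reload only during development

app = FastAPI(debug=DEBUG)
templates = Jinja2Templates(directory="templates")

if DEBUG:
    hot_reload = arel.HotReload(paths=[arel.Path(".")])
    app.add_websocket_route("/hot-reload", route=hot_reload, name="hot-reload")
    app.add_event_handler("startup", hot_reload.startup)
    app.add_event_handler("shutdown", hot_reload.shutdown)
    # Expose to templates so the base layout can include the reload script:
    # {% if DEBUG %}{{ hot_reload.script(url_for('hot-reload')) | safe }}{% endif %}
    templates.env.globals["DEBUG"] = DEBUG
    templates.env.globals["hot_reload"] = hot_reload
```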
With these changes, our FastAPI app will now have hot reloading enabled. Any changes made to the codebase will be reflected in the browser instantly, without the need for manual reloading.
There are a few additional configuration options available for arel. For example, we can specify the directories that arel should watch for changes, or we can disable hot reloading in certain environments (such as production).
Overall, using arel to enable hot reloading in FastAPI can greatly improve the development experience, allowing for faster iteration and debugging of our web application. It is a useful tool for any FastAPI developer looking to streamline their workflow and improve their productivity.
The `debug` flag needs to be set to `True` in order for hot reloading to work in FastAPI. The flag can be set on the FastAPI application object, like so:
```python
app = FastAPI(debug=True)
```
Alternatively, you can also set the debug flag using an environment variable. In this case, you would need to set the `UVICORN_DEBUG` environment variable to `1` before starting the FastAPI server.
Implementation: https://github.com/ashleymavericks/browser-hot-reloading | ashleymavericks |
1,899,193 | Apache Spark-Structured Streaming :: Cab Aggregator Use-case | Building helps you retain more knowledge. But teaching helps you retain even more. Teaching is... | 0 | 2024-06-30T17:50:09 | https://dev.to/snehasish_dutta_007/apache-spark-structured-streaming-cab-aggregator-use-case-2od0 | apachespark, dataengineering, streaming, realtimedata | _Building helps you retain more knowledge.
But teaching helps you retain even more. Teaching is another modality that locks in the experience you gain from building.--Dan Koe_
## Objective
Imagine a very simple system that can automatically warn cab companies whenever a driver rejects a bunch of rides in a short time. This system would use Kafka to send ride information (accepted, rejected) and Spark Structured Streaming to analyze it in real-time. If a driver rejects too many rides, the system would trigger an alert so the cab company can investigate.
## What is Spark Structured Streaming ?

Spark Structured Streaming is a powerful tool for processing data streams in real-time. It's built on top of Apache Spark SQL, which means it leverages the familiar DataFrame and Dataset APIs you might already use for batch data processing in Spark. This offers several advantages:
**Unified Programming Model:** You can use the same set of operations for both streaming and batch data, making it easier to develop and maintain code.
**Declarative API:** Spark Structured Streaming lets you describe what you want to achieve with your data processing, rather than writing complex low-level code to handle the streaming aspects.
**Fault Tolerance:** Spark Structured Streaming ensures your processing jobs can recover from failures without losing data. It achieves this through techniques like checkpointing and write-ahead logs.
Here's a breakdown of how Spark Structured Streaming works:
**Streaming Data Source:** Your data comes from a streaming source like Kafka, Flume, or custom code that generates a continuous stream of data.
**Micro-Batching:** Spark Structured Streaming breaks down the continuous stream into small chunks of data called micro-batches.
**Structured Processing:** Each micro-batch is processed like a regular DataFrame or Dataset using Spark SQL operations. This allows you to perform transformations, aggregations, and other data manipulations on the streaming data.
**Updated Results:** As new micro-batches arrive, the processing continues, and the results are constantly updated, reflecting the latest data in the stream.
**Sinks:** The final output can be written to various destinations like databases, dashboards, or other streaming systems for further analysis or action.
**Benefits of Spark Structured Streaming:**
**Real-time Insights:** Analyze data as it arrives, enabling quicker decision-making and proactive responses to events.
**Scalability:** Handles large volumes of streaming data efficiently by leveraging Spark's distributed processing capabilities.
**Ease of Use:** The familiar DataFrame/Dataset API makes it easier to develop and maintain streaming applications.
In essence, Spark Structured Streaming bridges the gap between batch processing and real-time analytics, allowing you to analyze data as it's generated and gain valuable insights from continuous data streams.
## Project Architecture

**Extract** From : Apache Kafka
**Transform** Using : Apache Spark
**Load** Into : Apache Kafka
## Producer and Infrastructure
Repository : https://github.com/snepar/cab-producer-infra
It is a Simple Application which ingests data into Kafka
It ingests Random Events either Accepted or Rejected
Sample Event
```
{
"id": 3949106,
"event_date": 1719749696532,
"tour_value": 29.75265579847153,
"id_driver": 3,
"id_passenger": 11,
"tour_status": rejected
}
```
Start the Infrastructure
```
docker compose up
```
Radom Events Generator
```
val statuses = List("accepted", "rejected")

while (true) {
  val topic = "ride"
  val r = scala.util.Random
  // pick a fresh status for every event, not once at startup
  val status = statuses(r.nextInt(statuses.length))
  val id = r.nextInt(10000000)
  val tour_value = r.nextDouble() * 100
  val id_driver = r.nextInt(10)
  val id_passenger = r.nextInt(100)
  val event_date = System.currentTimeMillis
  val payload =
    s"""{"id":$id,"event_date":$event_date,"tour_value":$tour_value,"id_driver":$id_driver,"id_passenger":$id_passenger,"tour_status":"$status"}""".stripMargin
  EventProducer.send(topic, payload)
  Thread.sleep(1000)
}
```
Send Random Events to Producer
```
def send(topic: String, payload: String): Unit = {
  // no key: Kafka will spread records across partitions
  val record = new ProducerRecord[String, String](topic, payload)
  producer.send(record)
}
```
See the produced events from Topic named **ride** in the Docker Terminal
```
kafka-console-consumer --topic ride --bootstrap-server broker:9092
```

## Spark Structured Streaming Application
Repository : https://github.com/snepar/spark-streaming-cab
Create Spark Session to Execute the application locally ::
```
val spark = SparkSession.builder()
.appName("Integrating Kafka")
.master("local[2]")
.getOrCreate()
spark.sparkContext.setLogLevel("WARN")
```
Configure Reader and Writer - Kafka topics
```
val kafkahost = "localhost:9092"
val inputTopic = "ride"
val outputTopic = "rejectalert"
val props = new Properties()
props.put("host", kafkahost)
props.put("input_topic",inputTopic)
props.put("output_host", kafkahost)
props.put("output_topic",outputTopic)
props.put("checkpointLocation","/tmp")
```
Define Schema for the Events
```
val schema = StructType(Seq(
StructField("id", IntegerType, nullable = true),
StructField("event_date", LongType, nullable = false),
StructField("tour_value", DoubleType, nullable = true),
StructField("id_driver", StringType, nullable = false),
StructField("id_passenger", IntegerType, nullable = false),
StructField("tour_status", StringType, nullable = false)
))
```
Read from Kafka Topic and Create the Streaming Dataframe
```
val df = spark.readStream.format("kafka")
.option("kafka.bootstrap.servers","localhost:9092")
.option("failOnDataLoss","false")
.option("startingOffsets", "latest")
.option("subscribe", "ride").load()
```
Parse the DataFrame with the schema and keep only the events marked as rejected.
**A rejected event signifies that the driver turned down a ride.**
```
val parsedDF = df.selectExpr("cast(value as string) as value")
.select(from_json(col("value"), schema).as("data"))
.select("data.*").where("tour_status='rejected'")
```
Aggregate, over a 1-minute window grouped by driver ID, how many rides were rejected and how much money was lost to those rejections:
```
val driverPerformance: DataFrame = parsedDF.groupBy(
window(to_utc_timestamp(from_unixtime(col("event_date") / 1000, "yyyy-MM-dd HH:mm:ss"), "UTC+1")
.alias("event_timestamp"),
"1 minute"),
col("id_driver"))
.agg(count(col("id")).alias("total_rejected_tours"),
sum("tour_value").alias("total_loss"))
.select("id_driver", "total_rejected_tours", "total_loss")
```
Set a threshold of 3 rejections; when a driver goes above it, generate an alert event:
```
val thresholdCrossedDF = driverPerformance.where(col("total_rejected_tours").gt(3))
```
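While developing, it can help to print the aggregated stream to the console before wiring up the Kafka sink. A debugging sketch using Spark's built-in console sink (`thresholdCrossedDF` is the DataFrame defined above):

```scala
// Debug sink: print each micro-batch's updated aggregates to stdout.
// outputMode("update") emits only the rows whose aggregate changed.
thresholdCrossedDF.writeStream
  .format("console")
  .outputMode("update")
  .option("truncate", "false")
  .start()
  .awaitTermination()
```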
Write this DataFrame to a Kafka Topic **rejectalert**
```
thresholdCrossedDF.selectExpr("CAST(id_driver AS STRING) AS key", "to_json(struct(*)) AS value")
  .writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers",
    props.getProperty("output_host", "localhost:9092"))
  .option("topic", props.getProperty("output_topic", "rejectalert"))
  .outputMode("update")
  .option("checkpointLocation", props.getProperty("checkpointLocation", "/tmp"))
  .start().awaitTermination()
```
Run the Complete Application : https://github.com/snepar/spark-streaming-cab/blob/master/src/main/scala/rideevent/AlertGenerator.scala
Using A Consumer on Kafka Broker Subscribe to these Alerts

## References
https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html
https://github.com/rockthejvm/spark-streaming
| snehasish_dutta_007 |
1,906,810 | Building a Mail Analyzer and Responder Using ChatGPT API: A Backend Development Journey | In the realm of backend development, challenges abound, pushing us to innovate and refine our skills... | 0 | 2024-06-30T17:47:30 | https://dev.to/ifeoluwa_sulaiman_cef54af/building-a-mail-analyzer-and-responder-using-chatgpt-api-a-backend-development-journey-2j1o | In the realm of backend development, challenges abound, pushing us to innovate and refine our skills continuously. Recently, I embarked on an ambitious project: creating a backend application that analyzes incoming emails and uses the ChatGPT API to generate appropriate responses. This undertaking was as complex as it was rewarding, presenting numerous obstacles that required creative and technical prowess to overcome. Allow me to walk you through this journey, highlighting the hurdles encountered and the strategies employed to build a robust solution.
The Challenge: Analyzing Emails and Responding Appropriately
The primary objective was to develop an application capable of:
1) Fetching and analyzing incoming emails.
2) Determining the context and sentiment of each email.
3) Generating accurate and contextually relevant responses using the ChatGPT API.
4) Sending these responses back to the original senders.
5) Schedules and sends these responses using Gmail API and BullMQ.
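The scheduling step above can be sketched without any infrastructure. The snippet below is a toy in-memory delayed queue illustrating the idea behind BullMQ's delayed jobs (BullMQ itself uses Redis and a `Queue`/`Worker` API; this is only a conceptual stand-in):

```javascript
// Toy delayed-job queue: jobs become due once their scheduled time passes.
class DelayedQueue {
  constructor() {
    this.jobs = [];
  }

  // Schedule a job to run `delayMs` milliseconds after `now`.
  add(name, data, delayMs, now = Date.now()) {
    this.jobs.push({ name, data, runAt: now + delayMs });
  }

  // Pop and return every job whose runAt has passed.
  drainDue(now = Date.now()) {
    const due = this.jobs.filter((j) => j.runAt <= now);
    this.jobs = this.jobs.filter((j) => j.runAt > now);
    return due;
  }
}

// Usage: schedule a reply for 5 minutes from now, then process due jobs.
const queue = new DelayedQueue();
queue.add("send-reply", { to: "user@example.com" }, 5 * 60 * 1000, 0);
console.log(queue.drainDue(0).length); // 0: nothing due yet
console.log(queue.drainDue(5 * 60 * 1000).length); // 1: the reply is due
```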
Conclusion
Building this backend application involved integrating the Gmail API for reading and sending emails, ChatGPT API for generating responses, and BullMQ for scheduling. The process was a testament to the importance of systematic problem-solving in backend development.
My Journey with HNG Internship
I’m thrilled to join the HNG Internship, where I’ll further hone my skills and tackle new challenges. The HNG Internship https://hng.tech/internship offers a hands-on, immersive learning experience, perfect for developers looking to elevate their careers. Learn more about the program and explore how you can hire top talent from HNG https://hng.tech/hire.
This project exemplifies the exciting, problem-solving nature of backend development, and I’m eager to continue this journey with HNG, ready to embrace new opportunities and contribute to innovative solutions.
For more information on the HNG Internship and how it can transform your career, visit HNG Internship: https://hng.tech/internship and explore their premium offerings here: https://hng.tech/premium. | ifeoluwa_sulaiman_cef54af | |
1,906,809 | REACT VS NEXT.JS: THE DIFFERENCE AND WHICH IS BETTER. | REACT VS NEXT.JS: THE DIFFERENCE AND WHICH IS BETTER. Front-end technologies are the foundation and... | 0 | 2024-06-30T17:44:16 | https://dev.to/xtoluck/react-vs-nextjs-the-difference-and-which-is-better-43do | webdev, javascript, beginners, programming | REACT VS NEXT.JS: THE DIFFERENCE AND WHICH IS BETTER.
Front-end technologies are the foundation and building blocks of a website, defining its user interface and shaping the overall user experience. The front end is what lets you see and interact with the web. In modern web development, a plethora of tools, frameworks, and practices contribute to creating engaging and responsive websites and content.
WHAT IS REACT?
React (also known as React.js or ReactJS) is a free and open-source front-end JavaScript library for building user interfaces based on components. It is a UI library created by Meta (formerly Facebook) to build reactive apps based on event triggers. React components can be stateless or stateful and only re-render within the scope of the applied state.
React is built to be declarative. That means you get to decide the workflow. You get to control how your app works, making React a powerful tool.
FEATURES OF REACT
• JavaScript Syntax Extension
JSX is a combination of JavaScript and HTML. This syntax extension is basically used to create React elements.
• Component
React adheres to the declarative programming paradigm, and everything in React is a component. Multiple React components are composed together to build simple interfaces as well as very large, complex UIs. Each component can have its own logic and behavior, and components are reusable anywhere on the page simply by rendering them.
• Virtual DOM
Another notable feature is the use of a virtual Document Object Model, or Virtual DOM. React creates an in-memory data-structure cache, computes the resulting differences, and then updates the browser's displayed DOM efficiently. This process is called reconciliation.
Where the two DOMs are equal, the actual DOM is never touched; only the parts that differ get updated.
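A toy illustration of the diffing idea (not React's actual algorithm, which is far more sophisticated): compare two plain-object trees and collect only the nodes whose content changed.

```javascript
// Minimal virtual-DOM diff: each node is { tag, text, children }.
// Returns a list of patches instead of touching the real DOM directly.
function diff(oldNode, newNode, path = "root") {
  const patches = [];
  if (!oldNode || !newNode || oldNode.tag !== newNode.tag) {
    patches.push({ path, type: "replace", node: newNode });
    return patches;
  }
  if (oldNode.text !== newNode.text) {
    patches.push({ path, type: "text", value: newNode.text });
  }
  const oldKids = oldNode.children || [];
  const newKids = newNode.children || [];
  const len = Math.max(oldKids.length, newKids.length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff(oldKids[i], newKids[i], `${path}/${i}`));
  }
  return patches;
}

const before = { tag: "ul", children: [{ tag: "li", text: "a" }, { tag: "li", text: "b" }] };
const after = { tag: "ul", children: [{ tag: "li", text: "a" }, { tag: "li", text: "c" }] };
console.log(diff(before, after)); // only the second <li> produces a patch
```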
• One-Way Data Binding
A component logic contains the data to be displayed on the user interface. The connection flow between the data displayed and the logic component is called data binding.
ADVANTAGES OF REACT.JS
• Easy to Learn
Currently, the educational state of React.js is good because, over the years, React has grown with its community, and the community has made thousands of materials available for reference. The good supply of documentation and tutorial videos makes React.js a good catch.
• JavaScript Syntax Extension
JSX is a JavaScript syntax extension that makes writing dynamic web apps in React.js easier. It enables React to display error and warning messages, which aids in debugging.
• Reusable Components
React is a powerful JavaScript library that enables developers to create Reusable User Interfaces (UI). A key feature of React is the ability to create Components, which are self-contained units of code that can be reused throughout your app. This enables you to build UI from small, reusable pieces, making code more readable and maintainable.
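As a plain-JavaScript illustration of that idea (real React components return JSX, but conceptually they are just functions from props to a UI description):

```javascript
// A "component" is a pure function: props in, UI description out.
function Greeting({ name }) {
  return { tag: "h1", text: `Hello, ${name}!` };
}

// Reuse the same component with different props.
console.log(Greeting({ name: "Ada" }).text);   // "Hello, Ada!"
console.log(Greeting({ name: "Grace" }).text); // "Hello, Grace!"
```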
• Performance Enhancement
One of the reasons for its success is its performance. React uses a virtual DOM to manage updates to the user interface, which makes React apps fast and responsive. When a user interacts with a React app, only the relevant parts of the DOM are updated, meaning there is no need to redraw the entire page. This makes React apps much faster than traditional JavaScript frameworks.
• React.js Is SEO Friendly
React.js is a JavaScript library used for building user interfaces, also known for its speedy performance and small size. React.js can be SEO friendly because it supports server-side rendering. This means that the content of your React.js application can be rendered on the server before it is sent to the client, which helps your website's search engine optimization because the content is visible to search engines as soon as the page is served.
• Easily Write Unit Tests
One of the reasons it is so popular is because it is easy to test. There are many libraries available that make it easy to set up unit tests, and there are also options for end-to-end testing.
DISADVANTAGES OF REACT
• Fast-Developing
One of the biggest problems is that React.js is constantly changing, and it can be difficult to keep up with the latest improvements. This means that developers need to invest a lot of time in keeping up with the React ecosystem, which can be a deterrent for some people.
• Complex Documentation
React documentation is essential for learning how to use the library. However, as the library grows in size and complexity, the documentation has become increasingly difficult to navigate. In order to keep up with the rapidly changing landscape, developers need to spend a significant amount of time reading through the documentation. This has become a major barrier to entry for new developers.
• Re-Rendering
The React component life-cycle can make React re-render unnecessarily. React web apps are all made up of components, and when there is a change, React checks for it and re-renders, but it may detect an unexpected change due to how JavaScript handles comparisons and equality. Such an unintended change can cause unnecessary re-rendering.
WHAT IS NEXT.JS?
Next.js is a light framework built on top of React that makes it easy to create fast, server-rendered websites. It is an open-source framework.
FEATURES OF NEXT.JS
• File System Routing
Next.js is a JavaScript framework that makes creating routes for your web app easy.
• Server-Side Rendering
Next.js supports the rendering of pages on user requests on the server-side by generating a non-interactive HTML.
• Static Site Generator
Next.js supports static page generation, which makes it stand out: statically generated web pages load fast, and that speed can help Google rank them higher.
ADVANTAGES OF NEXT.JS
• Speed
Next.js supports static site generation and server-side rendering. Static generation is fast because all of the application's pages are pre-rendered, cached, and served over a CDN.
Server-side rendering is also fast: pages are built on the server when a request comes in, rather than being built on the client side.
• Less Setup
In Next.js, most features you get come with zero configuration as they are inbuilt.
• Easily Create Your Own Back-End
Easily create your custom back-end functionalities to power your own front-end which does not affect the size bundle of your client-side application.
• Built-In CSS Support
One of the key features of Next.js is its built-in CSS support. This means that developers can include CSS Stylesheets within their Next.js projects without needing to use any additional libraries or tooling.
DISADVANTAGES OF NEXT.JS
• Development and Maintenance
Building a Next.js application requires a significant upfront investment. Not only do you need developers who are familiar with Nextjs, but you also need to dedicate ongoing resources to maintaining the application.
NEXT.JS VS. REACT COMPARISON
If you're already familiar with React, you'll find Next.js easy to learn. That's because Next.js is built on top of React, so it inherited all the benefits of React (such as being component-based and declarative) while also adding its own features and functionality.
React, for its part, has a gentle learning curve. Over the years, a substantial amount of learning material has been produced by the community, so getting started is not a steep climb.
MY EXPECTATIONS FOR THE HNG INTERNSHIP
As I embark on the HNG Internship, I am willing and ready to give it all it takes to derive the enormous benefits that the program promises to offer. I believe this program will provide life-changing opportunities to embark on real-world projects, enhance my development skills, and gain real-life programming experience.
WORKING WITH REACT AT HNG
Considering the fact that React is a powerful and widely-used JS library for building user interfaces, I am excited to broaden my knowledge and understanding of React during the internship program.
I look forward to exploring and mastering React as I build real-life projects.
To learn more about the HNG Internship,
Visit: https://hng.tech/internship
If you are looking to hire talented developers,
Check out HNG Hire https://hng.tech/hire
OR
https://hng.tech/premium
**CONCLUSION**
Both React and Next.js are widely used in real-world technological solutions, as they are both powerful tools for front-end web development. React’s flexibility and extensive ecosystem make it a go-to JS library for building dynamic user interfaces, while Next.js’s production-ready features, like server-side rendering and static site generation, deliver enhanced performance and user experience.
As I embark on this journey with the HNG Internship, I am elated to leverage these technologies, enhance my skills, and contribute to innovative solutions development. The future of front-end development is promising and I am happy to be a part of it.
| xtoluck |
1,906,808 | Dream den | Home Design AI Free: Elevating Interiors without Cost Thinking of elevating your home interiors? Do... | 0 | 2024-06-30T17:41:09 | https://dev.to/phi_leo_b83f45476dba6d02a/dream-den-1dh9 | Home Design AI Free: Elevating Interiors without Cost
Thinking of elevating your home interiors? Do you know you can now have design ideas for free? No, we are not kidding. This is what AI home design tools can bring to you.
In the realm of interior design, creativity knows no bounds. However, sometimes budget constraints can dampen our aspirations for creating the perfect living space. Fortunately, with the advent of artificial intelligence (AI) technology, designing and elevating interiors has become more accessible than ever before. In this blog post, we will look into how free AI home design tools can transform your space without breaking the bank.
What are AI Generated Home Design tools?
Artificial intelligence has permeated various aspects of our lives, and home design is no exception. With the help of home design AI tools, homeowners and designers can unleash their creativity and visualize their ideas in ways that were previously unimaginable. These AI tools leverage advanced algorithms to analyze spaces, recommend design elements, and even simulate how different choices will look in real life. What's more, many of these tools are available for free, making them accessible to anyone with an internet connection and a bit of creativity.
How to Start Your Home Design AI Free Journey With DreamDen?
1. Start by uploading your space pictures.
2. DreamDen facilitates design creation with a variety of trending styles including contemporary, traditional, and minimalistic. This process is quick, easy, and tailored to your preferences.
3. Once your moodboard is created, we provide you with a to-do list.
4. You can also select products and services, with your details instantly shared with relevant vendors in real-time. We cover most US ZIP codes.
5. Our ultimate aim is to assist users in planning their homes within hours.
6. Our app-based workflow ensures notifications and reminders are sent to different vendors in real-time.
7. We offer suggestions and recommend suitable product images based on the design board you've generated.
8. With no anxiety or stress, you can plan your dream home interior in just hours.
Benefits of Free home AI design tools
Free AI home design tools offer a plethora of benefits that can revolutionize the way homeowners approach interior design. Here are some of the key advantages:
• Cost-Effective Solutions: Traditional interior design services can be expensive, often requiring hefty fees for consultations and designs. A free AI home design generator, on the other hand, offers cost-effective alternatives. These allow homeowners to experiment with different designs and layouts without spending a dime.
• Accessible Design Expertise: Not everyone has an eye for design, but AI generated home design tools can bridge that gap by offering accessible design expertise. These tools often come equipped with pre-designed templates, color palettes and furniture arrangements. Thus, they make it easy for even the most design-challenged individuals to create visually stunning interiors.
• Time-Saving Features: Redesigning a space can be a time-consuming process, especially when considering different layouts and color schemes. AI design home tools streamline this process by offering quick and efficient design solutions. With just a few clicks, users can experiment with various design options and visualize the end result in a matter of minutes.
• Customization Options: While free AI home interior design tools may offer pre-designed templates, they also provide ample opportunities for customization. Users can tweak designs to suit their personal preferences, adjusting everything from furniture placement to wall colors, until they achieve the perfect look for their space.
• Realistic Visualizations: One of the most impressive features of AI generated home design tools is their ability to provide realistic visualizations. These tools can simulate how different design choices will look in a real-life setting, allowing homeowners to make informed decisions about their interiors.
Tips for Maximizing the Benefits of a Free Home Design AI tool
Here are some tips to help you maximize the benefits of using an AI for home design free tool:
• Experiment Freely: Don't be afraid to experiment with different design ideas and layouts. The best free AI home design tools provide a risk-free environment for testing out new concepts and seeing what works best for your space.
• Gather Inspiration: Browse through design magazines, websites, and social media platforms for inspiration. Use an AI for home design tool to recreate your favorite design elements and incorporate them into your own space.
• Take Advantage of Tutorials: Many AI generated home design tools offer tutorials and guides to help users get started. Take advantage of these resources to learn more about the features and capabilities of the tool you're using.
• Seek Feedback: Don't hesitate to seek feedback from friends, family, or online communities. Sometimes a fresh pair of eyes can provide valuable insights and help you refine your design ideas.
• Stay Updated: As technology evolves, so too do free home AI design tools. Stay informed about the latest updates and features to ensure you are getting the most out of your chosen tool.
The Footnote:
Free AI home interior design tools offer a wealth of benefits for homeowners looking to elevate their interiors without breaking the bank. From cost-effective solutions to realistic visualizations, these tools empower users to unleash their creativity and transform their living spaces with ease. By taking advantage of the features and capabilities of free home design AI tools, anyone can achieve their dream home design without the need for expensive professional services. So why wait? Start exploring these tools today and embark on your journey to a beautifully designed home.
Frequently Asked Questions (FAQs)
1. Can these free AI tools be used by beginners or do they require advanced skills?
DreamDen, a free AI tool for home design, is designed to be user-friendly and accessible to beginners. It features an intuitive interface, drag-and-drop functionality, and pre-designed templates, making it easy to navigate and use. While some familiarity with basic design principles may be helpful, beginners can quickly learn to use the tool with minimal prior experience.
2. Do these tools offer customization options for personalizing the designs?
Yes, free AI tools for home design typically offer a range of customization options to personalize designs according to individual preferences. Users can adjust everything from furniture placement to color schemes. These tools facilitate tailored solutions that reflect the unique style and taste of the users. With options to tweak layouts, textures, and decorative elements, these tools empower users to create spaces that feel truly personalized and inviting.
3. Can free AI tools for home design be integrated with other software or platforms for a more comprehensive design experience?
Yes, free AI tools for home design can often be integrated with other software or platforms to enhance the design experience. For example, some tools may offer compatibility with augmented reality (AR) apps for visualizing designs in real-world settings. Additionally, integration with cloud storage services allows users to easily save and share their designs across multiple devices. Furthermore, some tools may offer plugins or extensions that enable seamless integration with popular design software suites. Thus, they provide users with a more comprehensive and versatile design experience.
https://www.dreamden.ai/home-design-ai-free-elevating-interiors-without-cost
*Author: phi_leo_b83f45476dba6d02a*

---

# Monitoring, troubleshooting, and query analytics for PostgreSQL on Kubernetes

*Published 2024-06-30 | Tags: kubernetes, postgres, opensource, database | https://dev.to/dbazhenov/monitoring-troubleshooting-and-query-analytics-for-postgresql-on-kubernetes-2onj*

If you are learning about databases and Kubernetes or running or migrating PostgreSQL to Kubernetes,...
I will discuss a tool to help you better understand your database, its parameters, and its health. You can access a Query Analytics tool to help you find slow queries. In addition, you will have dashboards to monitor the Kubernetes cluster itself.
In [the previous article](https://dev.to/dbazhenov/running-pgadmin-to-manage-a-postgresql-cluster-in-kubernetes-616), I discussed the pgAdmin and PostgreSQL cluster created using Percona Everest. Today, I installed [Percona Monitoring and Management (PMM)](https://www.percona.com/open-source-database-monitoring-tools-for-mysql-mongodb-postgresql-more-percona) in my cluster, made some test queries to the database using pgAdmin, and explored the dashboards.
PMM is a free, open-source database monitoring tool for MySQL, PostgreSQL, and MongoDB. PMM has configured Grafana dashboards to monitor various PostgreSQL metrics:
- Connections, Tuples, Transactions
- Checkpoints, Buffers, and WAL usage
- Blocks, Conflicts and Locks
- Disk Cache and Memory Size
- CPU, RAM, Disk IO
- Vacuum monitoring and more


You need to install two components (I'll show how below):
1. PMM Server, which includes dashboards and collects metrics from your databases.
2. PMM Client for each of your databases that sends database metrics to PMM Server. You need to configure pg_stat_monitor or pg_stat_statements extensions for PostgreSQL.
If you use Percona Operator for PostgreSQL or Percona Everest, PMM is already integrated and enabled in the settings.
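If you are running vanilla PostgreSQL instead, enabling `pg_stat_statements` typically looks like the sketch below; exact steps vary by version and packaging, so treat this as an outline rather than a recipe:

```sql
-- 1. Preload the extension (postgresql.conf), then restart the server:
--    shared_preload_libraries = 'pg_stat_statements'

-- 2. Create the extension in the database you want to monitor:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- 3. Check that query statistics are being collected
--    (column names as in PostgreSQL 13+):
SELECT query, calls, mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 5;
```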
Let's get to the installation.
## Installing PMM in a Kubernetes cluster.
You can install the PMM server on any server or cluster; I use the same cluster where the database is installed.
The documentation offers many [installation methods](https://docs.percona.com/percona-monitoring-and-management/setting-up/index.html), such as using Docker, Podman, AWS, or HELM.
I used the installation with HELM and the instructions from the [official documentation](https://docs.percona.com/percona-monitoring-and-management/setting-up/server/helm.html).
1. Create a separate namespace or use an existing one. I create a separate one
```
kubectl create namespace monitoring
```
2. I have installed HELM and the Percona repositories as per the documentation, and now install PMM using the commands:
```
helm repo add percona https://percona.github.io/percona-helm-charts/
```
```
helm install pmm -n monitoring \
--set service.type="ClusterIP" \
--set pmmResources.limits.memory="4Gi" \
--set pmmResources.limits.cpu="2" \
percona/pmm
```
I added parameters with resource limits for PMM since my test cluster has limited resources.
The installation is quick, and I have the next steps.

3. We need to get the administrator password created during installation. _(I just took that command from the last step.)_
```
kubectl get secret pmm-secret -n monitoring -o jsonpath='{.data.PMM_ADMIN_PASSWORD}' | base64 --decode
```
4. Let's port-forward the PMM Pod to an available port on our laptop so we can open PMM in the browser. (_I used 8081 because 8080 is used for Percona Everest, which manages the database._)
```
kubectl -n monitoring port-forward pmm-0 8081:80
```
5. Open PMM in a browser and use the password to log in.

## Connecting the database to the PMM server
Now that we have the PMM itself, we need to make our Postgres database pass metrics to it. I created the cluster using Percona Everest; however, you can connect any PostgreSQL cluster to PMM.
1. If you are not using Percona's Postgres, please refer to the documentation on installing [the PMM Client](https://docs.percona.com/percona-monitoring-and-management/setting-up/client/index.html) and [Postgres extensions](https://docs.percona.com/percona-monitoring-and-management/setting-up/client/postgresql.html) ([pg_stat_statements](https://www.postgresql.org/docs/current/pgstatstatements.html) or [pg_stat_monitor](https://docs.percona.com/pg-stat-monitor/)).
2. If you are using [Percona Distribution for PostgreSQL](https://docs.percona.com/postgresql/), [Percona Operator for PostgreSQL](https://docs.percona.com/percona-operator-for-postgresql/2.0/), or [Percona Everest](https://docs.percona.com/everest/index.html), then the necessary extensions are already installed. I will explain how to enable monitoring below.
### Postgres database created using Percona Operator for PostgreSQL
The setup process is described in sufficient detail in the [documentation](https://docs.percona.com/percona-operator-for-postgresql/2.0/monitoring-tutorial.html#install-pmm-client), if briefly:
1. You need to create an API key in the PMM settings.
2. Specify the API key as the PMM_SERVER_KEY value in the deploy/secrets.yaml secrets file. Using the deploy/secrets.yaml file, create the Secrets object.
3. Update the pmm section in the deploy/cr.yaml file.
```
pmm:
enabled: true
image: percona/pmm-client:2.42.0
secret: cluster1-pmm-secret
serverHost: monitoring-service
```
Apply the changes, and you will see the databases in PMM.
### PostgreSQL cluster created using Percona Everest
That's my way.
1. We need to get the IP address of the PMM in the Kubernetes cluster.
```
kubectl get svc -n monitoring
```
2. Now, in the Percona Everest settings, let's add a new Monitoring Endpoint using the IP address, user, and password from PMM.

3. Let's edit the database and enable monitoring in the created endpoint.

Done; now we will see the metrics in PMM.

## Testing how it works.
1. Open pgAdmin and make some complex queries.
I found a SQL query that generates random data in rows.
```
INSERT INTO demo.LIBRARY(id, name, short_description, author,
description,content, last_updated, created)
SELECT id, 'name', md5(random()::text), 'name2'
,md5(random()::text),md5(random()::text)
,NOW() - '1 day'::INTERVAL * (RANDOM()::int * 100)
,NOW() - '1 day'::INTERVAL * (RANDOM()::int * 100 + 100)
FROM generate_series(1,1000) id;
```
Then I generated several million rows by changing the value of `generate_series(1,1000)`.
I've also done various SELECT queries.

2. After that, I went to look at the dashboards, which immediately showed that I had a problem. I got a list of slow queries and a spike in the graph.
_I created a test table without indexes, and queries were already processing slowly on many rows. I did this purposely to see the result in the monitoring tool._


I also found a dashboard that shows the cluster resource utilization for each Pod, such as CPU and RAM.

## Conclusion
PMM has various dashboards for monitoring PostgreSQL. I won't show you all of them, but I recommend installing and monitoring your database, especially if you are not using database monitoring tools.
*Author: dbazhenov*

---

# Future of Automotive Solutions with 360-Degree Camera Technology

*Published 2024-06-30 | Tags: automotive, automotivesolutions, 360degreecarcamera | https://dev.to/aryanb001/future-of-automotive-solutions-with-360-degree-camera-technology-1m9b*

## Introduction
In the rapidly evolving automotive industry, technological advancements are continually pushing the boundaries of what's possible. Among these innovations, 360-degree camera technology stands out as a transformative solution poised to redefine driving safety, convenience, and the overall driving experience. This article delves into the future of [automotive solutions](https://kritikalsolutions.com/automotive-solutions/) with 360-degree camera technology, exploring its current state, potential advancements, and the myriad benefits it offers to drivers and manufacturers in the USA and globally.
## What is 360-Degree Camera Technology?
A 360-degree camera system in vehicles integrates multiple cameras positioned around the car to provide a comprehensive, all-around view. These cameras capture wide-angle images from different perspectives, which are then stitched together to create a seamless panoramic view of the vehicle's surroundings. This technology enhances situational awareness, making driving safer and more convenient.
## Current Applications of 360-Degree Camera Technology
**Enhanced Parking Assistance**
One of the most common applications of 360-degree camera systems is in parking assistance. The system provides a bird’s-eye view of the car, helping drivers navigate tight parking spaces with ease. By eliminating blind spots, these systems reduce the risk of collisions with other vehicles, pedestrians, and obstacles.
**Improved Safety Features**
[360-degree cameras for cars](https://kritikalsolutions.com/how-does-the-360-camera-work-on-cars/) are integral to advanced driver-assistance systems (ADAS). They assist in lane-keeping, collision avoidance, and pedestrian detection by providing real-time visual data that complements other sensors such as radar and LiDAR. This comprehensive data input allows for more accurate and timely responses to potential hazards.
**Driver Comfort and Convenience**
Beyond safety, 360-degree camera systems enhance driver comfort and convenience. For example, they make it easier to maneuver in difficult driving conditions such as narrow streets, heavy traffic, or poor visibility. The ability to see all around the vehicle without turning one's head significantly reduces driver fatigue and stress.
## The Future of 360-Degree Camera Technology
**Integration with Autonomous Driving**
As the automotive industry moves towards autonomous driving, 360-degree camera technology will play a crucial role. Autonomous vehicles rely on a combination of sensors to understand their environment and make driving decisions. The comprehensive visual data provided by 360-degree cameras will enhance the vehicle's ability to navigate complex environments safely and efficiently.
**Advanced AI and Machine Learning**
Future advancements in AI and machine learning will further enhance the capabilities of 360-degree camera systems. By processing and analyzing visual data more effectively, these systems will be able to predict and respond to potential hazards with greater accuracy. For instance, AI could help identify patterns in traffic behavior, enabling proactive rather than reactive safety measures.
**Enhanced Night Vision and Weather Adaptability**
One of the current limitations of 360-degree camera systems is their performance in low-light or adverse weather conditions. However, advancements in imaging technology, such as improved infrared sensors and better image processing algorithms, will enhance the functionality of these systems. Future iterations may provide clear, high-resolution images regardless of lighting or weather conditions, ensuring consistent safety and performance.
**Integration with Augmented Reality (AR)**
Augmented Reality (AR) has the potential to revolutionize how drivers interact with 360-degree camera systems. By overlaying real-time data onto the driver’s view, AR can provide additional context and guidance. For example, AR could highlight potential hazards, display navigation information, or even provide visual cues for optimal parking.
## Benefits of 360-Degree Camera Technology
**Enhanced Safety**
The primary benefit of 360-degree camera technology is enhanced safety. By providing a comprehensive view of the vehicle’s surroundings, these systems help prevent accidents caused by blind spots, misjudged distances, and unseen obstacles. This is particularly beneficial in urban environments where the risk of collisions is higher.
**Increased Driver Confidence**
With better situational awareness, drivers can navigate complex driving scenarios with increased confidence. This is particularly beneficial for novice drivers or those who may be uncomfortable with certain driving tasks such as parallel parking or driving in heavy traffic.
**Reduced Repair Costs**
By preventing minor collisions and scrapes, 360-degree camera systems can help reduce the frequency of repairs and associated costs. This not only saves money for vehicle owners but also reduces the environmental impact associated with vehicle repairs and part replacements.
**Support for Autonomous Vehicles**
As mentioned earlier, 360-degree camera systems are critical for the development and deployment of autonomous vehicles. By providing comprehensive visual data, these systems enable autonomous vehicles to navigate safely and efficiently, bringing us closer to a future where self-driving cars are the norm.
## Challenges and Considerations
**Data Privacy and Security**
As with any technology that collects and processes data, there are privacy and security concerns associated with 360-degree camera systems. Manufacturers must ensure that these systems are secure from hacking and that collected data is handled in compliance with privacy regulations.
**Cost and Accessibility**
Currently, 360-degree camera systems are primarily available in higher-end vehicles. For this technology to become widespread, it needs to be more affordable and accessible to the average consumer. This will require advancements in manufacturing processes and economies of scale.
**Technological Limitations**
While 360-degree camera technology has made significant strides, it still faces certain limitations, particularly in low-light conditions and adverse weather. Ongoing research and development are necessary to overcome these challenges and ensure consistent performance in all driving conditions.
## Conclusion
The future of automotive solutions with 360-degree camera technology is incredibly promising. As this technology continues to evolve, it will play a pivotal role in enhancing driving safety, convenience, and overall vehicle performance. From supporting autonomous driving to integrating with AI and AR, the potential applications of 360-degree camera systems are vast and varied. However, for this technology to reach its full potential, manufacturers must address current challenges related to cost, accessibility, and performance in adverse conditions. As these challenges are overcome, we can expect 360-degree camera technology to become a standard feature in vehicles, driving us towards a safer and more efficient automotive future.
In conclusion, 360-degree camera technology is not just a passing trend but a fundamental shift in how we approach automotive safety and convenience. With continued innovation and development, this technology will undoubtedly become an integral part of the driving experience, paving the way for safer roads and more confident drivers.

*Author: aryanb001*

---

# HNG BLOG POST

*Published 2024-06-30 | https://dev.to/samuel_adedigba_703233/hng-blog-post-57ih*

**<u>Comparing Tailwind CSS and Traditional CSS</u>**
**Introduction**
When it comes to styling web pages, developers have several options. Traditional CSS has been the go-to method for years, but modern tools like Tailwind CSS offer a different approach. In this article, we’ll compare **Tailwind CSS** and **Traditional CSS**, exploring their features, strengths, and ideal use cases. As a participant in the HNG Internship, where React.js is extensively used, I’ll also share my thoughts on integrating these styling methods with React and my excitement about mastering them.
<u>Traditional CSS: The Classic Styling Language</u>
**CSS (Cascading Style Sheets)** is the traditional way of styling web pages. It’s a style sheet language used for describing the look and formatting of a document written in HTML.
<u>Key Features</u>
1. **Selectors and Properties**: CSS uses selectors to target HTML elements and apply styles using various properties.
2. **Cascading and Specificity**: The “cascading” part of CSS allows styles to cascade down from parent elements to children, with specificity rules determining which styles are applied.
3. **Flexibility**: CSS provides a high degree of flexibility, allowing for detailed control over the appearance of elements.
4. **Media Queries**: CSS supports responsive design through media queries, allowing styles to change based on screen size or other characteristics.
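For illustration, here is a small hand-written stylesheet exercising these features (the class names are hypothetical):

```css
/* Plain CSS: a selector targets elements and applies properties */
.card {
  padding: 1rem;
  color: #333;
}

/* Specificity: this more specific selector wins over .card for titles */
.card .card-title {
  font-size: 1.25rem;
  font-weight: 700;
}

/* Media query: restyle the same element on wider screens */
@media (min-width: 768px) {
  .card {
    padding: 2rem;
  }
}
```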
<u>Strengths</u>
- **Detailed Control**: CSS gives developers precise control over the styling of each element.
- **Widely Supported**: As a core web technology, CSS is universally supported by all browsers.
- **Mature Ecosystem**: There’s a wealth of resources, frameworks, and tools available to help with CSS development.
<u>Use Cases</u>
- **Comprehensive Styling**: Suitable for projects that require detailed and specific styling.
- **Legacy Projects**: Essential for maintaining and updating older websites.
<u>Tailwind CSS: The Utility-First Framework</u>
**Tailwind CSS** is a utility-first CSS framework that provides low-level utility classes to build custom designs without writing traditional CSS.
**<u>Key Features</u>**
1. **Utility Classes**: Tailwind provides a vast array of utility classes for common styling needs, such as padding, margin, colors, and typography.
2. **Responsive Design**: Tailwind’s responsive utilities allow for easy development of responsive designs without writing media queries.
3. **Customization**: Tailwind is highly customizable, allowing developers to configure and extend the framework to meet their needs.
4. **Component-Based**: Tailwind encourages a component-based approach, making it a natural fit for modern frontend frameworks like React.
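As a sketch, the utility-first approach puts styling directly in the markup; `md:p-8` applies only at the `md` breakpoint, replacing a hand-written media query:

```html
<!-- Hypothetical card: spacing, color, and typography via utilities;
     md:p-8 kicks in at Tailwind's md breakpoint (768px by default) -->
<div class="p-4 md:p-8 text-gray-700 rounded-lg shadow">
  <h2 class="text-xl font-bold">Card title</h2>
  <p class="mt-2">Card body text.</p>
</div>
```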
**<u>Strengths</u>**
- **Rapid Development**: Tailwind’s utility classes enable rapid prototyping and development.
- **Consistent Styling**: Ensures consistent styling across a project without the need for writing extensive custom CSS.
- **Ease of Maintenance**: Utility classes reduce the need for complex CSS, making styles easier to maintain.
**<u>Use Cases</u>**
- **Prototyping**: Ideal for quickly creating prototypes and iterating on designs.
- **Modern Web Apps**: Well-suited for use with component-based frameworks like React.
**<u>React at HNG Internship</u>**
At the HNG Internship, React.js is extensively used for building modern web applications. React’s component-based architecture pairs well with both Tailwind CSS and traditional CSS, depending on the project’s needs.
<u>Integrating Tailwind CSS with React</u>
- **Component Styling**: Tailwind’s utility classes can be directly applied to React components, making it easy to style components consistently.
- **Custom Components**: Tailwind’s customization options allow for creating reusable, styled components in React.
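Because Tailwind styles live in class strings, conditional styling in React often comes down to string composition. A minimal sketch (the `cx` helper name and the class names are illustrative):

```javascript
// Hypothetical helper: join utility classes, skipping falsy values
// so conditional classes can be written inline.
function cx(...classes) {
  return classes.filter(Boolean).join(" ");
}

// Inside a React component this might be used as:
// <button className={cx("px-4 py-2 rounded", isPrimary && "bg-blue-600 text-white")}>
//   Save
// </button>

console.log(cx("px-4 py-2 rounded", false, "bg-blue-600 text-white"));
// → "px-4 py-2 rounded bg-blue-600 text-white"
```

Libraries like `clsx` offer the same idea with more features, but the underlying mechanism is just this.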
<u>Integrating Traditional CSS with React</u>
- **CSS Modules**: Traditional CSS can be scoped to individual components using CSS modules, preventing style conflicts.
- **Styled Components**: Libraries like styled-components enable writing traditional CSS in JavaScript, providing the benefits of both approaches.
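As a sketch, a CSS Module pairs a scoped stylesheet with a component; the file and class names below are hypothetical:

```css
/* Button.module.css — class names are hashed at build time, so they
   never collide with other .primary classes elsewhere in the app */
.primary {
  background: #2563eb;
  color: #fff;
  padding: 0.5rem 1rem;
}
```

In the component, `import styles from "./Button.module.css"` and apply `className={styles.primary}` — the typical CSS Modules setup in Create React App or Next.js.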
<u>Why I’m Excited About React and Styling</u>
As I delve deeper into React.js during the HNG Internship, I am excited about the flexibility and power that both Tailwind CSS and traditional CSS offer. Tailwind’s utility-first approach speeds up development, while traditional CSS provides the control needed for complex designs.
**_<u>Conclusion</u>_**
Both Tailwind CSS and traditional CSS have their unique strengths and use cases. Tailwind’s utility-first approach accelerates development and ensures consistent styling, while traditional CSS offers detailed control and is essential for legacy projects.
As I continue my journey with the HNG Internship, I look forward to mastering these styling techniques and integrating them into my React projects. If you’re interested in learning more about the HNG Internship and how it can accelerate your development career, check out [HNG Internship](https://hng.tech/internship) and [HNG Premium](https://hng.tech/premium).

*Author: samuel_adedigba_703233*

---

# Secure Coding - Beyond the Surface with Snyk

*Published 2024-06-30 | Tags: webdev, javascript, beginners, podcast | https://codingcat.dev/podcast/secure-coding-beyond-the-surface-with-snyk*
Original: https://codingcat.dev/podcast/secure-coding-beyond-the-surface-with-snyk
{% youtube https://www.youtube.com/embed/u0aC9OqSOz4 %}
## Summary
* 🎤 **Introduction to Snyk:** The video starts with an introduction to Snyk and its capabilities in secure coding. It highlights the importance of secure development practices and how Snyk aids in identifying and fixing vulnerabilities in code and open source dependencies.
* 🌐 **Guest Introduction:** Ryan Clark is introduced, sharing his background and experience in software development, including his work at Disney and Microsoft. His journey to becoming a developer advocate is discussed.
* 💻 **Experience at Disney:** Ryan shares insights from his time at Disney, emphasizing the importance of secure coding and how he got started with application security.
* 🚀 **Transition to Microsoft:** Discussion on Ryan's transition to Microsoft and his role in developer advocacy. He talks about the opportunities and challenges he faced while promoting secure coding practices.
* 🐱💻 **Working with Snyk:** The video dives into Ryan's current role at Snyk, discussing how Snyk integrates with various development environments and the benefits it offers to developers.
* 📈 **Snyk's Capabilities:** Detailed explanation of Snyk’s features, including its ability to scan for vulnerabilities in code, open source dependencies, and container configurations. The video showcases how Snyk can be integrated into CI/CD pipelines.
* 🧑🏫 **Educational Aspect:** The educational aspect of developer advocacy is highlighted, focusing on how developers can learn and improve their security practices through tools like Snyk.
* 🛠 **Live Demo:** A live demo of Snyk’s integration with Visual Studio Code is shown. The process of identifying and fixing vulnerabilities in a project using Snyk’s tools is demonstrated.
* 🔧 **Practical Tips:** Practical tips on using Snyk effectively, including setting up GitHub integrations and using Snyk CLI for deeper analysis.
### Why Snyk
Secure coding is a critical aspect of software development that ensures applications are protected from vulnerabilities and attacks. Snyk is a powerful tool that helps developers identify and fix security issues in their code, open source dependencies, and container configurations. In this article, we will explore the key features of Snyk and how it aids in secure coding practices.
### Understanding Snyk
Snyk is a developer-first security platform that seamlessly integrates with various development environments. It scans for vulnerabilities in code, open source libraries, and container images, providing actionable insights to developers. Snyk supports a wide range of programming languages, including JavaScript, Python, Java, and more.
### Key Features of Snyk
1. **Vulnerability Scanning:** Snyk scans your code and dependencies for known vulnerabilities. It provides detailed information on the nature of each vulnerability and how it can be exploited.
2. **Integration with Development Tools:** Snyk integrates with popular development tools such as Visual Studio Code, GitHub, GitLab, and CI/CD pipelines. This allows developers to identify and fix vulnerabilities during the development process.
3. **Automatic Fixes:** Snyk offers automated fixes for many vulnerabilities. It can open pull requests with the necessary changes to update vulnerable dependencies.
4. **Continuous Monitoring:** Snyk continuously monitors your projects for new vulnerabilities, ensuring that you are always aware of potential security issues.
### Practical Tips for Using Snyk
* **Integrate Early:** Integrate Snyk into your development environment early in the development process to catch vulnerabilities before they make it to production.
* **Use CLI Tools:** Snyk’s CLI tools provide deeper analysis and can be integrated into your CI/CD pipelines for automated security checks.
* **Educate Your Team:** Promote security awareness within your development team. Use Snyk’s educational resources to stay updated on the latest security practices.
* **Regular Audits:** Regularly audit your projects with Snyk to ensure that dependencies are up-to-date and free from vulnerabilities.
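As one concrete wiring, Snyk publishes GitHub Actions that can gate a pipeline on vulnerability findings. The sketch below follows Snyk's documented pattern, but the action path and secret name should be checked against their current docs for your setup:

```yaml
# Hypothetical GitHub Actions workflow: fail the build if Snyk
# finds known vulnerabilities in a Node.js project's dependencies.
name: security
on: [push, pull_request]
jobs:
  snyk:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Snyk vulnerability scan
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
```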
### Conclusion
Snyk is an invaluable tool for developers who want to ensure the security of their applications. By integrating Snyk into your development workflow, you can identify and fix vulnerabilities early, reducing the risk of security breaches. Continuous learning and proactive security measures are essential for maintaining secure software development practices. | codercatdev |
1,906,801 | Bringing Web Pages to Life with CSS Animations: A Step-by-Step Guide | Animation is the art of creating an illusion of movement from still characters. It is a process that... | 0 | 2024-06-30T17:22:11 | https://dev.to/kemiowoyele1/bringing-web-pages-to-life-with-css-animations-a-step-by-step-guide-3o8o | Animation is the art of creating an illusion of movement from still characters. It is a process that causes still characters to be rendered in such a way that they change form gradually. The term animation is derived from the Latin word “animare”, which means “to give life to”.
Animations are very important in any field that requires visual presentation of results. They have become more prevalent in creating educational materials, advertising, marketing, simulation training, data visualization, game development and of course, in entertainment.
In web development, animations are used to add life to our web pages. Animated elements can be more attention-grabbing and interactive, and can create a better user experience than still characters.
For instance;
• Animations can guide users on how to navigate through the website.
• They can help users know when to be a little patient (like a loading animation).
• They can provide feedback, like letting users know that a message has been sent.
• They can make the page more beautiful and captivating, thereby encouraging users to spend more time on it.
In CSS, animation is the gradual change in a style or set of styles applied to an HTML element. To animate an element with CSS, you define the animation properties on the element, then define keyframes that describe the steps of the animation and the properties to be animated.
## CSS keyframes
Keyframes define the start point, end point, and any intermediate steps of a specific animation or transition.
Syntax
```
@keyframes animation-name {
from {
property: initial value;
property2: initial value;
etc.
}
to {
property: final value;
property2: final value;
etc.
}
}
```
This is to say that the styles applied in the “from” block should gradually morph into the styles applied in the “to” block. The “from” and “to” keywords are not so commonly used; what you are most likely to come across or use are percentage values.
```
@keyframes animation-name {
0% {
property: value;
property2: value;
etc.
}
100% {
property: value;
property2: value;
etc.
}
}
```
Percentage values allow you to have as many transitional points as you require.
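For instance, a hypothetical three-stop animation that pulses an element and returns it to its starting state:

```
@keyframes pulse {
0% {
transform: scale(1);
opacity: 1;
}
50% {
transform: scale(1.2);
opacity: 0.6;
}
100% {
transform: scale(1);
opacity: 1;
}
}
```

The browser interpolates smoothly between each pair of adjacent stops.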
## CSS animation properties
## animation-name
This is the name of the animation. You can give your animation any name you like. This name will serve as the identifier when applying the keyframes and styles to the animated element.
Syntax
```
animation-name: animate;
```
Then use the name as the identifier for the keyframes
```
@keyframes animate {
0% {
border-radius: 0;
}
100% {
border-radius: 50%;
}
}
```
Using the animation-name property alone, you will not see any animation. For the animation to take effect, you have to at least set a duration for it.
## animation-duration
This is used to determine how long the animation should run. It is usually measured in seconds (s) or milliseconds (ms). 1000 milliseconds is equal to 1 second.
syntax
```
animation-duration: 10000ms;
```
or
```
animation-duration: 10s;
```
**example**
HTML
```
<div class="box"></div>
```
CSS
```
.box {
height: 300px;
width: 300px;
background-color: red;
border-radius: 50%;
animation-name: animate;
animation-duration: 2s;
}
@keyframes animate {
0% {
border-radius: 0;
}
100% {
border-radius: 50%;
}
}
```
The div will gradually change shape from a square to a circle in 2 seconds.
## animation-delay
If animation-delay is set on an animation, the animation will wait for the set time before it starts. If the animation is set to repeat, only the first instance will be delayed; the subsequent iterations will not be. The animation-delay property accepts negative values. Hence, a delay of 2s will cause the animation to wait for 2 seconds before starting, while an animation-delay of -2s will cause the animation to start immediately, as though it had already been running for 2 seconds.
**Syntax**
`animation-delay: <time>;`
example
```
animation-name: animate;
animation-duration: 2s;
animation-delay: 0.5s;
```
## animation-play-state
animation-play-state enables you to pause or play an animation as you need to. When you pause an animation, it will remain at the point it had reached when the pause was activated, and resume from the same place as soon as the state is switched back to running. The two major values for this property are “paused” and “running”.
For example, you could set the animation-play-state of an animated element to paused when it is hovered on, or you can use JavaScript to dynamically change the animation state based on interaction.
Example
```
.box {
height: 300px;
width: 300px;
background-color: red;
border-radius: 50%;
animation-name: animate;
animation-duration: 2s;
animation-play-state: running;
}
.box:hover {
animation-play-state: paused;
}
@keyframes animate {
0% {
border-radius: 0;
}
100% {
border-radius: 50%;
}
}
```
## animation-iteration-count
This property is used to set the number of times that the animation will run. The value must be a positive number or the keyword infinite, which causes the animation to keep running indefinitely.
Example
```
.box {
height: 300px;
width: 300px;
background-color: red;
animation-name: animate;
animation-duration: 2s;
animation-iteration-count: 6;
}
@keyframes animate {
0% {
border-radius: 0;
}
100% {
border-radius: 50%;
}
}
```
In this example, the animation will run six times, with each run lasting 2 seconds.
## animation-timing-function
animation-timing-function is used to set how the speed of the animation changes over its duration. Values for this property include;
**ease:** speed increases towards the middle of the animation, then slows down towards the end. This is the default value.
**ease-in:** starts slowly, then speeds up towards the end.
**ease-out:** starts quickly, then slows down.
**ease-in-out:** starts slowly, speeds up, then slows down again.
**linear:** the speed is even all through.
**steps():** the animation moves in discrete jumps according to the number of steps set.
**cubic-bezier():** used to define a custom speed curve.
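As a quick sketch, each value is set through the same property (the cubic-bezier numbers below are arbitrary, chosen only for illustration):

```
/* pick whichever suits the effect */
animation-timing-function: linear;
animation-timing-function: steps(5);              /* 5 discrete jumps */
animation-timing-function: cubic-bezier(0.25, 0.1, 0.25, 1); /* custom curve */
```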
## animation-direction
animation-direction is used to set whether the animation should play forward, backward, or alternate between the two. Values for this property include;
**normal:** The animation will play forward.
**reverse:** The animation will play backward.
**alternate:** The animation will play forward on the first cycle, then backward on the second cycle, and so on.
**alternate-reverse:** The animation will play backward on the first cycle, then forward on the second cycle, and so on.
## animation-fill-mode
animation-fill-mode sets the styles applied to the element outside the animation's execution, before it starts and after it completes. It determines whether the element goes back to its original style or keeps one of the styles specified in the animation keyframes. Values for the animation-fill-mode property include;
**forwards:** The animation will apply the styles from the last keyframe to the element after the animation executes.
**backwards:** The animation will apply the styles from the first keyframe to the element before the animation executes.
**both:** The animation will apply the styles from the first keyframe to the element before the animation executes and the styles from the last keyframe to the element after the animation executes.
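A small sketch of forwards in action; the element stays hidden after the fade-out completes (the class and animation names are illustrative):

```
.banner {
animation-name: fade-out;
animation-duration: 1s;
animation-fill-mode: forwards;
}
@keyframes fade-out {
0% {
opacity: 1;
}
100% {
opacity: 0;
}
}
```

Without forwards, the banner would snap back to full opacity the moment the animation finishes.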
## animation shorthand property
The animation shorthand property is a CSS property that allows you to define multiple animation properties in a single declaration. It is a concise way to define animations, making it easier to write and maintain code.
**Syntax**
CSS
```
animation: <name> <duration> <timing-function> <delay> <iteration-count> <direction> <fill-mode> <play-state>;
```
**Example**
```
animation: animate 2s linear 1s 3 alternate forwards running;
```
It is not compulsory to use all the values. Default values will be applied in place of the omitted values.
## Multiple CSS animations on the same element:
You can apply multiple animations to the same element. All you have to do is separate the values with a comma.
**Example**
```
animation: animate-one 2s linear 1s 3 alternate forwards running,
animate-two 4s steps(5) 4s 3 reverse forwards running;
```
## Illustration
To illustrate all we have been learning so far, we are going to make a simple text typing animation example.
To start with, we create an html page with a simple short text.
**HTML**
```
<h1>CSS ANI<span>MA</span>TION.</h1>
```
Then to add styles, we add some styles to the body tag to remove margins and add background color.
**CSS**
```
body {
margin: 0;
padding: 0;
color: tan;
background-color: rgb(2, 2, 33);
}
```
Add basic styles to the h1 tag;
```
h1 {
overflow: hidden;
white-space: nowrap;
font-family: consolas;
border-right: 4px solid tan;
width: 14ch;
margin-left: 50px;
font-size: 6rem;
}
```
The hidden overflow ensures that we cannot see any text that ventures outside the width of the h1. We set white-space to nowrap so that all our text stays on a single line, and we set the width to 14 characters because that is the total number of characters we want to implement the typing animation on. Next, we add our animation styles to the h1 styles.
**CSS**
```
h1 {
animation-name: type;
animation-duration: 3s;
animation-timing-function: steps(14);
animation-fill-mode: forwards;
animation-iteration-count: 5;
animation-direction: alternate;
overflow: hidden;
white-space: nowrap;
font-family: consolas;
border-right: 4px solid tan;
width: 14ch;
margin-left: 50px;
font-size: 6rem;
}
```
Here, the animation name is type. The name is used as the identifier for the keyframes:
```
@keyframes type {
0% {
width: 0ch;
border-right: 4px solid tan;
}
99% {
width: 14ch;
border-right: 4px solid tan;
}
100% {
width: 14ch;
border-right: none;
}
}
```
The animation will last for 3 seconds, and it will make 14 moves, one move per character, within that time frame. This will happen 5 times, as we have set the iteration count to 5. The animation will also appear as though the typed text is erased and rewritten a couple of times, because we have set the direction to alternate. To illustrate animation-play-state, we will add a hover effect that causes the animation to pause when we hover over it.
```
h1:hover {
animation-play-state: paused;
filter: hue-rotate(60deg);
}
```
We can also add another animation with a few styles to the span tag with the animation shorthand property.
```
h1 span {
animation: rotate-hue 1s linear 2s infinite normal forwards;
text-shadow: 0 5px 15px rgb(114, 255, 6);
-webkit-text-stroke: 1px rgb(250, 0, 79);
}
@keyframes rotate-hue {
0% {
filter: hue-rotate(0deg);
}
100% {
filter: hue-rotate(360deg);
}
}
```
**Output**

## Advantages of CSS animations
1. Flexibility: CSS animations can be used to animate a wide range of properties, including transforms, colors, and more.
2. Customizable: CSS animations can be customized using various timing functions, delays, and iteration counts.
3. Interactive: CSS animations can be interactive, responding to user input and hover states.
4. No JavaScript required: CSS animations can be created without any JavaScript code.
5. Improved user experience: CSS animations can enhance the user experience by providing visual feedback and smooth transitions.
6. Cross-browser support: CSS animations are widely supported by modern browsers, including Chrome, Firefox, Safari, and Edge.
## Disadvantages of CSS animation
1. Browser compatibility issues: While CSS animations are widely supported, there may be compatibility issues with older browsers.
2. Performance issues with complex animations: Complex CSS animations can lead to performance issues, such as slow frame rates or lag.
3. Limited support for conditional animations: CSS animations have limited support for conditional animations, making it difficult to animate elements based on specific conditions.
4. Limited support for complex animations: CSS animations are best suited for simple animations, and complex animations may require JavaScript.
## Cautions to take when animating elements:
1. Consider accessibility: Ensure animations don't cause accessibility issues, such as seizure triggers or distracting visuals.
2. Avoid over-animating: Too many animations can cause visual overload and slow performance.
3. Be cautious with animation durations: Long animation durations can lead to slow performance and visual lag.
4. Be mindful of browser support: Ensure animations work across different browsers and versions.
5. Note that not all CSS properties can be animated. Some CSS properties that cannot be smoothly animated are;
1) display
2) overflow
3) position
4) visibility (it can only toggle discretely between keyframes)
5) cursor
6) pointer-events
7) box-sizing
8) direction
9) unicode-bidi
10) writing-mode
11) user-select
Also note that some properties older references list as non-animatable, such as filter, clip-path, and z-index, are animatable in modern browsers.
6. Check the MDN documentation or other resources for the most up-to-date information on animatable properties, as browser support and specifications can change over time.
## Conclusion
In conclusion, CSS animations are a versatile tool that can be used to create a wide range of effects, from subtle interactions to complex animations. By leveraging the techniques and properties discussed in this article, developers can create web pages that are not only visually stunning but also provide a rich and engaging user experience.
| kemiowoyele1 | |
1,906,791 | Useful aliases for docker | Docker has been there for a long time and its my top most used tool whether for spinning up a web... | 0 | 2024-06-30T17:20:11 | https://dev.to/rubiin/useful-aliases-for-docker-3kli | docker, cli, devops | ---
title: Useful aliases for docker
published: true
description:
tags: docker,cli, devops
cover_image: https://blog.codewithdan.com/wp-content/uploads/2023/06/Docker-Logo-1024x576.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-30 17:15 +0000
---
Docker has been around for a long time, and it's my most-used tool, whether for spinning up a web server or trying out a new tool.
If you are like me and use Docker in your day-to-day dev workflow, these aliases will help you save a few keystrokes for common use cases and save you time.
You can set them up in your shell configuration file (like .bashrc, .zshrc, etc.):
### Get latest container ID
```bash
alias dl="docker ps -l -q"
```
### Get container process
```bash
alias dps="docker ps"
```
### Get process included stop container
```bash
alias dpa="docker ps -a"
```
### Get images
```bash
alias di="docker images"
```
### Get container IP
```bash
alias dip="docker inspect --format '{{ .NetworkSettings.IPAddress }}'"
```
### Run daemonized container, e.g., $dkd base /bin/echo hello
```bash
alias dkd="docker run -d -P"
```
### Run interactive container, e.g., $dki base /bin/bash
```bash
alias dki="docker run -i -t -P"
```
### Execute interactive container, e.g., $dex base /bin/bash
```bash
alias dex="docker exec -i -t"
```
### Stop all containers
```bash
alias dstop='docker stop $(docker ps -a -q)'
```
### Remove all containers
```bash
alias drm='docker rm $(docker ps -a -q)'
```
### Stop and Remove all containers
```bash
alias drmf='docker stop $(docker ps -a -q) && docker rm $(docker ps -a -q)'
```
### Remove all images
```bash
alias dri='docker rmi $(docker images -q)'
```
### Dockerfile build, e.g., $dbu tcnksm/test
```bash
# a function is used here because aliases cannot take positional arguments like $1
dbu() { docker build -t "$1" . ; }
```
### Show all alias related docker
``` bash
dalias() { alias | grep 'docker' | sed "s/^\([^=]*\)=\(.*\)/\1 => \2/" | sed "s/['|\']//g" | sort; }
```
### Bash into running container
```bash
# a function is used here because aliases cannot take positional arguments like $1
dbash() { docker exec -it "$(docker ps -aqf "name=$1")" bash ; }
```
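One extra pattern worth sketching, under the assumption that you share your dotfiles across machines: guard the aliases so they are only defined where Docker is actually installed.

```bash
# Only define the docker aliases when docker is available on this machine.
if command -v docker >/dev/null 2>&1; then
  alias dps="docker ps"
  alias dpa="docker ps -a"
fi
```

This keeps your shell startup clean on servers or containers that have no Docker CLI.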
| rubiin |
1,906,789 | Angular vs. Ember.js: A Comparison of Frontend Frameworks | Comparing Angular and Ember.js Two Frontend Frameworks Angular: The All-Inclusive... | 0 | 2024-06-30T17:13:48 | https://dev.to/muritala_ahmed_23d51e7a3b/angular-vs-emberjs-a-comparison-of-frontend-frameworks-3cb6 | webdev, programming, react, frontend |
## Comparing Angular and Ember.js: Two Frontend Frameworks
**Angular: _The All-Inclusive Framework_**
**Overview**
Angular, developed and maintained by Google, is a comprehensive frontend framework that offers a complete solution for building dynamic web applications. Since its initial release in 2010, Angular has undergone significant transformations, with Angular 2+ being a complete rewrite from AngularJS.
**Key Features**
- Component-Based Architecture
- Dependency Injection
- Two-Way Data Binding
- Angular CLI
**Benefits**
- Robust Ecosystem
- Strong Community Support
- Scalability
- TypeScript Support
**Use Cases**
Angular is ideal for developing enterprise-level applications, complex SPAs, and applications that require a robust structure and maintainability. Companies like Google, Microsoft, and IBM use Angular to build large-scale applications.
**Ember.js**: _The Convention Over Configuration Framework_
**Overview**
Ember.js, created by Yehuda Katz, is an opinionated framework that promotes convention over configuration. It provides a structured and standardized approach to building ambitious web applications. Ember's stability and backward compatibility are key features that ensure long-term viability.
**Key Features**
- Convention Over Configuration
- Ember CLI
- Handlebars Templating
- Data Layer (Ember Data)
**Benefits**
- Strong Conventions
- Stability and Backward Compatibility
- Comprehensive Documentation
- Active Community
**Use Cases**
Ember.js is well-suited for building complex, data-driven applications. Its conventions and structure make it an excellent choice for large teams and long-term projects. Companies like LinkedIn, Netflix, and Heroku use Ember.js to create scalable and maintainable applications.
**Comparing Angular and Ember.js**
While both Angular and Ember.js are robust frameworks, they have distinct philosophies and strengths:
- `Complexity`: Angular offers a comprehensive solution with a steeper learning curve, whereas Ember.js provides strong conventions to streamline development.
- `Flexibility`: Angular's flexibility allows for a wide range of use cases, while Ember.js's opinionated nature ensures consistency and best practices.
- `Tooling`: Both frameworks offer powerful CLI tools, but Ember CLI is known for its ease of use and extensive addons.
_**My Expectations with React in HNG**_
As part of the HNG Internship, I look forward to putting knowledge into action with React. Given its component-based architecture and powerful ecosystem, React is an excellent tool for building scalable and maintainable web applications. I'm excited to work on real-world projects, collaborate with other talented interns, and gain hands-on experience that will enhance my skills as a front-end developer.
The HNG Internship provides a fantastic platform to learn, grow, and network with like-minded people. To learn more about the HNG Internship, visit the [HNG Internship website](https://hng.tech/internship) and explore the opportunities to [hire top talents](https://hng.tech/hire) or [access premium resources](https://hng.tech/premium).
_Conclusion_
Both Angular and Ember.js are powerful frameworks that cater to different project needs. Angular's comprehensive ecosystem and flexibility make it suitable for large-scale applications, while Ember.js's conventions and stability ensure consistent and maintainable development. By exploring these frameworks, developers can choose the best tools for their specific requirements and create exceptional user experiences.
As I embark on this journey with the HNG Internship, I'm excited to apply what I learn, build amazing projects with React, and contribute to the vibrant community of frontend developers. | muritala_ahmed_23d51e7a3b |
1,906,788 | Starting Your AWS Adventure: Essential First Steps | Amazon Web Services (AWS) is widely recognized as a leading cloud platform, providing a wide array of... | 0 | 2024-06-30T17:11:35 | https://dev.to/gauravk_/starting-your-aws-adventure-essential-first-steps-39l8 | cloud, cloudcomputing, aws, webdev | Amazon Web Services (AWS) is widely recognized as a leading cloud platform, providing a wide array of over 200 services from data centers located globally. Whether you are a novice in cloud computing or a seasoned developer aiming to enhance your expertise, having a well-organized roadmap can be instrumental in effectively exploring the diverse range of AWS services and tools. This article offers a comprehensive guide for mastering AWS, accompanied by practical project illustrations to reinforce your comprehension.
## Why Learn AWS?
Amazon Web Services (AWS) is a leading contender in the field of cloud computing, with widespread usage by companies of various scales around the world. Acquiring proficiency in AWS can open doors to numerous prospects, spanning from creating adaptable applications to supervising extensive infrastructure. Below is a comprehensive manual to assist you in commencing and progressing with AWS.
## 1. Start with the Basics
### Understand Cloud Computing Concepts
Before diving into AWS, it's essential to have a fundamental understanding of cloud computing concepts such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
### AWS Free Tier
Leverage the AWS Free Tier to experiment with various services without incurring costs. The free tier provides access to many AWS services for a year, allowing you to learn and build without financial pressure.
### Core Services to Learn First
- **Amazon EC2 (Elastic Compute Cloud):** Understand how to provision virtual servers.
- **Amazon S3 (Simple Storage Service):** Learn about scalable storage solutions.
- **Amazon RDS (Relational Database Service):** Explore managed databases.
**Project Example:** Set up a static website hosted on Amazon S3 and use Amazon Route 53 for DNS routing.
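As a taste of what that project involves, the bucket policy that allows public reads is the piece people usually stumble on. A minimal sketch (the bucket name is a placeholder you would replace with your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-example-site-bucket/*"
    }
  ]
}
```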
## 2. Dive Deeper into Key Services
### Compute
- **EC2 Auto Scaling:** Learn how to scale your instances based on demand.
- **AWS Lambda:** Explore serverless computing to run code without provisioning servers.
### Storage
- **Amazon EBS (Elastic Block Store):** Understand block storage volumes for use with EC2.
- **Amazon Glacier:** Learn about long-term storage solutions.
### Databases
- **Amazon DynamoDB:** Explore NoSQL databases.
- **Amazon Aurora:** Dive into high-performance managed databases.
**Project Example:** Create a serverless application using AWS Lambda, API Gateway, DynamoDB, and S3 to handle file uploads and metadata storage.
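To make the serverless example concrete, here is a minimal sketch of what the Lambda handler behind API Gateway might look like. The event shape mirrors an API Gateway proxy request; the field names and overall flow are illustrative assumptions, and the real project would also write to DynamoDB and S3:

```python
import json

def handler(event, context):
    # API Gateway proxy integrations deliver the request body as a JSON string.
    body = json.loads(event.get("body") or "{}")
    filename = body.get("filename", "unknown")
    # Real project: put the file in S3 and the metadata in DynamoDB here.
    return {
        "statusCode": 200,
        "body": json.dumps({"stored": filename}),
    }
```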
## 3. Networking and Content Delivery
### Key Services
- **Amazon VPC (Virtual Private Cloud):** Learn to isolate and control network configurations.
- **Amazon CloudFront:** Understand content delivery networks (CDN) for faster content distribution.
- **Elastic Load Balancing:** Distribute incoming traffic across multiple targets.
**Project Example:** Set up a VPC with public and private subnets, configure security groups, and use CloudFront to serve a web application globally with low latency.
## 4. Security and Identity Management
### Key Services
- **AWS IAM (Identity and Access Management):** Master user, group, and role management for secure access.
- **AWS KMS (Key Management Service):** Learn about managing cryptographic keys for data encryption.
**Project Example:** Implement a secure login system for your application using AWS Cognito for user authentication and authorization.
## 5. Monitoring and Management
### Key Services
- **Amazon CloudWatch:** Monitor your AWS resources and applications.
- **AWS CloudTrail:** Track user activity and API usage for security and compliance.
- **AWS Config:** Assess, audit, and evaluate the configurations of your AWS resources.
**Project Example:** Set up CloudWatch Alarms to monitor EC2 instance health and use CloudTrail to log all API calls made in your AWS account.
## 6. Advanced Topics and Specializations
### Big Data and Analytics
- **Amazon Redshift:** Understand data warehousing solutions.
- **Amazon EMR (Elastic MapReduce):** Learn about big data processing using Hadoop and Spark.
### Machine Learning
- **Amazon SageMaker:** Explore building, training, and deploying machine learning models.
- **AWS Rekognition:** Understand image and video analysis.
### DevOps
- **AWS CodePipeline:** Learn about continuous integration and delivery.
- **AWS CloudFormation:** Master infrastructure as code to automate resource provisioning.
**Project Example:** Build a data pipeline that ingests, processes, and analyzes streaming data using Amazon Kinesis, Lambda, and Redshift.
## 7. Certifications and Continuous Learning
### AWS Certifications
Pursuing AWS certifications can validate your expertise and enhance your career prospects. Start with the **AWS Certified Cloud Practitioner** for foundational knowledge, then progress to associate and specialty certifications based on your career goals.
### Community and Resources
Engage with the AWS community through forums, blogs, and meetups. Utilize resources like AWS documentation, online courses (e.g., Coursera, Udemy), and hands-on labs.
## Conclusion
Becoming proficient in AWS is an ongoing journey that requires consistent learning and practical experience. By following this roadmap and engaging in real-world projects, you can establish a strong foundation in AWS and enhance your skills to handle complex cloud challenges. Embrace the cloud, and enjoy your learning journey!
*Stay tuned for more insights and tutorials on cloud technologies, Game and Web Development*
| gauravk_ |
1,901,729 | All 29 Next.js Mistakes Beginners Make | Next.js introduces a lot of new concepts such as server components, server actions, suspense and... | 0 | 2024-06-30T17:10:32 | https://dev.to/azeem_shafeeq/all-29-nextjs-mistakes-beginners-make-56nj | beginners, programming, tutorial, nextjs |
Next.js introduces a lot of new concepts such as server components, server actions, suspense and streaming, static and dynamic rendering, and much more. It's very easy to make mistakes even if you're an experienced developer. In this blog post, we'll go through 29 common mistakes beginners make in Next.js and how to avoid them.
1. Putting the use client Directive Too High
When you add the use client directive too high up in your component tree, it can inadvertently turn server components into client components. This can lead to performance issues as server components have benefits like fetching data directly on the server and keeping large dependencies on the server. Always try to keep the use client directive at the edges of your component tree.
2. Not Refactoring for Client Components
If you need to add interactivity to a part of your page, you might be tempted to add use client to the top of your file. Instead, create a new component for the interactive part and add use client there. This way, only the necessary parts of your application become client components.
3. Misunderstanding Component Types
A component without use client is not necessarily a server component. If it is imported into a file with use client, it becomes a client component. Always check where your components are being imported to understand their type.
4. Wrapping Server Components in Client Components
Wrapping a server component inside a client component does not automatically make it a client component. Server components can stay server components even when rendered within client components, as long as they are passed as children.
5. Using State Management on the Server Side
State management solutions like Context API, Zustand, and Redux only work on the client side. The server operates on a request-response cycle and does not keep track of state between requests.
6. Misusing the use server Directive
The use server directive is not for making a component a server component. Everything in the Next.js app router is a server component by default. Using use server creates a server action, which exposes a POST endpoint on your server.
7. Leaking Sensitive Data
Be cautious when passing data from server components to client components. Sensitive data like passwords can be exposed on the client side. Always validate and sanitize your data.
8. Misunderstanding Client Component Execution
Client components run both on the server (for pre-rendering) and on the client. This can lead to confusion, especially when using browser APIs like localStorage.
9. Incorrectly Using Browser APIs
Browser APIs like localStorage are not available on the server. Use checks like typeof window !== 'undefined' or use useEffect to ensure these APIs are only accessed on the client.
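A minimal sketch of such a guard in plain TypeScript (readTheme and the "theme" key are hypothetical; globalThis is used so the snippet also type-checks outside the DOM typings):

```typescript
// Guard browser-only APIs so the same code survives server pre-rendering.
function readTheme(): string {
  const g = globalThis as { window?: { localStorage: { getItem(key: string): string | null } } };
  if (typeof g.window === "undefined") {
    return "light"; // sensible default while rendering on the server
  }
  return g.window.localStorage.getItem("theme") ?? "light";
}
```

Inside components, reading such values in a useEffect is usually the safer option, since effects only run on the client.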
10. Getting Hydration Errors
Hydration errors occur when the HTML generated on the server does not match the HTML generated on the client. This can happen if you use browser APIs incorrectly or have mismatched HTML structures.
11. Incorrectly Dealing with Third-Party Components
Third-party components that use React hooks or event handlers need to be wrapped in a client component. If they use browser APIs, consider using dynamic imports to ensure they only run on the client.
12. Using Route Handlers for Data Fetching
You don't need to create separate API routes for data fetching in Next.js. Server components can fetch data directly, simplifying your code.
13. Worrying About Duplicate Data Fetching
Fetching the same data in multiple components is fine. React and Next.js handle caching for you, so you don't need to worry about making multiple fetch calls.
14. Creating Waterfalls When Fetching Data
Avoid sequential data fetching, which can create a waterfall effect and slow down your application. Use Promise.all to fetch data in parallel.
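A sketch of the parallel pattern in plain TypeScript; getUser and getPosts are hypothetical stand-ins for real data calls:

```typescript
async function getUser(): Promise<{ name: string }> {
  return { name: "Ada" };
}

async function getPosts(): Promise<{ title: string }[]> {
  return [{ title: "Hello" }];
}

// Sequential (waterfall): getPosts would only start after getUser finishes.
// Parallel: both start immediately, so the page waits only for the slower one.
async function loadPage() {
  const [user, posts] = await Promise.all([getUser(), getPosts()]);
  return { user, posts };
}
```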
15. Submitting Data to Server Components or Route Handlers
You cannot submit data to server components. Use server actions for data mutations like POST, PUT, and DELETE requests.
16. Confusion When Page Doesn't Reflect Data Mutation
Next.js caches the result of server components. Use revalidatePath to bust the cache and ensure your page reflects data mutations.
17. Thinking Server Actions Are Only for Server Components
Server actions can be used in both server and client components. They are not limited to server components.
18. Forgetting to Validate and Protect Server Actions
Always validate the input data in server actions and ensure proper authentication and authorization checks are in place.
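A sketch of manual input validation for a hypothetical "add todo" action (in a real project you might reach for a schema library instead; the names and limits here are illustrative):

```typescript
type AddTodoInput = { title: string };

function validateAddTodo(data: unknown): AddTodoInput {
  if (typeof data !== "object" || data === null) {
    throw new Error("Invalid payload");
  }
  const title = (data as Record<string, unknown>).title;
  if (typeof title !== "string" || title.length === 0 || title.length > 200) {
    throw new Error("Invalid title");
  }
  // Authentication and authorization checks would also live here.
  return { title };
}
```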
19. Misusing the use server Directive for Server-Only Code
The use server directive creates a server action, not a server component. Use the server-only package to ensure code runs only on the server.
20. Misunderstanding Dynamic Routes
Dynamic routes use square brackets in the file name (e.g., [id].tsx). Use the params prop to access dynamic route parameters.
21. Incorrectly Working with searchParams
Using the searchParams prop in a page component makes the route dynamically rendered. Use the useSearchParams hook in client components to avoid this.
22. Forgetting to Handle Loading States
Use the loading.tsx file to handle loading states for your pages. This provides a better user experience by showing a loading indicator while data is being fetched.
23. Not Being Granular with Suspense
Wrap only the parts of your component tree that need to wait for data fetching in a Suspense component. This prevents blocking the entire page.
24. Placing Suspense in the Wrong Place
Ensure the Suspense component is placed higher in the component tree than the code that fetches data. Otherwise, it won't work as expected.
25. Forgetting the key Prop for Suspense
When using Suspense with dynamic data, use the key prop to ensure React re-triggers the suspense boundary when the data changes.
26. Accidentally Opting Out of Static Rendering
Using features like searchParams, cookies, or headers in your page component opts it out of static rendering. Be mindful of this to avoid unnecessary dynamic rendering.
27. Hardcoding Secrets
Never hardcode secrets in your components. Use environment variables to keep them secure and avoid exposing them to the client.
28. Mixing Client and Server Utilities
Use the server-only package to ensure utility functions that should only run on the server are not accidentally used in client components.
29. Using redirect in try/catch
The redirect function throws an error, which will be caught in a try/catch block, preventing the redirect. Use redirect outside of try/catch blocks.
Conclusion
Next.js is a powerful framework, but it comes with its own set of challenges. By being aware of these common mistakes, you can avoid them and build more efficient and secure applications. If you want to master React and Next.js, consider taking a comprehensive course to deepen your understanding and skills. | azeem_shafeeq |
1,872,203 | What are Api Standards or REST Style? | Hello WebDevs, I bet whichever framework you have been working on, you must have crossed your roads... | 0 | 2024-06-30T17:08:28 | https://dev.to/yogeshgalav7/what-are-api-standards-or-rest-style-39i5 | api, rest, restapi, standard | Hello WebDevs,
I bet that whichever framework you have been working with, you must have crossed paths with APIs and been stuck for a moment wondering what the standard path for an API's URL should be.
Naming is the most difficult and brain-consuming task for programmers, and when it comes to full URLs it becomes even harder, because URLs are not scoped by any block — they are all global and can be confusing if not used consistently.
For this purpose, some elite members of the software community created an architectural style called REST. It's a very small concept, but it becomes difficult when you start using it in a real-world project. Hence I'm writing this blog to simplify things for you. Let's start.
## What is REST?
REST (Representational State Transfer) is a standard that guides the design and development of processes for interacting with data stored on web servers. It simplifies communication by mapping the standard HTTP methods (GET, POST, PUT, PATCH, DELETE) to CRUD operations (Create, Read, Update, Delete) on server data. Think of REST as the set of rules that govern how we communicate with servers using the HTTP protocol.
In simple words,
> It tells us how to change the state of our resources (DB tables, files) on the server with the help of standard API patterns.
Here, the server is the machine we send requests to in order to change the state of our resources.
Now let's jump to real world example:
**1. Fetch Data: Get Request**
- If you want a list of items, like a list of users or posts, or an Excel/CSV export of such a list, always use a GET request.
- If you want to filter or apply conditions to the same list, just use query parameters in the GET request.
- Even if you have a large filter form, try to pass only ids, but still use a GET request. This will help you retain filters and pagination when you navigate back and forth on the website.
- Example 1: /users/list?paginate=3&email=mr.yogesh.galav&name=yogesh
- Example 2: /users/:id/details
- Example 3: /users/:id/orders/csv
**2. Create Data: Post Request**
- It's the most common type of web request, but the real purpose of this type is to create resources on the server: create a DB row, create a user on registration, create a token on login, create or save a file like a profile pic/resume, etc.
- If you want an operation like createOrFind or updateOrCreate, also use a POST request.
- If you don't know what type of request it should be, but you want to send a lot of data to the server, like form data or a file, then use this.
- Example 1: /users/create
- Example 2: /users/:id/orders/create
- Example 3: /users/:id/profile-pic
**3. Update Data: Patch Request**
- A PATCH request is used for updating resources on the server. Most of the time we may think that create and update can share the same controller or business logic, but as our business logic grows it becomes harder to maintain both in the same place, hence a separate route/URL with a PATCH request is beneficial.
- If you want to update a record in the database on the server, use this.
- Example 1: /users/update
- Example 2: /users/:id/orders/update
**4. Replace Data: Put Request**
- Unlike database resources, files are generally replaced on the server when it comes to an update operation.
- If you want to replace a file on the server, use this.
- Example 1: /users/:id/profile-pic/replace
**5. Delete Data: Delete Request**
- This is the simplest type of web request; as the name suggests, it is used to delete resources on the server.
- If you want to delete a database record, a file, or a token on logout, use this.
- Example 1: /logout
- Example 2: /users/:id/delete
- Example 3: /users/:id/history/delete
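One way to keep such paths consistent across a codebase is to centralize URL construction in a small helper. This is a hypothetical sketch, not part of any framework:

```javascript
// Hypothetical helpers for building consistent resource URLs.
function resourcePath(...segments) {
  // e.g. resourcePath('users', 42, 'orders') -> '/users/42/orders'
  return '/' + segments.map((s) => encodeURIComponent(String(s))).join('/');
}

function withQuery(path, params) {
  // Appends filter/pagination parameters as a query string (for GET requests).
  const qs = new URLSearchParams(params).toString();
  return qs ? `${path}?${qs}` : path;
}
```

Keeping URL construction in one place makes it easy to adjust your conventions later without hunting through the whole codebase.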
> These are only standards and not hard and fast rules that will throw an error somewhere, so you can bend them as you like, but the fundamentals should remain the same so that a URL is clearly understandable without having to look into context or code.
I hope you enjoyed reading this.
Thanks & Regards,
Yogesh Galav | yogeshgalav7 |
1,906,786 | MY JOURNEY AS A BACKEND ENGINEER INTERN. | MY JOURNEY AS A BACKEND ENGINEER INTERN. I am Ifeoma Eunice Ugwu, a goal-getter, hardworking,... | 0 | 2024-06-30T17:08:16 | https://dev.to/ifyeunice/my-journey-as-a-backend-engineer-intern-5b4c | webdev, javascript, beginners, programming | MY JOURNEY AS A BACKEND ENGINEER INTERN.
I am Ifeoma Eunice Ugwu, a goal-getter, hardworking, determined, an easy-going person. I studied Metallurgical and Materials Engineering at Nnamdi Azikiwe University. I also studied courses on Backend Engineering (NodeJs, Typescript, NestJs) and frontend courses like HTML, CSS, JavaScript.
Joining the HNG internship program was something I looked forward to. I jumped at the offer when a friend sent me their link https://hng.tech/internship. I registered through this link: https://hng.tech/premium, and followed their procedure judiciously in order to be a part of it, knowing that I would be able to learn to build different APIs, collaborate with others, connect with my fellow engineers, learn to work under pressure and meet targets, and more.
My work as a Backend Engineer was a smooth journey until I encountered a particular problem in Node.js while building a Todo API application. I got stuck trying to query the registered tasks in the app's database.
Firstly, I created tasks using the code below, logging the tasks in order to save them in my database with the help of taskSchema.
```javascript
// to create tasks
app.post("/task", (req, res) => {
  let tasks = req.body;
  tasksModel.create(tasks)
    .then((doc) => {
      res.status(201).send({ message: "Task registered successfully" });
    })
    .catch((err) => {
      console.log(err);
      // send a proper 500 status along with the error message
      res.status(500).send({ message: "Some problem occurred" });
    });
});
```
I then tried to get all tasks using the below code.
```javascript
// get all tasks
app.get("/tasks", (req, res) => {
  // find() with an empty filter returns every task document
  tasksModel.find({})
    .then((tasks) => {
      res.status(200).json({ tasks, message: "Tasks returned successfully" });
      console.log(tasks);
    })
    .catch((err) => {
      console.log(err);
    });
});
```
Here is the code for getting tasks by id:
```javascript
// get task by id
app.get("/tasks/:_id", (req, res) => {
  const _id = req.params._id;
  console.log(_id);
  tasksModel.findOne({ _id })
    .then((task) => {
      res.status(200).json({ task, message: "Task returned successfully" });
    })
    .catch((err) => {
      console.log(err);
    });
});
```
Here is the code for updating the registered tasks.
```javascript
// to update task by id
app.put("/tasks/:_id", (req, res) => {
  const _id = req.params._id;
  const Name = req.body.Name;
  const Status = req.body.Status;
  tasksModel.updateOne({ _id }, { Name, Status })
    .then((result) => {
      res.status(200).json({ result, message: "Task updated successfully" });
      console.log(result);
    })
    .catch((err) => {
      console.log(err);
    });
});
```
Finally, here is the code I used to delete a task by id, completing the app's CRUD operations.
```javascript
// to delete tasks by id
app.delete("/tasks/:_id", (req, res) => {
  const _id = req.params._id;
  tasksModel.deleteOne({ _id })
    .then((result) => {
      res.status(200).json({ result, message: "Task deleted successfully" });
      console.log(result);
    })
    .catch((err) => {
      console.log(err);
    });
});
```
| ifyeunice |
1,906,785 | **Comparing Frontend Technologies: ReactJS vs. Vue.js** | Comparing FrontendTechnologies: ReactJS vs. Vue.js In the ever-evolving world... | 0 | 2024-06-30T17:08:14 | https://dev.to/favour_efemiaya_b11dd87cd/comparing-frontend-technologies-reactjs-vs-vuejs-1ph2 | react, vue | **Comparing Frontend Technologies**: ReactJS vs. Vue.js
In the ever-evolving world of frontend development, choosing the right technology can make a big difference in the success of a project. Two popular frameworks in this space are ReactJS and Vue.js. This article will compare these two technologies, highlighting their differences, strengths, and what makes them unique.
**ReactJS**
ReactJS is a widely used JavaScript library developed by Facebook. It’s known for its component-based architecture and efficiency in building interactive user interfaces.
**Key Features of ReactJS:**
1. **Component-Based Architecture:** React breaks down the UI into reusable components, making it easier to manage and maintain code.
2. **Virtual DOM:** React uses a virtual DOM to efficiently update and render components, improving performance.
3. **Strong Community Support:** With a large community and extensive ecosystem, finding resources, tutorials, and third-party libraries is easy.
4. **Flexibility:** React can be used in various environments, including web, mobile (via React Native), and even VR.
**Why Use ReactJS?**
- **Scalability:** Ideal for large applications with complex UIs.
- **Performance:** Efficient updates with the virtual DOM.
- **Community and Ecosystem:** Extensive resources and third-party libraries.
**Vue.js**
Vue.js is another popular JavaScript framework that is known for its simplicity and ease of integration. It offers a flexible and approachable way to build user interfaces.
**Key Features of Vue.js:**
1. **Reactive Data Binding:** Vue’s reactivity system makes it easy to keep the UI and data in sync.
2. **Component-Based Structure:** Similar to React, Vue uses components to build UIs, promoting code reuse and organization.
3. **Simplicity:** Vue’s syntax and structure are straightforward, making it easy for beginners to learn and get started.
4. **Flexible Integration:** Vue can be integrated into projects incrementally, allowing you to enhance parts of an existing application without a complete rewrite.
**Why Use Vue.js?**
- **Ease of Learning:** Simple and intuitive, perfect for beginners.
- **Flexibility:** Can be used for both small and large projects.
- **Reactive System:** Keeps the UI updated efficiently.
**ReactJS vs. Vue.js: A Comparison**
**My Expectations with React in HNG**
During the HNG Internship, I expect to delve deeply into ReactJS. React’s powerful features will enable me to build dynamic and responsive user interfaces. I’m excited about the opportunity to work on real-world projects, learn from experienced mentors, and improve my skills in a collaborative environment.
The HNG Internship offers a great platform to grow as a developer. By participating, I aim to master ReactJS, contribute to impactful projects, and enhance my career prospects. If you're interested in learning more about the program, visit the [HNG Internship website](https://hng.tech/internship). You can also explore [HNG Hire](https://hng.tech/hire) for hiring developers and [HNG Premium](https://hng.tech/premium) for premium services.
**Conclusion**
Both ReactJS and Vue.js have their unique strengths and are suitable for different project needs. ReactJS is ideal for large-scale applications requiring high performance and scalability, while Vue.js is perfect for both small enhancements and full-scale applications due to its simplicity and flexibility. Understanding the key features and benefits of each can help you make an informed decision for your next project. As I embark on the HNG Internship, I look forward to applying these technologies and advancing my skills in frontend development. | favour_efemiaya_b11dd87cd |
1,906,784 | React vs. Angular: A Comprehensive Comparison | Introduction: In the world of front-end development, two technologies have emerged as industry... | 0 | 2024-06-30T17:05:54 | https://dev.to/paul_ameh_c6f95df8b725981/react-vs-angular-a-comprehensive-comparison-187h | webdev, javascript, googlecloud, frontend |
Introduction:
In the world of front-end development, two technologies have emerged as industry leaders: React and Angular. Both frameworks have their strengths and weaknesses, making them suitable for different projects and development teams. In this article, we'll delve into the core features, advantages, and disadvantages of React and Angular, helping you decide which technology is best suited for your next project.
**React**:
- Core Features:
- Component-based architecture
- Virtual DOM for efficient rendering
- One-way data binding
- Advantages:
- Fast and efficient rendering
- Easy to learn and adopt
- Flexible and customizable
- Disadvantages:
- Steep learning curve for complex applications
- Requires additional libraries for state management and routing
**Angular**:
- Core Features:
- Model-View-ViewModel (MVVM) architecture
- Two-way data binding
- Opinionated framework with built-in tools and libraries
- _Advantages_:
- Robust and scalable architecture
- Built-in tools for state management, routing, and forms
- Large community and extensive documentation
- _Disadvantages_:
- Steeper learning curve due to opinionated nature
- Heavier bundle size compared to React
Conclusion:
React excels in efficiency and customizability, making it ideal for smaller-scale applications and teams familiar with JavaScript. Angular, with its robust architecture and built-in tools, suits larger-scale applications and teams seeking a comprehensive framework. Ultimately, the choice between React and Angular depends on project requirements, team expertise, and personal preferences.
If you are a front-end developer trying to take your skills to the next level, being an intern in the [HNG internship](https://hng.tech/internship) program is where you ought to be.
The [HNG internship workspace](https://hng.tech/hire) takes your skills to the next level. | paul_ameh_c6f95df8b725981 |
1,906,783 | Build an Advanced RAG App: Query Rewriting | In the last article, I established the basic architecture for a basic RAG app. In case you missed... | 0 | 2024-06-30T17:02:54 | https://dev.to/rogiia/build-an-advanced-rag-app-query-rewriting-h3p | In the last article, I established the basic architecture for a basic RAG app. In case you missed that, I recommend to first read that article over here. That will set the base from which we can improve our RAG system. Also in that last article, I listed some common pitfalls that RAG applications tend to fail on. We will be tackling some of them with some advanced techniques in this article.
To recap, a basic RAG app uses a separate knowledge base that aids the LLM to answer the user’s questions by providing it with more context. This is also called a retrieve-then-read approach.
## The problem
To answer the user’s question, our RAG app will retrieve appropriate context based on the query itself. It will find chunks of text in the vector DB with content similar to whatever the user is asking. Other knowledge bases (search engines, etc.) also apply. The problem is, the chunk of information where the answer lies might not be similar to what the user is asking. The question can be badly written, or expressed differently from what we expect. And if our RAG app can’t find the information needed to answer the question, it won’t answer correctly.
There are many ways to solve this problem, but for this article, we will look at query rewriting.
## What is Query Rewriting?
Simply put, query rewriting means we will rewrite the user query in our own words, that our RAG app will know best how to answer. Instead of just doing retrieve-then-read, our app will do a rewrite-retrieve-read approach.
We use a Generative AI model to rewrite the question. This model can be a large model, like (or the same as) the one we use to answer the question in the final step. Or it can be a smaller model, specially trained to perform this task.
Also, query rewriting can take many different forms depending on the needs of the app. Most of the time, basic query rewriting will be enough. But, depending on the complexity of the questions we need to answer, we might need more advanced techniques like HyDE, multi-querying or step-back questions. More information on those in the following section.
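The rewrite-retrieve-read loop itself is just one extra step in front of a normal RAG pipeline. Here is a framework-agnostic sketch; the three injected functions are placeholders you would back with your own LLM and vector store:

```javascript
// Generic rewrite-retrieve-read pipeline.
// `rewrite`, `retrieve` and `answer` are injected so any LLM provider
// or vector store can be plugged in.
async function rewriteRetrieveRead(query, { rewrite, retrieve, answer }) {
  const rewritten = await rewrite(query);    // LLM rephrases the query
  const context = await retrieve(rewritten); // similarity search on the rewrite
  return answer(query, context);             // answer the ORIGINAL question
}
```

Note that the final answer step still receives the original question, so the user gets a response to what they actually asked.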
## Why does it work?
Query Rewriting usually gives better performance in any RAG app that is knowledge intensive. This is because RAG applications are sensitive to the phrasing and specific keywords of the query. Paraphrasing this query is helpful in the following scenarios:
1. It restructures oddly written questions so they can be better understood by our system.
2. It erases context given by the user which is irrelevant to the query.
3. It can introduce common keywords, which will give it a better chance of matching up with the correct context.
4. It can split complex questions into different sub.questions, which can be more easily responded separately, each with their corresponding context.
5. It can answer question that require multiple levels of thinking by generating a step-back question, which is a higher-level concept question to the one from the user. It then uses both the original and the step-back question to retrieve context.
6. It can use more advanced query rewriting techniques like HyDE to generate hypothetical documents to answer the question. These hypothetical documents will better capture the intent of the question and match up with the embeddings that contain the answer in the vector DB.
## How to implement Query Rewriting
We have established that there are different strategies for Query Rewriting depending on the complexity of the questions. We will briefly visit how to implement each of them. Afterwards, we will look at a real example to compare the results of a basic RAG app versus a RAG app with Query Rewriting. You can also follow all the examples in [the article’s Google Colab notebook](https://colab.research.google.com/drive/1-NT0_mmyoSnaDQJ1Zuo0XX613TG5lzjZ?usp=sharing).
### Zero-shot Query Rewriting
This is simple query rewriting. Zero-shot refers to the prompt engineering practice of including examples of the task in the prompt — in this case, we give none, only the instruction itself.

### Few-shot Query Rewriting
For a slightly better result at the cost of using a few more tokens per rewrite, we can give some examples of how we want the rewrite to be done.

### Trainable rewriter
We can fine-tune a pre-trained model to perform the query rewriting task. Instead of relying on examples, we can teach it how query rewriting should be done to achieve the best results in context retrieving. Also, we can further train it using Reinforcement Learning so it can learn to recognize problematic queries and avoid toxic and harmful phrases. Or we can also use an open-source model that has already been trained by somebody else on the task of query rewriting.
### Sub-queries
If the user query contains multiple questions, context retrieval can get tricky. Each question probably needs different information, and we are not going to get all of it using the combined query as the basis for retrieval. To solve this problem, we can decompose the input into multiple sub-queries and perform retrieval for each of the sub-queries.

### Step-back prompt
Many questions can be a bit too complex for the RAG pipeline’s retrieval to grasp the multiple levels of information needed to answer them. For these cases, it can be helpful to generate multiple additional queries to use for retrieval. These queries will be more generic than the original query. This will enable the RAG pipeline to retrieve relevant information on multiple levels.

### HyDE
Another method to improve how queries are matched with context chunks is Hypothetical Document Embeddings, or HyDE. Sometimes questions and answers are not that semantically similar, which can cause the RAG pipeline to miss critical context chunks in the retrieval stage. However, even if the query is semantically different, a response to the query should be semantically similar to another response to the same query. The HyDE method consists of creating hypothetical context chunks that answer the query and using them to match the real context that will help the LLM answer.


## Example: RAG with vs without Query Rewriting
Taking the RAG pipeline from the last article, “How to build a basic RAG app”, we will introduce Query Rewriting into it. We will ask it a question a bit more advanced than last time and observe whether the response improves with Query Rewriting compared to without it. First, let’s build the same RAG pipeline. This time, though, I’ll use only the top document returned from the vector database, to be less forgiving of missed documents.

The response is good and grounded in the context, but the model latched onto my asking about evaluation and missed that I was specifically asking for tools. As a result, the retrieved context does have information on some benchmarks, but it misses the next chunk of information, which talks about tools.
Now, let’s implement the same RAG pipeline, this time with Query Rewriting. In addition to the query rewriting prompts we have already seen in the previous examples, I’ll be using a Pydantic parser to extract and iterate over the generated alternative queries.

The new query now matches with the chunk of information I wanted to get my answer from, giving the LLM a better chance of answering a much better response for my question.
## Conclusion
We have taken our first step out of basic RAG pipelines and into Advanced RAG. Query Rewriting is a very simple Advanced RAG technique but a powerful one for improving the results of a RAG pipeline. We have gone over different ways to implement it depending on what kind of questions we need to improve. In future articles we will go over other Advanced RAG techniques that can tackle different RAG issues than those seen in this article. | rogiia | |
1,906,782 | Improving throughput and latency using Java Virtual Threads in Spring | One of the major changes that Java 21 brought about is Virtual threads. There is so much hype around... | 0 | 2024-06-30T17:01:29 | https://dev.to/vigneshm243/improving-throughput-and-latency-using-java-virtual-threads-in-spring-16mp | spring, java, performance | One of the major changes that Java 21 brought about is Virtual threads. There is so much hype around it, but let's see a real-world example with some metrics. Spring introduced the ability to create GraalVM native images that use Spring Boot and Java 21's virtual threads (Project Loom).
We will look at virtual threads in detail and the many features that come with them in another article. We will focus on a simple project where we can enable the features of Java Virtual Threads and see the difference in performance gains.
## Why virtual threads are faster than normal threads?
Normal threads in Java are tied to OS threads, so there is a limit to the actual number of threads the JVM can create. Time is also lost waiting on blocking I/O calls. Normal threads are called **Platform Threads**. They are an expensive resource that needs to be managed well.
Now, when it comes to virtual threads, they are lightweight constructs. Multiple virtual threads can be tied to a single platform thread which in turn gets tied to an OS thread.
The simple idea behind Virtual Threads
## The Spring Project used for benchmarking
The project used in this article can be found [here](https://github.com/vigneshm243/thirukkuralAPI). It has a simple fetch API that returns the Thirukkural based on the ID sent.

Now, by default, Tomcat's HTTP server can run many threads in parallel, which would make it hard to observe this feature. So let's throttle it to a maximum of 10 threads by adding the property below to the *application.properties* file.
```properties
server.tomcat.threads.max=10
```
We will mimic a blocking IO call using *sleep*. We will also log the current thread it's using.
```java
Thread.sleep(1000);
log.info("Running on " + Thread.currentThread());
```
## Benchmarking using hey
We will benchmark this using hey tool.
```
hey -n 200 -c 30 http://localhost:8080/thirukural/1
Summary:
Total: 36.1759 secs
Slowest: 8.0463 secs
Fastest: 2.0080 secs
Average: 5.6840 secs
Requests/sec: 4.9757
Response time histogram:
2.008 [1] |
2.612 [10] |■■■
3.216 [0] |
3.819 [0] |
4.423 [11] |■■■
5.027 [0] |
5.631 [0] |
6.235 [156] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
6.839 [0] |
7.442 [0] |
8.046 [2] |■
Latency distribution:
10% in 4.0305 secs
25% in 6.0212 secs
50% in 6.0263 secs
75% in 6.0340 secs
90% in 6.0376 secs
95% in 6.0387 secs
99% in 8.0463 secs
Details (average, fastest, slowest):
DNS+dialup: 0.0007 secs, 2.0080 secs, 8.0463 secs
DNS-lookup: 0.0005 secs, 0.0000 secs, 0.0041 secs
req write: 0.0000 secs, 0.0000 secs, 0.0003 secs
resp wait: 5.6832 secs, 2.0079 secs, 8.0419 secs
resp read: 0.0001 secs, 0.0000 secs, 0.0005 secs
Status code distribution:
[200] 180 responses
```
A few log entries are listed below to see that these requests are using **Platform Threads**.
```
2024-06-30T22:07:29.659+05:30 INFO 2120 --- [thriukural] [nio-8080-exec-7] p.v.thriukural.web.ThirukuralController : Running on Thread[#45,http-nio-8080-exec-7,5,main]
2024-06-30T22:07:29.659+05:30 INFO 2120 --- [thriukural] [io-8080-exec-10] p.v.thriukural.web.ThirukuralController : Running on Thread[#48,http-nio-8080-exec-10,5,main]
2024-06-30T22:07:29.659+05:30 INFO 2120 --- [thriukural] [nio-8080-exec-2] p.v.thriukural.web.ThirukuralController : Running on Thread[#40,http-nio-8080-exec-2,5,main]
```
Now, to enable the application to use virtual threads, we just add the below property in *application.properties*.
```properties
spring.threads.virtual.enabled=true
```
Let's run and see the same benchmark results again.
```
hey -n 200 -c 30 http://localhost:8080/thirukural/1
Summary:
Total: 12.2478 secs
Slowest: 2.1747 secs
Fastest: 2.0086 secs
Average: 2.0410 secs
Requests/sec: 14.6965
Response time histogram:
2.009 [1] |
2.025 [149] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
2.042 [0] |
2.058 [0] |
2.075 [0] |
2.092 [0] |
2.108 [0] |
2.125 [0] |
2.141 [0] |
2.158 [0] |
2.175 [30] |■■■■■■■■
Latency distribution:
10% in 2.0115 secs
25% in 2.0129 secs
50% in 2.0158 secs
75% in 2.0188 secs
90% in 2.1724 secs
95% in 2.1739 secs
99% in 2.1747 secs
Details (average, fastest, slowest):
DNS+dialup: 0.0015 secs, 2.0086 secs, 2.1747 secs
DNS-lookup: 0.0014 secs, 0.0000 secs, 0.0093 secs
req write: 0.0000 secs, 0.0000 secs, 0.0002 secs
resp wait: 2.0392 secs, 2.0085 secs, 2.1656 secs
resp read: 0.0003 secs, 0.0000 secs, 0.0026 secs
Status code distribution:
[200] 180 responses
```
A few log entries to see them running in virtual threads.
```
2024-06-30T22:12:38.485+05:30 INFO 17700 --- [thriukural] [mcat-handler-48] p.v.thriukural.web.ThirukuralController : Running on VirtualThread[#103,tomcat-handler-48]/runnable@ForkJoinPool-1-worker-6
2024-06-30T22:12:38.485+05:30 INFO 17700 --- [thriukural] [mcat-handler-66] p.v.thriukural.web.ThirukuralController : Running on VirtualThread[#121,tomcat-handler-66]/runnable@ForkJoinPool-1-worker-12
2024-06-30T22:12:38.485+05:30 INFO 17700 --- [thriukural] [mcat-handler-69] p.v.thriukural.web.ThirukuralController : Running on VirtualThread[#124,tomcat-handler-69]/runnable@ForkJoinPool-1-worker-7
```
## Benchmarking results
As evident from the results, we see an increased throughput and better latency as well once virtual threads are enabled.
| Parameter | Without Virtual Threads | With Virtual Threads |
|--------------|-------------------------|----------------------|
| Requests/sec | 4.9757 | 14.6965 |
| 99% Latency | 8.0463 sec | 2.1747 sec |
## Summary
A simple parameter change has enabled us with a lot of improved performance and throughput. We will further explore in future articles some of the considerations that need to be taken before using virtual threads.
----------------------------------------------------------
Originally published at [vignesh.page](https://vignesh.page/posts/java_virtual_threads_in_spring/).
Please let me know about any improvements you have experienced using Java Virtual Threads. | vigneshm243 |
1,906,781 | Experience Excellence at Westbourne College (Singapore) | Are you ready to embark on an academic journey that prepares you for the world’s top universities and... | 0 | 2024-06-30T16:59:40 | https://dev.to/westbourne_college/experience-excellence-at-westbourne-college-singapore-1a5l | Are you ready to embark on an academic journey that prepares you for the world’s top universities and future leadership roles? Look no further than Westbourne College (Singapore), a premier private British-International IB Diploma and IGCSE school, dedicated to nurturing students aged 15-18. At Westbourne, we pride ourselves on offering an award-winning IB Diploma Programme and IGCSE courses, with a strong emphasis on academic excellence and leadership development.

## Why Choose Westbourne College (Singapore)?
Westbourne College (Singapore) offers an unparalleled educational experience. Our [International IB Diploma Programme](https://westbournecollege.com.sg/ib-sixth-form/) is designed to foster critical thinking, global awareness, and a passion for lifelong learning. We are committed to ensuring our students achieve exceptional academic results, positioning them for success in top-tier global universities. Additionally, our focus on leadership ensures that students are well-prepared for future challenges and opportunities.
## Outstanding Academic Results
Our dedication to academic excellence is reflected in our students’ outstanding results. Westbourne’s rigorous academic standards and innovative teaching methods ensure that each student reaches their full potential. Our comprehensive support system, coupled with a future-focused curriculum, equips students with the knowledge and skills necessary to excel in STEM and business fields.
## A Global Community
Westbourne College (Singapore) is more than just a school; it’s a vibrant global community. From the moment students join, they benefit from unique international networking opportunities, fostering connections that will support them throughout their educational and professional journeys. Our diverse student body and dedicated faculty create a dynamic and inclusive environment where every student feels valued and empowered.
## Empowering Future Leaders
At Westbourne, we believe that education extends beyond the classroom. Our holistic approach to education emphasizes both functional and emotional development, preparing students to become effective leaders in an ever-changing global landscape. By cultivating leadership and global networking skills, we ensure that our graduates are not only academically accomplished but also equipped to make a meaningful impact in the world.
Discover the transformative power of a [Westbourne College](https://westbournecollege.com.sg/ib-sixth-form/) (Singapore) education and join a community committed to excellence, leadership, and global success.
| westbourne_college | |
1,906,780 | Overcoming a Challenging Backend Project Issue: My Experience and Journey to HNG Internship. | Being a backend developer demands more than simply knowing how to write codes; continuous learning,... | 0 | 2024-06-30T16:58:59 | https://dev.to/somtoochukwu/overcoming-a-challenging-backend-project-issue-my-experience-and-journey-to-hng-internship-1a73 | Being a backend developer demands more than simply knowing how to write code; continuous learning, troubleshooting and problem-solving abilities are additional skills required to make a successful career in backend development. I was recently met with a challenge on a project I worked on, which involved setting up a Node.js application to interact with a MySQL database on a cloud-hosted Ubuntu server. Solving the issues I encountered in the course of that project showed me how important continuous learning, persistence and resilience are for career advancement as a Backend Developer, and it aligns well with the new adventure I'm about to start with the HNG Internship.
**The Challenge**
The project was a simple user registration form with user login. I encountered database connection failures and an authorization error while trying to connect to the MySQL database from the Node.js application.
**Step-by-Step Solution**
1) First, I uninstalled MySQL from my Ubuntu server, having observed that it wasn't the latest version.
2) I downloaded and installed the latest version of MySQL (8.4.0 LTS) on my server, after which I secured the installation.
3) I then configured MySQL to listen on my server's public IP address. This involved editing the MySQL configuration file, where I had to set the 'bind-address' and 'mysql-bind-address' to my server's public IP address.
4) To apply the changes, I restarted the MySQL service.
5) Next, I logged into MySQL to create a database and a user for my application.
6) In my Node.js application, I used the MySQL library to connect to the database.
7) Despite the careful setup, I encountered an "ECONNREFUSED" error. This usually indicates that the server is not accepting connections on the specified IP and port, so I checked the firewall settings to ensure port 3306 was open.
8) After confirming that the firewall was not blocking the connection, I encountered an ER_NOT_SUPPORTED_AUTH_MODE error, which I resolved by updating the MySQL user's authentication method.
9) Finally, my Node.js application successfully connected to the database, with the authentication issues resolved.
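As a rough sketch of the configuration changes described in steps 3 and 8 (the IP address, user name, and password below are placeholders, not the actual values from my setup, and the config file path can vary by MySQL version):

```
# /etc/mysql/mysql.conf.d/mysqld.cnf -- placeholder IP address
bind-address = 203.0.113.10

-- inside the mysql client: one common fix for ER_NOT_SUPPORTED_AUTH_MODE is
-- switching the app user to an auth plugin the Node.js mysql driver supports
ALTER USER 'appuser'@'%' IDENTIFIED WITH mysql_native_password BY 'app-password';
FLUSH PRIVILEGES;
```

Note that newer MySQL releases may require the native-password plugin to be enabled explicitly; an alternative is to use the `mysql2` driver, which supports `caching_sha2_password` out of the box.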
**About Me**
My name is Somtoochukwu Okonkwo. I am a recent graduate of Electrical/Electronic Engineering with skills in backend development (junior level) and cloud computing. I am highly fascinated by tech, especially the underlying technologies behind the products we use. I recently finished my one-year National Youth Service program as a Cloud Solutions Architect graduate intern at Huawei Technologies, where I mainly worked on Huawei Cloud presales, customer engagements, solution design, POCs, and cloud solution implementations. I'm passionate about backend development, and I look forward to doing amazing things as a backend developer, leveraging my knowledge and experience as a Cloud Solutions Architect.
I love serving people, making positive impacts in their lives, and contributing to a better society. During my undergraduate studies at the Federal University of Technology Owerri, I served as the class representative of the Electrical/Electronic Engineering Class of 2021, where I led, inspired, and coordinated my classmates in carrying out various projects and volunteering work, including a school environment sanitation exercise and tree planting geared towards developing the school community. I also served as a student volunteer and Public Relations Officer of the Institute of Electrical & Electronic Engineering (IEEE) in my school community. More recently, I served as the President of the ICT Community Development Service group for Eti-osa 1 LG VI, Lagos State, during my one-year National Youth Service program, where I successfully coordinated and led my fellow corps members in ICT sensitization and career awareness campaigns and computer lab sessions for students at Kuramo High School, Lagos State.
Currently, I am a volunteer at Boys to Men Foundation, Lagos State, a foundation with the mission and vision of producing better men in society through its "catch them young" campaign, a program targeted at educating young men on the dangers of addictions (alcohol, sex, betting, drugs) and other social vices. Self-discipline, diligence, honesty, and integrity are my core beliefs.
**My HNG Internship Journey**
This challenge spurred me to look for ways to learn, grow, and improve as a backend developer, so I reached out to some of my classmates in tech. Fortunately, I met two of them, Eburu Evans and Ugwunna Gerald, who happened to have attended the HNG Internship, and they recommended it to me on the basis that it was what helped them learn and grow in their tech careers. Motivated by their career progress and by the possibilities of what I can achieve in tech as well, I set sail for the HNG Internship. From their experience and testimonies, I believe strongly that the HNG Internship is not just about learning to code but about solving real-world problems and growing as a developer. I am excited about the opportunity to collaborate with other developers, learn from industry experts, receive mentorship, and work on meaningful projects. The structured learning environment and the exposure to practical challenges will undoubtedly accelerate my career growth.
**Why HNG Internship**
Just as it did for my classmates Eburu Evans and Ugwunna Gerald, who are working remotely and doing well as backend developers, I believe the HNG Internship will provide me with the perfect platform to hone my skills and gain valuable industry experience. The program's focus on real-world projects, mentorship from industry experts, and collaborative learning aligns well with my goals. I am especially excited about the prospect of working on projects with tangible impact and learning from the diverse and talented community at HNG.
I invite you to join me in this growth adventure of learning, joining a vibrant community of fellow learners, and gaining hands-on experience; just check out the [HNG Internship](https://hng.tech/internship) to get onboarded. An employer? Explore how you can [hire talented interns](https://hng.tech/hire) from the program.
**Conclusion**
Excelling as a backend developer requires a blend of technical knowledge, critical thinking and problem-solving skills, persistence, resilience, and continuous learning. The recent challenge I faced with MySQL and Node.js was a learning experience that emphasized the importance of these skills. As I kick-start my HNG internship, I look forward to further honing these skills and growing as a developer. I am highly optimistic about the new challenges and exciting opportunities that lie ahead!
| somtoochukwu | |
1,906,779 | Lulo box pro APK | What is Lulubox Pro APK? | 0 | 2024-06-30T16:57:27 | https://dev.to/jasmeen_dave_176c32310f6c/lulo-box-pro-apk-2ll9 | react | What is [Lulubox ](https://luloboxpinapk.com/)Pro APK?
| jasmeen_dave_176c32310f6c |
1,906,778 | How I ensured user authentication, by sending emails in Spring Boot | How to send email in Java Spring Boot First of all, add the spring-starter-mail... | 0 | 2024-06-30T16:56:50 | https://dev.to/walerick/how-i-ensured-user-authentication-by-sending-emails-in-spring-boot-866 |
## How to send email in Java Spring Boot
First of all, add the `spring-boot-starter-mail` dependency.
You can add it by modifying your project's `pom.xml` file to include the following:
```
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-mail</artifactId>
</dependency>
```
After you've successfully added the dependency, create a service class and access the `JavaMailSender` interface in your code by following these few steps:
- Use the `@Autowired` annotation to inject the `JavaMailSender` interface into the `EmailService` class.
- Create a `SimpleMailMessage` object.
- Add email properties by using specific methods (`setTo`, `setSubject`, `setText`).
- Call the `send()` method to send the email message.
Your code should be similar to this:
```
package com.example;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.mail.SimpleMailMessage;
import org.springframework.mail.javamail.JavaMailSender;
import org.springframework.stereotype.Service;
@Service
public class EmailService {

    @Autowired
    private JavaMailSender mailSender;

    public void sendEmail(String to, String subject, String body) {
        SimpleMailMessage message = new SimpleMailMessage();
        message.setTo(to);
        message.setSubject(subject);
        message.setText(body);

        mailSender.send(message);
    }
}
```
Finally, you need to configure the SMTP server settings in your `application.properties` file as shown below
```
spring.mail.host=smtp.example.com
spring.mail.port=25
spring.mail.username=setusername
spring.mail.password=setpassword
spring.mail.properties.mail.smtp.auth=true
spring.mail.properties.mail.smtp.starttls.enable=true
```
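For example, if you use Gmail's SMTP server, the settings would typically look like this (the address and app password below are placeholders; Gmail requires an app password rather than your regular login password):

```
spring.mail.host=smtp.gmail.com
spring.mail.port=587
spring.mail.username=your-address@gmail.com
spring.mail.password=your-app-password
spring.mail.properties.mail.smtp.auth=true
spring.mail.properties.mail.smtp.starttls.enable=true
```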
Then you're good to go.
Since you made it this far: there is an internship program, the [HNG Internship](https://hng.tech/internship), that I stumbled upon on Twitter, where developers can learn more and network better. There is also an option to get a certificate if you subscribe to the [premium](https://hng.tech/premium) package instead. | walerick | |
1,906,777 | How to Fuzzy Search: Finding File Names and Contents using Bash Scripting and Commandline Tools | This blog explains a bash script that does a fuzzy search against file names and file contents of all... | 0 | 2024-06-30T16:53:52 | https://blog.chardskarth.me/blog/how-to-fuzzy-search-file-names-and-file-contents-using-bash/ | devtools, bash, programming | This blog explains a bash script that does a fuzzy search against file names and file contents of all the files in the directory.
### Introduction
When searching for specific text (or code), you often rely on your IDE or file manager.
But if you're like me who wants to:
<br/>
1. Search files fast against different directories
1. Filter filenames, then search for text within the filtered files
1. Control which files should be ignored
You might find searching with an IDE or the file manager slow, inefficient, and too limiting. That's why I wrote this script.
## TLDR (just give me the bash script)
```bash title=vicontrolp wrap showLineNumbers=false
fd \
-E '*.key'\
-E '*.crt'\
-E '*lock.yaml'\
-E '*.jar'\
-E '*.db'\
| xargs \
-I{} awk \
-e '/^([\[\]-}{#]|[[:space:]])+$/{next;}{ print "{}:" NR ":" $0}'\
{} 2> /dev/null \
| fzf \
--delimiter : \
--preview="bat --color=always --style=plain --highlight-line={2} {1}" \
--preview-window +{2}-5 \
--bind="enter:execute(nvim {1} +{2})"
```
I have this hooked up with [`zellij`](https://zellij.dev/) and did a keybind with `⌥ + p`.
```nginx title="~/.config/zellij/config.kdl" showLineNumbers=false
keybinds {
normal {
...
bind "π" {
Run "vicontrolp" {
close_on_exit true;
in_place true;
}
}
}
}
```
With this, I can trigger this script and search for files whenever I'm
in the comfort of my terminal.
<video width="320" height="240" controls>
<source src="https://blog.chardskarth.me/public/demo_vicontrolp.webm" type="video/webm"/>
</video>
## 🤔 Understanding the script
Right! Of course you're not some mediocre developer who just copy pastes stuff.
You want to understand how this works so you can expand your knowledge so then you can write your own
developer tools that will enhance your developer experience!
## Prerequisites
In order for the script to work, you need the following:
1. :orange[fd], this commandline tool is required to be installed on your system. See [this link](https://github.com/sharkdp/fd?tab=readme-ov-file#installation) to install.
1. :orange[xargs], this commandline tool is builtin so you don't need to install this. Just listing this here because we'll explain what this does later.
1. :orange[fzf], this commandline tool is required to be installed on your system. See [this link](https://github.com/junegunn/fzf?tab=readme-ov-file#installation) to install.
1. :orange[bat], this commandline tool is required to be installed on your system. See [this link](https://github.com/sharkdp/bat?tab=readme-ov-file#installation) to install.
1. :orange[creating an executable script], this consists of (1) creating the file, (2) making it executable and (3) including in your `$PATH`
```sh
touch ~/.localscripts/vicontrolp
# open ~/.localscripts/vicontrolp, and pasting the commands in this file
chmod +x ~/.localscripts/vicontrolp
echo export PATH=$PATH:~/.localscripts >> ~/.bashrc
# if you use zsh, change bashrc to zshrc
```
1. :orange[zellij] (optional) if you want to add the same binding mentioned above.
## Explaining the script
#### 1. [`fd`](https://github.com/sharkdp/fd) command
`fd` is an alternative to the builtin `find` command. It recursively lists all files within the directory.
While listing the files, it ignores anything matched by your `.gitignore`.
#### 1.1. `-E` option
Besides the files in my `.gitignore`, there are other file formats I want to ignore. The exclusion list above is not exhaustive; you'll also want to ignore
non-text-searchable files like images, videos, etc.
#### 2. `xargs` command
This is a command that allows me to execute a new command using the inputs from stdin as argument.
#### 2.1 What is `stdin`?
`stdin` is short for `standard input`. It is a file stream from which a program may read its input.
In command-line scripting you'll often want to pipe output from one command as input to another command.
#### 2.2 What is `pipe`?
Pipe, indicated by the pipe operator: :orange[`|`], means to take the result of one command and pass it to the next command so it can read it as input and output a new set of data.
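A tiny illustration with made-up file names, where each stage reads the previous stage's output:

```sh
# print three names, keep only the .md ones, then count the matches
printf 'a.md\nb.txt\nc.md\n' | grep -c '\.md$'   # prints 2
```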
#### 2.3 So what did `fd -E ... | xargs -I{} awk ... {}` do?
The result of `fd` was piped to `xargs`. Then `xargs` executes `awk` once for each file output by `fd`.
#### 2.4. `-I{}` option
This option tells `xargs` which placeholder string should be replaced with each input read from stdin.
If, for example, your current directory consists of the following structure:
```sh showLineNumbers=false
.
├── cskth-kt.mdx
├── images
│ └── restore_2.jpg
├── lorem-ipsum.md
└── tips-for-aspiring-professionals.mdx
```
doing `fd | xargs -I{} cp {} {}.bak` would be like executing the following commands (copying each file to a new file with `.bak` appended):
```sh showLineNumbers=false frame=none
cp cskth-kt.mdx cskth-kt.mdx.bak
cp restore_2.jpg restore_2.jpg.bak
cp lorem-ipsum.md lorem-ipsum.md.bak
cp tips-for-aspiring-professionals.mdx tips-for-aspiring-professionals.mdx.bak
```
In our command we run the `awk` command for each file instead.
#### 3. `awk` command
This program scans each line of an input file and allows you to do an action for each that matches a pattern.
#### 3.1 `-e '...'` option
This option consists of three parts:
#### 3.1.1. The pattern: `/^([\[\]-}{#]|[[:space:]])+$/`
This `regex` matches when a line consists only of `{`,`[`, `-`, `#`, `]`, `}` or **whitespaces**
#### 3.1.2. The action of the pattern: `{next;}`
If the previous pattern matches, it tells awk to proceed to the next line without performing any further actions.
This ultimately skips the following block, which prints the line to be piped to `fzf`.
#### 3.1.3. The action block: `{ print "{}:" NR ":" $0 }`
Remember that this option is still under `xargs` which means `{}` will be replaced by the filename.
Then `NR` in awk holds the line number of the record being scanned, while `$0` refers to the contents of the current line.
So in our previous example, we may see the following output:
```
cskth-kt.mdx:1:# Heading 1
cskth-kt.mdx:2:Heading 1 content
cskth-kt.mdx:3:Heading 1 content, second line
lorem-ipsum.md:1:Lorem ipsum dolor sit amet, consectetur adipiscing elit.
lorem-ipsum.md:2:Vivamus non dapibus est, a rutrum nisi.
...
```
<figcaption>Depending of course on the contents of your files in your current directory </figcaption>
❗️Take note of this formatted output because we'll be explaining this later when this is piped into `fzf`.
#### 3.2. `{} 2> /dev/null`
This is the parameter passed to the `awk` command, which is replaced again by `xargs` with the file name from `fd`'s input.
`2> /dev/null` is a stderr redirection. It redirects error messages (file descriptor 2, stderr) into `/dev/null`, a special device file that discards everything
written to it, effectively silencing any errors.
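A quick way to see the redirection in action (the directory name is a deliberately non-existent placeholder):

```sh
# ls complains on stderr; redirecting fd 2 to /dev/null silences the message,
# and stdout stays empty because nothing was listed
out=$(ls /no/such/dir 2> /dev/null) || echo "ls failed, but quietly"
echo "captured stdout: '$out'"
```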
#### 4. `fzf` command
This awesome command-line program is the heart of the script. The output of the `awk` command (every line of every file, prefixed with its filename and line number) is piped into this program.
#### 4.1. `--delimiter :` option
This tells `fzf` to use the `:` character as the field separator. This is how the file name, line number, and file contents are separated again, because we printed
them as one `:`-joined line from `awk` earlier.
#### 4.2 `--preview ...` option
This tells `fzf` to use the `bat` program to preview the whole file. Along with using `bat`, we also add a parameter to highlight the line of the current fuzzy-search match.
#### 4.3 `--preview-window ...` option
This tells `fzf` to scroll the preview window so that it shows the line of the current search match.
#### 4.4 `--bind ...` option
Lastly, this tells `fzf` to add a key binding: when the enter key is pressed, open Neovim and jump directly to the line number of the search match.
## Conclusion
... there's really not much to conclude this with. Hopefully you'll find this script useful. ✌🏻
| chardskarth |
1,906,536 | Building a Lisp Interpreter in Rust | I’ve been playing around with Rust for some time now; it's a pretty cool systems language with a very... | 0 | 2024-06-30T16:52:35 | https://dev.to/galzmarc/building-a-lisp-interpreter-in-rust-2njj | rust, programming, coding | I’ve been playing around with Rust for some time now; it's a pretty cool systems language with a very nice set of features: it's statically typed, and it enforces memory safety without a garbage collector (it uses a borrow checker instead) by statically determining when a memory object is no longer in use. These characteristics make it a “different” language that I was interested in exploring, plus the community is absolutely amazing.
As for the question "why build a Lisp interpreter?", it happens to be one of John Crickett's [Coding Challenges](https://codingchallenges.fyi/) and I decided to give it a try. Also, Scheme's minimal syntax offers a clear pathway for building core language features, making it an ideal language for my purpose.
#### Inspiration and acknowledgements
I have two sources to cite here:
1. Peter Norvig's [Lispy](http://norvig.com/lispy.html): Many years ago, Peter Norvig wrote this beautiful article about creating a Lisp interpreter in Python. I used his article as the main guide for my Rust implementation.
2. Stepan Parunashvili's [Risp](https://stopa.io/post/222): Norvig's article inspired Stepan to write Risp, a Lisp interpreter in Rust. While I made some different decisions in my own code, this was a great source.
As Norvig does, we are going to actually create a Scheme interpreter (so not exactly Lisp). I will also likely depart from both articles at a certain point, but I'll do my best to note when I do that.
## Step 1: A Basic Calculator
I will not even try to match Norvig's ability to offer definitions and explanations, so please refer to his article above for any of that. In here, I'll only try to document my thought process and provide some guidance to anyone willing to try this feat.
The only thing we need to remember at this point is the flow of an interpreter:
_our program ⟶ **parse** ⟶ abstract syntax tree ⟶ **eval** ⟶ result_
In short, our goal is to get the following:
```
> parse(program)
['define', 'r', 10], ['*', 'pi', ['*', 'r', 'r']]
> eval(parse(program))
314.1592653589793
```
### Type Definitions
For now, our interpreter is going to have three kinds of values: a Scheme expression is either an Atom or a List, where an Atom is either a Symbol (a string) or a Number (a float).
```
#[derive(Debug, Clone, PartialEq)]
pub enum Atom {
Symbol(String),
Number(f64),
}
#[derive(Debug, Clone, PartialEq)]
pub enum Exp {
Atom(Atom),
List(Vec<Exp>),
}
```
We are also going to need a Scheme environment, which is essentially a key-value mapping of {variable: value}; we are going to use a HashMap for this, although we wrap our own struct around it:
```
#[derive(Debug, Clone, PartialEq)]
pub struct Env {
data: HashMap<String, Exp>,
}
```
### Parsing
The first step in our parsing process is lexical analysis, which means we take the input character string and break it up into a sequence of tokens (at this point, parentheses, symbols, and numbers):
```
tokenize("(+ 1 2)") //=> ["(", "+", "1", "2", ")"]
```
And we do that as follows:
```
pub fn tokenize(exp: String) -> Vec<String> {
exp.replace("(", " ( ")
.replace(")", " ) ")
.split_whitespace()
.map(|x| x.to_string())
.collect()
}
```
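As a quick sanity check, the tokenizer above can be exercised in a standalone snippet (it just reproduces the function so the example is self-contained):

```rust
fn tokenize(exp: String) -> Vec<String> {
    exp.replace("(", " ( ")
        .replace(")", " ) ")
        .split_whitespace()
        .map(|x| x.to_string())
        .collect()
}

fn main() {
    // nested parentheses are split into their own tokens
    let tokens = tokenize("(+ 1 (* 2 3))".to_string());
    assert_eq!(tokens, vec!["(", "+", "1", "(", "*", "2", "3", ")", ")"]);
    println!("{:?}", tokens);
}
```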
Then we proceed with syntactic analysis, in which the tokens are assembled into an abstract syntax tree.
Essentially, we take the first token; if it’s the beginning of a list “(“, we start reading and parsing the tokens that follow, until we hit a closing parenthesis:
```
fn read_from_tokens(tokens: &mut Vec<String>) -> Result<Exp, String> {
if tokens.is_empty() {
return Err("No input provided. Please provide a Scheme expression.".to_string());
}
let token = tokens.remove(0);
if token == "(" {
let mut list: Vec<Exp> = Vec::new();
while tokens[0] != ")" {
list.push(read_from_tokens(tokens)?);
}
tokens.remove(0); // pop off ')'
Ok(Exp::List(list))
} else if token == ")" {
return Err(format!("Unexpected ')'."));
} else {
Ok(atom(token))
}
}
```
Otherwise, it can only be an atom, so we parse that:
```
fn atom(token: String) -> Exp {
match token.parse::<f64>() {
Ok(num) => Exp::Atom(Atom::Number(num)),
Err(_) => Exp::Atom(Atom::Symbol(token)),
}
}
```
Finally, we put everything together in our parse() function:
```
pub fn parse(input: String) -> Result<Exp, String> {
// Read a Scheme expression from a string
read_from_tokens(&mut tokenize(input))
}
```
### Environment
Environments are where we store variable definitions and built-in functions, so we need to create a default one for our interpreter.
To start implementing our basic built-in operations (+, -), we need a way to store references to Rust functions. To do that, we update our Exp enum to allow an additional type, `Func(fn(&[Exp]) -> Exp)`:
```
#[derive(Debug, Clone, PartialEq)]
pub enum Exp {
Atom(Atom),
List(Vec<Exp>),
Func(fn(&[Exp]) -> Exp),
}
```
Then, we can implement some methods and functions for our Env struct:
```
impl Env {
    fn new() -> Self {
        Env {
            data: HashMap::new(),
        }
    }
    pub fn insert(&mut self, k: String, v: Exp) {
        self.data.insert(k, v);
    }
    pub fn get(&self, k: &str) -> Option<&Exp> {
        self.data.get(k)
    }
}
pub fn standard_env() -> Env {
// An environment with some Scheme standard procedures
let mut env = Env::new();
// Adding basic arithmetic operations
env.insert("+".to_string(), Exp::Func(|args: &[Exp]| add(args)));
env.insert("-".to_string(), Exp::Func(|args: &[Exp]| subtract(args)));
env
}
```
And finally, our add() and subtract() functions:
```
pub fn add(args: &[Exp]) -> Exp {
let sum = args.iter().fold(0.0, |acc, arg| {
if let Exp::Atom(Atom::Number(num)) = arg {
acc + num
} else {
panic!("Expected a number");
}
});
Exp::Atom(Atom::Number(sum))
}
pub fn subtract(args: &[Exp]) -> Exp {
let first = if let Some(Exp::Atom(Atom::Number(n))) = args.iter().next() {
*n
} else {
panic!("Expected a number");
};
let result = args.iter().skip(1).fold(first, |acc, arg| {
if let Exp::Atom(Atom::Number(num)) = arg {
acc - num
} else {
panic!("Expected a number");
}
});
Exp::Atom(Atom::Number(result))
}
```
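The `fold` pattern used in both functions can be illustrated on its own: addition folds from an identity value of 0.0, while subtraction seeds the fold with the first element and folds over the rest.

```rust
fn main() {
    let nums = [10.0_f64, 1.0, 2.0];
    // (+ 10 1 2) => 0.0 + 10 + 1 + 2
    let sum = nums.iter().fold(0.0, |acc, n| acc + n);
    // (- 10 1 2) => 10 - 1 - 2
    let diff = nums.iter().skip(1).fold(nums[0], |acc, n| acc - n);
    assert_eq!(sum, 13.0);
    assert_eq!(diff, 7.0);
}
```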
### Evaluation
We are now ready to write our eval() function. As a refresher, our Exp can currently take four forms: an Atom (either a Symbol or a Number), a List, or a Func. This is the implementation:
```
pub fn eval(exp: Exp, env: &Env) -> Result<Exp, String> {
match exp {
Exp::Atom(Atom::Symbol(s)) => {
env.get(&s).cloned().ok_or_else(|| format!("Undefined symbol: {}", s))
},
Exp::Atom(Atom::Number(_)) => Ok(exp),
Exp::List(list) => {
let first = &list[0];
if let Exp::Atom(Atom::Symbol(ref s)) = first {
if let Some(Exp::Func(f)) = env.get(s) {
let args = list[1..].iter()
.map(|x| eval(x.clone(), env))
.collect::<Result<Vec<_>, _>>()?;
return Ok(f(&args))
} else {
panic!("Undefined function: {}", s);
}
} else {
panic!("Expected a symbol");
}
},
Exp::Func(_) => Ok(exp),
}
}
```
### Let's make it prettier
We now have functioning code, but a hypothetical input of "(+ 1 1)" will give us the not-so-pretty stdout of `Atom(Number(2.0))`. Not great.
To make it look better, we need to implement the Display trait for our Atom enum:
```
impl fmt::Display for Atom {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Atom::Symbol(s) => write!(f, "{}", s),
Atom::Number(n) => write!(f, "{}", n),
}
}
}
```
Next, we add fmt::Display for Exp as well, so that it uses the Display implementation of Atom:
```
impl fmt::Display for Exp {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Exp::Atom(atom) => write!(f, "{}", atom),
Exp::List(list) => {
let formatted_list: Vec<String> =
list.iter().map(|exp| format!("{}", exp)).collect();
write!(f, "({})", formatted_list.join(" "))
}
Exp::Func(_) => write!(f, "<function>"),
}
}
}
```
### REPL
One of Lisp's best features is the notion of an interactive read-eval-print loop: a way for a programmer to enter an expression, and see it immediately read, evaluated, and printed, without having to go through a lengthy build/compile/run cycle.
We can do that in our main() function; for now, this first version evaluates a single expression passed as a command-line argument, and wrapping it in a proper loop is a small next step:
```
use std::env;
use rustylisp::{eval, parse, standard_env};
fn main() {
let args: Vec<String> = env::args().skip(1).collect();
if args.is_empty() {
eprintln!("Error: No input provided. Please provide a Lisp expression.");
std::process::exit(1);
}
let env = standard_env();
let input = args.join(" ");
let parsed_exp = match parse(input) {
Ok(exp) => exp,
Err(e) => {
eprintln!("Error during parsing: {}", e);
std::process::exit(1);
}
};
let result = match eval(parsed_exp, &env) {
Ok(res) => res,
Err(e) => {
eprintln!("Error during evaluation: {}", e);
std::process::exit(1);
},
};
println!("{}", result);
}
```
And voilà, the first implementation of our interpreter is done! We can now evaluate expressions with addition and subtraction.
## Step 2: A Better Calculator
Let's add support for multiplication, division, and comparison operators.
### Multiplication and division
To add multiplication and division, we need to create two new helper functions:
```
pub fn multiply(args: &[Exp]) -> Exp {
let product = args.iter().fold(1.0, |acc, arg| {
if let Exp::Atom(Atom::Number(num)) = arg {
acc * num
} else {
panic!("Expected a number");
}
});
Exp::Atom(Atom::Number(product))
}
pub fn divide(args: &[Exp]) -> Exp {
let first = if let Some(Exp::Atom(Atom::Number(n))) = args.iter().next() {
*n
} else {
panic!("Expected a number");
};
let quotient = args.iter().skip(1).fold(first, |acc, arg| {
if let Exp::Atom(Atom::Number(num)) = arg {
if *num == 0.0 {
panic!("Cannot divide by zero")
}
acc / num
} else {
panic!("Expected a number");
}
});
Exp::Atom(Atom::Number(quotient))
}
```
Next, we add them to our Env (while we're at it, let's also add the pi constant):
```
use std::f64::consts::PI;
...
pub fn standard_env() -> Env {
// An environment with some Scheme standard procedures
let mut env = Env::new();
// Adding basic arithmetic operations
env.insert("+".to_string(), Exp::Func(|args: &[Exp]| add(args)));
env.insert("-".to_string(), Exp::Func(|args: &[Exp]| subtract(args)));
env.insert("*".to_string(), Exp::Func(|args: &[Exp]| multiply(args)));
env.insert("/".to_string(), Exp::Func(|args: &[Exp]| divide(args)));
// Adding pi
env.insert("pi".to_string(), Exp::Atom(Atom::Number(PI)));
env
}
```
### Booleans
To add support for comparison operators, we first need to introduce booleans (otherwise, what's "(> 1 0)" going to return?); let's do that.
First we edit our Exp enum
```
pub enum Exp {
Bool(bool),
Atom(Atom),
List(Vec<Exp>),
Func(fn(&[Exp]) -> Exp),
}
```
at which point Rust will tell us to update Display
```
impl fmt::Display for Exp {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Exp::Bool(a) => write!(f, "{}", a),
...
}
}
}
```
Then we'll change eval() to consider booleans
```
pub fn eval(exp: Exp, env: &Env) -> Result<Exp, String> {
match exp {
Exp::Bool(_) => Ok(exp),
Exp::Atom(Atom::Symbol(s)) => env
...
}
}
```
Last but not least, we update our atom() function
```
fn atom(token: String) -> Exp {
match token.as_str() {
"true" => Exp::Bool(true),
"false" => Exp::Bool(false),
// Numbers become numbers; every other token is a symbol
_ => match token.parse::<f64>() {
Ok(num) => Exp::Atom(Atom::Number(num)),
Err(_) => Exp::Atom(Atom::Symbol(token)),
},
}
}
```
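A self-contained check of the extended atom() (with trimmed-down versions of the enums so the snippet compiles on its own):

```rust
#[derive(Debug, PartialEq)]
enum Atom {
    Symbol(String),
    Number(f64),
}

#[derive(Debug, PartialEq)]
enum Exp {
    Bool(bool),
    Atom(Atom),
}

fn atom(token: String) -> Exp {
    match token.as_str() {
        "true" => Exp::Bool(true),
        "false" => Exp::Bool(false),
        // Numbers become numbers; every other token is a symbol
        _ => match token.parse::<f64>() {
            Ok(num) => Exp::Atom(Atom::Number(num)),
            Err(_) => Exp::Atom(Atom::Symbol(token)),
        },
    }
}

fn main() {
    assert_eq!(atom("true".to_string()), Exp::Bool(true));
    assert_eq!(atom("3.14".to_string()), Exp::Atom(Atom::Number(3.14)));
    assert_eq!(atom("pi".to_string()), Exp::Atom(Atom::Symbol("pi".to_string())));
}
```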
### Comparison Operators
We are now ready to add comparison operators to our Env:
```
pub fn standard_env() -> Env {
...
// Adding comparison operators
env.insert(
"=".to_string(),
Exp::Func(|args: &[Exp]| compare(args, "=")),
);
env.insert(
">".to_string(),
Exp::Func(|args: &[Exp]| compare(args, ">")),
);
env.insert(
"<".to_string(),
Exp::Func(|args: &[Exp]| compare(args, "<")),
);
env.insert(
">=".to_string(),
Exp::Func(|args: &[Exp]| compare(args, ">=")),
);
env.insert(
"<=".to_string(),
Exp::Func(|args: &[Exp]| compare(args, "<=")),
);
env
}
```
And our helper function compare():
```
pub fn compare(args: &[Exp], op: &str) -> Exp {
if args.len() != 2 {
panic!("Comparison operators require exactly two arguments");
}
let a = if let Exp::Atom(Atom::Number(n)) = args[0] {
n
} else {
panic!("Expected a number");
};
let b = if let Exp::Atom(Atom::Number(n)) = args[1] {
n
} else {
panic!("Expected a number");
};
let result = match op {
"=" => a == b,
">" => a > b,
"<" => a < b,
">=" => a >= b,
"<=" => a <= b,
_ => panic!("Unknown operator"),
};
Exp::Bool(result)
}
```
We now have a pretty decent calculator, supporting basic arithmetic operations and comparison operators!
**Note:** Stepan's implementation uses an approach taken from Clojure, where comparison operators can take more than 2 args, and return true if they are in a monotonic order that satisfies the operator. I have instead decided to limit my interpreter to two args only.
## Step 3: Almost a Language
At this point, we can build on booleans to add _if_ statements, and we are going to introduce the _define_ keyword to allow us to create our own variables and functions within the Env.
### Define
We need to update our eval() function so that it recognizes the **'define'** keyword; when it does, it creates a new variable in our Env, using the first arg as the key, and the second as the value:
```
fn eval(exp: Exp, env: &mut Env) -> Result<Exp, String> {
match exp {
...
Exp::List(list) => {
let first = &list[0];
if let Exp::Atom(Atom::Symbol(ref s)) = first {
if s == "define" {
if list.len() != 3 {
return Err("define requires exactly two arguments".into());
}
let var_name = match &list[1] {
Exp::Atom(Atom::Symbol(name)) => name.clone(),
_ => return Err("The first argument to define must be a symbol".into()),
};
let value = eval(list[2].clone(), env)?;
env.insert(var_name, value.clone());
Ok(value)
} else if let Some(Exp::Func(f)) = env.get(s) {
// Clone the function to avoid borrowing `env` later
let function = f.clone();
let args: Result<Vec<Exp>, String> = list[1..]
.iter()
.map(|x| eval(x.clone(), env))
.collect();
Ok(function(&args?))
} else {
Err(format!("Undefined function: {}", s))
}
} else {
Err("Expected a symbol".into())
}
}
Exp::Func(_) => Ok(exp),
}
}
```
We now have some nice built-in functionality, allowing us to do this:
```
> (define r 10)
10
> (+ r 5)
15
```
We can also further improve on this feature in the next step, so that we will be able to **'define'** functions and turn this into a full-on language.
### Improved **'define'**
To extend the define functionality to support the definition of new functions, we'll need to update the parser, evaluator, and the Env structure to handle function definitions and closures. Specifically, this involves recognizing when a function is being defined, parsing its arguments and body, and storing this in the environment.
First we update the Exp
```
pub enum Exp {
Bool(bool),
...
FuncDef {
params: Vec<Exp>,
body: Vec<Exp>,
env: Env,
},
}
```
and the related Display trait
```
impl fmt::Display for Exp {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
...
Exp::FuncDef{..} => write!(f, "<function>"),
}
}
}
```
and finally eval()
```
fn eval(exp: Exp, env: &mut Env) -> Result<Exp, String> {
match exp {
...
Exp::List(list) => {
let first = &list[0];
if let Exp::Atom(Atom::Symbol(ref s)) = first {
if s == "define" {
if list.len() < 3 {
return Err("'define' requires at least two arguments".into());
}
// Define a new function
if let Exp::List(ref func) = list[1] {
if let Exp::Atom(Atom::Symbol(ref func_name)) = func[0] {
let params = func[1..].to_vec();
let body = list[2..].to_vec();
let lambda = Exp::FuncDef {
params,
body,
env: env.clone(),
};
env.insert(func_name.clone(), lambda);
return Ok(Exp::Atom(Atom::Symbol(func_name.clone())));
}
// Define a new variable
} else if let Exp::Atom(Atom::Symbol(ref var_name)) = list[1] {
let value = eval(list[2].clone(), env)?;
env.insert(var_name.clone(), value.clone());
return Ok(value);
} else {
return Err("Invalid define syntax".into());
}
} else if let Some(exp) = env.get(s) {
if let Exp::Func(f) = exp {
// Clone the function to avoid borrowing `env` later
let function = f.clone();
let args: Result<Vec<Exp>, String> = list[1..]
.iter()
.map(|x| eval(x.clone(), env))
.collect();
return Ok(function(&args?));
} else if let Exp::FuncDef { params, body, env: closure_env } = exp {
// Clone `env` to avoid borrowing later
let env_clone = &mut env.clone();
let args: Result<Vec<Exp>, String> = list[1..]
.iter()
.map(|x| eval(x.clone(), env_clone))
.collect();
let mut local_env = closure_env.clone();
for (param, arg) in params.iter().zip(args?) {
if let Exp::Atom(Atom::Symbol(param_name)) = param {
local_env.insert(param_name.clone(), arg);
} else {
return Err("Invalid parameter name".into());
}
}
let mut result = Exp::Bool(false);
for exp in body {
result = eval(exp.clone(), &mut local_env)?;
}
return Ok(result);
}
}
return Err(format!("Undefined function: {}", s));
} else {
Err("Expected a symbol".into())
}
}
Exp::Func(_) => Ok(exp),
Exp::FuncDef { .. } => {
Err("Unexpected function definition".into())
}
}
}
```
With the changes above, we now have the possibility to do something like this, which I think is very cool:
```
> (define (square x) (* x x))
square
> (square 5)
25
```
### If statements (and some refactoring)
As mentioned above, we can build on our booleans to implement if statements. This is the function:
```
fn eval_if(list: &[Exp], env: &mut Env) -> Result<Exp, String> {
if list.len() < 4 {
return Err("'if' requires at least three arguments".into());
}
let condition = eval(list[1].clone(), env)?;
match condition {
Exp::Bool(true) => eval(list[2].clone(), env),
Exp::Bool(false) => eval(list[3].clone(), env),
_ => Err("Invalid condition in if expression".into()),
}
}
```
Since the eval() function is getting quite long, we handle 'define' and 'if' separately
```
fn eval_define(list: &[Exp], env: &mut Env) -> Result<Exp, String> {
if list.len() < 3 {
return Err("'define' requires at least two arguments".into());
}
// Define a new function
if let Exp::List(ref func) = list[1] {
if let Exp::Atom(Atom::Symbol(ref func_name)) = func[0] {
let params = func[1..].to_vec();
let body = list[2..].to_vec();
let lambda = Exp::FuncDef {
params,
body,
env: env.clone(),
};
env.insert(func_name.clone(), lambda);
return Ok(Exp::Atom(Atom::Symbol(func_name.clone())));
} else {
return Err("Invalid define syntax".into());
}
// Define a new variable
} else if let Exp::Atom(Atom::Symbol(ref var_name)) = list[1] {
let value = eval(list[2].clone(), env)?;
env.insert(var_name.clone(), value.clone());
return Ok(value);
} else {
return Err("Invalid define syntax".into());
}
}
```
```
fn eval(exp: Exp, env: &mut Env) -> Result<Exp, String> {
match exp {
...
Exp::List(list) => {
let first = &list[0];
if let Exp::Atom(Atom::Symbol(ref s)) = first {
match s.as_str() {
"define" => eval_define(&list, env),
"if" => eval_if(&list, env),
_ => {
// Truncated
}
}
} else {
Err("Expected a symbol".into())
}
}
Exp::Func(_) => Ok(exp),
Exp::FuncDef { .. } => Err("Unexpected function definition".into()),
}
}
```
We have added some new functionality, and can now type something like this:
```
> (define r 10)
10
> (if (< r 8) true false)
false
```
## Step 4: A Full Language
**Note:** This is where I depart the most from Stepan's implementation. While he introduced a Lambda type for his Exp enum, I have decided to build upon FuncDef. My reasoning was that I _did not really need_ lambda functions **at all**.
In the end, typing `(define (circle-area r) (* pi (* r r)))` was perfectly valid syntax with my implementation. What I _really needed_ was a scoped environment for new functions, which would also allow for recursion because the key was for _the function to access itself_, and if this is not magic, I don't know what is.
What's _even better_ is that all of the above could be achieved [drumroll] **with one single line of code**: `local_env.insert(s.clone(), exp.clone());`
That's it. One line.
The full eval() function is below:
```
fn eval(exp: Exp, env: &mut Env) -> Result<Exp, String> {
match exp {
Exp::Bool(_) => Ok(exp),
Exp::Atom(Atom::Symbol(s)) => env
.get(&s)
.cloned()
.ok_or_else(|| format!("Undefined symbol: {}", s)),
Exp::Atom(Atom::Number(_)) => Ok(exp),
Exp::List(list) => {
let first = &list[0];
if let Exp::Atom(Atom::Symbol(ref s)) = first {
match s.as_str() {
"define" => eval_define(&list, env),
"if" => eval_if(&list, env),
_ => {
if let Some(exp) = env.get(s) {
match exp {
Exp::Func(f) => {
// Clone the function to avoid borrowing `env` later
let function = f.clone();
let args: Result<Vec<Exp>, String> =
list[1..].iter().map(|x| eval(x.clone(), env)).collect();
Ok(function(&args?))
}
Exp::FuncDef {
params,
body,
env: closure_env,
} => {
// Clone `env` to avoid borrowing later
let env_clone = &mut env.clone();
let args: Result<Vec<Exp>, String> = list[1..]
.iter()
.map(|x| eval(x.clone(), env_clone))
.collect();
let mut local_env = closure_env.clone();
local_env.insert(s.clone(), exp.clone());
for (param, arg) in params.iter().zip(args?) {
if let Exp::Atom(Atom::Symbol(param_name)) = param {
local_env.insert(param_name.clone(), arg);
} else {
return Err("Invalid parameter name".into());
}
}
let mut result = Exp::Bool(false);
for exp in body {
result = eval(exp.clone(), &mut local_env)?;
}
Ok(result)
}
_ => Err(format!("Undefined function: {}", s)),
}
} else {
Err(format!("Undefined function: {}", s))
}
}
}
} else {
Err("Expected a symbol".into())
}
}
Exp::Func(_) => Ok(exp),
Exp::FuncDef { .. } => Err("Unexpected function definition".into()),
}
}
```
With function definitions able to reference themselves, we can now harness the power of recursion and write something like this:
```
> (define (fact n) (if (<= n 1) 1 (* n (fact (- n 1)))))
fact
> (fact 5)
120
```
## Fin
Rest here, weary traveler. You've reached the end of this post.
My implementation is far from complete, and even further from elegant. I have willingly omitted some parts of my code in order to focus on what I deemed most important, and I have since done some refactoring as well.
You can find my full project [here](https://github.com/galzmarc/rustylisp/).
Contributions are more than welcome, so if you get to it, send me your thoughts 🙂. | galzmarc |
1,906,773 | ocidpatchdb - Automate GI/RU Patches on OCI DB System | { Abhilash Kumar Bhattaram : Follow on LinkedIn } ocidpatchdb I have extensively... | 0 | 2024-06-30T16:48:19 | https://dev.to/nabhaas/ocidpatchdb-automate-giru-patches-on-oci-db-system-56cg | oracle, automation, database, oci | [](
<style>
.libutton {
display: flex;
flex-direction: column;
justify-content: center;
padding: 7px;
text-align: center;
outline: none;
text-decoration: none !important;
color: #ffffff !important;
width: 200px;
height: 32px;
border-radius: 16px;
background-color: #0A66C2;
font-family: "SF Pro Text", Helvetica, sans-serif;
{ Abhilash Kumar Bhattaram : </style> <a class="libutton" href="https://www.linkedin.com/comm/mynetwork/discovery-see-all?usecase=PEOPLE_FOLLOWS&followMember=abhilash-kumar-85b92918" target="_blank">Follow on LinkedIn</a>) }
## ocidpatchdb
I have worked extensively with OCI CLI and have found a way to automate GI/RU patches for the OCI DB System class of databases. The ocidpatchdb utilities are the perfect way to do this. The absolute power of ocidpatchdb is seen in the last section, where patches are applied to multiple databases at once.
The real motivation to create this utility is that DBAs struggle with operational efficiency when this is done manually. Patching is an activity that must run every quarter for GI/RU.
Oracle provides Fleet Patching and Provisioning as seen [here](https://docs.oracle.com/en/database/oracle/oracle-database/21/fppad/fleet-patching-provisioning.html) in an elegant way (but be prepared to pay $$$). My ocidpatchdb does the exact same thing, so I call mine the poor man's Fleet Patching and Provisioning.
As a technical consultant, one needs to find cheaper ways to provide solutions with the same efficiency; this is one such example.
This is why I created the ocidtab utility a couple of years ago, and I am now expanding it to multiple OCI automations like ocidpatchdb.
_NOTE: Before we proceed any further, this blog assumes you are familiar with OCI CLI and the efficiency it provides. If you are new to OCI CLI, I strongly suggest referencing my older article on
[ocidtab](https://dev.to/nabhaas/the-ocidtab-e7n)_
## Where to find ocidpatchdb utilities ?
The ocidpatchdb utilities are available in my GitHub repository below:
[https://github.com/abhilash-8/ocidpatchdb](https://github.com/abhilash-8/ocidpatchdb)
I strongly recommend going through the README to understand how the scripts work.
## Pre-requisites for running ocidpatch
Ideally, ocidpatch should run from a jumpbox where you have OCI CLI configured. The following prerequisites are needed to use the ocidpatchdb scripts:
1) OCI DB System Hostname [ as seen in OCI Web Console ]
2) OCI DB Name [ as seen in OCI Web Console ]
3) OCI VCN Name / relevant OCI CLI profile to be used
4) jq to be installed in Linux
```
$ sudo yum install jq
```
5) The OCI User in the profile will need to have the required IAM Policies for OCI Services to generate the OCID
6) ocidtab environment variable files; if not available, please refer to: https://github.com/abhilash-8/ocidenv
## Example for running GI/RU Patches
Once you are familiar with the prerequisites, you can start patching.
Let us assume that we have a DEV database that needs the 19.23 RU patch applied; all you need to specify is the environment, hostname, database name, version, and the patch action.
**Example of applying RU Patches**
```
# ./ocidpatch_ru.sh DEV devdb01 orcl 19.23.0.0.0 PRECHECK
# ./ocidpatch_ru.sh DEV devdb01 orcl 19.23.0.0.0 APPLY
```
**Example of applying GI Patches**
```
# ./ocidpatch_gi.sh DEV devdb01 orcl 19.23.0.0.0 PRECHECK
# ./ocidpatch_gi.sh DEV devdb01 orcl 19.23.0.0.0 APPLY
```
## Automating multiple GI/RU Patches
Once you have a fair grip on how to PRECHECK/APPLY for one environment, you can automate multiple environments as below.
**Example of Automating RU PRECHECK for multiple database systems**
```
#!/bin/bash
#
# Automating PRECHECK for all DEV Environments
~/ocidpatch_ru.sh DEV devdb01 orcl01 19.23.0.0.0 PRECHECK
~/ocidpatch_ru.sh DEV devdb02 orcl02 19.23.0.0.0 PRECHECK
~/ocidpatch_ru.sh DEV devdb03 orcl03 19.23.0.0.0 PRECHECK
~/ocidpatch_ru.sh DEV devdb04 orcl04 19.23.0.0.0 PRECHECK
~/ocidpatch_ru.sh DEV devdb05 orcl05 19.23.0.0.0 PRECHECK
#
# Automating PRECHECK for all TST Environments
~/ocidpatch_ru.sh TST tstdb01 orcl06 19.23.0.0.0 PRECHECK
~/ocidpatch_ru.sh TST tstdb02 orcl07 19.23.0.0.0 PRECHECK
~/ocidpatch_ru.sh TST tstdb03 orcl08 19.23.0.0.0 PRECHECK
~/ocidpatch_ru.sh TST tstdb04 orcl09 19.23.0.0.0 PRECHECK
~/ocidpatch_ru.sh TST tstdb05 orcl10 19.23.0.0.0 PRECHECK
#
# Automating PRECHECK for all UAT Environments
~/ocidpatch_ru.sh UAT tstdb01 orcl11 19.23.0.0.0 PRECHECK
~/ocidpatch_ru.sh UAT tstdb02 orcl12 19.23.0.0.0 PRECHECK
~/ocidpatch_ru.sh UAT tstdb03 orcl13 19.23.0.0.0 PRECHECK
~/ocidpatch_ru.sh UAT tstdb04 orcl14 19.23.0.0.0 PRECHECK
~/ocidpatch_ru.sh UAT tstdb05 orcl15 19.23.0.0.0 PRECHECK
```
**Example of Automating RU APPLY for multiple database systems**
```
#!/bin/bash
#
# Automating APPLY for all DEV Environments
~/ocidpatch_ru.sh DEV devdb01 orcl01 19.23.0.0.0 APPLY
~/ocidpatch_ru.sh DEV devdb02 orcl02 19.23.0.0.0 APPLY
~/ocidpatch_ru.sh DEV devdb03 orcl03 19.23.0.0.0 APPLY
~/ocidpatch_ru.sh DEV devdb04 orcl04 19.23.0.0.0 APPLY
~/ocidpatch_ru.sh DEV devdb05 orcl05 19.23.0.0.0 APPLY
#
# Automating APPLY for all TST Environments
~/ocidpatch_ru.sh TST tstdb01 orcl06 19.23.0.0.0 APPLY
~/ocidpatch_ru.sh TST tstdb02 orcl07 19.23.0.0.0 APPLY
~/ocidpatch_ru.sh TST tstdb03 orcl08 19.23.0.0.0 APPLY
~/ocidpatch_ru.sh TST tstdb04 orcl09 19.23.0.0.0 APPLY
~/ocidpatch_ru.sh TST tstdb05 orcl10 19.23.0.0.0 APPLY
#
# Automating APPLY for all UAT Environments
~/ocidpatch_ru.sh UAT tstdb01 orcl11 19.23.0.0.0 APPLY
~/ocidpatch_ru.sh UAT tstdb02 orcl12 19.23.0.0.0 APPLY
~/ocidpatch_ru.sh UAT tstdb03 orcl13 19.23.0.0.0 APPLY
~/ocidpatch_ru.sh UAT tstdb04 orcl14 19.23.0.0.0 APPLY
~/ocidpatch_ru.sh UAT tstdb05 orcl15 19.23.0.0.0 APPLY
```
**Example of Automating GI PRECHECK for multiple database systems**
```
#!/bin/bash
#
# Automating PRECHECK for all DEV Environments
~/ocidpatch_gi.sh DEV devdb01 orcl01 19.23.0.0.0 PRECHECK
~/ocidpatch_gi.sh DEV devdb02 orcl02 19.23.0.0.0 PRECHECK
~/ocidpatch_gi.sh DEV devdb03 orcl03 19.23.0.0.0 PRECHECK
~/ocidpatch_gi.sh DEV devdb04 orcl04 19.23.0.0.0 PRECHECK
~/ocidpatch_gi.sh DEV devdb05 orcl05 19.23.0.0.0 PRECHECK
#
# Automating PRECHECK for all TST Environments
~/ocidpatch_gi.sh TST tstdb01 orcl06 19.23.0.0.0 PRECHECK
~/ocidpatch_gi.sh TST tstdb02 orcl07 19.23.0.0.0 PRECHECK
~/ocidpatch_gi.sh TST tstdb03 orcl08 19.23.0.0.0 PRECHECK
~/ocidpatch_gi.sh TST tstdb04 orcl09 19.23.0.0.0 PRECHECK
~/ocidpatch_gi.sh TST tstdb05 orcl10 19.23.0.0.0 PRECHECK
#
# Automating PRECHECK for all UAT Environments
~/ocidpatch_gi.sh UAT tstdb01 orcl11 19.23.0.0.0 PRECHECK
~/ocidpatch_gi.sh UAT tstdb02 orcl12 19.23.0.0.0 PRECHECK
~/ocidpatch_gi.sh UAT tstdb03 orcl13 19.23.0.0.0 PRECHECK
~/ocidpatch_gi.sh UAT tstdb04 orcl14 19.23.0.0.0 PRECHECK
~/ocidpatch_gi.sh UAT tstdb05 orcl15 19.23.0.0.0 PRECHECK
```
**Example of Automating GI APPLY for multiple database systems**
```
#!/bin/bash
#
# Automating APPLY for all DEV Environments
~/ocidpatch_gi.sh DEV devdb01 orcl01 19.23.0.0.0 APPLY
~/ocidpatch_gi.sh DEV devdb02 orcl02 19.23.0.0.0 APPLY
~/ocidpatch_gi.sh DEV devdb03 orcl03 19.23.0.0.0 APPLY
~/ocidpatch_gi.sh DEV devdb04 orcl04 19.23.0.0.0 APPLY
~/ocidpatch_gi.sh DEV devdb05 orcl05 19.23.0.0.0 APPLY
#
# Automating APPLY for all TST Environments
~/ocidpatch_gi.sh TST tstdb01 orcl06 19.23.0.0.0 APPLY
~/ocidpatch_gi.sh TST tstdb02 orcl07 19.23.0.0.0 APPLY
~/ocidpatch_gi.sh TST tstdb03 orcl08 19.23.0.0.0 APPLY
~/ocidpatch_gi.sh TST tstdb04 orcl09 19.23.0.0.0 APPLY
~/ocidpatch_gi.sh TST tstdb05 orcl10 19.23.0.0.0 APPLY
#
# Automating APPLY for all UAT Environments
~/ocidpatch_gi.sh UAT tstdb01 orcl11 19.23.0.0.0 APPLY
~/ocidpatch_gi.sh UAT tstdb02 orcl12 19.23.0.0.0 APPLY
~/ocidpatch_gi.sh UAT tstdb03 orcl13 19.23.0.0.0 APPLY
~/ocidpatch_gi.sh UAT tstdb04 orcl14 19.23.0.0.0 APPLY
~/ocidpatch_gi.sh UAT tstdb05 orcl15 19.23.0.0.0 APPLY
```
If you have come this far in the article and tested the same in your environments, you have achieved Nirvana.
| abhilash8 |
1,906,775 | Functions as User Interface | Hi, y'all. ✨ My journey in the tech world has been filled with countless lines of code, numerous... | 0 | 2024-06-30T16:46:25 | https://dev.to/khadijagardezi/functions-as-user-interface-35c8 | javascript, react, webdev, programming | Hi, y'all. ✨
My journey in the tech world has been filled with countless lines of code, numerous tools, and multiple languages. I have experience with JavaScript, ReactJS, CraftCMS, Twig, and a few more. As a woman in tech, I understand the challenges of this field. I am dedicated to sharing my knowledge and experiences to help others like me grow and thrive. Let's get started and become experts together!
**Functions as UI**
In React, functions can be used to define UI components. This allows you to create reusable and modular pieces of your user interface. To illustrate this concept, let's build a simple React component that displays the text "Hello, Khadija."
In standard HTML, we write something like below to display a message:
```html
<h1>Hello JavaScript</h1>
```
This works fine for static content, right? But what if we want to change the message dynamically based on user interaction or some other logic? That's where React and JavaScript functions come into play.
First, we will create a function component named App. This component will use another function, Message, to get the text "Khadija" and display it within an `<h1>` element.
```jsx
import React from 'react';

const Message = () => {
  // Without return, the function would return undefined.
  return "Khadija";
}

const App = () => {
  const getName = Message();
  return (
    <div>
      <h1>Hello, {getName}</h1>
    </div>
  );
}

export default App;
```
In this code snippet, we defined a simple React component called App that uses a function Message to return the string "Khadija". This string is then displayed inside an h1 tag in our component.
By using functions in this way, we can make our UI more dynamic and responsive to changes in data or state. It allows us to create interactive and responsive applications with ease. This example demonstrates how we can use a simple function to dynamically update the content of our components.
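To see the underlying idea without any React machinery, here's a plain-JavaScript sketch: a "component" is essentially a function from data to markup, so changing the input changes the rendered output. The `getGreeting` name below is my own, not from the component above.

```javascript
// A "component" reduced to its essence: a pure function from data to markup.
const getGreeting = (name) => `<h1>Hello, ${name}</h1>`;

console.log(getGreeting("Khadija")); // <h1>Hello, Khadija</h1>
console.log(getGreeting("Amina"));   // <h1>Hello, Amina</h1>
```

React components work the same way, just returning JSX instead of strings.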
Thank you for following along & Happy coding! 💻
_The next part is coming soon_.📝 | khadijagardezi |
1,906,017 | Getting Started with JavaScript: Variables and Data Types | Introduction One of the foundational concepts in JavaScript is understanding how to... | 0 | 2024-06-30T16:45:13 | https://dev.to/ishaq_360/getting-started-with-javascript-variables-and-data-types-4p8h | javascript, beginners, programming, webdev | ## Introduction
One of the foundational concepts in JavaScript is understanding how to declare and use variables, along with the different data types available. This guide is designed for beginners and early-intermediate developers; let's journey through the nitty-gritty of JavaScript variables and data types, using var, let, and const declarations.
**Variable:** A variable is a container that stores a value that can be used in a program. Variables have a name, a data type, and a value.
**Data Type:** A data type is a classification of data that determines the type of value a variable can hold. Data types also determine the operations that can be performed on the data.
JavaScript variable is used to store data and can be created using var, let, and const keywords. Below is a simple javascript code of the three (3) declarations.
```javascript
let myVariable = 'value'
const myConstant = 'constant';
var myVar = 'variable';
```
**Explanation**
The code demonstrates the following:
- declares a variable with the value 'value'
- declares a constant variable with the value 'constant'
- declares a variable with the value 'variable'
**JavaScript var declaration keyword**
The var keyword is the oldest of the variable declaration keywords. A var variable is either function-scoped or global-scoped: if declared inside a function it can only be accessed within that function, and if declared outside any function it can be accessed by any function.
**Example 1**: `num1` below is a global-scoped variable and `num2` is a function-scoped variable that can only be accessed via the function `func()`.
```javascript
var num1 = 10
function func() {
var num2 = 20
console.log(num1, num2)
}
func();
console.log(num1);
```
**Output**
> 10 20
> 10
**Explanation:**
* `var num1 = 10` declares a global variable `num1` with the value `10`.
* `function func()` defines a function `func` that declares a local variable `num2` with the value `20` and logs both `num1` and `num2` to the console.
* `func()` calls the function, logging `10, 20` to the console.
* `console.log(num1)` logs the global variable `num1` to the console, resulting in `10`.
The code below tries to access the function-scoped variable from outside the function:
```javascript
console.log(num2);
```
**Output**
> ReferenceError: num2 is not defined
**Explanation**
- `num2` is only defined inside the scope of the `func()` function, so when you try to log it outside of that scope, it's undefined.
**Example 2**: var variables can be re-declared. In the code below, `num1` and `num2` are re-declared, showcasing how this affects both global-scoped and function-scoped variables.
```javascript
var num1 = 10
function func() {
var num2 = 20
var num2 = "ten"
console.log(num2)
}
func();
var num1 = 20
console.log(num1);
```
**Output**
> ten
> 20
**Explanation**
- `num1` is a global variable, defined outside the function, so it can be accessed and modified from anywhere in the code.
- `num2` is a local variable, defined inside the function, so it can only be accessed and modified within the `func()` function. Once the function finishes executing, `num2` is no longer accessible.
**JavaScript let declaration Keyword**
The let declaration keyword was introduced in ECMAScript 2015 (ES6). It is an improved version of the var keyword: variables declared with let use block scope, so they are not accessible outside the code block in which they are declared.
Example 1: The code below demonstrates the let variable declaration
```javascript
function happy() {
  let mood = "don't know"
  if (mood == "happy") {
    console.log("I am", mood)
  } else if (mood == "sad") {
    console.log("I am", mood)
  } else {
    console.log("I am", mood)
  }
}
happy();
```
**Output:**
> I am don't know
**Explanation**
- The code defines a function `happy()` that:
- Sets a variable mood to `"don't know"`
- Checks if the `mood` is `"happy"`, `"sad"`, or `neither`
- Logs a message to the console with the value of `mood`
Example 2: The code below demonstrates let variables declared outside and inside the if statement's block, plus what happens when an undeclared variable is accessed.
```javascript
function happy() {
  let mood = "jolly"
  if (mood == "happy") {
    let mood = "don't know"
    console.log("I am", mood)
  }
  console.log("I am", mood)
}
happy();
console.log("I am", day)
happy();
```
**Explanation**
- The happy() function sets mood to "jolly".
- The if statement checks whether mood is "happy"; it's not, so the code inside the if block (including the inner, block-scoped mood) is skipped.
- The function logs "I am jolly" to the console.
- The code then tries to log the value of day, but day was never declared, so a ReferenceError is thrown and the second happy() call never runs.

**Output:**
> I am jolly
> console.log("I am", day)
> ^
> ReferenceError: day is not defined
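A minimal way to see the block-scoping difference between let and var directly:

```javascript
// `let` is confined to the block it is declared in; `var` is not.
{
  let blockScoped = "inside";
  var functionScoped = "leaky";
}

console.log(functionScoped);     // "leaky" -- var escapes the block
console.log(typeof blockScoped); // "undefined" -- let did not escape
```

This is why let is generally preferred: it keeps variables confined to the smallest scope that needs them.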
**JavaScript const declaration Keyword**
The const declaration keyword has all the features of the let keyword; the major difference is that a const binding can't be reassigned. It is used for variables with fixed values, such as pi, acceleration due to gravity, etc.
Example 1: The code below declares `pi` as a constant; a later attempt to reassign it throws an error.
```javascript
const pi = 3.14159;
function func() {
pi = 34
console.log(pi)
}
func();
```
**Output:**
> pi = 34
> ^
> TypeError: Assignment to constant variable.
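One subtlety worth knowing: const prevents reassigning the binding, not mutating the value it refers to. A const array or object can still be modified in place:

```javascript
const colors = ["red", "green"];
colors.push("blue");         // allowed: mutates the array's contents
console.log(colors.length);  // 3

// colors = ["black"];       // TypeError: Assignment to constant variable.
```

So use const when the binding should never point to a different value, even if the value itself may change.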
### JavaScript consists of two categories of data types:
1. Primitive data types
2. Non-primitive (complex) data types
**Primitive Data Types:** These are data types whose variables can hold only a single value, and are immutable (i.e. cannot be changed in memory location).
• **Number**: Numeric values, represented as floating-point numbers (e.g., 42.67, 3.14), as well as special values like NaN (Not a Number), Infinity, and -Infinity.
```javascript
let num1 = 12
let num2 = 10.7
```
• **String**: A sequence of characters (e.g., "hello", 'hello'), represented as a string of Unicode characters.
```javascript
let firstName = "Bala"
let lastName = 'Babangida'
```
• **Boolean**: A true or false value.
```javascript
let isStudent = true
let isLecturer = false
```
• **Null**: A null value, representing the intentional absence of any object value.
```javascript
let firstName = "Bala"
let lastName = 'Babangida'
let middleName = null
```
• **Undefined**: A variable is said to be undefined if it is uninitialized (i.e., it has no assigned value). Below is a code demonstration of undefined values.
```javascript
let middleName;
let isPregnant;
let Age;
```
• **BigInt:** Whole numbers larger than 2^53 - 1, which is the largest number JavaScript can reliably represent with the Number primitive. BigInt values are created by appending n to the end of an integer or by using the BigInt function.
```javascript
let bigNumber = 1234567890123456789012345678901234567890n;
```
- **Symbol**: A unique identifier, used to create unique property keys (new in ECMAScript 2015).
```javascript
// Create a Symbol
let mySymbol = Symbol();
// Create an object with a Symbol key
let obj = {
[mySymbol]: "Symbol value"
};
// Access the value using the Symbol key
console.log(obj[mySymbol]); // Output: "Symbol value"
// Try to access the value with a string key
console.log(obj["mySymbol"]); // Output: undefined
```
### Difference between null data type and undefined data types
To start with, both null and undefined represent empty variable values. What distinguishes the two is best understood through case studies; below are case studies of the null and undefined data types.
**Case Study 1:**
- Bala creates a new account on an e-commerce platform.
- He skips adding his age (which is an optional field) and middle name (which is not required).
- His age is assigned a null value because it's an attribute he has (i.e., the field exists, but he chose not to fill it).
- His middle name is assigned an undefined value because it's an attribute he lacks (i.e. the field doesn't exist or does not apply to him)
**Case Study 2:**
- A developer is working on a program that retrieves a user's address from a database.
- The database has fields for street, city, state, and zip code.
- If a user has not provided their address, the database returns null for all address fields.
- However, if the database doesn't have a field for a particular piece of information (e.g., fax number), it returns undefined.
- In this case, the developer can check for null to handle missing data and undefined to handle non-existent fields.
#### These case studies illustrate the difference between null and undefined:
- Null represents an empty or missing value for an existing attribute or field.
- Undefined represents a non-existent attribute or field, or a lack of value for a particular property or variable.
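Case Study 1 can be expressed directly in code (the `user` object below is a hypothetical example):

```javascript
const user = {
  name: "Bala",
  age: null            // the field exists, but was intentionally left empty
};

console.log(user.age);         // null -- present but empty
console.log(user.middleName);  // undefined -- the property does not exist
console.log(typeof null);      // "object" (a long-standing JavaScript quirk)
console.log(typeof undefined); // "undefined"
```

Checking for null lets you handle intentionally empty data, while checking for undefined lets you handle non-existent fields.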
**Non-Primitive Data Types (Complex):**
• **Object**: A collection of key-value pairs, where keys are strings or symbols and values can be any type (e.g., {name: "John", age: 30}). Objects are mutable and can be extended or modified.
The code below demonstrates creating an object person with properties name and age, and a method greet.
```javascript
// Object example
const person = {
name: "John",
age: 30,
greet: function() {
console.log("Hello, my name is " + this.name + " and I am " + this.age + " years old.");
}
};
person.greet(); // Output: Hello, my name is John and I am 30 years old.
```
• **Array:** A collection of values of any type, indexed by integers (e.g., [1, 2, 3], ["a", "b", "c"]). Arrays are mutable and can be extended or modified. The code below demonstrates creating an array of colors and accessing its elements.
```javascript
// Array example
const colors = ["red", "green", "blue"];
console.log(colors[0]);
colors.push("yellow"); // Add a new element to the array
console.log(colors);
```
**Output**
> red
> ["red", "green", "blue", "yellow"]
**Explanation**
- Creates an array colors with three elements: "red", "green", and "blue".
- Logs the first element of the array (colors[0]), which is "red".
- Adds a new element "yellow" to the end of the array using the push() method.
- Logs the updated array, which now contains four elements: "red", "green", "blue", and "yellow".
The indexing of array starts from zero (0) up to the last index which is n-1. Where n is any number between 0 and the length of the array.
• **Function:** A block of code that can be called with arguments, returning a value (e.g., function greet(name) { console.log("Hello, " + name); }). Functions are objects and can have properties and methods. The code example below demonstrates defining a function add that takes two arguments and returns their sum.
**Example 1:** Below is a code demonstration of an addition Function.
```javascript
function add(x, y) {
return x + y;
}
const result = add(2, 3);
console.log(result); // Output: 5
```
**Example 2:** Function as an object
```javascript
function greet(name) {
console.log("Hello, " + name);
}
greet("Jane");
console.log(greet.name);
console.log(typeof greet);
```
**Output**
> Hello, Jane
> greet
> function
**Explanation**
- Defines a function greet that takes a name and logs a greeting message.
- Calls the function with the name "Jane", logging "Hello, Jane".
- Logs the function's name (greet) and type (function).
## Conclusion
Understanding variables and data types is crucial for effective programming in JavaScript. By using `var`, `let`, and `const`, you can manage the scope and mutability of your variables. Recognizing the difference between primitive and non-primitive data types helps in organizing and manipulating data efficiently. Keep practicing these concepts to become more proficient in JavaScript.
| ishaq_360 |
1,906,772 | Is Your Sidebar Hurting Your UX? Our Redesign Journey to Effortless Navigation | Have you ever felt frustrated navigating a website with a clunky sidebar? We hear you. At Hexmos,... | 0 | 2024-06-30T16:41:00 | https://dev.to/ganesh-kumar/is-your-sidebar-hurting-your-ux-our-redesign-journey-to-effortless-navigation-391d |
Have you ever felt frustrated navigating a website with a clunky sidebar? We hear you. At [Hexmos](https://hexmos.com/), we're constantly striving to improve the user experience.
Recently, we took a critical look at our sidebar design and identified many areas for improvement.
This blog post will delve into the journey of redesigning our sidebar navigation, taking inspiration from industry leaders.
## Our Existing Sidebar Design
### High Contrast and Visual Distraction
Our initial sidebar design boasted a bold black-and-white theme.
While visually striking, it created a high visual contrast that drew unnecessary attention away from the content users came for.
This distracted from the overall user experience.
### Solving issues in the initial design
We recognized the need for improvement with our initial sidebar design.
Prioritized user focus on the content by implementing a lighter theme for the sidebar and navigation bar.
<div style="text-align: center;">
<img src="https://journal-wa6509js.s3.ap-south-1.amazonaws.com/d52dc6e5c155176af994d8535405941d676fe7d06798df0b572a3535af63a8b9.png" alt="image 373" width="400"/>
</div>
This shift in color scheme achieved two key goals:
**Reduced Visual Clutter:** The lighter theme created a more balanced visual experience. The sidebar no longer competed with the main content area for user attention.
**Content Takes Center Stage:** By adopting a lighter theme, the content on the page became the focal point. Users could now easily find the information they sought without being distracted by the sidebar.
However, the User experience was not good due to multiple issues.
### Inefficient Multi-level Navigation
While the lighter theme addressed the initial contrast issue, further exploration revealed a new challenge.
Our multi-level navigation structure required users to click through three levels to reach certain sections.
This inefficiency became apparent as users expressed frustration with the navigation flow. Imagine trying to find a specific setting buried within three layers of menus!
**A solution was needed.**
A three-click journey wasn't ideal. We brainstormed solutions to improve the navigation flow and user experience.
Our goal was to create a system that was both intuitive and efficient, allowing users to find what they needed quickly and effortlessly.
Before going into the solution, let's understand the product we are building.
### Hexmos Feedback
We are a small team, building a product called [Hexmos Feedback](https://hexmos.com/feedback).
Feedback helps you consistently motivate your team with less effort.
More than 60% of employees yearn for regular, meaningful feedback, but often feel undervalued.
Hexmos Feedback helps your people give frequent feedback to one another effortlessly,
so that your team always remains highly motivated.
Feedback keeps teams motivated and engaged through recognition and continuous feedback.
We have attendance management as a part of Hexmos Feedback. We go beyond simple attendance. We help you look at "participation".
Explore Feedback: https://hexmos.com/feedback
Our book: https://turnoverbook.com/
## Case Studies
To find inspiration for a more intuitive navigation system, we turned to industry leaders known for their user-friendly interfaces. Here's what we learned:
### [Notion](https://www.notion.so/)
Notion offered a clean, single-level sidebar.
It has some limitations that make it unsuitable for our use case.
#### Absence of multi-level navigation
While this worked for their specific content structure, it wouldn't accommodate our complex, multi-level navigation needs.
#### Single-Level Navigation Wasn't Enough
Placing all of our elements in a single-level sidebar would clutter it, making it difficult for users to find what they're looking for.
Continue Reading [Article](https://journal.hexmos.com/is-your-sidebar-hurting-your-ux-our-redesign-journey-to-effortless-navigation/#google-play-consolehttpsplaygooglecomconsole) | ganesh-kumar | |
1,906,771 | #3 Liskov Substitution Principle ['L' in SOLID] | LSP - Liskov Substitution Principle The Liskov Substitution Principle is the third principle in the... | 0 | 2024-06-30T16:40:59 | https://dev.to/vinaykumar0339/3-liskov-substitution-principle-l-in-solid-1jo2 | liskov, solidprinciples, designprinciples | **LSP - Liskov Substitution Principle**
The Liskov Substitution Principle is the third principle in the Solid Design Principles.
1. Objects of a superclass should be replaceable with objects of a subclass without affecting the correctness of the program.
**Violating LSP:**
```swift
class Bird {
func fly() {
print("Bird is Flying...")
}
}
class Duck: Bird {
override func fly() {
print("Duck is Flying...")
}
}
class Ostrich: Bird {
// We do not give Ostrich a real implementation of fly(), because an ostrich can't fly. For demonstration purposes, the override raises a fatalError instead.
override func fly() {
        fatalError("Ostrich can't fly, so passing an Ostrich to any function that expects a Bird changes the behaviour of the program")
}
}
func flyBird(bird: Bird) {
bird.fly()
}
// usage
let bird = Bird()
let duckBird = Duck()
let ostrichBird = Ostrich()
flyBird(bird: bird)
flyBird(bird: duckBird)
flyBird(bird: ostrichBird)
```
**Issues with Violating LSP:**
1. Unexpected Behavior:
* Subclasses that change the expected behaviour of the base class can lead to runtime errors and bugs.
2. Reduced Maintainability:
* Modifying subclasses to fix LSP violations can make the code harder to maintain.
3. Inconsistent Substitution:
* Clients using the base class might not work correctly with all subclasses, leading to inconsistent behaviour.
**Adhering to LSP**
1. To adhere to LSP, use proper inheritance hierarchies, protocols, or similar techniques.
### Using Class Separation
```swift
class BirdLSP {
func eat() {
print("Bird is eating...")
}
}
class FlyingBird: BirdLSP {
func fly() {
print("FlyingBird is flying...")
}
}
class DuckLSP: FlyingBird {
override func fly() {
print("Duck is flying...")
}
}
class OstrichLSP: BirdLSP {
override func eat() {
print("Ostrich eating...")
}
}
func flyBirdLSP(flyingBird: FlyingBird) {
flyingBird.fly()
}
// usage
let birdLsp = BirdLSP()
let flyingBirdLSP = FlyingBird()
let duckLSP = DuckLSP()
let ostrichLSP = OstrichLSP()
flyBirdLSP(flyingBird: flyingBirdLSP)
flyBirdLSP(flyingBird: duckLSP)
//The below example will throw an error at compile time
// flyBirdLSP(flyingBird: birdLsp)
// flyBirdLSP(flyingBird: ostrichLSP)
```
### Using Protocol
```swift
protocol Eatable {
func eat()
}
protocol Flyable {
func fly()
}
class BirdLSPUsingProtocol: Eatable {
func eat() {
print("BirdLSPUsingProtocol is eating")
}
}
class DuckLSPUsingProtocol: Eatable, Flyable {
func eat() {
print("DuckLSPUsingProtocol is eating")
}
func fly() {
print("DuckLSPUsingProtocol is flying")
}
}
class OstrichLSPUsingProtocol: Eatable {
func eat() {
print("OstrichLSPUsingProtocol is eating")
}
}
func flyingBirdLSPUsingProtocol(flyer: Flyable) {
flyer.fly()
}
// usage
let birdLSPUsingProtocol = BirdLSPUsingProtocol()
let duckLSPUsingProtocol = DuckLSPUsingProtocol()
let ostrichLSPUsingProtocol = OstrichLSPUsingProtocol()
flyingBirdLSPUsingProtocol(flyer: duckLSPUsingProtocol)
// The calls below will fail at compile time
// flyingBirdLSPUsingProtocol(flyer: birdLSPUsingProtocol)
// flyingBirdLSPUsingProtocol(flyer: ostrichLSPUsingProtocol)
```
**Benefits of Adhering to LSP:**
1. Improved Maintainability:
* Ensures that derived classes can be used interchangeably with base classes without unexpected behaviour.
2. Enhanced Flexibility:
* Encourages the use of polymorphism, allowing more flexible and scalable code.
3. Increased Reliability:
* Substitutable objects ensure consistent behaviour, reducing the likelihood of runtime errors.
**Drawbacks of Adhering to LSP:**
1. More Classes and Interfaces:
* Can lead to an increase in the number of classes and interfaces, making the codebase larger.
2. Initial Complexity:
* Implementing LSP correctly may require a more complex initial design and detailed planning.
**Mitigating Drawbacks:**
1. Balanced Approach:
* Apply LSP judiciously, balancing between simplicity and the need for extensibility.
2. Clear Documentation:
* Maintain clear and concise documentation to help developers understand class hierarchies and responsibilities.
3. Use of Design Patterns:
* Employ design patterns that naturally adhere to LSP, such as the Strategy pattern, to manage complexity.
## Conclusion:
By thoughtfully understanding and applying the Liskov Substitution Principle, you can create more maintainable, understandable, and reliable software. Ensuring that subclasses can be substituted for their superclasses without altering the program's correctness promotes better software design and flexibility.
[Open/Close Principle](https://dev.to/vinaykumar0339/2-open-close-principle-o-in-solid-2jj6)
[Interface Segregation Principle](https://dev.to/vinaykumar0339/4-interface-segregation-principle-i-in-solid-3g97)
[Check My GitHub Swift Playground Repo.](https://github.com/vinaykumar0339/SolidDesignPrinciples) | vinaykumar0339 |
1,906,770 | s-pro | Hi friends, I had the pleasure of collaborating with https://s-pro.io/ recently, and I must say, they... | 0 | 2024-06-30T16:40:19 | https://dev.to/andriano_nestorios_c95ecc/s-pro-3p07 | Hi friends, I had the pleasure of collaborating with [https://s-pro.io/](https://s-pro.io/) recently, and I must say, they exceeded all my expectations. Their innovative AI-driven software solutions have completely transformed how we do business, leading to significant improvements in both performance and profitability. The team's commitment to understanding our specific needs and delivering exceptional results has been outstanding. If you're looking to thrive in today's competitive market, I highly recommend S-PRO. They have the expertise and passion to take your business to the next level. | andriano_nestorios_c95ecc | |
1,906,768 | “==” and “===” difference in javascript | In javascript both “==” and “===” used for comparison but they have purposes and behaviors ... | 0 | 2024-06-30T16:38:52 | https://dev.to/sagar7170/-and-difference-in-javascript-44c1 | javascript, webdev, beginners | In JavaScript, both `==` and `===` are used for comparison, but they have different purposes and behaviors.
## 1. == (double equal):
The `==` operator is used for loose equality comparison. It compares only values, not data types, so comparing the string `"5"` with the number `5` returns `true`.
```
console.log(5=="5") // true
console.log(5==5) // true
```
## 2. === (triple equal):
The `===` operator is used for strict equality comparison. While `==` compares only values, `===` compares both value and data type, so comparing the string `"5"` with the number `5` returns `false`; both value and type must match.
```
console.log(5==="5") // false
console.log(5===5) // true
```
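Beyond numbers and numeric strings, loose equality applies several special coercion rules. The sketch below illustrates a few comparisons where `==` and `===` disagree:

```javascript
// Loose equality (==) coerces types before comparing;
// strict equality (===) never coerces.
console.log(null == undefined);  // true  (a special case in the spec)
console.log(null === undefined); // false (different types)

console.log(0 == "");   // true  ("" coerces to the number 0)
console.log(0 === "");  // false

console.log(false == "0");  // true  (both coerce to the number 0)
console.log(false === "0"); // false
```

Because of these surprises, many style guides recommend using `===` by default and reserving `==` for the deliberate `x == null` check, which matches both `null` and `undefined`.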
| sagar7170 |
1,906,767 | Confusion vs Diffusion in cryptography | Confusion and Diffusion are essential concepts in cryptography and network security. Both confusion... | 0 | 2024-06-30T16:36:16 | https://dev.to/himanshu_raj55/confusion-vs-diffusion-in-cryptography-i0n | cryptography, cybersecurity, cipher | Confusion and diffusion are essential concepts in cryptography and network security. Both confusion and diffusion are cryptographic techniques that are used to stop an attacker from deducing the secret key.
The major differences between confusion and diffusion are as follows:
| Confusion | Diffusion |
| --- | --- |
| Obscures the relationship between the key and the ciphertext. | Obscures the relationship between the plaintext and the ciphertext. |
| Creates vague ciphertext. | Creates cryptic plaintext. |
| Achieved through substitution. | Achieved through transposition. |
| Increases the vagueness of the result. | Increases the redundancy of the result. |
| Used by both stream and block ciphers. | Used only by block ciphers. |
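To make the substitution/transposition distinction concrete, here is a small illustrative sketch (toy ciphers only, not secure; the function names are our own). The Caesar-style substitution changes each letter but keeps positions, while the columnar transposition keeps the letters but rearranges their positions:

```javascript
// Substitution (confusion): replace each letter with another letter.
// A Caesar shift over the 26 uppercase letters.
function substitute(text, shift) {
  return text.replace(/[A-Z]/g, (ch) =>
    String.fromCharCode(((ch.charCodeAt(0) - 65 + shift) % 26) + 65)
  );
}

// Transposition (diffusion): keep the letters, rearrange their positions.
// Read the text column by column as if it were written in rows of `cols` columns.
function transpose(text, cols) {
  let out = "";
  for (let c = 0; c < cols; c++) {
    for (let i = c; i < text.length; i += cols) {
      out += text[i];
    }
  }
  return out;
}

console.log(substitute("ATTACK", 3)); // "DWWDFN" — letters change, order kept
console.log(transpose("ATTACK", 2));  // "ATCTAK" — letters kept, order changes
```

Real block ciphers such as AES repeat rounds of substitution (S-boxes) and permutation steps to achieve both confusion and diffusion together.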
| himanshu_raj55 |
1,906,463 | Buy verified cash app account | https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash... | 0 | 2024-06-30T10:02:17 | https://dev.to/topeciw546/buy-verified-cash-app-account-59ge | webdev, javascript, beginners, programming | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\n\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. 
If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. 
How To Buy Verified Cash App Accounts.\n\nFor entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.\n\nWhen it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts. With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.\n\nThis article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.\n\n \n\nIs it safe to buy Cash App Verified Accounts?\nCash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.\n\nUnfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. 
How To Buy Verified Cash App Accounts.\n\nCash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.\n\nLeveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.\n\n \n\nWhy you need to buy verified Cash App accounts personal or business?\nThe Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.\n\nTo address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.\n\nIf you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.\n\nImproper payment practices can lead to potential issues with your employees, as they could report you to the government. 
However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.\n\nA Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.\n\nThis accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.\n\n \n\nHow to verify Cash App accounts\nTo ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.\n\nAs part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. 
Buy verified cash app account.\n\n \n\nHow cash used for international transaction?\nExperience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.\n\nNo matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.\n\nUnderstanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.\n\nAs we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.\n\nOffers and advantage to buy cash app accounts cheap?\nWith Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.\n\nWe deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.\n\nEnhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. 
Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.\n\nTrustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.\n\nHow Customizable are the Payment Options on Cash App for Businesses?\nDiscover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.\n\nExplore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.\n\nDiscover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.\n\nWhere To Buy Verified Cash App Accounts\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. 
It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nThe Importance Of Verified Cash App Accounts\nIn today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.\n\nBy acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.\n\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nConclusion\nEnhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.\n\nChoose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. 
Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com" | topeciw546 |
1,906,766 | Variable & Variable Scope | PHP Fundamentals | Create Variable in PHP The rules when you create variables in PHP: Variable declaration... | 0 | 2024-06-30T16:33:30 | https://dev.to/gunawanefendi/variable-variable-scope-in-php-3l4a | webdev, php, learning, beginners |
## Create Variable in PHP
The rules when you create variables in PHP:
1. Variables are declared with a dollar sign ($) followed by the variable name
2. A variable name must start with a letter or an underscore (_)
3. Variable names are case-sensitive
Valid variables:
```
$name = "Gunawan"; //valid
$Name = "Gunawan"; //valid
$_name = "Gunawan"; //valid
```
Not valid variables:
```
$4name = "Gunawan"; //not valid
$user-name = "Gunawan"; //not valid
$this = "Gunawan"; //not valid
```
## Variable Scope
PHP has 3 variable scopes:
1. Global
2. Local
3. Static
## Global scope
```
$name = "Gunawan";
function get_name() {
echo $name; // not valid: $name is not accessible inside the function
}
get_name();
```
To access a global variable within a function you must declare a global variable with the keyword 'global' within a function.
```
$name = "Gunawan";
function get_name() {
global $name;
echo $name; // valid
}
get_name();
```
## Use Array GLOBALS to Access Global Variable
The second way to access global variables is to use a global array.
```
$name = "Gunawan";
function get_name() {
echo $GLOBALS['name']; // valid
}
get_name();
```
## Static Variable
```
function test() {
    static $number = 0;
    echo $number;
    $number++;
}

test(); // prints 0
test(); // prints 1
test(); // prints 2
```
A static variable keeps its value between calls to the function, so each call prints the next number.
## Super Global Variables in PHP:
1. $GLOBALS
2. $_SERVER
3. $_GET
4. $_POST
5. $_FILES
6. $_COOKIE
7. $_SESSION
8. $_REQUEST
9. $_ENV
Download my repository php fundamental from [my github](https://github.com/gugunefendi/php-fundamental).
| gunawanefendi |
1,906,765 | FRONTEND WEB DEVELOPMENT (comparison of two technologies). | In the field of software development, what is developed is divided into two categories: everything... | 0 | 2024-06-30T16:33:13 | https://dev.to/joseb007/frontend-web-development-comparism-on-two-technologies-4p10 |
In the field of software development, what is developed is divided into two categories: everything visible to the user and the operations that run in the background. Front-end technology is what we see and interact with when we visit a website or use a mobile app. Back-end technology and DevOps encompass all of the behind-the-scenes activities that distribute data, as well as the speed with which it arrives.
The front-end stack comprises a variety of languages and libraries. While they differ by application, only a few general-purpose languages are supported by all web browsers.
Below is a comparison of two frontend web technologies (CSS and Bootstrap) in the following areas: definition, usage, learning curve, customization, speed of development, flexibility and control, file size and performance, and community and support.
1. CSS is a core technology that enables developers to style and layout web pages, providing complete control over presentation elements such as colors, fonts, spacing, and responsiveness.

Bootstrap, on the other hand, is a CSS-based framework that includes a set of pre-designed components and a grid system to help developers get things done faster.

2. CSS is used to design and layout web pages, such as adjusting the font, color, size, and spacing of information, dividing it into numerous columns, and adding animations and other ornamental elements. While Bootstrap is used to create responsive and mobile-first websites using prefabricated CSS and JavaScript components.
3. CSS is essential in web development, and understanding it is critical for styling web pages. Its basics may be easy to learn from scratch, but mastering it takes time. Bootstrap, meanwhile, has a steeper learning curve for absolute beginners, since it requires basic CSS (and, optionally, HTML and JavaScript) knowledge to use the framework effectively.
4. CSS enables complete customization from the ground up, allowing you to develop one-of-a-kind designs without the use of predefined components. While Bootstrap provides considerable modification capabilities via predefined classes and components, such customizations might add complexity.
5. CSS creates unique designs from scratch, which can take longer than using a framework like Bootstrap. While Bootstrap allows for the rapid construction of layouts and components, it is best suited for projects with tight deadlines.
6. CSS gives you complete control over the styling of web pages, allowing for detailed and specialized designs without the limits of a framework. While Bootstrap comes with a powerful grid system and adaptable design, its predefined components may limit fine-grained design control.
7. CSS files are often smaller than a whole framework, resulting in faster page load times, particularly for simple websites. The Bootstrap framework (especially the full version) can result in larger file sizes, which may slow load times for small websites.
8. CSS has widespread support; unique design solutions necessitate independent research or contact with community forums and documentation. Bootstrap has a strong community and abundant documentation, making it easier to solve problems or learn from pre-existing templates and themes.
Frontend web development is best learned through practice, building on the theoretical knowledge above.
Below are links that can usher you into your dream life of becoming a frontend web developer.
[Hng free intern package](https://hng.tech/internship),[Hng premium intern package](https://hng.tech/premium). | joseb007 | |
1,906,764 | MyFirstApp - React Native with Expo (P11) - Create a Layout Settings | MyFirstApp - React Native with Expo (P11) - Create a Layout Settings | 27,894 | 2024-06-30T16:32:42 | https://dev.to/skipperhoa/myfirstapp-react-native-with-expo-p11-create-a-layout-settings-4fd4 | react, reactnative, webdev, tutorial | MyFirstApp - React Native with Expo (P11) - Create a Layout Settings
{% youtube Y-sIi1N60m0 %} | skipperhoa |
1,906,763 | Enhance Your Home with Copper Gutters in Litchfield, CT | Copper gutters are becoming a popular choice among homeowners in Litchfield, CT, due to their... | 0 | 2024-06-30T16:29:12 | https://dev.to/allkj21uu/enhance-your-home-with-copper-gutters-in-litchfield-ct-n06 | Copper gutters are becoming a popular choice among homeowners in Litchfield, CT, due to their exceptional durability, timeless beauty, and low maintenance requirements. In this comprehensive guide, we’ll explore the benefits of copper gutters, the installation process, and why they are an excellent choice for homes in Litchfield.**[copper gutters litchfield ct](https://alllitchfieldgutters.com/litchfield-ct-copper-gutters/)**
**The Benefits of Copper Gutters**
1. Durability
Copper gutters are incredibly durable. Unlike other materials such as aluminum or vinyl, copper is highly resistant to rust and corrosion. With proper maintenance, copper gutters can last more than 50 years, making them a long-term investment for your home in Litchfield, CT.
2. Aesthetic Appeal
One of the most compelling reasons to choose copper gutters is their aesthetic appeal. Copper starts with a shiny, reddish-brown appearance and gradually develops a beautiful patina over time. This patina, which ranges from green to dark brown, not only adds to the charm of your home but also provides an additional layer of protection against the elements.
3. Low Maintenance
Copper gutters require less maintenance compared to other gutter materials. They are naturally resistant to mold, mildew, and algae, reducing the need for frequent cleaning. Their robust nature also means fewer repairs and replacements over the years.
4. Eco-Friendly
Copper is a natural material that is fully recyclable. Opting for copper gutters helps reduce environmental impact compared to synthetic materials that contribute to landfill waste. This makes copper gutters an eco-friendly choice for environmentally conscious homeowners in Litchfield.
**The Installation Process of Copper Gutters**
1. Planning and Measurement
The first step in installing copper gutters is careful planning and accurate measurement. Professional installers assess your home’s roofline to determine the required length and style of gutters. Precise measurements are crucial for a seamless installation.
2. Material Selection
After measurements are taken, the appropriate copper gutter materials are selected. Copper gutters come in various styles, such as half-round and K-style, and additional components like downspouts, brackets, and hangers are chosen to match your home’s design.
3. Removal of Old Gutters
If your home has existing gutters, they need to be removed carefully to avoid damaging the roof or fascia. This involves detaching the old gutters and disposing of them properly.
4. Preparing the Fascia
Before installing new copper gutters, the fascia board must be inspected and prepared. Any damaged or rotten sections should be repaired or replaced to ensure a solid foundation for the new gutters.
5. Installing Copper Gutters
The installation begins with attaching gutter hangers or brackets to the fascia. The copper gutters are then carefully aligned and secured in place. Proper alignment ensures efficient water flow and prevents sagging or leakage.
6. Sealing and Testing
After installation, all joints and seams are sealed with a high-quality sealant to prevent leaks. The system is then tested by running water through the gutters to ensure there are no leaks or blockages and that water flows smoothly towards the downspouts.
**Why Choose Copper Gutters for Your Litchfield Home?**
1. Historical and Architectural Significance
Litchfield, CT, is known for its historic homes and architectural beauty. Copper gutters complement the traditional and colonial styles prevalent in the area, maintaining the historical integrity of your property while providing modern functionality.
2. Investment in Quality
Although copper gutters are more expensive upfront compared to other materials, their long lifespan and low maintenance requirements make them a cost-effective investment in the long run. You save on frequent replacements and repairs, making copper gutters a wise financial choice.
3. Enhanced Property Value
Homes with copper gutters often have a higher property value due to the premium quality and aesthetic appeal they add. Installing copper gutters can be a selling point if you decide to put your home on the market, attracting buyers looking for durability and elegance.
4. Weather Resistance
Litchfield experiences a range of weather conditions, from heavy rainfall to snow and ice. Copper gutters are well-suited to handle these challenges due to their strength and resistance to thermal expansion and contraction. They remain functional and visually appealing through all seasons.
5. Customization Options
Copper gutters offer various customization options to match your home’s unique style. You can choose from different shapes, sizes, and finishes, allowing you to create a gutter system that perfectly complements your home’s exterior design.
**Maintaining Copper Gutters**
1. Regular Inspection
Regular inspection of your copper gutters is essential to ensure they remain in good condition. Check for any signs of damage, leaks, or clogs and address them promptly to prevent further issues.
2. Cleaning
While copper gutters require less maintenance, occasional cleaning is still necessary. Remove any debris, leaves, or twigs that may accumulate in the gutters, especially during the fall season. This helps maintain proper water flow and prevents blockages.
3. Sealing and Repairs
Inspect the seals and joints periodically to ensure they remain intact. If you notice any leaks or damage, apply a copper-friendly sealant to repair them. Addressing minor issues early can prevent more significant problems down the line.
4. Patina Care
The patina that forms on copper gutters is a natural protective layer. Avoid using harsh chemicals or abrasive materials to clean the gutters, as this can damage the patina. If you prefer to maintain the shiny appearance of new copper, you can apply a protective coating to slow down the patina formation.
**Conclusion**
Copper gutters are an excellent investment for homeowners in Litchfield, CT, offering a combination of durability, aesthetic appeal, and low maintenance. Their ability to withstand harsh weather conditions and their long lifespan make them a superior choice for protecting your home. By choosing copper gutters, you enhance your property’s value and contribute to its historical and architectural significance. Regular maintenance and care will ensure your copper gutters remain functional and beautiful for decades to come.
Embrace the elegance and durability of copper gutters and elevate your home's exterior with a timeless, eco-friendly choice.
| allkj21uu | |
1,906,762 | The importance of TypeScript, its uses, and types | In the world of software development, TypeScript stands out as one of the most powerful and... | 0 | 2024-06-30T16:25:59 | https://dev.to/mhmd-salah/the-importance-of-typescript-its-uses-and-types-507e | javascript, typescript, website, node | #### In the world of software development, TypeScript stands out as one of the most powerful and influential tools for improving code quality and development speed. Developed by Microsoft, TypeScript is an open source programming language based on JavaScript with powerful Type Annotations. In this article, we will review the importance of TypeScript, its uses, and its different types.
TypeScript is a powerful tool that can significantly improve code quality and development speed. By catching errors early, improving the developer experience, and making maintenance easier, TypeScript provides an ideal development environment for large, complex projects. Whether you're developing web, server, mobile, or even game applications, TypeScript gives you the tools you need to succeed.
## The importance of TypeScript
- Early detection of errors:
One of the biggest advantages of TypeScript is its ability to detect errors early in the development process. Thanks to static type checking, syntax and type errors can be caught before the code is run, reducing debugging time and increasing application reliability.
- Improve developer experience:
TypeScript provides a better development experience by offering features such as Autocomplete and Intelligent Suggestions. These features make writing code faster and more accurate.
- Maintenance improvement:
Thanks to type annotations, the code becomes clearer and easier to understand. This makes it easier for large teams to work on the same project and makes maintenance and adding new features smoother.
- JavaScript compatibility:
TypeScript is fully compatible with JavaScript, which means that any JavaScript code can be used within a TypeScript project. This makes it easier to switch to TypeScript gradually without having to rewrite the entire code.
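To make the first point concrete, here is a small, self-contained sketch (the function and values are illustrative, not from any particular project) of how type annotations catch a mistake before the code ever runs:

```typescript
// A typed function: the compiler enforces that callers pass the right types.
function applyDiscount(price: number, percent: number): number {
  return price - price * (percent / 100);
}

const total: number = applyDiscount(200, 10); // 180

// The following line would be rejected at compile time, before the code runs:
// applyDiscount("200", 10);
// Error: Argument of type 'string' is not assignable to parameter of type 'number'.
console.log(total);
```

The same call in plain JavaScript would silently produce a wrong result at runtime; TypeScript surfaces it during development instead.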
## TypeScript uses
- Web application development:
TypeScript is widely used in developing large and complex web applications. Many popular frameworks like Angular are primarily based on TypeScript.
- Server-side application development:
TypeScript can be used with Node.js to develop server applications. TypeScript provides powerful features that make developing server applications safer and easier.
- Mobile application development:
Using frameworks like React Native, TypeScript can be used to develop mobile applications for iOS and Android.
- Game development:
TypeScript is also used in game development with engines like Phaser, providing a powerful and flexible development environment.
## TypeScript types
1 - Type Annotations:
Type annotations are used to specify the types of variables, parameters, and return values of functions. This helps catch errors and makes the code clearer.
2 - Interfaces:
Interfaces are used to define the appearance of objects. Interfaces help ensure that objects follow a specific structure, making it easier to work with code.
3 - Classes:
Classes are used to define objects and provide object-oriented programming (OOP) features such as inheritance and encapsulation.
4 - Enums:
Enumerations are used to define a set of constant values. Enumerations help make the code clearer and easier to understand.
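A short sketch tying these four features together (the names here are illustrative, not from any particular library):

```typescript
// Enum: a fixed set of named constant values.
enum Role {
  Admin = "ADMIN",
  Member = "MEMBER",
}

// Interface: describes the shape an object must follow.
interface User {
  id: number;
  name: string;
  role: Role;
}

// Class: an object blueprint; parameter properties declare and assign fields.
class Account implements User {
  constructor(
    public id: number,
    public name: string,
    public role: Role = Role.Member
  ) {}

  isAdmin(): boolean {
    return this.role === Role.Admin;
  }
}

const alice = new Account(1, "Alice", Role.Admin);
console.log(alice.isAdmin()); // true
```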
| mhmd-salah |
1,906,760 | Top Crypto-Friendly Countries in 2022 | Cryptocurrency has revolutionized finance and investment, and its influence is set to grow. However,... | 27,673 | 2024-06-30T16:20:50 | https://dev.to/rapidinnovation/top-crypto-friendly-countries-in-2022-34de | Cryptocurrency has revolutionized finance and investment, and its influence is
set to grow. However, not all countries are equally welcoming to crypto. Some
have stringent regulations, while others are more lenient. Curious about which
jurisdictions offer the best conditions for crypto in terms of regulations and
taxes? Read on to discover the top crypto-friendly countries and regions in
2022.
## Bermuda
Bermuda's regulatory regime is one of the first designed specifically for
digital assets. With no taxes on income, capital gains, or withholding taxes
on crypto transactions, Bermuda is a haven for crypto investors and blockchain
projects. Notably, Bermuda was the first country to accept cryptocurrencies
for tax payments.
## Portugal
Portugal is another crypto-tax-friendly country. Individual investors don’t
pay taxes on income or capital gains from crypto. However, businesses
accepting crypto payments do have to pay income tax, making it less ideal for
companies.
## El Salvador
El Salvador made headlines by declaring Bitcoin as legal tender. To attract
foreign investments and reduce dependency on the US dollar, the country
imposes no taxes on income and capital gains from Bitcoin. However, the
regulatory framework is still maturing, posing potential risks for investors.
## Singapore
Singapore, a fintech hub, is pro-crypto. The Monetary Authority of Singapore
maintains a balance between minimal regulation and preventive monitoring.
There is no tax on capital gains for both individuals and companies, although
businesses accepting crypto payments must pay income tax.
## Georgia
Georgia, with its affordable hydroelectric power, is a center for crypto
mining. Cryptocurrencies are considered properties, not legal tender.
Individual investors enjoy no capital gains tax, but companies face a 15%
corporate income tax and a 5% personal dividend tax.
## Cyprus
Cyprus currently has no legal framework for cryptocurrencies, meaning
individual investors likely won’t pay taxes on crypto trading profits. Legal
entities, however, must pay a 12.5% tax on all income generated.
## Switzerland
Switzerland is a top destination for crypto investors, thanks to its
pioneering banks and "Crypto Valley." Tax regulations vary by canton, with
some offering zero-capital-gains tax on crypto trading. However, income from
crypto mining and trading is taxed in certain cantons.
## Slovenia
Slovenia is one of Europe’s best jurisdictions for crypto. Individual capital
gains from crypto trading are not taxed, but businesses must pay corporate
income tax if they receive payments in crypto. Ljubljana, the capital, is home
to several crypto start-ups and merchants accepting Bitcoin.
## Germany
Germany treats cryptocurrencies as private money for individuals. Residents
who hold crypto for over a year don’t pay taxes on it. However, businesses are
subject to capital gains tax, making it less favorable for companies.
## Estonia
Estonia treats cryptocurrencies as digital assets, not legal tender.
Individual income from crypto is taxed, but there is no specific crypto-
related tax for companies. Estonia’s established legal framework makes its
crypto environment more trustworthy and lower risk.
## Malta
Malta offers a comprehensive regulatory package for the crypto ecosystem.
Foreign companies and individual investors don’t pay income and capital gains
tax for long-term crypto investments. Trading income, however, is subject to a
35% income tax, with structuring options to reduce it to 0-5% for non-
residents.
## Conclusion
Digital currencies are increasingly accepted as a store of value and have the
potential to expand into conventional investment. The crypto-friendly
countries covered here have a significant competitive advantage. If you’re
preparing your business for Web 3.0, consider jurisdictions with favorable
conditions. Thoroughly research cryptocurrency regulations to decide what
matters most to you—low capital gains tax or an established regulatory
framework.
📣📣Drive innovation with intelligent AI and secure blockchain technology! Check
out how we can help your business grow!
[Blockchain App Development](https://www.rapidinnovation.io/service-
development/blockchain-app-development-company-in-usa)
[AI Software Development](https://www.rapidinnovation.io/ai-software-
development-company-in-usa)
## URLs
* <http://www.rapidinnovation.io/post/top-crypto-friendly-countries-and-regions-2022>
## Hashtags
#CryptoFriendlyCountries
#CryptoRegulations
#BlockchainInvestment
#CryptoTaxHavens
#DigitalAssets
| rapidinnovation | |
1,899,514 | Next.js 14 and NextAuth v4 : Credentials Authentication A Detailed Step-by-Step Guide | Authentication is a crucial feature in web applications, and it can also be challenging to implement... | 0 | 2024-06-30T16:20:05 | https://www.coderamrin.com/blog/nextjs-14-next-auth-v4-credentials-authentication | webdev, authjs, nextjs, beginners | Authentication is a crucial feature in web applications, and it can also be challenging to implement correctly. In this article, you'll learn how to integrate authentication in Next.js using NextAuth (v4).
Let’s get started.
### Prerequisites
To make the most out of this guide, here’s what you should have under your belt:
1. A basic understanding of HTML, CSS, and JavaScript.
2. Some familiarity with Next.js and TypeScript.
3. Experience using Tailwind CSS, even if it’s just the basics.
4. Node.js installed on your computer. If unsure, check by running `node -v` in your terminal!
### Install Next.js
To get started you need a Next.js project. You can skip this part if you want to integrate NextAuth into your existing Next.js project.
Run this command to initialize a bare-bone next.js project.
```bash
npx create-next-app@latest
```
Follow the prompts and go with the default.

This will generate a Next.js project with all the necessary files/folders and dependencies.
After the installation **cd** into the project directory and run this command:
```bash
npm run dev
```
This will open your project on [localhost:3000](http://localhost:3000)
### Database Configuration
For authentication, you have to store the user data in a database.

So, before configuring NextAuth, configure the database.
In this guide, we will use Postgresql with Prisma.
Install all these dependencies for integrating NextAuth with a Database.
```bash
npm install prisma @prisma/client next-auth @next-auth/prisma-adapter bcrypt
```
### Create the Database Schema
On the root of your project folder create the schema file **prisma → schema.prisma**
Now, copy this over to that file.
```jsx
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
generator client {
provider = "prisma-client-js"
}
model User {
id String @id @default(cuid())
name String?
email String @unique
emailVerified DateTime?
  hashedPassword String?
image String?
accounts Account[]
sessions Session[]
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
}
model Account {
userId String
type String
provider String
providerAccountId String
refresh_token String?
access_token String?
expires_at Int?
token_type String?
scope String?
id_token String?
session_state String?
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
user User @relation(fields: [userId], references: [id], onDelete: Cascade)
@@id([provider, providerAccountId])
}
model Session {
sessionToken String @unique
userId String
expires DateTime
user User @relation(fields: [userId], references: [id], onDelete: Cascade)
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
}
model VerificationToken {
identifier String
token String
expires DateTime
@@id([identifier, token])
}
```
The first two blocks of code connect the database with Prisma.
So, we need the DATABASE_URL to make it work.
There are a lot of PostgreSQL providers available, for example:
1. Neon
2. Supabase
3. Xata
After you generate the DATABASE_URL update the .env file
```jsx
DATABASE_URL="YOUR_DATABASE_URL"
```
**Migration commands**
Now, run these commands to create the tables on the database.
```jsx
npx prisma generate
npx prisma db push
```
**Create the Prisma client**
Now create the Prisma Client to perform all the database-related operations.
Inside the app directory create **libs/prismaDB.ts** file and paste this code.
```jsx
import { PrismaClient } from "@prisma/client";
const globalForPrisma = global as unknown as { prisma: PrismaClient };
export const prisma =
globalForPrisma.prisma ||
new PrismaClient({
log: ["query"],
});
if (process.env.NODE_ENV !== "production") globalForPrisma.prisma = prisma;
```
Now you can import `prisma` from anywhere in the project and use it to run database queries.
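For example, a quick sketch of a query through this client (the email value is just an illustration; this runs inside a server component or route handler, where top-level `await` is available):

```jsx
import { prisma } from "@/app/libs/prismaDB";

// Look up a single user by their unique email.
// Returns the user record, or null when no row matches.
const user = await prisma.user.findUnique({
  where: { email: "jane@example.com" },
});
```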
### Configure NextAuth
Now we have to create the authOptions config.
Create the authOptions object inside the **libs/auth.ts** file and paste this code.
```jsx
import { prisma } from "@/app/libs/prismaDB";
import { PrismaAdapter } from "@next-auth/prisma-adapter";
import bcrypt from "bcrypt";
import { NextAuthOptions } from "next-auth";
export const authOptions: NextAuthOptions = {
pages: {
signIn: "/auth/signin",
},
session: {
strategy: "jwt",
},
adapter: PrismaAdapter(prisma),
secret: process.env.SECRET,
providers: [
// the providers goes here
],
// debug: process.env.NODE_ENV === "development",
};
```
In this code snippet, we imported the Prisma client, the PrismaAdapter from `@next-auth/prisma-adapter`, and the `NextAuthOptions` type from `next-auth`, and created the `authOptions` object, which contains all the NextAuth-related configuration.
Here’s the break-down:
**pages:** It defines the page for auth. Used for redirects and fallback.
**session:** Defines the session strategy. We will use **JWT.**
**adapter:** Defines the adapter to manage Database operations. We will use **Prisma.**
**secret:** A secret for the Session.
**providers:** An array of all the providers. We will use the Credentials provider.
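For `authOptions` to take effect, NextAuth also needs its catch-all API route so the sign-in, session, and callback endpoints exist. This wiring isn't shown elsewhere in this guide, so here is a minimal sketch for the App Router (the import path assumes `authOptions` lives in `app/libs/auth.ts` as above; adjust it to your structure):

```jsx
// app/api/auth/[...nextauth]/route.ts
import NextAuth from "next-auth";

import { authOptions } from "@/app/libs/auth";

const handler = NextAuth(authOptions);

export { handler as GET, handler as POST };
```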
### Configuring Credentials Provider
Now, let’s configure the Credentials provider.
Go to **auth.ts** file and import the **Credentials provider** from nextAuth
```jsx
import CredentialsProvider from "next-auth/providers/credentials";
```
Now, add this code snippet inside the providers array:
```jsx
CredentialsProvider({
name: "credentials",
    id: "credentials",
credentials: {
email: { label: "Email", type: "text", placeholder: "jsmith" },
password: { label: "Password", type: "password" },
username: {
label: "Username",
type: "text",
placeholder: "John Smith",
},
},
async authorize(credentials) {
// check to see if email and password is there
if (!credentials?.email || !credentials?.password) {
throw new Error("Please enter an email and password");
}
// check to see if user exists
const user = await prisma.user.findUnique({
where: {
email: credentials?.email,
},
});
// if no user was found
if (!user || !user?.hashedPassword) {
throw new Error("No user found");
}
// check to see if password matches
const passwordMatch = await bcrypt.compare(
credentials.password,
user.hashedPassword
);
// if password does not match
if (!passwordMatch) {
throw new Error("Incorrect password");
}
return user;
},
}),
```
Here’s the breakdown of the credentials config:
**Name:** The name of the credentials provider. If only one Credentials provider is used, it is specified in the `signIn` function.
**Id:** A unique identifier for the credentials provider. When you have more than one credentials provider, you must use this identifier in the `signIn` function instead of the name.
**Credentials:** Specify the values required for signing in.
**Authorize:** A callback function that uses the provided credentials to perform the authorization operations.
### Configure User Registration
You need a registered user on the Database to be able `signIn`
Let’s set up user registrations.
Create the register route in **api→auth→register→route.ts** and paste this code.
```jsx
import bcrypt from "bcrypt";
import { NextResponse } from "next/server";
import { prisma } from "@/app/libs/prismaDB";
export async function POST(request: any) {
const body = await request.json();
const { name, email, password } = body;
if (!name || !email || !password) {
return new NextResponse("Missing Fields", { status: 400 });
}
const exist = await prisma.user.findUnique({
where: {
email,
},
});
  if (exist) {
    return new NextResponse("Email already exists", { status: 400 });
  }
const hashedPassword = await bcrypt.hash(password, 10);
const user = await prisma.user.create({
data: {
name,
email,
hashedPassword,
},
} as any);
return NextResponse.json(user);
}
```
To register users, create a POST route that gets the user data from the request body.
Check if any required fields are missing; if they are, send an error message.
Next, fetch the user with the provided email. If the user already exists, return an error message and prompt them to log in or try a different email. If no user is found, create a new user with the provided credentials and return the user.
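To exercise this route without the UI, you can call it directly; a sketch using `fetch` with hypothetical example values:

```jsx
const res = await fetch("/api/auth/register", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  // Example values only; in practice this comes from the sign-up form.
  body: JSON.stringify({
    name: "Jane Doe",
    email: "jane@example.com",
    password: "s3cret-password",
  }),
});

if (res.ok) {
  const user = await res.json(); // the newly created user record
  console.log(user.email);
}
```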
### Sign Up Form

Let’s create the sign-up form to register users.
Create the Sign-Up route in **app → auth → signup → page.tsx** and paste this code.
```jsx
"use client";
import Link from "next/link";
import React from "react";
import axios from "axios";
import { useRouter } from "next/navigation";
import { useState } from "react";
import toast from "react-hot-toast";
const Signup = () => {
const router = useRouter();
const [data, setData] = useState({
name: "",
email: "",
password: "",
rePassword: "",
});
const { name, email, password, rePassword } = data;
const handleChange = (e: any) => {
setData({ ...data, [e.target.name]: e.target.value });
};
const registerUser = async (e: any) => {
e.preventDefault();
if (!name || !email || !password) {
return toast.error("Something went wrong!");
}
axios
.post("/api/auth/register", {
name,
email,
password,
})
.then(() => {
toast.success("User has been registered.");
router.push("/auth/signin");
setData({
name: "",
email: "",
password: "",
rePassword: "",
});
})
.catch(() => toast.error("Something went wrong"));
};
return (
<form
onSubmit={registerUser}
className="w-[300px] mx-auto flex flex-col justify-center h-screen"
>
<h1 className="text-3xl font-bold mb-5">Sign Up</h1>
<div className="mb-6">
<label
htmlFor="name"
className="block mb-2 text-sm font-medium text-gray-900 dark:text-white"
>
Name
</label>
<input
type="text"
id="name"
name="name"
value={data.name}
onChange={handleChange}
className="bg-gray-50 border border-gray-300 text-gray-900 text-sm rounded-lg focus:ring-blue-500 focus:border-blue-500 block w-full p-2.5 dark:bg-gray-700 dark:border-gray-600 dark:placeholder-gray-400 dark:text-white dark:focus:ring-blue-500 dark:focus:border-blue-500"
placeholder="jhon doe"
required
/>
</div>
<div className="mb-6">
<label
htmlFor="email"
className="block mb-2 text-sm font-medium text-gray-900 dark:text-white"
>
Email
</label>
<input
type="email"
id="email"
name="email"
value={data.email}
onChange={handleChange}
className="bg-gray-50 border border-gray-300 text-gray-900 text-sm rounded-lg focus:ring-blue-500 focus:border-blue-500 block w-full p-2.5 dark:bg-gray-700 dark:border-gray-600 dark:placeholder-gray-400 dark:text-white dark:focus:ring-blue-500 dark:focus:border-blue-500"
placeholder="john.doe@company.com"
required
/>
</div>
<div className="mb-6">
<label
htmlFor="password"
className="block mb-2 text-sm font-medium text-gray-900 dark:text-white"
>
Password
</label>
<input
type="password"
id="password"
name="password"
value={data.password}
onChange={handleChange}
className="bg-gray-50 border border-gray-300 text-gray-900 text-sm rounded-lg focus:ring-blue-500 focus:border-blue-500 block w-full p-2.5 dark:bg-gray-700 dark:border-gray-600 dark:placeholder-gray-400 dark:text-white dark:focus:ring-blue-500 dark:focus:border-blue-500"
placeholder="•••••••••"
required
/>
</div>
<button
type="submit"
className="text-white bg-blue-700 hover:bg-blue-800 focus:ring-4 focus:outline-none focus:ring-blue-300 font-medium rounded-lg text-sm w-full sm:w-auto px-5 py-2.5 text-center dark:bg-blue-600 dark:hover:bg-blue-700 dark:focus:ring-blue-800"
>
Submit
</button>
<div className="mt-5">
<p>
Already have an account?{" "}
<Link className="text-blue-700" href="/auth/signin">
Signin
</Link>{" "}
or Go to{" "}
<Link className="text-blue-700" href="/">
Home
</Link>
</p>
</div>
</form>
);
};
export default Signup;
```
It is a simple sign-up form. To keep things simple, I put it right into the route file; you can also keep your sign-in and sign-up forms inside the components folder.
I’ve also used toast to show the message. Check out the **Setup the Session Provider** section to see how to configure toast notification.
### Sign In Form
Next, we need the sign-in form.

Just like the sign-up route, create the sign-in route and paste this code into it.
```jsx
"use client";
import Link from "next/link";
import React, { useState } from "react";
import { signIn } from "next-auth/react";
import toast from "react-hot-toast";
const Signin = () => {
const [data, setData] = useState({
email: "",
password: "",
});
const { email, password } = data;
const handleChange = (e: any) => {
setData({ ...data, [e.target.name]: e.target.value });
};
const handleSubmit = async (e: any) => {
e.preventDefault();
const result = await signIn("credentials", {
redirect: false,
email,
password,
});
if (result?.ok) {
toast.success("Logged in successfully");
setData({
email: "",
password: "",
});
window.location.href = "/";
} else {
toast.error("Invalid credentials");
}
};
return (
<div className="w-[300px] mx-auto flex flex-col justify-center h-screen">
<h1 className="text-3xl font-bold mb-5">Sign in</h1>
<form onSubmit={handleSubmit}>
<div className="mb-6">
<label
htmlFor="email"
className="block mb-2 text-sm font-medium text-gray-900 dark:text-white"
>
Email
</label>
<input
type="email"
id="email"
name="email"
value={email}
onChange={handleChange}
className="bg-gray-50 border border-gray-300 text-gray-900 text-sm rounded-lg focus:ring-blue-500 focus:border-blue-500 block w-full p-2.5 dark:bg-gray-700 dark:border-gray-600 dark:placeholder-gray-400 dark:text-white dark:focus:ring-blue-500 dark:focus:border-blue-500"
placeholder="john.doe@company.com"
required
/>
</div>
<div className="mb-6">
<label
htmlFor="password"
className="block mb-2 text-sm font-medium text-gray-900 dark:text-white"
>
Password
</label>
<input
type="password"
id="password"
name="password"
value={password}
onChange={handleChange}
className="bg-gray-50 border border-gray-300 text-gray-900 text-sm rounded-lg focus:ring-blue-500 focus:border-blue-500 block w-full p-2.5 dark:bg-gray-700 dark:border-gray-600 dark:placeholder-gray-400 dark:text-white dark:focus:ring-blue-500 dark:focus:border-blue-500"
placeholder="•••••••••"
required
/>
</div>
<button
type="submit"
className="text-white bg-blue-700 hover:bg-blue-800 focus:ring-4 focus:outline-none focus:ring-blue-300 font-medium rounded-lg text-sm w-full sm:w-auto px-5 py-2.5 text-center dark:bg-blue-600 dark:hover:bg-blue-700 dark:focus:ring-blue-800"
>
Submit
</button>
<div className="mt-5">
<p>
Don't have an account?{" "}
<Link className="text-blue-700" href="/auth/signup">
Signup
</Link>{" "}
or Go to{" "}
<Link className="text-blue-700" href="/">
Home
</Link>
</p>
</div>
</form>
</div>
);
};
export default Signin;
```
So, we have a simple sign-in form that stores the form data in state.

The form submission is handled slightly differently: instead of calling our own auth API route, we used the `signIn` function from NextAuth.

`signIn` takes the provider id (`"credentials"`) and an options object containing the credentials; because we pass `redirect: false`, it resolves with a result object we can inspect to check whether the sign-in succeeded.
### Setup the Session provider
Lastly, you have to wrap the whole application with the SessionProvider.
The SessionProvider will allow you to use `useSession` on your application.
Inside the app directory create **providers→index.tsx** and paste this code into it.
```jsx
"use client";
import { SessionProvider } from "next-auth/react";
import { Toaster } from "react-hot-toast";
const Providers = ({ children }: { children: React.ReactNode }) => {
  return (
    <>
      <div className="z-[99999]">
        <Toaster position="top-center" reverseOrder={false} />
      </div>
      <SessionProvider>{children}</SessionProvider>
    </>
  );
};

export default Providers;
```
In this providers file, you can add any providers you need to use; it will help you keep the `layout.tsx` file clean.
At the top of the session provider, I added Toaster for the toast notification.
Don’t forget to install `react-hot-toast` to show the toast notification.
Now, import `Providers` inside the `layout.tsx` and wrap `children` inside it.
```jsx
import type { Metadata } from "next";
import { Inter } from "next/font/google";
import "./globals.css";
import Providers from "./providers";
const inter = Inter({ subsets: ["latin"] });
export const metadata: Metadata = {
title: "Create Next App",
description: "Generated by create next app",
};
export default function RootLayout({
children,
}: Readonly<{
children: React.ReactNode;
}>) {
return (
<html lang="en">
<body className={inter.className}>
<Providers>{children}</Providers>
</body>
</html>
);
}
```
### Getting the User Session in a Client Component
Now you can see the logged-in user’s data from the Session.
Here’s an example of how to get the session with `useSession` hook:
```jsx
"use client";
import Link from "next/link";
import { useSession } from "next-auth/react";
import { signOut } from "next-auth/react";
export default function Home() {
const { data: session } = useSession();
return (
<main className="flex flex-col items-center justify-between p-24">
{session?.user && (
<div>
<p className="text-3xl">Welcome, {session.user.name}</p>
<div className="mb-5">{JSON.stringify(session?.user, null, 2)}</div>
<button
onClick={() => signOut()}
className="bg-black hover:bg-gray-700 text-white font-bold py-2 px-4 rounded"
>
Logout
</button>
</div>
)}
{!session?.user && (
<div className="mt-5 space-x-3">
<Link
className="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded"
href="/auth/signin"
>
Signin
</Link>
<Link
className="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded"
href="/auth/signup"
>
Signup
</Link>
</div>
)}
</main>
);
}
```
Retrieve the user session using the `useSession` hook and render the user data. If the session is empty, the user is not logged in. Ask the user to sign in or sign up to view the session.
### Getting the User Session in a Server Component
Getting a user session in a Server Component is a bit different from a Client Component because you can't use the `useSession` hook in a Server Component.
Here’s how you can get the logged-in user session on the Server component.
```jsx
import React from "react";
import { authOptions } from "../libs/auth";
import { getServerSession } from "next-auth";
import Link from "next/link";
const UserPage = async () => {
const session = await getServerSession(authOptions);
return (
<main className="flex flex-col items-center justify-between p-24">
{session?.user && (
<div>
<p className="text-3xl">Welcome, {session.user.name}</p>
<div className="mb-5">{JSON.stringify(session?.user, null, 2)}</div>
<Link href="/">Go back to home</Link>
</div>
)}
{!session?.user && (
<div className="mt-5 space-x-3">
<Link
className="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded"
href="/auth/signin"
>
Signin
</Link>
<Link
className="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded"
href="/auth/signup"
>
Signup
</Link>
</div>
)}
</main>
);
};
export default UserPage;
```
First, import `authOptions` from the `auth.ts` file and `getServerSession` from `next-auth`.
Inside the component, call the `getServerSession` function with `authOptions`.
It will return the current user session.
Now you have the user session in the Server Component.
Congrats!
You successfully integrated Authentication on your Next.js App.
### Resources
**Source code:** https://github.com/Coderamrin/nextauth-integration-v4/tree/credentials-signin
**Documentation:** https://authjs.dev/getting-started/authentication/credentials
### Conclusion
In this article, you learned how to integrate NextAuth into a Next.js 14 app. We covered the Credentials provider only; in the next part, we will cover social login and magic links.
Follow me to get notified when the next part is published.
**Connect With Me**
[Twitter/x](https://x.com/CoderAmrin)
[Github](https://github.com/coderamrin/)
[LinkedIn](https://www.linkedin.com/in/coderamrin/)
Happy Coding.
| coderamrin |
1,906,758 | The most recent backend problem I've had to solve. | The most recent backend problem I have solved, is non existent. I am a beginner in the field of... | 0 | 2024-06-30T16:13:46 | https://dev.to/halimat_yakubu_f257fd0215/the-most-recent-backend-problem-ive-had-to-solve-2g0d | The most recent backend problem I have solved is nonexistent. I am a beginner in the field of backend development and therefore have no projects to show. However, I joined HNG with the aim of leaving the internship with dozens of projects to show for it.
**A little about myself**
I am a fast learner, a critical thinker, and very good at collaborating with people. I have not worked on any tough projects that can actually show my grit and determination, so the only way I can display these qualities is by being a part of this internship and sticking with it until the end. Rather than being an article to show my programming skills, this is an article to show my anticipation and determination to learn and improve my skills.
**Why I want to be part of HNG Internship**
As stated above, I want learning and improving my skills to be my top priority as a member of this internship. Additionally, I would love to collaborate with like-minded individuals, mentors, and colleagues in order to get a better grasp of the tech workspace.
If you have gotten this far and are interested in being a part of something amazing, then to learn more about the HNG Internship,
Visit: https://hng.tech/internship
If you are looking to hire talented developers,
Check out HNG Hire https://hng.tech/hire | halimat_yakubu_f257fd0215 | |
1,906,756 | Strong Performance with EC2, Lambda, and the Momento SDK for Rust | I wrote recently about Mind Boggling Speed with Caching with Momento and Rust and wanted to continue... | 0 | 2024-06-30T16:11:20 | https://www.binaryheap.com/momento-sdk-rust-webhook-ec2/ | rust, serverless, aws, architecture | I wrote recently about [Mind Boggling Speed with Caching with Momento and Rust](https://www.binaryheap.com/caching-with-momento-and-rust/) and wanted to continue in that theme as I explore the Momento SDK for Rust. Caching is a technique that builders reach for when looking to accomplish either improved performance or reduce the burden on resource-dependent parts of an application. It might also be a choice when looking to save costs if an operation is charged per read such as with DynamoDB. In any of those scenarios, caching must be fast. But caching must not also introduce a high amount of complexity. This is where I love [Momento](https://www.gomomento.com/) because I truly do get the best of both worlds. High-performance caching with the simplicity of serverless.
No caching solution would be complete however with the ability to subscribe to cache key changes. Topics aren't new in the engineering world, but I wanted to sit down and write some code against the Momento SDK for Rust and see how the ergonomics felt in addition to how well it performed. But comparing it in a vacuum against itself didn't seem like a lot of fun, so I am going to pair it against their companion product in Webhooks.
If you aren't familiar with what a webhook is, think of it like this: there is a disconnected consumer that wants to perform some operation when a change happens on the topic. As a developer, I supply the endpoint, and Momento does the heavy lifting to make sure my endpoint receives a consistent payload containing the body of the message. The hard parts of retries and connections are handled by Momento, and all I need to do is handle the request.
Webhooks are a great fit in cases where I can't stay subscribed full time, such as with a Lambda Function. But does that come with a performance hit? If so, how much, and is it worth it? Let's explore.
## The Setup
We've got to have some code for this type of article, and instead of asking you to scroll to the bottom like usual, here's the [Repository](https://github.com/benbpyle/momento-rust-cache-off). Before you dig in and start exploring, let me share what's inside.
First, it's a lot of Rust. There are three binary projects inside that repository. Project one is the publisher code, which I'll walk you through below. Project two is a Lambda Function webhook handler. Project three is a console program that will subscribe to the Momento topic and process messages.
I'll be exploring how well a Lambda Function works as a webhook handler versus the connected console program running on EC2. Not to spoil the results, but we can all guess which one runs faster on average. If Lambda had won, I don't think I'd ever write another EC2-based service, but the results are super close. So close that unless you have consistent usage to justify the always-on cost, I don't think I'd consider anything other than the Lambda Function.
### Publishing
The Momento SDK for Rust allows clients to take advantage of working with the control plane and data plane pieces of their API. In this scenario, I'll be working with the Topics data plane.
#### Fetch Secrets
I'm not going to dig through setting up the cache, keys, and topics in the Momento console, but I do need a way to fetch my long-lived access key to make requests. For that, I'm using AWS Systems Manager Parameter Store.
```rust
let region_provider = RegionProviderChain::default_provider();
let config = from_env().region(region_provider).load().await;
// create the ssm and ddb client
// ssm is used to fetch Momento's Key
let client = aws_sdk_ssm::Client::new(&config);
let parameter = match client
.get_parameter()
.name("/keys/momento-pct-key")
.send()
.await
{
Ok(p) => p,
Err(_) => panic!("error with aws sdk client"),
};
// if no key, panic and don't start
let api_key = match parameter.parameter {
Some(p) => p.value.unwrap(),
None => panic!("Error with parameter"),
};
```
#### Building the Client
With an API Key, I can now build the Topics Client.
```rust
let topic_client = match TopicClient::builder()
.configuration(momento::topics::configurations::Laptop::latest())
.credential_provider(CredentialProvider::from_string(api_key).unwrap())
.build()
{
Ok(c) => c,
Err(_) => panic!("error with momento client"),
};
```
What I enjoy about the Momento SDK for Rust is that it feels a lot like the AWS SDK for Rust. Which I am a fan of. I build a client and can then call operations on it.
#### Publishing Some Messages
Now that I have a client established, let's get to publishing messages. No surprise, it isn't hard. For my example, I'm going to run 100 sequential publishes via this `while` loop.
My message body will be a `MomentoModel` which I'll use in my webhook handler and topic subscriber code.
```rust
let mut i = 0;
while i < 100 {
let m = MomentoModel::new(String::from("KeyOne"), String::from("KeyTwo"), i);
let t = serde_json::to_string(&m).unwrap();
match topic_client.publish("cache-off", "cache-off", t).await {
Ok(_) => {
println!("Published message");
}
Err(e) => {
println!("(Error)={e:?}");
}
}
i += 1;
}
```
As my kids would say, "Easy peasy lemon squeezy".
### The Webhook
Now, it wouldn't be fair if my webhook handler weren't also coded in Rust, would it? My Lambda Function does have a few more operations inside of it than its counterpart, which I'll explore in a few paragraphs. But let's dig into that handler.
#### Main Function
`main` in a Rust Lambda Function is where I set up my reusable objects that will be long-lived for the life of the function. The function below takes care of building my logging subscriber as well as grabbing a secret key that I'll use to verify the signature that comes with the webhook body.
```rust
#[tokio::main]
async fn main() -> Result<(), Error> {
let filtered_layer = tracing_subscriber::fmt::layer()
.pretty()
.json()
.with_target(true)
.with_file(true);
tracing_subscriber::registry()
.with(filtered_layer)
.with(EnvFilter::from_default_env())
.init();
let config = aws_config::load_from_env().await;
let secrets_client = aws_sdk_secretsmanager::Client::new(&config);
let resp = secrets_client
.get_secret_value()
.secret_id("moment-webhook-token")
.send()
.await?;
let string_field = resp
.secret_string()
.expect("Secret string must have a value");
run(service_fn(move |payload: Request| async move {
function_handler(string_field, payload).await
}))
.await
}
```
#### Function Handler
The webhook endpoint will receive a few pieces of information that I want to parse through to verify the contents of the payload. To do that, I need to fetch those items out of the header and body.
```rust
let body = event.body();
let body_string = std::str::from_utf8(body).expect("Body wasn't supplied");
let payload: Result<MomentoPayload, serde_json::Error> = serde_json::from_str(body_string);
let header_value = event.headers().get("momento-signature");
```
#### Protecting Against Replays
I added a touch of code to protect against replay attacks. The Momento request payload includes a timestamp that marks when the message was published. That published field is then checked to make sure the message was sent within the last 60 seconds. That function looks like the below:
```rust
fn is_request_new_enough(published: i64) -> bool {
let start = SystemTime::now();
let since_the_epoch = start
.duration_since(UNIX_EPOCH)
.expect("Time went backwards");
let new_duration = Duration::from_millis(published as u64);
let calculated = since_the_epoch - new_duration;
debug!(
since_the_epoch = since_the_epoch.as_millis(),
published = published,
time_since_published = calculated.as_secs(),
"Time since published"
);
calculated.as_secs() < 60
}
```
#### Verifying the Signature
On top of protecting against replay attacks, I want to make sure that the signature of the message matches what I expect. When you think about it, these two additional operations won't exist in the topic-subscribed version built with the Momento SDK for Rust. It almost makes it even more impressive that the timings are so close.
```rust
fn verify_signature(payload: &MomentoPayload, secret_string: &str, signature: &str) -> bool {
let s = serde_json::to_string(&payload).expect("Error serde");
let mac3 = HmacSha3_256::new_from_slice(secret_string.as_bytes());
match mac3 {
Ok(mut m) => {
m.update(s.as_ref());
let result3 = m.finalize();
let code_bytes_3 = result3.into_bytes();
hex::encode(code_bytes_3) == signature
}
Err(_) => false,
}
}
```
### Subscribed Client
Finally, on to the last application in the repository. This app can be compiled and run on any hardware of your choice. For my comparisons, I ran it on an Ubuntu instance running in EC2 built for Graviton. I do all of my Rust work against ARM64.
I'm going to skip the setup of the API Key but it looks like the code above in the Topic Publisher. However, below is what establishes a Topic Client and then a subscription.
```rust
let topic_client = match TopicClient::builder()
.configuration(momento::topics::configurations::Laptop::latest())
.credential_provider(CredentialProvider::from_string(api_key).unwrap())
.build()
{
Ok(c) => c,
Err(_) => panic!("error with momento client"),
};
let mut subscription: Subscription = topic_client
.subscribe("cache-off", "cache-off")
.await
.expect("subscribe rpc failed");
```
### A Stream
Remember not to cross the streams, right?

Well, not that kind of stream. A stream in Rust is essentially an asynchronous iterator over values that will arrive in the future. The reason this code is so fast is that it gives me a persistent TCP-based connection to the Momento infrastructure. It is about as fast as I can get, and considering that my code is running in the same region as my Momento cache, it is the fastest way to work with Momento.
All of that power though comes with just a fraction of the code. Again, the Momento SDK for Rust is fantastic.
```rust
while let Some(item) = subscription.next().await {
info!("Received subscription item: {item:?}");
let value: Result<String, MomentoError> = item.try_into();
match value {
Ok(v) => {
let o: MomentoModel = serde_json::from_str(v.as_str()).unwrap();
info!(
"(Value)={}|(MoModel)={o:?}|(TimeBetween)={}",
v,
o.time_between_publish_and_received()
);
}
Err(e) => {
error!("(Error Momento)={}", e);
}
}
}
```
## The Comparison
With the two solutions in place, it's time to compare the performance. For clarification, these are small sample sizes, but I ran through several batches and found very similar results, which makes me feel they would hold at a larger scale.
### Lambda Function Performance
Here are the values for a few runs of the Lambda Function webhook handler. I've included both cold and warm starts, as well as the actual Lambda Function and billed durations. My worst performer sat at 38ms of billed duration and 244ms of response from publish time to receive time.

After the initial starts, you can see a significant improvement in both the time from publishing and the billed duration. This Lambda Function was set at 256MB of memory, which is plenty of power for what I'm doing here.
I didn't include it in the graph, but I did capture the publish time and the webhook publish time as well. What's crazy is that the latency does NOT come from Momento. Momento was consistently publishing the webhook within 1 to 2ms of receipt. That's seriously impressive.
However, without diving further into it, I'm going to assume that the driving factors are that I'm traversing the public internet, dealing with TLS, and the additional security checks that I'm making in the handler.
### EC2 Subscriber
Remember, if you are shocked that this is faster, go back to the fact that it's an EC2 instance with a program that has established a TCP connection to the Momento publisher via the Momento SDK for Rust. However, I will say that the performance is rather astonishing. I have no idea how many servers are powering this or quite frankly what the infrastructure is. It's pure serverless at ITS BEST!

Holy cow! 2ms average with a consistent 1ms of latency. I'm rounding up too just so you are aware. The fact is, this is some serious performance with the Momento SDK for Rust.
But one thing to note: the very first result was 8ms. I want to point out that all things experience "cold starts". Anything from engines to TCP connections has a period of "warming up". It's just a matter of how much and how frequently it happens when determining whether it matters to you.
## Making Sense of It
Here's my take at this point. If you require blazing speed, the Momento SDK for Rust handles topic subscriptions like a champion. It's easy to code with. Easy to set up. And I get amazing performance. In cases where I need to update a leaderboard, perhaps deal with real-time chats, or work with financial data that needs to be updated as it happens, this would 100% be the way I'd go. There is no substitute for speed. And if I'm going that far, I'd probably compile for my chipset and run it as a [Systemd](https://systemd.io) service. I wouldn't fool with Docker. Again, if performance is what I'm banking on.
Momento is so fast and so scalable that it makes perfect sense for these types of scenarios.
However, if I'm connecting systems in a web environment where I want to respond to change in a disconnected yet still VERY timely manner, I'm going to fire this up as a Lambda Function Webhook Handler. The code is easy to work with, more than fast, and Momento takes great care in making sure you can trust the payloads but also verify them for security. There is also a [well-written](https://docs.momentohq.com/topics/integrations/lambda-handler) example of how to extend their webhooks with AWS EventBridge.
They are two different scenarios that both offer amazing performance, great developer experiences, and the infinitely scalable and collapsable guarantees that come with Serverless.
## Wrapping Up
This has been a fun piece to write. It gave me a chance to write more Rust, work with Momento, and do some benchmarking. It also just underscores the current version of the serverless ecosystem and how builders have great choices when designing for their customers.
If you are looking for a caching solution or perhaps looking to get rid of a reliance on VPCs and ElastiCache, give Momento a shot. I've personally deployed them in production with their Golang SDK and my customers are very happy with the results. So get out there and write some code!
Thanks for reading and happy building! | benbpyle |
1,906,754 | A Comparative Analysis of Svelte and Vue.js: Exploring Niche Frontend Technologies | A Comparative Analysis of Svelte and Vue.js: Exploring Niche Frontend Technologies In the rapidly... | 0 | 2024-06-30T16:09:41 | https://dev.to/abdulkola/a-comparative-analysis-of-svelte-and-vuejs-exploring-niche-frontend-technologies-51ea | A Comparative Analysis of Svelte and Vue.js: Exploring Niche Frontend Technologies
In the rapidly evolving landscape of frontend development, numerous frameworks and libraries emerge, each offering unique features and benefits. While mainstream technologies like ReactJS dominate the industry, niche frontend technologies like Svelte and Vue.js are gaining traction among developers for their distinctive approaches and advantages. This article delves into a comparative analysis of Svelte and Vue.js, highlighting their differences, strengths, and potential applications.
Introduction to Svelte and Vue.js
Svelte is a relatively new frontend framework that shifts much of the work traditionally done in the browser to the build step. Unlike other frameworks that operate in the browser, Svelte compiles components to highly efficient imperative code that directly manipulates the DOM. This results in faster load times and a more responsive user experience.
Vue.js, on the other hand, is a progressive JavaScript framework that is incrementally adoptable. It is designed to be versatile, providing features to create both simple single-page applications (SPAs) and complex enterprise-level projects. Vue.js offers a familiar templating syntax and a reactive data binding system, making it easy to integrate and use.
Key Differences Between Svelte and Vue.js
1. Architecture and Design Philosophy
Svelte: Svelte's compiler-based architecture means that it converts your components into highly optimized JavaScript code at build time. This eliminates the need for a virtual DOM, leading to more efficient updates and smaller bundle sizes. Svelte components are self-contained, with HTML, CSS, and JavaScript all in a single file, promoting simplicity and ease of maintenance.
Vue.js: Vue.js relies on a virtual DOM to manage reactivity and updates. Its component-based architecture encourages the separation of concerns by splitting templates, styles, and scripts into different sections. Vue.js provides an intuitive API and extensive documentation, making it accessible for developers of all skill levels.
2. Learning Curve and Developer Experience
Svelte: Svelte's syntax is straightforward and easy to grasp, especially for developers with a background in HTML, CSS, and JavaScript. The absence of a virtual DOM simplifies the mental model, making state management and reactivity more intuitive. Svelte's single-file components streamline development by reducing the need for context switching between files.
Vue.js: Vue.js boasts a gentle learning curve, thanks to its clear and concise documentation. Its familiar templating syntax and comprehensive ecosystem make it a popular choice for beginners and experienced developers alike. Vue's CLI provides a robust development environment with built-in tools for testing, linting, and building projects.
3. Performance and Efficiency
Svelte: Svelte's compile-time optimizations result in smaller bundle sizes and faster runtime performance. By eliminating the virtual DOM, Svelte reduces the overhead associated with DOM diffing and patching, leading to more efficient updates. This makes Svelte particularly suitable for performance-critical applications.
Vue.js: While Vue.js relies on the virtual DOM, its performance is still impressive, thanks to efficient diffing algorithms and optimizations. Vue's reactivity system ensures that updates are handled efficiently, even in complex applications. Vue's performance can be further enhanced through techniques like lazy loading and code splitting.
4. Community and Ecosystem
Svelte: As a newer framework, Svelte's community is smaller compared to Vue.js. However, it is rapidly growing, and there are an increasing number of resources, tutorials, and third-party libraries available. SvelteKit, a full-fledged framework for building Svelte applications, offers server-side rendering, static site generation, and other advanced features.
Vue.js: Vue.js boasts a large and active community, with extensive resources, plugins, and libraries. The Vue ecosystem includes powerful tools like Vuex for state management, Vue Router for routing, and Nuxt.js for server-side rendering and static site generation. The vibrant community and ecosystem make it easier to find support and solutions for common challenges.
Expected Role at HNG and Thoughts on ReactJS
At HNG, I anticipate working extensively with ReactJS, a versatile and widely adopted frontend library. React's component-based architecture, virtual DOM, and declarative programming style make it a powerful tool for building dynamic and interactive user interfaces. The use of JSX, a syntax extension that allows mixing HTML with JavaScript, enhances the development experience by enabling the creation of reusable UI components.
React's robust ecosystem, including libraries like Redux for state management and React Router for navigation, provides a comprehensive toolkit for building complex applications. Additionally, the active React community ensures continuous improvements, extensive documentation, and a wealth of learning resources.
Conclusion
In conclusion, both Svelte and Vue.js offer unique advantages that cater to different development needs. Svelte's compiler-based approach and emphasis on performance make it an excellent choice for lightweight and highly efficient applications. Vue.js, with its versatile and developer-friendly architecture, is ideal for projects of all sizes, from simple SPAs to large-scale enterprise applications.
While ReactJS remains the go-to choice at HNG, exploring niche technologies like Svelte and Vue.js broadens our understanding of the frontend landscape and equips us with diverse tools to tackle various development challenges. Whether you choose Svelte for its performance optimizations or Vue.js for its flexibility, both frameworks provide valuable alternatives to consider for your next project.
https://hng.tech/internship, https://hng.tech/hire | abdulkola | |
1,906,677 | Analyzing Likes Using Instagram API with python - part 3 | Learn how to analyze Instagram likes using Python. This comprehensive guide covers creating an Instagram API client, caching requests, data analysis, and visualizing results with bar charts. | 0 | 2024-06-30T16:08:07 | https://usemyapi.com/articles/instagram-data-analysis-with-python-analyzing-likes-using-instagram-api-part-3/ | python, tutorial, programming | ---
title: "Analyzing Likes Using Instagram API with python - part 3"
published: true
description: "Learn how to analyze Instagram likes using Python. This comprehensive guide covers creating an Instagram API client, caching requests, data analysis, and visualizing results with bar charts."
canonical_url: "https://usemyapi.com/articles/instagram-data-analysis-with-python-analyzing-likes-using-instagram-api-part-3/"
---
# Analyzing likes using Instagram API with python - part 3
Some time ago, we started working on an app to [analyze likes on Instagram posts](https://usemyapi.com/articles/how-to-analyze-instagram-likes-a-simple-python-app/). We want to answer the question: **What percentage of followers like Instagram posts?** This will tell us whether the account should focus on gaining new followers or improving content since its current followers might not be interested. The result will be a bar chart for each post showing the ratio of all likes to likes from followers.
In our previous post [How to analyze Instagram likes – part 2](https://dev.to/apiharbor/how-to-create-an-instagram-api-client-in-python-a-step-by-step-tutorial-2l1h), we created an API client in Python. We decided to use the [Instagram Scraper 2023 API](https://usemyapi.com/articles/instagram-scraper-api-comprehensive-guide-and-best-practices/). We wrote a client to fetch the data needed for our task. Today, it’s time to finish our app.
## Save requests to Instagram API by adding file caching
Let’s revisit our `rapidapi_client`. We want to cache every call to the endpoint. We don’t want to waste requests while working, so it’s worth returning previously fetched data.
Currently, we have three functions that we will add caching to. Let’s write the caching method first:
```python
# Make sure these imports are present at the top of your module:
import os
import pickle
import requests
from typing import Any, Optional

def __get_cache_file_path(self, endpoint: str, identifier: str, count: int, end_cursor: Optional[str]) -> str:
"""
Generate a file path for caching based on the endpoint, identifier, count, and end_cursor.
"""
filename = f"{endpoint}_{identifier}_{count}_{self.__get_end_cursor(end_cursor)}.pkl"
return os.path.join(self.cache_dir, filename)
def __get_or_set_cache(self, endpoint: str, identifier: str, count: int, end_cursor: Optional[str], url: str) -> Any:
"""
Retrieve data from the cache if it exists; otherwise, fetch data from the URL, cache it, and return the data.
:param endpoint: The API endpoint being queried.
:param identifier: The identifier for the request (e.g., user ID or post shortcode).
:param count: The number of items requested.
:param end_cursor: The pagination cursor for the request.
:param url: The URL to fetch data from if not cached.
:return: The data retrieved from the cache or fetched from the URL.
"""
cache_file_path = self.__get_cache_file_path(endpoint, identifier, count, end_cursor)
# Check if the cache file exists
if os.path.exists(cache_file_path):
# Load and return data from cache
with open(cache_file_path, 'rb') as cache_file:
return pickle.load(cache_file)
# Make the API request if cache does not exist
response = requests.get(url, headers=self.headers)
data = response.json()
# Save the fetched data to the cache
with open(cache_file_path, 'wb') as cache_file:
pickle.dump(data, cache_file)
return data
```
We also need to change the constructor to create a folder for caching files:
```python
def __init__(self, cache_dir: str = 'cache'):
self.headers = {
'x-rapidapi-key': RAPIDAPI_KEY,
'x-rapidapi-host': RAPIDAPI_HOST
}
self.cache_dir = cache_dir
if not os.path.exists(self.cache_dir):
os.makedirs(self.cache_dir)
```
Everything looks fantastic! Now, we need to change the methods that communicate with the API:
```python
def get_user_posts(self, userid, count, end_cursor=None):
url = self.__get_api_url(f"/userposts/{userid}/{count}/{self.__get_end_cursor(end_cursor)}")
return self.__get_or_set_cache("userposts", userid, count, end_cursor, url)
def get_post_likes(self, shortcode, count, end_cursor=None):
url = self.__get_api_url(f"/postlikes/{shortcode}/{count}/{self.__get_end_cursor(end_cursor)}")
return self.__get_or_set_cache("postlikes", shortcode, count, end_cursor, url)
def get_user_followers(self, userid, count, end_cursor=None):
url = self.__get_api_url(f"/userfollowers/{userid}/{count}/{self.__get_end_cursor(end_cursor)}")
return self.__get_or_set_cache("userfollowers", userid, count, end_cursor, url)
```
Nothing extraordinary here: we just use `__get_or_set_cache`, which returns the cached content if it's found instead of querying the Instagram API.
We are now protected against wasting requests. Let's move on!
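As a quick sanity check, the same get-or-set pattern can be exercised in isolation, without spending a single API request. The snippet below is a standalone sketch and not part of the article's client; `slow_fetch` and the cache key are illustrative stand-ins for a real API call:

```python
import os
import pickle
import tempfile

def get_or_set_cache(cache_dir, key, fetch):
    """Return cached data for `key` if present; otherwise call `fetch`, cache the result, and return it."""
    path = os.path.join(cache_dir, f"{key}.pkl")
    if os.path.exists(path):
        with open(path, "rb") as cache_file:
            return pickle.load(cache_file)
    data = fetch()
    with open(path, "wb") as cache_file:
        pickle.dump(data, cache_file)
    return data

calls = {"count": 0}

def slow_fetch():
    # Stand-in for a real Instagram API request
    calls["count"] += 1
    return {"likes": 42}

cache_dir = tempfile.mkdtemp()  # fresh, empty cache directory for the demo
first = get_or_set_cache(cache_dir, "postlikes_demo", slow_fetch)
second = get_or_set_cache(cache_dir, "postlikes_demo", slow_fetch)
print(first, second, calls["count"])  # -> {'likes': 42} {'likes': 42} 1
```

The second call never touches `slow_fetch`, which is exactly the behavior we want from `__get_or_set_cache`.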
## Creating plain old python objects (POPO)
POPO are simple Python objects without additional methods or attributes other than those explicitly defined. We need them to pass data between classes.
Let’s think about the POPO we need. We must gather data from the API and merge it into one object to pass it further. So, we need a simple `PostData` class:
```python
class PostData:
    def __init__(self, post_id, post_likers: List[str], followers: List[str]):
        self.post_id = post_id
        self.post_likers = post_likers
        self.followers = followers
```
Another POPO will be the analysis result, let’s name this class `PostResult`:
```python
class PostResult:
    def __init__(self, post_id, all_likes_count, likers_likes_count):
        self.all_likes_count = all_likes_count
        self.likers_likes_count = likers_likes_count
        self.post_id = post_id
```
Let’s create a list of `PostData` built from Instagram API data.
```python
# Create an instance of the RapidApiClient
api_client = RapidApiClient()
end_cursor = None

# Replace with your actual Instagram account ID
ACCOUNT_ID = '__PASTE_ACCOUNT_ID__'

# Retrieve followers for the account (maximum 50 followers for testing purposes)
followers = api_client.get_user_followers(ACCOUNT_ID, 50)

# Extract usernames of the followers
followers_usernames = [user['username'] for user in followers["data"]["user"]]

# Initialize an empty list to store post data
posts_data: List[PostData] = []

PAGE_LIMIT = 1  # Limit the number of pages to retrieve for testing purposes

for i in range(PAGE_LIMIT):
    # Retrieve user posts (maximum 5 posts per page for testing purposes)
    posts = api_client.get_user_posts(ACCOUNT_ID, 5, end_cursor)
    data = posts["data"]
    end_cursor = data["end_cursor"]  # Update the end_cursor for the next page
    edges = data["edges"]  # Extract the posts data
    for edge in edges:
        node = edge["node"]
        post_id = node["id"]  # Extract the post ID
        # Retrieve likes for the post (maximum 50 likes for testing purposes)
        post_likes = api_client.get_post_likes(node["shortcode"], 50)
        # Extract usernames of the users who liked the post
        post_likers = [like['username'] for like in post_likes["data"]["likes"]]
        # Create a PostData object and add it to the posts_data list
        posts_data.append(PostData(post_id, post_likers, followers_usernames))
    # Stop paginating only after processing the current page, otherwise the last page is lost
    if not data["next_page"]:
        break
```
At this point, `posts_data` contains a list of `PostData` objects composed of post IDs, lists of likers, and lists of followers. Great! We have the data ready; it’s time to feed it to the analyzer.
## Analyzing data from the Instagram API
Let’s think about what such an analyzer should do. Essentially, it should count, just count 😉 But what exactly?
- The total number of likes on a given post
- The number of likes from followers
This is achieved by the following code:
```python
from typing import List


class LikesAnalyzer:
    def __init__(self, posts_data: List[PostData]):
        """
        Initialize the LikesAnalyzer with a list of PostData objects.
        :param posts_data: List of PostData objects containing data for each post.
        """
        self.posts_data = posts_data

    def get_analysis(self) -> List[PostResult]:
        """
        Analyze the likes data to determine the total likes and the likes from followers for each post.
        :return: A list of PostResult objects containing the analysis results for each post.
        """
        results: List[PostResult] = []  # Initialize an empty list to store the analysis results
        # Iterate through each PostData object in the posts_data list
        for p in self.posts_data:
            all_likes_count = len(p.post_likers)  # Count the total number of likes for the post
            likers_likes_count = 0  # Initialize the count for likes from followers
            # Iterate through each liker of the post
            for liker in p.post_likers:
                # Check if the liker is also a follower
                if liker in p.followers:
                    likers_likes_count += 1  # Increment the count if the liker is a follower
            # Create a PostResult object with the post ID, total likes, and likes from followers
            results.append(PostResult(p.post_id, all_likes_count, likers_likes_count))
        return results  # Return the list of PostResult objects
```
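One practical note on the membership test `liker in p.followers` above: it scans the whole followers list for every liker, which is O(likes × followers). For larger accounts, converting to sets first keeps the analysis fast. `count_follower_likes` below is a hypothetical helper, not part of the article's code; since usernames are unique per account, the set conversion does not change the counts:

```python
def count_follower_likes(post_likers, followers):
    """Count likers who are also followers via set intersection,
    avoiding the nested-loop scan over the followers list."""
    return len(set(post_likers) & set(followers))
```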
The `get_analysis` function returns a list of `PostResult`. Yes, these are the results of our analysis. Such dry results don’t tell us much. Let’s make a chart out of them!
## Displaying the chart based on data from the Instagram API
I propose the class name `PostLikesPlotter` to keep it simple. The best representation for the results will be a bar chart. It will immediately show on one bar how many followers liked a given post.
```python
import matplotlib.pyplot as plt
from typing import List

# Constants for the bar chart
BAR_WIDTH = 0.5  # Width of the bars in the chart
ALL_LIKES_COLOR = 'blue'  # Color of the bars representing all likes
LIKERS_LIKES_COLOR = 'green'  # Color of the bars representing likes from followers


class PostLikesPlotter:
    def plot_analysis(self, results: List[PostResult]) -> None:
        """
        Plot the analysis of post likes, showing total likes and likes from followers for each post.
        :param results: A list of PostResult objects containing analysis results for each post.
        """
        # Extract post IDs, total likes counts, and likers likes counts from the results
        post_ids = [result.post_id for result in results]
        all_likes_counts = [result.all_likes_count for result in results]
        likers_likes_counts = [result.likers_likes_count for result in results]

        # Create a figure and axis for the bar chart
        fig, ax = plt.subplots()

        # Plot the bars for total likes
        ax.bar(post_ids, all_likes_counts, BAR_WIDTH, color=ALL_LIKES_COLOR, label='All Likes')

        # Overlay the bars for likes from followers at the same x positions,
        # so each green bar sits inside its blue "all likes" bar
        ax.bar(post_ids, likers_likes_counts, BAR_WIDTH, color=LIKERS_LIKES_COLOR, label='Likers Likes')

        # Set the labels and title of the chart
        ax.set_xlabel('Post ID')
        ax.set_ylabel('Likes Count')
        ax.set_title('Likes Analysis per Post')

        # Move the legend outside the bar chart area
        ax.legend(loc='upper left', bbox_to_anchor=(1, 1))

        # Adjust the appearance of the x-axis labels
        ax.set_xticks(range(len(post_ids)))
        ax.set_xticklabels(post_ids, fontsize=8, rotation=45, ha='right')

        # Display the bar chart
        plt.show()
```

## Summary
We have a working application written in Python for Instagram data analysis! For those who want to download the source code, visit our website [UseMyApi.com](https://usemyapi.com/articles/instagram-data-analysis-with-python-analyzing-likes-using-instagram-api-part-3/) where you will find a link to GitHub with the code.
If you liked the post ❤️ leave a comment or any reaction. If you need any other application or changes to the current code, you can "hire" me for programming work. Feel free to [contact me](https://usemyapi.com/services/)!
All the best for you all!

*Author: apiharbor*
---

# Customizing Your Lazyvim Setup for Personal Preferences

*Author: insideee_dev · Published: 2024-06-30 · Source: https://dev.to/insideee_dev/customizing-your-lazyvim-setup-for-personal-preferences-57 · Tags: vim, beginners, tutorial, coding*

## Table Of Contents
1. [Folder Structure Tree of LazyVim](#folder-structure-tree)
2. [Overview of the bullet points](#overview-of-the-bullet-points)
* [Solarized Osaka theme](#solarized-osaka)
* [Key Mappings](#key-mappings)
* [Auto Commands](#auto-commands)
* [Options Configuration Vim](#options-configuration)
Welcome back, my friends!
I know, I know, it's been a while since we last delved into the wonderful world of **LazyVim** customization.
In [Part 1](https://dev.to/insideee_dev/level-up-your-dev-workflow-conquer-web-development-with-a-blazing-fast-neovim-setup-part-1-28b2), we laid the groundwork with **LazyVim**, establishing a solid foundation for our _Neovim_ journey. Now, it's time to take things to the next level – personalization!
In the next section, I want to express my gratitude to ***[Takuya Matsuyama](https://www.craftz.dog/)***, who is a true inspiration to me. I have learned a great deal from his work, and I am deeply appreciative of his contributions to the field.
And now, let's start cowboy 🤠🤠🤠.
### Folder Structure Tree of LazyVim <a name="folder-structure-tree"></a>
Before starting to customize anything, we need to quickly look through the default _File Structure Diagram_ of **Lazy.vim** to get an overview of it.
For more information: [General Settings](https://www.lazyvim.org/configuration/general)
``` markdown
~/.config/nvim
├── lua
│ ├── config
│ │ ├── autocmds.lua
│ │ ├── keymaps.lua
│ │ ├── lazy.lua
│ │ └── options.lua
│ └── plugins
│ ├── spec1.lua
│ ├── **
│ └── spec2.lua
└── init.lua
```
- *autocmds.lua (auto commands)*: are used to automatically execute specific `commands` or `functions` in response to certain events in the editors.
> They greatly enhance your _Vim_ workflow; it's wonderful 😍.
- *keymaps (key mappings)*: are configurations that define custom keyboard shortcuts. These mappings allow users to bind specific keys or combinations of keys to _Vim_ `commands`, `functions`, or `scripts`, … thereby streamlining their workflow and making repetitive tasks quicker and easier.
> Suppose you don’t know about Key Mappings. In that case, I think this is a useful blog to get started with: [Basic Vim Mapping](https://dev.to/iggredible/basic-vim-mapping-5ahj).
Thanks for the great post! {% embed https://dev.to/iggredible %}
- *lazy.lua*: this is a place to set the default configuration of the *Lazy.vim*
- *options.lua (optional)*: you can set anything to custom your *Vim* if you want like: `autoindent`, `smartindent`, `hlsearch`, `showcmd`, `shiftwidth`, … I will show my config later 🫣
### Overview of the bullet points <a name="overview-of-the-bullet-points"></a>
Now, here is an overview of the bullet points I will configure in the next sections:
- Theme:
- _Solarized Osaka_ (I’m a big fan of **[Takuya Matsuyama](https://www.devas.life/author/takuya/)**)
- Keymaps:
- *keymaps*
- _autocmds_
- _options_
- Plugins:
- _ColorScheme_ (at above):
- [craftzdog/solarized-osaka.nvim](https://github.com/craftzdog/solarized-osaka.nvim)
- _coding_:
- [smjonas/inc-rename.nvim](https://github.com/smjonas/inc-rename.nvim)
- [ThePrimeagen/refactoring.nvim](https://github.com/ThePrimeagen/refactoring.nvim) (The Refactoring library based off the Refactoring book by Martin Fowler)
- [echasnovski/mini.bracketed](https://github.com/echasnovski/mini.nvim)
- [monaqa/dial.nvim](https://github.com/monaqa/dial.nvim)
- [danymat/neogen](https://github.com/danymat/neogen)
- [simrat39/symbols-outline.nvim](https://github.com/simrat39/symbols-outline.nvim) (A _tree like view_ for symbols in _Neovim_ using the Language Server Protocol. Supports all your favourite languages.)
- [nvim-cmp](https://github.com/hrsh7th/nvim-cmp) (A completion engine plugin for _neovim_ written in `Lua`. Completion sources are installed from external repositories and "sourced".)
- [kylechui/nvim-surround](https://github.com/kylechui/nvim-surround) (Surround selections, stylishly 😎)
- _lsp_:
- [neovim/nvim-lspconfig](https://github.com/neovim/nvim-lspconfig)
- [williamboman/mason.nvim](https://github.com/williamboman/mason.nvim) (Portable package manager for _Neovim_ that runs everywhere Neovim runs.
Easily install and manage `LSP servers`, `DAP servers`, `linters`, and `formatters`.)
- _treesitters_:
- [nvim-treesitter/nvim-treesitter](https://github.com/nvim-treesitter/nvim-treesitter) (to provide a simple and easy way to use the interface for `tree-sitter` in _Neovim_ and to provide some basic functionality such as highlighting based on it)
- [nvim-treesitter/playground](https://github.com/nvim-treesitter/playground) (View treesitter information directly in _Neovim_!)
- [nvim-treesitter/nvim-treesitter-context](https://github.com/nvim-treesitter/nvim-treesitter-context)
- _ui_:
- [lukas-reineke/indent-blankline.nvim](https://github.com/lukas-reineke/indent-blankline.nvim)
- [folke/noice.nvim](https://github.com/folke/noice.nvim) (Noice improves the UI for `messages`, `cmdline` and the `popupmenu`.)
- [rcarriga/nvim-notify](https://github.com/rcarriga/nvim-notify) (A fancy, configurable, notification manager for _NeoVim_)
- [echasnovski/mini.animate](https://github.com/echasnovski/mini.animate) (has an extra that enables animations)
- [b0o/incline.nvim](https://github.com/b0o/incline.nvim) (When editing many files in a single `tabpage`, you can't quickly tell which file is opened in each window. That's because lualine's `globalstatus` option is set to `true` in LazyVim.)
- [folke/zen-mode.nvim](https://github.com/folke/zen-mode.nvim)
- [nvimdev/dashboard-nvim](https://github.com/nvimdev/dashboard-nvim)
- _editors_:
- [folke/flash.nvim](https://github.com/folke/flash.nvim)
- [echasnovski/mini.hipatterns](https://github.com/echasnovski/mini.hipatterns)
- [dinhhuy258/git.nvim](https://github.com/dinhhuy258/git.nvim)
- [telescope.nvim](https://github.com/nvim-telescope/telescope.nvim) (It is an extendable _fuzzy finder_ over lists.)
> As you configure each plugin, it's a good idea to save your changes and restart nvim quickly. This helps you catch any errors right away and makes troubleshooting smoother down the line. Think of it like a mini test drive after each tweak! 🤠🤠🤠
### Solarized Osaka theme: <a name="solarized-osaka"></a>
To change your _theme_, create `lua/plugins/colorscheme.lua` with this config:
``` lua
return {
  {
    "craftzdog/solarized-osaka.nvim",
    branch = "osaka",
    lazy = true,
    priority = 1000,
    opts = function()
      return {
        transparent = true,
      }
    end,
  },
}
```
After config, you restart and see the theme:
Awesome 🥸🥸🥸!!!

### Key Mappings <a name="key-mappings"></a>
**LazyVim** already configures many keymaps by default, but I added some customizations of my own in `lua/config/keymaps.lua`:
``` lua
local keymap = vim.keymap
local opts = { noremap = true, silent = true }
keymap.set("n", "x", '"_x')
-- Increment/decrement
keymap.set("n", "+", "<C-a>")
keymap.set("n", "-", "<C-x>")
-- Delete a word backwards
keymap.set("n", "dw", 'vb"_d')
-- Select all
keymap.set("n", "<C-a>", "gg<S-v>G")
-- Save with root permission (not working for now)
--vim.api.nvim_create_user_command('W', 'w !sudo tee > /dev/null %', {})
-- Disable continuations
keymap.set("n", "<Leader>o", "o<Esc>^Da", opts)
keymap.set("n", "<Leader>O", "O<Esc>^Da", opts)
-- Jumplist
keymap.set("n", "<C-m>", "<C-i>", opts)
-- New tab
keymap.set("n", "te", ":tabedit")
keymap.set("n", "<tab>", ":tabnext<Return>", opts)
keymap.set("n", "<s-tab>", ":tabprev<Return>", opts)
-- Split window
keymap.set("n", "ss", ":split<Return>", opts)
keymap.set("n", "sv", ":vsplit<Return>", opts)
-- Move window
keymap.set("n", "sh", "<C-w>h")
keymap.set("n", "sk", "<C-w>k")
keymap.set("n", "sj", "<C-w>j")
keymap.set("n", "sl", "<C-w>l")
-- Resize window
keymap.set("n", "<C-w><left>", "<C-w><")
keymap.set("n", "<C-w><right>", "<C-w>>")
keymap.set("n", "<C-w><up>", "<C-w>+")
keymap.set("n", "<C-w><down>", "<C-w>-")
-- Pick a buffer
keymap.set("n", "<Leader>1", "<Cmd>BufferLineGoToBuffer 1<CR>", {})
keymap.set("n", "<Leader>2", "<Cmd>BufferLineGoToBuffer 2<CR>", {})
keymap.set("n", "<Leader>3", "<Cmd>BufferLineGoToBuffer 3<CR>", {})
keymap.set("n", "<Leader>4", "<Cmd>BufferLineGoToBuffer 4<CR>", {})
keymap.set("n", "<Leader>5", "<Cmd>BufferLineGoToBuffer 5<CR>", {})
keymap.set("n", "<Leader>6", "<Cmd>BufferLineGoToBuffer 6<CR>", {})
keymap.set("n", "<Leader>9", "<Cmd>BufferLineGoToBuffer -1<CR>", {})
-- Moving text
-- Move text up and down
keymap.set("n", "<C-Down>", "<Esc>:m .+1<CR>", opts)
keymap.set("n", "<C-Up>", "<Esc>:m .-2<CR>", opts)
keymap.set("v", "<C-Down>", ":m .+1<CR>", opts)
keymap.set("v", "<C-Up>", ":m .-2<CR>", opts)
keymap.set("x", "<C-Down>", ":move '>+1<CR>gv-gv", opts)
keymap.set("x", "<C-Up>", ":move '<-2<CR>gv-gv", opts)
-- Diagnostics
keymap.set("n", "<C-j>", function()
  vim.diagnostic.goto_next()
end, opts)
```
### Auto Commands <a name="auto-commands"></a>
`lua/config/autocmds.lua`
I don't want `concealing` applied in `json` and `markdown` files, and I want paste mode turned off when leaving insert mode. So, I set `conceallevel` to 0 for those file types and turn off paste mode as shown below.
If you're happy with the defaults, you can skip this config file 🙄
``` lua
-- Turn off paste mode when leaving insert
vim.api.nvim_create_autocmd("InsertLeave", {
  pattern = "*",
  command = "set nopaste",
})

-- Disable the concealing in some file formats
-- The default conceallevel is 3 in LazyVim
vim.api.nvim_create_autocmd("FileType", {
  pattern = { "json", "jsonc", "markdown" },
  callback = function()
    vim.opt.conceallevel = 0
  end,
})
```
Awesome 🥸🥸🥸!!!

### Options Configuration Vim <a name="options-configuration"></a>
I also have some options for base _Vim_ behavior, mostly carried over from my old plain _nvim_ config: `lua/config/options.lua`. I'll skip explaining this file line by line 🫣
``` lua
vim.g.mapleader = " "
vim.opt.encoding = "utf-8"
vim.opt.fileencoding = "utf-8"
vim.opt.spell = true
vim.opt.spelllang = { "en_us" }
vim.opt.number = true
vim.opt.title = true
vim.opt.autoindent = true
vim.opt.smartindent = true
vim.opt.hlsearch = true
vim.opt.backup = false
vim.opt.showcmd = true
vim.opt.cmdheight = 1
vim.opt.laststatus = 2
vim.opt.expandtab = true
vim.opt.scrolloff = 10
vim.opt.shell = "fish"
vim.opt.backupskip = { "/tmp/*", "/private/tmp/*" }
vim.opt.inccommand = "split"
vim.opt.ignorecase = true -- Case insensitive searching UNLESS /C or capital in search
vim.opt.smarttab = true
vim.opt.breakindent = true
vim.opt.shiftwidth = 2
vim.opt.tabstop = 2
vim.opt.wrap = false -- No Wrap lines
vim.opt.backspace = { "start", "eol", "indent" }
vim.opt.path:append({ "**" }) -- Finding files - Search down into subfolders
vim.opt.wildignore:append({ "*/node_modules/*" })
vim.opt.splitbelow = true -- Put new windows below current
vim.opt.splitright = true -- Put new windows right of current
vim.opt.splitkeep = "cursor"
vim.opt.mouse = ""
vim.g.deprecation_warnings = true
-- Undercurl
vim.cmd([[let &t_Cs = "\e[4:3m"]])
vim.cmd([[let &t_Ce = "\e[4:0m"]])
-- Add asterisks in block comments
vim.opt.formatoptions:append({ "r" })
vim.cmd([[au BufNewFile,BufRead *.astro setf astro]])
vim.cmd([[au BufNewFile,BufRead Podfile setf ruby]])
if vim.fn.has("nvim-0.8") == 1 then
  vim.opt.cmdheight = 0
end
```
Honestly, I don’t want this post to be too long, so I’ll cover the **_Plugin configuration_** section in the next article.
My dotfiles configurations:
{% embed https://github.com/CaoNgocCuong/dotfiles-nvim-lua %}
{% embed https://dev.to/insideee_dev %}
{% cta https://twitter.com/insideee_dev013 %}🚀 Follow me on X(Twitter){% endcta%}
{% cta https://www.linkedin.com/in/cuong-cao-ngoc-792992229/ %}🚀 Connect with me on Linkedin{% endcta %}
{% cta https://github.com/CaoNgocCuong?tab=repositories %}🚀 Checkout my GitHub{% endcta %}
See you soon, comrades, in the next part.
Thanks for reading!
*Author: insideee_dev*
---

# React JS vs Vue JS

*Author: tosin_adejuwon_08fdc8c8d5 · Published: 2024-06-30 · Source: https://dev.to/tosin_adejuwon_08fdc8c8d5/react-js-vs-vue-js-2gjf · Tags: javascript, react, vue*

Hey there, coding enthusiasts! As someone prepping for the HNG Internship (have you checked out their website? https://hng.tech/internship), I'm sure you're diving headfirst into the world of frontend development. With a gazillion frameworks out there, choosing the right one can feel like picking a superpower. Today, let's compare two titans: ReactJS and Vue.js.
React: The OG of Component-Based UIs
React, created by Facebook, is a battle-tested library known for its efficiency and scalability. It uses a component-based architecture, where you break down your interface into reusable building blocks. Think of it like Legos for building web UIs! This makes complex projects a breeze to manage and keeps your code clean. React also boasts a virtual DOM, a clever in-memory representation of the real DOM, that allows for super-fast updates without clunky re-renders.
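To make the virtual DOM idea concrete, here is a toy diff over plain dictionaries. This is purely an illustration (written in Python for readability), not React's actual reconciliation algorithm, which also handles keys, props, and component types:

```python
def diff(old, new, path="root"):
    """Toy virtual-DOM-style diff: compare two in-memory trees and emit
    only the patches needed to turn `old` into `new`."""
    if old.get("tag") != new.get("tag"):
        return [("replace", path, new)]  # different element type: swap the subtree
    patches = []
    if old.get("text") != new.get("text"):
        patches.append(("set_text", path, new.get("text")))
    old_kids = old.get("children", [])
    new_kids = new.get("children", [])
    # recurse into children that exist in both trees
    for i, (o, n) in enumerate(zip(old_kids, new_kids)):
        patches.extend(diff(o, n, f"{path}/{i}"))
    # children only in the new tree are inserted, extras in the old tree removed
    for i in range(len(old_kids), len(new_kids)):
        patches.append(("insert", f"{path}/{i}", new_kids[i]))
    for i in range(len(new_kids), len(old_kids)):
        patches.append(("remove", f"{path}/{i}", None))
    return patches
```

Applying only the emitted patches to the real DOM is what avoids the clunky full re-render.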
Vue.js: The Pragmatic Up-and-Comer
Vue.js, the brainchild of ex-Googler Evan You, is a progressive framework that's gaining serious traction. It offers a gentle learning curve with a focus on simplicity and ease of use. Vue.js combines the best of both worlds: it has a core library for small projects but can scale into a full-fledged framework for complex applications. Plus, Vue's intuitive template syntax makes it easy to jump right in, even for JavaScript newbies.
So, Which One Reigns Supreme?
There's no clear winner! Both React and Vue.js are fantastic choices, each with its own strengths. Here's a cheat sheet to help you decide:
For complex, scalable projects: React's robust ecosystem and virtual DOM might be a better fit.
For beginners or quick prototypes: Vue.js' gentle learning curve and flexibility could be your champion.
My HNG Journey with React: Gearing Up to Build Awesome Things!
Since HNG uses React, I'm pumped to delve into its component-based magic. I'm eager to learn how to create reusable components, manage state effectively, and leverage the vast React ecosystem. I can already envision building dynamic and interactive interfaces during the internship – the possibilities are endless!
The HNG internship is all about collaboration and mentorship, and I'm excited to learn from experienced developers and fellow interns. With React as my weapon of choice, I'm ready to tackle any coding challenge HNG throws my way!
This is just a taste of the React vs. Vue.js debate. Remember, the best framework is the one that fits your project and your learning style. So, dive in, experiment, and get ready to build something amazing! (P.S. If you're curious about joining the HNG fam, check out their website: https://hng.tech/internship)
If you're looking for the best of the best talent to hire, check out: https://hng.tech/hire
Would you like to talk or catch up about React? I'm open 🙋‍♂️
*Author: tosin_adejuwon_08fdc8c8d5*
---

# Unveiling the Shield: Ensuring Security with Open Source 2FA

*Author: verifyvault · Published: 2024-06-30 · Source: https://dev.to/verifyvault/unveiling-the-shield-ensuring-security-with-open-source-2fa-nn6 · Tags: opensource, security, cybersecurity, github*

In the digital age, where personal data is more valuable than ever, securing your online accounts is not just a recommendation but a necessity. Two-Factor Authentication (2FA) stands as a robust defense against unauthorized access, significantly bolstering your account security beyond mere passwords. However, not all 2FA implementations are created equal. Understanding the underlying security analysis and threat models can empower users to make informed decisions when choosing a 2FA solution.
### <u>Security Analysis: Fortifying Your Defenses</u>
1. **Authentication Methods**: VerifyVault leverages time-based one-time passwords (TOTP), ensuring compatibility with a wide range of services while adhering to industry standards.
2. **Encryption and Privacy**: Your security is paramount. VerifyVault encrypts all data locally, ensuring that even if your device is compromised, your authentication codes remain secure.
3. **Open Source Transparency**: Transparency breeds trust. VerifyVault's open-source nature allows security experts and the community to scrutinize the code for vulnerabilities, ensuring continuous improvement and a high level of security assurance.
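The TOTP scheme mentioned in point 1 is an open standard (RFC 6238, built on RFC 4226's HOTP) rather than anything proprietary; that standardization is what makes authenticator apps interchangeable across services. This sketch is a generic illustration of the algorithm, not VerifyVault's actual code:

```python
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password for a given counter value."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(secret: bytes, period: int = 30, digits: int = 6, now=None) -> str:
    """RFC 6238 time-based OTP: HOTP keyed on the current 30-second time step."""
    timestamp = time.time() if now is None else now
    return hotp(secret, int(timestamp // period), digits)
```

Because the code depends only on the shared secret and the clock, it can be generated fully offline, which is what enables the phishing and interception protections described below.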
### <u>Threat Models: Anticipating Challenges</u>
1. **Phishing Attacks**: VerifyVault's offline operation and manual entry mode protect against phishing attempts targeting TOTP codes.
2. **Device Compromise**: Encrypted storage and optional password locking safeguard against unauthorized access if your device falls into the wrong hands.
3. **Data Interception**: Utilize QR code import/export and encrypted backups to mitigate the risk of interception during data transmission.
### <u>Why Choose VerifyVault?</u>
- **Open Source Assurance**: Built on transparency, VerifyVault allows you to verify its security protocols and trust its commitment to protecting your data.
- **Future Cross-Platform Compatibility**: Whether you're on Windows, Linux, or Mac, VerifyVault will ensure seamless integration into your daily digital life.
- **User-Friendly Features**: From automatic backups to password-locked access, VerifyVault prioritizes user convenience without compromising privacy or security.
### <u>Secure Your Digital Frontier with VerifyVault</u>
Don't wait until it's too late. Embrace the power of two-factor authentication with VerifyVault today. Download now and take control of your digital security with a trusted, open-source solution designed to protect what matters most—your privacy.
### <u>Downloads</u>
[VerifyVault GitHub](https://github.com/VerifyVault)
[VerifyVault Beta v0.2.2 Download](https://github.com/VerifyVault/VerifyVault/releases/tag/Beta-v0.2.2)

*Author: verifyvault*
---

# Comparing Javascript Front-end technologies (Angular and React)

*Author: iam_nick · Published: 2024-06-30 · Source: https://dev.to/iam_nick/comparing-javascript-front-end-technologies-angular-and-react-d3c*

## Introduction
Angular and React are both javascript technologies for building complex and interactive web applications. They share several similarities but some distinct characteristics which makes them fundamentally different.
I will be comparing both technologies in this post but first, let's meet them.
**Angular**
Angular is an open-source, TypeScript-based framework used to build web applications. It has a component-based architecture for building scalable web apps, plus a collection of well-integrated libraries and features, such as client-server communication, routing, and more, that help speed up front-end development.
**React**
React is a declarative, efficient, and flexible JavaScript library for building reusable UI components. Its component-based architecture and declarative views enable easy creation of interactive, complex UIs. With the 'learn once, write anywhere' principle, developers can build fast, scalable apps for all platforms.
## Now, let's compare them
1. Popularity: Both technologies are popular, but which is more popular? On GitHub, React has more stars than Angular, and according to a 2022 Statista survey on the most used frameworks worldwide, React was in second position while Angular was fifth.
2. Performance: Since Virtual DOM trees are lightweight in-memory structures, React tends to exceed Angular in terms of runtime performance.
3. Data Binding: Angular uses two-way data binding, which means the model state changes automatically whenever any interface element changes. This keeps the two layers updated with the same data. Whereas, React uses one-way data binding, which renders the modifications in the interface model only after the model state has been updated first.
4. Server-side Rendering: Angular renders the application by creating a static view before it becomes fully interactive. It is up to you, the developer, to cleverly combine JSON with client-side caching to increase server-side performance. In React, to make your application SEO-friendly, you need to render it on the server, and React makes this easy with a few dedicated functions.
Both technologies are great for building modern web applications with a component-based architecture; choosing the best one to work with depends on the project you want to build.
PS. I am new to writing, therefore this piece may not be perfect. Thanks for reading.
[](https://hng.tech/internship)
[](https://hng.tech/hire)
[](https://hng.tech/premium)
*Author: iam_nick*
---

# My Mobile Development Journey and Architectural Insights

*Author: akpamzy_junior_cc600a044d · Published: 2024-06-30 · Source: https://dev.to/akpamzy_junior_cc600a044d/my-mobile-development-journey-and-architectural-insights-pmg · Tags: flutter, mobile, mobiledev, hng*

The Journey Begins
Embarking on the journey of mobile development is both thrilling and challenging. I started my journey as a mobile developer, specifically a Flutter developer, six months ago. It's been challenging, tasking, but above all, thrilling. One minute I am happy that my code works ☺️☺️, and the next minute I start to ask myself what I am doing in this career space and feel like crying 😩😭.
But above it all, it feels fulfilling to have the knowledge and power to build real-life solutions through mobile development. Well this article is not just to bore you with my personal experiences but to highlight some of the dynamics, architecture and frameworks of mobile development
I can't help but reflect on the fascinating landscape of mobile development platforms and the architectural patterns that drive them. Join me as I explore these concepts (Don’t say I didn’t do anything for you oo 😒)
Mobile Development Platforms
What is mobile app development: Mobile app development is the process of creating applications that run on mobile devices like smartphones and tablets. It involves designing and coding software that users can download and use on platforms like Android and iOS.
When it comes to mobile development, there are multiple platforms and frameworks to consider: Android, iOS, and cross-platform frameworks like Flutter and React Native. Each platform and framework has its own set of tools, languages, and benefits, making the choice between them crucial for any aspiring mobile developer.
Android
Android is a popular mobile operating system developed by Google. It uses Java or Kotlin as its primary programming languages and offers a wide range of tools through Android Studio.
Pros:
Wide user base: Android has a large global market share, which means a potentially vast audience for your app.
Extensive customization: Developers have significant freedom to customize their apps and the user interface.
Easier app publishing process: The Google Play Store has a less stringent app approval process compared to Apple’s App Store.
Cons:
Device fragmentation: Android runs on a wide variety of devices with different screen sizes and hardware capabilities, making optimization challenging.
Security vulnerabilities: The open nature of Android can lead to higher security risks.
iOS
iOS is Apple's mobile operating system. It uses Swift or Objective-C as its primary programming languages and provides a cohesive development environment through Xcode.
Pros:
Consistent user experience: iOS offers a uniform look and feel across devices, leading to a more consistent user experience.
Higher likelihood of monetization: iOS users are generally more willing to pay for apps and in-app purchases.
Better security: iOS is known for its robust security features.
Cons:
Less flexibility: Developers have less freedom to customize the user interface compared to Android.
Higher development costs: Developing for iOS can be more expensive due to the need for Mac hardware and stricter app approval processes.
Flutter
Flutter is a cross-platform framework developed by Google. It allows developers to create applications for Android, iOS, and other platforms using a single codebase written in Dart.
Pros:
Single codebase for multiple platforms: Write once, run anywhere approach saves time and effort.
Rich pre-designed widgets: Flutter provides a comprehensive set of customizable widgets for building UIs.
Hot reload feature: Allows developers to see changes in real-time without restarting the app.
Cons:
Larger app size: Flutter apps tend to have a larger binary size compared to native apps.
Limited native features: Some platform-specific features may not be fully supported.
React Native
React Native is a popular cross-platform framework developed by Facebook. It allows developers to build mobile applications using JavaScript and React.
Pros:
Single codebase: Like Flutter, React Native enables developers to write one codebase that works on multiple platforms.
Large community support: A vast community provides numerous libraries, plugins, and resources.
Hot reload feature: Developers can instantly see the results of their code changes.
Cons:
Performance issues: React Native apps can face performance bottlenecks compared to native apps.
Complexity in custom modules: Implementing custom native modules can be challenging and require deeper platform-specific knowledge.
Software Architecture Patterns
Choosing the right software architecture pattern is crucial for the success and maintainability of any mobile application. Here are some common patterns used in mobile development:
Model-View-Controller (MVC)
MVC is a design pattern that separates an application into three main components: Model, View, and Controller.
Model: Manages the data and business logic of the application.
View: Handles the presentation layer and user interface.
Controller: Acts as an intermediary between the Model and View, processing user input and updating the View accordingly.
Pros:
Clear separation of concerns: Each component has a distinct responsibility, making the application easier to manage and understand.
Easy to understand and implement: MVC is straightforward and widely used, making it accessible to developers.
Cons:
Tight coupling: Controllers can become tightly coupled with views, leading to difficulties in testing and maintenance.
Scalability issues: As the application grows, managing the interactions between components can become complex.
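To make the three roles concrete, here is a minimal, framework-agnostic sketch in TypeScript (all class names are invented for illustration; a real mobile app would wire the View to actual UI widgets):

```typescript
// Model: owns the data and the business rules.
class CounterModel {
  private count = 0;
  increment(): void { this.count += 1; }
  getCount(): number { return this.count; }
}

// View: pure presentation; renders whatever state it is given.
class CounterView {
  render(count: number): string {
    return `Count: ${count}`;
  }
}

// Controller: mediates user input between Model and View.
class CounterController {
  constructor(private model: CounterModel, private view: CounterView) {}
  onIncrementTapped(): string {
    this.model.increment();
    return this.view.render(this.model.getCount());
  }
}

const controller = new CounterController(new CounterModel(), new CounterView());
console.log(controller.onIncrementTapped()); // "Count: 1"
```

Note how the View never touches the Model directly; the tight coupling mentioned above tends to creep in when Controllers start holding references to concrete View widgets.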
Model-View-ViewModel (MVVM)
MVVM is a design pattern that enhances the separation of concerns by introducing the ViewModel component.
Model: Manages the data and business logic.
View: Handles the presentation layer and user interface.
ViewModel: Acts as a mediator between the Model and View, managing the data display logic and user input handling.
Pros:
Easy data binding: MVVM facilitates two-way data binding between the View and ViewModel, simplifying the synchronization of UI and data.
Enhanced testability: The ViewModel can be tested independently of the View, improving the overall testability of the application.
Cons:
Steeper learning curve: MVVM can be more challenging to understand and implement compared to MVC.
Overhead in smaller applications: For simple apps, the added complexity of MVVM may not be justified.
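A minimal sketch of the ViewModel's role, again in plain TypeScript with invented names; the `bind`/`rename` pair below stands in for what a real data-binding layer automates:

```typescript
// Model: raw data.
class UserModel {
  constructor(public name: string) {}
}

// ViewModel: exposes display-ready state and notifies subscribers on change,
// which is exactly what a data-binding layer hooks into.
class UserViewModel {
  private listeners: Array<(label: string) => void> = [];
  constructor(private model: UserModel) {}

  get displayName(): string {
    return this.model.name.toUpperCase();
  }

  bind(listener: (label: string) => void): void {
    this.listeners.push(listener);
    listener(this.displayName); // initial render
  }

  rename(newName: string): void {
    this.model.name = newName;
    this.listeners.forEach((l) => l(this.displayName)); // push update to the View
  }
}

// The "View" here is just a function that receives bound updates.
const vm = new UserViewModel(new UserModel("ada"));
vm.bind((label) => console.log(`Hello, ${label}`)); // Hello, ADA
vm.rename("grace");                                 // Hello, GRACE
```

Because `UserViewModel` has no reference to any UI widget, it can be unit-tested in isolation, which is the testability benefit listed above.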
Clean Architecture
Clean Architecture is a design pattern that emphasizes separation of concerns and independent layers.
Entities: Represent the core business logic and rules.
Use Cases: Define the application-specific business rules and interact with the entities.
Interface Adapters: Convert data between the use cases and the user interface or external systems.
Frameworks and Drivers: Handle the technical details such as databases, user interfaces, and external APIs.
Pros:
Highly scalable and maintainable: The clear separation of concerns and independent layers make the application easier to scale and maintain.
Independent layers for easy modification: Changes in one layer do not affect other layers, allowing for more flexibility and easier updates.
Cons:
Complex for small projects: The architecture can be overkill for simple applications with limited functionality.
Time-consuming setup: The initial setup and configuration can be time-consuming and require a deeper understanding of the pattern.
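A rough TypeScript sketch of the layering, with invented names; the point is that the use case depends only on an abstraction, so the concrete storage (a framework/driver concern) can be swapped without touching the business rules:

```typescript
// Entity: core business object, no framework dependencies.
interface Order { id: string; totalCents: number; }

// Abstraction the use case depends on (an Interface Adapter implements it).
interface OrderRepository { save(order: Order): void; }

// Use case: application-specific business rule.
class PlaceOrder {
  constructor(private repo: OrderRepository) {}
  execute(id: string, totalCents: number): Order {
    if (totalCents <= 0) throw new Error("total must be positive");
    const order: Order = { id, totalCents };
    this.repo.save(order);
    return order;
  }
}

// Frameworks & Drivers layer: a concrete adapter.
// In-memory here; could be SQLite, a REST API, etc.
class InMemoryOrderRepository implements OrderRepository {
  orders: Order[] = [];
  save(order: Order): void { this.orders.push(order); }
}

const repo = new InMemoryOrderRepository();
const placed = new PlaceOrder(repo).execute("o-1", 1999);
console.log(repo.orders.length, placed.id); // prints: 1 o-1
```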
Journey with HNG Internship
I strongly believe joining the [HNG Internship](https://hng.tech/) is a significant milestone in my mobile development journey. I am thrilled about the road ahead, though I am also a little scared of the (shege) ahead 😭. The HNG Internship is the perfect platform for anyone who loves mobile development to channel their passion for technology and innovation into creating meaningful applications. It provides a unique blend of hands-on experience and theoretical knowledge, both essential for becoming a proficient mobile developer.
I am especially excited about HNG because it closely simulates ideating, building, and shipping a real live product that solves day-to-day problems. I took the marketing track in the HNG x program, and it helped my career by teaching me to work across stacks and under pressure; I felt like a new person at my workplace after the internship.
Another amazing thing about the HNG Internship is that it connects employers with top-notch talent who have gone through its rigorous training, via the [HNG HIRE](https://hng.tech/hire) platform.
Conclusion
Mobile development is a dynamic and rewarding field. Understanding the platforms and architecture patterns is crucial for building successful applications. As I embark on this journey with the [HNG Internship](https://hng.tech/), I look forward to growing my skills, building amazing apps, and contributing to the tech community. Let's make this journey unforgettable! | akpamzy_junior_cc600a044d |
1,906,748 | Tipar automáticamente Swagger/OpenAPI endpoints con NSwag | Tipar automáticamente... | 0 | 2024-06-30T15:50:15 | https://dev.to/altaskur/tipar-automaticamente-swaggeropenapi-endpoints-con-nswag-397g | webdev, typescript, tutorial, spanish | # Automatically Typing Swagger/OpenAPI Endpoints with NSwag
- [Automatically Typing Swagger/OpenAPI Endpoints with NSwag](#automatically-typing-swaggeropenapi-endpoints-with-nswag)
- [What is NSwag?](#what-is-nswag)
- [What do you need?](#what-do-you-need)
- [Installation](#installation)
- [Configuration](#configuration)
- [Generating interfaces](#generating-interfaces)
- [Wrapping up](#wrapping-up)
- [Social](#social)
If you have ever had to type your endpoints' responses by hand, you know it is often a long and tedious task. Imagine doing it automatically, in a few seconds. In this article, I will explain how to use NSwag and how to set it up in your frontend projects.
## What is NSwag?
NSwag is a very flexible tool that automatically generates typed API clients from an OpenAPI definition file, in a clear and precise way. It is especially useful for endpoints of a certain size.
## What do you need?
NSwag is written in C#, so you will need the full .NET Framework 4.6.2+ or .NET 6.0+ installed on your system. When you install the npm package, it checks these requirements and, if anything is missing, gives you a direct link to the official Microsoft site, so don't worry about that.
You will, of course, also need an endpoint that follows the Swagger/OpenAPI 2.0 standard. These Swagger definition files are normally linked from your endpoint's Swagger page, right below the title.

We are after the .json file that contains all of our backend's documentation; that is the file NSwag will work with.
There is also a graphical interface for the configuration, but here we will focus on manual generation via commands, so it can be added to an automation pipeline.
## Installation
Now let's add the package to our repository. Since this tool is a development aid with no impact on the final client build, we add it to the dev dependencies.
```bash
npm install nswag --save-dev
```
During installation, if you don't have the required .NET framework (the full .NET Framework 4.6.2+ or .NET 6.0+), the installer will ask you to install it and will provide a direct link to the official Microsoft site so you can download it easily.
## Configuration
The tool consumes a config.nswag file where we put all the configuration. In this article we focus on extracting the typings for our endpoint, although the tool can also generate the requests and DTOs for several targets: TypeScript, Angular, jQuery, even C#.
Here is a base file that we will call config.nswag:
```json
{
  "runtime": "Net80",
  "defaultVariables": null,
  "documentGenerator": {
    "fromDocument": {
      "url": "[your endpoint's swagger.json address]",
      "output": null,
      "newLineBehavior": "Auto"
    }
  },
  "codeGenerators": {
    "openApiToTypeScriptClient": {
      "className": "{controller}Client",
      "moduleName": "",
      "namespace": "",
      "typeScriptVersion": 5.4,
      "template": "Fetch",
      "promiseType": "Promise",
      "httpClass": "HttpClient",
      "withCredentials": false,
      "useSingletonProvider": false,
      "injectionTokenType": "OpaqueToken",
      "dateTimeType": "Date",
      "nullValue": "Undefined",
      "generateClientClasses": false,
      "generateClientInterfaces": true,
      "generateOptionalParameters": false,
      "exportTypes": true,
      "wrapDtoExceptions": false,
      "exceptionClass": "SwaggerException",
      "clientBaseClass": null,
      "wrapResponses": false,
      "generateResponseClasses": true,
      "responseClass": "SwaggerResponse",
      "configurationClass": null,
      "useTransformOptionsMethod": false,
      "useTransformResultMethod": false,
      "generateDtoTypes": true,
      "operationGenerationMode": "MultipleClientsFromOperationId",
      "markOptionalProperties": true,
      "generateCloneMethod": false,
      "typeStyle": "Interface",
      "enumStyle": "Enum",
      "useLeafType": false,
      "extensionCode": null,
      "generateDefaultValues": true,
      "handleReferences": false,
      "generateTypeCheckFunctions": false,
      "generateConstructorInterface": true,
      "convertConstructorInterfaceData": false,
      "importRequiredTypes": true,
      "useGetBaseUrlMethod": false,
      "baseUrlTokenName": "API_BASE_URL",
      "useAbortSignal": false,
      "inlineNamedDictionaries": false,
      "inlineNamedAny": false,
      "includeHttpContext": false,
      "templateDirectory": null,
      "serviceHost": null,
      "serviceSchemes": null,
      "output": "src/app/api/interfaces.ts",
      "newLineBehavior": "Auto"
    }
  }
}
```
Make sure to change the fromDocument url (whether local or remote), and change the codeGenerators output to whatever path you want your endpoint's types saved to.
## Generating interfaces
Once we have the configuration file, we run NSwag with:
```bash
npx nswag run ./config.nswag
```
We can now open the output path and see all of our endpoint's types and enums; in our case that is src/app/api/interfaces.ts.
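As a rough illustration of what you gain, here is how the generated interfaces typically get consumed. The interface shape below is invented for this sketch; the real names and fields come entirely from your swagger.json:

```typescript
// Hypothetical shape of what NSwag might emit for a "UserDto" on your
// backend -- the actual names depend on your OpenAPI document.
interface IUserDto {
  id?: number;
  name?: string | undefined;
}

// A typed fetch helper: the compiler now checks every property access
// against the generated interface instead of `any`.
async function getUser(baseUrl: string, id: number): Promise<IUserDto> {
  const response = await fetch(`${baseUrl}/api/users/${id}`);
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return (await response.json()) as IUserDto;
}
```

Every time the backend changes, re-running NSwag regenerates these interfaces, so type mismatches show up at compile time instead of at runtime.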
## Wrapping up
The tool doesn't stop here: it can also generate the DTOs, the requests, and even the services for your endpoint, all automatically and in a few seconds.
If you want to learn more about this tool, I recommend visiting the official NSwag [documentation](https://github.com/RicoSuter/NSwag).
I hope this was helpful! If you have any questions, feel free to ask in the comments.
And don't forget to follow me on social media to stay up to date with everything new.
## Social
[Twitter](https://twitter.com/altaskur)
[Twitch](https://www.twitch.tv/altaskur)
[Instagram](https://www.instagram.com/altaskur/)
[Github](https://www.github.com/altaskur)
[Otras redes](https://www.github.io/altaskur)
| altaskur |
1,906,750 | React Vs AngularJS | Introduction Frontend development is constantly progressing, and two key technologies in this domain... | 0 | 2024-06-30T15:49:33 | https://dev.to/halimat_yakubu_f257fd0215/react-vs-angularjs-4e1m | **Introduction**
Frontend development is constantly progressing, and two key technologies in this domain are React and AngularJS, two of the most popular frameworks for building dynamic web applications. In this article, I will give brief overviews of React and AngularJS along with their advantages and disadvantages. Furthermore, I will outline my expectations for the HNG Internship and share my perspective on using React.
- **Overview of React**
**React** is a JavaScript library developed by Facebook which allows developers to
create reusable user interface components, and manage the state of their applications efficiently.
- **Advantages of React**
1. Flexibility.
React can be integrated with various libraries and frameworks, giving developers the freedom to choose the tools that best fit their project needs.
2. One-Way Data Binding.
This characteristic of React ensures a unidirectional data flow, which makes debugging easier and ensures that data changes are predictable.
3. Strong Community Support.
Extensive documentation, tutorials, and a vast ecosystem of third-party libraries and tools (such as Redux for state management and React Router for routing) contribute to a robust support network.
- **Disadvantages of React**
1. Steep Learning Curve.
Understanding the component lifecycle, state management, and other advanced concepts can be challenging.
2. Requires Additional Tools
React is a library, not a full-fledged framework, so developers often need to rely on additional tools and libraries (e.g., Redux, React Router) to build a complete application.
3. Rapid Changes.
The React ecosystem evolves quickly, which can make it difficult to keep up with the latest best practices and updates.
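React's one-way data flow mentioned above can be sketched without React itself. This is a minimal, framework-free TypeScript illustration (all names invented): state changes only through an explicit action, and the view is always recomputed from state:

```typescript
// Unidirectional flow: action -> new state -> re-render.
// The old state object is never mutated, which is what makes
// changes predictable and easy to debug.
type State = { count: number };

function reducer(state: State, action: "increment"): State {
  switch (action) {
    case "increment":
      return { count: state.count + 1 }; // fresh object, old state untouched
  }
}

function render(state: State): string {
  return `Clicked ${state.count} times`;
}

let state: State = { count: 0 };
state = reducer(state, "increment"); // data flows one way only
console.log(render(state)); // "Clicked 1 times"
```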
- **Overview of AngularJS**
**AngularJS** is a framework developed by Google for building dynamic web applications. It is a comprehensive solution that includes everything needed to create a robust web application, from data binding to dependency injection.
- **Advantages of AngularJS**
1. Comprehensive Framework: Angular offers a complete solution with built-in tools for routing, form handling, HTTP services, and more, reducing the need for additional libraries.
2. Two-Way Data Binding
This feature automatically synchronizes data between the model and the view, simplifying development and reducing boilerplate code.
3. Structured Architecture: The MVC (Model-View-Controller) pattern helps in organizing code and separating concerns, which is beneficial for large-scale applications.
- **Disadvantages of AngularJS**
1. Performance Issues.
Two-way data binding can lead to performance bottlenecks in large applications due to the continuous synchronization between the model and the view.
2. Verbosity: Angular’s syntax and structure can be verbose, leading to more boilerplate code compared to other frameworks.
3. Less Flexibility: Being a full-fledged framework, Angular is more opinionated in its approach, which can limit flexibility and make it harder to integrate with other libraries or tools not designed specifically for Angular.
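For contrast with React's unidirectional flow, here is a minimal sketch of the two-way binding idea in plain TypeScript (invented names, no AngularJS involved): one shared binding that either side, view or model code, can write to, with watchers notified on every change:

```typescript
// Minimal two-way binding: the "input" and the model stay in sync
// automatically in both directions, ng-model-style.
class Binding<T> {
  private watchers: Array<(v: T) => void> = [];
  constructor(private value: T) {}

  get(): T { return this.value; }

  // Either side calls set(); everyone watching is notified.
  set(v: T): void {
    this.value = v;
    this.watchers.forEach((w) => w(v));
  }

  watch(w: (v: T) => void): void { this.watchers.push(w); }
}

const name = new Binding("Ada");
name.watch((v) => console.log(`view shows: ${v}`)); // view re-renders on change
name.set("Grace"); // typing in an input field would call the same set()
```

With many such bindings watching each other, every write can trigger a cascade of synchronization work, which is the performance bottleneck noted above.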
**My Expectations for the HNG Internship**
As I start the HNG Internship, I am excited to dive into a collaborative, fast paced and dynamic learning environment. I believe this program offers a great opportunity to work on real-world projects, enhance my skills, and gain valuable practical experience while collaborating with like minds.
**My Expectations for working with React**
React is a robust and popular library for creating user interfaces, so that is reason enough for me to be excited at the prospect of expanding my knowledge of React during this internship. I am eager to master its features, learn best practices, and contribute to meaningful projects.
To learn more about the HNG Internship,
Visit: https://hng.tech/internship
If you are looking to hire talented developers,
Check out HNG Hire https://hng.tech/hire
**Conclusion**
Both React and Angular have their own sets of advantages and disadvantages. React’s flexibility and performance make it ideal for dynamic applications, while Angular’s comprehensive framework and structured approach are well-suited for large-scale enterprise applications.
As I begin my journey with the HNG Internship, I am enthusiastic about utilizing these technologies, improving my skills, and contributing to innovative solutions. The future of frontend development is very promising, and I am more than excited to be a part of it.
| halimat_yakubu_f257fd0215 | |
1,906,749 | Quanto tempo de experiência para se candidatar pra uma vaga internacional? | Recebi essa pergunta no formulário que publiquei no meu primeiro artigo. Vou responder com base na... | 0 | 2024-06-30T15:44:04 | https://dev.to/lucasheriques/quanto-tempo-de-experiencia-para-se-candidatar-pra-uma-vaga-internacional-2735 | beginners, career, braziliandevs, webdev | I got this question through the [form](https://forms.gle/qbNvHPpdEaLp7R8b7) I shared in my [first article](https://devnagringa.substack.com/p/como-eu-virei-um-dev-na-gringa). I will answer it based on my experience as a software engineer, but I think everything here applies to any international remote role.
Also, if you have any questions, feel free to use the form or leave a comment, or reach me on my [social channels](https://lucasfaria.dev/socials).
[New here? Join 100+ subscribers and get one email every Sunday about career growth.](https://devnagringa.substack.com/subscribe?utm_source=devto)
This week, there is a surprise at the end of the article. 🙌
---
🎯 Landing a job abroad comes down to two steps:
1. Getting the interview
2. Passing the interview and receiving the offer
Professional experience helps with both steps, but in different ways for each. There are other ways to stand out, too.
That doesn't mean it is easy. Many factors are involved, and some are out of our control. What matters is trying anyway. And there are tips that don't depend on professional experience, which we will cover next.
## Getting the interview
Hiring someone is an exercise in trust.
You need to show the manager that you are capable of doing good work.
One of the most common ways to do that is with a good resume, where your experience says what kind of professional you are.
But it is not the only way.
For a software engineering role, try other strategies such as:
- Email the company's CEO/founder directly. Write clearly and correctly. Share a link to your digital portfolio.
- Try the product. Report a _bug_, suggest a new feature. Venture your opinion on what should be done.
- Use the tools the company uses. Show that you understand the environment the company works in.
- Build your digital track record with your attempts and successes. Use LinkedIn, Twitter, Dev.to, or your own site. Let your work speak for itself.
- Look for examples that people who work at the company have published online. Study their success stories and apply some of those elements to your own application.
- Build a _side project_. A successful project shows people that you can write code, solve people's problems, and work independently, without a manager telling you what to do: that you are a [**manager of one**](https://signalvnoise.com/posts/1430-hire-managers-of-one).
Yes, professional experience helps. But it is not the only way to show you can do the required work.
Your goal should be to answer the manager's question: "why should I hire this person?".
It is hard to define in terms of _years of experience_ when you will be ready. But if you never try, you will never know the answer. Show that you want the job and that you are capable of doing it. Don't worry about arbitrary factors like how many years of experience you have.

## Passing the interview
Your resume now stops being the most important factor.
Your previous experience helps here as long as it is relevant to the role you are applying for.
What matters now is your performance in the company's assessment.
For software engineers, I have [another article](https://devnagringa.substack.com/p/dev-na-gringa-processos-seletivos) about the hiring processes.
If you need to study, do it intelligently. Search for past interview experiences at the company. Find out whether they look for knowledge of [algorithms](https://neetcode.io/), language mastery, [system design](https://amzn.to/3Y2SJ7t), or [frontend](https://www.greatfrontend.com/?fpr=lucasfaria).
Write about what you know. Publish it if you can; it may help other people in a similar situation.
Also remember not to neglect the behavioral interview. I have said before that I consider it the [best way to stand out in any hiring process](https://devnagringa.substack.com/p/como-se-destacar-em-qualquer-processo).
## So how many years of experience do I really need to apply for a job abroad?
As with everything in software engineering, it depends. **But my answer is: zero**. You can start from the very first one.
Of course, there are problems with that.
Openings for junior software engineers working for foreign companies are less common.
It is a more competitive environment: anyone in the world can apply. These are also remote roles, [which are getting more and more competitive](https://www.forbes.com/sites/lindsaykohler/2024/04/02/fully-remote-jobs-are-getting-harder-to-find/).
Years of experience help you become more competitive in this space. But I don't think you need to wait before you start trying.
Do other things that help you stand out.
Join communities. Build projects. Publish them. Write about what you know.
Don't wait on an arbitrary metric like years of experience to define what you expect from your career. Take ownership of it yourself.
Think about where you want to get to. Then write down every step you need to take to get there.
Let me give you an example: [Camila Rosa](https://www.linkedin.com/in/camilarosa-2403/).
She switched careers from TV production to tech community manager at [Meteor](https://www.meteor.com/). And it was her **first job in the tech industry.**
I talked to Camila, and one of the main factors behind that opportunity was attending events! She met people in the field, which led her to hear about an opening, which she then applied for and got.
Use stories like this as inspiration. And commit to your goal.
I have never spoken at a tech event outside of college. But I am submitting **three different talks** to [The Developer's Conference SP](https://cfp-sp.thedevconf.com.br/) this year.
I don't know if I will be accepted (hoping for at least one 🙏). But I will try anyway.
**Do things outside your comfort zone.**
---
## Giveaway and thanks
This week, on June 27th, we reached the **100 subscribers** mark! Just 43 days after publishing the [first article](https://devnagringa.substack.com/p/como-eu-virei-um-dev-na-gringa).
I want to thank everyone who has subscribed so far! Writing these weekly articles has been an incredible experience. I hope to reach more and more people and publish ever more useful content for everyone. ❤️
To celebrate this milestone, I want to propose an idea to the community: a **book club**, reading one chapter a week and meeting on Discord to exchange what we learned. 📚
I thought we could start with a book that is ideal for every level of experience: [Grokking Algorithms, by Aditya Bhargava](https://amzn.to/3W2R0xe). Algorithms and data structures are essential for any software engineer, and also necessary for doing well in _leetcode_-style interviews.
> Grokking Algorithms by [Aditya Bhargava](http://adit.io/) is hands down the best guide on algorithms from beginners to experienced engineers. A very approachable, and visual guide, covering all that most people need to know on this topic. **I am convinced that you don't need to know more about algorithms than this book covers.** [_Written by Gergely Orosz, author of the Pragmatic Engineer blog._](https://blog.pragmaticengineer.com/data-structures-and-algorithms-i-actually-used-day-to-day/)
If at least ten people are interested in the book club, we should start in the second half of July. [Take part in the Substack poll](https://devnagringa.substack.com/i/146124023/sorteio-e-agradecimentos) to show your interest.
**To celebrate 100 subscribers, I will raffle one copy among all subscribers!** I will run the raffle live on [YouTube](https://www.youtube.com/@LucasFariaDev/streams) and [Twitch](https://twitch.tv/lucasfariadev/) in two weeks, on **Sunday, July 13th**. Next week I will be at a wedding, so it will wait until the following one.
**If we reach 150 subscribers by raffle time, we will raffle two copies instead of one.**
---
Did you enjoy this edition? If so, there are two things you can do to help:
[Join 100+ professionals getting practical advice on career growth and the international market.](https://devnagringa.substack.com/subscribe?utm_source=devto)
If you think someone else might enjoy this article, 🔁 share it. | lucasheriques |