id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,880,604 | Ibuprofeno.py💊| #136: Explain this Python code | Explain this Python code Difficulty: Advanced x = {3, 4} y =... | 25,824 | 2024-07-09T11:00:00 | https://dev.to/duxtech/ibuprofenopy-136-explica-este-codigo-python-2bcg | spanish, learning, beginners, python | ## **<center>Explain this Python code</center>**
#### <center>**Difficulty:** <mark>Advanced</mark></center>
```py
x = {3, 4}
y = frozenset(x)
y.add(5)
print(x)
```
* **A.** `AttributeError`
* **B.** `TypeError`
* **C.** `SyntaxError`
* **D.** `NameError`
---
{% details **Answer:** %}
👉 **A.** `AttributeError`
The `frozenset()` function creates immutable sets, that is, sets that cannot be modified by adding or removing items.
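A quick demonstration of this behavior, runnable in any Python 3 interpreter:

```python
x = {3, 4}
y = frozenset(x)  # an immutable copy of x

try:
    y.add(5)  # frozensets have no add() method
except AttributeError as error:
    print(type(error).__name__)  # AttributeError

print(x)  # the original set is untouched: {3, 4}
```

`frozenset` supports the read-only set API (membership tests, unions, intersections) but none of the mutating methods like `add()` or `remove()`.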
{% enddetails %} | duxtech |
1,880,605 | Ibuprofeno.py💊| #137: Explain this Python code | Explain this Python code Difficulty: Intermediate def f(): return... | 25,824 | 2024-07-10T11:00:00 | https://dev.to/duxtech/ibuprofenopy-137-explica-este-codigo-python-42po | python, learning, spanish, beginners | ## **<center>Explain this Python code</center>**
#### <center>**Difficulty:** <mark>Intermediate</mark></center>
```py
def f():
    return (1,2)
    pass
a, b = f()
print(b)
```
* **A.** `1`
* **B.** `2`
* **C.** `pass`
* **D.** `SyntaxError`
---
{% details **Answer:** %}
👉 **B.** `2`
Functions in Python can return any data type or data structure; in this case we return a tuple.
Since the return value is a tuple, we can use **tuple unpacking**, assigning its elements to the variables `a` and `b`.
Finally, we print the second value of the tuple, which is `2`.
Any code after the `return` statement never executes, so the `pass` has no effect in this example and is only there as a distraction.
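The unpacking step can be seen in isolation:

```python
def f():
    return (1, 2)
    pass  # unreachable: nothing after return ever runs

a, b = f()  # tuple unpacking: a = 1, b = 2
print(b)    # 2
```

Unpacking requires the number of targets to match the length of the iterable; otherwise Python raises a `ValueError`.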
{% enddetails %} | duxtech |
1,880,606 | Ibuprofeno.py💊| #138: Explain this Python code | Explain this Python code Difficulty: Easy def f(x, y, z): return x... | 25,824 | 2024-07-11T11:00:00 | https://dev.to/duxtech/ibuprofenopy-138-explica-este-codigo-python-6m3 | python, spanish, learning, beginners | ## **<center>Explain this Python code</center>**
#### <center>**Difficulty:** <mark>Easy</mark></center>
```py
def f(x, y, z):
    return x * y * z
print(f(y=2, x=1, z=4))
```
* **A.** `2`
* **B.** `4`
* **C.** `8`
* **D.** `SyntaxError`
---
{% details **Answer:** %}
👉 **C.** `8`
Python functions support **keyword arguments** (arguments passed by name), a very handy feature that lets us call a function with each argument named, so the order of the arguments doesn't matter.
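To see that the order really doesn't matter:

```python
def f(x, y, z):
    return x * y * z

# With keyword arguments, any order gives the same result:
print(f(y=2, x=1, z=4))  # 8
print(f(z=4, x=1, y=2))  # 8
```

A plain positional call `f(1, 2, 4)` returns the same `8`; keyword arguments simply make the binding explicit.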
{% enddetails %} | duxtech |
1,901,705 | Different Problem, One Solution : API Gateways | Greetings fellow developers, both experienced and newcomers. Have you ever found yourself dealing... | 0 | 2024-07-10T21:33:58 | https://dev.to/grenishrai/different-problem-one-solution-api-gateways-j1d | api, javascript, node, tutorial | Greetings fellow developers, both experienced and newcomers.
Have you ever found yourself dealing with various APIs featuring different endpoints, leading to confusion about which one the frontend should access? Perhaps you even accidentally exposed all endpoint responses to the frontend.
There exists a solution to this issue: the API Gateway.
## What is API Gateway?
API Gateway is a service that helps manage and secure APIs (Application Programming Interfaces). It acts as a front door for the APIs, routing requests to the appropriate backend services, handling API versioning, security, and access control, and also providing features like rate limiting and caching. In simpler terms, API Gateway facilitates the communication and interaction between different software applications.

Let's say you have a mobile e-commerce app with various features like user authentication, product search, and ordering. To manage the APIs for this app, you can use Amazon API Gateway. With API Gateway, you can create different API endpoints for user authentication, product search, and order placement.
For instance, you can set up an API endpoint `/auth` for user authentication that routes incoming requests to a Lambda function for verifying user credentials. Another endpoint `/products` can handle product search requests by connecting to a separate backend service that retrieves product information from a database.
Additionally, let's implement API versioning by creating different versions of the APIs to support backward compatibility. The API Gateway allows you to easily manage and switch between versions without disrupting the app's functionalities.
Moreover, you can enhance security by setting up API key authentication or integrating with IAM roles to control access to specific APIs. With features like rate limiting, you can prevent abuse of the APIs by restricting the number of requests a user can make within a certain time period.
By utilizing API Gateway in this scenario, you can effectively manage and secure the communication between your mobile e-commerce app and the backend services, ensuring a seamless and protected experience for your users.
Some of the most widely used API gateways include AWS API Gateway, Azure API Management, Google Cloud Endpoints, and Kong.
## Work In Progress
After understanding What, let's move on to How.
To begin with, create an empty project directory
`mkdir api-gateway`
Next, initialize npm
`npm init`
Once npm is initialized, proceed to install necessary packages for the application
```shell
npm i express nodemon morgan express-rate-limit axios
```
Create an `index.js` file in the repository, and your final result should resemble this.

Make slight alterations to the `package.json` file so you can start the app with `nodemon` (e.g., a `dev` script). Optionally, you may set `"type": "module"` to use ES6 modules, although our approach will use CommonJS.
Starting with necessary imports,
```js
const express = require('express');
const axios = require('axios');
const rateLimit = require('express-rate-limit');
const morgan = require('morgan');
```
initializing the app
```js
const app = express();
const PORT = 3000;
```
> _NOTE: Although CORS is not utilized in this example, it is essential to implement it._
Let's set up the rate-limiting and Morgan middleware
```js
const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100 // limit each IP to 100 requests per windowMs
});
app.use(limiter);
app.use(morgan('combined'));
```
And the error-handling middleware. Note that Express only invokes error-handling middleware registered _after_ the routes, so `app.use(handleError)` belongs at the end of the file, once all routes are defined:
```js
const handleError = (err, req, res, next) => {
  console.error(err);
  res.status(500).send('Internal Server Error');
};

app.use(handleError); // register this after all the routes
```
In this instance, we will utilize the dummy APIs provided by JSONPlaceholder and ReqRes.
- JSONPlaceholder
- `/posts` - Fetch a collection of posts.
- `/comments` - Access a set of comments.
- `/albums` - Get a listing of albums.
- `/photos` - Obtain a series of photos.
- `/todos` - Get a rundown of todos.
- `/users` - Retrieve a register of users.
- ReqRes
- `/login` - Used for requesting login access.
```js
app.use('/api', async (req, res, next) => { ... });
```
Inside, we'll implement a `try`/`catch` block
```js
try{ ... } catch (error) { ... }
```
Inside the `try`, let's start the main work
```js
let response;
const path = req.path;
```
```js
if (
  path.startsWith("/posts") ||
  path.startsWith("/comments") ||
  path.startsWith("/albums") ||
  path.startsWith("/photos") ||
  path.startsWith("/todos") ||
  path.startsWith("/users")
) {
  // Route to JSONPlaceholder
  response = await axios.get(`https://jsonplaceholder.typicode.com${path}`);
} else if (path.startsWith("/reqres-users")) {
  // Route to ReqRes (strip the "reqres-" prefix so /reqres-users maps to /api/users)
  response = await axios.get(`https://reqres.in/api${path.replace("/reqres-", "/")}`);
} else if (path.startsWith("/reqres-login")) {
  // Handle ReqRes login
  response = await axios.post("https://reqres.in/api/login", {
    email: "eve.holt@reqres.in",
    password: "cityslicka",
  });
} else {
  res.status(404).send("Endpoint not found");
  return;
}

res.json(response.data);
```
And that is it. We're done implementing our first API Gateway.
The overall code should look like this.
```js
const express = require("express");
const axios = require("axios");
const rateLimit = require("express-rate-limit");
const morgan = require("morgan");

const app = express();
const PORT = 3000;

const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // limit each IP to 100 requests per windowMs
});
app.use(limiter);
app.use(morgan("combined"));

// Unified API endpoint
app.use("/api", async (req, res, next) => {
  try {
    let response;
    const path = req.path;
    if (
      path.startsWith("/posts") ||
      path.startsWith("/comments") ||
      path.startsWith("/albums") ||
      path.startsWith("/photos") ||
      path.startsWith("/todos") ||
      path.startsWith("/users")
    ) {
      // Route to JSONPlaceholder
      response = await axios.get(`https://jsonplaceholder.typicode.com${path}`);
    } else if (path.startsWith("/reqres-users")) {
      // Route to ReqRes (strip the "reqres-" prefix so /reqres-users maps to /api/users)
      response = await axios.get(`https://reqres.in/api${path.replace("/reqres-", "/")}`);
    } else if (path.startsWith("/reqres-login")) {
      // Handle ReqRes login
      response = await axios.post("https://reqres.in/api/login", {
        email: "eve.holt@reqres.in",
        password: "cityslicka",
      });
    } else {
      res.status(404).send("Endpoint not found");
      return;
    }
    res.json(response.data);
  } catch (error) {
    next(error);
  }
});

// Error Handling Middleware (must come after the routes so next(error) reaches it)
const handleError = (err, req, res, next) => {
  console.error(err);
  res.status(500).send("Internal Server Error");
};
app.use(handleError);

app.listen(PORT, () => {
  console.log(`API Gateway running on port ${PORT}`);
});
```
## Test and Results
It's time to test our API endpoint. I'm using Insomnia, but you can use any client of your choice. Let's send a request to the API endpoint to see if everything is working as expected.




Keep Hacking! | grenishrai |
1,880,607 | Ibuprofeno.py💊| #139: Explain this Python code | Explain this Python code Difficulty: Intermediate def f(x, y, z): ... | 25,824 | 2024-07-12T11:00:00 | https://dev.to/duxtech/ibuprofenopy-139-explica-este-codigo-python-37ic | python, beginners, spanish, learning | ## **<center>Explain this Python code</center>**
#### <center>**Difficulty:** <mark>Intermediate</mark></center>
```py
def f(x, y, z):
    return x * y * z
print(f(4, z=2, y=2))
```
* **A.** `8`
* **B.** `16`
* **C.** `2`
* **D.** `SyntaxError`
---
{% details **Answer:** %}
👉 **B.** `16`
It's possible to mix positional arguments and keyword arguments in a function call.
The only condition is that **positional arguments** must always come before **keyword arguments**; only then is no error raised.
For example, the following code would be incorrect:
```py
def f(x, y, z):
    return x * y * z
print(f(z=2, 4, y=2)) # SyntaxError: positional argument follows keyword argument
```
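To see both sides of the rule without crashing the script, the invalid call can be fed to `compile()`, which raises the `SyntaxError` before any code runs:

```python
def f(x, y, z):
    return x * y * z

# Positional arguments first, then keyword arguments: valid.
print(f(4, z=2, y=2))  # 16

# A keyword argument before a positional one is rejected at compile time:
try:
    compile("f(z=2, 4, y=2)", "<example>", "eval")
except SyntaxError as error:
    print(error.msg)
```

This also explains why answer **D** is plausible but wrong for the quiz code: only the reordered call is a `SyntaxError`.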
{% enddetails %} | duxtech |
1,880,648 | Testing DateTime.Now Revisited: Using .NET 8.0 TimeProvider | I originally posted this post on my blog a couple of weeks ago. It's part of an ongoing series I've... | 13,455 | 2024-07-08T05:00:00 | https://canro91.github.io/2024/06/10/TestingTimeWithTimeProvider/ | csharp, dotnet, beginners, testing | _I originally posted this post on [my blog](https://canro91.github.io/2024/06/10/TestingTimeWithTimeProvider/) a couple of weeks ago. It's part of an ongoing series I've been publishing, called [Unit Testing 101](https://canro91.github.io/UnitTesting)._
Starting from .NET 8.0, we have new abstractions to test time. We don't need a custom `ISystemClock` interface. There's one built-in. Let's learn how to use the new `TimeProvider` class to write tests that use `DateTime.Now`.
**.NET 8.0 added the TimeProvider class to abstract date and time inside tests. It has a virtual GetUtcNow() method we can override to control the current time inside tests, and a built-in implementation, TimeProvider.System, for production code.**
Let's play with the `TimeProvider` by revisiting [how to write tests that use DateTime.Now](https://canro91.github.io/2021/05/10/WriteTestsThatUseDateTimeNow/).
Back in the day, we wrote two tests to validate expired credit cards. And we wrote an `ISystemClock` interface to control time inside our tests. These are the tests we wrote:
```csharp
using FluentValidation;
using FluentValidation.TestHelper;

namespace TimeProviderTests;

[TestClass]
public class CreditCardValidationTests
{
    [TestMethod]
    public void CreditCard_ExpiredYear_ReturnsInvalid()
    {
        var when = new DateTime(2021, 01, 01);
        var clock = new FixedDateClock(when);
        var validator = new CreditCardValidator(clock);
        // 👆👆👆
        // Look, ma! I'm going back in time

        var creditCard = new CreditCardBuilder()
            .WithExpirationYear(DateTime.UtcNow.AddYears(-1).Year)
            .Build();

        var result = validator.TestValidate(creditCard);
        result.ShouldHaveAnyValidationError();
    }

    [TestMethod]
    public void CreditCard_ExpiredMonth_ReturnsInvalid()
    {
        var when = new DateTime(2021, 01, 01);
        var clock = new FixedDateClock(when);
        var validator = new CreditCardValidator(clock);
        // 👆👆👆
        // Look, ma! I'm going back in time again

        var creditCard = new CreditCardBuilder()
            .WithExpirationMonth(DateTime.UtcNow.AddMonths(-1).Month)
            .Build();

        var result = validator.TestValidate(creditCard);
        result.ShouldHaveAnyValidationError();
    }
}

public interface ISystemClock
{
    DateTime Now { get; }
}

public class FixedDateClock : ISystemClock
{
    private readonly DateTime _when;

    public FixedDateClock(DateTime when)
    {
        _when = when;
    }

    public DateTime Now
        => _when;
}

public class CreditCardValidator : AbstractValidator<CreditCard>
{
    public CreditCardValidator(ISystemClock systemClock)
    {
        var now = systemClock.Now;
        // Beep, beep, boop 🤖
        // Using now to validate credit card expiration year and month...
    }
}
```
We wrote a `FixedDateClock` that implemented `ISystemClock` to freeze time inside our tests. The thing is, we don't need either of them with .NET 8.0.
## 1. Use TimeProvider instead of ISystemClock
Let's get rid of our old `ISystemClock` by making our `CreditCardValidator` receive `TimeProvider` instead, like this:
```csharp
public class CreditCardValidator : AbstractValidator<CreditCard>
{
    // Before:
    // public CreditCardValidator(ISystemClock systemClock)
    // After:
    public CreditCardValidator(TimeProvider systemClock)
    // 👆👆👆
    {
        var now = systemClock.GetUtcNow();
        // or
        //var now = systemClock.GetLocalNow();

        // Beep, beep, boop 🤖
        // Rest of the code here...
    }
}
```
The `TimeProvider` abstract class has the `GetUtcNow()` method to override the current UTC date and time. Also, it has the `LocalTimeZone` property to override the local timezone. With this timezone, `GetLocalNow()` returns the "frozen" UTC time as a local time.
If we're working with `Task`, we can use the `Delay()` method to create a task that completes after, well, a delay. Let's use short delays in our tests to [avoid making our tests slow](https://canro91.github.io/2023/05/29/SpeedingUpSomeTests/). Nobody wants a slow test suite, by the way.
With the `TimeProvider`, we can control time inside our tests by injecting a fake. But for production code, let's use `TimeProvider.System`. It uses `DateTimeOffset.UtcNow` under the hood.
<figure>
  <img src="https://source.unsplash.com/xqjMjaGGhmw/600x400" alt="person holding blue sand">
<figcaption>Controlling the sands of time...Photo by <a href="https://unsplash.com/@benwhitephotography?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Ben White</a> on <a href="https://unsplash.com/photos/person-holding-blue-sand-xqjMjaGGhmw?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a></figcaption>
</figure>
## 2. Use FakeTimeProvider instead of FixedDateClock
We might be tempted to roll a child class that extends `TimeProvider`. But, let's hold our horses. There's an option for that too.
Let's rewrite our tests after that change in the signature of the `CreditCardValidator`.
First, let's install the `Microsoft.Extensions.TimeProvider.Testing` NuGet package. It has a fake implementation of the time provider: `FakeTimeProvider`.
Here are our two tests using the `FakeTimeProvider`:
```csharp
using FluentValidation;
using FluentValidation.TestHelper;
using Microsoft.Extensions.Time.Testing;

namespace TestingTimeProvider;

[TestClass]
public class CreditCardValidationTests
{
    [TestMethod]
    public void CreditCard_ExpiredYear_ReturnsInvalid()
    {
        // Before:
        //var when = new DateTime(2021, 01, 01);
        //var clock = new FixedDateClock(when);
        var when = new DateTimeOffset(2021, 01, 01, 0, 0, 0, TimeSpan.Zero);
        var clock = new FakeTimeProvider(when);
        // 👆👆👆
        // Look, ma! No more ISystemClock
        var validator = new CreditCardValidator(clock);
        // 👆👆👆

        var creditCard = new CreditCardBuilder()
            .WithExpirationYear(DateTime.UtcNow.AddYears(-1).Year)
            .Build();

        var result = validator.TestValidate(creditCard);
        result.ShouldHaveAnyValidationError();
    }

    [TestMethod]
    public void CreditCard_ExpiredMonth_ReturnsInvalid()
    {
        // Before:
        //var when = new DateTime(2021, 01, 01);
        //var clock = new FixedDateClock(when);
        var when = new DateTimeOffset(2021, 01, 01, 0, 0, 0, TimeSpan.Zero);
        var clock = new FakeTimeProvider(when);
        // 👆👆👆
        var validator = new CreditCardValidator(clock);
        // 👆👆👆
        // Look, ma! I'm going back in time

        var creditCard = new CreditCardBuilder()
            .WithExpirationMonth(DateTime.UtcNow.AddMonths(-1).Month)
            .Build();

        var result = validator.TestValidate(creditCard);
        result.ShouldHaveAnyValidationError();
    }
}
```
The `FakeTimeProvider` has two constructors. One without parameters sets the internal date and time to January 1st, 2000, at midnight. And another one that receives a `DateTimeOffset`. That was the one we used in our two tests.
The `FakeTimeProvider` has two helpful methods to change the internal date and time: `SetUtcNow()` and `Advance()`. `SetUtcNow()` receives a new `DateTimeOffset` and `Advance()`, a `TimeSpan` to add it to the internal date and time.
If we're curious, this is the source code of [TimeProvider](https://github.com/dotnet/runtime/blob/5535e31a712343a63f5d7d796cd874e563e5ac14/src/libraries/Common/src/System/TimeProvider.cs) and [FakeTimeProvider](https://github.com/dotnet/extensions/blob/e5e1c7c88f3232bb3a096990da52fe7bf8a76996/src/Libraries/Microsoft.Extensions.TimeProvider.Testing/FakeTimeProvider.cs#L121C9-L129C6) from the official dotnet repository on GitHub.
If we take a closer look at our tests, we're "controlling" the time inside the `CreditCardValidator`. But, we still have `DateTime.UtcNow` when creating a credit card. For that, we can introduce a class-level constant `Now`. But that's an "exercise left to the reader."
Voilà! That's how to use the new .NET 8.0 abstraction to test time. We have the new `TimeProvider` and `FakeTimeProvider`. We don't need our `ISystemClock` and `FixedDateClock` anymore.
***
_If you want to upgrade your unit testing skills, check my course: [Mastering C# Unit Testing with Real-world Examples](https://www.udemy.com/course/mastering-csharp-unit-testing-with-real-world-examples/?referralCode=8456B1B78E2EDE923174) on Udemy. Practice with hands-on exercises and learn best practices by refactoring real-world unit tests._
_Happy testing!_ | canro91 |
1,886,355 | Authentication & Authorization | Topic: "Implementing Authentication with JWT" Description: How to implement authentication and... | 27,559 | 2024-07-08T07:58:00 | https://dev.to/suhaspalani/authentication-authorization-26cd | authjs, javascript, webdev, backenddevelopment | - *Topic*: "Implementing Authentication with JWT"
- *Description*: How to implement authentication and authorization using JSON Web Tokens (JWT).
#### Content:
#### 1. Introduction to JWT
- **What is JWT**: Explain JSON Web Tokens and their structure.
- **Why JWT**: Discuss the benefits of using JWT for authentication.
#### 2. Setting Up JWT
- **Install Dependencies**:
```bash
npm install jsonwebtoken bcryptjs
```
- **Configure JWT**:
```javascript
const jwt = require('jsonwebtoken');
const bcrypt = require('bcryptjs');
const secret = 'your_jwt_secret'; // Use an environment variable in real applications
```
#### 3. User Model and Registration
- **Define User Schema**:
```javascript
const userSchema = new mongoose.Schema({
  username: { type: String, required: true, unique: true },
  password: { type: String, required: true }
});

userSchema.pre('save', async function(next) {
  if (this.isModified('password')) {
    this.password = await bcrypt.hash(this.password, 10);
  }
  next();
});

const User = mongoose.model('User', userSchema);
```
- **User Registration Endpoint**:
```javascript
app.post('/register', async (req, res) => {
  const user = new User(req.body);
  try {
    await user.save();
    res.status(201).json(user);
  } catch (err) {
    res.status(400).json({ error: err.message });
  }
});
```
#### 4. User Login and Token Generation
- **Login Endpoint**:
```javascript
app.post('/login', async (req, res) => {
  const { username, password } = req.body;
  try {
    const user = await User.findOne({ username });
    if (user && await bcrypt.compare(password, user.password)) {
      const token = jwt.sign({ id: user._id, username: user.username }, secret, { expiresIn: '1h' });
      res.json({ token });
    } else {
      res.status(401).send('Invalid credentials');
    }
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});
```
#### 5. Protecting Routes with Middleware
- **Authentication Middleware**:
```javascript
const authMiddleware = (req, res, next) => {
  // Read the header first: calling .replace() on a missing header would throw
  const authHeader = req.header('Authorization');
  if (!authHeader) {
    return res.status(401).send('Access denied');
  }
  const token = authHeader.replace('Bearer ', '');
  try {
    const decoded = jwt.verify(token, secret);
    req.user = decoded;
    next();
  } catch (err) {
    res.status(400).send('Invalid token');
  }
};
```
- **Protecting an Endpoint**:
```javascript
app.get('/profile', authMiddleware, async (req, res) => {
  try {
    const user = await User.findById(req.user.id);
    res.json(user);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});
```
#### 6. Testing Authentication
- **Using Postman**: Demonstrate how to register a user, log in to receive a JWT, and use the JWT to access protected routes.
- **Example Workflow**:
1. Register a new user at `/register`.
2. Log in with the new user at `/login` to get a token.
3. Access the protected `/profile` route using the token in the Authorization header.
This detailed breakdown for weeks 7 to 10 includes explanations and hands-on code examples to provide a comprehensive learning experience. | suhaspalani |
1,888,562 | GitHub Container Registry: How to push Docker images to GitHub 🐋 | The GitHub Container Registry allows you to publish or host private and public Docker images. You... | 27,724 | 2024-07-11T10:00:00 | https://webdeasy.de/en/github-container-registry-en/ | docker, devops, github, webdev | > _The GitHub Container Registry allows you to publish or host private and public Docker images. You can find out everything you need to know in this tutorial._
With its registry service **GitHub Packages**, GitHub provides a way for developers to host their own Docker images directly in GitHub. In my opinion, the name of GitHub is somewhat misleading: the umbrella term for the service is GitHub Packages. It contains a number of registries. These include the **Docker Registry**, which has been renamed the **Container Registry** in order to host all types of containers.

## Why not use Docker Hub? 🐋
The [Docker Hub](https://hub.docker.com/) is the first port of call when it comes to Docker images – no question about it. In the free version, however, only one private image is available to each account and the number of pulls (200 pulls per 6 hours) is also limited. And this is where the GitHub Container Registry comes into play! 🙂
## Push Docker Images into the GitHub Registry
To push an image to the registry, we first need to build one from a Dockerfile. You can create such a Dockerfile using the [docker init command](https://webdeasy.de/en/docker-init-command/), for example. Once the file is available, we can build the image:
```
docker build -t frontend .
```

In the next step, we need to log in to the GitHub Container Registry. To do this, we need a personal access token from GitHub.
### Generate Personal Access Token (PAT)
To generate a new PAT, navigate to **GitHub Settings > Developer settings > Personal access tokens > Tokens (classic)** and create a new token. You must select **“write:packages”** and **“read:packages”** as the scopes. You can adjust the duration of the token to your project or set it to unlimited.

You should save the token, e.g. in your [password manager](https://webdeasy.de/en/password-manager/). If you lose the token, you will have to regenerate it.
Now open a terminal on your computer and set your PAT in a variable (replace `<YOUR_PAT>` with your token):
```
# Set PAT into a variable
export GH_PAT=<YOUR_PAT>
```
To log in to the GitHub Container Registry (ghcr.io), use the following command (replace `<USERNAME>` with your GitHub username).
```
# Login to the GitHub Container Registry using your PAT
echo $GH_PAT | docker login ghcr.io -u <USERNAME> --password-stdin
```
If the PAT and your user name were correct, you should receive the message “Login Succeeded”.

Next, we tag the image so that it also ends up in the correct registry. Replace `<USERNAME>` with your GitHub username, `<IMAGE_NAME>` with the desired name of the image and `<TAG>` with the desired tag (default: `latest`)
```
# Tag the builded docker image
docker tag frontend ghcr.io/<USERNAME>/<IMAGE_NAME>:<TAG>
```
Now we can push the image into the registry. Here you have to replace the same variables as when tagging:
```
docker push ghcr.io/<USERNAME>/<IMAGE_NAME>:<TAG>
```
If you have done everything correctly, you should receive such an output or no error message 😉

_Congratulations! You have successfully pushed your first Docker image to the GitHub registry!_ 🎉

On this page you now have further options, such as changing the visibility or deleting the image.
## Pulling GitHub Docker Images
Of course, you can now also use your Docker images from the registry. The path is the same as for pushing:
```
docker pull ghcr.io/<USERNAME>/<IMAGE_NAME>:<TAG>
```
## GitHub Actions Workflow for Docker Images in the GitHub Registry
Pushing Docker images manually is daft! That’s why there are [CI/CD pipelines](https://webdeasy.de/en/what-is-ci-cd/), which we can implement with [GitHub Actions](https://webdeasy.de/en/github-actions-tutorial-en/), for example. On certain events, e.g. a new commit, we can automatically build the Docker image and publish it to the registry. In the following example, a Docker image is built and published whenever the `frontend` branch is pushed. Building and publishing is complete after the “Build and push Docker image” step. However, I have left the last two steps in for the sake of completeness, as this is a workflow from one of my projects and you may have a use case for them too 😉
In your GitHub repository, you can simply create a file under `.github/workflows/cd.yml`.
```yaml
name: Continuous Deployment - Frontend
on:
push:
branches: ['frontend']
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build-and-push-image:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
attestations: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Log in to the Container registry
uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@9ec57ed1fcdbf14dcef7dfbe97b2010124a938b7
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
- name: Build and push Docker image
id: push
uses: docker/build-push-action@f2a1d5e99d037542a71f64918e516c093c6f3fc4
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
- name: Install SSH client
run: sudo apt-get update && sudo apt-get install -y openssh-client
- name: Deploy to VM
env:
SSH_HOST: ${{ vars.SSH_HOST }}
SSH_USER: ${{ secrets.SSH_USER }}
SERVER_SSH_KEY: ${{ secrets.SERVER_SSH_KEY }}
IMAGE_TAG: ${{ steps.meta.outputs.tags }}
run: |
echo "${{ env.SERVER_SSH_KEY }}" > key.pem
chmod 600 key.pem
ssh -i key.pem -o StrictHostKeyChecking=no ${{ env.SSH_USER }}@${{ env.SSH_HOST }} << 'EOF'
docker login ${{ env.REGISTRY }} -u ${{ github.actor }} -p ${{ secrets.MY_PERSONAL_PAT }}
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.IMAGE_TAG }}
cd ~
docker compose down frontend
docker compose up -d frontend
EOF
# Clean up the private key.pem
rm key.pem
```
And don’t forget: Create the used secrets and variables in the repository settings. In this case:
- **Secrets:**
- MY_PERSONAL_PAT
- SERVER_SSH_KEY
- SSH_USER
- **Variables:**
- SSH_HOST

## GitHub Container Registry: Conclusion
That’s it! Building and publishing Docker images in the GitHub Docker Registry or GitHub Container Registry is very similar to Docker Hub. Only the login via the GitHub PAT is a little more complicated, but also offers advantages in terms of authorisation management, etc. Will you be using the GitHub Packages Registry in the future? | webdeasy |
1,889,916 | Leveraging Vue 3's Composition API for Scalable and Maintainable Codebases | I am proud to say that I have developed with Vue js long enough to have witnessed the evolution of... | 0 | 2024-07-11T15:07:29 | https://dev.to/cyrilmuchemi/leveraging-vue-3s-composition-api-for-scalable-and-maintainable-codebases-255j | javascript, vue, webdev, beginners | I am proud to say that I have developed with Vue js long enough to have witnessed the evolution of Vue js from the **Options API** to the **Composition API.** The Options API, a staple in **Vue 2,** provided a clear and structured way to build components. However, as applications grew larger and more complex, some limitations of the Options API became apparent. This led to the introduction of the Composition API in **Vue 3**.
The Options API in Vue 2 organizes component logic into various options like **data, methods, computed, watch, and lifecycle hooks**. This approach is intuitive and works well for small to medium-sized components.
**Example**
Here is an example of a code that uses the **Options API** to display and reverse a message when the update button is clicked.
``` javascript
<template>
  <div>
    <p>{{ message }}</p>
    <button @click="updateMessage">Update Message</button>
  </div>
</template>

<script>
export default {
  data() {
    return {
      message: 'Hello, Vue 2!'
    };
  },
  methods: {
    updateMessage() {
      this.message = 'Message updated!';
    }
  },
  computed: {
    reversedMessage() {
      return this.message.split('').reverse().join('');
    }
  },
  watch: {
    message(newValue, oldValue) {
      console.log(`Message changed from ${oldValue} to ${newValue}`);
    }
  }
};
</script>
```

You have probably noticed that the more our component's logic grows, the messier our code becomes. Below are some of the **limitations** of using the Options API:
- **Scalability:** As components grow, managing related logic spread across different options becomes difficult.
- **Reusability:** Extracting and reusing logic across components often leads to mixins, which can introduce naming conflicts and lack of clear dependencies.
- **Typescript Support:** TypeScript integration is possible but can be cumbersome and less intuitive with the Options API.
The **Composition API** addresses these limitations by allowing developers to group related logic using functions. This results in better code organization, improved reusability, and enhanced TypeScript support. Here is how the same reverse-string logic can be implemented using the Composition API:
``` javascript
<template>
  <div>
    <p>{{ message }}</p>
    <button @click="updateMessage">Update Message</button>
  </div>
</template>

<script setup>
import { ref, computed, watch } from 'vue';

const message = ref('Hello, Vue 3!');

const updateMessage = () => {
  message.value = 'Message updated!';
};

const reversedMessage = computed(() => {
  return message.value.split('').reverse().join('');
});

watch(message, (newValue, oldValue) => {
  console.log(`Message changed from ${oldValue} to ${newValue}`);
});
</script>
```
Using **script setup** simplifies the code by removing the need for the setup function and the explicit return statement, making the component more concise and easier to read. It also helps with maintaining a scalable architecture and facilitating effective team collaboration.
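To show the reusability gains concretely, the message logic above could be extracted into a composable. Note that the `useMessage` name and file layout below are an illustrative sketch, not code from the original examples:

```javascript
// useMessage.js - a hypothetical composable extracting the message logic
import { ref, computed, watch } from 'vue';

export function useMessage(initial) {
  const message = ref(initial);

  const updateMessage = () => {
    message.value = 'Message updated!';
  };

  const reversedMessage = computed(() =>
    message.value.split('').reverse().join('')
  );

  watch(message, (newValue, oldValue) => {
    console.log(`Message changed from ${oldValue} to ${newValue}`);
  });

  return { message, updateMessage, reversedMessage };
}
```

Any component can now call `useMessage('Hello, Vue 3!')` inside `<script setup>` and destructure only what it needs, avoiding the naming conflicts and hidden dependencies that mixins introduce.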
**Conclusion**
With a clear separation of concerns, developers can focus on specific functionality without being overwhelmed by the complexity of the entire application. Real-world examples highlight how teams have leveraged the Composition API to streamline their development processes, foster consistency, and improve overall code quality.
Adopting the Composition API empowers development teams to build more maintainable and efficient applications, ensuring long-term success and adaptability in a rapidly changing tech landscape. | cyrilmuchemi |
1,891,151 | Effective Strategies for MySQL User Management | MySQL user management is a vital aspect of database administration. This article explores essential... | 21,681 | 2024-07-08T07:00:00 | https://dev.to/dbvismarketing/effective-strategies-for-mysql-user-management-4i53 | security | MySQL user management is a vital aspect of database administration. This article explores essential strategies for managing MySQL users, roles, and privileges effectively.
### MySQL User Management examples
MySQL has three primary users by default: `mysql.session`, `mysql.sys`, and `root`. Their functions are:
- `mysql.session`, access for plugins.
- `mysql.sys`, security backup if `root` is altered.
- `root`, full privileges for all administrative tasks.
Roles help manage user privileges efficiently. Here’s an example:
```sql
CREATE ROLE 'admin_role';
GRANT ALL PRIVILEGES ON database.* TO 'admin_role';
GRANT 'admin_role' TO 'admin_user';
```
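Note that in MySQL 8.0 a granted role is not active until it is activated. Here is a hedged sketch of creating the user referenced above and making the role active by default (the host and password are placeholders):

```sql
CREATE USER 'admin_user'@'localhost' IDENTIFIED BY 'Str0ng!Passw0rd';
GRANT 'admin_role' TO 'admin_user'@'localhost';
SET DEFAULT ROLE 'admin_role' TO 'admin_user'@'localhost';
```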
### FAQ
**What Is User Management in MySQL?**
Managing users and their permissions within MySQL databases.
**How Do I Secure Users in MySQL?**
Utilize strong passwords, appropriate privileges, and MySQL’s security features.
**Where Can I Learn More About MySQL Security?**
Consult MySQL’s official documentation or our detailed blog posts.
### Summary
Managing MySQL users effectively requires knowledge of roles and privileges coupled with strong security measures. For a detailed guide, check out the comprehensive article [MySQL User Management: A Guide.](https://www.dbvis.com/thetable/mysql-user-management-a-guide/) | dbvismarketing |
1,891,198 | Streamlining SQL Data Management with Generated Columns | Generated columns in SQL automatically compute and store data, simplifying database operations. This... | 21,681 | 2024-07-11T07:00:00 | https://dev.to/dbvismarketing/streamlining-sql-data-management-with-generated-columns-4jf8 | Generated columns in SQL automatically compute and store data, simplifying database operations. This article offers a brief overview and practical examples to demonstrate their use.
## Examples of SQL Generated Columns
In SQL, generated columns are defined via `CREATE TABLE` or `ALTER TABLE`. Here’s an example using MySQL:
```sql
ALTER TABLE users
ADD COLUMN fullName VARCHAR(255) AS (CONCAT(name, " ", surname)) STORED;
```
This adds a stored column `fullName` that concatenates `name` and `surname`.
For a virtual column, which doesn’t use storage space:
```sql
ALTER TABLE users
ADD fullNamePoints VARCHAR(255) AS (CONCAT(fullName, " (", points, ")")) VIRTUAL;
```
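Generated columns can also be declared directly at table creation time. A hypothetical MySQL example (the table and column names are made up for illustration):

```sql
CREATE TABLE order_items (
  quantity INT,
  unit_price DECIMAL(10, 2),
  -- Stored generated column: computed on INSERT/UPDATE and kept on disk
  total_price DECIMAL(10, 2) AS (quantity * unit_price) STORED
);
```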
## FAQs About Generated Columns
**What databases support generated columns?**
Databases like MySQL, MariaDB, PostgreSQL, SQL Server, and Oracle support generated columns.
**What is the difference between a trigger and a generated column?**
Triggers execute scripts on events affecting multiple tables, whereas generated columns store auto-calculated data in one table.
**What are the types of columns generated in SQL?**
SQL has stored (precomputed) and virtual (computed on-the-fly) generated columns.
**What is the difference between a generated column and a regular column?**
Generated columns are auto-calculated and immutable, unlike regular columns which are manually updated.
## Conclusion
SQL generated columns automate data calculations, enhancing database efficiency. For an in-depth guide and more examples, check out [The Ultimate Guide to Generated Columns.](https://www.dbvis.com/thetable/the-ultimate-guide-to-generated-columns/) | dbvismarketing | |
1,891,408 | Optimizing Data Manipulation with JavaScript's reduce Method | In modern web development, data manipulation is crucial for ensuring smooth and responsive... | 0 | 2024-07-12T00:13:09 | https://dev.to/ayoashy/optimizing-data-manipulation-with-javascripts-reduce-method-e2l | javascript, webdev, reduce, learning | In modern web development, data manipulation is crucial for ensuring smooth and responsive applications. Whether you're filtering products, finding specific items, or transforming data for display, effective data manipulation ensures your application runs smoothly and provides great user experience.
JavaScript provides several built-in methods like `find`, `map`, and `filter` for common tasks. However, the versatile `reduce` method stands out for its ability to perform all these operations and more. With `reduce`, you can accumulate values, transform arrays, flatten nested structures, and create complex data transformations concisely.
While `reduce` can replicate other array methods, it may not always be the most efficient choice for simple tasks. Methods like `map` and `filter` are optimized for specific purposes and can be faster for straightforward operations. However, understanding how to use `reduce` opens up many ways to make your code cleaner and easier to reason about.
In this article, we will delve into the `reduce` method, explore various use cases, and discuss best practices to maximize its potential.
Overview of the Article
- Understanding the `reduce` Method
- JavaScript `reduce` Syntax
- JavaScript `reduce` Example
- Various Use Cases of the `reduce` Method
- Substituting JavaScript `map`, `filter`, and `find` with `reduce`
- Conclusion
## Understanding the `reduce` Method
> The JavaScript `reduce` method applies a function against an `accumulator` and each element in the array (from left to right) to reduce it to a single value. This single value could be a string, number, object, or array.
Basically, the `reduce` method takes an array and condenses it into one value by repeatedly applying a function that combines the accumulated result with the current array element.
## JavaScript `reduce` Syntax
```javascript
array.reduce(callback(accumulator, currentValue, index, array), initialValue);
```
Parameters:

- `callback`: The function to execute on each element, which takes the following arguments:
  - `accumulator`: The accumulated value previously returned in the last invocation of the callback, or `initialValue`, if supplied.
  - `currentValue`: The current element being processed in the array.
  - `index` (optional): The index of the current element being processed in the array.
  - `array` (optional): The array `reduce` was called upon.
- `initialValue` (optional): A value to use as the first argument to the first call of the callback. If no `initialValue` is supplied, the first element (`array[0]`) will be used as the initial accumulator value, and `callback` will not be executed on the first element.
## JavaScript `reduce` Example
Here is a basic example of how the JavaScript `reduce` method can be used.
## Using JavaScript `reduce` to Sum
```javascript
const numbers = [1, 2, 3, 4];
const sum = numbers.reduce((acc, curr) => acc + curr, 0);
console.log(sum); // Output: 10
```
In this example, reduce adds each number in the array to the accumulator (acc). Starting with an initial value of 0, it processes as follows:
- `(0 + 1) -> 1`
- `(1 + 2) -> 3`
- `(3 + 3) -> 6`
- `(6 + 4) -> 10`
## Various Use Cases of the reduce Method
The `reduce` method is highly versatile and can be applied to a wide range of scenarios. Here are some common use cases with explanations and code snippets.
## Reducing an Array of Objects
Suppose you have an array of objects and you want to sum up a particular property.
```javascript
const products = [
{ name: 'Laptop', price: 1000 },
{ name: 'Phone', price: 500 },
{ name: 'Tablet', price: 750 }
];
const totalPrice = products.reduce((acc, curr) => acc + curr.price, 0);
console.log(totalPrice); // Output: 2250
```
In this example, `reduce` iterates over each product object, adding the price property to the accumulator (acc), which starts at 0.
## Reducing an Array to an Object
You can use `reduce` to transform an array into an object. This can come in handy when you want to group an array by one of its properties.
```javascript
const items = [
{ name: 'Apple', category: 'Fruit' },
{ name: 'Carrot', category: 'Vegetable' },
{ name: 'Banana', category: 'Fruit' }
];
const groupedItems = items.reduce((acc, curr) => {
if (!acc[curr.category]) {
acc[curr.category] = [];
}
acc[curr.category].push(curr.name);
return acc;
}, {});
console.log(groupedItems);
// Output: { Fruit: ['Apple', 'Banana'], Vegetable: ['Carrot'] }
```
This example groups items by their category. For each item, it checks if the category already exists in the accumulator (acc). If not, it initializes an array for that category and then adds the item name to the array.
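A closely related use case, sketched here as an extra illustration, is counting how many times each value occurs:

```javascript
const votes = ['yes', 'no', 'yes', 'yes', 'no'];

// Build an object mapping each value to how many times it appears.
const tally = votes.reduce((acc, curr) => {
  acc[curr] = (acc[curr] || 0) + 1;
  return acc;
}, {});

console.log(tally); // Output: { yes: 3, no: 2 }
```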
## Flattening an Array of Arrays
The `reduce` method can flatten an array of arrays into a single array as shown below
```javascript
const nestedArrays = [[1, 2], [3, 4], [5, 6]];
const flatArray = nestedArrays.reduce((acc, curr) => acc.concat(curr), []);
console.log(flatArray); // Output: [1, 2, 3, 4, 5, 6]
```
Here, `reduce` concatenates each nested array (curr) to the accumulator (acc), which starts as an empty array.
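Combined with recursion, `reduce` can also flatten arrays of arbitrary depth (modern runtimes provide `Array.prototype.flat(Infinity)` for the same job):

```javascript
// Recursively flatten nested arrays of any depth.
const deepFlatten = (arr) =>
  arr.reduce(
    (acc, curr) => acc.concat(Array.isArray(curr) ? deepFlatten(curr) : curr),
    []
  );

console.log(deepFlatten([1, [2, [3, [4]]], 5])); // Output: [1, 2, 3, 4, 5]
```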
## Removing Duplicates from an Array
The `reduce` method can also be used to remove duplicates from an array
```javascript
const numbers = [1, 2, 2, 3, 4, 4, 5];
const uniqueNumbers = numbers.reduce((acc, curr) => {
if (!acc.includes(curr)) {
acc.push(curr);
}
return acc;
}, []);
console.log(uniqueNumbers); // Output: [1, 2, 3, 4, 5]
```
## Substituting JavaScript `map`, `filter`, and `find` with `reduce`
The `reduce` method is incredibly versatile and can replicate the functionality of other array methods like `map`, `filter`, and `find`. While it may not always be the most performant option, it's useful to understand how `reduce` can be used in these scenarios. Here are examples showcasing how `reduce` can replace these methods.
## Using `reduce` to Replace `map`
The `map` method creates a new array by applying a function to each element of the original array. This can be replicated with `reduce`.
```javascript
const numbers = [1, 2, 3, 4];
const doubled = numbers.reduce((acc, curr) => {
acc.push(curr * 2);
return acc;
}, []);
console.log(doubled); // Output: [2, 4, 6, 8]
```
In this example, `reduce` iterates over each number, doubles it, and pushes the result into the accumulator array (acc).
## Using `reduce` to Replace `filter`
The `filter` method creates a new array with elements that pass a test implemented by a provided function. This can also be achieved with `reduce`.
```javascript
const numbers = [1, 2, 3, 4, 5, 6];
const evens = numbers.reduce((acc, curr) => {
if (curr % 2 === 0) {
acc.push(curr);
}
return acc;
}, []);
console.log(evens); // Output: [2, 4, 6]
```
Here, `reduce` checks if the current number (curr) is even. If it is, the number is added to the accumulator array (acc).
## Using `reduce` to Replace `find`
The `find` method returns the first element in an array that satisfies a provided testing function. `reduce` can also be used for this purpose - for example, to find the first even number in an array:
```javascript
const numbers = [1, 3, 5, 6, 7, 8];
const firstEven = numbers.reduce((acc, curr) => {
if (acc !== undefined) return acc;
return curr % 2 === 0 ? curr : undefined;
}, undefined);
console.log(firstEven); // Output: 6
```
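One more pattern worth knowing: `reduce` can chain a list of functions into a pipeline, feeding each function's output into the next:

```javascript
// Compose functions left to right: pipe(f, g)(x) === g(f(x)).
const pipe = (...fns) => (input) => fns.reduce((acc, fn) => fn(acc), input);

const double = (x) => x * 2;
const increment = (x) => x + 1;

const doubleThenIncrement = pipe(double, increment);
console.log(doubleThenIncrement(5)); // Output: 11
```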
## Conclusion
The `reduce` method in JavaScript is a versatile tool that can handle a wide range of data manipulation tasks, including everything covered by `map`, `filter`, and `find`. While it may not always be the most efficient for simple tasks, mastering `reduce` opens up new possibilities for optimizing and simplifying your code. Understanding and effectively using `reduce` can greatly enhance your ability to manage complex data transformations, making it a crucial part of your JavaScript toolkit.
| ayoashy |
1,892,408 | Power Up Your Website: A Serverless Frontend and Backend using 8 AWS services | I recently made a presentation and demo for AWS meetup titled “Building Frontend and Backend using... | 24,864 | 2024-07-08T13:03:00 | https://swac.blog/power-up-your-website-a-serverless-frontend-and-backend-using-8-aws-services/ | aws, serverless, cloudcomputing, beginners |
I recently made a [presentation and demo for AWS meetup titled “Building Frontend and Backend using AWS S3 & Serveless, A Practical Project”](https://www.meetup.com/awsegyptmeetup/events/300522210/). In this project, I created a serverless application that hosted a static website frontend on an S3 bucket and allowed communication from web forms on this static website to a serverless backend to provide a fully-fledged functionality (i.e. both frontend and backend). In this blog post, I will provide an overview of the 8 AWS services that were used in the demo to come up with the Web app up and running.
Furthermore, a brief description of the 8 AWS services and the steps used will be provided. This blog post (and demo) can be particularly useful to developers new to serverless architecture and to AWS users looking to explore serverless options. It can be beneficial in certain use cases such as: building a simple portfolio/company website or developing a landing page for a marketing campaign.
You can check [the event presentation from this link](https://swac.blog/AWS-egypt-meetup2-ppt-public.html). You can also [fork my GitHub Repo containing the code and simple instructions from this link](https://github.com/KhalidElGazzar/aws-egypt-fe-and-be-may2024). Also, make sure to check this post shortly as I will be providing a video recording of the session to show the detailed steps. Furthermore, _Note that_ All IP rights to this blog post are reserved. Since I have been facing several content piracy cases lately, this blog post has ONLY been published on [the Software, Architecture, and Cloud blog - SWAC.blog](https://swac.blog) and canonically to [dev.to](https://dev.to/khalidelgazzar) only. If you are reading it elsewhere, then [please let us know](https://swac.blog/contact-us/)
### I. **Introduction**
Serverless architecture offers numerous benefits for modern web applications. It removes the need to manage servers, allowing developers to focus mainly on the application logic. This mini-project demonstrates how to leverage AWS services to create a cost-effective and scalable serverless application containing both a frontend and a backend.
I will be hosting a static website consisting of 5 HTML pages (index, services, about, contact, & an error page) on an S3 bucket. These static files have a webform that can post data to the backend. In order to add backend functionality, I will be using an AWS Python Lambda function to receive a POST request from the frontend, store it in a DynamoDB table, and then return a response.
The static assets also include a CSS file (for simple styling) and a JavaScript file (to call the backend via the Fetch API). This call is received by the API Gateway and then passed over to the Lambda function for further processing.
The contact form contains a simple email registration form (name, email & submit button) that posts this data to the serverless backend upon clicking the submit button. I used Postman to simulate the POST request during implementation. Note that the static files (HTML, CSS, JavaScript, and images) were stored on a GitHub repo and then were uploaded to an S3 bucket that was configured for static website hosting.

### **II. Frontend Services Overview**
As explained earlier, the frontend of this application is a static website composed of HTML, CSS, and JavaScript files. Here’s a breakdown of the AWS services used for the frontend:
1. **Amazon S3:** S3 is a scalable object storage service that acts as the foundation for our static website. I uploaded the website’s HTML, CSS, and JavaScript files to an S3 bucket configured for static website hosting. This also required configuring ACLs or bucket policies to grant proper public access. S3 provides ease of use, scalability, and cost-effectiveness for our use case.
2. **Amazon Route 53:** Route 53 is a highly available and scalable Domain Name System (DNS) web service. We used Route 53 to manage the domain name for our website. For more information about Route 53, you may check [my earlier blog post covering in-depth details about Route 53](https://swac.blog/aws-route-53-routing-policies-a-cornerstone-component-in-improving-performance-availability/).
3. **Amazon Certificate Manager (ACM):** ACM is a service that allows us to easily provision, manage, and deploy public and private SSL/TLS certificates for our websites. I used ACM to obtain an SSL certificate for our domain name.
4. **Amazon CloudFront:** CloudFront is a content delivery network (CDN) service that improves website performance by caching content at geographically distributed edge locations. In this project, I created a CloudFront distribution that serves content from AWS edge locations (POPs) close to the user's geographic location and redirects HTTP traffic to HTTPS.

### III. **Backend Services Overview**
The backend for this application is serverless, meaning I will utilize services that provision and scale automatically based on incoming requests. This is a huge advantage for developers, eliminating the headache of managing and maintaining infrastructure. Here’s a look at the serverless backend services:
1. **Amazon API Gateway:** API Gateway is a fully managed service that allows developers to create, publish, and maintain APIs for various backends. It acts as the single entry point for frontend requests to our serverless backend. In this demo, I configured API Gateway to expose a REST API.
2. **AWS Lambda:** Lambda is a serverless compute service that lets us run code without provisioning or managing servers. I created a Python Lambda function that handled the incoming API Gateway requests, inserted the received record into a DynamoDB table, and then responded to the frontend with a response.
3. **Amazon DynamoDB:** DynamoDB is a NoSQL database service that provides fast and scalable storage for applications. I used DynamoDB to store form data submitted from the website’s contact form in a simple JSON format.
4. **AWS Identity and Access Management (IAM):** IAM is a service that helps control access to AWS resources. I created the needed IAM roles to grant the Lambda function permission to interact with DynamoDB. Further roles were also created in order to grant the API GW access on the lambda function.
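To make the Lambda piece concrete, here is a rough sketch of what such a handler could look like. This is illustrative only, not the demo's actual code (which lives in the linked GitHub repo); the DynamoDB table object is injected so the logic can be exercised without AWS (in Lambda you would pass `boto3.resource('dynamodb').Table('contacts')`):

```python
import json

def make_handler(table):
    """Bind a Lambda-style handler to a DynamoDB table.

    Any object exposing put_item(Item=...) works, which keeps the
    handler testable locally without AWS credentials.
    """
    def handler(event, context=None):
        body = json.loads(event.get("body") or "{}")
        if not body.get("email"):
            return {"statusCode": 400,
                    "body": json.dumps({"error": "email is required"})}
        # Persist the submitted form fields.
        table.put_item(Item={"email": body["email"],
                             "name": body.get("name", "")})
        return {
            "statusCode": 200,
            # CORS header so the S3-hosted frontend may call the API.
            "headers": {"Access-Control-Allow-Origin": "*"},
            "body": json.dumps({"message": "registered"}),
        }
    return handler
```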

### IV. **High-Level steps to build the serverless web app in AWS**
Here is a simplified overview of the steps involved in building this serverless application. You can refer to the video recording for the detailed steps:
1. **Create an S3 bucket and configure website hosting.** Upload the static website files (HTML, CSS, JavaScript) to the S3 bucket.
2. **Purchase a domain name** and configure Route 53 to manage the DNS records for your domain.
3. **Obtain an SSL certificate from ACM** for your domain name.
4. **Create a CloudFront distribution** and configure it to serve content from the S3 bucket with website hosting enabled.
5. **Create an API Gateway** and define an API resource.
6. **Create a Lambda function** to handle API Gateway requests. The Lambda function should process the request data and interact with DynamoDB.
7. **Create a DynamoDB table** to store the data submitted from the website’s contact form.
8. **Configure IAM roles** to grant the Lambda function permission to interact with DynamoDB.
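As a rough illustration only (the bucket name and local path are placeholders, and your machine needs the AWS CLI configured with credentials), the first step could be scripted like this:

```bash
# Create the bucket, enable static website hosting, and upload the assets
aws s3 mb s3://my-demo-site-bucket
aws s3 website s3://my-demo-site-bucket \
    --index-document index.html --error-document error.html
aws s3 sync ./site s3://my-demo-site-bucket
```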

### V. **Conclusion**
This blog post has provided a high-level overview of how to build a serverless application with a static website as the frontend and then connect it to a serverless backend. The overall steps take around two hours, which clearly demonstrates that we can leverage AWS services to create a cost-effective and scalable application while focusing on development logic rather than server management. Note that this is not a production-level application, and further optimizations and security hardening would be needed to reach that level.
You can check [the event presentation from this link](https://swac.blog/AWS-egypt-meetup2-ppt-public.html). You can also [fork my GitHub Repo containing the code and simple instructions from this link](https://github.com/KhalidElGazzar/aws-egypt-fe-and-be-may2024). Also, make sure to check this post shortly as I will be providing a video recording of the session to show the detailed steps. _Note that_ All IP rights to this blog post are reserved. Since I have been facing several content piracy cases lately, this blog post has ONLY been published on [the Software, Architecture, and Cloud blog - SWAC.blog](https://swac.blog) and canonically to [dev.to](https://dev.to/khalidelgazzar) only. If you are reading it elsewhere, then [please let us know](https://swac.blog/contact-us/)
| khalidelgazzar |
1,893,023 | Recursion vs Loop: a low-level analysis | Introduction Sometimes when writing code we have to decide between using loop or... | 0 | 2024-07-12T00:16:38 | https://dev.to/lucaslealllc/recursion-vs-loop-a-low-level-analysis-4akc | assembly, c, programming, algorithms | ## Introduction
Sometimes when writing code we have to decide between using a loop or recursion. How do both work under the hood? In terms of performance, which one is the best pick, and why?
PS.: CPU architecture -> x86_64 | RAM -> DDR4 16G
---
## 1. Study Case 📝
Let's implement a function that receives a number and, by adding 1 repeatedly, arrives at the value received.
### 1.1. Recursion
```c
int sum(int n) {
if (n == 0) {
return 0;
}
return 1 + sum(n - 1);
}
```
### 1.2. Loop
```c
int sum(int n) {
int result = 0;
for (int i = 1; i <= n; i++) {
result += 1;
}
return result;
}
```
### 1.3. Time each function
```c
#include <stdio.h>
#include <time.h>
// sum functions here
int main() {
int n = 70000;
clock_t start = clock();
sum(n);
clock_t end = clock();
double cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC;
printf("sum(%d) Time elapsed = %.7f\n", n, cpu_time_used);
return 0;
}
```
---
## 2. Compiling the code
Let's compile the code but not assemble it. We can do that by using the `gcc` compiler with the `-S` flag. Since we are interested in the recursive and loop functions, we will only examine their assembly instructions:
### 2.1. Recursion
```assembly
sum:
.LFB0:
.cfi_startproc
endbr64
pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
movq %rsp, %rbp
.cfi_def_cfa_register 6
subq $16, %rsp
movl %edi, -4(%rbp)
cmpl $0, -4(%rbp)
jne .L2
movl $0, %eax
jmp .L3
.L2:
movl -4(%rbp), %eax
subl $1, %eax
movl %eax, %edi
call sum
addl $1, %eax
.L3:
leave
.cfi_def_cfa 7, 8
ret
.cfi_endproc
.LFE0:
.size sum, .-sum
.section .rodata
```
### 2.2. Loop
```assembly
sum:
.LFB0:
.cfi_startproc
endbr64
pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
movq %rsp, %rbp
.cfi_def_cfa_register 6
movl %edi, -20(%rbp)
movl $0, -8(%rbp)
movl $1, -4(%rbp)
jmp .L2
.L3:
addl $1, -8(%rbp)
addl $1, -4(%rbp)
.L2:
movl -4(%rbp), %eax
cmpl -20(%rbp), %eax
jle .L3
movl -8(%rbp), %eax
popq %rbp
.cfi_def_cfa 7, 8
ret
.cfi_endproc
.LFE0:
.size sum, .-sum
.section .rodata
```
---
## 3. Executing both programs
Execution of sum(70000) using both approaches 5 times each. Elapsed time in seconds:
| Loop | Recursion |
|---|---|
| 0.0001380 | 0.0012100 |
| 0.0001230 | 0.0011630 |
| 0.0001370 | 0.0011390 |
| 0.0001710 | 0.0012150 |
| 0.0001380 | 0.0011810 |
Why is the loop approach 10 times faster?
---
## 4. Cause of the performance penalty 📉
At the assembly code, we have to pay attention to some instructions in order to understand the overhead of the recursion approach. These are:
- `pushq %rbp`
- `movq %rsp, %rbp`
- `leave` or `popq %rbp`
In short, `pushq %rbp` saves the base pointer of the calling function. `movq %rsp, %rbp` initializes the base pointer of the current function, and `subq $16, %rsp` allocates 16 bytes on top of the stack frame by subtracting 16 bytes from the initial `%rsp` (which at that point equals `%rbp`): the stack grows from high memory addresses to lower ones. `leave` sets `%rsp` to `%rbp` and then pops the top of the stack, thus restoring the frame of the function that called it.
<u>This process is repeated on each function call. It significantly increases the interaction with memory - the CPU caches cannot fully hide this cost - and the performance penalty comes precisely from that.</u>
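One way to reduce this overhead, sketched below, is a tail-recursive rewrite: when the recursive call is the very last operation, an optimizing compiler (e.g. `gcc -O2`) may replace the `call` with a jump and reuse a single stack frame. This tail-call optimization depends on the compiler and flags, so treat it as an illustration rather than a guarantee:

```c
#include <assert.h>

/* Tail-recursive variant: the recursive call is the final operation,
   so an optimizing compiler may turn it into a jump (no new frame). */
static int sum_tail(int n, int acc) {
    if (n == 0)
        return acc;
    return sum_tail(n - 1, acc + 1);
}

int sum(int n) {
    return sum_tail(n, 0);
}
```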
## 5. Comparing the instructions in both cases
### 5.1. Loop
```assembly
movl %edi, -20(%rbp)
movl $0, -8(%rbp)
movl $1, -4(%rbp)
```
These three instructions load into memory the initial values of `n`, `result` and `i`. `%edi` is the register used to pass the argument to the function, so the value of `n` is saved into the address `-20(%rbp)`. `i` and `result` are initially set to 1 and 0, that is why `$1` and `$0` are loaded into memory addresses `-4(%rbp)` and `-8(%rbp)`, respectively.
The loop starts at `jmp .L2`. `movl -4(%rbp), %eax` loads data into `%eax` register so it can be used to compare in `cmpl -20(%rbp), %eax`. `jle .L3` jumps to `.L3` if `%eax` is less or equal to `-20(%rbp)`, namely `i <= n;`.
In `.L3`, `addl $1, -8(%rbp)` increases `-8(%rbp)` - that is, `result` - by one, then does the same to `i`: `addl $1, -4(%rbp)`, i.e., `i++`.
This process is executed until the comparison is false, then `popq %rbp` and `ret` are executed, respectively popping the top of the stack and returning from `sum`.
### 5.2. Recursion
Recursion instructions:
`movl %edi, -4(%rbp)` takes the argument - sent via the `%edi` register - and saves it 4 bytes below the pointer value saved in `%rbp`. `cmpl $0, -4(%rbp)` compares that value to 0, and `jne .L2` jumps to the `.L2` block if the value in `-4(%rbp)` is not equal to 0.
At `.L2`, `movl -4(%rbp), %eax` loads the value of `-4(%rbp)` into `%eax`, then `subl $1, %eax` subtracts 1 from it and saves the result in the register itself. In `movl %eax, %edi`, the value of register `%eax` is loaded into another register: `%edi`. As we saw, this register is responsible for passing arguments to functions. The argument is passed to `call sum`, allocating more addresses onto the top of the stack frame and repeating the whole recursive process. After each call returns, `addl $1, %eax` increases the returned value by 1 and saves it in `%eax`. When the recursion reaches the base case, `movl $0, %eax` is executed - placing 0 as the return value of `sum(n - 1)` - then `jmp .L3` jumps to `.L3`, which executes `leave` and `ret`.
## 6. Segmentation Fault using recursion
Once it is clear how recursion works under the hood, it is much easier to see how a Segmentation Fault can take place. Each process is given a finite stack space in memory; since recursion can grow the stack frame indefinitely, a Segmentation Fault can occur.
E.g., let's set `int n = 1000000;` and see how it performs:
```bash
linux@x86_64:~$ ./recursive_sum
Segmentation fault (core dumped)
```
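The exact threshold at which this happens depends on the process stack limit. On Linux you can inspect it, and temporarily raise it before re-running, with `ulimit` (the values below are typical defaults, not guarantees):

```shell
# Show the soft stack-size limit of the current shell, in KiB (often 8192)
ulimit -s
# Raise it for this session, e.g. to 64 MiB, then re-run the program:
# ulimit -s 65536 && ./recursive_sum
```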
Since the loop approach makes no nested function calls, the stack frame does not grow indefinitely, so there is no risk of the Stack Overflow that happens under recursion. | lucaslealllc
1,893,250 | Building Docker Images | In the previous post we discussed how to create a Dockerfile. The next step of the process is to... | 27,622 | 2024-07-08T06:00:00 | https://dev.to/kalkwst/building-docker-images-55f1 | beginners, docker, devops, tutorial | In the previous post we discussed how to create a **Dockerfile**. The next step of the process is to build a **Docker image** using the **Dockerfile**.
A **Docker image** is like a compiled file. It's built in layers, starting with the base layer that contains the fundamental setup; this is usually the layer specified by the **FROM** directive. Each subsequent layer adds on top of the previous one, making small adjustments or additions. These layers are stacked on top of one another, and each layer builds upon the changes made in the layer before it.
Once a Docker image is created, all its layers become **read-only**. However, when you start a Docker container from this image, Docker adds a new layer on top of them. This new layer is where any changes or updates made during the container's operation are stored. It acts like a thin, writable layer that captures all the modifications made to the container's filesystem.
In essence, the Docker image provides a blueprint for creating a consistent environment, while the container allows you to work within that environment and make changes as needed, without affecting the original image.
This image build process is initialized by the Docker CLI and executed by the Docker daemon. To generate a Docker image, the Docker daemon needs access to the Dockerfile, any source code, and files referenced inside that Dockerfile. These files are stored in a directory known as the **build context**. This context directory needs to be specified while executing the `docker image build` command.
The `docker image build` command takes the following format:
```powershell
docker image build <context>
```
If we want to specify the current directory as the context, we can use the **dot** (**.**) as a directory:
```powershell
docker image build .
```
Let's create a simple **Dockerfile** to demonstrate the Docker image build process:
```dockerfile
FROM ubuntu:latest
LABEL maintainer="ananalogguyinadigitalworld@example.com"
CMD ["echo", "Hello World"]
```
This Dockerfile does the following. It begins with the **FROM** command, specifying that the image should be built on top of the latest version of Ubuntu available from the Docker Hub repository. This base image serves as the starting point for our custom image.
The **LABEL** command is used to provide metadata about the image. In this case, it assigns the maintainer label, indicating who maintains or is responsible for this particular image. The email address provided (`ananalogguyinadigitalworld@example.com`) is used as the contact information.
Lastly, the **CMD** command sets the default command to be executed when a container is started from this image. Here, it specifies that when a container starts, it should execute the command `echo "Hello World"`. This command simply prints "Hello World" to the standard output of the container.
Navigate to the directory where you created your Dockerfile, and use the following command:
```powershell
docker image build .
```
You will see an output similar to the following:
```
[+] Building 3.5s (6/6) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 143B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:latest 3.4s
=> [auth] library/ubuntu:pull token for registry-1.docker.io 0.0s
=> CACHED [1/1] FROM docker.io/library/ubuntu:latest@sha256:2e863c44b718727c860746568e1d54afd13b2fa71b160f5cd905 0.0s
=> => resolve docker.io/library/ubuntu:latest@sha256:2e863c44b718727c860746568e1d54afd13b2fa71b160f5cd9058fc4362 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:f14484d3185f92d2d7896904300502be4f3c6d0df4ebba61b127d630d74b6f0d 0.0s
```
Now, let's visit the locally available Docker images with the `docker image list` command:
```powershell
docker image list
```
The command should return the following output:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> f14484d3185f 11 days ago 78.1MB
```
Note that there was no name for our custom Docker image. This was because we did not specify any repository or tag during the build process. We can tag an existing image with the `docker image tag` command.
Let's tag our image with IMAGE ID `f14484d3185f` as **my-tagged-image:v1.0**:
```powershell
docker image tag f14484d3185f my-tagged-image:v1.0
```
Now, if we list our images again, we can see the Docker image name and the tag under the **REPOSITORY** and **TAG** columns:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
my-tagged-image v1.0 f14484d3185f 11 days ago 78.1MB
```
We can also tag an image during the build process by specifying the `-t` flag:
```powershell
docker image build -t my-tagged-image:v2.0 .
```
The preceding command will print the following output:
```
[+] Building 5.8s (6/6) FINISHED docker:default
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 143B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:latest 5.6s
=> [auth] library/ubuntu:pull token for registry-1.docker.io 0.0s
=> CACHED [1/1] FROM docker.io/library/ubuntu:latest@sha256:2e863c44b718727c860746568e1d54afd13b2fa71b160f5cd905 0.0s
=> => resolve docker.io/library/ubuntu:latest@sha256:2e863c44b718727c860746568e1d54afd13b2fa71b160f5cd9058fc4362 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:f14484d3185f92d2d7896904300502be4f3c6d0df4ebba61b127d630d74b6f0d 0.0s
=> => naming to docker.io/library/my-tagged-image:v2.0 0.0s
```
This time, in addition to the **writing image sha256:f14484d3185f92d2d7896904300502be4f3c6d0df4ebba61b127d630d74b6f0d** line, we can see a **naming to docker.io/library/my-tagged-image:v2.0** line, which indicates the tagging on our Docker image.
## Summary
In this post, we discussed how to build a Docker image from a **Dockerfile**. We also discussed the difference between a **Dockerfile** and a **Docker image**. Then, we discussed how a Docker image is made up of multiple layers. Finally, we tagged the Docker images.
In the next post, we are going to discuss more advanced directives, like **ENV**, **ARG**, and **WORKDIR**.
# Advanced Dockerfile Directives

In this post, we are going to discuss more advanced **Dockerfile** directives. These directives can be used to create more advanced Docker images.
For example, we can use the **VOLUME** directive to bind the filesystem of the host machine to a Docker container. This will allow us to save the data generated and used by the Docker container to our local machine.
We are going to cover the following directives in this post:
1. The **ENV** directive
2. The **ARG** directive
3. The **WORKDIR** directive
4. The **COPY** directive
5. The **ADD** directive
6. The **USER** directive
7. The **VOLUME** directive
8. The **EXPOSE** directive
9. The **HEALTHCHECK** directive
10. The **ONBUILD** directive
## The ENV Directive
The **ENV** directive in a **Dockerfile** can be used to set environment variables. Environment variables are key-value pairs that provide information to applications and processes running inside the container. They can influence the behavior of programs and scripts by making dynamic values available during runtime.
Environment variables are defined as key-value pairs as per the following format:
```dockerfile
ENV <key> <value>
```
For example, we can set a path using the **ENV** directive as follows:
```dockerfile
ENV PATH $PATH:/usr/local/app/bin/
```
We can set multiple environment variables in the same line separated by spaces. However, in this form, the **key** and **value** should be separated by the equals (`=`) symbol:
```dockerfile
ENV <key>=<value> <key=value> ...
```
Below, we set two environment variables. The **PATH** environment variable is configured with the value of `$PATH:/usr/local/app/bin`, and the **VERSION** environment variable is configured with the value of `1.0.0`:
```dockerfile
ENV PATH=$PATH:/usr/local/app/bin/ VERSION=1.0.0
```
Once an environment variable is set with the **ENV** directive in the **Dockerfile**, this variable is available in all subsequent Docker image layers. This variable is even available in the Docker containers launched from this Docker image.
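As a minimal sketch of this behavior (this hypothetical Dockerfile is not part of the worked example below), a variable set with **ENV** can be read both in later build steps and at runtime:

```dockerfile
FROM ubuntu:latest
ENV VERSION=1.0.0
# The variable is visible in subsequent build steps...
RUN echo "Building version $VERSION"
# ...and in containers started from the image.
CMD ["sh", "-c", "echo Running version $VERSION"]
```

The value can also be overridden when starting a container, for example with `docker container run --env VERSION=2.0.0`.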
## The ARG Directive
The **ARG** directive in a Dockerfile is used to define variables that users can pass at build time to the builder with the `docker build` command. These variables behave similarly to environment variables and can be used throughout the Dockerfile but are not persisted in the final image unless explicitly declared using the **ENV** directive.
The **ARG** directive has the following format:
```dockerfile
ARG <varname>
```
We can also add multiple **ARG** directives, as follows:
```dockerfile
ARG USER
ARG VERSION
```
These arguments can also have optional default values specified within the Dockerfile itself. If no value is provided by the user during the build process, Docker uses the default value defined in the **ARG** instruction:
```dockerfile
ARG USER=TestUser
ARG VERSION=1.0.0
```
Unlike the **ENV** variables, **ARG** variables are not accessible from the running container. They are only available during the build process.
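A small sketch of the difference: the **ARG** value below is only visible while the image is being built, so it has to be copied into an **ENV** variable to survive into the running container:

```dockerfile
FROM ubuntu:latest
ARG VERSION=1.0.0
# ARG values are only available during the build:
RUN echo "Build-time version: $VERSION"
# Promote the build argument to an environment variable
# so that it is also available at runtime:
ENV APP_VERSION=$VERSION
CMD ["sh", "-c", "echo Runtime version: $APP_VERSION"]
```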
### Using ENV and ARG Directives in a Dockerfile
We are going to create a **Dockerfile** that uses Ubuntu as the parent image, but we will be able to change the Ubuntu version at build time. We are also going to specify the environment name and the application directory as environment variables of the Docker image.
Create a new directory named `env-arg-example` using the `mkdir` command:
```powershell
mkdir env-arg-example
```
---
Navigate to the newly created `env-arg-example` directory using the `cd` command:
```powershell
cd env-arg-example
```
---
Now, let's create a new Dockerfile. I am going to use VS Code but feel free to use any editor you feel comfortable with:
```powershell
code Dockerfile
```
---
Add the following content to the **Dockerfile**. Then save and exit:
```dockerfile
ARG TAG=latest
FROM ubuntu:$TAG
LABEL maintainer=ananalogguyinadigitalworld@example.com
ENV ENVIRONMENT=dev APP_DIR=/usr/local/app/bin
CMD ["env"]
```
The **Dockerfile** begins by defining an argument **TAG** with a default value of `latest`. It then uses this argument to specify the base image in the **FROM** directive, so the Ubuntu image tagged `latest` is selected unless a different tag is passed with `--build-arg`.
The **LABEL** directive adds metadata to the image, indicating the maintainer's email address. Next, the **ENV** directive sets two environment variables: `ENVIRONMENT` with a value of `dev` and `APP_DIR` pointing to `/usr/local/app/bin`. These variables can be used by applications running inside the container to adjust behavior based on the environment and directory paths.
Finally, the **CMD** directive specifies the command to run when a container is started from this image, in this case, it executes `env` to display all environment variables set within the container.
---
Now, let's build the Docker image:
```powershell
docker image build -t env-arg --build-arg TAG=23.10 .
```
The output should look similar to the following:
```
[+] Building 34.9s (6/6) FINISHED docker:default
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 189B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:23.10 3.3s
=> [auth] library/ubuntu:pull token for registry-1.docker.io 0.0s
=> [1/1] FROM docker.io/library/ubuntu:23.10@sha256:fd7fe639db24c4e005643921beea92bc449aac4f4d40d60cd9ad9ab6456aec01 31.6s
=> => resolve docker.io/library/ubuntu:23.10@sha256:fd7fe639db24c4e005643921beea92bc449aac4f4d40d60cd9ad9ab6456aec01 0.0s
=> => sha256:fd7fe639db24c4e005643921beea92bc449aac4f4d40d60cd9ad9ab6456aec01 1.13kB / 1.13kB 0.0s
=> => sha256:c57e8a329cd805f341ed7ee7fcc010761b29b9b8771b02a4f74fc794f1d7eac5 424B / 424B 0.0s
=> => sha256:77081d4f1e7217ffd2b55df73979d33fd493ad941b3c1f67f1e2364b9ee7672f 2.30kB / 2.30kB 0.0s
=> => sha256:cd0bff360addc3363f9442a3e0b72ff44a74ccc0120d0fc49dfe793035242642 27.23MB / 27.23MB 30.3s
=> => extracting sha256:cd0bff360addc3363f9442a3e0b72ff44a74ccc0120d0fc49dfe793035242642 1.1s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:86b2f4c440c71f37c3f29f5dd5fe79beac30f5a6ce878bd14dc17f439bd2377d 0.0s
=> => naming to docker.io/library/env-arg 0.0s
```
---
Now, execute the `docker container run` command to start a new container from the Docker image that we built in the last step:
```powershell
docker container run env-arg
```
And the output should be something similar to the following:
```
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=d6020a144f39
ENVIRONMENT=dev
APP_DIR=/usr/local/app/bin
HOME=/root
```
## The WORKDIR Directive
The **WORKDIR** directive in a Dockerfile is used to set the current working directory for any subsequent instructions that follow in the Dockerfile. This directive helps to define where the commands such as **ADD**, **CMD**, **COPY**, **ENTRYPOINT**, and **RUN**, will be executed within the container.
The **WORKDIR** directive has the following format:
```dockerfile
WORKDIR /path/to/workdir
```
If the specified directory does not exist in the image, Docker will create it during the build process. In that sense, the **WORKDIR** directive effectively combines the functionality of the `mkdir` and `cd` commands in a Unix-like system: it creates the directory if it doesn't exist and changes the current directory to the specified path.
We can have multiple **WORKDIR** directives in a Dockerfile. If subsequent **WORKDIR** directives use relative paths, they will be relative to the last **WORKDIR** set.
So for example:
```dockerfile
WORKDIR /one
WORKDIR two
WORKDIR three
WORKDIR drink
```
`WORKDIR /one` will set `/one` as the initial working directory. `WORKDIR two` will then change the directory to `/one/two`. `WORKDIR three` further changes it to `/one/two/three`. Finally, `WORKDIR drink` will change it to its final form, `/one/two/three/drink`.
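We can verify this with a throwaway build step; `RUN pwd` prints the working directory in effect at that point of the build:

```dockerfile
FROM ubuntu:latest
WORKDIR /one
WORKDIR two
WORKDIR three
WORKDIR drink
# Prints /one/two/three/drink in the build output
RUN pwd
```

With BuildKit, the output of the `RUN pwd` step is visible when building with the `--progress=plain` flag.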
## The COPY Directive
When building a Docker image, it's common to include files from our local development environment into the image itself. These files can range from application source code to configuration files and other resources needed for the application to run properly inside the container. The **COPY** directive in a Dockerfile serves this purpose by allowing us to specify which files or directories from our local filesystem should be copied into the image being built.
The syntax of the **COPY** command looks as follows:
```dockerfile
COPY <source> <destination>
```
The `<source>` specifies the path to the file or directory on your local filesystem relative to the build context. The `<destination>` specifies the path where the file or directory should be copied within the Docker image filesystem.
In the following example, we are using the **COPY** directive to copy an `index.html` file from the local filesystem to the `/var/www/html/` directory of the Docker image:
```dockerfile
COPY index.html /var/www/html/index.html
```
We can also use wildcards to copy all files matching the given pattern. Below, we will copy all files with the `.html` extension from the current directory to the `/var/www/html/` directory of the Docker image:
```dockerfile
COPY *.html /var/www/html/
```
When using the **COPY** directive in a Dockerfile to transfer files from the local filesystem into a Docker image, we can also specify the `--chown` flag. This flag allows us to set the user and group ownership of the copied files within the Docker image.
```dockerfile
COPY --chown=myuser:mygroup *.html /var/www/html/
```
In this example, `--chown=myuser:mygroup` specifies that all `.html` files being copied from the local directory to `/var/www/html/` in the Docker image, should be owned by `myuser` (the user) and `mygroup` (the group).
## The ADD Directive
The **ADD** directive in Dockerfiles functions similar to the **COPY** directive but with additional features.
```dockerfile
ADD <source> <destination>
```
The `<source>` specifies a path or **URL** to the file or directory on the local filesystem or a remote URL. The `<destination>` again specifies the path where the file or directory should be copied within the Docker image filesystem.
In the example below, we are going to use **ADD** to copy a file from the local filesystem:
```dockerfile
ADD index.html /var/www/html/index.html
```
In this example, Docker is going to copy the `index.html` file from the local filesystem (relative to the Docker build context) into `/var/www/html/index.html` within the Docker image.
In the example below, we are going to use **ADD** to copy a file from a remote URL:
```dockerfile
ADD http://example.com/test-data.csv /tmp/test-data.csv
```
Unlike **COPY**, the **ADD** directive allows specifying a URL (in this case `http://example.com/test-data.csv`) as the `<source>` parameter. Docker will download the file from the URL and copy it to the `/tmp/test-data.csv` within the Docker image.
The **ADD** directive not only copies files from the local filesystem or downloads them from URLs but also includes automatic extraction capabilities for certain types of compressed archives. When `<source>` is a local tar archive in a recognized compression format (e.g., `.tar`, `.tar.gz`, `.tgz`, `.tar.bz2`, `.tar.xz`), Docker will automatically extract its contents into `<destination>` within the Docker image filesystem. Note that files downloaded from remote URLs are not decompressed, and `.zip` archives are not extracted.
For example:
```dockerfile
ADD myapp.tar.gz /opt/myapp/
```
In the example above, `myapp.tar.gz` is a compressed archive file, and Docker will automatically extract the contents of `myapp.tar.gz` into `/opt/myapp/` within the Docker image.
### Best Practices: COPY vs ADD in Dockerfiles
When writing Dockerfiles, choosing between the **COPY** and **ADD** directives is crucial for maintaining clarity, security and reliability in the image build process.
#### Clarity and Intent
**COPY** is straightforward and explicitly states that files or directories from the local filesystem are being copied into the Docker image. This clarity helps with understanding the Dockerfile's purpose and makes it easier to maintain over time.
On the other hand, **ADD** introduces additional functionalities such as downloading files from URLs and automatically extracting compressed archives. While these features can be convenient in certain scenarios, they can also obscure the original intent of simply copying files. This lack of transparency might lead to unexpected behaviors or security risks if not carefully managed.
#### Security and Predictability
Using **COPY** enhances security by avoiding potential risks associated with downloading files from arbitrary URLs. Docker images should be built using controlled, validated sources to prevent unintended or malicious content from being included. Separating the download of files from the build process and using `COPY` ensures that the Docker build environment remains secure and predictable.
#### Docker Philosophy Alignment
Docker encourages building lightweight, efficient, and predictable containerized applications. **COPY** aligns well with this philosophy by promoting simplicity and reducing the risk of unintended side effects during image builds.
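Following this philosophy, the remote-file example from the **ADD** section can be rewritten as an explicit, auditable download step; the URL below is the same placeholder used earlier in this post:

```dockerfile
FROM ubuntu:latest
# Make the download an explicit build step instead of using ADD <url>
RUN apt-get update && apt-get install -y curl \
    && curl -fsSL -o /tmp/test-data.csv http://example.com/test-data.csv
```

This keeps the Dockerfile's intent visible and lets us validate the downloaded content before it ends up in the image.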
### Using the WORKDIR, COPY, and ADD Directives in a Dockerfile
In this example we are going to deploy a custom HTML file to an Apache web server. We are going to use Ubuntu as our base image and install Apache on top of it. Then, we are going to copy the custom `index.html` file to the Docker image and download a Docker logo.
Create a new directory named `workdir-copy-add-example` using the `mkdir` command:
```powershell
mkdir workdir-copy-add-example
```
---
Navigate to the newly created `workdir-copy-add-example` directory:
```powershell
cd .\workdir-copy-add-example\
```
---
Within the `workdir-copy-add-example` directory, create a file named `index.html`. This file will be copied to the Docker image during build time. I am going to use VS Code, but feel free to use any editor you feel more comfortable with:
```powershell
code index.html
```
---
Add the following content to the index.html file, save it, and close your editor:
```html
<html>
<body>
<h1>
Welcome to Docker!
</h1>
<img src="logo.png" height="350" width="500"/>
</body>
</html>
```
This HTML code creates a basic web page that greets visitors with a large heading saying "Welcome to Docker!". Below the heading, it includes an image displayed using the `<img>` tag with the source attribute (`src="logo.png"`), indicating that it should fetch and display an image file named `logo.png`. The image is sized to be 350 pixels in height and 500 pixels in width (`height="350"` and `width="500"`).
---
Now, create a **Dockerfile** within this directory:
```powershell
code Dockerfile
```
---
Add the following content to the `Dockerfile` file, save it, and exit:
```dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get upgrade
RUN apt-get install apache2 -y
WORKDIR /var/www/html/
COPY index.html .
ADD https://upload.wikimedia.org/wikipedia/commons/4/4e/Docker_%28container_engine%29_logo.svg ./logo.png
CMD ["ls"]
```
This Dockerfile begins by specifying `FROM ubuntu:latest`, indicating it will build upon the latest Ubuntu base image available. The subsequent `RUN apt-get update && apt-get upgrade` command updates the package lists and upgrades the installed packages within the container. Following this, `apt-get install apache2 -y` installs the Apache web server using the package manager. The `WORKDIR /var/www/html/` directive sets the working directory to `/var/www/html/`, a common location for serving web content in Apache.
Within this directory, `COPY index.html .` copies a local `index.html` file from the host machine into the container. Additionally, `ADD https://upload.wikimedia.org/wikipedia/commons/4/4e/Docker_%28container_engine%29_logo.svg ./logo.png` retrieves an SVG image file from a URL and saves it locally as `logo.png` in the same directory.
Lastly, `CMD ["ls"]` specifies that upon container startup, the `ls` command will execute, displaying a listing of files and directories in `/var/www/html/`.
---
Now, build the Docker image with the tag of `workdir-copy-add`:
```powershell
docker build -t workdir-copy-add .
```
You should see the following output:
```
[+] Building 4.0s (13/13) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 290B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:latest 3.6s
=> [auth] library/ubuntu:pull token for registry-1.docker.io 0.0s
=> [1/6] FROM docker.io/library/ubuntu:latest@sha256:2e863c44b718727c860746568e1d54afd13b2fa71b160f5cd9058fc4362 0.0s
=> => resolve docker.io/library/ubuntu:latest@sha256:2e863c44b718727c860746568e1d54afd13b2fa71b160f5cd9058fc4362 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 32B 0.0s
=> https://upload.wikimedia.org/wikipedia/commons/4/4e/Docker_%28container_engine%29_logo.svg 0.3s
=> CACHED [2/6] RUN apt-get update && apt-get upgrade 0.0s
=> CACHED [3/6] RUN apt-get install apache2 -y 0.0s
=> CACHED [4/6] WORKDIR /var/www/html/ 0.0s
=> CACHED [5/6] COPY index.html . 0.0s
=> CACHED [6/6] ADD https://upload.wikimedia.org/wikipedia/commons/4/4e/Docker_%28container_engine%29_logo.svg . 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:646864d79dc576f862a980ef6ddab550ef9801790d2c91967c3c9596cf85b81a 0.0s
=> => naming to docker.io/library/workdir-copy-add 0.0s
```
---
Execute the `docker container run` command to start a new container from the Docker image we built previously:
```powershell
docker run workdir-copy-add
```
As we can see, both `index.html` and `logo.png` are available in the `/var/www/html/` directory:
```
index.html
logo.png
```
## The USER Directive
In Docker, by default, containers run with the root user, which has extensive privileges within the container environment. To mitigate security risks, Docker allows us to specify a non-root user using the **USER** directive in the **Dockerfile**. This directive sets the default user for the container, and all subsequent commands specified in the Dockerfile, such as **RUN**, **CMD**, and **ENTRYPOINT**, will be executed under this user's context.
Implementing the **USER** directive is considered a best practice in Docker security, aligning with the principle of least privilege. It ensures that containers operate with minimal privileges necessary for their functionality, thereby enhancing overall system security and reducing the attack surface.
The **USER** directive takes the following format:
```dockerfile
USER <user>
```
In addition to the username, we can also specify the optional group name to run the Docker container:
```dockerfile
USER <user>:<group>
```
You need to make sure that the `<user>` and `<group>` values are valid user and group names. Otherwise, the Docker daemon will throw an error while trying to run the container.
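If the user we want does not already exist in the base image, we have to create it first. As a sketch, the names `appuser` and `appgroup` below are illustrative:

```dockerfile
FROM ubuntu:latest
# Create the group and user before switching to them
RUN groupadd --gid 1001 appgroup \
    && useradd --uid 1001 --gid appgroup --create-home appuser
USER appuser:appgroup
CMD ["whoami"]
```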
### Using USER Directive in the Dockerfile
In this example we are going to use the **USER** directive in the **Dockerfile** to set the default user. We will be installing the Apache web server and changing the user to **www-data**. Finally, we will execute the `whoami` command to verify the current user by printing the username.
Create a new directory named `user-example`
```powershell
mkdir user-example
```
---
Navigate to the newly created `user-example` directory
```powershell
cd .\user-example\
```
---
Within the `user-example` directory create a new **Dockerfile**
```powershell
code Dockerfile
```
---
Add the following content to your **Dockerfile**, save it and close the editor:
```dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get upgrade
RUN apt-get install apache2 -y
USER www-data
CMD ["whoami"]
```
This Dockerfile starts with the latest Ubuntu base image and updates system packages before installing Apache web server (`apache2`). It enhances security by switching to the `www-data` user, commonly used for web servers, to minimize potential vulnerabilities. The `CMD ["whoami"]` directive ensures that when the container starts, it displays the current user (`www-data`), demonstrating a secure setup suitable for hosting web applications in a Docker environment.
---
Build the Docker image:
```powershell
docker build -t user .
```
And you should see the following output:
```
[+] Building 5.0s (8/8) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 157B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:latest 4.8s
=> [auth] library/ubuntu:pull token for registry-1.docker.io 0.0s
=> [1/3] FROM docker.io/library/ubuntu:latest@sha256:2e863c44b718727c860746568e1d54afd13b2fa71b160f5cd9058fc436217b30 0.0s
=> => resolve docker.io/library/ubuntu:latest@sha256:2e863c44b718727c860746568e1d54afd13b2fa71b160f5cd9058fc436217b30 0.0s
=> CACHED [2/3] RUN apt-get update && apt-get upgrade 0.0s
=> CACHED [3/3] RUN apt-get install apache2 -y 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:ce1de597471f741f4fcae898215cfcb0d847aacf7c201690c5d4e95289476768 0.0s
=> => naming to docker.io/library/user 0.0s
```
---
Now, execute the `docker container run` command to start a new container from the Docker image that we built in the previous step:
```powershell
docker container run user
```
And the output should display `www-data` as the current user associated with the Docker container:
```
www-data
```
## The VOLUME Directive
In Docker, containers are designed to encapsulate applications and their dependencies in a portable and lightweight manner. However, by default, any data generated or modified within a Docker container's filesystem is ephemeral, meaning it exists only for the duration of the container's runtime. When a container is deleted or replaced, this data is lost, which poses challenges for applications that require persistent storage, such as databases or file storage systems.
To address this challenge, Docker introduced the concept of volumes. Volumes provide a way to persist data independently of the container lifecycle. They act as a bridge between the Docker container and the host machine, ensuring that data stored within volumes persists even when containers are stopped, removed, or replaced. This makes volumes essential for applications that need to maintain stateful information across container instances, such as storing databases, configuration files, or application logs.
When you define a volume in a Dockerfile using the `VOLUME` directive, Docker creates a managed directory within the container’s filesystem. This directory serves as the mount point for the volume. Crucially, Docker also establishes a corresponding directory on the host machine, where the actual data for the volume is stored. This mapping ensures that any changes made to files within the volume from within the container are immediately synchronized with the mapped directory on the host machine, and vice versa.
Volumes in Docker support various types, including named volumes and host-mounted volumes. Named volumes are created and managed by Docker, offering more control and flexibility over volume lifecycle and storage management. Host-mounted volumes, on the other hand, allow you to directly mount a directory from the host filesystem into the container, providing straightforward access to host resources.
The **VOLUME** directive generally takes a JSON array as a parameter:
```dockerfile
VOLUME ["path/to/volume"]
```
Or, we can specify a plain string with multiple paths:
```dockerfile
VOLUME /path/to/volume1 /path/to/volume2
```
We can use the `docker container inspect <container>` command to view the volumes available in a container. The JSON output of the `docker container inspect` command will print the volume information similar to the following:
```json
[
{
"CreatedAt":"2024-06-21T22:52:52+03:00",
"Driver":"local",
"Labels":null,
"Mountpoint":"/var/lib/docker/volumes/f46f82ea6310d0db3a13897a0c3ab45e659ff3255eaeead680b48bca37cc0166/_data",
"Name":"f46f82ea6310d0db3a13897a0c3ab45e659ff3255eaeead680b48bca37cc0166",
"Options":null,
"Scope":"local"
}
]
```
### Using the VOLUME Directive in the Dockerfile
In this example, we are going to set up a Docker container to run the Apache web server. However, we don't want to lose the Apache log files in case of a Docker container failure. As a solution, we are going to persist the log files by mounting the Apache log path to the underlying Docker host.
Create a new directory named `volume-example`
```powershell
mkdir volume-example
```
---
Navigate to the newly created `volume-example` directory
```powershell
cd volume-example
```
---
Within the `volume-example` directory create a new **Dockerfile**
```powershell
code Dockerfile
```
---
Add the following to the **Dockerfile**, save it, and exit
```dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get upgrade
RUN apt-get install apache2 -y
VOLUME ["/var/log/apache2"]
```
This Dockerfile starts by using the latest version of Ubuntu as the base image and ensures it is up to date by running `apt-get update` and `apt-get upgrade` to update all installed packages. It then installs Apache HTTP Server (`apache2`) using `apt-get install apache2 -y`. The `VOLUME ["/var/log/apache2"]` directive defines a Docker volume at `/var/log/apache2`, which is where Apache typically stores its log files.
---
Now, let's build the Docker image:
```powershell
docker build -t volume .
```
And the output should be as follows:
```
[+] Building 3.6s (8/8) FINISHED docker:default
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 155B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:latest 3.5s
=> [auth] library/ubuntu:pull token for registry-1.docker.io 0.0s
=> [1/3] FROM docker.io/library/ubuntu:latest@sha256:2e863c44b718727c860746568e1d54afd13b2fa71b160f5cd9058fc436217b30 0.0s
=> => resolve docker.io/library/ubuntu:latest@sha256:2e863c44b718727c860746568e1d54afd13b2fa71b160f5cd9058fc436217b30 0.0s
=> CACHED [2/3] RUN apt-get update && apt-get upgrade 0.0s
=> CACHED [3/3] RUN apt-get install apache2 -y 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:9c7a81e379553444e0b4f3bbf45bdd17880aea251db8f8b75669e13964b9c30f 0.0s
=> => naming to docker.io/library/volume
```
---
Execute the `docker container run` command to start a new container from the previously built image. Note that you also need to use the `--interactive` and `--tty` flags to open an interactive bash session so that you can execute commands from the bash shell of the container. Also, you need to use the `--name` flag to define the container name as `volume-container`
```powershell
docker container run --interactive --tty --name volume-container volume /bin/bash
```
Your bash shell will be opened as follows:
```bash
root@8aa0f5fb8a6d:/#
```
---
Navigate to the `/var/log/apache2` directory
```bash
root@8aa0f5fb8a6d:/# cd /var/log/apache2/
```
This will produce the following output:
```
root@8aa0f5fb8a6d:/var/log/apache2#
```
---
Now, list the available files in the directory
```bash
root@8aa0f5fb8a6d:/var/log/apache2# ls -l
```
The output should be as follows
```
total 0
-rw-r----- 1 root adm 0 Jun 20 13:42 access.log
-rw-r----- 1 root adm 0 Jun 20 13:42 error.log
-rw-r----- 1 root adm 0 Jun 20 13:42 other_vhosts_access.log
```
These are the log files created by Apache while running the process. The same files should be available once you check the host mount of this volume.
---
Exit the container to check the host filesystem
```bash
root@8aa0f5fb8a6d:/var/log/apache2# exit
```
---
Inspect the `volume-container` to view the mount information
```powershell
docker container inspect volume-container
```
Under the `Mounts` key, you will be able to see the information relating to the mount
```json
"Mounts":[
{
"Type":"volume",
"Name":"50d3a5abf34535fbd3a347cbd6c74acf87a7aa533494360e661c73bbdf34b3e8",
"Source":"/var/lib/docker/volumes/50d3a5abf34535fbd3a347cbd6c74acf87a7aa533494360e661c73bbdf34b3e8/_data",
"Destination":"/var/log/apache2",
"Driver":"local",
"Mode":"",
"RW":true,
"Propagation":""
}
]
```
---
Inspect the volume with the `docker volume inspect <volume_name>` command. You can find the `<volume_name>` in the `Name` field of the previous output
```powershell
docker volume inspect 50d3a5abf34535fbd3a347cbd6c74acf87a7aa533494360e661c73bbdf34b3e8
```
You should get an output similar to the following
```json
[{
"CreatedAt":"2024-06-21T11:02:32Z",
"Driver":"local",
"Labels":{
"com.docker.volume.anonymous":""
},
"Mountpoint":"/var/lib/docker/volumes/50d3a5abf34535fbd3a347cbd6c74acf87a7aa533494360e661c73bbdf34b3e8/_data",
"Name":"50d3a5abf34535fbd3a347cbd6c74acf87a7aa533494360e661c73bbdf34b3e8",
"Options":null,
"Scope":"local"
}]
```
---
List the files available in the host file path. The host file path can be identified with the `Mountpoint` field of the previous output
```bash
ls -l /var/lib/docker/volumes/50d3a5abf34535fbd3a347cbd6c74acf87a7aa533494360e661c73bbdf34b3e8/_data
```
## The EXPOSE Directive
The **EXPOSE** directive in Docker serves to indicate to Docker that a container will be listening on specific ports during its runtime. This declaration is primarily informative and does not actually publish the ports to the host system or make them accessible from outside the container by default. Instead, it documents which ports are intended to be used for inter-container communication or network services within the Docker environment.
The **EXPOSE** directive supports both TCP and UDP protocols, allowing flexibility in how ports are exposed for various networking requirements. This directive is a precursor to the `-p` or `-P` options used during container runtime to actually map these exposed ports to ports on the host machine, enabling external access if required.
The **EXPOSE** directive has the following format:
```dockerfile
EXPOSE <port>
```
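By default, an exposed port is assumed to use TCP; the protocol can also be stated explicitly. The following is an illustrative sketch (not part of this tutorial's Dockerfile):

```dockerfile
# Expose port 80 over TCP (the default protocol when none is given)
EXPOSE 80

# The protocol can also be written explicitly
EXPOSE 80/tcp

# Expose a UDP port, for example for a DNS service
EXPOSE 53/udp
```

At runtime, `docker run -P` publishes all exposed ports to random high-numbered host ports, while `-p <host_port>:<container_port>` maps one specific pair.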
## The HEALTHCHECK Directive
A health check is a crucial mechanism designed to assess the operational health of containers. It provides a means to verify whether applications running within Docker containers are functioning properly. Without a specified health check, Docker lacks the capability to autonomously determine the health status of a container. This becomes especially critical in production environments where reliability and uptime are paramount.
The **HEALTHCHECK** directive in Docker allows developers to define custom health checks, typically in the form of commands or scripts, that periodically inspect the container's state and report back on its health. This directive ensures proactive monitoring and helps Docker orchestration tools make informed decisions about container lifecycle management based on health status.
There can be only one **HEALTHCHECK** directive in a **Dockerfile**. If there is more than one **HEALTHCHECK** directive, only the last one will take effect.
For example, we can use the following directive to ensure that the container can receive traffic on the `http://localhost/` endpoint:
```dockerfile
HEALTHCHECK CMD curl -f http://localhost/ || exit 1
```
The exit code at the end of the preceding command is used to report the health status of the container: `0` means the container is healthy, while `1` means it is unhealthy. These are the only valid values for this field.
When using the **HEALTHCHECK** directive in Docker, it's possible to configure additional parameters beyond the basic command to tailor how health checks are performed:
- **`--interval`**: Specifies the frequency at which health checks are executed, with a default interval of 30 seconds.
- **`--timeout`**: Defines the maximum time allowed for a health check command to complete successfully. If no successful response is received within this duration, the health check is marked as failed. The default timeout is also set to 30 seconds.
- **`--start-period`**: Specifies the initial delay before Docker starts executing the first health check. This parameter allows the container some time to initialize before health checks begin, with a default start period of 0 seconds.
- **`--retries`**: Defines the number of consecutive failed health checks allowed before Docker considers the container as unhealthy. By default, Docker allows up to 3 retries.
In the following example, the default values of **HEALTHCHECK** are overridden, by providing custom values:
```dockerfile
HEALTHCHECK \
--interval=1m \
--timeout=2s \
--start-period=2m \
--retries=3 \
CMD curl -f http://localhost/ || exit 1
```
### Using EXPOSE and HEALTHCHECK Directives in the Dockerfile
We are going to dockerize the Apache web server to access the Apache home page from the web browser. Additionally, we are going to configure health checks to determine the health status of the Apache web server.
Create a new directory named `expose-healthcheck-example`
```powershell
mkdir expose-healthcheck-example
```
---
Navigate to the newly created `expose-healthcheck-example` directory
```powershell
cd .\expose-healthcheck-example\
```
---
Create a **Dockerfile** and add the following content
```dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get upgrade
RUN apt-get install apache2 curl -y
HEALTHCHECK CMD curl -f http://localhost/ || exit 1
EXPOSE 80
ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]
```
This Dockerfile starts by pulling the latest Ubuntu base image and updating it. It then installs Apache web server and curl using `apt-get`. The `HEALTHCHECK` directive is set to run a health check command (`curl -f http://localhost/ || exit 1`), ensuring the container's health based on localhost connectivity. Port 80 is exposed to allow external access to Apache. Finally, the container is configured to run Apache in foreground mode using `ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]`, ensuring it stays active and responsive as the main process. This setup enables hosting a web server accessible via port 80 within the Docker environment.
---
Build the image
```powershell
docker image build -t expose-healthcheck-example .
```
You should get an output similar to the following:
```
[+] Building 29.0s (8/8) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 244B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:latest 3.4s
=> [auth] library/ubuntu:pull token for registry-1.docker.io 0.0s
=> [1/3] FROM docker.io/library/ubuntu:latest@sha256:2e863c44b718727c860746568e1d54afd13b2fa71b160f5cd9058fc436217b30 0.0s
=> => resolve docker.io/library/ubuntu:latest@sha256:2e863c44b718727c860746568e1d54afd13b2fa71b160f5cd9058fc436217b30 0.0s
=> CACHED [2/3] RUN apt-get update && apt-get upgrade 0.0s
=> [3/3] RUN apt-get install apache2 curl -y 24.8s
=> exporting to image 0.6s
=> => exporting layers 0.6s
=> => writing image sha256:3323e865b3888a4e45852c6a8c163cb820739735716f8783a0d126b43d810f1e 0.0s
=> => naming to docker.io/library/expose-healthcheck-example 0.0s
```
---
Execute the `docker container run` command to start a new container. You are going to use the `-p` flag to map port `8080` of the host to port `80` of the container. Additionally, you are going to use the `--name` flag to specify the container name as `expose-healthcheck-container`, and the `-d` flag to run the container in detached mode
```powershell
docker container run -p 8080:80 --name expose-healthcheck-container -d expose-healthcheck-example
```
---
List the running containers with the `docker container list` command
```powershell
docker container list
```
In the output, you will see that the `STATUS` of `expose-healthcheck-container` is healthy
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3ff16b11275c   expose-healthcheck-example   "apache2ctl -D FOREG…"   About a minute ago   Up About a minute (healthy)   0.0.0.0:8080->80/tcp   expose-healthcheck-container
```
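You can also query the health status directly. Assuming the container from this example is still running, `docker inspect` can extract the status reported by the **HEALTHCHECK**:

```powershell
docker inspect --format "{{.State.Health.Status}}" expose-healthcheck-container
```

This should print `healthy` once the first check has passed (or `starting` during the initial start period).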
---
Now, you should be able to view the Apache home page. Navigate to the `http://127.0.0.1:8080` endpoint from your browser

## The ONBUILD Directive
The ONBUILD directive in Dockerfiles facilitates the creation of reusable base images intended for subsequent image builds. It allows developers to define instructions that will be triggered only when another Docker image uses the current image as its base. For instance, you could construct a Docker image containing all necessary prerequisites and configurations required to run an application.
By applying the ONBUILD directive within this "prerequisite" image, specific instructions can be deferred until the image is employed as a parent in another Dockerfile. These deferred instructions are not executed during the build process of the current Dockerfile but are instead inherited and executed when building the child image. This approach streamlines the process of setting up environments and ensures that common dependencies and configurations are consistently applied across multiple projects or applications derived from the base image.
The **ONBUILD** directive takes the following format
```dockerfile
ONBUILD <instruction>
```
As an example, imagine that we have the following **ONBUILD** instruction in the **Dockerfile** of a custom base image
```dockerfile
ONBUILD ENTRYPOINT ["echo", "Running an ONBUILD Directive"]
```
The `Running an ONBUILD Directive` value will not be printed if we create a Docker container from our custom base image, but will be printed if we use it as a base for another Docker image.
### Using the ONBUILD Directive in a Dockerfile
In this example, we are going to build a parent image with an Apache web server and use the **ONBUILD** directive to copy HTML files.
Create a new directory named `onbuild-parent-example`
```powershell
mkdir onbuild-parent-example
```
---
Navigate to the newly created `onbuild-parent-example` directory:
```powershell
cd .\onbuild-parent-example\
```
---
Create a new **Dockerfile** and add the following content
```dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get upgrade
RUN apt-get install apache2 -y
ONBUILD COPY *.html /var/www/html
EXPOSE 80
ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]
```
This Dockerfile begins by using the latest Ubuntu base image. It updates and upgrades the system packages, then installs the Apache web server. The ONBUILD directive specifies that any child images built from this Dockerfile will automatically copy all HTML files from the build context to the `/var/www/html` directory within the container. Port 80 is exposed to allow incoming traffic to the Apache server. Finally, the `ENTRYPOINT` command configures the container to run Apache in foreground mode, ensuring it remains active and responsive as the primary process. This setup enables the container to serve web content via Apache on port 80.
---
Now, build the Docker image:
```powershell
docker image build -t onbuild-parent-example .
```
The output should be as follows:
```
[+] Building 3.5s (8/8) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 221B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:latest 3.3s
=> [auth] library/ubuntu:pull token for registry-1.docker.io 0.0s
=> [1/3] FROM docker.io/library/ubuntu:latest@sha256:2e863c44b718727c860746568e1d54afd13b2fa71b160f5cd9058fc4362 0.0s
=> => resolve docker.io/library/ubuntu:latest@sha256:2e863c44b718727c860746568e1d54afd13b2fa71b160f5cd9058fc4362 0.0s
=> CACHED [2/3] RUN apt-get update && apt-get upgrade 0.0s
=> CACHED [3/3] RUN apt-get install apache2 -y 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:4a6360882fb65415cdd7326392de35c2336a8599c2c4b8b7a1e4d962d81df7e4 0.0s
=> => naming to docker.io/library/onbuild-parent-example 0.0s
```
---
Execute the `docker container run` command to start a new container from the Docker image built in the previous step:
```powershell
docker container run -p 8080:80 --name onbuild-parent-container -d onbuild-parent-example
```
---
If you navigate to the `http://127.0.0.1:8080/` endpoint you should see the default Apache home page

---
Remove the container so it won't interfere with the ports
```powershell
docker container stop onbuild-parent-container
docker container rm onbuild-parent-container
```
---
Now, let's create another Docker image using `onbuild-parent-example` as the base image, to deploy a custom HTML home page. To do that, let's create a new directory named `onbuild-child-example` and navigate into it
```powershell
cd ..
mkdir onbuild-child-example
cd .\onbuild-child-example\
```
---
Create a new HTML page named `index.html` with the following content
```html
<html>
<body>
<h1>Demonstrating Docker ONBUILD Directive</h1>
</body>
</html>
```
---
In the same directory create a **Dockerfile**
```dockerfile
FROM onbuild-parent-example
```
This **Dockerfile** has a single directive. This will use the **FROM** directive to utilize the `onbuild-parent-example` Docker image that we created previously as the base image.
---
Now, build the docker image
```powershell
docker image build -t onbuild-child-example .
```
The output should be something like the following
```
[+] Building 0.3s (7/7) FINISHED docker:default
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 64B 0.0s
=> [internal] load metadata for docker.io/library/onbuild-parent-example:latest 0.0s
=> [internal] load build context 0.1s
=> => transferring context: 134B 0.0s
=> [1/1] FROM docker.io/library/onbuild-parent-example 0.1s
=> [2/1] COPY *.html /var/www/html 0.0s
=> exporting to image 0.1s
=> => exporting layers 0.0s
=> => writing image sha256:9fb3629a292e2536300724db933eb59a6fb918f9d01e46a01aff18fe1ad6fe69 0.0s
=> => naming to docker.io/library/onbuild-child-example 0.0s
```
---
Execute the `docker container run` command to start a new container with the image we just built
```powershell
docker container run -p 8080:80 --name onbuild-child-container -d onbuild-child-example
```
---
You should be able now to view our custom `index.html` page if you navigate to the `http://127.0.0.1:8080/` endpoint.

## Summary
In this post we focused on building Docker images. We discussed more advanced Dockerfile directives, including the **ENV**, **ARG**, **WORKDIR**, **COPY**, **ADD**, **USER**, **VOLUME**, **EXPOSE**, **HEALTHCHECK**, and **ONBUILD** directives.
In the next few posts we are going to discuss what a Docker registry is, look at private and public Docker registries and how we can publish images to Docker registries.
---
Buckle up, buttercup! This docker journey is about to get even wilder. For references, just check out the first post in this series. It's your one-stop shop for all the nitty-gritty details. | kalkwst |
# Building a Scalable Furniture E-commerce Web API Using .NET Clean Architecture and MongoDB
## **INTRODUCTION**
In today's digital era, e-commerce has become a vital component of the global economy, enabling businesses to reach a wider audience and streamline their operations. Developing a robust and scalable e-commerce application requires a thoughtful approach to architecture and technology stack. This technical writing delves into the creation of a furniture e-commerce application using the .NET Clean Architecture model and MongoDB as the database.
The .NET Clean Architecture provides a structured and organized way to build applications, promoting separation of concerns and maintainability. By leveraging the power of MongoDB, a NoSQL database known for its scalability and flexibility, we ensure that our furniture e-commerce application can handle a high volume of transactions and adapt to changing requirements.
This article will guide you through the key components and design principles of the .NET Clean Architecture, the integration of MongoDB, and the various features and functionalities of the furniture e-commerce application. Whether you are a developer looking to adopt best practices or a technical enthusiast interested in modern software development, this writing aims to provide comprehensive insights into building a successful e-commerce platform tailored for the furniture industry.

## **Setting Up the Project Structure for Lacariz Furniture E-commerce Application**
In this section, we will guide you through the initial setup of a furniture e-commerce application using the .NET Clean Architecture model. This involves creating a Web API project along with three class libraries for data access, domain entities, and service logic. Follow the steps below to set up your project structure:
## Step 1: Create the Web API Project
- Open Visual Studio.
- Select Create a new project.
- Choose ASP.NET Core Web API and click Next.
- Name the project Lacariz.Furniture.Api and choose a suitable location.
- Click Create and configure the project with the latest .NET version.
- Click Create again to generate the project.
## Step 2: Create the Class Libraries
- In the Solution Explorer, right-click on the solution and select Add > New Project.
- Choose Class Library and click Next.
- Name the project Lacariz.Furniture.Data and click Create.
- Repeat the steps to create two more class libraries named Lacariz.Furniture.Domain and Lacariz.Furniture.Service.
## Your solution should now contain the following projects:
- `Lacariz.Furniture.Api`
- `Lacariz.Furniture.Data`
- `Lacariz.Furniture.Domain`
- `Lacariz.Furniture.Service`
## Project Structure Overview
The `Lacariz.Furniture.Api` project will serve as the entry point of the application, exposing the Web API endpoints. The `Lacariz.Furniture.Data` project will handle data access and communication with `MongoDB`. The `Lacariz.Furniture.Domain` project will contain the domain entities. Finally, the `Lacariz.Furniture.Service` project will implement the service layer, containing the application’s use cases and business rules.
To help you set up your project structure, I have attached a screenshot guide illustrating the above steps. Ensure your solution explorer reflects the organization shown in the screenshot before proceeding to the implementation phase.
By following these steps, you will have a solid foundation for building a scalable and maintainable furniture e-commerce application using .NET Clean Architecture and MongoDB. In the subsequent sections, we will delve deeper into each project, exploring their roles and how they interact with each other to form a cohesive application.

## Configuring MongoDB for Lacariz Furniture E-commerce Application
In this section, we will configure `MongoDB` for the Lacariz Furniture e-commerce application. We'll create a configuration folder structure in the Lacariz.Furniture.Domain project, focusing on the MongoDB configuration. Follow these steps:
## Step 1: Create the Config Folder Structure
**Create the Config Folder:**
- In the Lacariz.Furniture.Domain project, add a new folder named Config.
**Create the Interfaces and Implementations Folders:**
- Inside the Config folder, create two subfolders named Interfaces and Implementations.
## Step 2: Define MongoDB Configuration Interface
**Add the IMongoDbConfig Interface:**
- In the Interfaces folder, create an interface file named IMongoDbConfig.cs.
```C#
namespace Lacariz.Furniture.Domain.Config.Interfaces;
public partial interface IMongoDbConfig
{
string DatabaseName { get; set; }
string ConnectionString { get; set; }
}
```
## Step 3: Implement MongoDB Configuration
**Add the MongoDbConfig Class:**
- In the Implementations folder, create a class file named MongoDbConfig.cs.
- This class will implement the IMongoDbConfig interface, providing concrete properties for the MongoDB configuration.
```C#
using Lacariz.Furniture.Domain.Config.Interfaces;
namespace Lacariz.Furniture.Domain.Config.Implementations;
public partial class MongoDbConfig : IMongoDbConfig
{
public string DatabaseName { get; set; }
public string ConnectionString { get; set; }
public MongoDbConfig(string connectionString, string databaseName)
{
DatabaseName = databaseName;
ConnectionString = connectionString;
}
}
```
## Step 4: Configuring the program.cs for ASP.NET Core Application
- **Serilog Configuration:** Sets up Serilog for logging. It enriches logs with contextual information, writes logs to the console, and reads configuration settings.
- **Swagger/OpenAPI Configuration:** Configures Swagger/OpenAPI to provide documentation and testing tools for the API endpoints.
- **Health Checks:** Adds health check services to monitor the application's health status.
- **CORS Policy:** Configures a Cross-Origin Resource Sharing (CORS) policy to allow requests from any origin, method, and header, facilitating cross-domain communication.
- **Configuration and Dependencies:** Loads additional configuration settings and registers the data and service dependencies required by the application.
- **API Versioning:** Configures API versioning to manage different versions of the API and maintain backward compatibility.
- **Health Checks UI:** Configures the Health Checks UI to provide a dashboard for viewing health check results, and sets parameters such as the evaluation time, maximum history entries, and the health check API endpoint.
- **Development Environment Configuration:** Enables Swagger and Swagger UI only in the development environment for easier API testing and documentation.
- **Logging, CORS, HTTPS Redirection, and Authorization:** Sets up request logging using Serilog, enables the CORS policy, redirects HTTP requests to HTTPS for security, and adds the authorization middleware.
- **Mapping Controllers and Health Checks:** Maps controller routes to handle API requests, configures the Health Checks UI middleware for monitoring, and sets up the endpoint for health checks.
- **Running the Application:** Starts the application, launching the web server and making it ready to handle incoming requests.
```C#
//COPY AND PASTE IN YOUR PROGRAM.CS FILE
using HealthChecks.UI.Client;
//using Lacariz.Brevort.API;
using Lacariz.Furniture.Data;
using Lacariz.Furniture.Service;
using Microsoft.AspNetCore.Diagnostics.HealthChecks;
using Microsoft.OpenApi.Models;
using Serilog;
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Host.UseSerilog((context, config) =>
{
config.Enrich.FromLogContext()
.WriteTo.Console()
.ReadFrom.Configuration(context.Configuration);
});
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen(
c =>
{
c.SwaggerDoc("v1", new OpenApiInfo { Title = "Lacariz.Furniture", Version = "v1" });
}
);
builder.Services.AddHealthChecks();
builder.Services.AddCors(p => p.AddPolicy("corsapp", builder =>
{
builder.WithOrigins("*").AllowAnyMethod().AllowAnyHeader();
}));
builder.AddConfiguration();
builder.Services.AddDataDependencies(builder.Configuration);
builder.Services.AddServiceDependencies(builder.Configuration);
builder.Services.AddApiVersioning(x =>
{
x.DefaultApiVersion = new ApiVersion(1, 0);
x.AssumeDefaultVersionWhenUnspecified = true;
x.ReportApiVersions = true;
});
builder.Services.AddHealthChecksUI(opt =>
{
opt.SetEvaluationTimeInSeconds(builder.Configuration.GetValue<int>("HealthCheckConfig:EvaluationTimeInSeconds")); //time in seconds between check
opt.MaximumHistoryEntriesPerEndpoint(builder.Configuration.GetValue<int>("HealthCheckConfig:MaxHistoryPerEndpoint")); //maximum history of checks
opt.SetApiMaxActiveRequests(builder.Configuration.GetValue<int>("HealthCheckConfig:ApiMaxActiveRequest")); //api requests concurrency
opt.AddHealthCheckEndpoint("default api", builder.Configuration.GetValue<string>("HealthCheckConfig:HealthCheckEndpoint")); //map health check api
//bypass ssl
opt.UseApiEndpointHttpMessageHandler(sp =>
{
return new HttpClientHandler
{
ClientCertificateOptions = ClientCertificateOption.Manual,
ServerCertificateCustomValidationCallback = (httpRequestMessage, cert, cetChain, policyErrors) => { return true; }
};
});
})
.AddInMemoryStorage();
var app = builder.Build();
if (app.Environment.IsDevelopment())
{
app.UseSwagger();
app.UseSwaggerUI(x =>
{
        x.SwaggerEndpoint("/swagger/v1/swagger.json", "Lacariz.Furniture");
x.RoutePrefix = string.Empty;
});
}
//app.UseMiddleware<EncryptionMiddleware>();
app.UseSerilogRequestLogging();
app.UseCors("corsapp");
app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();
app.UseHealthChecksUI();
app.MapHealthChecks(builder.Configuration.GetValue<string>("HealthCheckConfig:HealthCheckEndpoint"), new HealthCheckOptions()
{
Predicate = _ => true,
ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
});
app.Run();
```
## Update appsettings.json:
Ensure the appsettings.json file includes a `MongoDbSettings` section specifying the `DatabaseName` and `ConnectionString`, along with the other settings that will be needed throughout the application.
```json
{
"MongoDbSettings": {
"ConnectionString": "mongodb://localhost:27017",
"DatabaseName": "LacarizFurnitureDB"
},
"JwtConfig": {
"Secret": "xsxderyrredfghjkllknnnmuyffyuvhhgfhhjhgfuytrsewsdfwsdftfuhioikpoijiughtcgredwsxfedcvhgbiuhmkoiokpokjmkhngbgffghert",
"Issuer": "http://localhost:5013"
},
"EmailSettings": {
"SmtpHost": "smtp.elasticemail.com",
"SmtpPort": "2525",
"SmtpUser": "--Your email address--",
"SmtpPass": "--your smtp elastic mail pass--"
},
"Serilog": {
"MinimumLevel": {
"Default": "Information",
"Override": {
"Microsoft": "Warning",
"System": "Warning",
"Microsoft.Hosting.Lifetime": "Information"
}
}
},
"PollyConfig": {
"BreakerTime": 2,
"RetryTime": 1,
"RetryCount": 5,
"HandledEventsAllowedBeforeBreaking": 5
},
"HealthCheckConfig": {
"EvaluationTimeInSeconds": 18000,
"MaxHistoryPerEndpoint": 50,
"ApiMaxActiveRequest": 1,
"HealthCheckEndpoint": "/health"
},
"PushNotification": {
"type": "service_account",
"project_id": "lacariz-furniture",
"private_key_id": "89b19620f584e5ba541b35210089eaf2e9545c0b",
"private_key": "--Your private key--",
"client_email": "--Your client email--",
"client_id": "--Your client Id--",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "--your certificate url--",
"universe_domain": "googleapis.com"
}
}
```
## Summary
By following these steps, you will set up a clear and organized configuration for `MongoDB` within your `Lacariz Furniture e-commerce application`. This includes creating the necessary folder structure for configuration interfaces and implementations, defining the required configuration interface, and implementing the configuration class. Additionally, you will register the configuration in the API project's program class and update the application settings to include necessary details. This structured approach ensures maintainability and scalability for your application's configuration.
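The `builder.AddConfiguration()` call used in Program.cs is project-specific and its body is not shown in this article. As a rough sketch (the extension-method name matches the tutorial, but the body here is an assumption for illustration), it could bind the `MongoDbSettings` section and register the strongly typed config and a shared Mongo client in the DI container:

```C#
using Lacariz.Furniture.Domain.Config.Implementations;
using Lacariz.Furniture.Domain.Config.Interfaces;
using MongoDB.Driver;

// Hypothetical sketch; the tutorial's actual AddConfiguration may differ.
public static class ConfigurationExtensions
{
    public static WebApplicationBuilder AddConfiguration(this WebApplicationBuilder builder)
    {
        // Read the MongoDbSettings section from appsettings.json
        var connectionString = builder.Configuration["MongoDbSettings:ConnectionString"];
        var databaseName = builder.Configuration["MongoDbSettings:DatabaseName"];

        // Register the strongly typed config and a single shared Mongo client
        builder.Services.AddSingleton<IMongoDbConfig>(
            new MongoDbConfig(connectionString, databaseName));
        builder.Services.AddSingleton<IMongoClient>(new MongoClient(connectionString));

        return builder;
    }
}
```

Registering `IMongoClient` once as a singleton follows the MongoDB .NET driver guidance that a single `MongoClient` instance should be shared across the application.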
## Configuring MongoDB Interface and Repository for Lacariz Furniture E-commerce Application
Before we implement the notification service, we need to set up the MongoDB context interface and implementation that will handle communication with the database.
```C#
public partial interface IMongoDBLogContext
{
IMongoCollection<MyBankLog> Logs { get; set; }
IMongoCollection<User> Users { get; set; }
IMongoCollection<Admin> Admins { get; set; }
IMongoCollection<FurnitureItem> FurnitureItems { get; set; }
IMongoCollection<ShoppingCart> ShoppingCarts { get; set; }
IMongoCollection<WishlistItem> WishlistItems { get; set; }
IMongoCollection<Order> Orders { get; set; }
IMongoCollection<PreOrder> PreOrders { get; set; }
IMongoCollection<CustomerInquiry> CustomerInquiries { get; set; }
}
public partial class MongoDBLogContext : IMongoDBLogContext
{
public IMongoCollection<MyBankLog> Logs { get; set; }
public IMongoCollection<User> Users { get; set; }
public IMongoCollection<Admin> Admins { get; set; }
public IMongoCollection<FurnitureItem> FurnitureItems { get; set; }
public IMongoCollection<ShoppingCart> ShoppingCarts { get; set; }
    public IMongoCollection<WishlistItem> WishlistItems { get; set; }
public IMongoCollection<Order> Orders { get; set; }
public IMongoCollection<PreOrder> PreOrders { get; set; }
public IMongoCollection<CustomerInquiry> CustomerInquiries { get; set; }
    public MongoDBLogContext(IMongoDbConfig config, IMongoClient mongoClient)
    {
        // Use the injected IMongoClient instead of constructing a second client
        var database = mongoClient.GetDatabase(config.DatabaseName);
Logs = database.GetCollection<MyBankLog>("MyBankLog");
Users = database.GetCollection<User>("User");
Admins = database.GetCollection<Admin>("Admin");
FurnitureItems = database.GetCollection<FurnitureItem>("FurnitureItem");
ShoppingCarts = database.GetCollection<ShoppingCart>("ShoppingCartItem");
WishlistItems = database.GetCollection<WishlistItem>("WishlistItem");
Orders = database.GetCollection<Order>("Order");
PreOrders = database.GetCollection<PreOrder>("PreOrder");
CustomerInquiries = database.GetCollection<CustomerInquiry>("CustomerInquiry");
}
}
```
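With the context in place, the `AddDataDependencies` extension referenced in Program.cs (but not shown in this article) would be expected to register it. The following is a hedged sketch — the method name matches the tutorial, while the body is an assumption; the real method may register additional services:

```C#
using Lacariz.Furniture.Data.Repositories.Implementations;
using Lacariz.Furniture.Data.Repositories.Interfaces;

// Hypothetical sketch of AddDataDependencies for illustration only.
public static class DataDependencyExtensions
{
    public static IServiceCollection AddDataDependencies(
        this IServiceCollection services, IConfiguration configuration)
    {
        // One context instance for the application's lifetime,
        // so each collection handle is resolved once
        services.AddSingleton<IMongoDBLogContext, MongoDBLogContext>();
        services.AddScoped<IUserRepository, UserRepository>();
        return services;
    }
}
```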

## Implementing the Notification Service for Lacariz Furniture E-commerce Application
In this section, we will implement the user repository that the notification service relies on, starting from the repository interface and its implementation.
```C#
using Lacariz.Furniture.Domain.Entities;
namespace Lacariz.Furniture.Data.Repositories.Interfaces
{
public interface IUserRepository
{
Task<User> RegisterUser(User user);
Task<User> GetUserByEmail(string emailAddress);
Task<User> GetUserById(string userId);
Task<bool> EmailExists(string emailAddress);
Task ResetPassword(string emailAddress, string newPassword);
Task UpdateUserActivationStatus(string emailAddress, bool isActivated);
Task UpdateAdminActivationStatus(string emailAddress, bool isActivated);
Task<Admin> RegisterAdmin(Admin admin);
Task<Admin> GetAdminByEmail(string emailAddress);
Task<Admin> GetAdminByLoginId(string loginId);
}
}
```
## User Repository Implementation
```C#
using Lacariz.Furniture.Data.Repositories.Interfaces;
using Lacariz.Furniture.Domain.Entities;
using Microsoft.EntityFrameworkCore;
using MongoDB.Driver;
using System.Net.Mail;
namespace Lacariz.Furniture.Data.Repositories.Implementations
{
public class UserRepository : IUserRepository
{
// private readonly IMySqlDbContext mySqlDbContext;
private readonly IMongoDBLogContext dbContext;
public UserRepository(/*IMySqlDbContext mySqlDbContext*/ IMongoDBLogContext dbContext)
{
// this.mySqlDbContext = mySqlDbContext;
this.dbContext = dbContext;
}
public async Task<bool> EmailExists(string emailAddress)
{
var filter = Builders<User>.Filter.Eq(u => u.EmailAddress, emailAddress);
var user = await dbContext.Users.Find(filter).FirstOrDefaultAsync();
return user != null;
}
public async Task<Admin> GetAdminByEmail(string emailAddress)
{
try
{
return await dbContext.Admins.Find(u => u.EmailAddress == emailAddress).FirstOrDefaultAsync();
}
catch (Exception)
{
throw;
}
}
public async Task<Admin> GetAdminByLoginId(string loginId)
{
try
{
return await dbContext.Admins.Find(u => u.AdminLoginId == loginId).FirstOrDefaultAsync();
}
catch (Exception)
{
throw;
}
}
public async Task<User> GetUserByEmail(string emailAddress)
{
try
{
return await dbContext.Users.Find(u => u.EmailAddress == emailAddress).FirstOrDefaultAsync();
}
            catch (Exception)
            {
                throw; // rethrow without resetting the stack trace
            }
}
public async Task<User> GetUserById(string userId)
{
try
{
return await dbContext.Users.Find(u => u.Id == userId).FirstOrDefaultAsync();
}
            catch (Exception)
            {
                throw; // rethrow without resetting the stack trace
            }
}
public async Task<Admin> RegisterAdmin(Admin admin)
{
try
{
await dbContext.Admins.InsertOneAsync(admin);
return admin;
}
catch (Exception)
{
throw;
}
}
public async Task<User> RegisterUser(User user)
{
try
{
await dbContext.Users.InsertOneAsync(user);
return user;
// await mySqlDbContext.SaveChangesAsync();
//return user;
}
catch (Exception)
{
throw;
}
}
public async Task ResetPassword(string emailAddress, string newPassword)
{
var filter = Builders<User>.Filter.Eq(u => u.EmailAddress, emailAddress);
var update = Builders<User>.Update.Set(u => u.Password, newPassword);
await dbContext.Users.UpdateOneAsync(filter, update);
}
public async Task UpdateAdminActivationStatus(string emailAddress, bool isActivated)
{
var filter = Builders<Admin>.Filter.Eq(u => u.EmailAddress, emailAddress);
var update = Builders<Admin>.Update.Set(u => u.isActivated, isActivated);
await dbContext.Admins.UpdateOneAsync(filter, update);
}
public async Task UpdateUserActivationStatus(string emailAddress, bool isActivated)
{
var filter = Builders<User>.Filter.Eq(u => u.EmailAddress, emailAddress);
var update = Builders<User>.Update.Set(u => u.isActivated, isActivated);
await dbContext.Users.UpdateOneAsync(filter, update);
}
}
}
```
## Next is the NewResult class, which is returned by the service layer
```C#
namespace Lacariz.Furniture.Domain.Common.Generics
{
public class NewResult
{
public string ResponseCode { get; set; }
public string ResponseMsg { get; set; }
}
public class NewLoginResult
{
public string ResponseCode { get; set; }
public string ResponseMsg { get; set; }
public string Token { get; set; }
}
public class NewLoginResult<T> : NewLoginResult
{
public T ResponseDetails { get; set; }
        // Token is inherited from NewLoginResult; re-declaring it here would hide the base property (compiler warning CS0108)
public static NewLoginResult<T> Success(T instance, string token, string message = "successful")
{
return new NewLoginResult<T>
{
ResponseCode = "00",
ResponseDetails = instance,
ResponseMsg = message,
Token = token // Set the JWT token
};
}
public static NewLoginResult<T> Failed(T instance, string message = "BadRequest")
{
return new NewLoginResult<T>
{
ResponseCode = "99",
ResponseDetails = instance,
ResponseMsg = message,
};
}
        public static NewLoginResult<T> Error(T instance, string message = "An error occurred while processing your request")
{
return new NewLoginResult<T>
{
ResponseCode = "55",
ResponseDetails = instance,
ResponseMsg = message,
};
}
}
public class NewResult<T> : NewResult
{
public T ResponseDetails { get; set; }
public static NewResult<T> Success(T instance, string message = "successful")
{
return new NewResult<T>
{
ResponseCode = "00",
ResponseDetails = instance,
ResponseMsg = message,
};
}
public static NewResult<T> Failed(T instance, string message = "BadRequest")
{
return new NewResult<T>
{
ResponseCode = "99",
ResponseDetails = instance,
ResponseMsg = message,
};
}
public static NewResult<T> Unauthorized(T instance, string message = "Unauthorized")
{
return new NewResult<T>
{
ResponseCode = "41",
ResponseDetails = instance,
ResponseMsg = message,
};
}
public static NewResult<T> RestrictedAccess(T instance, string message = "Unauthorized access")
{
return new()
{
ResponseCode = "40",
ResponseDetails = instance,
ResponseMsg = message,
};
}
public static NewResult<T> InternalServerError(T instance, string message = "Internal Server Error")
{
return new()
{
ResponseCode = "55",
ResponseDetails = instance,
ResponseMsg = message,
};
}
public static NewResult<T> SessionExpired(T instance, string message = "Session Expired")
{
return new()
{
ResponseCode = "41",
ResponseDetails = instance,
ResponseMsg = message,
};
}
        public static NewResult<T> Error(T instance, string message = "An error occurred while processing your request")
{
return new NewResult<T>
{
ResponseCode = "55",
ResponseDetails = instance,
ResponseMsg = message,
};
}
public static NewResult<T> Duplicate(T instance, string message = "Duplicate request")
{
return new NewResult<T>
{
ResponseCode = "77",
ResponseDetails = instance,
ResponseMsg = message,
};
}
}
}
```
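To make the envelope pattern concrete, here is a minimal, hypothetical consumption sketch. The `Ping` method is illustrative only; it simply shows how a caller branches on `ResponseCode` instead of catching exceptions:

```csharp
using Lacariz.Furniture.Domain.Common.Generics;

public static class EnvelopeDemo
{
    // Hypothetical service method: wraps a plain value in the response envelope
    public static NewResult<string> Ping(bool healthy) =>
        healthy
            ? NewResult<string>.Success("pong")                      // sets ResponseCode "00"
            : NewResult<string>.InternalServerError(null, "down");   // sets ResponseCode "55"

    public static void Main()
    {
        var result = Ping(true);
        // Callers inspect the code and message rather than relying on exceptions
        System.Console.WriteLine($"{result.ResponseCode}: {result.ResponseMsg}");
    }
}
```

This is the same branching the controllers perform later with `switch` expressions over `ResponseCode`.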
## Now the user service layer, comprising both the interface and the implementation
```C#
using Lacariz.Furniture.Domain.Common.Generics;
using Lacariz.Furniture.Domain.DataTransferObjects;
using Lacariz.Furniture.Domain.Entities;
namespace Lacariz.Furniture.Service.Services.Interfaces
{
public interface IUserService
{
Task<NewResult<User>> RegisterUser(User user);
Task<NewResult<User>> ActivateAccount(string emailAddress, string activationCode);
Task<NewLoginResult<User>> UserLogin(LoginRequest loginRequest);
Task<NewLoginResult<Admin>> AdminLogin(AdminLoginRequest loginRequest);
Task<NewResult<string>> ResetPassword(string emailAddress, string verificationCode, string newPassword);
Task<NewResult<string>> InitiatePasswordReset(string emailAddress);
Task<NewResult<Admin>> RegisterAdmin(Admin admin);
Task<NewResult<Admin>> ActivateAdminAccount(string emailAddress, string activationCode);
Task<NewResult<string>> ResendVerificationCode(string emailAddress);
}
}
```
## User Service Implementation
```C#
using Lacariz.Furniture.Data.Repositories.Interfaces;
using Lacariz.Furniture.Domain.Common.Generics;
using Lacariz.Furniture.Domain.DataTransferObjects;
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Service.Services.Interfaces;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.IdentityModel.Tokens;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;
namespace Lacariz.Furniture.Service.Services.Implementations
{
public class UserService : IUserService
{
private readonly IUserRepository userRepository;
private readonly IEmailService emailService;
private const string CacheKeyPrefix = "VerificationCode_";
private readonly IMemoryCache cache;
        private readonly string jwtSecret = "hjejehukkehheukndhuywuiuwjbncduhbwiubdvuwyveyduwivuyegvryefrebuhjwbfjweuhbwllo"; // Do not hard-code secrets in production; load this from configuration or a secret store
private readonly double jwtExpirationMinutes = 60; // Token expiration time in minutes
private readonly ILogger logger;
public UserService(IUserRepository userRepository, IEmailService emailService, IMemoryCache cache, ILogger logger)
{
this.userRepository = userRepository;
this.emailService = emailService;
this.cache = cache;
this.logger = logger;
}
// private const string SessionKeyPrefix = "VerificationCode_";
public async Task<NewResult<User>> RegisterUser(User user)
{
// Hash the password
user.Password = HashPassword(user.Password);
NewResult<User> result = new NewResult<User>();
try
{
var userExists = await userRepository.GetUserByEmail(user.EmailAddress);
if (userExists != null)
{
return NewResult<User>.Duplicate(null, "Email address unavailable");
}
var response = await userRepository.RegisterUser(user);
                if (response == null)
                {
                    // Return immediately; without the return, execution falls through and reports success
                    return NewResult<User>.Failed(null, "Failed");
                }
var newResponse = new User()
{
Id = user.Id,
FirstName = user.FirstName,
LastName = user.LastName,
EmailAddress = user.EmailAddress,
Password = user.Password,
PhoneNumber = user.PhoneNumber,
Address = user.Address,
isActivated = false,
Role = user.Role
};
// Generate activation code
string verificationCode = GenerateVerificationCode();
// Store the verification code temporarily (example: in session or cache)
StoreVerificationCodeTemporarily(user.EmailAddress, verificationCode);
//// Construct activation link
//// string activationLink = $"https://yourdomain.com/activate?token={verificationCode}";
// Send activation email with the activation link
await emailService.SendActivationEmail(user.EmailAddress, verificationCode);
result = NewResult<User>.Success(newResponse, "User registration successful");
return result;
}
catch (Exception)
{
throw;
}
}
public async Task<NewResult<User>> ActivateAccount(string emailAddress, string activationCode)
{
NewResult<User> result = new NewResult<User>();
try
{
// Retrieve the stored verification code for the given email address
string storedVerificationCode = RetrieveVerificationCode(emailAddress);
// Check if the verification code is found in the cache
if (string.IsNullOrEmpty(storedVerificationCode))
{
// If the verification code is not found, it means it has expired or does not exist
return NewResult<User>.Failed(null, "Verification code not found or expired.");
}
// Check if the verification code matches the one provided by the user
if (storedVerificationCode != activationCode)
{
// If the verification code does not match, return a failure result
return NewResult<User>.Failed(null, "Invalid verification code.");
}
// Proceed with account activation
// Retrieve the user by email address
var user = await userRepository.GetUserByEmail(emailAddress);
if (user is not null)
{
// Activate the user account (example: set IsActive flag to true)
user.isActivated = true;
// Update the user in the database
await userRepository.UpdateUserActivationStatus(emailAddress, user.isActivated);
// Optionally, you may want to remove the verification code from storage
RemoveVerificationCodeFromCache(emailAddress);
// Return a success result
return NewResult<User>.Success(null, "Account activated successfully.");
}
else
{
// If the user is not found, return a failure result
return NewResult<User>.Failed(null, "User not found.");
}
}
catch (Exception ex)
{
// If an exception occurs during the activation process, return a failure result
return NewResult<User>.Error(null, $"Error activating account: {ex.Message}");
}
}
public async Task<NewLoginResult<User>> UserLogin(LoginRequest loginRequest)
{
NewLoginResult<User> result = new NewLoginResult<User>();
try
{
var user = await userRepository.GetUserByEmail(loginRequest.EmailAddress);
if (user == null || !VerifyPassword(loginRequest.Password, user.Password))
{
//throw new Exception("Invalid email or password.");
return NewLoginResult<User>.Failed(null, "Invalid email or password");
}
if (!user.isActivated)
{
return NewLoginResult<User>.Failed(null, "Account is not activated");
}
// Generate JWT token
var token = GenerateJwtToken(user);
result = NewLoginResult<User>.Success(user, token, "Login successful");
return result;
}
catch (Exception ex)
{
logger.Error(ex, ex.Message);
                return NewLoginResult<User>.Error(null, "An error occurred while trying to log in");
}
}
        private string GenerateVerificationCode()
        {
            // Generate a random 6-digit code using a cryptographically secure RNG;
            // System.Random is predictable and unsuitable for security codes.
            // Note: the upper bound of GetInt32 is exclusive.
            int verificationCode = System.Security.Cryptography.RandomNumberGenerator.GetInt32(100000, 1000000);
            return verificationCode.ToString();
        }
public void StoreVerificationCodeTemporarily(string emailAddress, string verificationCode)
{
// Generate a unique cache key for the verification code
string cacheKey = $"{CacheKeyPrefix}{emailAddress}";
// Store the verification code in the MemoryCache with a sliding expiration time
cache.Set(cacheKey, verificationCode, TimeSpan.FromMinutes(10));
}
public string RetrieveVerificationCode(string emailAddress)
{
// Generate the cache key for the given email address
string cacheKey = $"{CacheKeyPrefix}{emailAddress}";
// Retrieve the verification code from the MemoryCache
return cache.Get<string>(cacheKey);
}
private string HashPassword(string password)
{
// Hash the password using BCrypt
string hashedPassword = BCrypt.Net.BCrypt.HashPassword(password);
return hashedPassword;
}
private bool VerifyPassword(string enteredPassword, string hashedPassword)
{
// Verify the entered password against the hashed password using BCrypt
return BCrypt.Net.BCrypt.Verify(enteredPassword, hashedPassword);
}
private string GenerateJwtToken(User user)
{
var tokenHandler = new JwtSecurityTokenHandler();
var key = Encoding.ASCII.GetBytes(jwtSecret);
var tokenDescriptor = new SecurityTokenDescriptor
{
Subject = new ClaimsIdentity(new Claim[] {
new Claim(ClaimTypes.Name, user.Id),
// You can add more claims here as needed
}),
Expires = DateTime.UtcNow.AddMinutes(jwtExpirationMinutes),
SigningCredentials = new SigningCredentials(new SymmetricSecurityKey(key), SecurityAlgorithms.HmacSha256Signature)
};
var token = tokenHandler.CreateToken(tokenDescriptor);
return tokenHandler.WriteToken(token);
}
private string GenerateAdminJwtToken(Admin admin)
{
var tokenHandler = new JwtSecurityTokenHandler();
var key = Encoding.ASCII.GetBytes(jwtSecret);
var tokenDescriptor = new SecurityTokenDescriptor
{
Subject = new ClaimsIdentity(new Claim[] {
new Claim(ClaimTypes.Name, admin.AdminId),
// You can add more claims here as needed
}),
Expires = DateTime.UtcNow.AddMinutes(jwtExpirationMinutes),
SigningCredentials = new SigningCredentials(new SymmetricSecurityKey(key), SecurityAlgorithms.HmacSha256Signature)
};
var token = tokenHandler.CreateToken(tokenDescriptor);
return tokenHandler.WriteToken(token);
}
public async Task<NewResult<string>> ResetPassword(string emailAddress, string verificationCode, string newPassword)
{
NewResult<string> result = new NewResult<string>();
try
{
// Step 1: Check if the email exists
if (await userRepository.EmailExists(emailAddress))
{
                    // Step 2: Verify the code the user received via InitiatePasswordReset
                    // (generating and sending a fresh code here would invalidate the code being checked)
                    if (await VerifyVerificationCode(emailAddress, verificationCode))
                    {
                        // Step 3: Hash the new password before persisting it, matching how
                        // passwords are stored at registration (login verifies with BCrypt)
                        await userRepository.ResetPassword(emailAddress, HashPassword(newPassword));
                        return NewResult<string>.Success(null, "Password reset successfully");
}
else
{
return NewResult<string>.Failed(null, "Invalid verification code");
}
}
else
{
return NewResult<string>.Failed(null, "Email address does not exist");
}
}
catch (Exception)
{
throw;
}
}
private async Task<bool> VerifyVerificationCode(string emailAddress, string verificationCode)
{
// Retrieve the stored verification code from the cache
string storedVerificationCode = RetrieveVerificationCode(emailAddress);
// Check if the verification code matches
return storedVerificationCode == verificationCode;
}
private void RemoveVerificationCodeFromCache(string emailAddress)
{
// Generate the cache key for the given email address
string cacheKey = $"{CacheKeyPrefix}{emailAddress}";
// Remove the verification code from the cache
cache.Remove(cacheKey);
}
public async Task<NewResult<string>> InitiatePasswordReset(string emailAddress)
{
try
{
// Step 1: Check if the email exists
if (await userRepository.EmailExists(emailAddress))
{
                    // Step 2: Generate a verification code
                    string generatedVerificationCode = GenerateVerificationCode();
                    // Step 3: Store the code so ResetPassword can verify it later, then email it to the user
                    StoreVerificationCodeTemporarily(emailAddress, generatedVerificationCode);
                    await emailService.SendActivationEmail(emailAddress, generatedVerificationCode);
                    // Step 4: Return a success response; the code itself is delivered only by email,
                    // since echoing it in the API response would defeat the verification step
                    return NewResult<string>.Success(null, "Verification code sent to your email.");
}
else
{
// Email address does not exist, return failure response
return NewResult<string>.Failed(null, "Email address does not exist");
}
}
catch (Exception)
{
throw;
}
}
public async Task<NewResult<Admin>> RegisterAdmin(Admin admin)
{
try
{
// Hash the password
admin.Password = HashPassword(admin.Password);
NewResult<Admin> result = new NewResult<Admin>();
var AdminExists = await userRepository.GetAdminByEmail(admin.EmailAddress);
if (AdminExists != null)
{
return NewResult<Admin>.Duplicate(null, "Email address unavailable");
}
                // Download the profile picture as a byte array. Note: this value is not yet
                // assigned to the Admin entity below; wire it up once the entity exposes a byte[] property
                byte[] profilePictureData = await DownloadImageAsByteArray(admin.ProfilePictureUrl);
// Instantiate the Admin object with the profile picture byte array
Admin aadmin = new Admin
{
AdminId = admin.AdminId,
FirstName = admin.FirstName,
LastName = admin.LastName,
EmailAddress = admin.EmailAddress,
Password = admin.Password,
ProfilePictureUrl = admin.ProfilePictureUrl,
AdminLoginId = "A86478927",
isActivated = admin.isActivated,
Role = admin.Role
};
var response = await userRepository.RegisterAdmin(aadmin);
                if (response == null)
                {
                    // Return immediately; without the return, execution falls through and reports success
                    return NewResult<Admin>.Failed(null, "Failed");
                }
var newResponse = new Admin()
{
AdminId = admin.AdminId,
FirstName = admin.FirstName,
LastName = admin.LastName,
EmailAddress = admin.EmailAddress,
Password = admin.Password,
ProfilePictureUrl = admin.ProfilePictureUrl,
AdminLoginId = "A86478927",
//PhoneNumber = admin.,
// Address = admin.,
isActivated = false,
Role = admin.Role
};
// Generate activation code
string verificationCode = GenerateVerificationCode();
// Store the verification code temporarily (example: in session or cache)
StoreVerificationCodeTemporarily(admin.EmailAddress, verificationCode);
//// Construct activation link
//// string activationLink = $"https://yourdomain.com/activate?token={verificationCode}";
// Send activation email with the activation link
await emailService.SendActivationEmail(admin.EmailAddress, verificationCode);
result = NewResult<Admin>.Success(newResponse, "Admin registration successful");
return result;
}
catch (Exception)
{
throw;
}
}
public async Task<NewLoginResult<Admin>> AdminLogin(AdminLoginRequest loginRequest)
{
NewLoginResult<Admin> result = new NewLoginResult<Admin>();
try
{
var admin = await userRepository.GetAdminByLoginId(loginRequest.AdminLoginId);
if (admin == null || !VerifyPassword(loginRequest.Password, admin.Password))
{
//throw new Exception("Invalid email or password.");
                    return NewLoginResult<Admin>.Failed(null, "Invalid login credentials");
}
if (!admin.isActivated)
{
return NewLoginResult<Admin>.Failed(null, "Account is not activated");
}
// Generate JWT token
var token = GenerateAdminJwtToken(admin);
result = NewLoginResult<Admin>.Success(admin, token, "Login successful");
return result;
}
catch (Exception ex)
{
logger.Error(ex, ex.Message);
                return NewLoginResult<Admin>.Error(null, "An error occurred while trying to log in");
}
}
        public async Task<byte[]> DownloadImageAsByteArray(string imageUrl)
        {
            // Note: creating a new HttpClient per call risks socket exhaustion under load;
            // prefer a shared instance or IHttpClientFactory in production
            using (var httpClient = new HttpClient())
            {
                var response = await httpClient.GetAsync(imageUrl);
if (response.IsSuccessStatusCode)
{
using (var stream = await response.Content.ReadAsStreamAsync())
{
using (var memoryStream = new MemoryStream())
{
await stream.CopyToAsync(memoryStream);
return memoryStream.ToArray();
}
}
}
else
{
// Handle error response
throw new Exception($"Failed to download image from URL: {imageUrl}.");
}
}
}
public async Task<NewResult<Admin>> ActivateAdminAccount(string emailAddress, string activationCode)
{
// NewResult<User> result = new NewResult<User>();
try
{
// Retrieve the stored verification code for the given email address
string storedVerificationCode = RetrieveVerificationCode(emailAddress);
// Check if the verification code is found in the cache
if (string.IsNullOrEmpty(storedVerificationCode))
{
// If the verification code is not found, it means it has expired or does not exist
return NewResult<Admin>.Failed(null, "Verification code not found or expired.");
}
// Check if the verification code matches the one provided by the user
if (storedVerificationCode != activationCode)
{
// If the verification code does not match, return a failure result
return NewResult<Admin>.Failed(null, "Invalid verification code.");
}
// Proceed with account activation
// Retrieve the admin by email address
var admin = await userRepository.GetAdminByEmail(emailAddress);
if (admin is not null)
{
// Activate the user account (example: set IsActive flag to true)
admin.isActivated = true;
// Update the user in the database
await userRepository.UpdateAdminActivationStatus(emailAddress, admin.isActivated);
// Optionally, you may want to remove the verification code from storage
RemoveVerificationCodeFromCache(emailAddress);
// Return a success result
return NewResult<Admin>.Success(null, "Account activated successfully.");
}
else
{
// If the user is not found, return a failure result
return NewResult<Admin>.Failed(null, "Admin not found.");
}
}
catch (Exception ex)
{
// If an exception occurs during the activation process, return a failure result
return NewResult<Admin>.Failed(null, $"Error activating account: {ex.Message}");
}
}
public async Task<NewResult<string>> ResendVerificationCode(string emailAddress)
{
try
{
// Generate a new verification code
string newVerificationCode = GenerateVerificationCode();
// Store the new verification code temporarily
StoreVerificationCodeTemporarily(emailAddress, newVerificationCode);
// Send the new verification code to the user's email address
await emailService.SendActivationEmail(emailAddress, newVerificationCode);
return NewResult<string>.Success(null, "Verification code resent successfully.");
}
catch (Exception ex)
{
// Handle any errors that occur during the resend process
return NewResult<string>.Failed(null, $"Failed to resend verification code: {ex.Message}");
}
}
}
}
```
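The service issues HMAC-SHA256 tokens, but the validation side is not shown above. Below is a hedged sketch of what the `Program.cs` wiring might look like, assuming the `Microsoft.AspNetCore.Authentication.JwtBearer` package is installed and the secret is moved into configuration (the `Jwt:Secret` key name, and the `role` claim check for the `"AdminOnly"` policy used later by `CustomerSupportController`, are assumptions, not part of the original code):

```csharp
using System.Text;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;

var builder = WebApplication.CreateBuilder(args);

// Assumption: the secret currently hard-coded in UserService is moved to configuration
var key = Encoding.ASCII.GetBytes(builder.Configuration["Jwt:Secret"]!);

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuerSigningKey = true,
            IssuerSigningKey = new SymmetricSecurityKey(key),
            // The tokens generated above set no issuer/audience claims
            ValidateIssuer = false,
            ValidateAudience = false
        };
    });

// The "AdminOnly" policy referenced by [Authorize(Policy = "AdminOnly")] must be defined;
// the claim name/value below is an assumption about how roles are carried in the token
builder.Services.AddAuthorization(options =>
    options.AddPolicy("AdminOnly", p => p.RequireClaim("role", "Admin")));

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
app.MapControllers();
app.Run();
```

Note that `UseAuthentication()` must run before `UseAuthorization()`, or the policies will see an anonymous user.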
## Authentication Controller
The authentication controller extends the base controller.
**Base Controller**
```C#
using Lacariz.Furniture.Domain.Common;
using Microsoft.AspNetCore.Mvc; // required for ControllerBase, [Route], [ApiController]
namespace Lacariz.Furniture.API.Controllers.v1;
[Route("lacariz-furniture-service")]
//[Route("api/[controller]")]
[ApiController]
[ApiVersion("1.0")]
public class BaseController : ControllerBase
{
public BaseController()
{
}
internal Error PopulateError(int code, string message, string type)
{
return new Error()
{
Code = code,
Message = message,
Type = type
};
}
}
```
**Authentication Controller**
```C#
using Lacariz.Furniture.Domain.DataTransferObjects;
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Service.Services.Interfaces;
using Microsoft.AspNetCore.Mvc; // required for [HttpPost], IActionResult
namespace Lacariz.Furniture.API.Controllers.v1
{
public class AuthenticationController : BaseController
{
private readonly IUserService userService;
public AuthenticationController(IUserService userService)
{
this.userService = userService;
}
[HttpPost("api/v{version:apiVersion}/[controller]/user/register")]
[ApiVersion("1.0")]
public async Task<IActionResult> RegisterUser([FromBody] User request)
{
var response = await userService.RegisterUser(request);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpPost("api/v{version:apiVersion}/[controller]/admin/register")]
[ApiVersion("1.0")]
public async Task<IActionResult> RegisterAdmin([FromBody] Admin request)
{
var response = await userService.RegisterAdmin(request);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpPost("api/v{version:apiVersion}/[controller]/user/login")]
[ApiVersion("1.0")]
public async Task<IActionResult> UserLogin([FromBody] LoginRequest request)
{
var response = await userService.UserLogin(request);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpPost("api/v{version:apiVersion}/[controller]/admin/login")]
[ApiVersion("1.0")]
public async Task<IActionResult> AdminLogin([FromBody] AdminLoginRequest request)
{
var response = await userService.AdminLogin(request);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpPost("api/v{version:apiVersion}/[controller]/user/reset-password")]
[ApiVersion("1.0")]
public async Task<IActionResult> ResetPassword([FromBody] ResetPasswordRequest request)
{
var response = await userService.ResetPassword(request.EmailAddress, request.VerificationCode, request.NewPassword);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpPost("api/v{version:apiVersion}/[controller]/user/activate-user-account")]
[ApiVersion("1.0")]
public async Task<IActionResult> ActivateUserAccount([FromBody] ActivateAccountRequest request)
{
var response = await userService.ActivateAccount(request.EmailAddress, request.ActivationCode);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpPost("api/v{version:apiVersion}/[controller]/admin/activate-admin-account")]
[ApiVersion("1.0")]
public async Task<IActionResult> ActivateAdminAccount([FromBody] ActivateAccountRequest request)
{
var response = await userService.ActivateAdminAccount(request.EmailAddress, request.ActivationCode);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpPost("api/v{version:apiVersion}/[controller]/resend-verification-code")]
[ApiVersion("1.0")]
public async Task<IActionResult> ResendVerificationCode(string EmailAddress)
{
var response = await userService.ResendVerificationCode(EmailAddress);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
}
}
```
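For completeness, the controllers above only resolve if each layer is registered with the DI container. A minimal sketch of that wiring, assuming the interface and implementation names used throughout this post (the concrete `MongoDBLogContext` and `EmailService` class names are assumptions):

```csharp
// In Program.cs (sketch; adjust lifetimes to suit your Mongo context implementation)
builder.Services.AddControllers();
builder.Services.AddMemoryCache();                                   // used by UserService for verification codes
builder.Services.AddScoped<IMongoDBLogContext, MongoDBLogContext>(); // assumed concrete context name
builder.Services.AddScoped<IUserRepository, UserRepository>();
builder.Services.AddScoped<ICustomerSupportRepository, CustomerSupportRepository>();
builder.Services.AddScoped<IUserService, UserService>();
builder.Services.AddScoped<ICustomerSupportService, CustomerSupportService>();
builder.Services.AddScoped<IEmailService, EmailService>();           // assumed concrete email service name
```

Without the `AddMemoryCache()` call, constructing `UserService` fails at runtime because `IMemoryCache` cannot be resolved.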

## Implementing the Customer Support Service for Lacariz Furniture E-commerce Application
Next, we will implement the customer support service, which involves setting up the necessary repository, service, and controller layers.
## CustomerSupport Repository
```C#
using Lacariz.Furniture.Domain.Entities;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Data.Repositories.Interfaces
{
public interface ICustomerSupportRepository
{
Task AddInquiryAsync(CustomerInquiry inquiry);
Task<IEnumerable<CustomerInquiry>> GetInquiriesByUserIdAsync(string userId);
Task<CustomerInquiry> GetInquiryByIdAsync(string inquiryId);
Task<bool> UpdateInquiryAsync(CustomerInquiry inquiry);
}
}
```
```C#
using Lacariz.Furniture.Data.Repositories.Interfaces;
using Lacariz.Furniture.Domain.Entities;
using MongoDB.Driver;
namespace Lacariz.Furniture.Data.Repositories.Implementations
{
public class CustomerSupportRepository : ICustomerSupportRepository
{
private readonly IMongoDBLogContext dbContext;
        public CustomerSupportRepository(IMongoDBLogContext dbContext)
        {
            // Qualify with "this."; plain "dbContext = dbContext" is a no-op self-assignment
            // that leaves the field null
            this.dbContext = dbContext;
        }
public async Task AddInquiryAsync(CustomerInquiry inquiry)
{
await dbContext.CustomerInquiries.InsertOneAsync(inquiry);
}
public async Task<IEnumerable<CustomerInquiry>> GetInquiriesByUserIdAsync(string userId)
{
var filter = Builders<CustomerInquiry>.Filter.Eq(i => i.UserId, userId);
return await dbContext.CustomerInquiries.Find(filter).ToListAsync();
}
public async Task<CustomerInquiry> GetInquiryByIdAsync(string inquiryId)
{
var filter = Builders<CustomerInquiry>.Filter.Eq(i => i.Id, inquiryId);
return await dbContext.CustomerInquiries.Find(filter).FirstOrDefaultAsync();
}
public async Task<bool> UpdateInquiryAsync(CustomerInquiry inquiry)
{
var filter = Builders<CustomerInquiry>.Filter.Eq(i => i.Id, inquiry.Id);
var result = await dbContext.CustomerInquiries.ReplaceOneAsync(filter, inquiry);
return result.ModifiedCount > 0;
}
}
}
```
## CustomerSupport Service
```C#
using Lacariz.Furniture.Domain.Common.Generics;
using Lacariz.Furniture.Domain.Entities;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Service.Services.Interfaces
{
public interface ICustomerSupportService
{
Task<NewResult<string>> SubmitInquiryAsync(string userId, string subject, string message);
Task<NewResult<IEnumerable<CustomerInquiry>>> GetUserInquiriesAsync(string userId);
Task<NewResult<CustomerInquiry>> GetInquiryByIdAsync(string inquiryId);
Task<NewResult<string>> ResolveInquiryAsync(string inquiryId);
}
}
```
```C#
using Lacariz.Furniture.Data.Repositories.Interfaces;
using Lacariz.Furniture.Domain.Common.Generics;
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Service.Services.Interfaces;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Service.Services.Implementations
{
public class CustomerSupportService : ICustomerSupportService
{
private readonly ICustomerSupportRepository customerSupportRepository;
public CustomerSupportService(ICustomerSupportRepository customerSupportRepository)
{
this.customerSupportRepository = customerSupportRepository;
}
public async Task<NewResult<CustomerInquiry>> GetInquiryByIdAsync(string inquiryId)
{
try
{
if (string.IsNullOrEmpty(inquiryId))
throw new ArgumentNullException(nameof(inquiryId), "Inquiry ID cannot be null or empty.");
var inquiry = await customerSupportRepository.GetInquiryByIdAsync(inquiryId);
return NewResult<CustomerInquiry>.Success(inquiry, "Inquiry retrieved successfully.");
}
catch (Exception ex)
{
return NewResult<CustomerInquiry>.Failed(null, $"Error occurred: {ex.Message}");
}
}
public async Task<NewResult<IEnumerable<CustomerInquiry>>> GetUserInquiriesAsync(string userId)
{
try
{
if (string.IsNullOrEmpty(userId))
throw new ArgumentNullException(nameof(userId), "User ID cannot be null or empty.");
var inquiries = await customerSupportRepository.GetInquiriesByUserIdAsync(userId);
return NewResult<IEnumerable<CustomerInquiry>>.Success(inquiries, "Inquiries retrieved successfully.");
}
catch (Exception ex)
{
return NewResult<IEnumerable<CustomerInquiry>>.Failed(null, $"Error occurred: {ex.Message}");
}
}
public async Task<NewResult<string>> ResolveInquiryAsync(string inquiryId)
{
try
{
if (string.IsNullOrEmpty(inquiryId))
throw new ArgumentNullException(nameof(inquiryId), "Inquiry ID cannot be null or empty.");
var inquiry = await customerSupportRepository.GetInquiryByIdAsync(inquiryId);
if (inquiry == null)
return NewResult<string>.Failed(null, "Inquiry not found.");
inquiry.IsResolved = true;
var updated = await customerSupportRepository.UpdateInquiryAsync(inquiry);
return updated ? NewResult<string>.Success(inquiryId, "Inquiry resolved successfully.")
: NewResult<string>.Failed(inquiryId, "Failed to resolve inquiry.");
}
catch (Exception ex)
{
return NewResult<string>.Failed(null, $"Error occurred: {ex.Message}");
}
}
public async Task<NewResult<string>> SubmitInquiryAsync(string userId, string subject, string message)
{
try
{
                // ArgumentException rather than ArgumentNullException: the single-string
                // ArgumentNullException constructor treats the string as a parameter name
                if (string.IsNullOrEmpty(userId) || string.IsNullOrEmpty(subject) || string.IsNullOrEmpty(message))
                    throw new ArgumentException("Invalid inquiry data");
var inquiry = new CustomerInquiry
{
Id = Guid.NewGuid().ToString(),
UserId = userId,
Subject = subject,
Message = message,
CreatedAt = DateTime.UtcNow,
IsResolved = false
};
await customerSupportRepository.AddInquiryAsync(inquiry);
return NewResult<string>.Success(inquiry.Id, "Inquiry submitted successfully.");
}
catch (Exception ex)
{
return NewResult<string>.Failed(null, $"Error occurred: {ex.Message}");
}
}
}
}
```
## Customer Support Controller
```C#
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Service.Services.Interfaces;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc; // required for [HttpGet]/[HttpPost], IActionResult
namespace Lacariz.Furniture.API.Controllers.v1
{
public class CustomerSupportController : BaseController
{
private readonly ICustomerSupportService customerSupportService;
public CustomerSupportController(ICustomerSupportService customerSupportService)
{
this.customerSupportService = customerSupportService;
}
[HttpPost("api/v{version:apiVersion}/[controller]/support/submit-inquiry")]
[ApiVersion("1.0")]
public async Task<IActionResult> SubmitInquiry([FromBody] CustomerInquiry request)
{
var response = await customerSupportService.SubmitInquiryAsync(request.UserId, request.Subject, request.Message);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpGet("api/v{version:apiVersion}/[controller]/support/admin/get-user-inquiries")]
[Authorize(Policy = "AdminOnly")]
[ApiVersion("1.0")]
public async Task<IActionResult> GetUserInquiries(string userId)
{
var response = await customerSupportService.GetUserInquiriesAsync(userId);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpGet("api/v{version:apiVersion}/[controller]/support/admin/get-inquiry-by-id")]
[Authorize(Policy = "AdminOnly")]
[ApiVersion("1.0")]
public async Task<IActionResult> GetInquiryById(string inquiryId)
{
var response = await customerSupportService.GetInquiryByIdAsync(inquiryId);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpGet("api/v{version:apiVersion}/[controller]/support/admin/resolve-inquiry")]
[Authorize(Policy = "AdminOnly")]
[ApiVersion("1.0")]
public async Task<IActionResult> ResolveInquiry(string inquiryId)
{
var response = await customerSupportService.ResolveInquiryAsync(inquiryId);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
}
}
```
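The controllers in this series all branch on `response.ResponseCode` ("00", "99", "77"). The `NewResult<T>` wrapper from `Lacariz.Furniture.Domain.Common.Generics` is not shown in this section, so here is a rough sketch of the shape the controllers assume — the member names and the error code used by `Error` are inferred for illustration, not the actual library type:

```C#
// Hypothetical sketch of NewResult<T>; the real type may differ.
public class NewResult<T>
{
    public string ResponseCode { get; set; } // "00" success, "99" failure, "77" duplicate
    public string Message { get; set; }
    public T Data { get; set; }

    public static NewResult<T> Success(T data, string message) =>
        new NewResult<T> { ResponseCode = "00", Data = data, Message = message };

    public static NewResult<T> Failed(T data, string message) =>
        new NewResult<T> { ResponseCode = "99", Data = data, Message = message };

    // Assumed: any code other than "00"/"99"/"77" falls through to the 500 branch.
    public static NewResult<T> Error(T data, string message) =>
        new NewResult<T> { ResponseCode = "96", Data = data, Message = message };
}
```

With this shape, the `switch` expressions in every controller map cleanly onto 200, 400, 417, and 500 responses.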

## Implementing the Inventory Service for Lacariz Furniture E-commerce Application
Next, we will implement the inventory service, which involves setting up the necessary repository, service, and controller layers.
## Inventory Repository
```C#
using Lacariz.Furniture.Domain.Entities;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Data.Repositories.Interfaces
{
public interface IInventoryRepository
{
Task<bool> UpdateStockLevelAsync(string furnitureItemId, int newStockLevel);
Task<IEnumerable<FurnitureItem>> GetCurrentStockLevelsAsync();
}
}
```
```C#
using Lacariz.Furniture.Data.Repositories.Interfaces;
using Lacariz.Furniture.Domain.Entities;
using Microsoft.EntityFrameworkCore;
using MongoDB.Driver;
namespace Lacariz.Furniture.Data.Repositories.Implementations
{
public class InventoryRepository : IInventoryRepository
{
private readonly IMongoDBLogContext DbContext;
public InventoryRepository(IMongoDBLogContext dbContext)
{
DbContext = dbContext;
}
public async Task<bool> UpdateStockLevelAsync(string furnitureItemId, int newStockLevel)
{
var filter = Builders<FurnitureItem>.Filter.Eq(f => f.Id, furnitureItemId);
var update = Builders<FurnitureItem>.Update.Set(f => f.StockQuantity, newStockLevel);
var result = await DbContext.FurnitureItems.UpdateOneAsync(filter, update);
return result.ModifiedCount > 0;
}
public async Task<IEnumerable<FurnitureItem>> GetCurrentStockLevelsAsync()
{
return await DbContext.FurnitureItems.Find(FilterDefinition<FurnitureItem>.Empty).ToListAsync();
}
}
}
```
## Inventory Service
```C#
using Lacariz.Furniture.Domain.Common.Generics;
using Lacariz.Furniture.Domain.Entities;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Service.Services.Interfaces
{
public interface IInventoryService
{
Task<NewResult<bool>> UpdateStockLevelAsync(string furnitureItemId, int newStockLevel);
Task<NewResult<IEnumerable<FurnitureItem>>> GetCurrentStockLevelsAsync();
}
}
```
```C#
using Lacariz.Furniture.Data.Repositories.Implementations;
using Lacariz.Furniture.Data.Repositories.Interfaces;
using Lacariz.Furniture.Domain.Common.Generics;
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Service.Services.Interfaces;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Service.Services.Implementations
{
public class InventoryService : IInventoryService
{
private readonly IInventoryRepository inventoryRepository;
public InventoryService(IInventoryRepository inventoryRepository)
{
this.inventoryRepository = inventoryRepository;
}
public async Task<NewResult<IEnumerable<FurnitureItem>>> GetCurrentStockLevelsAsync()
{
try
{
var stockLevels = await inventoryRepository.GetCurrentStockLevelsAsync();
if (stockLevels != null && stockLevels.Any())
{
return NewResult<IEnumerable<FurnitureItem>>.Success(stockLevels, "Current stock levels retrieved successfully.");
}
else
{
return NewResult<IEnumerable<FurnitureItem>>.Failed(null, "No stock levels found.");
}
}
catch (Exception ex)
{
return NewResult<IEnumerable<FurnitureItem>>.Failed(null, $"Error occurred: {ex.Message}");
}
}
public async Task<NewResult<bool>> UpdateStockLevelAsync(string furnitureItemId, int newStockLevel)
{
try
{
if (string.IsNullOrEmpty(furnitureItemId))
throw new ArgumentNullException(nameof(furnitureItemId), "Furniture item ID cannot be null or empty.");
var result = await inventoryRepository.UpdateStockLevelAsync(furnitureItemId, newStockLevel);
if (result)
{
return NewResult<bool>.Success(true, "Stock level updated successfully.");
}
else
{
return NewResult<bool>.Failed(false, "Failed to update stock level.");
}
}
catch (Exception ex)
{
return NewResult<bool>.Failed(false, $"Error occurred: {ex.Message}");
}
}
}
}
```
## Inventory Controller
```C#
using Lacariz.Furniture.Domain.DataTransferObjects;
using Lacariz.Furniture.Service.Services.Implementations;
using Lacariz.Furniture.Service.Services.Interfaces;
using Microsoft.AspNetCore.Authorization;
namespace Lacariz.Furniture.API.Controllers.v1
{
public class InventoryController : BaseController
{
private readonly IInventoryService inventoryService;
public InventoryController(IInventoryService inventoryService)
{
this.inventoryService = inventoryService;
}
[HttpPost("api/v{version:apiVersion}/[controller]/update-stock-level")]
[Authorize(Policy = "AdminOnly")]
[ApiVersion("1.0")]
public async Task<IActionResult> UpdateStockLevel(StockLevelRequest request)
{
var response = await inventoryService.UpdateStockLevelAsync(request.FurnitureItemId, request.NewStockLevel);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpGet("api/v{version:apiVersion}/[controller]/get-stock-level")]
[Authorize(Policy = "AdminOnly")]
[ApiVersion("1.0")]
public async Task<IActionResult> GetStockLevels()
{
var response = await inventoryService.GetCurrentStockLevelsAsync();
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
}
}
```

## Implementing the Order Service for Lacariz Furniture E-commerce Application
Next, we will implement the order service, which involves setting up the necessary repository, service, and controller layers.
```C#
using Lacariz.Furniture.Domain.Entities;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Data.Repositories.Interfaces
{
public interface IOrderRepository
{
Task<Order> CreateOrderAsync(Order order);
Task<Order> GetOrderByIdAsync(string orderId);
Task<IEnumerable<Order>> GetOrdersByUserIdAsync(string userId);
}
}
```
```C#
using Lacariz.Furniture.Data.Repositories.Interfaces;
using Lacariz.Furniture.Domain.Entities;
using Microsoft.EntityFrameworkCore;
using MongoDB.Driver;
namespace Lacariz.Furniture.Data.Repositories.Implementations
{
public class OrderRepository : IOrderRepository
{
private readonly IMongoDBLogContext DbContext;
public OrderRepository(IMongoDBLogContext dblogContext)
{
DbContext = dblogContext;
}
public async Task<Order> CreateOrderAsync(Order order)
{
await DbContext.Orders.InsertOneAsync(order);
return order;
}
public async Task<Order> GetOrderByIdAsync(string orderId)
{
return await DbContext.Orders.Find(o => o.Id == orderId).FirstOrDefaultAsync();
}
public async Task<IEnumerable<Order>> GetOrdersByUserIdAsync(string userId)
{
return await DbContext.Orders.Find(o => o.UserId == userId).ToListAsync();
}
}
}
```
## Order Service
```C#
using Lacariz.Furniture.Domain.Common.Generics;
using Lacariz.Furniture.Domain.Entities;
namespace Lacariz.Furniture.Service.Services.Interfaces
{
public interface IOrderService
{
Task<NewResult<Order>> CreateOrderAsync(string userId, List<OrderItem> items);
Task<NewResult<Order>> GetOrderByIdAsync(string orderId);
Task<NewResult<IEnumerable<Order>>> GetOrdersByUserIdAsync(string userId);
}
}
```
```C#
using Lacariz.Furniture.Data.Repositories.Interfaces;
using Lacariz.Furniture.Domain.Common.Generics;
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Domain.Enum;
using Lacariz.Furniture.Service.Services.Interfaces;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Service.Services.Implementations
{
public class OrderService : IOrderService
{
private readonly IOrderRepository orderRepository;
public OrderService(IOrderRepository orderRepository)
{
this.orderRepository = orderRepository;
}
public async Task<NewResult<Order>> CreateOrderAsync(string userId, List<OrderItem> items)
{
try
{
if (string.IsNullOrEmpty(userId))
throw new ArgumentNullException(nameof(userId));
if (items == null || items.Count == 0)
throw new ArgumentNullException(nameof(items));
var order = new Order
{
UserId = userId,
Items = items,
OrderDate = DateTime.UtcNow,
Status = OrderStatus.Pending,
TotalAmount = items.Sum(i => i.Quantity * i.Price)
};
var createdOrder = await orderRepository.CreateOrderAsync(order);
if (createdOrder != null)
{
return NewResult<Order>.Success(createdOrder, "Order created successfully.");
}
return NewResult<Order>.Failed(null, "Unable to create order");
}
catch (Exception ex)
{
return NewResult<Order>.Error(null, $"Error occurred: {ex.Message}");
}
}
public async Task<NewResult<Order>> GetOrderByIdAsync(string orderId)
{
try
{
var order = await orderRepository.GetOrderByIdAsync(orderId);
if (order == null)
{
return NewResult<Order>.Failed(null, "Order not found.");
}
return NewResult<Order>.Success(order, "Order retrieved successfully.");
}
catch (Exception ex)
{
return NewResult<Order>.Error(null, $"Error occurred: {ex.Message}");
}
}
public async Task<NewResult<IEnumerable<Order>>> GetOrdersByUserIdAsync(string userId)
{
try
{
var orders = await orderRepository.GetOrdersByUserIdAsync(userId);
if (orders == null || !orders.Any())
{
return NewResult<IEnumerable<Order>>.Failed(null, "No orders found.");
}
return NewResult<IEnumerable<Order>>.Success(orders, "Orders retrieved successfully.");
}
catch (Exception ex)
{
return NewResult<IEnumerable<Order>>.Error(null, $"Error occurred: {ex.Message}");
}
}
}
}
```
## Order Controller
```C#
using Lacariz.Furniture.Domain.DataTransferObjects;
using Lacariz.Furniture.Service.Services.Implementations;
using Lacariz.Furniture.Service.Services.Interfaces;
namespace Lacariz.Furniture.API.Controllers.v1
{
public class OrderController : BaseController
{
private readonly IOrderService orderService;
public OrderController(IOrderService orderService)
{
this.orderService = orderService;
}
[HttpPost("api/v{version:apiVersion}/[controller]/create-order")]
[ApiVersion("1.0")]
public async Task<IActionResult> CreateOrder(CreateOrderRequest request)
{
var response = await orderService.CreateOrderAsync(request.UserId, request.Items);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpGet("api/v{version:apiVersion}/[controller]/get-order-by-id")]
[ApiVersion("1.0")]
public async Task<IActionResult> GetOrderById(string orderId)
{
var response = await orderService.GetOrderByIdAsync(orderId);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpGet("api/v{version:apiVersion}/[controller]/get-order-by-user-id")]
[ApiVersion("1.0")]
public async Task<IActionResult> GetOrdersByUserId(string userId)
{
var response = await orderService.GetOrdersByUserIdAsync(userId);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
}
}
```

## Implementing the Pre-Order Service for Lacariz Furniture E-commerce Application
Next, we will implement the pre-order service, which involves setting up the necessary repository, service, and controller layers.
```C#
using Lacariz.Furniture.Domain.Entities;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Data.Repositories.Interfaces
{
public interface IPreOrderRepository
{
Task<PreOrder> CreatePreOrderAsync(PreOrder preOrder);
Task<PreOrder> GetPreOrderByIdAsync(string preorderId);
Task<IEnumerable<PreOrder>> GetPreOrdersByUserIdAsync(string userId);
}
}
```
```C#
using Lacariz.Furniture.Data.Repositories.Interfaces;
using Lacariz.Furniture.Domain.Entities;
using Microsoft.EntityFrameworkCore;
using MongoDB.Driver;
namespace Lacariz.Furniture.Data.Repositories.Implementations
{
public class PreOrderRepository : IPreOrderRepository
{
private readonly IMongoDBLogContext DbContext;
public PreOrderRepository(IMongoDBLogContext dbContext)
{
DbContext = dbContext;
}
public async Task<PreOrder> CreatePreOrderAsync(PreOrder preOrder)
{
await DbContext.PreOrders.InsertOneAsync(preOrder);
return preOrder;
}
public async Task<PreOrder> GetPreOrderByIdAsync(string preorderId)
{
return await DbContext.PreOrders.Find(po => po.Id == preorderId).FirstOrDefaultAsync();
}
public async Task<IEnumerable<PreOrder>> GetPreOrdersByUserIdAsync(string userId)
{
return await DbContext.PreOrders.Find(po => po.UserId == userId).ToListAsync();
}
}
}
```
## Pre-Order Service
```C#
using Lacariz.Furniture.Domain.Common.Generics;
using Lacariz.Furniture.Domain.Entities;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Service.Services.Interfaces
{
public interface IPreOrderService
{
Task<NewResult<PreOrder>> CreatePreOrderAsync(string userId, string furnitureItemId, int quantity);
Task<NewResult<PreOrder>> GetPreOrderByIdAsync(string preorderId);
Task<NewResult<IEnumerable<PreOrder>>> GetPreOrdersByUserIdAsync(string userId);
}
}
```
```C#
using Lacariz.Furniture.Data.Repositories.Interfaces;
using Lacariz.Furniture.Domain.Common.Generics;
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Domain.Enum;
using Lacariz.Furniture.Service.Services.Interfaces;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Service.Services.Implementations
{
public class PreOrderService : IPreOrderService
{
private readonly IPreOrderRepository preOrderRepository;
public PreOrderService(IPreOrderRepository preOrderRepository)
{
this.preOrderRepository = preOrderRepository;
}
public async Task<NewResult<PreOrder>> CreatePreOrderAsync(string userId, string furnitureItemId, int quantity)
{
try
{
if (string.IsNullOrEmpty(userId))
throw new ArgumentNullException(nameof(userId));
if (string.IsNullOrEmpty(furnitureItemId))
throw new ArgumentNullException(nameof(furnitureItemId));
var preOrder = new PreOrder
{
UserId = userId,
FurnitureItemId = furnitureItemId,
PreOrderDate = DateTime.UtcNow,
Status = PreOrderStatus.Pending,
Quantity = quantity
};
var createdPreOrder = await preOrderRepository.CreatePreOrderAsync(preOrder);
return NewResult<PreOrder>.Success(createdPreOrder, "Pre-order created successfully.");
}
catch (Exception ex)
{
return NewResult<PreOrder>.Failed(null, $"Error occurred: {ex.Message}");
}
}
public async Task<NewResult<PreOrder>> GetPreOrderByIdAsync(string preorderId)
{
try
{
var preOrder = await preOrderRepository.GetPreOrderByIdAsync(preorderId);
if (preOrder == null)
{
return NewResult<PreOrder>.Failed(null, "Pre-order not found.");
}
return NewResult<PreOrder>.Success(preOrder, "Pre-order retrieved successfully.");
}
catch (Exception ex)
{
return NewResult<PreOrder>.Failed(null, $"Error occurred: {ex.Message}");
}
}
public async Task<NewResult<IEnumerable<PreOrder>>> GetPreOrdersByUserIdAsync(string userId)
{
try
{
var preOrders = await preOrderRepository.GetPreOrdersByUserIdAsync(userId);
if (preOrders == null || !preOrders.Any())
{
return NewResult<IEnumerable<PreOrder>>.Failed(null, "No pre-orders found.");
}
return NewResult<IEnumerable<PreOrder>>.Success(preOrders, "Pre-orders retrieved successfully.");
}
catch (Exception ex)
{
return NewResult<IEnumerable<PreOrder>>.Failed(null, $"Error occurred: {ex.Message}");
}
}
}
}
```
## Pre-Order Controller
```C#
using Lacariz.Furniture.Domain.DataTransferObjects;
using Lacariz.Furniture.Service.Services.Implementations;
using Lacariz.Furniture.Service.Services.Interfaces;
namespace Lacariz.Furniture.API.Controllers.v1
{
public class PreOrderController : BaseController
{
private readonly IPreOrderService preOrderService;
public PreOrderController(IPreOrderService preOrderService)
{
this.preOrderService = preOrderService;
}
[HttpPost("api/v{version:apiVersion}/[controller]/create-pre-order")]
[ApiVersion("1.0")]
public async Task<IActionResult> CreatePreOrder(CreatePreOrderRequest request)
{
var response = await preOrderService.CreatePreOrderAsync(request.UserId, request.FurnitureItemId, request.Quantity);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpGet("api/v{version:apiVersion}/[controller]/get-pre-order-by-id")]
[ApiVersion("1.0")]
public async Task<IActionResult> GetPreOrderById(string preorderId)
{
var response = await preOrderService.GetPreOrderByIdAsync(preorderId);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpGet("api/v{version:apiVersion}/[controller]/get-pre-order-by-user-id")]
[ApiVersion("1.0")]
public async Task<IActionResult> GetPreOrdersByUserId(string userId)
{
var response = await preOrderService.GetPreOrdersByUserIdAsync(userId);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
}
}
```

## Implementing the Payment Service for Lacariz Furniture E-commerce Application
To implement the payment service for the Lacariz Furniture e-commerce application, we set up the usual repository, service, and controller layers. The payment service handles interactions with Paystack and Flutterwave for processing payments: a Payment entity represents payment data, a repository interface and implementation manage payment records, a service layer encapsulates the payment-processing business logic (including the Paystack and Flutterwave API integrations), and a PaymentController exposes endpoints for initiating and managing payments.
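The Payment entity itself is not shown in this section; as a minimal sketch of what it might look like, with all member names being illustrative assumptions rather than the actual `Lacariz.Furniture.Domain.Entities` definition:

```C#
// Hypothetical sketch of the Payment entity described above.
public class Payment
{
    public string Id { get; set; }
    public string UserId { get; set; }
    public string OrderId { get; set; }
    public decimal Amount { get; set; }
    public string Provider { get; set; }   // "Paystack" or "Flutterwave"
    public string Reference { get; set; }  // gateway transaction reference
    public string Status { get; set; }     // e.g. Pending, Successful, Failed
    public DateTime PaymentDate { get; set; }
}
```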
## Paystack Service
```C#
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Service.Services.External_Service.Requests;
using Lacariz.Furniture.Service.Services.External_Service.Responses;
using Lacariz.Furniture.Service.Services.External_Service.Responses.Verification;
namespace Lacariz.Furniture.Service.Services.External_Service.Interfaces
{
public interface IPaystackService
{
Task<PaystackPaymentInitiationResponse> InitiatePaymentAsync(PaystackPaymentInitiationRequest request, MyBankLog log);
Task<PaystackPaymentVerificationResponse> VerifyPaymentAsync(PaystackPaymentVerificationRequest request, MyBankLog log);
}
}
```
```C#
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Service.Helpers.Interfaces;
using Lacariz.Furniture.Service.Services.External_Service.Interfaces;
using Lacariz.Furniture.Service.Services.External_Service.Requests;
using Lacariz.Furniture.Service.Services.External_Service.Responses;
using Lacariz.Furniture.Service.Services.External_Service.Responses.Verification;
namespace Lacariz.Furniture.Service.Services.External_Service.Implementations
{
public class PaystackService : IPaystackService
{
// private readonly IConfiguration config;
private readonly IRestHelper restHelper;
private readonly ILogger logger;
public PaystackService(IRestHelper restHelper, ILogger logger)
{
// this.config = config;
this.restHelper = restHelper;
this.logger = logger;
}
public async Task<PaystackPaymentInitiationResponse> InitiatePaymentAsync(PaystackPaymentInitiationRequest request, MyBankLog log)
{
try
{
logger.Information("Initiate payment service");
string paystackApiUrl = Environment.GetEnvironmentVariable("InitiatePaymentApiUrl");
string paystackSecretKey = Environment.GetEnvironmentVariable("SecretKey");
var paystackRequest = new PaystackPaymentInitiationRequest
{
Amount = request.Amount,
Email = request.Email,
};
var headers = new Dictionary<string, string>
{
{ "Authorization", $"Bearer {paystackSecretKey}" },
};
var initiationResponse = await restHelper.DoWebRequestAsync<PaystackPaymentInitiationResponse>(log,
paystackApiUrl,
paystackRequest,
"post",
headers);
return initiationResponse;
}
catch (Exception ex)
{
logger.Error(ex, ex.Message);
log.AdditionalInformation = ex.Message;
return null;
}
}
public async Task<PaystackPaymentVerificationResponse> VerifyPaymentAsync(PaystackPaymentVerificationRequest request, MyBankLog log)
{
try
{
logger.Information("Verify payment service");
string paystackApiUrl = Environment.GetEnvironmentVariable("VerifyReferenceApiUrl") + $"/{request.Reference}";
string paystackSecretKey = Environment.GetEnvironmentVariable("SecretKey");
var paystackVerificationRequest = new PaystackPaymentVerificationRequest
{
Reference = request.Reference,
};
var headers = new Dictionary<string, string>
{
{ "Authorization", $"Bearer {paystackSecretKey}" },
};
var verificationResponse = await restHelper.DoWebRequestAsync<PaystackPaymentVerificationResponse>(log,
paystackApiUrl,
null,
"get",
headers);
return verificationResponse;
}
catch (Exception ex)
{
logger.Error(ex, ex.Message);
log.AdditionalInformation = ex.Message;
return null;
}
}
}
}
```
## Rest Helper: for consuming external services via RestSharp's `RestClient`
```C#
using Lacariz.Furniture.Domain.Entities;
namespace Lacariz.Furniture.Service.Helpers.Interfaces;
public interface IRestHelper
{
Task<T> DoWebRequestAsync<T>(MyBankLog log, string url, object request, string requestType, Dictionary<string, string> headers = null) where T : new();
}
```
```C#
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Service.Helpers.Interfaces;
using Newtonsoft.Json;
using RestSharp;
namespace Lacariz.Furniture.Service.Helpers.Implementations;
public class RestHelper : IRestHelper
{
private readonly ILogger _logger;
private readonly IConfiguration _config;
public RestHelper(ILogger logger, IConfiguration config)
{
_logger = logger;
_config = config;
}
public async Task<T> DoWebRequestAsync<T>(MyBankLog log, string url, object request, string requestType, Dictionary<string, string> headers = null) where T : new()
{
_logger.Information("URL: " + url + " " + JsonConvert.SerializeObject(request));
T result = new T();
Method method = requestType.ToLower() == "post" ? Method.Post : Method.Get;
var client = new RestClient(url);
var restRequest = new RestRequest(url, method);
if (method == Method.Post)
{
restRequest.RequestFormat = DataFormat.Json;
restRequest.AddJsonBody(request);
}
if (headers != null)
{
foreach (var item in headers)
{
restRequest.AddHeader(item.Key, item.Value);
}
}
try
{
RestResponse<T> response = await client.ExecuteAsync<T>(restRequest);
_logger.Information("URL: " + url + " " + response.Content);
if (!response.IsSuccessful)
{
log.AdditionalInformation = $"URL: {url} {response.Content}";
}
result = JsonConvert.DeserializeObject<T>(response.Content);
return result;
}
catch (Exception ex)
{
_logger.Error(ex, ex.Message);
log.AdditionalInformation = $"URL: {url} {ex.Message}";
return result;
}
}
}
```
## Flutterwave Service
```C#
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Service.Services.External_Service.Requests;
using Lacariz.Furniture.Service.Services.External_Service.Responses.Flutterwave;
using Lacariz.Furniture.Service.Services.External_Service.Responses.Flutterwave.VerifyPaymentResponse;
using Lacariz.Furniture.Service.Services.External_Service.Responses.Flutterwave.VerifyResponse;
namespace Lacariz.Furniture.Service.Services.External_Service.Interfaces
{
public interface IFlutterwaveService
{
Task<FlutterwaveInitiateCardPaymentResponse> InitiatePaymentAsync(FlutterwaveInitiateCardPaymentRequest request, MyBankLog log);
Task<FlutterwaveValidateChargeResponse> ValidateChargeAsync(FlutterwaveValidateChargeRequest request, MyBankLog log);
Task<FlutterwaveVerifyCardPaymentResponse> VerifyPaymentAsync(FlutterwaveVerifyCardPaymentRequest request, MyBankLog log);
}
}
```
```C#
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Service.Helpers.Interfaces;
using Lacariz.Furniture.Service.Services.External_Service.Interfaces;
using Lacariz.Furniture.Service.Services.External_Service.Requests;
using Lacariz.Furniture.Service.Services.External_Service.Responses.Flutterwave;
using Lacariz.Furniture.Service.Services.External_Service.Responses.Flutterwave.VerifyPaymentResponse;
using Lacariz.Furniture.Service.Services.External_Service.Responses.Flutterwave.VerifyResponse;
using Newtonsoft.Json;
using Authorization = Lacariz.Furniture.Service.Services.External_Service.Requests.Authorization;
namespace Lacariz.Furniture.Service.Services.External_Service.Implementations
{
public class FlutterwaveService : IFlutterwaveService
{
private readonly IRestHelper restHelper;
private readonly ILogger logger;
private readonly string encryptionKey;
private readonly IEncryptionHelper encryptionHelper;
public FlutterwaveService(IRestHelper restHelper, ILogger logger, IEncryptionHelper encryptionHelper)
{
this.restHelper = restHelper;
this.logger = logger;
this.encryptionHelper = encryptionHelper;
this.encryptionKey = Environment.GetEnvironmentVariable("FlutterwaveEncryptionKey");
}
public async Task<FlutterwaveInitiateCardPaymentResponse> InitiatePaymentAsync(FlutterwaveInitiateCardPaymentRequest request, MyBankLog log)
{
try
{
logger.Information("Initiate payment service");
string flutterwaveApiUrl = Environment.GetEnvironmentVariable("InitiateFlutterwaveApiUrl");
string flutterwaveSecretKey = Environment.GetEnvironmentVariable("FlutterwaveSecretKey");
var flutterwaveRequest = new FlutterwaveInitiateCardPaymentRequest
{
CardNumber = request.CardNumber,
CVV = request.CVV,
ExpiryMonth = request.ExpiryMonth,
ExpiryYear = request.ExpiryYear,
Currency = request.Currency,
Amount = request.Amount,
FullName = request.FullName,
Email = request.Email,
TransactionReference = request.TransactionReference,
RedirectUrl = "https://www.flutterwave.ng",
Authorization = new Authorization
{
Mode = request.Authorization.Mode,
City = request.Authorization.City,
Address = request.Authorization.Address,
State = request.Authorization.State,
Country = request.Authorization.Country,
Zipcode = request.Authorization.Zipcode
}
};
var headers = new Dictionary<string, string>
{
{ "Authorization", $"Bearer {flutterwaveSecretKey}" },
};
// Serialize the request to JSON
string requestJson = JsonConvert.SerializeObject(flutterwaveRequest);
// Encrypt the JSON payload using 3DES
string encryptedPayload = encryptionHelper.Encrypt3DES(requestJson, encryptionKey);
var initiationResponse = await restHelper.DoWebRequestAsync<FlutterwaveInitiateCardPaymentResponse>(log,
flutterwaveApiUrl,
encryptedPayload,
"post",
headers);
return initiationResponse;
}
catch (Exception ex)
{
logger.Error(ex, ex.Message);
log.AdditionalInformation = ex.Message;
return null;
}
}
public async Task<FlutterwaveValidateChargeResponse> ValidateChargeAsync(FlutterwaveValidateChargeRequest request, MyBankLog log)
{
try
{
logger.Information("Validate charge service");
string flutterwaveApiUrl = Environment.GetEnvironmentVariable("FlutterwaveValidateChargeApiUrl");
string flutterwaveSecretKey = Environment.GetEnvironmentVariable("FlutterwaveSecretKey");
var flutterwaveRequest = new FlutterwaveValidateChargeRequest
{
Otp = request.Otp,
Flw_ref = request.Flw_ref
};
var headers = new Dictionary<string, string>
{
{ "Authorization", $"Bearer {flutterwaveSecretKey}" },
};
var verifyResponse = await restHelper.DoWebRequestAsync<FlutterwaveValidateChargeResponse>(log,
flutterwaveApiUrl,
flutterwaveRequest,
"post",
headers);
return verifyResponse;
}
catch (Exception ex)
{
logger.Error(ex, ex.Message);
log.AdditionalInformation = ex.Message;
return null;
}
}
public async Task<FlutterwaveVerifyCardPaymentResponse> VerifyPaymentAsync(FlutterwaveVerifyCardPaymentRequest request, MyBankLog log)
{
try
{
logger.Information("Verify card payment service");
string flutterwaveApiUrl = Environment.GetEnvironmentVariable("VerifyCardPaymentFlutterwaveApiUrl") + $"/{request.TransactionId}" + "/verify";
string flutterwaveSecretKey = Environment.GetEnvironmentVariable("FlutterwaveSecretKey");
var headers = new Dictionary<string, string>
{
{ "Authorization", $"Bearer {flutterwaveSecretKey}" },
};
var verifyResponse = await restHelper.DoWebRequestAsync<FlutterwaveVerifyCardPaymentResponse>(log,
flutterwaveApiUrl,
null,
"get",
headers);
return verifyResponse;
}
catch (Exception ex)
{
logger.Error(ex, ex.Message);
log.AdditionalInformation = ex.Message;
return null;
}
}
}
}
```
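The `IEncryptionHelper.Encrypt3DES` call above is not shown in this section. Flutterwave's direct card-charge API expects the JSON payload encrypted with 3DES using the account's encryption key; as a hedged sketch of what such a helper might look like (assuming a 24-byte key and ECB mode, which is what Flutterwave's samples use — verify against their current documentation):

```C#
// Illustrative 3DES helper; the project's actual IEncryptionHelper may differ.
public static string Encrypt3DES(string plainText, string key)
{
    using var des = System.Security.Cryptography.TripleDES.Create();
    des.Key = System.Text.Encoding.UTF8.GetBytes(key); // Flutterwave keys are 24 bytes
    des.Mode = System.Security.Cryptography.CipherMode.ECB;
    des.Padding = System.Security.Cryptography.PaddingMode.PKCS7;

    using var encryptor = des.CreateEncryptor();
    byte[] data = System.Text.Encoding.UTF8.GetBytes(plainText);
    return Convert.ToBase64String(encryptor.TransformFinalBlock(data, 0, data.Length));
}
```

The Base64 output is what gets posted to the charge endpoint as the encrypted `client` payload.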
## Payment Service: the internal service that interfaces with both Paystack and Flutterwave
```C#
using Lacariz.Furniture.Domain.Common.Generics;
using Lacariz.Furniture.Service.Services.External_Service.Requests;
using Lacariz.Furniture.Service.Services.External_Service.Responses;
using Lacariz.Furniture.Service.Services.External_Service.Responses.Flutterwave;
using Lacariz.Furniture.Service.Services.External_Service.Responses.Flutterwave.VerifyPaymentResponse;
using Lacariz.Furniture.Service.Services.External_Service.Responses.Flutterwave.VerifyResponse;
using Lacariz.Furniture.Service.Services.External_Service.Responses.Verification;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Service.Services.Interfaces
{
public interface IPaymentService
{
Task<NewResult<PaystackPaymentInitiationResponse>> InitiatePayment(double amount, string email);
Task<NewResult<PaystackPaymentVerificationResponse>> VerifyPayment(string reference);
Task<NewResult<FlutterwaveInitiateCardPaymentResponse>> InitiateFlutterwaveCardPayment(FlutterwaveInitiateCardPaymentRequest request);
Task<NewResult<FlutterwaveValidateChargeResponse>> FlutterwaveValidateCharge(FlutterwaveValidateChargeRequest request);
Task<NewResult<FlutterwaveVerifyCardPaymentResponse>> FlutterwaveVerifyCardPayment(FlutterwaveVerifyCardPaymentRequest request);
}
}
```
```C#
using Lacariz.Furniture.Data.Repositories.Interfaces;
using Lacariz.Furniture.Domain.Common.Generics;
using Lacariz.Furniture.Service.Services.External_Service.Interfaces;
using Lacariz.Furniture.Service.Services.External_Service.Responses;
using Lacariz.Furniture.Service.Services.Interfaces;
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Service.Services.External_Service.Requests;
using Lacariz.Furniture.Service.Services.External_Service.Responses.Verification;
using Lacariz.Furniture.Service.Services.External_Service.Responses.Flutterwave;
using Lacariz.Furniture.Service.Services.External_Service.Responses.Flutterwave.VerifyResponse;
using Lacariz.Furniture.Service.Services.External_Service.Responses.Flutterwave.VerifyPaymentResponse;
namespace Lacariz.Furniture.Service.Services.Implementations
{
public class PaymentService : IPaymentService
{
private readonly IPaystackService paystackService;
//private readonly ISampleRepository sampleRepository;
//private readonly IMyBankLogRepository myBankLogRepository;
private readonly ILogger logger;
private readonly IFlutterwaveService flutterwaveService;
public PaymentService(IPaystackService paystackService,
//ISampleRepository sampleRepository,
//IMyBankLogRepository myBankLogRepository,
ILogger logger,
IFlutterwaveService flutterwaveService)
{
this.paystackService = paystackService;
//this.sampleRepository = sampleRepository;
//this.myBankLogRepository = myBankLogRepository;
this.logger = logger;
this.flutterwaveService = flutterwaveService;
}
public async Task<NewResult<FlutterwaveValidateChargeResponse>> FlutterwaveValidateCharge(FlutterwaveValidateChargeRequest request)
{
MyBankLog dbLog = new MyBankLog();
try
{
if (string.IsNullOrWhiteSpace(request.Otp))
{
throw new ArgumentNullException(nameof(request.Otp), "OTP cannot be null or empty.");
}
if (string.IsNullOrWhiteSpace(request.Flw_ref))
{
throw new ArgumentNullException(nameof(request.Flw_ref), "FLW Reference cannot be null or empty.");
}
// Call the Flutterwave service to validate the charge
var validateChargeResponse = await flutterwaveService.ValidateChargeAsync(request, dbLog);
// Check the response from the service
if (validateChargeResponse != null && validateChargeResponse.Status.Equals("success", StringComparison.OrdinalIgnoreCase))
{
return NewResult<FlutterwaveValidateChargeResponse>.Success(validateChargeResponse, "Charge successfully validated.");
}
return NewResult<FlutterwaveValidateChargeResponse>.Failed(null, "Charge validation failed.");
}
catch (Exception ex)
{
logger.Error(ex, ex.Message);
return NewResult<FlutterwaveValidateChargeResponse>.Error(null, $"Error while validating charge: {ex.Message}");
}
}
public async Task<NewResult<FlutterwaveVerifyCardPaymentResponse>> FlutterwaveVerifyCardPayment(FlutterwaveVerifyCardPaymentRequest request)
{
MyBankLog dbLog = new MyBankLog();
try
{
if (string.IsNullOrWhiteSpace(request.TransactionId))
{
throw new ArgumentNullException(nameof(request.TransactionId), "TransactionId cannot be null or empty.");
}
var verifyCardPayment = await flutterwaveService.VerifyPaymentAsync(request, dbLog);
// Check if the response is not null
if (verifyCardPayment == null)
{
return NewResult<FlutterwaveVerifyCardPaymentResponse>.Failed(null, "Verification failed: no response from payment service.");
}
// Check the status of the response
if (verifyCardPayment.Status != "success")
{
return NewResult<FlutterwaveVerifyCardPaymentResponse>.Failed(null, $"Verification failed: {verifyCardPayment.Message}");
}
// Check critical fields in the data
if (verifyCardPayment.Data == null ||
string.IsNullOrWhiteSpace(verifyCardPayment.Data.TxRef) ||
string.IsNullOrWhiteSpace(verifyCardPayment.Data.FlwRef))
{
return NewResult<FlutterwaveVerifyCardPaymentResponse>.Failed(null, "Verification failed: missing critical data in response.");
}
return NewResult<FlutterwaveVerifyCardPaymentResponse>.Success(verifyCardPayment, "Payment successfully verified.");
}
catch (Exception ex)
{
logger.Error(ex, ex.Message);
return NewResult<FlutterwaveVerifyCardPaymentResponse>.Error(null, $"Error while verifying payment: {ex.Message}");
}
}
public async Task<NewResult<FlutterwaveInitiateCardPaymentResponse>> InitiateFlutterwaveCardPayment(FlutterwaveInitiateCardPaymentRequest request)
{
MyBankLog dbLog = new MyBankLog();
try
{
if (request.Amount <= 0)
{
throw new ArgumentException("Amount must be greater than zero.");
}
if (!IsValidEmail(request.Email))
{
throw new ArgumentException("Invalid email format.", nameof(request.Email));
}
var initiatePayment = await flutterwaveService.InitiatePaymentAsync(request, dbLog);
if (initiatePayment != null && initiatePayment.Status == "success")
{
return NewResult<FlutterwaveInitiateCardPaymentResponse>.Success(initiatePayment, "Payment initiation successful.");
}
else
{
return NewResult<FlutterwaveInitiateCardPaymentResponse>.Failed(null, "Payment initiation failed.");
}
}
catch (Exception ex)
{
logger.Error(ex, ex.Message);
return NewResult<FlutterwaveInitiateCardPaymentResponse>.Error(null, $"Error while initiating payment : {ex.Message}");
}
}
public async Task<NewResult<PaystackPaymentInitiationResponse>> InitiatePayment(double amount, string email)
{
MyBankLog dbLog = new MyBankLog();
try
{
if (amount <= 0)
{
throw new ArgumentException("Amount must be greater than zero.");
}
if (string.IsNullOrWhiteSpace(email))
{
throw new ArgumentNullException(nameof(email), "Email cannot be null or empty.");
}
if (!IsValidEmail(email))
{
throw new ArgumentException("Invalid email format.", nameof(email));
}
var initiatePaymentRequest = new PaystackPaymentInitiationRequest()
{
Amount = amount,
Email = email
};
var initiatePayment = await paystackService.InitiatePaymentAsync(initiatePaymentRequest, dbLog);
// Validate the response
if (initiatePayment != null && initiatePayment.Status)
{
if (!string.IsNullOrWhiteSpace(initiatePayment.Data.AuthorizationUrl) && !string.IsNullOrWhiteSpace(initiatePayment.Data.Reference))
{
return NewResult<PaystackPaymentInitiationResponse>.Success(initiatePayment, "Payment successfully initiated.");
}
return NewResult<PaystackPaymentInitiationResponse>.Failed(null, "Payment initiation response missing critical data.");
}
return NewResult<PaystackPaymentInitiationResponse>.Failed(null, "Payment initiation failed.");
}
catch (Exception ex)
{
logger.Error(ex, ex.Message);
return NewResult<PaystackPaymentInitiationResponse>.Error(null, $"Error while initiating payment : {ex.Message}");
}
}
public async Task<NewResult<PaystackPaymentVerificationResponse>> VerifyPayment(string reference)
{
MyBankLog dbLog = new MyBankLog();
try
{
if (string.IsNullOrWhiteSpace(reference))
{
throw new ArgumentNullException(nameof(reference), "Reference cannot be null or empty.");
}
var verifyPaymentRequest = new PaystackPaymentVerificationRequest()
{
Reference = reference
};
var verifyPayment = await paystackService.VerifyPaymentAsync(verifyPaymentRequest, dbLog);
if (verifyPayment != null && verifyPayment.Status)
{
if (verifyPayment.Data != null && !string.IsNullOrWhiteSpace(verifyPayment.Data.Status))
{
return NewResult<PaystackPaymentVerificationResponse>.Success(verifyPayment, "Payment successfully verified.");
}
return NewResult<PaystackPaymentVerificationResponse>.Failed(null, "Payment verification response missing critical data.");
}
return NewResult<PaystackPaymentVerificationResponse>.Failed(null, "Payment verification failed.");
}
catch (Exception ex)
{
logger.Error(ex, ex.Message);
return NewResult<PaystackPaymentVerificationResponse>.Error(null, $"Error while verifying payment: {ex.Message}");
}
}
private bool IsValidEmail(string email)
{
try
{
var addr = new System.Net.Mail.MailAddress(email);
return addr.Address == email;
}
catch
{
return false;
}
}
}
}
```
## Payment Controller
```C#
using Lacariz.Furniture.Service.Services.External_Service.Requests;
using Lacariz.Furniture.Service.Services.Interfaces;
namespace Lacariz.Furniture.API.Controllers.v1
{
public class PaymentController : BaseController
{
private readonly IPaymentService paymentService;
public PaymentController(IPaymentService paymentService)
{
this.paymentService = paymentService;
}
[HttpPost("api/v{version:apiVersion}/[controller]/initiate-paystack-payment")]
[ApiVersion("1.0")]
public async Task<IActionResult> InitiatePaystackPayment(PaystackPaymentInitiationRequest request)
{
var response = await paymentService.InitiatePayment(request.Amount, request.Email);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpGet("api/v{version:apiVersion}/[controller]/verify-paystack-payment")]
[ApiVersion("1.0")]
public async Task<IActionResult> VerifyPaystackPayment([FromQuery] PaystackPaymentVerificationRequest request)
{
var response = await paymentService.VerifyPayment(request.Reference);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpPost("api/v{version:apiVersion}/[controller]/initiate-flutterwave-card-payment")]
[ApiVersion("1.0")]
public async Task<IActionResult> InitiateFlutterwavePayment([FromBody] FlutterwaveInitiateCardPaymentRequest request)
{
var response = await paymentService.InitiateFlutterwaveCardPayment(request);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpPost("api/v{version:apiVersion}/[controller]/flutterwave-validate-charge")]
[ApiVersion("1.0")]
public async Task<IActionResult> FlutterwaveValidateCharge([FromBody] FlutterwaveValidateChargeRequest request)
{
var response = await paymentService.FlutterwaveValidateCharge(request);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpGet("api/v{version:apiVersion}/[controller]/flutterwave-get-payment-status")]
[ApiVersion("1.0")]
public async Task<IActionResult> FlutterwaveVerifyPayment([FromQuery] FlutterwaveVerifyCardPaymentRequest request)
{
var response = await paymentService.FlutterwaveVerifyCardPayment(request);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
}
}
```
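Each controller action maps the service layer's `ResponseCode` to an HTTP status code. The `NewResult<T>` wrapper itself is not shown in this post, so the following is only a sketch of what it presumably looks like, with the code values inferred from the switch expressions ("00" = success, "99" = failure, "77" = duplicate); treat every member here as an assumption:

```C#
// Hypothetical shape of the NewResult<T> wrapper assumed by the
// ResponseCode switch expressions in the controllers above.
public class NewResult<T>
{
    public string ResponseCode { get; set; }
    public string Message { get; set; }
    public T Data { get; set; }

    // "00" is treated as success by the controllers.
    public static NewResult<T> Success(T data, string message) =>
        new NewResult<T> { ResponseCode = "00", Data = data, Message = message };

    // "99" is treated as a bad request.
    public static NewResult<T> Failed(T data, string message) =>
        new NewResult<T> { ResponseCode = "99", Data = data, Message = message };

    // Any other code (the exact value used by Error is assumed) falls
    // through to the 500 branch of the switch.
    public static NewResult<T> Error(T data, string message) =>
        new NewResult<T> { ResponseCode = "XX", Data = data, Message = message };
}
```

This keeps the controllers thin: they never inspect exceptions or payloads, only the code on the wrapper.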

## Implementing the Product Service for Lacariz Furniture E-commerce Application
```C#
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Domain.Enum;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Data.Repositories.Interfaces
{
public interface IFurnitureRepository
{
Task<IEnumerable<FurnitureItem>> GetAllFurnitureItemsAsync();
Task<FurnitureItem> GetFurnitureItemByIdAsync(string furnitureItemId);
Task<IEnumerable<FurnitureItem>> SearchFurnitureItemsAsync(FurnitureCategory? category, decimal minPrice, decimal maxPrice, string keyword);
Task<FurnitureItem> AddFurnitureItemAsync(FurnitureItem item);
Task<bool> UpdateFurnitureItemAsync(FurnitureItem item);
}
}
```
```C#
using Lacariz.Furniture.Data.Repositories.Interfaces;
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Domain.Enum;
using Microsoft.EntityFrameworkCore;
using MongoDB.Driver;
namespace Lacariz.Furniture.Data.Repositories.Implementations
{
public class FurnitureRepository : IFurnitureRepository
{
private readonly IMongoDBLogContext dbContext;
public FurnitureRepository(IMongoDBLogContext dbContext)
{
this.dbContext = dbContext;
}
public async Task<FurnitureItem> AddFurnitureItemAsync(FurnitureItem item)
{
await dbContext.FurnitureItems.InsertOneAsync(item);
return item;
}
public async Task<IEnumerable<FurnitureItem>> GetAllFurnitureItemsAsync()
{
var furnitures = await dbContext.FurnitureItems.FindAsync(_ => true);
return await furnitures.ToListAsync();
}
public async Task<FurnitureItem> GetFurnitureItemByIdAsync(string furnitureItemId)
{
return await dbContext.FurnitureItems.Find(item => item.Id == furnitureItemId).FirstOrDefaultAsync();
}
public async Task<IEnumerable<FurnitureItem>> SearchFurnitureItemsAsync(FurnitureCategory? category, decimal minPrice, decimal maxPrice, string keyword)
{
var filterBuilder = Builders<FurnitureItem>.Filter;
var filter = filterBuilder.Empty;
if (category.HasValue)
filter &= filterBuilder.Eq(item => item.Category, category);
if (minPrice >= 0 && maxPrice > minPrice)
filter &= filterBuilder.Gte(item => item.Price, minPrice) & filterBuilder.Lte(item => item.Price, maxPrice);
if (!string.IsNullOrWhiteSpace(keyword))
filter &= filterBuilder.Text(keyword);
return await dbContext.FurnitureItems.Find(filter).ToListAsync();
}
public async Task<bool> UpdateFurnitureItemAsync(FurnitureItem item)
{
var filter = Builders<FurnitureItem>.Filter.Eq(f => f.Id, item.Id);
var update = Builders<FurnitureItem>.Update
.Set(f => f.Name, item.Name)
.Set(f => f.Description, item.Description)
.Set(f => f.Price, item.Price)
.Set(f => f.StockQuantity, item.StockQuantity)
.Set(f => f.Category, item.Category);
var result = await dbContext.FurnitureItems.UpdateOneAsync(filter, update);
return result.ModifiedCount > 0;
}
}
}
```
## Product Service
```C#
using Lacariz.Furniture.Domain.Common.Generics;
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Domain.Enum;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Service.Services.Interfaces
{
public interface IFurnitureService
{
Task<NewResult<IEnumerable<FurnitureItem>>> GetAllFurnitureItemsAsync();
Task<NewResult<FurnitureItem>> GetFurnitureItemByIdAsync(string furnitureItemId);
Task<NewResult<IEnumerable<FurnitureItem>>> SearchFurnitureItemsAsync(FurnitureCategory category, decimal minPrice, decimal maxPrice, string keyword);
Task<NewResult<FurnitureItem>> AddFurnitureItemAsync(FurnitureItem item);
Task<NewResult<FurnitureItem>> UpdateFurnitureItemAsync(FurnitureItem item);
}
}
```
```C#
using Lacariz.Furniture.Data.Repositories.Implementations;
using Lacariz.Furniture.Data.Repositories.Interfaces;
using Lacariz.Furniture.Domain.Common.Generics;
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Domain.Enum;
using Lacariz.Furniture.Service.Services.Interfaces;
namespace Lacariz.Furniture.Service.Services.Implementations
{
public class FurnitureService : IFurnitureService
{
private readonly IFurnitureRepository furnitureRepository;
public FurnitureService(IFurnitureRepository furnitureRepository)
{
this.furnitureRepository = furnitureRepository;
}
public async Task<NewResult<FurnitureItem>> AddFurnitureItemAsync(FurnitureItem item)
{
try
{
var newItem = await furnitureRepository.AddFurnitureItemAsync(item);
if (newItem != null)
{
return NewResult<FurnitureItem>.Success(newItem, "Furniture item added successfully.");
}
return NewResult<FurnitureItem>.Failed(null, "Failure to add item.");
}
catch (Exception ex)
{
return NewResult<FurnitureItem>.Error(null, $"Error occurred: {ex.Message}");
}
}
public async Task<NewResult<IEnumerable<FurnitureItem>>> GetAllFurnitureItemsAsync()
{
try
{
var furnitureItems = await furnitureRepository.GetAllFurnitureItemsAsync();
if (furnitureItems == null)
{
return NewResult<IEnumerable<FurnitureItem>>.Failed(null, "No furniture items found.");
}
return NewResult<IEnumerable<FurnitureItem>>.Success(furnitureItems, "Items retrieved successfully");
}
catch (Exception ex)
{
return NewResult<IEnumerable<FurnitureItem>>.Error(null, $"Error retrieving items: {ex.Message}");
}
}
public async Task<NewResult<FurnitureItem>> GetFurnitureItemByIdAsync(string furnitureItemId)
{
try
{
var furnitureItem = await furnitureRepository.GetFurnitureItemByIdAsync(furnitureItemId);
if (furnitureItem == null)
{
return NewResult<FurnitureItem>.Failed(null, "Item doesn't exist");
}
return NewResult<FurnitureItem>.Success(furnitureItem, "Item retrieved successfully");
}
catch (Exception ex)
{
return NewResult<FurnitureItem>.Error(null, $"Error while retrieving item: {ex.Message} ");
}
}
public async Task<NewResult<IEnumerable<FurnitureItem>>> SearchFurnitureItemsAsync(FurnitureCategory category, decimal minPrice, decimal maxPrice, string keyword)
{
try
{
var searchResult = await furnitureRepository.SearchFurnitureItemsAsync(category, minPrice, maxPrice, keyword);
if (searchResult == null)
{
return NewResult<IEnumerable<FurnitureItem>>.Failed(null, "Unable to retrieve search results.");
}
return NewResult<IEnumerable<FurnitureItem>>.Success(searchResult, "Search carried out successfully");
}
catch (Exception ex)
{
return NewResult<IEnumerable<FurnitureItem>>.Error(null, $"An error occurred while carrying out search: {ex.Message}");
}
}
public async Task<NewResult<FurnitureItem>> UpdateFurnitureItemAsync(FurnitureItem item)
{
try
{
var updated = await furnitureRepository.UpdateFurnitureItemAsync(item);
if (updated)
{
return NewResult<FurnitureItem>.Success(item, "Furniture item updated successfully.");
}
else
{
return NewResult<FurnitureItem>.Failed(null, "Failed to update furniture item.");
}
}
catch (Exception ex)
{
return NewResult<FurnitureItem>.Failed(null, $"Error occurred: {ex.Message}");
}
}
}
}
```
## Product Controller
```C#
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Domain.Enum;
using Lacariz.Furniture.Service.Services.Implementations;
using Lacariz.Furniture.Service.Services.Interfaces;
using Microsoft.AspNetCore.Authorization;
namespace Lacariz.Furniture.API.Controllers.v1
{
public class ProductController : BaseController
{
private readonly IFurnitureService furnitureService;
private readonly IShoppingCartService shoppingCartService;
public ProductController(IFurnitureService furnitureService, IShoppingCartService shoppingCartService)
{
this.furnitureService = furnitureService;
this.shoppingCartService = shoppingCartService;
}
[HttpPost("api/v{version:apiVersion}/[controller]/admin/add-products")]
[Authorize(Policy = "AdminOnly")]
[ApiVersion("1.0")]
public async Task<IActionResult> AddFurnitureItems([FromBody] FurnitureItem item)
{
var response = await furnitureService.AddFurnitureItemAsync(item);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpGet("api/v{version:apiVersion}/[controller]/get-all-products")]
[ApiVersion("1.0")]
public async Task<IActionResult> GetAllItems()
{
var response = await furnitureService.GetAllFurnitureItemsAsync();
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpGet("api/v{version:apiVersion}/[controller]/get-product-by-id")]
[ApiVersion("1.0")]
public async Task<IActionResult> GetItemById(string productId)
{
var response = await furnitureService.GetFurnitureItemByIdAsync(productId);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpPost("api/v{version:apiVersion}/[controller]/admin/update-product")]
[Authorize(Policy = "AdminOnly")]
[ApiVersion("1.0")]
public async Task<IActionResult> UpdateFurnitureItems([FromBody] FurnitureItem item)
{
var response = await furnitureService.UpdateFurnitureItemAsync(item);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpPost("api/v{version:apiVersion}/[controller]/search-product")]
[ApiVersion("1.0")]
public async Task<IActionResult> SearchProduct(FurnitureCategory category, decimal minPrice, decimal maxPrice, string keyword)
{
var response = await furnitureService.SearchFurnitureItemsAsync(category, minPrice, maxPrice, keyword);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpPost("api/v{version:apiVersion}/[controller]/add-item-to-cart")]
[ApiVersion("1.0")]
public async Task<IActionResult> AddItemToCart(string userId, ShoppingCartItem item)
{
var response = await shoppingCartService.AddItemToCartAsync(userId, item);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpDelete("api/v{version:apiVersion}/[controller]/clear-cart-items")]
[ApiVersion("1.0")]
public async Task<IActionResult> ClearCartAsync(string userId)
{
var response = await shoppingCartService.ClearCartAsync(userId);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpDelete("api/v{version:apiVersion}/[controller]/delete-cart-item")]
[ApiVersion("1.0")]
public async Task<IActionResult> DeleteItem(string userId, string productId)
{
var response = await shoppingCartService.DeleteItemAsync(userId, productId);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpGet("api/v{version:apiVersion}/[controller]/get-cart-item")]
[ApiVersion("1.0")]
public async Task<IActionResult> GetCartItem(string userId, string productId)
{
var response = await shoppingCartService.GetCartItemAsync(userId, productId);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpGet("api/v{version:apiVersion}/[controller]/get-all-cart-items")]
[ApiVersion("1.0")]
public async Task<IActionResult> GetCartItems(string userId)
{
var response = await shoppingCartService.GetCartItemsAsync(userId);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
}
}
```

## Implementing the Push Notification Service for Lacariz Furniture E-commerce Application
```C#
using Lacariz.Furniture.Domain.Common.Generics;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Service.Services.Interfaces
{
public interface IPushNotificationService
{
Task<NewResult<string>> SendPushNotificationAsync(string userId, string body);
}
}
```
```C#
using FirebaseAdmin;
using FirebaseAdmin.Messaging;
using Google.Apis.Auth.OAuth2;
using Lacariz.Furniture.Data.Repositories.Interfaces;
using Lacariz.Furniture.Domain.Common.Generics;
using Lacariz.Furniture.Service.Services.Interfaces;
using System.Text;
using Message = FirebaseAdmin.Messaging.Message;
namespace Lacariz.Furniture.Service.Services.Implementations
{
public class PushNotificationService : IPushNotificationService
{
private readonly IUserRepository userRepository;
public PushNotificationService(IUserRepository userRepository)
{
this.userRepository = userRepository;
}
public async Task<NewResult<string>> SendPushNotificationAsync(string userId, string body)
{
try
{
var user = await userRepository.GetUserById(userId);
//if (user == null || string.IsNullOrEmpty(user.DeviceToken))
// throw new ArgumentNullException(nameof(user.DeviceToken), "User device token cannot be null or empty.");
var message = new Message
{
Token = GenerateMockDeviceToken(),
Notification = new Notification
{
Title = "Order Update",
Body = body
}
};
// Ensure Firebase is initialized
if (FirebaseApp.DefaultInstance == null)
{
FirebaseApp.Create(new AppOptions
{
Credential = GoogleCredential.FromFile("Properties/NotificationFile.json")
});
}
string response = await FirebaseMessaging.DefaultInstance.SendAsync(message);
return NewResult<string>.Success(response, "Push notification sent successfully.");
}
catch (Exception ex)
{
return NewResult<string>.Failed(null, $"Error occurred: {ex.Message}");
}
}
public string GenerateMockDeviceToken()
{
// Define the characters allowed in the device token
const string allowedChars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
// Define the length of the device token
const int tokenLength = 140;
// Use a StringBuilder to construct the device token
StringBuilder tokenBuilder = new StringBuilder();
// Use a random number generator to select characters from the allowed set
Random random = new Random();
for (int i = 0; i < tokenLength; i++)
{
int index = random.Next(allowedChars.Length);
tokenBuilder.Append(allowedChars[index]);
}
// Return the generated device token
return tokenBuilder.ToString();
}
}
}
```
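The service lazily initializes Firebase inside the request path, which can race if two notifications arrive before the first `FirebaseApp.Create` completes. A common alternative is to initialize once at application startup. The sketch below assumes the same credential file path as above and that this runs in `Program.cs` before the app starts handling requests:

```C#
using FirebaseAdmin;
using Google.Apis.Auth.OAuth2;

// One-time Firebase initialization at startup, so request handlers
// never need the DefaultInstance null-check or risk calling Create twice.
if (FirebaseApp.DefaultInstance == null)
{
    FirebaseApp.Create(new AppOptions
    {
        // Same service-account JSON used by PushNotificationService.
        Credential = GoogleCredential.FromFile("Properties/NotificationFile.json")
    });
}
```

With this in place, the `if (FirebaseApp.DefaultInstance == null)` block inside `SendPushNotificationAsync` becomes a harmless safety net rather than the primary initialization path.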
## PushNotification Controller
```C#
using Lacariz.Furniture.Domain.DataTransferObjects;
using Lacariz.Furniture.Service.Services.Implementations;
using Lacariz.Furniture.Service.Services.Interfaces;
namespace Lacariz.Furniture.API.Controllers.v1
{
public class PushNotificationController : BaseController
{
private readonly IPushNotificationService pushNotificationService;
public PushNotificationController(IPushNotificationService pushNotificationService)
{
this.pushNotificationService = pushNotificationService;
}
[HttpPost("api/v{version:apiVersion}/[controller]/send-push-notification")]
[ApiVersion("1.0")]
public async Task<IActionResult> SendPushNotification([FromBody] PushNotificationRequest request)
{
var response = await pushNotificationService.SendPushNotificationAsync(request.UserId, request.body);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
}
}
```

## Implementing the WishList Service for Lacariz Furniture E-commerce Application
```C#
using Lacariz.Furniture.Domain.Entities;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Data.Repositories.Interfaces
{
public interface IWishlistRepository
{
Task<WishlistItem> AddItemToWishlistAsync(string userId, string furnitureItemId);
Task<bool> RemoveItemFromWishlistAsync(string userId, string furnitureItemId);
Task<IEnumerable<WishlistItem>> GetUserWishlistAsync(string userId);
Task<bool> ClearUserWishlistAsync(string userId);
}
}
```
```C#
using Lacariz.Furniture.Data.Repositories.Interfaces;
using Lacariz.Furniture.Domain.Entities;
using Microsoft.EntityFrameworkCore;
using MongoDB.Driver;
namespace Lacariz.Furniture.Data.Repositories.Implementations
{
public class WishlistRepository : IWishlistRepository
{
private readonly IMongoDBLogContext dbContext;
public WishlistRepository(IMongoDBLogContext dbContext)
{
this.dbContext = dbContext;
}
public async Task<WishlistItem> AddItemToWishlistAsync(string userId, string furnitureItemId)
{
var filter = Builders<WishlistItem>.Filter.And(
Builders<WishlistItem>.Filter.Eq(w => w.UserId, userId),
Builders<WishlistItem>.Filter.Eq(w => w.FurnitureItemId, furnitureItemId)
);
var existingWishlistItem = await dbContext.WishlistItems.Find(filter).FirstOrDefaultAsync();
if (existingWishlistItem == null)
{
var wishlistItem = new WishlistItem
{
UserId = userId,
FurnitureItemId = furnitureItemId
};
await dbContext.WishlistItems.InsertOneAsync(wishlistItem);
return wishlistItem;
}
else
{
return existingWishlistItem;
}
}
public async Task<bool> ClearUserWishlistAsync(string userId)
{
try
{
var filter = Builders<WishlistItem>.Filter.Eq(w => w.UserId, userId);
var result = await dbContext.WishlistItems.DeleteManyAsync(filter);
return result.IsAcknowledged && result.DeletedCount > 0;
}
catch (Exception ex)
{
// Log the exception
Console.WriteLine($"Error occurred while clearing user wishlist: {ex.Message}");
return false;
}
}
public async Task<IEnumerable<WishlistItem>> GetUserWishlistAsync(string userId)
{
try
{
var filter = Builders<WishlistItem>.Filter.Eq(w => w.UserId, userId);
var wishlistItems = await dbContext.WishlistItems.Find(filter).ToListAsync();
return wishlistItems;
}
catch (Exception ex)
{
// Log the exception
Console.WriteLine($"Error occurred while retrieving user wishlist: {ex.Message}");
throw;
}
}
public async Task<bool> RemoveItemFromWishlistAsync(string userId, string furnitureItemId)
{
try
{
var filter = Builders<WishlistItem>.Filter.And(
Builders<WishlistItem>.Filter.Eq(w => w.UserId, userId),
Builders<WishlistItem>.Filter.Eq(w => w.FurnitureItemId, furnitureItemId)
);
var result = await dbContext.WishlistItems.DeleteOneAsync(filter);
return result.DeletedCount > 0;
}
catch (Exception ex)
{
// Log the exception
Console.WriteLine($"Error occurred while removing item from wishlist: {ex.Message}");
throw;
}
}
}
}
```
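`AddItemToWishlistAsync` uses a find-then-insert pattern, so two concurrent requests for the same user and item could both pass the `Find` check and insert duplicates. One way to harden this at the database level (a sketch, not part of the original code) is a unique compound index on `UserId` + `FurnitureItemId`, created once at startup:

```C#
using MongoDB.Driver;

// Hypothetical one-time index setup: with a unique compound index,
// MongoDB rejects a second (UserId, FurnitureItemId) pair even if two
// requests race past the Find check at the same time.
var keys = Builders<WishlistItem>.IndexKeys
    .Ascending(w => w.UserId)
    .Ascending(w => w.FurnitureItemId);

await dbContext.WishlistItems.Indexes.CreateOneAsync(
    new CreateIndexModel<WishlistItem>(
        keys,
        new CreateIndexOptions { Unique = true }));
```

The repository's insert would then catch the duplicate-key `MongoWriteException` and return the existing item, instead of relying on the read-before-write check alone.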
## WishList Service
```C#
using Lacariz.Furniture.Domain.Common.Generics;
using Lacariz.Furniture.Domain.Entities;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Service.Services.Interfaces
{
public interface IWishlistService
{
Task<NewResult<WishlistItem>> AddItemToWishlistAsync(string userId, string furnitureItemId);
Task<NewResult<bool>> RemoveItemFromWishlistAsync(string userId, string furnitureItemId);
Task<NewResult<IEnumerable<WishlistItem>>> GetUserWishlistAsync(string userId);
Task<NewResult<bool>> ClearUserWishlistAsync(string userId);
}
}
```
```C#
using Lacariz.Furniture.Data.Repositories.Implementations;
using Lacariz.Furniture.Data.Repositories.Interfaces;
using Lacariz.Furniture.Domain.Common.Generics;
using Lacariz.Furniture.Domain.Entities;
using Lacariz.Furniture.Service.Services.Interfaces;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Service.Services.Implementations
{
public class WishlistService : IWishlistService
{
private readonly IWishlistRepository wishlistRepository;
public WishlistService(IWishlistRepository wishlistRepository)
{
this.wishlistRepository = wishlistRepository;
}
public async Task<NewResult<WishlistItem>> AddItemToWishlistAsync(string userId, string furnitureItemId)
{
try
{
if (string.IsNullOrEmpty(userId))
throw new ArgumentNullException(nameof(userId));
if (string.IsNullOrEmpty(furnitureItemId))
throw new ArgumentNullException(nameof(furnitureItemId));
var addItem = await wishlistRepository.AddItemToWishlistAsync(userId, furnitureItemId);
if (addItem != null)
{
return NewResult<WishlistItem>.Success(addItem, "Item added to wishlist successfully");
}
return NewResult<WishlistItem>.Failed(null, "Failed to add item to wishlist.");
}
catch (Exception ex)
{
return NewResult<WishlistItem>.Error(null, $"Error while adding item to wishlist: {ex.Message}");
}
}
public async Task<NewResult<bool>> ClearUserWishlistAsync(string userId)
{
try
{
if (string.IsNullOrEmpty(userId))
throw new ArgumentNullException(nameof(userId));
bool success = await wishlistRepository.ClearUserWishlistAsync(userId);
if (!success)
{
return NewResult<bool>.Failed(false, "Error while clearing wishlist");
}
return NewResult<bool>.Success(true, "Wishlist cleared successfully");
}
catch (Exception ex)
{
return NewResult<bool>.Error(false, $"Error while clearing wishlist: {ex.Message}");
}
}
public async Task<NewResult<IEnumerable<WishlistItem>>> GetUserWishlistAsync(string userId)
{
try
{
var wishlistItems = await wishlistRepository.GetUserWishlistAsync(userId);
if (wishlistItems != null)
{
return NewResult<IEnumerable<WishlistItem>>.Success(wishlistItems, "User wishlist retrieved successfully.");
}
else
{
return NewResult<IEnumerable<WishlistItem>>.Failed(null, "Wishlist not found.");
}
}
catch (Exception ex)
{
return NewResult<IEnumerable<WishlistItem>>.Error(null, $"An error occurred while retrieving user wishlist: {ex.Message}");
}
}
public async Task<NewResult<bool>> RemoveItemFromWishlistAsync(string userId, string furnitureItemId)
{
try
{
if (string.IsNullOrEmpty(userId))
throw new ArgumentNullException(nameof(userId), "User ID cannot be null or empty.");
if (string.IsNullOrEmpty(furnitureItemId))
throw new ArgumentNullException(nameof(furnitureItemId), "Furniture item ID cannot be null or empty.");
bool success = await wishlistRepository.RemoveItemFromWishlistAsync(userId, furnitureItemId);
if (success)
{
return NewResult<bool>.Success(true, "Item removed from wishlist successfully.");
}
else
{
return NewResult<bool>.Failed(false, "Failed to remove item from wishlist.");
}
}
catch (Exception ex)
{
return NewResult<bool>.Error(false, $"Error occurred: {ex.Message}");
}
}
}
}
```
## WishList Controller
```C#
using Lacariz.Furniture.Domain.DataTransferObjects;
using Lacariz.Furniture.Service.Services.Interfaces;
namespace Lacariz.Furniture.API.Controllers.v1
{
public class WishlistController : BaseController
{
private readonly IWishlistService wishlistService;
public WishlistController(IWishlistService wishlistService)
{
this.wishlistService = wishlistService;
}
[HttpPost("api/v{version:apiVersion}/[controller]/add-item-to-wishlist")]
[ApiVersion("1.0")]
public async Task<IActionResult> AddItemToWishlist(WishlistRequest request)
{
var response = await wishlistService.AddItemToWishlistAsync(request.UserId, request.FurnitureItemId);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpDelete("api/v{version:apiVersion}/[controller]/remove-item-from-Wishlist")]
[ApiVersion("1.0")]
public async Task<IActionResult> RemoveItemFromWishlist(WishlistRequest request)
{
var response = await wishlistService.RemoveItemFromWishlistAsync(request.UserId, request.FurnitureItemId);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpGet("api/v{version:apiVersion}/[controller]/get-user-wishlist")]
[ApiVersion("1.0")]
public async Task<IActionResult> GetUserWishlist(string userId)
{
var response = await wishlistService.GetUserWishlistAsync(userId);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
[HttpDelete("api/v{version:apiVersion}/[controller]/clear-user-wishlist")]
[ApiVersion("1.0")]
public async Task<IActionResult> ClearUserWishlist(string userId)
{
var response = await wishlistService.ClearUserWishlistAsync(userId);
return response.ResponseCode switch
{
"00" => Ok(response),
"99" => BadRequest(response),
"77" => StatusCode(417, response), // DUPLICATE
_ => StatusCode(500, response)
};
}
}
}
```
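For reference, the `add-item-to-wishlist` and `remove-item-from-Wishlist` endpoints both bind a `WishlistRequest` body (the DTO is shown further down). A request payload might look like this — both IDs are placeholders:

```json
{
  "userId": "<user-id>",
  "furnitureItemId": "<furniture-item-id>"
}
```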
## EMAIL SERVICE
```C#
using Lacariz.Furniture.Domain.Common.Generics;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Service.Services.Interfaces
{
public interface IEmailService
{
Task<NewResult<string>> SendActivationEmail(string emailAddress, string verificationCode);
}
}
```
```C#
using System.Net.Mail;
using System.Net;
using Lacariz.Furniture.Domain.Common.Generics;
using Lacariz.Furniture.Service.Services.Interfaces;
namespace Lacariz.Furniture.Service.Services.Implementations
{
public class EmailService : IEmailService
{
public readonly IConfiguration configuration;
public EmailService(IConfiguration configuration)
{
this.configuration = configuration;
}
public async Task<NewResult<string>> SendActivationEmail(string emailAddress, string verificationCode)
{
NewResult<string> result = new NewResult<string>();
try
{
// Get SMTP server settings from appsettings
var smtpServer = configuration["EmailSettings:SmtpHost"];
var port = int.Parse(configuration["EmailSettings:SmtpPort"]);
var username = configuration["EmailSettings:SmtpUser"];
var password = configuration["EmailSettings:SmtpPass"];
using (var client = new SmtpClient())
{
// Specify SMTP server settings
client.Host = smtpServer;
client.Port = port;
client.UseDefaultCredentials = false;
client.Credentials = new NetworkCredential(username, password);
client.EnableSsl = true;
// Create and configure the email message
var message = new MailMessage();
message.From = new MailAddress(username); // Sender email address
message.To.Add(emailAddress);
message.Subject = "Account Activation"; // Email subject
message.Body = $"Please click the following link to activate your account: {verificationCode}"; // Email body
// Send email asynchronously
await client.SendMailAsync(message);
}
return NewResult<string>.Success(null, "Email sent successfully");
}
catch (Exception)
{
return NewResult<string>.Failed(null, "unable to send email");
}
}
}
}
```
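The `EmailService` above reads its SMTP settings from configuration via the `EmailSettings:SmtpHost`, `EmailSettings:SmtpPort`, `EmailSettings:SmtpUser`, and `EmailSettings:SmtpPass` keys. A matching section in `appsettings.json` might look like this — the host, port, and credential values are placeholders for your own provider's details:

```json
{
  "EmailSettings": {
    "SmtpHost": "smtp.example.com",
    "SmtpPort": "587",
    "SmtpUser": "no-reply@example.com",
    "SmtpPass": "<your-smtp-password>"
  }
}
```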
## DOMAIN LAYER : Models are listed below
```C#
using MongoDB.Bson.Serialization.Attributes;
using MongoDB.Bson;
using Lacariz.Furniture.Domain.Enum;
namespace Lacariz.Furniture.Domain.Entities
{
public class User
{
[BsonId]
[BsonRepresentation(BsonType.ObjectId)]
public string Id { get; set; } = Guid.NewGuid().ToString();
public string FirstName { get; set; }
public string LastName { get; set; }
public string EmailAddress { get; set; }
public string Address { get; set; }
public string PhoneNumber { get; set; }
public string Password { get; set; }
public bool isActivated { get; set; } = false;
public UserRole Role { get; set; } // Use UserRole enum for Role property
}
}
```
```C#
using Lacariz.Furniture.Domain.Enum;
using MongoDB.Bson.Serialization.Attributes;
using MongoDB.Bson;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.Entities
{
public class Admin
{
[BsonId]
[BsonRepresentation(BsonType.ObjectId)]
public string AdminId { get; set; } = Guid.NewGuid().ToString();
public string FirstName { get; set; }
public string LastName { get; set; }
public string EmailAddress { get; set; }
public string Password { get; set; }
public string ProfilePictureUrl { get; set; }
public string AdminLoginId { get; set; } = "A86478927";
public bool isActivated { get; set; } = false;
public UserRole Role { get; set; } // Use UserRole enum for Role property
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.Entities
{
public class CustomerInquiry
{
public string Id { get; set; }
public string UserId { get; set; }
public string Subject { get; set; }
public string Message { get; set; }
public DateTime CreatedAt { get; set; }
public bool IsResolved { get; set; }
}
}
```
```C#
using Lacariz.Furniture.Domain.Enum;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.Entities
{
public class FurnitureItem
{
public string Id { get; set; }
public string Name { get; set; }
public string Description { get; set; }
public decimal Price { get; set; }
public int StockQuantity { get; set; }
public FurnitureCategory Category { get; set; }
}
}
```
```C#
namespace Lacariz.Furniture.Domain.Entities
{
public class MyBankLog
{
public string ServiceName { get; set; }
public string Endpoint { get; set; }
public string UserId { get; set; }
public string ChannelId { get; set; }
public string RequestDate { get; set; }
public string RequestDetails { get; set; }
public string ResponseDate { get; set; }
public string UserToken { get; set; }
public string Response { get; set; } = "Failed";
public string ResponseDetails { get; set; }
public string AdditionalInformation { get; set; }
public string Amount { get; set; }
}
}
```
```C#
using Lacariz.Furniture.Domain.Enum;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.Entities
{
public class Order
{
public string Id { get; set; }
public string UserId { get; set; }
public List<OrderItem> Items { get; set; }
public DateTime OrderDate { get; set; }
public OrderStatus Status { get; set; }
public decimal TotalAmount { get; set; }
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.Entities
{
public class OrderItem
{
public string FurnitureItemId { get; set; }
public int Quantity { get; set; }
public decimal Price { get; set; }
}
}
```
```C#
using Lacariz.Furniture.Domain.Enum;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.Entities
{
public class PreOrder
{
public string Id { get; set; }
public string UserId { get; set; }
public string FurnitureItemId { get; set; }
public DateTime PreOrderDate { get; set; }
public PreOrderStatus Status { get; set; }
public int Quantity { get; set; }
}
}
```
```C#
using MongoDB.Bson.Serialization.Attributes;
using MongoDB.Bson;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.Entities
{
public class ShoppingCart
{
[BsonId]
[BsonRepresentation(BsonType.ObjectId)]
public string Id { get; set; }
public string UserId { get; set; } // Add UserId property
public List<ShoppingCartItem> Items { get; set; }
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.Entities
{
public class ShoppingCartItem
{
public string FurnitureItemId { get; set; }
public int Quantity { get; set; }
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.Entities
{
public class WishlistItem
{
public string Id { get; set; }
public string UserId { get; set; }
public string FurnitureItemId { get; set; }
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.DataTransferObjects
{
public class ActivateAccountRequest
{
public string EmailAddress { get; set; }
public string ActivationCode { get; set; }
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.DataTransferObjects
{
public class AdminLoginRequest
{
public string AdminLoginId { get; set; }
public string Password { get; set; }
}
}
```
```C#
using Lacariz.Furniture.Domain.Entities;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.DataTransferObjects
{
public class CreateOrderRequest
{
public string UserId { get; set; }
public List<OrderItem> items { get; set; }
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.DataTransferObjects
{
public class CreatePreOrderRequest
{
public string UserId { get; set; }
public string FurnitureItemId { get; set; }
public int Quantity { get; set; }
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.DataTransferObjects
{
public class LoginRequest
{
public string EmailAddress { get; set; }
public string Password { get; set; }
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.DataTransferObjects
{
public class PushNotificationRequest
{
public string UserId { get; set; }
public string body { get; set; }
// public string DeviceToken { get; set; }
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.DataTransferObjects
{
public class ResetPasswordRequest
{
public string EmailAddress { get; set; }
public string VerificationCode { get; set; }
public string NewPassword { get; set; }
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.DataTransferObjects
{
public class ResetPasswordRequestDto
{
public string EmailAddress { get; set; }
public string NewPassword { get; set; }
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.DataTransferObjects
{
public class StockLevelRequest
{
public string FurnitureItemId { get; set; }
public int NewStockLevel { get; set;}
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.DataTransferObjects
{
public class WishlistRequest
{
public string UserId { get; set; }
public string FurnitureItemId { get; set; }
}
}
```
## ENUM
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.Enum
{
public enum FurnitureCategory
{
LivingRoom,
Bedroom,
DiningRoom,
Office,
Outdoor,
Children
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.Enum
{
public enum OrderStatus
{
Pending,
Processing,
Shipped,
Delivered,
Cancelled
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.Enum
{
public enum PreOrderStatus
{
Pending,
Fulfilled,
Cancelled
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Domain.Enum
{
public enum UserRole
{
NormalUser,
Admin
}
}
```
## EXTERNAL SERVICE ENTITIES
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;
namespace Lacariz.Furniture.Service.Services.External_Service.Requests
{
public class FlutterwaveInitiateCardPaymentRequest
{
[JsonProperty("card_number")]
public string CardNumber { get; set; }
[JsonProperty("cvv")]
public string CVV { get; set; }
[JsonProperty("expiry_month")]
public string ExpiryMonth { get; set; }
[JsonProperty("expiry_year")]
public string ExpiryYear { get; set; }
[JsonProperty("currency")]
public string Currency { get; set; }
[JsonProperty("amount")]
public double Amount { get; set; }
[JsonProperty("fullname")]
public string FullName { get; set; }
[JsonProperty("email")]
public string Email { get; set; }
[JsonProperty("tx_ref")]
public string TransactionReference { get; set; }
[JsonProperty("redirect_url")]
public string? RedirectUrl { get; set; } = "https://www.flutterwave.ng";
[JsonProperty("authorization")]
public Authorization Authorization { get; set; }
}
public class Authorization
{
[JsonProperty("mode")]
public string Mode { get; set; }
[JsonProperty("city")]
public string City { get; set; }
[JsonProperty("address")]
public string Address { get; set; }
[JsonProperty("state")]
public string State { get; set; }
[JsonProperty("country")]
public string Country { get; set; }
[JsonProperty("zipcode")]
public string Zipcode { get; set; }
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;
namespace Lacariz.Furniture.Service.Services.External_Service.Requests
{
public class FlutterwaveValidateChargeRequest
{
[JsonProperty("otp")]
public string Otp { get; set; }
[JsonProperty("flw_ref")]
public string Flw_ref { get; set; }
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Service.Services.External_Service.Requests
{
public class FlutterwaveVerifyCardPaymentRequest
{
public string TransactionId { get; set; }
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;
namespace Lacariz.Furniture.Service.Services.External_Service.Requests
{
public class PaystackPaymentInitiationRequest
{
[JsonProperty("amount")]
public double Amount { get; set; }
[JsonProperty("email")]
public string? Email { get; set; }
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Lacariz.Furniture.Service.Services.External_Service.Requests
{
public class PaystackPaymentVerificationRequest
{
public string Reference { get; set; } = string.Empty;
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;
namespace Lacariz.Furniture.Service.Services.External_Service.Responses.Flutterwave.VerifyResponse
{
public class Data
{
[JsonProperty("id")]
public int Id { get; set; }
[JsonProperty("tx_ref")]
public string TxRef { get; set; }
[JsonProperty("flw_ref")]
public string FlwRef { get; set; }
}
public class FlutterwaveValidateChargeResponse
{
[JsonProperty("status")]
public string Status { get; set; }
[JsonProperty("message")]
public string Message { get; set; }
[JsonProperty("data")]
public Data Data { get; set; }
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;
namespace Lacariz.Furniture.Service.Services.External_Service.Responses.Flutterwave.VerifyPaymentResponse
{
public class Data
{
[JsonProperty("id")]
public int Id { get; set; }
[JsonProperty("tx_ref")]
public string TxRef { get; set; }
[JsonProperty("flw_ref")]
public string FlwRef { get; set; }
}
public class FlutterwaveVerifyCardPaymentResponse
{
[JsonProperty("status")]
public string Status { get; set; }
[JsonProperty("message")]
public string Message { get; set; }
[JsonProperty("data")]
public Data Data { get; set; }
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;
namespace Lacariz.Furniture.Service.Services.External_Service.Responses.Flutterwave
{
public class Authorization
{
[JsonProperty("mode")]
public string Mode { get; set; }
[JsonProperty("endpoint")]
public string Endpoint { get; set; }
}
public class Data
{
[JsonProperty("id")]
public int Id { get; set; }
[JsonProperty("tx_ref")]
public string TxRef { get; set; }
[JsonProperty("flw_ref")]
public string FlwRef { get; set; }
[JsonProperty("processor_response")]
public string ProcessorResponse { get; set; }
}
public class Meta
{
[JsonProperty("authorization")]
public Authorization Authorization { get; set; }
}
public class FlutterwaveInitiateCardPaymentResponse
{
[JsonProperty("status")]
public string Status { get; set; }
[JsonProperty("message")]
public string Message { get; set; }
[JsonProperty("data")]
public Data Data { get; set; }
[JsonProperty("meta")]
public Meta Meta { get; set; }
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;
namespace Lacariz.Furniture.Service.Services.External_Service.Responses.Verification
{
public class Authorization
{
[JsonProperty("authorization_code")]
public string AuthorizationCode { get; set; }
[JsonProperty("bin")]
public string Bin { get; set; }
[JsonProperty("last4")]
public string Last4 { get; set; }
[JsonProperty("exp_month")]
public string ExpMonth { get; set; }
[JsonProperty("exp_year")]
public string ExpYear { get; set; }
[JsonProperty("channel")]
public string Channel { get; set; }
[JsonProperty("card_type")]
public string CardType { get; set; }
[JsonProperty("bank")]
public string Bank { get; set; }
[JsonProperty("country_code")]
public string CountryCode { get; set; }
[JsonProperty("brand")]
public string Brand { get; set; }
[JsonProperty("reusable")]
public bool Reusable { get; set; }
[JsonProperty("signature")]
public string Signature { get; set; }
[JsonProperty("account_name")]
public object AccountName { get; set; }
}
public class Customer
{
[JsonProperty("id")]
public int Id { get; set; }
[JsonProperty("first_name")]
public object FirstName { get; set; }
[JsonProperty("last_name")]
public object LastName { get; set; }
[JsonProperty("email")]
public string Email { get; set; }
[JsonProperty("customer_code")]
public string CustomerCode { get; set; }
[JsonProperty("phone")]
public object Phone { get; set; }
[JsonProperty("metadata")]
public object Metadata { get; set; }
[JsonProperty("risk_action")]
public string RiskAction { get; set; }
[JsonProperty("international_format_phone")]
public object InternationalFormatPhone { get; set; }
}
public class Data
{
[JsonProperty("id")]
public long Id { get; set; }
[JsonProperty("domain")]
public string Domain { get; set; }
[JsonProperty("status")]
public string Status { get; set; }
[JsonProperty("reference")]
public string Reference { get; set; }
[JsonProperty("amount")]
public int Amount { get; set; }
[JsonProperty("message")]
public object Message { get; set; }
[JsonProperty("gateway_response")]
public string GatewayResponse { get; set; }
[JsonProperty("paid_at")]
public DateTime? PaidAt { get; set; }
[JsonProperty("created_at")]
public DateTime? CreatedAt { get; set; }
[JsonProperty("channel")]
public string Channel { get; set; }
[JsonProperty("currency")]
public string Currency { get; set; }
[JsonProperty("ip_address")]
public string IpAddress { get; set; }
[JsonProperty("metadata")]
public string Metadata { get; set; }
[JsonProperty("log")]
public Log Log { get; set; }
[JsonProperty("fees")]
public int? Fees { get; set; }
[JsonProperty("fees_split")]
public object? FeesSplit { get; set; }
[JsonProperty("authorization")]
public Authorization Authorization { get; set; }
[JsonProperty("customer")]
public Customer Customer { get; set; }
[JsonProperty("plan")]
public object Plan { get; set; }
[JsonProperty("split")]
public Split Split { get; set; }
[JsonProperty("order_id")]
public object OrderId { get; set; }
[JsonProperty("requested_amount")]
public int RequestedAmount { get; set; }
[JsonProperty("pos_transaction_data")]
public object PosTransactionData { get; set; }
[JsonProperty("source")]
public object Source { get; set; }
[JsonProperty("fees_breakdown")]
public object FeesBreakdown { get; set; }
[JsonProperty("transaction_date")]
public DateTime TransactionDate { get; set; }
[JsonProperty("plan_object")]
public PlanObject PlanObject { get; set; }
[JsonProperty("subaccount")]
public Subaccount Subaccount { get; set; }
}
public class History
{
[JsonProperty("type")]
public string Type { get; set; }
[JsonProperty("message")]
public string Message { get; set; }
[JsonProperty("time")]
public int Time { get; set; }
}
public class Log
{
[JsonProperty("start_time")]
public int StartTime { get; set; }
[JsonProperty("time_spent")]
public int TimeSpent { get; set; }
[JsonProperty("attempts")]
public int Attempts { get; set; }
[JsonProperty("errors")]
public int Errors { get; set; }
[JsonProperty("success")]
public bool Success { get; set; }
[JsonProperty("mobile")]
public bool Mobile { get; set; }
[JsonProperty("input")]
public List<object> Input { get; set; }
[JsonProperty("history")]
public List<History> History { get; set; }
}
public class PlanObject
{
}
public class PaystackPaymentVerificationResponse
{
[JsonProperty("status")]
public bool Status { get; set; }
[JsonProperty("message")]
public string Message { get; set; }
[JsonProperty("data")]
public Data Data { get; set; }
}
public class Split
{
}
public class Subaccount
{
}
}
```
```C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;
namespace Lacariz.Furniture.Service.Services.External_Service.Responses
{
// Root myDeserializedClass = JsonConvert.DeserializeObject<Root>(myJsonResponse);
public class Data
{
[JsonProperty("authorization_url")]
public string AuthorizationUrl { get; set; }
[JsonProperty("access_code")]
public string AccessCode { get; set; }
[JsonProperty("reference")]
public string Reference { get; set; }
}
public class PaystackPaymentInitiationResponse
{
[JsonProperty("status")]
public bool Status { get; set; }
[JsonProperty("message")]
public string Message { get; set; }
[JsonProperty("data")]
public Data Data { get; set; }
}
}
```
## SERVICE REGISTRATION
```C#
using Lacariz.Furniture.Data.Repositories.Implementations;
using Lacariz.Furniture.Data.Repositories.Interfaces;
using Lacariz.Furniture.Domain.Config.Implementations;
using Lacariz.Furniture.Domain.Config.Interfaces;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using MongoDB.Driver;
using System.Configuration;
namespace Lacariz.Furniture.Data;
public static class ServiceRegistration
{
public static IServiceCollection AddDataDependencies(this IServiceCollection services, IConfiguration configuration)
{
try
{
services.AddScoped<ISampleRepository, SampleRepository>();
services.AddScoped<IUserRepository, UserRepository>();
services.AddScoped<IFurnitureRepository, FurnitureRepository>();
services.AddScoped<IShoppingCartRepository, ShoppingCartRepository>();
services.AddScoped<IOrderRepository, OrderRepository>();
services.AddScoped<IPreOrderRepository, PreOrderRepository>();
services.AddScoped<IWishlistRepository, WishlistRepository>();
services.AddScoped<IInventoryRepository, InventoryRepository>();
services.AddScoped<ICustomerSupportRepository, CustomerSupportRepository>();
services.AddSingleton<IMongoClient, MongoClient>(sp => new MongoClient(configuration["MongoDbSettings:ConnectionString"]));
services.AddSingleton<IMongoDbConfig, MongoDbConfig>(
sp => new MongoDbConfig(configuration.GetSection("MongoDbSettings:ConnectionString").Value,
configuration.GetSection("MongoDbSettings:DatabaseName").Value));
services.AddScoped<IMongoDBLogContext, MongoDBLogContext>();
services.AddScoped<IMyBankLogRepository, MyBankLogRepository>();
return services;
}
catch (Exception ex)
{
return null;
}
}
}
```
```C#
using Lacariz.Furniture.Service.Helpers.Implementations;
using Lacariz.Furniture.Service.Helpers.Interfaces;
using Lacariz.Furniture.Service.Services.External_Service.Implementations;
using Lacariz.Furniture.Service.Services.External_Service.Interfaces;
using Lacariz.Furniture.Service.Services.Implementations;
using Lacariz.Furniture.Service.Services.Interfaces;
namespace Lacariz.Furniture.Service;
public static class ServiceRegistration
{
public static IServiceCollection AddServiceDependencies(this IServiceCollection services, IConfiguration configuration)
{
services.AddScoped<ISampleService, SampleService>();
services.AddScoped<IUserService, UserService>();
services.AddScoped<IEmailService, EmailService>();
services.AddScoped<IShoppingCartService, ShoppingCartService>();
services.AddScoped<IFurnitureService, FurnitureService>();
services.AddScoped<IOrderService, OrderService>();
services.AddScoped<IPreOrderService, PreOrderService>();
services.AddScoped<IWishlistService, WishlistService>();
services.AddScoped<IInventoryService, InventoryService>();
services.AddScoped<IPushNotificationService, PushNotificationService>();
services.AddScoped<ICustomerSupportService, CustomerSupportService>();
services.AddScoped<IPaystackService, PaystackService>();
services.AddScoped<IFlutterwaveService, FlutterwaveService>();
services.AddScoped<IPaymentService, PaymentService>();
services.AddScoped<IEncryptionHelper, EncryptionHelper>();
services.AddScoped<IAuthService, AuthService>();
services.AddTransient<IHttpContextAccessor, HttpContextAccessor>();
services.AddScoped<IRestHelper, RestHelper>();
services.AddAutoMapper(typeof(ServiceRegistration));
services.AddControllersWithViews();
return services;
}
}
```
## FIREBASE INITIALIZATION
```C#
namespace Lacariz.Furniture.API
{
using FirebaseAdmin;
using Google.Apis.Auth.OAuth2;
public static class FirebaseInitializer
{
public static void Initialize()
{
FirebaseApp.Create(new AppOptions
{
Credential = GoogleCredential.FromFile("appsettings.json/appsettings.Development.json")
});
}
}
}
```
## EXTENSION
```C#
public static class Extensions
{
public static WebApplicationBuilder AddConfiguration(this WebApplicationBuilder builder)
{
builder.Configuration
.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
.AddJsonFile($"appsettings.{builder.Environment.EnvironmentName}.json", optional: true, reloadOnChange: true);
return builder;
}
}
```
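Wiring it all together: a minimal `Program.cs` sketch (assumed — the original does not show the composition root) that chains the `AddConfiguration` extension above with the `AddDataDependencies` and `AddServiceDependencies` registrations defined earlier. Adjust namespaces and middleware to your project:

```C#
// Program.cs — hypothetical composition root for this solution.
using Lacariz.Furniture.Data;      // AddDataDependencies
using Lacariz.Furniture.Service;   // AddServiceDependencies

var builder = WebApplication.CreateBuilder(args);

// Load appsettings.json plus the environment-specific override (see Extensions above).
builder.AddConfiguration();

// Register repositories, MongoDB config, and application services.
builder.Services.AddDataDependencies(builder.Configuration);
builder.Services.AddServiceDependencies(builder.Configuration);

builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();
```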
`LinkedIn Account` : [LinkedIn](https://www.linkedin.com/in/matthew-odumosu/)
`Twitter Account `: [Twitter](https://twitter.com/iamcymentho)
**Credit**: Graphics sourced from [Medium](https://medium.com/c-sharp-progarmming/tutorial-net-core-3-1-first-web-api-for-beginners-1dc82d3d2794)
| iamcymentho |
1,895,823 | Creating a CV in 2024 | A third person blog. Back by unpopular demand! Ben was thinking he should... | 27,670 | 2024-07-10T20:05:56 | https://dev.to/ivorjetski/creating-a-cv-in-2024-21be | cssart, cv, frontend, css |
## A third person blog.
## Back by unpopular demand!
Ben was thinking he should probably update his CV. He hadn't done this for... Ben, how long has it been? He doesn't know 🙄 but he says the format has not really changed since 2010 😬😮
So it's about time for a change!
Ben had been putting off updating his CV (for 15 years), but when thinking about how to put it off even longer, he came up with two solutions:
1. Writing this blog about it and not actually do it.
2. Make it a fun thing to do!
He wasn't going to ask his chatty mate, Gary Pon-Tovi (ChatGPT), this time!
To make it fun and less of a chore, he thought he would start from scratch in the only way that Ben loves the most to start from scratch... A nice, empty, HTML document! 🤩
This was quite interesting as he thought he would have to make it the most accessible, most perfectly semantic web page he had ever produced. To be easily found and also demonstrate his skill level to future employers. A daunting prospect! But Ben loves nothing more than a seemingly impossible challenge, so he set to work, full of vim and vigor*.
*Wow! Vim & vigor broke my spellcheck! Ben's going off down a vim & vigor word-origin rabbit hole now. Gosh does anyone still use Vim? The most user-unfriendly text editor ever used!
That aside... Ben's ultimate CV would of course be only HTML and CSS, and have a link through to his reasonably up-to-date (CSS only) portfolio:
{% codepen https://codepen.io/ivorjetski/pen/xxzpeoO %}
(This is an older version. The real version is [here](https://tinydesign.co.uk/) )
So... Ben has built a basic semantic layout of the page and designed the left side of the header. Everything is pretty dull so far. It needs a link to Ben's portfolio in the header. He doesn't really want to add a profile picture. 🤔 So what about his logo? The CSS version of his painting signature? And when you hover over it... It could rotate!? As a hint to his portfolio!!? Great ideas!!!
But a card is now boring to Ben, and a cube is so basic! 🤔 What about a three-sided shape? A three-sided shape that has proper lighting? Like the Moka pot he did?! Yes!! These are all great ideas! Ben thought. Much more fun than writing a CV!
{% codepen https://codepen.io/ivorjetski/pen/yLGQYJo %}
After the moka pot, Ben made a room with a ceiling fan, which had realistic lighting. The lighting from the window is barely noticeable, but Ben thinks it makes it look so much more three-dimensional.
{% codepen https://codepen.io/ivorjetski/pen/vYbPgdE %}
The lighting effect is actually quite simple to do. Unlike background colours, CSS filters blend from one state to another. So it's just a case of timing when to add brightness or shade along with the rotation.
```css
@keyframes light {
50% {
filter: brightness(0.5);
}
}
```
And after all this procrastination... Ben didn't do any of that! He instead put in a profile pic, like he didn't want to do 🙃 And then spent far too long styling the link underlines and YouTube hover effects.
So anyway, here it is, let him know what you think:
[](https://www.tinydesign.co.uk/cv/)
[BEN EVANS - WEB DESIGNER - LONDON](https://www.tinydesign.co.uk/cv/)
| ivorjetski |
1,897,853 | Build a Chatbot with Amazon Bedrock: Automate API Calls Using Powertools for AWS Lambda and CDK | Bedrock and LLMs are the cool kids in town. I decided to figure out how easy it is to build a... | 0 | 2024-07-08T07:51:34 | https://www.ranthebuilder.cloud/post/automating-api-calls-with-agents-for-amazon-bedrock-with-powertools | aws, bedrock, llm, serverless | ](https://cdn-images-1.medium.com/max/3196/1*cskwkbPAO21YTz01zr0QeQ.png)
Bedrock and LLMs are the cool kids in town. I decided to figure out how easy it is to build a chatbot using Bedrock agents’ capability to trigger an API based on OpenAPI schema with the new Bedrock support that Powertools for AWS offers.
**This post will teach you how to use Amazon Bedrock agents to automate Lambda function-based API calls using Powertools for AWS and CDK for Python. We will also discuss the limitations and gotchas of this solution with Bedrock agents.**
Disclaimer: I’m not an AI guy, so if you want to learn more about how it works under the hood, this is not your blog. **I will show you practical code for setting up Bedrock agents with your APIs in 5 minutes.**
](https://cdn-images-1.medium.com/max/2624/0*lrHYLOizEg3p81m0.png)
**This blog post was originally published on my website, [“Ran The Builder.”](https://www.ranthebuilder.cloud/)**
## Table of Contents
1. **Bedrock Introduction**
2. Bedrock Agents
3. Powertools for AWS Bedrock Support
4. **What are We Building**
5. Infrastructure
6. Lambda Function Handler — Powertools for Bedrock
7. Generating OpenAPI File
8. **Bedrock Agent in Action**
9. **Limitations and Gotchas**
10. **Summary**
## Bedrock Introduction
> *Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies — [https://aws.amazon.com/bedrock/](https://aws.amazon.com/bedrock/)*
Bedrock introduces a bold claim:
> *The easiest way to build and scale generative AI applications with foundation models*
However, the fact that I could build such an application within an hour or so speaks volumes about this claim. I did, however, have help; I used [Powertools for AWS](https://docs.powertools.aws.dev/lambda/python/) Lambda's excellent documentation and new [support](https://docs.powertools.aws.dev/lambda/python/latest/core/event_handler/bedrock_agents/) for Bedrock agents.
The way I see it, Bedrock offers a wide range of APIs and agents that empower you to interact with third-party [LLMs](https://en.wikipedia.org/wiki/Large_language_model) and utilize them for any purpose, depending on the LLM's expertise: whether it's a general-purpose helper, writing music, creating pictures from text, or calling APIs on your behalf.
Bedrock doesn't require you to deploy or manage any particular infrastructure (VPCs, etc.). It is a fully managed service, and you pay only for what you use, but it can get expensive. The [pricing](https://aws.amazon.com/bedrock/pricing/) model is quite complex and varies greatly depending on the models and features you select.
Highly sought-after features like guardrails (clean language filters, personal identifiable information scanners, etc.) add to the cost, but again, they are fully managed.
### Bedrock Agents
> *Agents enable generative AI applications to execute multistep tasks across company systems and data sources*
Agents are your friendly chatbots that can run multi-step tasks. In our case, they will call APIs according to user input.
Bedrock agents have several components:
1. Model — the LLM you select for the agent to use.
2. Instructions — the initial prompt that sets the context for the agent’s session. This is a classic [prompt engineering](https://platform.openai.com/docs/guides/prompt-engineering) practice: ‘you are a sales agent, selling X for customers’.
3. Actions groups — You define the agent’s actions for the user. You provide an OpenAPI schema and a Lambda function that implements that OpenAPI schema.
4. Knowledge bases — Optional. The agent queries the knowledge base for extra context to augment response generation and input into steps of the orchestration process.
If you want to learn how they work, check out [AWS docs](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-how.html).
](https://cdn-images-1.medium.com/max/3412/1*v1L4vtbl-_8yVTpF4cwbCw.png)
### Powertools for AWS Bedrock Support
Agents for Bedrock, or just “agents,” understand the free text input, find the correct API to trigger, build the payload according to the OpenAPI, and learn whether the call was successful.
At first, I didn't realize that Bedrock expects your API's contract to change.

Usually, I serve my APIs with API Gateway, which triggers my Lambda functions. The event sent to the function contains API Gateway metadata and information, and the body comes as a JSON-encoded string.

Agents, however, don't interact with an API Gateway URL. They interact with a Lambda function (or more than one), each providing a different OpenAPI file. Agents invoke the functions directly, send a different [input](https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway.html#apigateway-example-event) than API Gateway does, and expect a different response than the regular API Gateway [response](https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway.html#apigateway-types-transforms).
Powertools abstract these differences. I was able to take a Lambda function that worked behind an API Gateway, use Powertools’ event handler for API Gateway, and change the event handler type to Bedrock handler, and it just worked with the agents. Amazing!
Below, you can see the flow of events:
1. Agents use LLM and user input to understand what API (Lambda function) to invoke using the OpenAPI file that describes the API.
2. Powertools handles the Bedrock agent input parsing, validation, and routes to the correct inner function. Each inner function handles a different API route, thus creating a [monolith Lambda](https://docs.aws.amazon.com/lambda/latest/operatorguide/monolith.html).
3. Your custom business logic runs and returns a response that adheres to the OpenAPI schema.
4. Powertools returns a Bedrock format response that contains the response from section 3.
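Steps 2–4 can be sketched without any framework. The following minimal Python version is illustrative only: the field names follow the Bedrock agent request and response format, the route table and business logic are placeholders, and Powertools does far more (validation, OpenAPI generation) than this stripped-down dispatcher.

```python
import json

def handle_create_order(payload: dict) -> dict:
    # placeholder business logic for POST /api/orders
    return {"id": "order-123", "customer_name": payload.get("customer_name")}

# illustrative route table: (HTTP method, API path) -> handler
ROUTES = {("POST", "/api/orders"): handle_create_order}

def lambda_handler(event: dict, context=None) -> dict:
    # route on the path/method pair the agent sends
    key = (event["httpMethod"].upper(), event["apiPath"])

    # flatten the agent's "properties" list into a plain dict payload
    props = (event.get("requestBody", {})
                  .get("content", {})
                  .get("application/json", {})
                  .get("properties", []))
    payload = {p["name"]: p["value"] for p in props}

    body = ROUTES[key](payload)

    # wrap the handler's result in the response envelope agents expect
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "apiPath": event["apiPath"],
            "httpMethod": event["httpMethod"],
            "httpStatusCode": 200,
            "responseBody": {"application/json": {"body": json.dumps(body)}},
        },
    }
```

With Powertools, all of this plumbing is handled for you; the sketch only shows why a plain API Gateway handler cannot be reused as-is.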
**This brings me to problem number one — you can’t use the Lambda function with Bedrock agents and an API Gateway. You need to choose only one.**
**This is a major problem. It means I need to duplicate my APIs: one for Bedrock and another for regular customers. The inputs and responses are just too different. It's really a shame that Bedrock didn't extend the API Gateway model by adding Bedrock agent context and headers.**
](https://cdn-images-1.medium.com/max/3268/1*-e5V-f-Qmfc3uz-Zi-58rw.png)
**If you want to see a TypeScript variation without Powertools, then I highly suggest you check out Lee Gilmore’s [post](https://blog.serverlessadvocate.com/automating-tasks-using-amazon-bedrock-agents-and-ai-4b6fb8856589).**
## What are We Building
We will build a Bedrock agent that will represent a seller. We will ask the agent to purchase orders on our behalf. We will use my [AWS Lambda Handler Cookbook](https://github.com/ran-isenberg/aws-lambda-handler-cookbook) template open-source project that represents an order service. You can place orders by calling the POST ‘/api/orders’ API with a JSON payload of customer name and item counts. Orders are saved to a DynamoDB table by a Lambda function.
The Cookbook template was recently featured in an [AWS article](https://aws.amazon.com/blogs/infrastructure-and-automation/best-practices-for-accelerating-development-with-serverless-blueprints/).
I altered the template and replaced the API Gateway with Bedrock agents. We will build the following parts:
1. Agents with CDK infrastructure as code
2. Generate OpenAPI schema file
3. Lambda function handler’s code to support Bedrock
All code examples can be found at the bedrock [branch](https://github.com/ran-isenberg/aws-lambda-handler-cookbook/tree/bedrock2).
### Infrastructure
We will start with the CDK code to deploy the Lambda function and the agent.
You can also use SAM according to Powertool’s [documentation](https://docs.powertools.aws.dev/lambda/python/latest/core/event_handler/bedrock_agents/#using-aws-serverless-application-model-sam).
First, add the ‘cdklabs’ constructs to your poetry file.

Let’s review the Bedrock construct.
This construct is 90% identical to the one shown in Powertools' excellent documentation:
{% gist https://gist.github.com/ran-isenberg/f0716bc45cf7a46ece09b4398a217d93.js %}
In lines 18–24, we create the Bedrock agent.
In line 21, we select the model we wish to use.
In line 22, we supply the prompt engineering instruction.
In line 23, we [prepare](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_PrepareAgent.html) the agent to be used and tested immediately after deployment.
In lines 26–38, we prepare the action group and connect the Lambda function, passing in the OpenAPI file. In this example, the OpenAPI file resides in the 'docs/swagger/openapi.json' file.
### Lambda Function Handler — Powertools for Bedrock
Let’s review the code for the Lambda function that implements the API of the orders service.
{% gist https://gist.github.com/ran-isenberg/bcddfc06a47ba9773e2452838993e7d6.js %}
In line 17, we initiate the Bedrock Powertools event handler. Input validation is enabled by default.
In lines 27–43, we define our POST /API/orders API. This metadata helps Powertools generate an OpenAPI file for us (see next section). It defines the API description, input schema, HTTP responses, and their schemas.
In lines 59–62, we define the function’s entry point. According to the input path and HTTP command (POST), it will route the Bedrock agent request to the correct inner function. In this example, there is just one function in line 20.
In lines 49–53, we handle the input (validated automatically by Powertools!) and pass it to the inner logic handler to create the order and save it in the database. This is a hexagonal architectural implementation. You can learn more about it [here](https://www.ranthebuilder.cloud/post/learn-how-to-write-aws-lambda-functions-with-architecture-layers).
In line 56, we return a Pydantic response object, and Powertools handles the Bedrock response format for us.
Find the complete code [here](https://github.com/ran-isenberg/aws-lambda-handler-cookbook/blob/bedrock2/service/handlers/handle_create_order.py).
### Generating OpenAPI File
Powertools for AWS Lambda provides a way to generate the required OpenAPI file from the code.
Let’s see the simplified version below:
{% gist https://gist.github.com/ran-isenberg/f79e38881d1ae2d92ea52e6b90c14150.js %}
You can run this code and then move the output file to the folder you assign in the CDK construct at line 28.
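One gotcha worth handling at this step: Bedrock agents currently accept OpenAPI 3.0.0, while Pydantic v2 emits a 3.1.0 schema (more on this in the limitations section). If your API uses no 3.1-only features, you can patch the version field as a post-processing step before saving the file. A small sketch; this helper is my own, not part of Powertools:

```python
import json

def downgrade_openapi_version(schema_json: str) -> str:
    """Rewrite the schema's version field from 3.1.x to 3.0.0.

    Only safe if the schema relies on no 3.1-specific features."""
    spec = json.loads(schema_json)
    if spec.get("openapi", "").startswith("3.1"):
        spec["openapi"] = "3.0.0"
    return json.dumps(spec, indent=2)

# usage sketch: patch the generated schema string before writing it
# with open("docs/swagger/openapi.json", "w") as f:
#     f.write(downgrade_openapi_version(generated_schema_json))
```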
## Bedrock Agent in Action
After we deploy our service, we can enter the Bedrock console and see our new agent waiting for us:

Let’s test it out and chat with it in the console:

Success! It understood that we wanted to place an order; it built the payload input, executed the Lambda function successfully, and even displayed its output.
Let’s see what input it sent to the function (I printed it off the Lambda logs)
{% gist https://gist.github.com/ran-isenberg/a58315fc4a17321612a32082c5b76542.js %}
As you can see, it's very different from the API Gateway schema. Line 8 contains metadata about the agent origin and the API path; lines 9–18 contain the payload; and line 20 shows the HTTP command.
Let’s verify the functionality of the function and the accuracy of the agents by examining the order that was successfully saved to the DynamoDB table.

As you can see, the order id matches the agent’s response and the input parameters.
## Limitations and Gotchas
Powertools' documentation and code examples were spot on; everything worked out of the box.
Powertools did an excellent job. However, I’ve had several issues with Bedrock agents:
1. Bedrock agents currently support OpenAPI schema 3.0.0 but not 3.1.0. Pydantic 2, which I use to define my models, generates the latest version. I had to change the number to 3.0.0 manually and hope that my API does not use any special 3.1.0 features. Luckily, it didn't, but it was tough to find the error, as it was raised during CloudFormation deployment ('OpenAPI file rejected') and didn't explain why my schema file was unsuitable. Powertools' excellent support over their Discord channel helped me. Thanks, Leandro!
2. Your Lambda needs to be monolithic and contain all the routes of your OpenAPI. An alternative would be to create multiple action groups with multiple OpenAPI files, which is doable but does not scale with a large number of routes and APIs.
3. **This one's a major issue** — You can't use the Lambda function with agents and an API Gateway. You need to choose. This means I need to duplicate my APIs: one for Bedrock, another for regular customers. The inputs and responses are just too different. **It's really a shame that Bedrock didn't extend the API Gateway model by adding Bedrock agent context and headers.**
4. My agent sent an incorrect payload type. The schema marked a field as an integer, but the agent kept sending its value in the JSON object as a string. My API has strict validation, so it didn't convert the string to a number and failed the request. I had to debug the matter, which was not as easy as I'd hoped. Your mileage may vary with different LLMs; I chose the "simplest" one.
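For issue 4, one possible workaround (instead of loosening your API's validation) is to coerce each agent-supplied value to the type its own metadata declares before validation runs. The sketch below works against the `properties` list format shown in the agent event earlier; the helper itself is illustrative, not part of Powertools:

```python
def coerce_agent_properties(properties: list[dict]) -> dict:
    """Cast each {'name', 'type', 'value'} entry the agent sends to the
    Python type its own metadata declares, since some models send every
    value as a string."""
    casters = {
        "integer": int,
        "number": float,
        "boolean": lambda v: str(v).lower() == "true",
        "string": str,
    }
    return {
        p["name"]: casters.get(p.get("type"), str)(p["value"])
        for p in properties
    }
```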
## Summary
Chatting with an “agent” that resulted in a DynamoDB entry being created is quite amazing. Even more amazing is the fact that I was able to get this working so fast. Managed services with CDK support are the way to go forward!
I hope Bedrock makes changes according to my feedback and improves the user experience. The current implementation does not allow me to use it in production APIs without duplicating a lot of code.
| ranisenberg |
1,899,146 | Building with and Testing Rapyd's OpenAPI | The Rapyd API provides a straightforward and efficient process for integrating payment... | 0 | 2024-07-11T08:00:00 | https://community.rapyd.net/t/building-with-and-testing-rapyds-openapi/59373 | flutter, payments, fintech, tutorial | The [Rapyd API](https://www.rapyd.net/developers/get-started/) provides a straightforward and efficient process for integrating payment infrastructures into your application. Rapyd also recently introduced an OpenAPI specification designed to further simplify the integration of Rapyd's payment-related functionalities. As such, you are able to seamlessly accept various kinds of payments, streamline cross-border transactions, and enhance your business's cash flow management.
This article explains how to integrate Rapyd's OpenAPI into an existing Flutter application, which has a product page, checkout button, and payment success and failure pages.
## Building an Application with Rapyd's OpenAPI
Before you begin, you'll need the Flutter SDK installed on your local computer. If you don't already have the SDK, go to the [install page](https://docs.flutter.dev/get-started/install), select your operating system, and follow the installation steps provided for your operating system.
You'll also need a Rapyd account, which you can easily create on the [sign-up page](https://dashboard.rapyd.net/sign-up).
### Cloning and Demonstrating the Flutter Demo App
So you can focus on the integration, this article uses an already-existing Flutter app. To clone the app, navigate to the [GitHub repo](https://github.com/Rapyd-Samples/rapyd-openapi-flutter-checkout-demo), click the green **Code** button on the top right, and you should see different clone options.

Use any of the options to clone the repo, and then change the directory to the application directory and run the application with the following command, where `{{YOUR_PORT_NUMBER}}` is your preferred and open port number:
```
flutter run -d chrome --web-port {{YOUR_PORT_NUMBER}} --web-hostname 0.0.0.0 --web-browser-flag "--disable-web-security"
```
This tutorial uses port 8080. The Flutter demo is built mainly for Flutter web, so irrespective of your machine's operating system, you will be able to demo the application. When you run the command, you should see an output similar to the following:

A new Chrome window will open automatically, showing the landing page.

Click on any of the products to see the product detail page.

Click the **Checkout** button to see the payment page.

Fill out the payment form using any details. After filling out the form, click on the **Pay** button. No payment or action occurs when you click the button as you are yet to integrate Rapyd. However, in the case of a successful payment, you would be redirected to the "payment successful" page.

### Integrating Rapyd's OpenAPI into the Demo Flutter Application
The following instructions will guide you through the process of integrating Rapyd's OpenAPI into the application, allowing you to make an actual or test payment to purchase a product.
#### Obtaining Rapyd Credentials
You'll need to locate your access and secret keys as you are going to use them on every API request. To locate these, navigate to the [Rapyd Client Portal login page](https://dashboard.rapyd.net/login), log in with your login details, and click **Developers** on the sidebar. You should be routed to Rapyd's Credential Details page.

#### Installing the Required Package
The Flutter http, crypto, and convert packages must be installed to consume the Rapyd APIs. To install the packages, go to the root directory of the application and run the following command:
```bash
flutter pub add http && flutter pub add crypto && flutter pub add convert
```
Alternatively, you can add the code snippet below into the **pubspec.yaml** file under the `dependencies` key:
```
http: ^0.13.4
crypto: ^3.0.3
convert: ^3.1.1
```
### Setting Up Rapyd Request Headers and Base Functions
You'll now set up the request headers and the base functions in a utility file named **rapyd.dart**.
Start by creating a new folder named `utilities` in the `lib` folder. In the `utilities` folder, create a file named **rapyd.dart**.
Into that file, copy and paste the following code snippet. It contains functions that set up the salt, signature, and request headers, as well as a function that receives the API method, the API endpoint URL, and the request body, and then initiates an API request to Rapyd:
```
import 'dart:convert';
import 'dart:math';
import 'package:convert/convert.dart';
import 'package:http/http.dart' as http;
import 'package:crypto/crypto.dart';
class Rapyd {
// Declaring global variables
final String ACCESS_KEY = "{{YOUR_ACCESS_KEY}}";
final String SECRET_KEY = "{{YOUR_SECRET_KEY}}";
final String BASEURL = "https://sandboxapi.rapyd.net";
// Generating the salt for each request
String getSaltString(int len) {
var randomValues = List<int>.generate(len, (i) => Random.secure().nextInt(256));
return base64Url.encode(randomValues);
}
// Generating the Signature for each request
String getSignature(String httpMethod, String urlPath, String salt,
String timestamp, String dataBody) {
// string concatenation prior to string hashing
String sigString = httpMethod +
urlPath +
salt +
timestamp +
ACCESS_KEY +
SECRET_KEY +
dataBody;
// using the SHA256 method to run the concatenated string through HMAC
Hmac hmac = Hmac(sha256, utf8.encode(SECRET_KEY));
Digest digest = hmac.convert(utf8.encode(sigString));
var ss = hex.encode(digest.bytes);
// encoding and returning the result
return base64UrlEncode(ss.codeUnits);
}
  // Generating the headers for each request
  Map<String, String> getHeaders(String method, String urlEndpoint,
      {String body = ""}) {
    // generate a random salt string
    String salt = getSaltString(16);
    // calculate the unix timestamp in seconds
    String timestamp = (DateTime.now().toUtc().millisecondsSinceEpoch / 1000)
        .round()
        .toString();
    // generate the signature for the request according to the docs,
    // signing with the same HTTP method the request will use
    String signature =
        getSignature(method.toLowerCase(), urlEndpoint, salt, timestamp, body);
    // return a map containing the headers and generated values
    return <String, String>{
      "access_key": ACCESS_KEY,
      "signature": signature,
      "salt": salt,
      "timestamp": timestamp,
      "Content-Type": "application/json",
    };
  }
  // helper function to make an HTTP request
  Future<http.StreamedResponse> httpWithMethod(String method, String url,
      String dataBody, Map<String, String> headers) async {
    var request = http.Request(method, Uri.parse(url))
      ..body = dataBody
      ..headers.addAll(headers);
    return request.send();
  }
  // function to make all API requests
  Future<Map> makeRequest(String method, String url, Object bodyData) async {
    final responseURL = "$BASEURL$url";
    final String body = jsonEncode(bodyData);
    var response = await httpWithMethod(
        method, responseURL, body, getHeaders(method, url, body: body));
    var respStr = await response.stream.bytesToString();
    Map repBody = jsonDecode(respStr) as Map;
    // return the data if the request was successful
    if (response.statusCode == 200) {
      return repBody["data"] as Map;
    }
    throw repBody["status"] as Map;
  }
}
```
Ensure you change `{{YOUR_ACCESS_KEY}}` to your access key and `{{YOUR_SECRET_KEY}}` to your secret key.
Keep in mind that when developing a real-world application, it is advisable to store your API keys using an environment variable.
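If you ever want to sanity-check the signing logic outside the app, the same salt/timestamp/HMAC scheme can be reproduced in a few lines of Python. This is only a sketch mirroring the `getSignature` method above; the keys and values in the usage below are dummy placeholders, not real credentials:

```python
import base64
import hashlib
import hmac

def rapyd_signature(method: str, path: str, salt: str, timestamp: str,
                    access_key: str, secret_key: str, body: str) -> str:
    # same concatenation order as the Dart getSignature above
    to_sign = method + path + salt + timestamp + access_key + secret_key + body
    # HMAC-SHA256 with the secret key, hex-encoded...
    digest = hmac.new(secret_key.encode(), to_sign.encode(),
                      hashlib.sha256).hexdigest()
    # ...then base64-url-encode the hex string, as the Dart code does
    return base64.urlsafe_b64encode(digest.encode()).decode()

# usage sketch with placeholder values
sig = rapyd_signature("post", "/v1/payments", "somesalt", "1700000000",
                      "ACCESS", "SECRET", "{}")
```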
#### Rapyd's OpenAPI Overview and Payment Endpoint Integration
This tutorial consumes only the payment endpoint. To get a full list of the OpenAPI endpoints, go to the [workspace overview](https://www.postman.com/rapyd-dev/workspace/rapyd-dev/collection/22519936-30d06119-8a82-434d-a375-baf5030bd26f) on Postman. Ensure you fork the API collection so you can edit it and run the request on your Postman account. Follow the instructions on the overview page to properly fork the collection alongside Rapyd's OpenAPI environment.
For the payment endpoint, this tutorial uses the "pay with Visa card" type. You can also explore the [official documentation](https://docs.rapyd.net/en/create-payment.html).
To consume the payment endpoint using the "pay with Visa card" type, navigate to the **checkout_view_page.dart** file and import the Rapyd utility file you created in the previous step:
```
import 'utilities/rapyd.dart'; // Import the Rapyd Utilities
```
In the same file, find this code snippet:
```
print(number);
print(expMonth);
print(expYear);
print(name);
print(cvv);
print(email);
print(product);
redirectTo('/success');
```
Replace it with the following:
```
final rapydClient = Rapyd();
final amount = product.price.toStringAsFixed(2);
final body = <String, dynamic>{
"amount": amount,
"currency": "USD",
"description": "Payment for ${product.name}",
"receipt_email": email,
"payment_method": {
"type": "il_visa_card",
"fields": {
"number": number,
"expiration_month": expMonth,
"expiration_year": expYear,
"cvv": cvv,
"name": name
}
}
};
try {
final response = await rapydClient.makeRequest("post", "/v1/payments", body);
if (response["paid"] == true) {
redirectTo('/success');
} else {
redirectTo('/failed');
}
  print(response);
} catch (e) {
print('ERROR: ${e.toString()}');
}
```
The code snippet instantiates the `Rapyd` class and calls the `makeRequest` function, which accepts the API endpoint method, the API endpoint URL, and the endpoint payload. In this case, the endpoint method is `Post`, the endpoint URL is `Create Payment Endpoint`, and the endpoint payload is the payment and product data assigned to the `body` variable.
Once the payment is successful, it sends the customer to the "payment successful" page.
### Flutter Application Demonstration
You've successfully integrated Rapyd's OpenAPI, and you can now test the app. As mentioned, the application is primarily designed for compatibility with Flutter web. This means you can run the app without additional configuration regardless of your local machine's operating system.
If you haven't closed the existing terminal used to demo the Flutter application, click the terminal and press the R key to hot restart, which triggers a reload of the application. However, if you have closed the terminal, you can run the command below to start the application, where `{{YOUR_PORT_NUMBER}}` is your preferred and open port number:
```
flutter run -d chrome --web-port "{{YOUR_PORT_NUMBER}}" --web-hostname 0.0.0.0 --web-browser-flag "--disable-web-security"
```
A new Chrome window will automatically open, displaying the landing page. The page should look like the screenshot below:

Click any of the products to see the product details and purchase the product.

Click **Checkout** to go to a page with a payment form.

Fill out the payment form using the Visa test card details below:
* **Card number:** 4111111111111111
* **Expiration month:** 12
* **Expiration year:** 27
* **CVV:** 123
* **Name:** any name
* **Email:** any email
After filling out the form, click the **Pay** button. The payment will be processed, and if it's successful, you'll be redirected to the "payment successful" page.

## Get the Code
You can also play with OpenAPI on Postman and enjoy its limitless capability to build payment-dependent applications. This tutorial’s [complete code](https://github.com/Rapyd-Samples/rapyd-openapi-flutter-checkout-demo) is also available on GitHub.
| uxdrew |
1,899,327 | Back-End Testing | Content Plan 1. Introduction to Back-End Testing Briefly explain the importance of... | 27,559 | 2024-07-10T13:00:00 | https://dev.to/suhaspalani/back-end-testing-13dn | backend, testing, npm, javascript | #### Content Plan
**1. Introduction to Back-End Testing**
- Briefly explain the importance of testing in software development.
- Highlight the focus on testing Node.js APIs specifically.
- Introduce Mocha and Chai as the tools of choice for this tutorial.
**2. Setting Up the Environment**
- Prerequisites: Node.js, npm, a text editor (like VS Code).
- Step-by-step instructions to set up a new Node.js project:
```sh
mkdir backend-testing
cd backend-testing
npm init -y
npm install express mocha chai supertest --save-dev
```
- Explanation of the installed packages:
- `express`: To create a sample API.
- `mocha`: Test framework.
- `chai`: Assertion library.
- `supertest`: For making HTTP assertions.
**3. Creating a Simple API with Express**
- Example code for a basic Express server with a few endpoints:
```javascript
// server.js
const express = require('express');
const app = express();
app.get('/api/hello', (req, res) => {
res.status(200).json({ message: 'Hello, world!' });
});
app.listen(3000, () => {
console.log('Server is running on port 3000');
});
module.exports = app;
```
- Explanation of the API structure and endpoints.
**4. Writing Your First Test with Mocha and Chai**
- Creating the test directory and a basic test file:
```sh
mkdir test
touch test/test.js
```
- Writing a simple test:
```javascript
// test/test.js
const request = require('supertest');
const app = require('../server');
const chai = require('chai');
const expect = chai.expect;
describe('GET /api/hello', () => {
it('should return a 200 status and a message', (done) => {
request(app)
.get('/api/hello')
.end((err, res) => {
expect(res.status).to.equal(200);
expect(res.body).to.have.property('message', 'Hello, world!');
done();
});
});
});
```
- Explanation of the test code:
- Using `supertest` to make HTTP requests.
- `chai`'s `expect` syntax for assertions.
- The `done` callback to handle asynchronous tests.
**5. Running the Tests**
- How to run the tests using Mocha:
```sh
npx mocha
```
- Interpreting the test results.
**6. Additional Test Cases**
- Writing more test cases for different scenarios:
- Testing for a 404 error for an unknown route.
- Testing POST, PUT, DELETE endpoints if present.
- Example:
```javascript
describe('GET /api/unknown', () => {
it('should return a 404 status', (done) => {
request(app)
.get('/api/unknown')
.end((err, res) => {
expect(res.status).to.equal(404);
done();
});
});
});
```
**7. Best Practices for Back-End Testing**
- Keep tests isolated and independent.
- Use descriptive names for test cases.
- Ensure your tests cover various edge cases.
- Mocking dependencies when necessary.
- Continuously integrate testing into your development workflow.
**8. Conclusion**
- Summarize the key takeaways.
- Encourage readers to apply these techniques to their own projects.
- Provide links to additional resources for further learning.
**9. Additional Resources**
- Official Mocha documentation: [Mocha](https://mochajs.org/)
- Official Chai documentation: [Chai](https://www.chaijs.com/)
- Supertest documentation: [Supertest](https://github.com/visionmedia/supertest)
- Articles and tutorials on testing best practices.
**10. Call to Action**
- Invite readers to share their experiences and ask questions in the comments.
- Suggest they subscribe for future articles on full stack development and DevOps.
| suhaspalani |
1,899,937 | Back-End Development for Custom Web Applications: A Comprehensive Guide | Have you ever wondered about the hidden secret behind web applications that appear visually stunning... | 0 | 2024-07-08T10:20:18 | https://www.cygnismedia.com/blog/back-end-development-for-custom-web-applications-guide/ | webdev, tutorial, javascript, discuss | ---
title: "Back-End Development for Custom Web Applications: A Comprehensive Guide"
published: true
date: 2024-06-25 09:30:41 UTC
tags: webdev,tutorial,javascript,discuss
canonical_url: https://www.cygnismedia.com/blog/back-end-development-for-custom-web-applications-guide/
---

Have you ever wondered what the hidden secret is behind web applications that appear visually stunning and appealing yet function so well? What's going on behind the scenes that keeps them running and performing effortlessly across different web browsers? The quick and simple answer: **_Backend Development_**.

In this comprehensive guide, we'll dive into the details of backend [web app development](https://www.cygnismedia.com/web-application-development/), exploring the role of backend developers, the key components of back-end development, and the programming languages and frameworks used in backend development.
Before we delve into the technical intricacies of back-end development, it’s important to understand and know what custom web app development is. So, without further ado, let’s get straight to the point!
## Key Takeaways
- Custom web app development involves creating tailored web applications to meet specific business needs and user requirements, providing unique features and functionalities.
- Backend development refers to the creation of the server-side logic, databases, and APIs that power the functionality of a web application. It ensures seamless data processing, storage, and communication between the front end and server.
- Key components of backend architecture include servers, databases, APIs, and middleware for communication and data management.
- Selecting from popular backend frameworks and languages, including Node.js, Python, Django Flask, PHP, and Laravel streamline development, provide robust security features, and support scalable, efficient handling of server-side tasks in web applications.
- Web hosting is the service that makes websites accessible on the internet by storing their files on servers. It allows users to access and view websites anytime, anywhere.
#### Table of Content
- **[What is a Custom Web Application](#what-is-a-custom-web-application)**
- **[What is Backend Development](#what-is-backend-development)**
- [The Role of Back-end Developers](#the-role-of-back-end-developers)
- **[What is a Web Application Architecture](#what-is-a-web-application-architecture)**
- **[Web Application Backend Architecture Key Components](#web-application-backend-architecture-key-components)**
- [Server-side (Web Server)](#server-side-web-server)
- [Database](#database)
- [Middleware](#middleware)
- [APIs](#apis)
- **[Popular Back-End Web Development Programming Languages](#popular-back-end-web-development-programming-languages)**
- [Python](#python)
- [JavaScript (Node.js)](#javascript-node-js)
- [PHP](#php)
- [Java](#java)
- [Ruby](#ruby)
- [C#](#c)
- **[Popular Back-End Frameworks for Custom Web App Development](#popular-back-end-frameworks-for-custom-web-app-development)**
- [Django](#django)
- [Flask](#flask)
- [Express.js](#express-js)
- [Laravel](#laravel)
- [Ruby on Rails](#ruby-on-rails)
- [ASP.NET](#asp-net)
- **[Popular Web Hosting Solutions for Custom Web Applications](#popular-web-hosting-solutions-for-custom-web-applications)**
- [AWS (Amazon Web Services)](#aws-amazon-web-services-)
- [Google Cloud Web Hosting](#google-cloud-web-hosting)
- [Azure](#azure)
- [Linode](#linode)
- **[How to Build a Custom Web App Development: Key Steps](#how-to-build-a-custom-web-app-development-key-steps)**
- [Determine Objectives and Requirements](#determine-objectives-and-requirements)
- [Conduct Market Research](#conduct-market-research)
- [Design Prototypes](#design-prototypes)
- [Choose the Right Technology Stack](#choose-the-right-technology-stack)
- [Design Frontend Components](#design-frontend-components)
- [Develop Backend Functionality](#develop-backend-functionality)
- [Conduct Rigorous Testing](#conduct-rigorous-testing)
- [Deployment and Maintenance](#deployment-and-maintenance)
## What is a Custom Web Application
Unlike off-the-shelf or ready-made web applications that serve various industries and target audiences, [custom web applications](https://www.cygnismedia.com/case-studies/web/) are tailor-made to meet the unique needs and challenges of a specific organization or individual. Designing a custom web app involves identifying features, functionality, and design aspects that can be customized to address your target audience's pain points and meet your exact business or brand needs.
Unlike static web apps, dynamic web applications offer interaction and update content in real-time based on user inputs, indicating the importance of building highly interactive web apps. Some examples of custom web applications include [progressive web apps](https://dev.to/cygnismedia/how-progressive-web-app-facilitates-e-commerce-businesses-in-enhancing-online-shopping-experiences-100o-temp-slug-859296), portal web apps, custom [e-commerce web apps](https://www.cygnismedia.com/ecommerce-apps-development/), and content management systems (CMS).
Now, that we’ve understood what custom web app development is, let’s move towards discovering the backbone of custom web applications, i.e., back-end development.
## What is Backend Development
You typically don’t worry about how the website functions, how you move from one page to another, or where your data or personal information you entered when you sign up on the website goes, right? That’s where the back-end development comes into the picture. As simple as it could be to explain, the back-end is the backbone of a custom web application where back-end developers deal with the server-side functionality of a web app not visible to users like the [front-end](https://dev.to/cygnismedia/the-future-of-front-end-development-why-it-still-matters-4o1n) (user interface).
In technical jargon, backend development refers to the server side of programming, focusing on the server, database, application logic, and API integrations to ensure the web app can handle data properly, and offers top-level security and performance. Additionally, backend development encompasses server configuration, scalability, and deployment processes. Overall, backend development is the core aspect of a web app that provides the necessary support for the frontend interface to function securely and reliably.
### The Role of Back-end Developers
Backend developers are the coding wizards that implement the logic, structure, and functionality of a web application backend to ensure it performs efficiently and correctly. They work on the creation of a back-end web architecture that allows a database and an application to communicate with one another. So, anything associated with the development and maintenance of the server-side logic and databases, API creation, and handling server configuration falls under the sole responsibility of backend developers. Moreover, they also ensure that the data or services requested by the front-end team are delivered successfully.
## What is a Web Application Architecture

_Confused about what web application architecture is and what its purpose is? Let’s have a quick look at this._
Backend web architecture is the process of designing the structure of the server-side components, including **databases, servers, and APIs** that aren’t visible to users. **For example**, a website developer builds a web application that enables users to calculate the value of compound interest by entering values into financial formulas. Here, the backend functionality of the application comes into play when the user inputs values into the website fields, and the backend components work together to calculate and display the answer.
The output of the calculated number appears on the screen that is visible to the user but they don’t know how the back-end architecture works, how servers and networks communicate, or how the database stores or retrieves the computational value.
The main purpose of backend web architecture is to create programs that provide useful experiences for users while keeping them away from the website's internal logic.
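To make the compound-interest example above a little more concrete, the server-side piece could be a small pure function kept separate from the HTTP layer. This is only an illustrative sketch: the function name and the Express-style route are assumptions, not taken from any real application.

```typescript
// Compound interest: A = P * (1 + r/n)^(n*t)
function compoundAmount(
  principal: number,
  annualRate: number,
  compoundsPerYear: number,
  years: number
): number {
  return principal * Math.pow(1 + annualRate / compoundsPerYear, compoundsPerYear * years);
}

// Hypothetical Express-style wiring (shape only; names are illustrative):
// app.post("/api/compound-interest", (req, res) => {
//   const { principal, rate, n, years } = req.body;
//   res.json({ amount: compoundAmount(principal, rate, n, years) });
// });
```

Keeping the formula in a plain function means the backend can expose it through any server or framework without changing the business logic, and the logic can be unit-tested on its own.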
## Web Application Backend Architecture Key Components
The four significant key components of web app backend architecture are mentioned below.
### Server-side (Web Server)
The server-side, or backend, handles the core functionality, logic, and database interactions of a web application. It operates on web servers, which manage requests from clients, process them, and send the appropriate responses. Server-side programming languages include Java, Python, Ruby, PHP, and Node.js. This part of the application manages authentication, data processing, business logic, and communication with databases. It ensures security, performance, and scalability by efficiently handling client requests and server resources.
**_There are different web servers available for custom web applications:_**
- **Apache:** Apache HTTP Server is one of the most widely used web servers, in charge of receiving HTTP requests from users and responding with web pages containing the requested content. Put another way, it lets users view the content over the web app. Apache can serve both static and dynamic content efficiently. It's known for its flexibility, scalability, and strong community support. It's commonly used in LAMP (Linux, Apache, MySQL, PHP) stacks and can run on multiple operating systems, making it a versatile choice for web developers.
- **Nginx:** Nginx functions as a web server, reverse proxy, load balancer, and HTTP cache. Nginx is designed to deliver static content quickly, has a sophisticated module structure, and can route dynamic requests to other software as required. Its asynchronous, event-driven architecture allows it to handle high traffic with better performance and reliability than traditional web servers.
- **IIS:** Internet Information Services (IIS) is a flexible, secure, and manageable web server created by Microsoft. It is designed to host websites, services, and applications on a Windows server. IIS supports various protocols including HTTP, HTTPS, FTP, and SMTP. It integrates seamlessly with the .NET framework, which makes it an ideal choice for applications built on the Microsoft technology stack. IIS offers robust security features, easy configuration, and comprehensive management tools through the IIS Manager.
### Database
Databases play an integral role in data management, whether it be a mobile app, a web app, or any other kind of application. [Databases](https://www.cygnismedia.com/blog/choose-right-database-for-web-app-development-project/) ensure that data is accessible and consistent, enabling dynamic content generation and user interaction. They maintain data integrity and support the app's backend functionality. Without a database, a web app would struggle to handle complex data operations. Ultimately, the database empowers web apps to deliver dynamic and personalized experiences to users, enhancing interactivity and functionality.
### Middleware
Middleware is software that serves as a means for data sharing and communication across various applications or systems. It ensures that data flows smoothly and securely between different parts of the web application. By managing these essential tasks, middleware simplifies the development process and improves the web app’s performance and security. It allows developers to focus on building features rather than managing data flow and security protocols.
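As a rough illustration of that hand-off, middleware can be modeled as a chain of functions that each handle one concern and then pass control along. The `Middleware` shape below loosely mimics the Express convention, but it is a simplified, framework-free sketch with invented names:

```typescript
type Req = { path: string; headers: Record<string, string> };
type Next = () => void;
type Middleware = (req: Req, next: Next) => void;

// Reject requests that carry no credentials, otherwise hand off.
const requireAuth: Middleware = (req, next) => {
  if (!req.headers["authorization"]) throw new Error("401 Unauthorized");
  next();
};

// Attach metadata for later handlers (a real id would be generated).
const addRequestId: Middleware = (req, next) => {
  req.headers["x-request-id"] = "req-1";
  next();
};

// Run a request through the chain in order.
function runChain(req: Req, chain: Middleware[]): void {
  let i = 0;
  const next: Next = () => {
    if (i < chain.length) chain[i++](req, next);
  };
  next();
}
```

Because each middleware only calls `next()` when it is satisfied, cross-cutting concerns like authentication and logging stay out of the route handlers themselves.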
### APIs
APIs (short for Application Programming Interfaces) are sets of rules and protocols that allow different software systems to communicate and exchange information with each other using requests and responses. They enable web app developers to access the functionality of other software components, services, or platforms, enhancing modularity and reusability. They are crucial in integrating third-party services to extend the functionality of web apps. Moreover, [APIs](https://dev.to/cygnismedia/the-role-of-apis-in-mobile-app-development-2nbm) ensure interoperability, scalability, and flexibility in [modern web application development](https://www.cygnismedia.com/blog/role-of-design-in-app-development/).
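On the consuming side, the request/response contract usually means checking that a payload really has the expected shape before using it. A minimal sketch follows; the `UserDTO` shape is invented for illustration, and a real client would typically call a parser like this on the result of `await response.json()`:

```typescript
interface UserDTO {
  id: number;
  name: string;
}

// Narrow an unknown JSON payload down to the expected shape.
function parseUser(payload: unknown): UserDTO {
  if (typeof payload !== "object" || payload === null) {
    throw new Error("Unexpected API response shape");
  }
  const obj = payload as { id?: unknown; name?: unknown };
  if (typeof obj.id !== "number" || typeof obj.name !== "string") {
    throw new Error("Unexpected API response shape");
  }
  return { id: obj.id, name: obj.name };
}
```

Validating at the boundary like this keeps bad data from a third-party service from propagating into the rest of the application.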
## Popular Back-End Web Development Programming Languages

Below is the list of the popular and in-demand back-end programming languages that enable developers to build highly interactive, responsive custom web applications.
### Python
Python is a versatile programming language that can be used on a server for back-end web application development. Its simplicity and readability make it a favorite among web app developers. Python's extensive libraries and frameworks facilitate the rapid development and deployment of powerful web applications. Python’s strong community support and comprehensive documentation make it easy to find solutions and best practices. Plus, it's platform-independent and allows you to run Python applications on various operating systems without modification. Overall, Python is ideal for both beginners and seasoned backend developers looking to build robust and scalable custom web applications.
### JavaScript (Node.js)
JavaScript is used widely in frontend development, but in recent years it has been used for backend development too. [Node.js](https://www.cygnismedia.com/blog/why-nodejs-is-the-top-choice-for-web-development/) (a JavaScript runtime environment) makes that possible by providing backend functionality. Node.js allows developers to leverage JavaScript to build fast and scalable server-side applications. Its non-blocking, event-driven architecture makes it an excellent choice for real-time applications like chat apps or online gaming. With a vast ecosystem of modules available through npm, Node.js simplifies the development process by providing pre-built functionalities. It’s particularly powerful for full-stack development, enabling developers to use a single language across both the front end and back end.
### PHP
One of the most widely adopted programming languages for web app development is PHP, a scripting language. It's embedded within HTML, making it ideal for creating dynamic and interactive web pages. WordPress, Joomla, and Drupal are examples of content management systems (CMS) that are powered by PHP. So, if you’re about to create an [e-commerce website](https://www.cygnismedia.com/ecommerce-apps-development/) or other CMS web apps, PHP is your go-to choice. PHP’s integration with various databases and its extensive array of built-in functions simplify tasks like form handling, session management, and cookie handling. Its continuous updates and large community support ensure it remains a reliable choice for backend web app development.
### Java
Next on our list of well-known programming languages for back-end web application development is Java, an object-oriented programming language. Since its launch, Java has been widely used in web application development, especially on the server side. Its platform independence, enabled by the Java Virtual Machine (JVM), allows code to run on any device that supports Java. Java is known for its performance, scalability, and security, making it ideal for large-scale applications, both desktop and web-based. Java’s extensive standard library and comprehensive documentation make web application backend development a breeze.
### Ruby
If you wish to build a website using a simple, productive programming language with helpful features, there’s no better choice than Ruby, a dynamic, object-oriented language. It powers the Ruby on Rails framework, which is highly regarded for its convention-over-configuration approach. This means developers can build applications faster with fewer lines of code. Ruby emphasizes human-readable syntax, making it accessible and enjoyable to write. It’s popular with web startups due to its quick development life cycle. It’s a good pick for those looking to balance development speed with code maintainability.
### C#
Today’s competitive web app development market demands an innovative, modern programming language that allows the development of incredibly fast and efficient web applications. Here, we cannot emphasize enough the importance of opting for C#, one of the most in-demand programming languages. Developed by Microsoft, C# is widely used for backend development within the .NET ecosystem. C# integrates seamlessly with the .NET framework, providing a powerful environment for building custom web applications, particularly in Windows environments.
## Popular Back-End Frameworks for Custom Web App Development

Below are the top back-end frameworks that help you build reliable, modern, and scalable web applications customized to meet your specific business needs and requirements.
### Django
If you want a Python-based framework, Django is a popular choice. This web framework comes with a lot of built-in features, including ORM (Object-Relational Mapping), authentication, and an admin panel, which makes it easy to develop complex, database-driven web apps. It's designed to be secure, scalable, and maintainable, making it a popular choice for web developers building robust web applications.
### Features
- MTV architecture
- Built-in security features
- Scalability and customization
- Extensive documentation
### Flask
Flask is a lightweight WSGI (web server gateway interface) web framework for Python. It's designed to be simple and flexible, allowing developers to create web applications with minimal boilerplate code. Flask provides the essential tools needed for web development but does not include everything by default, giving developers the freedom to choose and integrate additional libraries as needed. Flask’s simplicity and extensibility make it a popular choice for both beginners and experienced developers looking for a straightforward, yet powerful framework to build web applications.
### Features
- Lightweight and modular
- Built-in development server
- Flexible templating with Jinja2
- Restful request dispatching
- WSGI compliance
### Express.js
Express.js is a lightweight and flexible Node.js web application framework that comes with a powerful feature set for creating web and mobile applications. It's designed to be simple and unopinionated, allowing developers to build applications from the ground up using only the components they need. Express handles routing, middleware, and HTTP requests and responses, making it easier to develop server-side applications. Web developers looking to build RESTful APIs and single-page applications can opt for Express.js due to its flexibility, simplicity, and the below-mentioned features.
### Features
- Fast server-side development
- Middleware support
- Flexible routing
- Templating engines
- Compatible with numerous middleware and plugins
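The "flexible routing" point is worth unpacking. In Express, a route pattern like `/users/:id` captures path segments into `req.params`. The matcher below is a toy reimplementation of that idea, just to show what the framework is doing for you; it is not Express's actual algorithm:

```typescript
// Match a pattern like "/users/:id" against a concrete path.
// Returns captured params on a match, or null otherwise.
function matchRoute(pattern: string, path: string): Record<string, string> | null {
  const patternParts = pattern.split("/");
  const pathParts = path.split("/");
  if (patternParts.length !== pathParts.length) return null;

  const params: Record<string, string> = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(":")) {
      params[patternParts[i].slice(1)] = pathParts[i]; // ":id" captures this segment
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // literal segment mismatch
    }
  }
  return params;
}
```

In real Express code this is simply `app.get("/users/:id", handler)`, and the handler reads `req.params.id`.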
### Laravel
Laravel is a PHP framework designed to make web development easier and more enjoyable. It follows the MVC (Model-View-Controller) architectural pattern, which helps in organizing code and separating logic. Laravel provides an elegant syntax, a robust set of tools, and extensive documentation, making it ideal for developers of all skill levels. Its strong community support and comprehensive ecosystem make Laravel a popular choice for building modern, scalable web applications.
### Features
- Eloquent ORM
- Blade Templating Engine
- Task Scheduling
- Supports MVC architecture
- Password hashing with Salts
### Ruby on Rails
Ruby on Rails, often simply called Rails, is a web application framework written in Ruby. It follows the convention-over-configuration principle, meaning it reduces the number of decisions developers need to make, enabling rapid development. It promotes the use of RESTful architecture, making it easy to create scalable and maintainable web applications. Rails emphasizes writing clean, readable code and includes tools for testing and deployment. Rails is a go-to choice for backend developers looking to build dynamic and customized web applications.
### Features
- Extensive libraries (gems)
- Active record ORM
- Scaffold generation
- Integrated testing Tools
- View Rendering
### ASP.NET
ASP.NET is a powerful framework developed by Microsoft for building dynamic web applications and services using C#. It provides a comprehensive set of tools and libraries for developing secure, scalable, and high-performance applications. ASP.NET supports various development models, including model-view-controller (MVC), Web API for creating RESTful services, and Web Pages for a simpler, page-based approach. Visual Studio, the integrated development environment for ASP.NET, enhances productivity with debugging, testing, and deployment tools.
### Features
- MVC Architecture
- Dependency injection
- Robust authentication mechanisms
- Cross-platform
- Rich tooling support
## Popular Web Hosting Solutions for Custom Web Applications

Web hosting plays a crucial role in custom web application development as it provides the infrastructure needed to store, manage, and serve web applications to users over the internet. It ensures the web application is accessible, secure, and scalable. We’ve compiled a list of the three most common web hosting solutions for building custom web applications:
### AWS (Amazon Web Services)
AWS is a comprehensive cloud computing platform provided by Amazon, tailored for web app development and hosting. It provides a reliable and scalable web hosting solution by offering services like Amazon EC2 for running virtual servers, Amazon S3 for storing objects, and Amazon RDS for managing databases. It ensures high availability and security of applications. AWS also offers tools for monitoring performance and automatically scaling resources based on demand. You can easily host and manage your web applications using the AWS service.
### Google Cloud Web Hosting
Google Cloud web hosting is a robust suite of cloud services designed to [host and manage web applications](https://www.cygnismedia.com/blog/serverless-computing-for-developers/) in the cloud with high performance and reliability. Firebase Hosting lets developers quickly deploy web apps to a global CDN (content delivery network) with a single command, offering fast and secure hosting. It also provides extensive support for various programming languages and frameworks, along with advanced analytics and AI tools to optimize web application performance. With a focus on scalability, security, and seamless integration with Google’s ecosystem, Google Cloud Web Hosting is ideal for a wide range of web hosting needs.
### Azure
The all-in-one cloud platform powered by Microsoft, known as Azure, gives web developers the freedom to build, launch, and host their custom web applications on the platform of their choice. It provides automatic scaling, load balancing, and integrated DevOps capabilities, including continuous deployment from GitHub, Azure DevOps, or any Git repository. Web developers can also use Azure Functions for serverless computing, integrating seamlessly with other Azure services like databases and storage. As the cherry on top, Azure Web Hosting offers rigorous security that allows developers to protect their data from threats, continuously monitor their web app, diagnose and tackle any issues, and improve its performance over time.
### Linode
Web developers looking for a secure, flexible, user-friendly, and reliable web hosting solution should definitely consider [Linode](https://www.linode.com/docs/guides/set-up-web-server-host-website/) virtual private servers (VPS). With its robust network infrastructure and hardware, it enables web applications to perform efficiently and run smoothly. Moreover, Linode supports a variety of development stacks and tools to facilitate seamless deployment and management of custom web apps. Linode offers features like dedicated CPUs, a versatile API, two-factor authentication, and backups. Additionally, the Linode Cloud Manager offers an easy-to-use interface for setting up backups, controlling network settings, and deploying instances, which makes it indispensable for developers managing and hosting their web applications.
## How to Build a Custom Web App Development: Key Steps

Build your web application custom-fit to your needs by following the [application development process](https://www.cygnismedia.com/approach.html) outlined in the section below:
### Determine Objectives and Requirements
The very first step is to outline your project’s goals and technical requirements. This is where you gather all the necessary information and define the scope of your web app. You’ll also create a project timeline and set milestones. Planning is crucial as it sets the foundation for your development process, ensuring everyone is on the same page and aligned with the project’s objectives.
### Conduct Market Research
Conducting market research is the second stage where you’ll gather information about your target audience, competitors, and industry trends. This helps you understand user needs and identify gaps in the market. Use surveys, interviews, and analysis tools to collect data. Additionally, you’ll understand what features to include in your app, ensuring it meets user expectations and stands out from the competition.
### Design Prototypes
During the design and prototyping phase, you’ll focus on the user interface and user experience. UI/UX designers create wireframes and prototypes to visualize your web app’s layout and functionality. This stage helps in experimenting with different designs and getting feedback from end-users before moving on to development.
**Learn more:** [How to design visually stunning web apps that engage and inspire users](https://www.cygnismedia.com/blog/visually-stunning-web-apps-design-strategies/#step-by-step-guide-to-web-application-designing)
### Choose the Right Technology Stack
Choosing the right technology stack is crucial for your web app’s performance and scalability. You’ll select the programming languages, frameworks, and tools that best suit your project’s requirements. Consider factors like development speed, cost, and the expertise of your team. A well-chosen tech stack ensures your app is robust, efficient, and easy to maintain.
### Design Frontend Components
Once you’ve built the prototype of your web app, it’s time to transform it into a final design with the help of front-end technologies like HTML, CSS, and JavaScript, along with frameworks like [React.js](https://www.cygnismedia.com/blog/progressive-web-apps-reactive-frameworks/). You’ll develop the visual elements and interactive features outlined in the wireframes that users see and interact with. A well-designed front end makes your app visually appealing, responsive, dynamic, and easy to use.
### Develop Backend Functionality
Now move towards developing the backend functionality of your web app. At this stage, back-end developers focus on creating APIs, managing databases, and implementing business logic. This is where the core functionality of your app is developed and integrated with the front end to ensure the app is functional and performs efficiently. A strong backend ensures your app is secure, reliable, and capable of handling user interactions smoothly.
### Conduct Rigorous Testing
The next step is to conduct thorough quality assurance testing to detect and fix any issues in the web app. You’ll conduct various quality assurance tests, such as unit tests, integration tests, and user acceptance testing, to identify and fix bugs. This stage helps verify that your app meets the required standards and performs well under different conditions while ensuring a smooth and reliable [user experience](https://www.cygnismedia.com/blog/four-ways-how-to-retain-users/).
### Deployment and Maintenance
Finally, at this stage, you’ll launch your web app and make it available to users. This involves setting up hosting, configuring servers, and deploying the application code. After deployment, continuous monitoring and maintenance are essential to keep your app running smoothly. You’ll need to update the app regularly, fix any bugs, and ensure [security patches](https://www.cygnismedia.com/blog/enterprise-app-security-best-practices/) are applied to protect against vulnerabilities.
## Conclusion
Building a custom web application for your business is crucial to stay ahead of the competition, build brand identity and a customer base, and drive business growth. Customizing a web application to fit specific business needs while pinpointing your target audience is an effective way to boost success and ensure user satisfaction. At this point, the main factor that keeps web applications fully operational and manageable is backend development.
There are various aspects of back-end web app development, including its architecture, servers, databases, APIs, and more. Additionally, we’ve also discussed backend programming languages and frameworks, along with the steps involved in the development process. There is a lot more to backend development, and as a back-end developer, you must keep learning and stay abreast of the innovation happening in the web app development industry. | cygnismedia |
1,900,101 | The Art of Reusability: Generics in TypeScript and React | As frontend developers, we're no strangers to the concept of reusability. We strive to write code... | 0 | 2024-07-09T15:45:47 | https://dev.to/gboladetrue/the-art-of-reusability-generics-in-typescript-and-react-21mi | webdev, typescript, react, frontend | As frontend developers, we're no strangers to the concept of reusability. We strive to write code that's modular, flexible, and easy to maintain. But when it comes to working with different data types, we often find ourselves writing duplicate code or resorting to any types, sacrificing type safety in the process.
In React, we're familiar with the concept of components as functions that take in props and return JSX elements. But what if we want to create a component that can work with different types of data, such as strings, numbers, or custom objects? How can we ensure that our component is type-safe and flexible enough to accommodate different data types?
This is where generics come in – a powerful feature in TypeScript that allows us to write reusable, type-safe code. By using generics, we can create components that can work with multiple data types, without sacrificing type safety or flexibility.
In this post, we'll explore the world of generics in TypeScript, using real-world React components to illustrate and explain. We'll start with a simple example of a generic container component, and then move on to more complex examples, such as a generic list component. By the end of this post, you'll have a solid understanding of how to use generics to write more reusable, maintainable, and type-safe code in your React applications.
**What are Generics?**
Generics are a way to create reusable functions, classes, and interfaces that can work with multiple data types while maintaining type safety. They allow you to specify a type parameter that can be replaced with a specific type when the generic is instantiated.
Think of generics like a blueprint or a template. You define a generic function or class with a type parameter, and then you can use that generic with different types, just like how you would use a blueprint to build different houses.
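To make the blueprint analogy concrete before diving into components, here is the smallest useful generic, an identity function (a snippet added here purely for illustration):

```typescript
// T is the blueprint's "slot": each call site fills it with a concrete type.
function identity<T>(value: T): T {
  return value;
}

const s = identity<string>("hello"); // T is explicitly string
const n = identity(42);              // T is inferred as number
```

TypeScript checks each call independently, so `identity<string>(42)` would be a compile-time error even though both calls use the same blueprint.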
**A Simple Example: A Generic Container Component**
Let's create a simple React component that demonstrates the power of generics. Imagine we want to create a container component that can render any type of data. We can use a generic to achieve this:
```typescript
interface ContainerProps<T> {
data: T;
}
const Container = ({ data }: ContainerProps<any>) => {
return <div>{JSON.stringify(data)}</div>;
};
```
In this example, we define an interface `ContainerProps` with a type parameter `T`. This interface has a single property data of type `T`. We then create a React component `Container` that uses this interface as its props type.
Notice the `any` type passed to the `ContainerProps` interface. This tells TypeScript that we want to allow any type for the `T` parameter. However, this can lead to type safety issues, so let's improve this example.
**Improving Type Safety with Generics**
Instead of using `any`, we can specify the type parameter when we use the Container component:
```typescript
interface ContainerProps<T> {
data: T;
}
const Container = <T,>({ data }: ContainerProps<T>) => {
return <div>{JSON.stringify(data)}</div>;
};
// Using the Container component with a string type
const StringContainer = () => {
return <Container<string> data="Hello, World!" />;
};
// Using the Container component with a number type
const NumberContainer = () => {
return <Container<number> data={42} />;
};
```
Now, we've improved the type safety of our `Container` component. When we use the component, we specify the type parameter, which ensures that the `data` property has the correct type. For instance, in the `StringContainer` component, if we try passing a `number` value or a value of any other type (e.g., an object), we get an error from TypeScript informing us of the type mismatch. Ditto for the `NumberContainer` component.
## A More Complex Example: A Generic List Component
Let's create a more complex React component that demonstrates the power of generics. Imagine creating a list component that can render a list of any type of data. We can use a generic to achieve this:
```typescript
interface ListProps<T> {
items: T[];
renderItem: (item: T) => JSX.Element;
}
const List = <T,>({ items, renderItem }: ListProps<T>) => {
return (
<ul>
{items.map((item, index) => (
<li key={index}>{renderItem(item)}</li>
))}
</ul>
);
};
// Using the List component with a string type
const StringList = () => {
return (
<List<string>
items={['Apple', 'Banana', 'Cherry']}
renderItem={(item) => <span>{item}</span>}
/>
);
};
// Using the List component with a custom type
interface Person {
name: string;
age: number;
}
const PersonList = () => {
return (
<List<Person>
items={[
{ name: 'John', age: 30 },
{ name: 'Jane', age: 25 },
]}
renderItem={(person) => (
<span>
{person.name} ({person.age})
</span>
)}
/>
);
};
```
In this example, we define a `List` component that takes a type parameter `T`. The component has two props: `items`, which is an array of type `T`, and `renderItem`, which is a function that takes an item of type `T` and returns a JSX element.
We then use the `List` component with different types: `string` and a custom `Person` type. The `renderItem` function is specific to each type, ensuring that the component is type-safe and flexible.
**Conclusion**
Generics is a powerful feature in TypeScript that allows you to write reusable, type-safe code. By using real-world React components, we've demonstrated how generics can help you create flexible and maintainable code.
Remember, generics are like blueprints or templates that can be used with different types. By specifying type parameters, you can ensure type safety and flexibility in your code.
I hope this post has triggered your interest in generics in TypeScript and inspired you to write more reusable and maintainable code. Happy coding!
Useful Links
- https://www.typescriptlang.org/docs/handbook/2/generics.html
| gboladetrue |
1,900,443 | What’s New in Blazor: 2024 Volume 2 | TL;DR: Showcasing the new features and components introduced in the Syncfusion Blazor platform for... | 0 | 2024-07-08T14:25:10 | https://www.syncfusion.com/blogs/post/whats-new-blazor-2024-volume-2 | blazor, development, web, ui | ---
title: What’s New in Blazor: 2024 Volume 2
published: true
date: 2024-06-25 17:31:25 UTC
tags: blazor, development, web, ui
canonical_url: https://www.syncfusion.com/blogs/post/whats-new-blazor-2024-volume-2
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4hv1pwbz3tui7rc9mmlh.png
---
**TL;DR:** Showcasing the new features and components introduced in the Syncfusion Blazor platform for the 2024 Volume 2 release with vivid, picturesque illustrations.
We are happy to announce the release of [2024 Volume 2](https://www.syncfusion.com/forums/188642/essential-studio-2024-volume-2-main-release-v26-1-35-is-available-for-download "2024 Volume 2 release"). The release brings many exciting new components, features, themes, and accessibility improvements to our [Syncfusion Blazor components](https://www.syncfusion.com/blazor-components "Blazor components").
This blog will share the most highlighted updates in the Syncfusion Blazor suite for the 2024 Volume 2 release.
## Common updates
### .NET 9 previews compatibility
Syncfusion Blazor components offer full compatibility with .NET 9, including the latest preview versions.
### Introducing Fluent 2.0: A modern theme for your Blazor apps
Introducing a new Fluent 2.0 theme for Syncfusion Blazor components! This theme elevates your apps’ visual appeal with a clean look and feel.
- **Light and dark variants:** Choose a light theme for a bright and airy feel or a dark theme for a sophisticated look.
- **Customization:** With the intuitive [Syncfusion Blazor Theme Studio](https://blazor.syncfusion.com/themestudio "Blazor Theme Studio"), you can easily customize the theme’s appearance, adjusting colors, fonts, and other visual elements.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Fluent-2.0-theme-support-in-Syncfusion-Blazor-components.png" alt="Fluent 2.0 theme support in Syncfusion Blazor components" style="width:100%">
<figcaption>Fluent 2.0 theme support in Syncfusion Blazor components</figcaption>
</figure>
**Note:** Refer to the [Fluent 2.0 theme in the Blazor components demo](https://blazor.syncfusion.com/demos/datagrid/overview?theme=fluent2 "Fluent 2.0 theme in the Blazor components demo") for more details.
## New Blazor components
In this release, the following most awaited new Blazor components arrived in preview mode:
### Blazor 3D Charts (Preview)
The [Blazor 3D Charts](https://www.syncfusion.com/blazor-components/blazor-3d-charts "Blazor 3D Charts") component visually represents data in three dimensions, showcasing relationships and trends among variables. Unlike traditional 2D charts, 3D charts add depth to the visualization, allowing for a better understanding of data patterns.
The component’s key features include:
- **Data binding:** Bind the 3D Charts component to a collection of objects or a DataManager. In addition to chart series, data labels and tooltips can also be bound to your data.
- **Data labels:** Annotate data points with labels to improve the readability of data.
- **Axis types:** Plot different data types such as numbers, datetime, logarithmic, and string.
- **Axis features:** It supports multiple axes, inverted axes, multiple panes, opposed positions, and smart labels.
- **Legend:** Provide additional information about a series with customization options.
- **Animation:** Animates the chart series when the chart is rendered and refreshed.
- **User interaction:** Supports interactive features such as tooltips and data point selection.
- **Export:** Print the Chart directly from the browser and export it in JPEG and PNG formats.
- **RTL:** Provides a full-fledged right-to-left mode that aligns the axis, tooltip, legend, and data in the 3D Charts component from right to left.
- **Appearance:** The built-in theme picks the colors for the 3D Charts, but each element can be customized with simple configuration options.
- **Accessibility:** The 3D Charts component is designed to be accessible to users with disabilities. Features such as WAI-ARIA standard compliance and keyboard navigation ensure it can be used effectively with assistive technologies such as screen readers.
The Blazor 3D Charts component offers six versatile chart types; all are easily configurable and have built-in support for visually stunning effects.
- **Column:** Represents data with vertical bars for easy value comparison.
- **Bar:** Utilizes horizontal bars to display data and facilitate value comparison.
- **Stacked Column:** Plots data points on top of each other using vertical bars for comprehensive visualization.
- **Stacked Bar:** Achieves the same effect as the Stacked Column but with horizontal bars.
- **100% Stacked Column:** This column illustrates the percentage distribution of multiple datasets within a total, with each column adding up to 100%.
- **100% Stacked Bar:** This bar resembles the Stacked Column but uses horizontal bars to showcase the percentage distribution of datasets within a total.
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Blazor-3D-Charts-1.gif" alt="Blazor 3D Charts" style="width:100%">
<figcaption>Blazor 3D Charts</figcaption>
</figure>
**Note:** For more details, refer to the Blazor 3D Charts [documentation](https://blazor.syncfusion.com/documentation/3d-chart/getting-started "Getting started with Blazor 3D Charts") and [demos](https://blazor.syncfusion.com/demos/chart-3d/column?theme=fluent2 "Blazor 3D Charts demo").
### Blazor OTP Input (Preview)
The [Blazor OTP Input](https://www.syncfusion.com/blazor-components/blazor-otp-input "Blazor OTP Input") is a form component used to input one-time passwords (OTP) during multi-factor authentication. It provides extensive customization options, allowing users to change input types, placeholders, separators, and more to suit their needs.
Its key features include:
- **Input types:** Allowed input types are text, number, or password.
- **Styling modes:** Offer built-in styling options such as underline, outline, or fill.
- **Placeholders:** Set a hint character for each input field, indicating the expected value.
- **Separators:** Specify a character between the input fields.
- **Customization:** Customize the default appearance, including input field styling, input length, and more.
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Blazor-OTP-Input-component.jpg" alt="Blazor OTP Input component" style="width:100%">
<figcaption>Blazor OTP Input component</figcaption>
</figure>
**Note:** For more details, refer to the Blazor OTP Input component [documentation](https://blazor.syncfusion.com/documentation/otp-input/getting-started-webapp "Getting started with Blazor OTP Input") and [demos](https://blazor.syncfusion.com/demos/otp-input/default-functionalities "Blazor OTP Input demo").
### Blazor TextArea (Preview)
The [Blazor TextArea](https://www.syncfusion.com/blazor-components/blazor-textarea "Blazor TextArea") allows users to input multiple lines of text within a specific area, such as comments, messages, composing detailed notes, or other content.
Its key features include:
- **Resizable:** The Blazor TextArea component can be resized vertically, horizontally, or in both directions by selecting the corresponding **ResizeMode** option.
- **Row-column configurations:** You can configure the size and appearance of the TextArea element in terms of **RowCount** and **ColumnCount** to suit specific design and functionality requirements.
- **Floating label:** The Blazor TextArea intelligently floats the placeholder text based on the specified **FloatLabelType** option, providing users with clear guidance and enhancing usability.
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Blazor-TextArea-component.gif" alt="Blazor TextArea component" style="width:100%">
<figcaption>Blazor TextArea component</figcaption>
</figure>
**Note:** For more details, refer to the Blazor TextArea [documentation](https://blazor.syncfusion.com/documentation/textarea/getting-started-webapp "Getting started with Blazor TextArea") and [demos](https://blazor.syncfusion.com/demos/textarea/floatinglabel?theme=fluent "Blazor TextArea demo").
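Putting those properties together, a minimal markup sketch might look like the following. This is an illustrative assumption based only on the property names listed above (`ResizeMode`, `RowCount`, `ColumnCount`, `FloatLabelType`); the enum values shown are not verified against the API reference:

```razor
@* Illustrative sketch — property names from the feature list above;
   the enum values (Resize.Both, FloatLabelType.Auto) are assumptions *@
<SfTextArea Placeholder="Add your comments"
            FloatLabelType="FloatLabelType.Auto"
            ResizeMode="Resize.Both"
            RowCount="5"
            ColumnCount="40">
</SfTextArea>
```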
## Preview to production-ready component
The [Blazor Timeline](https://www.syncfusion.com/blazor-components/blazor-timeline "Blazor Timeline component") component, introduced in the 2024 Volume 1 release, is now production-ready and meets the industry standards for stability and performance.
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Blazor-Timeline-component.png" alt="Blazor Timeline component" style="width:100%">
<figcaption>Blazor Timeline component</figcaption>
</figure>
**Note:** For more details, refer to the [Blazor Timeline component demo](https://blazor.syncfusion.com/demos/timeline/default-functionalities?theme=fluent "Example of default functionalities in Blazor Timeline component").
## What’s new in our existing Blazor components?
### Blazor Diagram
The new features added to the [Blazor Diagram](https://www.syncfusion.com/blazor-components/blazor-diagram "Blazor Diagram") component are as follows:
#### Rulers
Streamline your diagram creation process using rulers in the Blazor Diagram component! This powerful new feature empowers you to:
- **Ensure accuracy:** Achieve precise placement, sizing, and alignment of shapes and objects within your diagrams.
- **Horizontal and vertical measurements:** Utilize horizontal and vertical rulers for comprehensive measurement capabilities.
The [RulerSettings](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.SfDiagramComponent.html#Syncfusion_Blazor_Diagram_SfDiagramComponent_RulerSettings "RulerSettings property of Blazor Diagram") property is used to customize ruler behavior and appearance.
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Rulers-in-Blazor-Diagram-component.gif" alt="Rulers in Blazor Diagram component" style="width:100%">
<figcaption>Rulers in Blazor Diagram component</figcaption>
</figure>
**Note:** For more details, refer to the [rulers in the Blazor Diagram component demos](https://blazor.syncfusion.com/demos/diagramcomponent/rulers?theme=fluent2 "Example of rulers in Blazor Diagram component").
#### Handle complex diagrams with chunking support
We have added support for rendering large diagrams with annotations, paths, text, images, and SVG shapes without exceeding the maximum size limit for a single incoming hub message (MaximumReceiveMessageSize of 32KB). You can achieve it by setting the [EnableChunkMessages](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.SfDiagramComponent.html#Syncfusion_Blazor_Diagram_SfDiagramComponent_EnableChunkMessages "EnableChunkMessages property of Blazor Diagram component") property to **true**.
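In markup, that amounts to a single property on the diagram component — a minimal sketch, with all other configuration omitted:

```razor
@* Chunk large messages so big diagrams stay under the 32KB hub message limit *@
<SfDiagramComponent Height="600px" EnableChunkMessages="true">
</SfDiagramComponent>
```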
#### Advanced routing
We have enhanced the dynamic updating of connector routes based on the placement or movement of nearby shapes. You can enable this feature by setting the [RoutingType](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.RoutingTypes.html#Syncfusion_Blazor_Diagram_RoutingTypes_Advanced "RoutingType property of Blazor Diagram component") property to **Advanced**. It provides an optimized path with fewer bends and the shortest possible connector length.
#### Search symbols in the symbol palette
This feature allows users to search for symbols in the symbol palette by entering a symbol’s ID or search keywords into a text box and clicking Search. The search results are retrieved by matching the ID or **SearchTags** property with the string entered in the Search text box. The **ShowSearchTextBox** property of the symbol palette is used to show or hide the search text box in the symbol palette.
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Searching-symbols-in-the-symbol-palette-of-the-Blazor-Diagram-component.gif" alt="Searching symbols in the symbol palette of the Blazor Diagram component" style="width:100%">
<figcaption>Searching symbols in the symbol palette of the Blazor Diagram component</figcaption>
</figure>
### PDF Viewer
The [Blazor PDF Viewer](https://www.syncfusion.com/blazor-components/blazor-pdf-viewer "Blazor PDF Viewer") delivers the following new features:
#### Improved PDF Viewer performance
The PDF Viewer now loads large documents noticeably faster, boosting loading speed and efficiency when viewing large PDF files.
#### Enhanced custom stamp
The Blazor PDF Viewer now supports adding PNG images as custom stamps and can display any graphical object as a custom stamp in an existing PDF document. Custom stamps can also be rotated to better fit your documents.
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Custom-stamps-in-Blazor-PDF-Viewer.png" alt="Custom stamps in Blazor PDF Viewer" style="width:100%">
<figcaption>Custom stamps in Blazor PDF Viewer</figcaption>
</figure>
#### Customizable date and time format
In the comment panel, you can customize the date and time format to suit your preferred style and regional settings.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Customizable-date-and-time-format-support-in-Blazor-PDF-Viewer.png" alt="Customizable date and time format support in Blazor PDF Viewer" style="width:100%">
<figcaption>Customizable date and time format support in Blazor PDF Viewer</figcaption>
</figure>
#### Multiline comments
Multiline comments can now be added to annotations, allowing better organization and clarity in PDF documents.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Adding-multiple-comments-to-annotation-in-PDF.png" alt="Adding multiple comments to an annotation in PDF" style="width:100%">
<figcaption>Adding multiple comments to an annotation in PDF</figcaption>
</figure>
### Image Editor
The [Blazor Image Editor](https://www.syncfusion.com/blazor-components/blazor-image-editor "Blazor Image Editor") is rolled out with the following advanced features:
#### Continuous drawing mode for seamless image editing
Users can now draw multiple annotations continuously in the Image Editor, enhancing creative flexibility. Furthermore, every action, including customizations, is tracked in the undo/redo collection, ensuring a seamless user experience and making it easier to experiment with different designs. This feature can be achieved through the UI and built-in public methods, namely [EnableActiveAnnotationAsync](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.ImageEditor.SfImageEditor.html#Syncfusion_Blazor_ImageEditor_SfImageEditor_EnableActiveAnnotationAsync_Syncfusion_Blazor_ImageEditor_ShapeType_ "EnableActiveAnnotationAsync method of Blazor Image Editor") and [DisableActiveAnnotationAsync](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.ImageEditor.SfImageEditor.html#Syncfusion_Blazor_ImageEditor_SfImageEditor_DisableActiveAnnotationAsync "DisableActiveAnnotationAsync method of Blazor Image Editor").
**Note:** For more details, refer to the [Blazor Image Editor demo](https://blazor.syncfusion.com/demos/image-editor/default-functionalities?theme=fluent "Blazor Image Editor demo").
#### Save support enhancement
Users can save an image with a specified file name, file type, and image quality. This enhancement provides more control over the output, ensuring users can save their work exactly as needed.
The feature can be achieved through UI and [ExportAsync](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.ImageEditor.SfImageEditor.html#Syncfusion_Blazor_ImageEditor_SfImageEditor_ExportAsync_System_String_Syncfusion_Blazor_ImageEditor_ImageEditorFileType_System_Double_ "ExportAsync method of Blazor Image Editor") public method.
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Enhanced-save-support-in-Blazor-Image-Editor.png" alt="Enhanced save support in Blazor Image Editor" style="width:100%">
<figcaption>Enhanced save support in Blazor Image Editor</figcaption>
</figure>
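As a rough sketch, the `ExportAsync(String, ImageEditorFileType, Double)` overload linked above might be invoked like this. The file name, the quality value, and the exact enum member name are all illustrative assumptions, not verified API details:

```razor
<SfImageEditor @ref="imageEditor"></SfImageEditor>

@code {
    private SfImageEditor imageEditor;

    // Export the edited image as a PNG named "edited-photo" at 80% quality.
    // ImageEditorFileType.Png is an assumed enum member name.
    private async Task SaveAsync() =>
        await imageEditor.ExportAsync("edited-photo", ImageEditorFileType.Png, 0.8);
}
```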
#### Z-order support for layered editing
This feature allows users to adjust the positioning of annotations. It is handy for designing personalized templates like greeting cards or posters, where managing the layering of multiple annotations is crucial for a polished final product. This feature can be implemented through UI and the public methods [BringForwardAsync](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.ImageEditor.SfImageEditor.html#Syncfusion_Blazor_ImageEditor_SfImageEditor_BringForwardAsync_System_String_ "BringForwardAsync method of Blazor Image Editor"), [BringToFrontAsync](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.ImageEditor.SfImageEditor.html#Syncfusion_Blazor_ImageEditor_SfImageEditor_BringToFrontAsync_System_String_ "BringToFrontAsync method of Blazor Image Editor"), [SendBackwardAsync](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.ImageEditor.SfImageEditor.html#Syncfusion_Blazor_ImageEditor_SfImageEditor_SendBackwardAsync_System_String_ "SendBackwardAsync method of Blazor Image Editor"), and [SendToBackAsync](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.ImageEditor.SfImageEditor.html#Syncfusion_Blazor_ImageEditor_SfImageEditor_SendToBackAsync_System_String_ "SendToBackAsync method of Blazor Image Editor").
Types of z-order adjustments include:
- **Send Backward:** Swap the selected annotation with the annotation one layer behind it.
- **Send to Back:** Move the selected annotation behind all other annotations.
- **Bring to Front:** Move the selected annotation in front of all others.
- **Bring Forward:** Swap the selected annotation with the annotation one layer ahead of it.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Z-order-annotation-rendering-options-in-Blazor-Image-Editor.png" alt="Z-order annotation rendering options in Blazor Image Editor" style="width:100%">
<figcaption>Z-order annotation rendering options in Blazor Image Editor</figcaption>
</figure>
### Gantt Chart
The [Blazor Gantt Chart](https://www.syncfusion.com/blazor-components/blazor-gantt-chart "Blazor Gantt Chart") delivers the following new user-friendly updates:
#### Revamped resource binding for more efficient project management
The [resource allocation](https://blazor.syncfusion.com/documentation/gantt-chart/resources "Resources in Blazor Gantt Chart") feature in the Blazor Gantt Chart has been enhanced with simplified data binding support and improved resource assignment management.
#### Row hover support for interactive viewing
We’ve introduced the new row hover selection support in the Gantt Chart. The [EnableRowHover](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Gantt.SfGantt-1.html#Syncfusion_Blazor_Gantt_SfGantt_1_EnableRowHover "EnableRowHover property of Blazor Gantt Chart") property enhances the visual experience and makes it easier to identify the row currently under the cursor.
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Row-hovering-feature-in-the-Blazor-Gantt-Chart.gif" alt="Row hovering feature in the Blazor Gantt Chart" style="width:100%">
<figcaption>Row hovering feature in the Blazor Gantt Chart</figcaption>
</figure>
**Note:** For more details, refer to the [row hovering in the Blazor Gantt Chart demo](https://blazor.syncfusion.com/demos/gantt-chart/selection?theme=fluent "Row hovering in the Blazor Gantt Chart demo").
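Enabling it is a one-property change — a hedged sketch, where `@taskCollection` and the omitted task-field configuration are placeholders:

```razor
<SfGantt DataSource="@taskCollection" EnableRowHover="true">
    @* TaskFields / column configuration omitted *@
</SfGantt>
```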
### Query Builder
The [Blazor Query Builder](https://www.syncfusion.com/blazor-components/blazor-query-builder "Blazor Query Builder") supports the following new features in the 2024 Volume 2 release:
#### Drag and drop support for intuitive query building
Users can reposition rules or groups within the component effortlessly. This enhancement provides a more intuitive and flexible way to construct and modify queries.
**Note:** For more details, refer to the [example of drag and drop in Blazor Query Builder](https://blazor.syncfusion.com/demos/query-builder/drag-and-drop?theme=fluent2 "Example of drag and drop in Blazor Query Builder").
#### Separate connectors for clear visualization
Users can integrate standalone connectors between rules or groups within the same group. This feature allows for greater flexibility, as users can connect rules or groups using different connectors, enhancing the complexity and precision of query construction.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Separate-connectors-feature-in-Blazor-Query-Builder.png" alt="Separate connectors feature in Blazor Query Builder" style="width:100%">
<figcaption>Separate connectors feature in Blazor Query Builder</figcaption>
</figure>
**Note:** For more details, refer to the [example of separate connectors in Blazor Query Builder](https://blazor.syncfusion.com/demos/query-builder/separate-connector?theme=fluent2 "Example of separate connectors in Blazor Query Builder").
### DataGrid
#### Add new row option for easier data entry
The [Blazor DataGrid](https://www.syncfusion.com/blazor-components/blazor-datagrid "Blazor DataGrid") now includes an **add new row** feature during inline editing. This feature ensures that a blank row is constantly visible within the grid content, facilitating the easy insertion of new records.
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Add-new-row-feature-in-Blazor-DataGrid.gif" alt="Add new row feature in Blazor DataGrid" style="width:100%">
<figcaption>Add new row feature in Blazor DataGrid</figcaption>
</figure>
**Note:** For more details, refer to the [example of inline editing in Blazor DataGrid](https://blazor.syncfusion.com/demos/datagrid/inline-editing?theme=fluent "Example of inline editing in Blazor DataGrid").
### Dropdown components
#### Disable specific items
The dropdown components ([AutoComplete](https://www.syncfusion.com/blazor-components/blazor-autocomplete "Blazor AutoComplete"), [ListBox](https://www.syncfusion.com/blazor-components/blazor-listbox "Blazor ListBox"), [ComboBox](https://www.syncfusion.com/blazor-components/blazor-combobox "Blazor ComboBox"), [Dropdown List](https://www.syncfusion.com/blazor-components/blazor-dropdown-list "Blazor Dropdown List"), [Dropdown Tree](https://www.syncfusion.com/blazor-components/blazor-dropdowntree "Blazor Dropdown Tree"), [MultiSelect Dropdown](https://www.syncfusion.com/blazor-components/blazor-multiselect-dropdown "Blazor MultiSelect Dropdown"), and [Mention](https://www.syncfusion.com/blazor-components/blazor-mention "Blazor Mention")) can now enable or [disable individual items](https://blazor.syncfusion.com/demos/dropdown-list/disableditems?theme=fluent "Example of disabled items in Blazor Dropdown List") for specific scenarios. Once an item is disabled, it cannot be selected as a value for the component. This is particularly useful for disabling options such as out-of-stock products or inactive account types. To configure the disabled item columns, use the [DropDownListFieldSettings.Disabled](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.DropDowns.DropDownListFieldSettings.html#Syncfusion_Blazor_DropDowns_DropDownListFieldSettings_Disabled "DropDownListFieldSettings.Disabled property") property.
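A sketch of the Dropdown List case, assuming a bound model with a boolean property that marks inactive entries (the model and field names here are illustrative):

```razor
<SfDropDownList TValue="string" TItem="AccountType" DataSource="@accountTypes">
    @* Disabled maps to a bool field on the bound model *@
    <DropDownListFieldSettings Text="Name" Value="Id" Disabled="IsInactive" />
</SfDropDownList>

@code {
    public class AccountType
    {
        public string Id { get; set; }
        public string Name { get; set; }
        public bool IsInactive { get; set; } // item is unselectable when true
    }

    private List<AccountType> accountTypes = new();
}
```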
### MultiSelect Dropdown
#### Virtual scrolling
The [Blazor MultiSelect Dropdown](https://www.syncfusion.com/blazor-components/blazor-multiselect-dropdown "Blazor MultiSelect Dropdown") component’s virtualization support allows users to navigate large lists of options efficiently by loading the items on demand. Virtualization can be used with filtering, grouping, custom values, and checkbox mode features. This feature can be enabled by setting the [EnableVirtualization](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.DropDowns.SfMultiSelect-2.html#Syncfusion_Blazor_DropDowns_SfMultiSelect_2_EnableVirtualization "EnableVirtualization property of Blazor MultiSelect Dropdown") property to **true**.
**Note:** For more details, refer to the [example of virtualization in the Blazor MultiSelect Dropdown component](https://blazor.syncfusion.com/demos/multiselect-dropdown/virtualization?theme=fluent "Example of virtualization in Blazor MultiSelect Dropdown").
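A minimal sketch of turning this on (the type parameters and data source are placeholders):

```razor
@* Items are loaded on demand as the user scrolls the popup *@
<SfMultiSelect TValue="string[]" TItem="Record" DataSource="@records"
               EnableVirtualization="true">
</SfMultiSelect>
```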
### Tree Grid
#### Performance improvements in expand and collapse features
We are excited to announce significant performance improvements for [Blazor TreeGrid](https://www.syncfusion.com/blazor-components/blazor-tree-grid "Blazor TreeGrid") with single root node data configurations. This enhancement targets scenarios with only one root parent row and thousands of child rows.
The following performance metrics compare the previous release with the 2024 Volume 2 release for 10,000 records:
| Actions | Old release | 2024 Volume 2 release |
| --- | --- | --- |
| Initial rendering | 1.7 seconds | 1.1 seconds |
| Expand row | 31 seconds | 1.4 seconds |
| Collapse row | 560 milliseconds | 150 milliseconds |
### Rich Text Editor
#### Quick format toolbar for easy text formatting
The [Blazor Rich Text Editor](https://www.syncfusion.com/blazor-components/blazor-wysiwyg-rich-text-editor "Blazor Rich Text Editor") features a quick toolbar that appears upon text selection, offering convenient access to text formatting options. This floating toolbar allows users to easily apply bold, italic, underline, strikethrough, and other formats directly from the quick toolbar near the selected text.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Add-new-row-feature-in-Blazor-DataGrid.gif" alt="Quick format toolbar in Blazor Rich Text Editor" style="width:100%">
<figcaption>Quick format toolbar in Blazor Rich Text Editor</figcaption>
</figure>
**Note:** For more details, refer to the [example of quick format toolbar in Blazor Rich Text Editor](https://blazor.syncfusion.com/demos/rich-text-editor/quick-format-toolbar?theme=fluent2 "Example of quick format toolbar in Blazor Rich Text Editor").
### TreeView
#### Virtualization for smoother performance
The [Blazor TreeView](https://www.syncfusion.com/blazor-components/blazor-treeview "Blazor TreeView") component now supports [virtualization](https://blazor.syncfusion.com/demos/treeview/ui-virtualization?theme=fluent2 "Example of virtualization in Blazor TreeView") to optimize performance while handling a huge volume of data. Only the nodes that fit within the TreeView height are rendered, and nodes are swapped in and out as the user scrolls, so off-screen items are never rendered. To enable this feature, set the [Height](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Navigations.SfTreeView-1.html#Syncfusion_Blazor_Navigations_SfTreeView_1_Height "Height property of Blazor TreeView") property of the TreeView and set [EnableVirtualization](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Navigations.SfTreeView-1.html#Syncfusion_Blazor_Navigations_SfTreeView_1_EnableVirtualization "EnableVirtualization Property for Blazor TreeView") to **true**.
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Virtualization-in-Blazor-TreeView.gif" alt="Virtualization in Blazor TreeView" style="width:100%">
<figcaption>Virtualization in Blazor TreeView</figcaption>
</figure>
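A hedged sketch of the two settings working together (field configuration omitted):

```razor
@* A fixed Height plus EnableVirtualization renders only on-screen nodes *@
<SfTreeView TValue="TreeItem" Height="400px" EnableVirtualization="true">
    @* TreeViewFieldsSettings configuration omitted *@
</SfTreeView>
```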
## Conclusion
Thanks for reading! In this blog, we have explored the new components and features added to the [Syncfusion Blazor](https://www.syncfusion.com/blazor-components "Blazor components") suite for the [2024 Volume 2](https://www.syncfusion.com/forums/188642/essential-studio-2024-volume-2-main-release-v26-1-35-is-available-for-download "Essential Studio 2024 Volume 2") release. To discover more about the features available in this release, please visit our [Release Notes](https://help.syncfusion.com/common/essential-studio/release-notes/v26.1.35 "Essential Studio Release Notes") and [What’s New](https://www.syncfusion.com/products/whatsnew/blazor-components "What’s New in Syncfusion Blazor components") pages. Try them out and leave your feedback in the comments section below!
For current Syncfusion users, you can access the most recent version of Essential Studio on the [License and Downloads](https://www.syncfusion.com/account/downloads "License and downloads page of Essential Studio") page. If you’re new to Syncfusion, we offer a 30-day [free trial](https://www.syncfusion.com/downloads "Free evaluation of the Syncfusion Essential Studio") to test these exciting new features.
Try out these features and share your feedback as comments on this blog. You can also reach us through our [support forums](https://www.syncfusion.com/forums "Syncfusion Support Forum"), [support portal](https://support.syncfusion.com/ "Syncfusion Support Portal"), or [feedback portal.](https://www.syncfusion.com/feedback/ "Syncfusion Feedback Portal")
## Related blogs
- [Syncfusion Essential Studio 2024 Volume 2 Is Here!](https://www.syncfusion.com/blogs/post/syncfusion-essential-studio-2024-vol2 "Blog: Syncfusion Essential Studio 2024 Volume 2 Is Here!")
- [Introducing the New Blazor 3D Charts Component](https://www.syncfusion.com/blogs/post/blazor-3d-charts-component "Blog: Introducing the New Blazor 3D Charts Component")
- [How to Maintain State in the Blazor ListBox Using Fluxor?](https://www.syncfusion.com/blogs/post/maintain-state-blazor-listbox-fluxor "Blog: How to Maintain State in the Blazor ListBox Using Fluxor?")
- [What’s New in C# 13 for Developers?](https://www.syncfusion.com/blogs/post/whats-new-csharp-13-for-developers "Blog: What’s New in C# 13 for Developers?") | jollenmoyani |
1,900,454 | Is Binary Fundamental to Programming? | Binary is the bedrock of all programming languages. It is a system of storing numbers in only two... | 0 | 2024-07-09T14:00:00 | https://dev.to/anitaolsen/is-binary-fundamental-to-programming-1f70 | discuss, programming | Binary is the bedrock of all programming languages. It is a system of storing numbers in only two places: 0s and 1s which correspond to the on and off states your computer can understand.
How important is it to learn binary? Should I learn it as a programmer?
Would you say binary is fundamental to programming? | anitaolsen |
1,900,514 | Getting there. Ish... | Trying to return to life as normal | 0 | 2024-07-09T08:39:36 | https://dev.to/stacy_cash/getting-there-ish-2ii4 | csharp, book, illness, recovery | ---
title: Getting there. Ish...
published: true
description: Trying to return to life as normal
tags: csharp, book, illness, recovery
cover_image: https://raw.githubusercontent.com/StacyCash/blog-posts/main/general/2024/getting-there/cover-image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-25 19:55 +0000
---
Since February 2022 I have been on sick leave due to Long Covid, [POTS](https://en.wikipedia.org/wiki/Postural_orthostatic_tachycardia_syndrome), and Chronic Fatigue. Last week I had an assessment from the government because I have been on sick leave for so long. I have no idea what the result is; I'll find out in time I guess.
It's hard to think that it's been 16 months since I was first ill, and mind blowing that I'm still only working 4 to 5 hours a day.
When I was first ill, I was so tired that I basically just collapsed on the sofa or bed, and did nothing. I didn't even have the energy to get bored at the nothing I could do. Netflix and YouTube were hit hard, but nothing really went in.
The specialist clinic that is treating me in Amsterdam tries new treatments every 6 weeks. After 3 weeks we have a chat about progress, and at the 6 week mark decide whether to continue or not before trying something else.
First they tried to get my POTS under control. So this time last year I was starting to get bored again. But still didn't have the energy, or brain capacity, to do much.
So... I thought a good option would be to work through the book I wrote back in 2022. Seeing as I wrote it, I must be able to do it. At least that was my thought process...
How wrong I was. The brain fog was so thick that I couldn't understand anything (judging from the 4 and 5 star reviews, I'm assuming this is a me now problem, and not a me then problem 😅🫣).
Without a doubt that was the lowest time of the illness. The way that I saw it was:
> If I could not even understand my own book then what chance did I have at continuing with my career?
I love technology, and software development! And it's not just a career; it's been a hobby since I was in single digits! At that time I wondered if I'd ever work again.
But since autumn last year I have been slowly getting back into it. I've given some talks, streamed some coding, and other IT stuff. And, whilst it's only 4 to 5 hours a day, I am working now and starting to feel a little useful again.
It's not easy, and sometimes my brain fog means that things that should take an hour or 2 can take me a day or more. But other days I can get more done than I hoped. Ups and downs.
And that book... I can work through it again, I use it for reference for other things that I do now (the best reason to write a book!).
I even reviewed [Jimmy Engström](https://twitter.com/EngstromJimmy)'s [book](https://www.amazon.nl/-/en/Jimmy-Engstr%C3%B6m/dp/1835465919/ref=sr_1_1?crid=21NTGAYTKSMJ&dib=eyJ2IjoiMSJ9.GKqHjak_0urI0mQ6kj6uVfwlvYs8LP-4SveG27enuVbmFZLvOUU2SwEcTXTq6v4oBt4vNGQAVA6XhJldBf95oc-kW1qA4GC4t5HIQe6S2ZxolXRrV3QUhh_ItoAVkSsgxBRlCIaYNFHiwq1wsUGgebBmZZhynff6ZRW1HZr3hXNXobbHHTGXt744U0WyAhHF.XajWEHDB9jQXzbRVZLKhExoigDF6qdGsAJtLcygOAI0&dib_tag=se&keywords=blazor+jimmy&qid=1719763481&sprefix=blazor+jimmy%2Caps%2C80&sr=8-1) for him! Both a learning opportunity, and a chance to stretch my brain a little.
I also got some great news about my book. I've gotten the green light for a second edition!
I'm taking my time - it's going to be for .NET 9, so not before the end of the year. I'm updating what is there, looking at where I think I can improve my explanations, and adding some new chapters!
- Data API
- Making GitHub Workflows for infra as code, building, and deploying the code
- Adding a chapter on custom auth using Auth0
I have a long way to go, but it feels awesome to be back! | stacy_cash |
1,901,385 | Chart of the Week: Creating a .NET MAUI Doughnut Chart to Visualize the World’s Biggest Oil Producers | TL;DR: Let’s craft a .NET MAUI Doughnut Chart to visualize the 2023 oil production data. We cover... | 0 | 2024-07-08T15:35:27 | https://www.syncfusion.com/blogs/post/maui-doughnut-chart-oil-producers | dotnetmaui, chart, maui, datavisualization | ---
title: Chart of the Week: Creating a .NET MAUI Doughnut Chart to Visualize the World’s Biggest Oil Producers
published: true
date: 2024-06-26 13:59:30 UTC
tags: dotnetmaui, chart, maui, datavisualization
canonical_url: https://www.syncfusion.com/blogs/post/maui-doughnut-chart-oil-producers
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cnlgbdybyu1vc07lmaiy.png
---
**TL;DR:** Let’s craft a .NET MAUI Doughnut Chart to visualize the 2023 oil production data. We cover data preparation, chart configuration, and customization steps to create an interactive and visually appealing chart.
Welcome to our **Chart of the Week** blog series!
Today, we’ll visualize the world’s biggest oil producers in 2023 using Syncfusion’s [.NET MAUI Doughnut Chart](https://www.syncfusion.com/maui-controls/maui-circular-charts/chart-types/maui-doughnut-chart ".NET MAUI Doughnut Chart").
Despite ongoing efforts to decarbonize the global economy, oil remains one of the world’s most vital resources. It’s produced by a relatively small group of countries, granting them significant economic and political leverage.
This graphic illustrates global crude oil production in 2023, measured in million barrels per day, sourced from the U.S. Energy Information Administration [(EIA)](https://www.eia.gov/international/data/world/petroleum-and-other-liquids/monthly-petroleum-and-other-liquids-production "U.S. Energy Information Administration").
The following image shows the Doughnut Chart we’re going to create.

Let’s get started!
## Step 1: Gather oil production data
Before creating the .NET MAUI Doughnut Chart, we need to gather data from the U.S. Energy Information Administration [(EIA)](https://www.eia.gov/international/data/world/petroleum-and-other-liquids/monthly-petroleum-and-other-liquids-production "U.S. Energy Information Administration"). We can also download the data in CSV format.
## Step 2: Prepare the data for the Doughnut Chart
Then, we need to organize our data properly. This involves creating an **OilProductionModel** class, which defines the structure of our oil production data, and a **WorldOilProduction** class, which handles the data manipulation and communication between the model and the Doughnut Chart.
The data model represents the data we want to visualize. It could contain the properties to store the details such as the country name, oil production value, and country flag image.
```csharp
public class OilProductionModel
{
    public string Country { get; set; }
    public double Production { get; set; }
    public string FlagImage { get; set; }

    public OilProductionModel(string country, double production, string flagImage)
    {
        Country = country;
        Production = production;
        FlagImage = flagImage;
    }
}
```
Now, create the **WorldOilProduction** class, which acts as an intermediary between the data models and the user interface elements (Doughnut Chart), preparing and formatting data for display and interaction.
Additionally, we need to configure the properties to update the exploding segment values in the center view labels.
```csharp
public class WorldOilProduction : INotifyPropertyChanged
{
    public List<OilProductionModel> ProductionDetails { get; set; }
    public List<Brush> PaletteBrushes { get; set; }

    private double productionValue;
    public double ProductionValue
    {
        get
        {
            return productionValue;
        }
        set
        {
            productionValue = value;
            NotifyPropertyChanged(nameof(ProductionValue));
        }
    }

    private string countryName;
    public string CountryName
    {
        get
        {
            return countryName;
        }
        set
        {
            countryName = value;
            NotifyPropertyChanged(nameof(CountryName));
        }
    }

    int explodeIndex = 1;
    public int ExplodeIndex
    {
        get
        {
            return explodeIndex;
        }
        set
        {
            explodeIndex = value;
            UpdateIndexValues(value);
            NotifyPropertyChanged(nameof(ExplodeIndex));
        }
    }

    public WorldOilProduction()
    {
        ProductionDetails = new List<OilProductionModel>(ReadCSV());
        PaletteBrushes = new List<Brush>
        {
            new SolidColorBrush(Color.FromArgb("#583101")),
            new SolidColorBrush(Color.FromArgb("#603808")),
            new SolidColorBrush(Color.FromArgb("#6f4518")),
            new SolidColorBrush(Color.FromArgb("#8b5e34")),
            new SolidColorBrush(Color.FromArgb("#a47148")),
            new SolidColorBrush(Color.FromArgb("#bc8a5f")),
            new SolidColorBrush(Color.FromArgb("#d4a276")),
            new SolidColorBrush(Color.FromArgb("#e7bc91")),
            new SolidColorBrush(Color.FromArgb("#f3d5b5")),
            new SolidColorBrush(Color.FromArgb("#FFEDD8"))
        };
    }

    public event PropertyChangedEventHandler PropertyChanged;

    public void NotifyPropertyChanged([CallerMemberName] string propertyName = "")
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }

    private void UpdateIndexValues(int value)
    {
        if (value >= 0 && value < ProductionDetails.Count)
        {
            var model = ProductionDetails[value];
            if (model != null && model.Country != null)
            {
                ProductionValue = model.Production;
                CountryName = model.Country;
            }
        }
    }
}
```
Then, convert the CSV data into a data collection using the **ReadCSV** method.
```csharp
public IEnumerable<OilProductionModel> ReadCSV()
{
    Assembly executingAssembly = typeof(App).GetTypeInfo().Assembly;
    Stream inputStream = executingAssembly.GetManifestResourceStream("OilProductionChart.Resources.Raw.data.csv");

    string? line;
    List<string> lines = new List<string>();

    using StreamReader reader = new StreamReader(inputStream);
    while ((line = reader.ReadLine()) != null)
    {
        lines.Add(line);
    }

    // The lambda parameter is named "row" to avoid clashing with the
    // local variable "line" declared above (compiler error CS0136).
    return lines.Select(row =>
    {
        string[] data = row.Split(',');
        return new OilProductionModel(data[1], Convert.ToDouble(data[2]), data[3]);
    });
}
```
## Step 3: Configure the Syncfusion .NET MAUI Doughnut Chart
Let’s configure the .NET MAUI Doughnut Chart control using this [documentation](https://help.syncfusion.com/maui/circular-charts/getting-started "Getting started with .NET MAUI Charts").
Refer to the following code example.
```xml
<ContentPage
. . .
xmlns:chart="clr-namespace:Syncfusion.Maui.Charts;assembly=Syncfusion.Maui.Charts">
<chart:SfCircularChart>
. . .
<chart:DoughnutSeries>
. . .
</chart:DoughnutSeries>
</chart:SfCircularChart>
</ContentPage>
```
## Step 4: Bind the data to .NET MAUI Doughnut Series
To effectively display the oil production data, we’ll use the Syncfusion **DoughnutSeries** instance and bind our **ProductionDetails** collection to the chart.
Refer to the following code example.
```xml
<chart:SfCircularChart>
. . .
<chart:DoughnutSeries ItemsSource="{Binding ProductionDetails}"
XBindingPath="Country"
YBindingPath="Production"/>
</chart:SfCircularChart>
```
In the above code, the **ProductionDetails** collection from the **WorldOilProduction** ViewModel is bound to the **ItemsSource** property. The **XBindingPath** and **YBindingPath** properties are bound to the **Country** and **Production** properties, respectively.
## Step 5: Customize the .NET MAUI Doughnut Chart appearance
Let’s customize the appearance of the .NET MAUI Doughnut Chart to enhance its readability.
### **Adding the chart title**
Adding a [Title](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.ChartBase.html#Syncfusion_Maui_Charts_ChartBase_Title "Title property of .NET MAUI Doughnut Chart") helps users understand the content of the chart more effectively. Refer to the following code example to customize the chart title.
```xml
<chart:SfCircularChart.Title>
<Grid Margin="4,5,0,0" >
<Grid.RowDefinitions>
<RowDefinition Height="80"/>
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="55"/>
<ColumnDefinition Width="Auto"/>
</Grid.ColumnDefinitions>
<Image Grid.Column="0"
Grid.RowSpan="2"
Source="oil.png"
Margin="0,-5,0,0"
HeightRequest="60"
WidthRequest="60"/>
<StackLayout Grid.Column="1" Grid.Row="0" Margin="0,10,0,0">
<Label Text="The World's Biggest Oil Producers in 2023"
Margin="0,0,0,6"
FontSize="22"
FontFamily="centurygothic"
FontAttributes="Bold"
TextColor="Black"/>
<Label Text="Crude Oil Production (Million barrels per day)"
FontSize="18"
FontFamily="centurygothic"
TextColor="Black"
Margin="0,2,0,0"/>
</StackLayout>
</Grid>
</chart:SfCircularChart.Title>
```
### Customize the doughnut series
Refer to the following code example to customize the doughnut series using the [Radius](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.CircularSeries.html#Syncfusion_Maui_Charts_CircularSeries_Radius "Radius property of .NET MAUI Doughnut Chart"), [InnerRadius](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.DoughnutSeries.html#Syncfusion_Maui_Charts_DoughnutSeries_InnerRadius "InnerRadius property of .NET MAUI Doughnut Chart"), [PaletteBrushes](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.ChartSeries.html#Syncfusion_Maui_Charts_ChartSeries_PaletteBrushes "PaletteBrushes property of .NET MAUI Doughnut Chart"), and other properties.
```xml
<chart:DoughnutSeries ItemsSource="{Binding ProductionDetails}"
XBindingPath="Country"
YBindingPath="Production"
ShowDataLabels="True"
PaletteBrushes="{Binding PaletteBrushes}"
Radius="{OnPlatform Android=0.6,Default=0.65,iOS=0.5}"
InnerRadius="0.5"/>
```
### Customize the data labels
Let’s customize the data labels using the [LabelTemplate](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.ChartSeries.html#Syncfusion_Maui_Charts_ChartSeries_LabelTemplate "LabelTemplate property of .NET MAUI Doughnut Chart") support. Using the [LabelContext](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.LabelContext.html "LabelContext property of .NET MAUI Doughnut Chart"), we can tailor the label content to display the percentage value of the corresponding data point. Additionally, use the [SmartLabelAlignment](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.CircularDataLabelSettings.html#Syncfusion_Maui_Charts_CircularDataLabelSettings_SmartLabelAlignment "SmartLabelAlignment property of .NET MAUI Doughnut Chart") property to arrange the data labels intelligently, preventing them from overlapping.
Refer to the following code example.
```xml
<chart:SfCircularChart x:Name="chart" Margin="{OnPlatform Default='0,0,0,0',iOS='-35,0,0,0'}" >
<chart:SfCircularChart.Resources>
<DataTemplate x:Key="labelTemplate">
<HorizontalStackLayout Spacing="5">
<Image Source="{Binding Item.FlagImage}" VerticalOptions="Center" HeightRequest="{OnPlatform Android=20,Default=30,iOS=20}" WidthRequest="{OnPlatform Android=20,Default=30,iOS=20}" />
<Label Text="{Binding Item.Country}" VerticalOptions="Center" FontFamily="centurygothic" FontSize="{OnPlatform Android=15,Default=20,iOS=15}"/>
<Label Text="{Binding Item.Production,StringFormat=': {0}%'}" Margin="-4,0,0,0" VerticalOptions="Center" FontFamily="centurygothic" FontSize="{OnPlatform Android=15,Default=20,iOS=15}"/>
</HorizontalStackLayout>
</DataTemplate>
</chart:SfCircularChart.Resources>
. . .
<chart:DoughnutSeries ItemsSource="{Binding ProductionDetails}"
XBindingPath="Country"
YBindingPath="Production"
ShowDataLabels="True"
LabelContext="Percentage"
LabelTemplate="{StaticResource labelTemplate}"
PaletteBrushes="{Binding PaletteBrushes}"
Radius="{OnPlatform Android=0.6,Default=0.65,iOS=0.5}"
InnerRadius="0.5">
<chart:DoughnutSeries.DataLabelSettings>
<chart:CircularDataLabelSettings SmartLabelAlignment="Shift" LabelPosition="Outside">
<chart:CircularDataLabelSettings.ConnectorLineSettings>
<chart:ConnectorLineStyle ConnectorType="Line" StrokeWidth="3"></chart:ConnectorLineStyle>
</chart:CircularDataLabelSettings.ConnectorLineSettings>
</chart:CircularDataLabelSettings>
</chart:DoughnutSeries.DataLabelSettings>
</chart:DoughnutSeries>
</chart:SfCircularChart>
```
### Adding a center view to the Doughnut Chart
Now, configure the [CenterView](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.DoughnutSeries.html#Syncfusion_Maui_Charts_DoughnutSeries_CenterView "CenterView property of .NET MAUI Doughnut Chart") property to add content about the selected data segment’s oil production at the center of the chart.
Refer to the following code example.
```xml
<chart:DoughnutSeries.CenterView>
<VerticalStackLayout Spacing="{OnPlatform Android=3.5,Default=7,iOS=3.5}">
<Image HorizontalOptions="Center"
Opacity="0.8"
HeightRequest="{OnPlatform Android=15,Default=50,iOS=15}"
WidthRequest="{OnPlatform Android=15,Default=50,iOS=15}"
Margin="{OnPlatform Default='5,0,0,0', Android='2.5,0,0,0'}"
Source="oildrum.png"/>
<Label HorizontalOptions="Center" FontFamily="centurygothic" FontAttributes="Bold" FontSize="{OnPlatform Android=10,Default=21,iOS=10}" Text="{Binding CountryName,Source={x:Reference worldOilProduction}}"/>
<Label HorizontalOptions="Center" FontFamily="centurygothic" FontAttributes="Bold" FontSize="{OnPlatform Android=10,Default=20,iOS=10}" Text="{Binding ProductionValue,Source={x:Reference worldOilProduction},StringFormat='{0}M'}" />
</VerticalStackLayout>
</chart:DoughnutSeries.CenterView>
```
### Add interactive features
Using the [ExplodeOnTouch](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.PieSeries.html#Syncfusion_Maui_Charts_PieSeries_ExplodeOnTouch "ExplodeOnTouch property of .NET MAUI Doughnut Chart") property, we can add interactive features to view the data for the exploded segment in our Chart. Exploding a segment helps pull attention to a specific area of the Chart.
Here, we’ll bind the **ExplodeIndex** property to the series center view to show additional information about the data.
Refer to the following code example.
```xml
<chart:DoughnutSeries ItemsSource="{Binding ProductionDetails}"
XBindingPath="Country"
YBindingPath="Production"
. . .
ExplodeOnTouch="True"
ExplodeIndex="{Binding ExplodeIndex,Mode=TwoWay}"/>
```
After executing the above code examples, we will get the output that resembles the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Visualizing-the-worlds-biggest-oil-producers-data-using-the-Syncfusion-.NET-MAUI-Doughnut-Chart.gif" alt="Visualizing the world’s biggest oil producers’ data using the Syncfusion .NET MAUI Doughnut Chart" style="width:100%">
<figcaption>Visualizing the world’s biggest oil producers’ data using the Syncfusion .NET MAUI Doughnut Chart</figcaption>
</figure>
## GitHub reference
For more details, refer to visualizing the world’s biggest oil producers in 2023 using the .NET MAUI Doughnut Chart [GitHub demo](https://github.com/SyncfusionExamples/Creating-the-.NET-MAUI-Pie-Chart-to-Visualize-the-World-s-Biggest-Oil-Producers-in-2023 ".NET MAUI Doughnut Chart to visualize the world's biggest oil producers in 2023 GitHub demo").
## Conclusion
Thanks for reading! In this blog, we’ve seen how to use the Syncfusion [.NET MAUI Doughnut Chart](https://www.syncfusion.com/maui-controls/maui-circular-charts/chart-types/maui-doughnut-chart ".NET MAUI Doughnut Chart") to visualize the world’s biggest oil producers in 2023. We strongly encourage you to follow the steps outlined in this blog and share your thoughts in the comments below.
The existing customers can download the new version of Essential Studio on the [License and Downloads](https://www.syncfusion.com/account "Essential Studio License and Downloads page") page. If you are not a Syncfusion customer, try our 30-day [free trial](https://www.syncfusion.com/downloads "Get free evaluation for the Essential Studio products") to check out our incredible features.
You can also contact us through our [support forums](https://www.syncfusion.com/forums "Syncfusion Support Forum"), [support portal](https://support.syncfusion.com/ "Syncfusion Support Portal"), or [feedback portal](https://www.syncfusion.com/feedback "Syncfusion Feedback Portal"). We are always happy to assist you!
## Related blogs
- [Introducing the New .NET MAUI Digital Gauge Control](https://www.syncfusion.com/blogs/post/dotnetmaui-digital-gauge-control "Blog: Introducing the New .NET MAUI Digital Gauge Control")
- [Introducing the 12th Set of New .NET MAUI Controls and Features](https://www.syncfusion.com/blogs/post/syncfusion-dotnet-maui-2024-volume-2 "Blog: Introducing the 12th Set of New .NET MAUI Controls and Features")
- [Syncfusion Essential Studio 2024 Volume 2 Is Here!](https://www.syncfusion.com/blogs/post/syncfusion-essential-studio-2024-vol2 "Blog: Syncfusion Essential Studio 2024 Volume 2 Is Here!")
- [What’s New in .NET MAUI Charts: 2024 Volume 2](https://www.syncfusion.com/blogs/post/dotnet-maui-charts-2024-volume-2 "Blog: What’s New in .NET MAUI Charts: 2024 Volume 2") | jollenmoyani |
1,901,491 | A Comprehensive Guide on How to Open a Food Truck Business and Succeed | The food truck business has seen a significant surge in popularity over the past few years.... | 0 | 2024-07-08T06:28:00 | https://www.kopatech.com/blog/a-comprehensive-guide-on-how-to-open-a-food-truck-business-and-succeed | kopatech, foodorderingsystem, foodtruck, multirestaurant | The food truck business has seen a significant surge in popularity over the past few years. Entrepreneurs prefer to open a food truck business model over traditional brick-and-mortar establishments for a variety of reasons.
**1. Minimized Investment:**
One of the primary reasons is the minimized initial investment. Starting a restaurant in a fixed location can be a costly affair, with expenses like rent, renovations, and interior design quickly adding up. In contrast, a food truck requires a lower initial investment. The cost of a truck and the necessary equipment is often significantly less than the cost of setting up a restaurant. This lower financial barrier makes the food truck business an attractive option for budding entrepreneurs.
**2. Flexibility of Location:**
Food trucks offer the unique advantage of mobility. Unlike traditional restaurants, food trucks aren’t tied to a single location, and that mobility can be used to expand your reach. Food trucks have the freedom to move around and serve customers in different locations, from bustling city centres to local festivals and events. This flexibility allows food truck owners to go where the customers are, rather than waiting for customers to come to them.
**3. Specialty Cuisine:**
Food trucks are often associated with specialty or niche cuisines. Entrepreneurs can use food trucks to test out unique food concepts or cater to specific dietary preferences without the risk of investing in a full-scale restaurant. The smaller scale of a food truck operation allows for more menu flexibility and innovation.
**4. Minimized Real Estate Investment to open a food truck business:**
With a food truck, investing in expensive real estate is unnecessary. Entrepreneurs can avoid the long-term financial commitment of a lease or property purchase. This reduces costs and eliminates the risk of being stuck in an unfavourable location.
**5. Convenience for Customers:**
In today’s fast-paced world, convenience is king. Food trucks cater to this by offering quick service with an [online food ordering system](https://www.kopatech.com/online-food-ordering-system). Customers can grab a bite on the go without needing a sit-down meal. This convenience factor can attract a wide range of customers, from busy office workers to families out for a day in the park.
**6. Marketing Advantage:**
Finally, food trucks serve as moving advertisements for the business. A well-designed truck can catch the eye and draw in customers. Plus, the novelty of a food truck can create a buzz and attract media attention, providing free advertising.
## Some Best Foods Suited for Food Truck Business
In the diverse culinary landscape of India, food trucks have the unique opportunity to offer a variety of cuisines that set them apart from regular hotels and restaurants. Here are some cuisine ideas that can make your food truck stand out:
**1. Italian Cuisine:** Italian food has gained immense popularity in India. However, it’s not just about pizza and pasta. Consider offering lesser-known Italian dishes like Risotto, Tiramisu, or Bruschetta. You could also give these dishes an Indian twist, such as a Paneer Tikka Pizza or a Masala Fusilli.
**2. South Indian Dishes:** South Indian cuisine is renowned for its rich flavours and is much more than just Idli and Dosa. Consider offering regional delicacies like Chettinad Chicken, Malabar Parotta, or Andhra-style Prawns. These dishes can provide a refreshing change from the usual North Indian fare found in many restaurants.
**3. North Indian Dishes:** North Indian cuisine is diverse and flavourful. While many restaurants offer standard dishes like Butter Chicken and Paneer Tikka, a food truck could stand out by offering regional specialties like Amritsari Kulcha, Lucknowi Biryani, or Rajasthani Gatte ki Sabzi.
**4. Popular American Food:** American food, particularly fast food, is widely popular in India. However, instead of the usual burgers and fries, consider offering dishes like Southern Fried Chicken, New York-style Hot Dogs, or Tex-Mex cuisine. Again, adding an Indian twist to these dishes can make them even more appealing.
Now, let’s look at some popular dishes that are well suited to a food truck business:
**1. Tacos:** Tacos are a great food truck item. They’re easy to make, customizable, and can be eaten on the go. Consider offering a variety of fillings, from classic Mexican to fusion fillings like Tandoori Chicken or Paneer Tikka.
**2. Grilled Sandwiches:** Grilled sandwiches are another excellent choice. They’re quick to prepare and easy to eat. Plus, the options for fillings are endless. You could offer everything from a classic Grilled Cheese to a gourmet Mushroom and Truffle Oil sandwich.
**3. Rice Bowls:** Rice bowls are becoming increasingly popular. They’re a complete meal in a bowl and are easy to customize. Consider offering a variety of rice bowls, from a South Indian Lemon Rice bowl to a North Indian Rajma Chawal bowl.
**4. Pasta:** Pasta is a crowd-pleaser and is relatively simple to prepare in a food truck setting. Offer a variety of sauces and add-ons, and consider offering whole wheat or gluten-free pasta for health-conscious customers.
**5. Kebabs and Rolls:** Kebabs and rolls are perfect for a food truck. They’re delicious, easy to eat, and can be prepared in advance. Plus, they’re versatile - you can offer everything from a classic Chicken Tikka Kebab to a fusion Paneer Tikka Roll.
Remember, to open a food truck business successfully, you must offer high-quality, delicious food that’s different from what’s already available. So, don’t be afraid to get creative and experiment with different cuisines and dishes. Good luck!
## Intricacies of Operating the Food Truck Onsite
Before you open a food truck business, consider the following factors. Food preparation and cutlery selection to seating arrangements and customer feedback. Here’s a comprehensive guide on how to operate a food truck business:
**1. Food Preparation:** The first step in operating a food truck business is food preparation. Depending on the size of your truck and the complexity of your menu, some or all of your food preparation may need to be done in a separate kitchen facility. This allows for better control over food safety and quality. It’s important to plan your menu carefully, taking into account the limited space and equipment in a food truck.
**2. Cutlery Selection:** Choosing the right cutlery is crucial for a food truck business. Given that most food truck meals are eaten on the go, disposable cutlery is often the most practical choice. However, it’s important to choose eco-friendly options, such as biodegradable or compostable cutlery, to minimize environmental impact.
**3. Seating Arrangements:** While food trucks are primarily take-away businesses, providing seating can enhance the customer experience. If space and local regulations permit, consider setting up a few tables and chairs near your truck. Alternatively, you could park your truck near a public seating area or park.
**4. Appointing a Chief Chef:** The success of a food truck business largely depends on the quality of the food, and that’s where a good chef comes in. The chef will be responsible for creating the menu, ensuring food quality, and overseeing food preparation. When choosing a chef, consider their experience, creativity, and ability to work in a food truck environment.
**5. Customer Feedback:** Customer feedback is invaluable for improving your food truck business. Regularly ask for feedback from your customers about the food, service, and overall experience. This can be done verbally, through comment cards, or via online reviews. Take all feedback seriously and make necessary improvements.
**6. Marketing and Promotion:** In addition to the above, effective marketing and promotion are key to operating a successful food truck business. This could include social media marketing, participating in local events, offering special promotions, and more.
**7. Compliance with Regulations:** Last but not least, it’s crucial to comply with all local health, safety, and business regulations. This includes obtaining necessary permits and licenses, regular health inspections, and adherence to local parking regulations.
## Locating the Food Truck Business
Choosing the right location to open a food truck business is a critical decision that can significantly impact your success. Here are some factors to consider when selecting the best place to locate your food truck:
**1. Municipal Restrictions:**
Every city has its own set of rules and regulations regarding food trucks. Some cities have minimal restrictions, while others may have stringent regulations regarding where food trucks can operate. It’s crucial to understand these regulations before deciding on a location. Research the local laws, obtain the necessary permits, and ensure your chosen location complies with all municipal restrictions.
**2. High Foot-Traffic Areas:**
The best locations for food trucks are often areas with high foot traffic. These could include business districts, shopping centres, universities, and tourist attractions. These locations are typically bustling with potential customers looking for a quick and convenient meal option.
**3. Hangout Spots:**
Consider locations where people like to hang out, such as parks, beaches, and entertainment venues. These spots often attract large crowds, especially on weekends and during events, making them ideal locations for a food truck.
**4. Avoiding Traffic Disruptions:**
While it’s important to be where the people are, it’s equally important not to cause a hindrance to public or vehicle traffic. Setting up in a location that disrupts traffic flow can lead to complaints and potential fines. Choose a location that’s easily accessible but doesn’t obstruct traffic or pedestrian pathways. You can add more punch to your business with a [Multi Restaurant Online Ordering System](https://www.kopatech.com/multi-restaurant-delivery-software) for bigger sales.
**5. Pollution-Free Zones:**
Food is associated with cleanliness. Therefore, setting up your food truck in a pollution-free zone can enhance your business’s image and attract more customers. Avoid locations near factories, construction sites, or busy roads where dust and exhaust fumes are prevalent.
**6. Parks and Leisure Spots:**
Parks, waterfronts, and other leisure spots are excellent locations for food trucks. People visiting these places are often looking for a bite to eat, and the relaxed atmosphere can make the food truck experience even more enjoyable. Plus, these locations often host events and festivals, which can bring in additional business.
**7. Proximity to Complementary Businesses:**
Consider setting up nearby businesses that complement yours, such as breweries, bars, or coffee shops. Customers of these businesses might be looking for food options, providing a ready market for your food truck.
### Quality Compliance Standards for a Food Truck Business
Maintaining high food quality standards is crucial for eateries, including food trucks. These standards are governed by the Food Safety and Standards Authority and are designed to ensure the health and safety of consumers.
**Cleanliness:** Cleanliness is paramount in any food service establishment. This includes regular cleaning of cooking and serving areas, proper waste disposal, and pest control. Eateries are required to use safe and clean water for cooking and cleaning purposes.
**Workers’ Health:** Workers involved in food handling must maintain good personal hygiene. They should be free from any infectious diseases, and regular health check-ups are recommended. Proper hand hygiene, use of clean uniforms, and safe food handling practices are essential.
**Food Contamination:** Eateries must take measures to prevent food contamination. This includes proper storage of food at correct temperatures, separate storage of raw and cooked food, and use of clean and sanitized utensils and equipment.
**Food Inspector’s Right to Inspect:** Food inspectors have the right to inspect eateries at any time to ensure compliance with food safety standards. They can check the premises, cooking and storage areas, and food items. Non-compliance can lead to penalties and even closure of the eatery.
**Potential Risks to Public Health:** Non-compliance with food safety standards can pose serious risks to public health, including food poisoning and the spread of foodborne illnesses. Therefore, eateries must adhere to these standards strictly.
**Avoiding the Use of Alcoholic Drinks:** The sale and service of alcoholic drinks require a separate license. Moreover, alcohol service may not be suitable for all types of eateries, especially food trucks, due to space constraints and the quick-service model. Therefore, many food trucks choose to focus on serving a variety of non-alcoholic beverages.
The rise of food trucks reflects a dynamic shift in the culinary landscape, offering entrepreneurs an enticing blend of low investment, mobility, and culinary creativity. With a focus on quality, innovation, and compliance, food trucks carve a niche in the bustling market, enticing customers with convenience and culinary adventures. Embrace the journey of food truck entrepreneurship, where every meal is a savoury success story.
_This post originally appeared on [kopatech.com](https://www.kopatech.com/blog/a-comprehensive-guide-on-how-to-open-a-food-truck-business-and-succeed)_
| kopatech2000 |
1,920,799 | test | The Indian cricket team has experienced significant growth and success in the last five years, marked... | 0 | 2024-07-12T08:54:30 | https://dev.to/dk_studio_ab384807396e816/test-20ad | The Indian cricket team has experienced significant growth and success in the last five years, marked by several milestones and achievements. Here are some key highlights:
### 2019-2020: The Rise of Virat Kohli
Virat Kohli took over as the captain of the Indian cricket team in 2014 and has been instrumental in shaping the team's performance. Under his leadership, India has dominated bilateral series, including a historic Test series win in Australia in 2018-2019. Kohli's aggressive batting style and strategic acumen have been crucial in maintaining India's strong performance in international cricket[1][2].
### 2020-2021: The IPL's Global Reach
The Indian Premier League (IPL) has become one of the most popular and lucrative cricket leagues globally. The 2020 IPL season saw a significant increase in crowd participation and financial returns. The IPL's social media presence is also impressive, with the top five cricket clubs on social media being IPL teams[1].
### 2021-2022: Women's Cricket Emerges
The Indian women's cricket team has seen a surge in popularity, thanks to the success of players like Smriti Mandhana, Harmanpreet Kaur, and Jhulan Goswami. The team has been performing well in international tournaments, and the BCCI has been actively promoting women's cricket through various initiatives[3].
### 2022-2023: New Talent and Leadership
The emergence of young talents like Shubman Gill, Yashasvi Jaiswal, and Rishabh Pant has added depth to the Indian team. Rohit Sharma, who took over as captain in 2021, has been instrumental in shaping the team's philosophy and performance. The team has been performing well in bilateral series and is preparing for the 2023 ICC ODI World Cup[2][3].
### 2023-2024: The Future Looks Bright
The future of Indian cricket looks promising with the rise of new talents like Arshdeep Singh and the continued success of established players. The BCCI's efforts to develop domestic cricket and promote young players are expected to bear fruit in the coming years[2][3].
### Conclusion
The last five years have been a period of growth and success for Indian cricket. The team's performance in international cricket, the popularity of the IPL, and the rise of women's cricket are all testament to the sport's increasing popularity and the BCCI's efforts to develop the game. As the team prepares for future challenges, the future looks bright for Indian cricket.
Citations:
[1] https://en.wikipedia.org/wiki/Cricket_in_India
[2] https://www.zapcricket.com/blogs/newsroom/indian-cricket-team
[3] https://www.superprof.co.in/blog/what-makes-the-indian-national-cricket-team-great/
[4] https://economictimes.indiatimes.com/news/sports/the-day-india-cricket-changed/articleshow/101246259.cms
[5] https://eprints.lse.ac.uk/75310/1/blogs.lse.ac.uk-Cricket%20and%20the%20rise%20of%20modern%20India.pdf | dk_studio_ab384807396e816 | |
1,902,697 | Step-by-Step Guide: Building an Auto-Verified Decentralized Application | Hello Devs 👋 Blockchain development is crucial in today's rapidly evolving digital... | 0 | 2024-07-08T11:15:35 | https://dev.to/azeezabidoye/step-by-step-guide-building-an-auto-verified-decentralized-application-9pb | blockchain, web3, ethereum, dapp | ## Hello Devs 👋
Blockchain development is crucial in today's rapidly evolving digital landscape. It is widely adopted across various sectors, including finance, education, entertainment, healthcare, and creative arts, with vast growth potential. Understanding smart contract verification is essential for web3 developers, but the critical skill is programmatically enabling this verification.
In this tutorial, we will build a decentralized application (DApp) for managing book records, allowing users to track their reading progress and engagement with various books. This DApp will function like a library catalog, providing users with access to books and options to mark them as read for effective record-keeping and management.
I recommend you read this [documentation](https://ethereum.org/en/developers/docs/smart-contracts/verifying/) by [Ethereum](https://ethereum.org/en/) foundation for more understanding of smart contract verification.
>_Check out this tutorial to learn the fundamentals of blockchain development; it will serve as a practical guide for the rest of this tutorial._
{% link azeezabidoye/full-stack-ethereum-and-dapp-development-a-comprehensive-guide-2024-4jfd %}
## Prerequisites 📚
1. Node JS (v16 or later)
2. NPM (v6 or later)
3. Metamask
4. Testnet ethers
5. Etherscan API Key
## Dev Tools 🛠️
1. Yarn
```shell
npm install -g yarn
```
> The source code for this tutorial is located here:
{% github azeezabidoye/book-record-dapp %}
### Step #1: Create a new React project
```shell
npm create vite@latest book-record-dapp -- --template react
```
- Navigate into the newly created project.
```shell
cd book-record-dapp
```
### Step #2: Install Hardhat as a dependency using `yarn`.
```shell
yarn add hardhat
```
### Bonus: How to create an Etherscan API Key
Smart contract verification can be performed manually on Etherscan, but it is advisable for developers to handle this programmatically. This can be achieved using an Etherscan API key, Hardhat plugins, and custom logic.
- Sign up or sign in on [etherscan.io](https://etherscan.io)
- Select your profile at the top right corner and choose `API Key` from the options.

- Select the `Add` button to generate a new API key

- Provide a name for your project and select `Create New API Key`

### Step #3: Initialize Hardhat framework for development.
```shell
yarn hardhat init
```
### Step #4: Setup environment variables
- Install an NPM module that loads environment variable from `.env` file
```shell
yarn add --dev dotenv
```
- Create a new file in the root directory named `.env`.
- Create three (3) new variables needed for configuration
```javascript
PRIVATE_KEY="INSERT-YOUR-PRIVATE-KEY-HERE"
INFURA_SEPOLIA_URL="INSERT-INFURA-URL-HERE"
ETHERSCAN_API_KEY="INSERT-ETHERSCAN-API-KEY-HERE"
```
>_An example of the file is included in the source code above. Rename the `.env_example` to `.env` and populate the variables therein accordingly_
### Step #5: Configure Hardhat for DApp development
- Navigate to `hardhat.config.cjs` file and setup the configuration
```javascript
require("@nomicfoundation/hardhat-toolbox");
require("dotenv").config();
const { PRIVATE_KEY, INFURA_SEPOLIA_URL} = process.env;
module.exports = {
solidity: "0.8.24",
networks: {
hardhat: { chainId: 1337 },
sepolia: {
url: INFURA_SEPOLIA_URL,
accounts: [`0x${PRIVATE_KEY}`],
chainId: 11155111,
}
}
};
```
### Step #6: Create smart contract
- Navigate to the `contracts` directory and create a new file named `BookRecord.sol`
```javascript
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;
contract BookRecord {
// Events
event AddBook(address reader, uint256 id);
event SetCompleted(uint256 bookId, bool completed);
// The struct for new book
struct Book {
uint id;
string title;
uint year;
string author;
bool completed;
}
// Array of new books added by users
Book[] private bookList;
// Maps each book id to the address of the reader who added it
mapping (uint256 => address) bookToReader;
function addBook(string memory title, uint256 year, string memory author, bool completed) external {
// Define a variable for the bookId
uint256 bookId = bookList.length;
// Add new book to books-array
bookList.push(Book(bookId, title, year, author, completed));
// Map new user to new book added
bookToReader[bookId] = msg.sender;
// Emit event for adding new book
emit AddBook(msg.sender, bookId);
}
function getBookList(bool completed) private view returns (Book[] memory) {
// Create an array to save finished books
Book[] memory temporary = new Book[](bookList.length);
// Define a counter variable to compare bookList and temporaryBooks arrays
uint256 counter = 0;
// Loop through the bookList array to filter completed books
for(uint256 i = 0; i < bookList.length; i++) {
// Check if the user address and the Completed books matches
if(bookToReader[i] == msg.sender && bookList[i].completed == completed) {
temporary[counter] = bookList[i];
counter++;
}
}
// Create a new array to save the compared/matched results
Book[] memory result = new Book[](counter);
// Loop through the counter array to fetch matching results of reader and books
for (uint256 i = 0; i < counter; i++) {
result[i] = temporary[i];
}
return result;
}
function getCompletedBooks() external view returns (Book[] memory) {
return getBookList(true);
}
function getUncompletedBooks() external view returns (Book[] memory) {
return getBookList(false);
}
function setCompleted(uint256 bookId, bool completed) external {
if (bookToReader[bookId] == msg.sender) {
bookList[bookId].completed = completed;
}
emit SetCompleted(bookId, completed);
}
}
```
### Step #7: Compile smart contract
- Specify the directory where the ABI should be stored
```javascript
paths: {
artifacts: "./src/artifacts",
}
```
- After adding the `paths` entry, your Hardhat configuration should look like this:
```javascript
require("@nomicfoundation/hardhat-toolbox");
require("dotenv").config();
const { PRIVATE_KEY, INFURA_SEPOLIA_URL} = process.env;
module.exports = {
solidity: "0.8.24",
paths: {
artifacts: "./src/artifacts",
},
networks: {
hardhat: { chainId: 1337 },
sepolia: {
url: INFURA_SEPOLIA_URL,
accounts: [`0x${PRIVATE_KEY}`],
chainId: 11155111,
}
}
};
```
- Navigate to the terminal and run the command below
```shell
yarn hardhat compile
```
### Step #8: Configure DApp for deployment
- Create a new folder for deployment scripts in the root directory
```shell
mkdir deploy
```
- Create a file for the deployment scripts in the `deploy` directory named `00-deploy-book-record.cjs`
- Install a Hardhat plugin as a package for deployment
```shell
yarn add --dev hardhat-deploy
```
- Import `hardhat-deploy` package into Hardhat configuration file
```javascript
require("hardhat-deploy")
```
- Install another Hardhat plugin to override the `@nomiclabs/hardhat-ethers` package
```shell
yarn add --dev @nomiclabs/hardhat-ethers@npm:hardhat-deploy-ethers
```
- Set up a deployer account in the Hardhat configuration file
```javascript
networks: {
// Code Here
},
namedAccounts: {
deployer: {
default: 0,
}
}
```
- Update the deploy script with the following code to deploy the smart contract
```javascript
module.exports = async ({ getNamedAccounts, deployments }) => {
const { deploy, log } = deployments;
const { deployer } = await getNamedAccounts();
const args = [];
await deploy("BookRecord", {
contract: "BookRecord",
args: args,
from: deployer,
log: true, // Logs statements to console
});
};
module.exports.tags = ["BookRecord"];
```
- Open the terminal and deploy the contract on the Sepolia testnet
```shell
yarn hardhat deploy --network sepolia
```
> ✍️ _Copy the address of your deployed contract. You can store it in the `.env` file_
### Step #9: Configure DApp for automatic verification
- Install the Hardhat plugin to verify the source code of the deployed contract
```shell
yarn add --dev @nomicfoundation/hardhat-verify
```
- Add the following statement to your Hardhat configuration
```javascript
require("@nomicfoundation/hardhat-verify");
```
- Add Etherscan API key to the environment variables in the Hardhat configuration
```javascript
const { PRIVATE_KEY, INFURA_SEPOLIA_URL, ETHERSCAN_API_KEY } = process.env;
```
- Add Etherscan config to your Hardhat configuration
```javascript
module.exports = {
networks: {
// code here
},
etherscan: {
apiKey: ETHERSCAN_API_KEY,
},
};
```
- Create a new folder for _**utilities**_ in the root directory
```shell
mkdir utils
```
- Create a new file named `verify.cjs` in the `utils` directory for the verification logic
- Update `verify.cjs` with the following code:
```javascript
const { run } = require("hardhat");
const verify = async (contractAddress, args) => {
console.log(`Verifying contract...`);
try {
await run("verify:verify", {
address: contractAddress,
constructorArguments: args,
});
} catch (e) {
if (e.message.toLowerCase().includes("already verified")) {
console.log("Contract already verified!");
} else {
console.log(e);
}
}
};
module.exports = { verify };
```
- Update the deploy script with the verification logic
> ✍️ _Create a condition to confirm contract verification after deployment_
Your updated `00-deploy-book-record.cjs` code should look like this:
```javascript
const { verify } = require("../utils/verify.cjs");
module.exports = async ({ getNamedAccounts, deployments }) => {
const { deploy, log } = deployments;
const { deployer } = await getNamedAccounts();
const args = [];
const bookRecord = await deploy("BookRecord", {
contract: "BookRecord",
args: args,
from: deployer,
log: true, // Logs statements to console
});
if (process.env.ETHERSCAN_API_KEY) {
await verify(bookRecord.address, args);
}
log("Contract verification successful...");
log("............................................................");
};
module.exports.tags = ["BookRecord"];
```
- Now, let's verify the contract...open the terminal and run:
```shell
yarn hardhat verify [CONTRACT_ADDRESS] [CONSTRUCTOR_ARGS] --network sepolia
```
- In our case, the smart contract doesn't have a constructor, so we can skip the arguments
- Run:
```shell
yarn hardhat verify [CONTRACT_ADDRESS] --network sepolia
```
Here is the result... copy the provided link into your browser's URL bar.
```shell
Successfully submitted source code for contract
contracts/BookRecord.sol:BookRecord at 0x01615160e8f6e362B5a3a9bC22670a3aa59C2421
for verification on the block explorer. Waiting for verification result...
Successfully verified contract BookRecord on the block explorer.
https://sepolia.etherscan.io/address/0x01615160e8f6e362B5a3a9bC22670a3aa59C2421#code
```

**Congratulations on successfully deploying and verifying your decentralized application.** I commend you for following this tutorial up to this point, and I'm pleased to announce that we have achieved our goal.
However, a DApp is incomplete without its frontend components. We began this lesson by initializing a React application, which is ideal for building UI components for Ethereum-based decentralized applications.
Here are a few more steps we need to complete in order to construct a full-stack DApp:
✅ Create unit tests with Mocha and Chai.
✅ Create and connect UI components.
✅ Interact with our Dapp.
### Step #10: Write unit tests with Mocha and Chai
- Install the required dependencies for unit tests.
```shell
yarn add --dev mocha chai@4.3.7
```
- Navigate to the `test` directory and create a new file named `book-record-test.cjs`.
- Here is the code for unit tests:
```javascript
const { expect } = require("chai");
const { ethers } = require("hardhat");
describe("BookRecord", function () {
let BookRecord, bookRecord, owner, addr1;
beforeEach(async function () {
BookRecord = await ethers.getContractFactory("BookRecord");
[owner, addr1] = await ethers.getSigners();
bookRecord = await BookRecord.deploy();
await bookRecord.waitForDeployment();
});
describe("Add Book", function () {
it("should add a new book and emit an AddBook event", async function () {
await expect(
bookRecord.addBook(
"The Great Gatsby",
1925,
"F. Scott Fitzgerald",
false
)
)
.to.emit(bookRecord, "AddBook")
.withArgs(owner.address, 0);
const books = await bookRecord.getUncompletedBooks();
expect(books.length).to.equal(1);
expect(books[0].title).to.equal("The Great Gatsby");
});
});
describe("Set Completed", function () {
it("should mark a book as completed and emit a SetCompleted event", async function () {
await bookRecord.addBook("1984", 1949, "George Orwell", false);
await expect(bookRecord.setCompleted(0, true))
.to.emit(bookRecord, "SetCompleted")
.withArgs(0, true);
const completedBooks = await bookRecord.getCompletedBooks();
expect(completedBooks.length).to.equal(1);
expect(completedBooks[0].completed).to.be.true;
});
});
describe("Get Book Lists", function () {
it("should return the correct list of completed and uncompleted books", async function () {
await bookRecord.addBook("Book 1", 2000, "Author 1", false);
await bookRecord.addBook("Book 2", 2001, "Author 2", true);
const uncompletedBooks = await bookRecord.getUncompletedBooks();
const completedBooks = await bookRecord.getCompletedBooks();
expect(uncompletedBooks.length).to.equal(1);
expect(uncompletedBooks[0].title).to.equal("Book 1");
expect(completedBooks.length).to.equal(1);
expect(completedBooks[0].title).to.equal("Book 2");
});
it("should only return books added by the caller", async function () {
await bookRecord.addBook("Owner's Book", 2002, "Owner Author", false);
await bookRecord
.connect(addr1)
.addBook("Addr1's Book", 2003, "Addr1 Author", true);
const ownerBooks = await bookRecord.getUncompletedBooks();
const addr1Books = await bookRecord.connect(addr1).getCompletedBooks();
expect(ownerBooks.length).to.equal(1);
expect(ownerBooks[0].title).to.equal("Owner's Book");
expect(addr1Books.length).to.equal(1);
expect(addr1Books[0].title).to.equal("Addr1's Book");
});
});
});
```
- Navigate to the terminal and run the test.
```shell
yarn hardhat test
```
The result of your test should be similar to this:
```shell
BookRecord
Add Book
✔ should add a new book and emit an AddBook event
Set Completed
✔ should mark a book as completed and emit a SetCompleted event
Get Book Lists
✔ should return the correct list of completed and uncompleted books
✔ should only return books added by the caller
4 passing (460ms)
✨ Done in 2.05s.
```
### Step #11: Create and connect the UI components
- Open the `src/App.jsx` file and update it with the following code, setting the value of the `BookRecordAddress` variable to the address of your smart contract:
```javascript
import React, { useState, useEffect } from "react";
import { ethers, BrowserProvider } from "ethers";
import "./App.css";
import BookRecordAbi from "./artifacts/contracts/BookRecord.sol/BookRecord.json"; // Import the ABI of the contract
const BookRecordAddress = "your-contract-address"; // Replace with your contract address
const BookRecord = () => {
const [provider, setProvider] = useState(null);
const [signer, setSigner] = useState(null);
const [contract, setContract] = useState(null);
const [books, setBooks] = useState([]);
const [title, setTitle] = useState("");
const [year, setYear] = useState("");
const [author, setAuthor] = useState("");
const [completed, setCompleted] = useState(false);
useEffect(() => {
const init = async () => {
if (typeof window.ethereum !== "undefined") {
const web3Provider = new ethers.BrowserProvider(window.ethereum);
const signer = await web3Provider.getSigner();
const contract = new ethers.Contract(
BookRecordAddress,
BookRecordAbi.abi,
signer
);
setProvider(web3Provider);
setSigner(signer);
setContract(contract);
}
};
init();
}, []);
const fetchBooks = async () => {
try {
const completedBooks = await contract.getCompletedBooks();
const uncompletedBooks = await contract.getUncompletedBooks();
setBooks([...completedBooks, ...uncompletedBooks]);
} catch (error) {
console.error("Error fetching books:", error);
}
};
const addBook = async () => {
try {
const tx = await contract.addBook(title, year, author, completed);
await tx.wait();
fetchBooks();
setTitle("");
setYear("");
setAuthor("");
setCompleted(false);
} catch (error) {
console.error("Error adding book:", error);
}
};
const markAsCompleted = async (bookId) => {
try {
const tx = await contract.setCompleted(bookId, true);
await tx.wait();
fetchBooks();
} catch (error) {
console.error("Error marking book as completed:", error);
}
};
return (
<div className="container">
<h1>Book Record</h1>
<div>
<input
type="text"
placeholder="Title"
value={title}
onChange={(e) => setTitle(e.target.value)}
/>
<input
type="number"
placeholder="Year"
value={year}
onChange={(e) => setYear(e.target.value)}
/>
<input
type="text"
placeholder="Author"
value={author}
onChange={(e) => setAuthor(e.target.value)}
/>
<label>
Completed:
<input
type="checkbox"
checked={completed}
onChange={(e) => setCompleted(e.target.checked)}
/>
</label>
<button onClick={addBook}>Add Book</button>
</div>
<h2>Book List</h2>
<ul>
{books.map((book) => (
<li key={book.id}>
{book.title} by {book.author}: {book.year.toString()}
{book.completed ? "Completed" : "Not Completed"}
{!book.completed && (
<button onClick={() => markAsCompleted(book.id)}>
Mark as Completed
</button>
)}
</li>
))}
</ul>
</div>
);
};
export default BookRecord;
```
- Add some CSS styles to the `App.css` file:
```css
/* BookRecord.css */
body {
font-family: Arial, sans-serif;
background-color: #f9f9f9;
margin: 0;
padding: 0;
}
.container {
max-width: 800px;
margin: 50px auto;
padding: 20px;
background-color: #ffffff;
border-radius: 8px;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
}
h1 {
text-align: center;
color: #333;
}
form {
display: flex;
flex-direction: column;
gap: 10px;
}
input[type="text"],
input[type="number"],
input[type="checkbox"] {
padding: 10px;
border: 1px solid #ddd;
border-radius: 4px;
font-size: 16px;
width: calc(100% - 24px);
}
label {
display: flex;
align-items: center;
gap: 10px;
}
button {
padding: 10px 20px;
background-color: #007bff;
color: #fff;
border: none;
border-radius: 4px;
cursor: pointer;
font-size: 16px;
}
button:hover {
background-color: #0056b3;
}
h2 {
margin-top: 20px;
color: #333;
}
ul {
list-style-type: none;
padding: 0;
}
li {
padding: 10px;
border-bottom: 1px solid #ddd;
display: flex;
justify-content: space-between;
align-items: center;
}
li:last-child {
border-bottom: none;
}
li button {
background-color: #28a745;
}
li button:hover {
background-color: #218838;
}
```
- Start your React App:
```shell
yarn run dev
```

### Conclusion
Congratulations on completing the "Step-by-Step Guide: Building an Auto-Verified Decentralized Application." You've successfully deployed and verified your smart contract, integrating essential backend and frontend components. This comprehensive process ensures your DApp is secure, functional, and user-friendly. Keep exploring and refining your skills to advance in the world of decentralized applications. Happy coding!
| azeezabidoye |
1,902,727 | Introducing the New Blazor TextArea Component | TL;DR: The new Syncfusion Blazor TextArea component is a game-changer for multiline text input. This... | 0 | 2024-07-08T15:37:24 | https://www.syncfusion.com/blogs/post/new-blazor-textarea-component | blazor, development, whatsnew, web | ---
title: Introducing the New Blazor TextArea Component
published: true
date: 2024-06-27 10:54:36 UTC
tags: blazor, development, whatsnew, web
canonical_url: https://www.syncfusion.com/blogs/post/new-blazor-textarea-component
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w8tyfufoozhdky5maudz.png
---
**TL;DR:** The new Syncfusion Blazor TextArea component is a game-changer for multiline text input. This blog delves into its unique attributes and advantages and provides a guide on how to seamlessly incorporate it into your projects.
Have you ever found yourself wrestling with restrictive text input fields? Have you yearned for a more versatile and feature-rich multiline text experience? If these challenges resonate with you, we have some exciting news! We are thrilled to unveil the Syncfusion [Blazor TextArea](https://www.syncfusion.com/blazor-components/blazor-textarea "Blazor TextArea") component, a standout feature of the [Essential Studio 2024 Volume 2](https://www.syncfusion.com/forums/188642/essential-studio-2024-volume-2-main-release-v26-1-35-is-available-for-download "Essential Studio 2024 Volume 2") release.
The Blazor TextArea component is meticulously crafted to revolutionize the multiline text input experience for developers and end-users.
In this blog, we’ll delve into its unique features and guide you on seamlessly incorporating it into your projects.
## Blazor TextArea component: An overview
The TextArea component is a new addition to the suite of input controls available in our [Blazor framework](https://www.syncfusion.com/blazor-components/ "Blazor UI components"). It provides a user-friendly way to handle multiline text input and offers a range of features to improve usability and accessibility. This component is beneficial for scenarios where users must input large amounts of text, such as comments, descriptions, or messages.
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Blazor-TextArea-component-1-1024x375.png" alt="Blazor TextArea component" style="width:100%">
<figcaption>Blazor TextArea component</figcaption>
</figure>
## Key features
The key features of the Blazor TextArea component are as follows:
### Floating labels
[Floating labels](https://blazor.syncfusion.com/demos/textarea/floatinglabel?theme=fluent2 "Demo: Floating label functionalities of the Blazor TextArea component") are a modern UI/UX feature that enhances the user experience by providing a clear and intuitive way to understand the input field’s purpose. When the user focuses on the TextArea or starts typing, the placeholder text transforms into a floating label above the field. This ensures the label is always visible, helping users remember their input.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Floating-label-in-the-Blazor-TextArea.gif" alt="Floating label in the Blazor TextArea component" style="width:100%">
<figcaption>Floating label in the Blazor TextArea component</figcaption>
</figure>
### Resize
Another standout feature of the Blazor TextArea component is its [auto-resizing capability](https://blazor.syncfusion.com/demos/textarea/resize?theme=fluent2 "Demo: Resize functionalities of the Blazor TextArea component"). It can be resized vertically, horizontally, or in both directions by selecting the corresponding **ResizeMode** option.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Resizing-the-Blazor-TextArea-component.gif" alt="Resizing the Blazor TextArea component" style="width:100%">
<figcaption>Resizing the Blazor TextArea component</figcaption>
</figure>
### Rows and columns
You can specify the number of [rows and columns](https://blazor.syncfusion.com/demos/textarea/api?theme=fluent2 "Demo: Specifying rows and columns in Blazor TextArea component"), giving you precise control over the size of the text area. This is particularly useful for forms where you need to fit multiple input fields into a limited space. By defining the rows and columns, you can ensure that your text areas are appropriately sized for the content they are expected to hold.
### Maximum length
Managing text input length is crucial in many apps. The Blazor TextArea component includes a built-in feature to set a [maximum character limit](https://blazor.syncfusion.com/demos/textarea/api?theme=fluent2 "Demo: Setting maximum character limit in Blazor TextArea component"). This feature helps you control the length of the input, ensuring users do not exceed the specified number of characters. It is useful for inputs like descriptions, comments, or any field where text length is restricted.
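As a quick sketch of how the resize and length-limit features described above might be combined on one component, consider the markup below; the `ResizeMode` and `MaxLength` attribute names are assumptions to confirm against the current SfTextArea API reference.

```xml
@* Assumed attribute names; verify against the SfTextArea API reference *@
<SfTextArea Placeholder="Enter your comments"
            ResizeMode="Resize.Vertical"
            MaxLength="500">
</SfTextArea>
```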
### Accessibility
The Blazor TextArea component is designed with accessibility in mind. It supports ARIA attributes and keyboard navigation to ensure a smooth experience for all users, including those relying on assistive technologies.
## How to use the Blazor TextArea component?
Getting started with the Blazor TextArea component is quick and easy. You can configure the component’s properties and customize its appearance and behavior according to your requirements. You can learn more about it in this [documentation](https://blazor.syncfusion.com/documentation/textarea/getting-started-webapp "Getting started with Blazor TextArea component").
### Add the Blazor TextArea component to your app
Once you’ve installed the NuGet packages and configured the basic imports in your app, add the following code in the Blazor component ( **.razor** ) file to add the Blazor TextArea component.
```xml
<SfTextArea Placeholder="Enter your comments" @bind-Value="userInput" RowCount="5" ColumnCount="150" FloatLabelType="FloatLabelType.Auto"></SfTextArea>
@code {
private string userInput = string.Empty;
}
```
Refer to the following output image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Integrating-the-TextArea-component-in-the-Blazor-app.gif" alt="Integrating the TextArea component in the Blazor app" style="width:100%">
<figcaption>Integrating the TextArea component in the Blazor app</figcaption>
</figure>
## Conclusion
Thanks for reading! The new Syncfusion [Blazor TextArea](https://www.syncfusion.com/blazor-components/blazor-textarea "Blazor TextArea") component is a powerful addition to the Blazor framework, providing an enhanced multiline text input experience. With these features, it addresses the common challenges developers face when working with text areas. By integrating this component into your Blazor apps, you can improve usability and deliver a better user experience.
We hope this introduction to the Blazor TextArea component has been helpful. Start exploring its features today and see how it can benefit your projects!
For a detailed overview of all the exciting updates in this release, we invite you to visit our [Release Notes](https://help.syncfusion.com/common/essential-studio/release-notes/v26.1.35 "Essential Studio Release Notes") and [What’s New](https://www.syncfusion.com/products/whatsnew/blazor-components "What’s New in Syncfusion Blazor components") pages.
For our existing customers, the new version of Essential Studio is now available on the [License and Downloads](https://www.syncfusion.com/account/login "License and downloads page of Essential Studio") page. If you’re new to Syncfusion, sign up for a [30-day free trial](https://www.syncfusion.com/downloads "Free evaluation of the Syncfusion Essential Studio products") to try our controls yourself.
If you have any questions, you can reach us through our [support forum](https://www.syncfusion.com/forums "Syncfusion Support Forum"), [support portal](https://support.syncfusion.com/ "Syncfusion Support Portal"), or [feedback portal](https://www.syncfusion.com/feedback/ "Syncfusion Feedback Portal"). We’re always happy to assist you!
## Related blogs
- [Syncfusion Essential Studio 2024 Volume 2 Is Here!](https://www.syncfusion.com/blogs/post/syncfusion-essential-studio-2024-vol2 "Blog: Syncfusion Essential Studio 2024 Volume 2 Is Here!")
- [What’s New in Blazor: 2024 Volume 2](https://www.syncfusion.com/blogs/post/whats-new-blazor-2024-volume-2 "Blog: What’s New in Blazor: 2024 Volume 2")
- [Introducing the New Blazor 3D Charts Component](https://www.syncfusion.com/blogs/post/blazor-3d-charts-component "Blog: Introducing the New Blazor 3D Charts Component")
- [How to Maintain State in the Blazor ListBox Using Fluxor?](https://www.syncfusion.com/blogs/post/maintain-state-blazor-listbox-fluxor "Blog: How to Maintain State in the Blazor ListBox Using Fluxor?") | jollenmoyani |
1,902,967 | What was your win this week? | 👋👋👋👋 Looking back on your week -- what was something you're proud of? All wins count -- big or... | 0 | 2024-07-12T13:41:00 | https://dev.to/devteam/what-was-your-win-this-week-2gej | weeklyretro | 👋👋👋👋
Looking back on your week -- what was something you're proud of?
All wins count -- big or small 🎉
Examples of 'wins' include:
- Getting a promotion!
- Starting a new project
- Fixing a tricky bug
- Finally finding the remote control after days of searching 📺

Happy Friday!
| jess |
1,904,129 | Top 4 MySQL Memory Calculators | Why MySQL Memory Allocation Matters Simply put, memory allocation directly influences the... | 0 | 2024-07-09T13:57:16 | https://releem.com/blog/mysql-memory-calculators | mysql, webdev, php, laravel | ## Why MySQL Memory Allocation Matters
Simply put, memory allocation directly influences the speed and efficiency of your MySQL database. This, in turn, affects everything from the speed at which queries are processed to how transactions are handled – ultimately impacting the overall performance of your application.
But proper memory allocation is not just about making your MySQL database run faster – it's also about preventing resource overcommitment and guaranteeing that your server operates within its physical limits. This helps maintain consistent performance and avoids the pitfalls of excessive memory usage that can lead to downtime and service interruptions.
Here's a breakdown of how proper memory allocation can benefit your database:
- **Preventing Hardware Overload** – Allocating memory correctly helps ensure that your MySQL server does not exceed the physical RAM available on your system. Exceeding your hardware's memory capacity can cause your server to swap memory to disk. This slows down your operations and increases the risk of server crashes.
- **Query Processing Speed** – Memory allocation for components like the buffer pool or query cache directly affects how quickly MySQL can retrieve and process data. A well-allocated buffer pool, for example, allows more data to be held in memory, reducing the need to fetch data from disk.
- **Transaction Handling** – Adequate memory for transactions allows your database to handle multiple operations simultaneously without slowing down. Properly tuned memory settings allow MySQL to efficiently manage transaction logs and buffer updates, leading to faster commit times and less contention for resources.
- **Reducing Server Load** – By optimizing memory usage, your database can handle more concurrent users and operations without a decline in performance. Memory tuning helps avoid bottlenecks that could otherwise slow down your entire system.
## How Can MySQL Memory Calculators Help?
Dealing with MySQL memory allocation can be tricky, and that's where memory calculators come in handy. These tools help you figure out the best memory settings for your MySQL setup by taking into account your system's resources and how you use your database. One of the biggest advantages of using a memory calculator is that it helps prevent you from allocating more memory than your system can handle.
Traditional memory calculators, while useful, require you to make manual adjustments and understand how changing different parameters will impact overall memory usage and performance.
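For intuition, most of these calculators apply the same rough formula: global buffers (allocated once) plus per-thread buffers multiplied by max_connections. Here is a minimal Python sketch of that calculation; the dictionary keys mirror my.cnf parameter names, and the example values are purely illustrative, not recommendations:

```python
def mysql_max_memory(cfg: dict) -> int:
    """Estimate worst-case MySQL memory use in bytes.

    Global buffers are allocated once; per-thread buffers can each be
    allocated per connection, so they are multiplied by max_connections.
    """
    global_buffers = (
        cfg["innodb_buffer_pool_size"]
        + cfg["innodb_log_buffer_size"]
        + cfg["key_buffer_size"]
        + cfg["query_cache_size"]
        + cfg["tmp_table_size"]
    )
    per_connection = (
        cfg["sort_buffer_size"]
        + cfg["read_buffer_size"]
        + cfg["read_rnd_buffer_size"]
        + cfg["join_buffer_size"]
        + cfg["thread_stack"]
        + cfg["binlog_cache_size"]
    )
    return global_buffers + per_connection * cfg["max_connections"]

MB = 1024 * 1024
example = {  # illustrative values only
    "innodb_buffer_pool_size": 128 * MB,
    "innodb_log_buffer_size": 16 * MB,
    "key_buffer_size": 8 * MB,
    "query_cache_size": 0,
    "tmp_table_size": 16 * MB,
    "sort_buffer_size": 256 * 1024,
    "read_buffer_size": 128 * 1024,
    "read_rnd_buffer_size": 256 * 1024,
    "join_buffer_size": 256 * 1024,
    "thread_stack": 256 * 1024,
    "binlog_cache_size": 32 * 1024,
    "max_connections": 151,
}
total = mysql_max_memory(example)
print(f"Estimated max memory: {total / MB:.0f} MB")
```

Comparing that result against your server's physical RAM is essentially what the calculators below automate. Note the real worst case can differ, since some per-connection buffers may be allocated more than once per query.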
## Top 4 MySQL Memory Calculators
Several tools are available to help database administrators calculate the appropriate memory settings for their MySQL databases. Here, we compare the leading MySQL memory calculators:
### 1. [Alexander Avchinnikov’s MySQL Memory Calculator](https://avchinnikov.com/utils/mysqlcalc)
This online calculator has a minimalist interface with no directions of any kind. You input your server’s memory capacity and 12 other parameter values from your configuration file. A bar fills to visualize what percentage of your memory capacity is consumed by your configuration settings.
**Pros**
- Easy-to-use, minimalist UI.
- Great visualization to understand how much server memory is consumed.
**Cons**
- No recommendations and no directions.
- Must make blind configuration changes, without understanding performance impact, if your settings exceed memory capacity.

### 2. [MySQL Memory Calculator](https://www.mysqlcalculator.com/)
This web-based tool offers a simple interface to input your values for 13 specific memory-associated parameters from your my.cnf file. Once the table is filled out, it provides an estimate on the maximum MySQL memory usage expected from your configuration. You can then tweak settings to find an ideal maximum that works with your available hardware.
**Pros**
- Shows the MySQL default for each parameter.
- Very convenient for on-the-go checks.
**Cons**
- Only provides information on maximum memory needs, no recommendations of any kind.

### 3. [Abhinavbit’s MySQL Memory Calculator](https://www.abhinavbit.com/p/mysql-memory-calculator.html)
This MySQL memory calculator is another simple web calculator in basically the same format. It asks for your configuration values for various parameters. After inputting your values, you will learn:
- Base Memory
- Memory per Connection
- Total Minimum Memory Used
- Total Maximum Memory Used
**Pros**
- Simple interface
- Reveals insights into base memory and memory per connection.
**Cons**
- Does not provide recommendations or tuning suggestions.

### 4. [MySQLTuner](https://github.com/major/MySQLTuner-perl)
MySQL Tuner is a popular script that performs a comprehensive health check of your MySQL server, including memory usage. As a script, MySQL Tuner is run directly from the command line, making it quick to deploy without the need for a complex setup. The tool checks for the following memory conditions:
- Get total RAM/swap
- Is there enough memory for max connections reached by MySQL?
- Is there enough memory for max connections allowed by MySQL?
- Max percentage of memory used (<85%)
**Pros**
- Easy to use with straightforward, actionable recommendations.
- Suitable for quick checks and immediate improvements.
- Works with various MySQL versions and forks like MariaDB and Percona.
**Cons**
- Limited to analyzing existing configurations rather than predicting future memory needs.
- May not provide in-depth memory-specific tuning.

## Automated Memory Optimization with Releem
While these memory calculators can provide a starting point for understanding the memory requirements of your current MySQL configuration, they provide no information on how to make parameter changes that will help with memory allocation without potentially impacting performance in other ways. To do that, you need a deep understanding of MySQL's internals and the time to make manual changes and monitor your server.
This is where Releem shines. It takes the guesswork out of the process by automatically optimizing memory allocation for you, so you don't have to spend time tweaking settings. Releem keeps your database in top form without you having to constantly check and adjust everything.
### What Makes Releem Stand Out?
**Automatic Metric Analysis**
Releem keeps a constant eye on your performance metrics and adjusts memory settings on the fly. You won’t need to worry about manually calculating or tweaking configurations. Releem handles it all, helping your server use memory efficiently and stay within your hardware limits.
**Detects Performance Issues**
Releem identifies and fixes memory-related performance issues on its own, such as excessive memory usage by certain queries, inefficient cache allocations, or buffer pool bottlenecks. This proactive management keeps your database running smoothly and prevents slowdowns or crashes.
**Easy Installation and Configuration**
With Releem, you don’t need to worry about complex configurations. Simply install the tool, and it will handle the rest, offering tailored memory settings that optimize your server's performance.
**Broad Compatibility**
Releem supports various MySQL distributions, including MySQL, MariaDB, and Percona, making it a versatile solution that works regardless of your database environment.
### Why Choose Releem Over Traditional Memory Calculators?
MySQL memory calculators cannot help you maintain and anticipate memory requirements over time. You need a tool that is dynamic, a tool that offers continuous, automated tuning that adapts to your database's changing needs. Releem is that tool:
**Real-Time Adjustments**
Unlike static calculators, Releem makes real-time adjustments based on current database usage and workload patterns, ensuring your memory settings are always up-to-date.
**User-Friendly Interface**
Releem’s intuitive interface makes it easy for users of all skill levels to manage their MySQL memory settings without needing deep technical knowledge.
**Peace of Mind**
With Releem, you get peace of mind knowing that your database is always running efficiently. It’s like having a dedicated database administrator on call, 24/7.

## The Future of MySQL Memory Management
If you're looking to simplify your memory management, Releem offers a groundbreaking solution that far surpasses what online MySQL memory calculators can do. By automating the analysis and tuning of memory settings, Releem not only saves you time but also keeps your database consistently running at its best.
Make the switch to Releem and take the hassle out of memory optimization. Say goodbye to manual calculations and hello to a smarter, automated approach.
Try [Releem](https://releem.com/) today and experience the future of MySQL management! | drupaladmin |
1,904,197 | Improving software architecture through – murder | For more content like this subscribe to the ShiftMag newsletter. Marianne Bellotti, a software... | 0 | 2024-07-10T11:58:08 | https://shiftmag.dev/murder-software-architecture-3585/ | devrel, softwareengineering, craftconference, mariannebellotti | ---
title: Improving software architecture through – murder
published: true
date: 2024-06-28 08:27:39 UTC
tags: DeveloperExperience,SoftwareEngineering,Craftconference,MarianneBellotti
canonical_url: https://shiftmag.dev/murder-software-architecture-3585/
---

_For more content like this **[subscribe to the ShiftMag newsletter](https://shiftmag.dev/newsletter/)**._
[Marianne Bellotti](https://www.linkedin.com/in/bellmar/), a software developer with more than 20 years of experience and the author of Kill It with Fire, recently spoke about it at the [Craft Conference](https://craft-conf.com/2024).
Read on for the most important facts from this lecture, which can help IT industry leaders and software architects better organize and work efficiently on large projects.
## Managing the fear of making mistakes
“Murder Boards” have a long history in politics and military operations, but they actually started at NASA as **part of the engineering process**.
In highly risk-averse, technical endeavors where extreme efforts are made to prevent mistakes (e.g., satellite operations), murder boards aggressively review, without constraint or pleasantries, a **situation’s problem, assumptions, constraints, mitigations, and proposed solution**.
> The board’s goal is to kill the well-prepared proposal on technical merit; holding back even the least suspicion of a problem is not tolerated.
>
> Such argumentative murder boards consist of many subject matter experts of the specific system under review and of all interfacing systems.
Working on large software projects also requires making significant and strategic decisions, **and the fear of making mistakes is an integral part of the entire process**. That’s precisely why Marianne adds that building and deploying technology is inherently very risky, and so much engineering management is managing fear on a team. Also, fear changes the decisions people make.
## **Murder boards are not code reviews**
That is precisely why, Marianne adds, the murder board represents a panel of expertise. This pooled expertise is applied when working on a challenging project, and it helps surface errors so they can be removed or thrown out while developing large software solutions.
Although the Murder process could be a crucial part of software development, it is definitely not an alternative or different style of [code review](https://shiftmag.dev/code-review-1892/), a compliance/oversight activity, or a replacement for post-mortems.
When it comes to briefing colleagues regarding the murder board process, **it is important that both parties have prior knowledge of the project plan, its historical context, and, in fact, everything that any large organization would do** , only applied to the technical area, i.e., the IT industry.
## Preventing failures
Marianne says that many did not trust the murder process and were afraid of it, and in fact, one of its leading roles is to **protect colleagues from threats and fears** that can arise in the demanding process of software development:
> It is easiest to design and develop a plan at the beginning. Still, from the moment of checks and iterations until the execution of the plan, the fear meter starts to increase, which is exactly why you should have a murder board prepared in time.

Marianne adds that through good organization based on the murder board, two types of failure can be prevented:
**Technical Failure:**
- Scenarios where cascading failures will have unpredictable secondary effects;
- Board members: Principal engineers, staff engineers, architects.
**Social Capital Failure:**
- Embarrassment;
- External but also internal;
- Board members: Business leaders, strategy, legal, marketing/PR.
## Best for high-stress/high-fear projects
Marianne also mentioned that one of the surprising benefits of murder boarding is **that it can make people feel like their colleagues have their backs**. Also, having your decision validated by more senior people can help reinforce a blameless culture if the worst does happen.
**Before you set up your murder board, you should have in mind the following:**
- I like to give my murder boards a day or two to submit questions to the team;
- The best murder boards will result in an in-depth discussion, so giving the attackers a chance to think through which concerns are the most serious and giving the defenders a hint on what the board might focus on helps everyone.
**At the very end of her lecture, Marianne concluded that:**
- Murder boards fit very specific situations: High-stress/high-fear events. The events are necessary and unavoidable;
- Opportunity to thoroughly vet plans, but not a code review, Should not be a regular part of the development life cycle;
- Done well, they build trust and confidence and help the team support one another.
The post [Improving software architecture through – murder](https://shiftmag.dev/murder-software-architecture-3585/) appeared first on [ShiftMag](https://shiftmag.dev). | shiftmag |
1,904,346 | Understanding the Power Automate Definition | Power Automate is built on Azure Logic Apps so they both share the same definition schema. The... | 0 | 2024-07-08T06:11:24 | https://dev.to/wyattdave/understanding-the-power-automate-definition-42po | powerautomate, powerplatform, lowcode, json | Power Automate is built on Azure Logic Apps so they both share the same definition schema. The definition is a json object that holds the information for Power Automate/Logic Apps to complete all the actions.
Microsoft provides some good documentation on the definition ([https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-workflow-definition-language](https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-workflow-definition-language)), but I wanted to take a more detailed approach around Power Automate. This blog might not be for everyone, but if you were the type of child who had to open up the old toaster to see how it worked this could be for you.
A call out here is that the definition doesnt travel on its own very often, it also normally includes 'properties', 'connection references' and more, so I will start from a higher level and include these in the blog, this all combined is often called the definition file or clientData.
This blog will cover:
1. How to View the Definition
2. The Keys
3. Top Level Keys
4. Triggers
5. Actions
6. Variations
---
## 1. How to View the Definition/clientdata
Power Automate converts the definition schema into our flow so that we can edit it, so it's not that easy to see the raw definition, but there are a few ways.
**APIs**
Through the flow api:
```
https://us.api.flow.microsoft.com/providers/Microsoft.ProcessSimple/environments/{ENVIRONMENT_ID}/flows/{FLOW_ID_FROM_URL}?api-version=2016-11-01&$expand=swagger,properties.connectionreferences.apidefinition,properties.definitionSummary.operations.apiOperation,operationDefinition,plan,properties.throttleData,properties.estimatedsuspensiondata
```

_Shown here in Chrome Dev tools_
and the Dataverse api:
```
https://{URL_ENVIRONMENT_DYNAMICS_URL}/api/data/v9.2/workflows?$filter=resourceid eq '{FLOW_ID_FROM_URL}'
```

_Definition is in the 'clientdata' field, but only works on solution aware flows_
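If you want to pull the clientdata programmatically, the Dataverse query above can be scripted. Below is a hedged sketch using only the Python standard library; the environment URL and flow id are placeholders, and acquiring the bearer token (e.g. via Entra ID) is not shown:

```python
import json
import urllib.request
from urllib.parse import quote

def clientdata_url(env_url: str, flow_id: str) -> str:
    # Same Dataverse Web API query as above; $select trims the
    # response down to the clientdata column.
    filter_ = quote(f"resourceid eq '{flow_id}'")
    return f"{env_url}/api/data/v9.2/workflows?$filter={filter_}&$select=clientdata"

def fetch_definition(env_url: str, flow_id: str, token: str):
    # Only works for solution-aware flows, as noted above.
    req = urllib.request.Request(
        clientdata_url(env_url, flow_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        rows = json.load(resp)["value"]
    # clientdata is stored as a JSON string, so parse it again
    return json.loads(rows[0]["clientdata"]) if rows else None
```

The double `json.loads` is deliberate: the API response is JSON, and the clientdata field inside it is itself a JSON string holding the definition.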
**Power Automate**
Using the Dataverse List rows on the workflows (Processes) table. Though this also only works on solution aware flows.

**AutoReview**
AutoReview is a free Chrome extension I created for code reviews; it also includes a function to view the clientdata (either on the currently open flow or an exported zip file).
[https://chromewebstore.google.com/detail/autoreview-for-power-auto/laaendfpgmhjilhjkbebekgdgfjaajif](https://chromewebstore.google.com/detail/autoreview-for-power-auto/laaendfpgmhjilhjkbebekgdgfjaajif)

**Power Automate Tools**
Another Chrome extension, but this one allows you to actually edit the schema definition directly (very cool 😎).
[https://chromewebstore.google.com/detail/power-automate-tools/jccblbmcghkddifenlocnjfmeemjeacc](https://chromewebstore.google.com/detail/power-automate-tools/jccblbmcghkddifenlocnjfmeemjeacc)

**Exports**
Both the legacy and solution export zip files contain the definition file
## 2. The Keys
The current version of the definition schema is [https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json](https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json). Where the definition is accessed impacts what else is included; only the definition itself is consistent. So there are a lot of keys, but below is a list of the ones that I think are important:
- name
- id
- type
- connectionReferences
- isManaged
- properties/apiId
- properties/displayName
- properties/definition
- definition/metadata
- definition/$schema
- definition/parameters
- definition/triggers
- definition/actions
_parent/key_
## 3. Top Level Keys
The top level keys are kind of the metadata of the flow and not part of the actual definition; they give information about the flow but are not really part of it.
**Name**
Example: `"name": "4b569087-cf7a-18bc-dfb4-f9b31fd7e635"`
This is the flow id; it is seen in the url and as resourceid in the workflow table (solution aware). This is not the guid for the workflow table (workflowid), but the guid within Power Automate.
**Id**
Example: `id": "/providers/Microsoft.ProcessSimple/environments/Default-6b6c3ede-aa0d-4268-a46f-96b7621b13a8/flows/4b569087-cf7a-18bc-dfb4-f9b31fd7e635"`
Relative url path for the flow (Envirnoment id and flow name)
**Type**
` "type": "Microsoft.Flow/flows",`
Logic apps or flow
**Is Managed**
`isManaged": false`
Is the flow in a managed solution.
**Connection References**
``` javascript
"connectionReferences": {
"shared_sharepointonline": {
"connectionName": "shared-sharepointonl-594ec2f7-b783-4358-8a34-901d2cf18e0e",
"source": "Invoker",
"id": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
"tier": "NotSpecified"
    }
}
```
Connection references used within the flow. The connectionReference is linked to the action in the flow with the first key (shared_sharepointonline in the above example). When there are multiple of the same connection they are incremented (e.g. shared_sharepointonline, shared_sharepointonline_1, shared_sharepointonline_2, and so on).
The connectionName is the guid for the actual connection reference. The id is the type of connection, so the above shows it is for SharePoint. Tier flags whether it is a premium connector.
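To see how those pieces relate in practice, here is a small Python sketch that walks the connectionReferences object and reports each reference's connector type and premium flag. The sample JSON is trimmed from the snippet above (guids shortened), and the exact tier value used for premium connectors is an assumption:

```python
import json

# Trimmed from the connectionReferences snippet above
client_data = json.loads("""
{
  "connectionReferences": {
    "shared_sharepointonline": {
      "connectionName": "shared-sharepointonl-594ec2f7",
      "id": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "tier": "NotSpecified"
    },
    "shared_sharepointonline_1": {
      "connectionName": "shared-sharepointonl-9b31fd7e",
      "id": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
      "tier": "NotSpecified"
    }
  }
}
""")

def summarize_connections(client_data: dict) -> dict:
    """Map each reference key to its connector (the last segment of the
    id path) and whether the tier flags it as premium (assumed value)."""
    return {
        key: {
            "connector": ref["id"].rsplit("/", 1)[-1],
            "premium": ref.get("tier") == "Premium",
        }
        for key, ref in client_data.get("connectionReferences", {}).items()
    }

summary = summarize_connections(client_data)
```

Note how the incremented key `shared_sharepointonline_1` still resolves to the same connector type, which is exactly the naming behaviour described above.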
_The following are all under the properties key so I stretched it with 'top level':_
**ApiId**
`"apiId": "/providers/Microsoft.PowerApps/apis/shared_logicflows"`
The app on Azure that processes the definition.
**DisplayName**
`"displayName": "demo",`
Name of the flow shown in all of the menus etc
**Definition**
```javascript
"definition": {
"$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"$connections": {
"defaultValue": {},
"type": "Object"
},
"$authentication": {
"defaultValue": {},
"type": "SecureObject"
}
},
"triggers": {
"When_an_item_is_created": {
"recurrence": {
"frequency": "Minute",
"interval": 1
},
"splitOn": "@triggerOutputs()?['body/value']",
"metadata": {
"operationMetadataId": "44e16d66-9bad-4c13-b095-e0d1720f2a20"
},
"type": "OpenApiConnection",
"inputs": {
"host": {
"apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
"connectionName": "shared_sharepointonline",
"operationId": "GetOnNewItems"
},
"parameters": {
"dataset": "https://37wcqv.sharepoint.com/sites/testsite",
"table": "742435b6-7897-4636-ab8e-ec347405b9a6"
},
"authentication": "@parameters('$authentication')"
}
}
},
"actions": {
"Compose": {
"runAfter": {},
"metadata": {
"operationMetadataId": "d725af82-56c4-435c-bcea-dba38ddf4e02"
},
"type": "Compose",
"inputs": "@triggerOutputs()?['body/Title']"
}
}
},
```
This is the contents of your flow, with triggers, connections and the actions you use.
## 4. Triggers
We are now inside the definition key, with the first key being triggers (yep, plural; it may not be possible to have multiple triggers in Power Automate, but it is in Logic Apps). There are 3 trigger schemas:
**Instant**
Instant triggers (button/Power App/Copilot) have a standard input schema, with each input an item within properties. The item name is sequential by type, so text, text_1, text_2 or number, number_1.
``` javascript
"triggers": {
"manual": {
"metadata": {
"operationMetadataId": "4c3b12d2-87bf-45e3-a766-9555de826c05"
},
"type": "Request",
"kind": "Button",
"inputs": {
"schema": {
"type": "object",
"properties": {
"text": {
"title": "name",
"type": "string",
"x-ms-dynamically-added": true,
"description": "Please enter your input",
"x-ms-content-hint": "TEXT"
},
"boolean": {
"title": "Yes/No",
"type": "boolean",
"x-ms-dynamically-added": true,
"description": "Please select yes or no",
"x-ms-content-hint": "BOOLEAN"
},
"text_1": {
"title": "Input 1",
"type": "string",
"x-ms-dynamically-added": true,
"description": "Please enter your input",
"enum": [
"First option",
"Second option"
],
"x-ms-content-hint": "TEXT"
},
"number": {
"title": "Number",
"type": "number",
"x-ms-dynamically-added": true,
"description": "Please enter a number",
"x-ms-content-hint": "NUMBER"
},
"date": {
"title": "Trigger date",
"type": "string",
"format": "date",
"x-ms-dynamically-added": true,
"description": "Please enter or select a date (YYYY-MM-DD)",
"x-ms-content-hint": "DATE"
},
"email": {
"title": "Email",
"type": "string",
"format": "email",
"x-ms-dynamically-added": true,
"description": "Please enter an e-mail address",
"x-ms-content-hint": "EMAIL"
},
"file": {
"title": "File Content",
"type": "object",
"x-ms-dynamically-added": true,
"description": "Please select file or image",
"x-ms-content-hint": "FILE",
"properties": {
"name": {
"type": "string"
},
"contentBytes": {
"type": "string",
"format": "byte"
}
}
}
},
"required": [
"text",
"boolean",
"text_1",
"number",
"date",
"email"
]
}
    }
  }
}
```
**Scheduled**
Scheduled triggers are the simplest, with a recurrence key setting frequency and interval; the example below runs every 20 minutes.
``` javascript
"triggers": {
"Recurrence": {
"recurrence": {
"frequency": "Minute",
"interval": 20
},
"metadata": {
"operationMetadataId": "7331237b-733a-4f27-b47c-1858ff22e2b0"
},
"type": "Recurrence"
}
},
```
**Automated**
Automated triggers show the trigger name as the main key, with recurrence being how often the source is pinged to check for an event (different licenses, connectors, and even environments can impact the frequency; this is why there is a test option, as it ramps the frequency up to seconds).
Options like splitOn and secure inputs are shown as keys within the trigger, along with the connection type (OpenApiConnection). Finally there are the parameters, which are the inputs to the connection; below are the SharePoint site and list id.
``` javascript
"triggers": {
"When_an_item_is_created": {
"recurrence": {
"frequency": "Minute",
"interval": 1
},
"splitOn": "@triggerOutputs()?['body/value']",
"metadata": {
"operationMetadataId": "51079ae2-e43a-4424-a687-d56dd5d511c5"
},
"type": "OpenApiConnection",
"inputs": {
"host": {
"apiId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
"connectionName": "shared_sharepointonline",
"operationId": "GetOnNewItems"
},
"parameters": {
"dataset": "https://sharepoint.com/sites/Sites3/",
"table": "7f8831ea-d237-412e-ac15-0f0777992d1f"
},
"authentication": "@parameters('$authentication')"
}
  }
}
```
## 5. Actions
_For reference, actions is a key that is a collection of actions, so any individual action key sits within the actions key; sorry, I know that probably makes no sense_
Actions holds all of the actions within the flow. Obviously this is the big one, and there are lots of variations of the action key, but there are 3 main groups: OpenApiConnection, Containers, and Operations. Actions is an object, not an array, so every action is an object within the object. It is also recursive, with containers having their own actions (more on that later). The biggest call out is the run order: this is not the order in the object, which is the order they were added to the flow. The run order is actually backward; it's based on the runAfter key. So in theory you start with the last action, find the item it ran after, and work your way back up the flow.
You can also peek the code of your actions to see everything.

**OpenApiConnection**
The name of the action is the key, and it has the following keys:
- runAfter: Object that can contain multiple actions (branch merge) and has an array of run after conditions (Succeeded, Failed, Timedout, Skipped).
- metadata/operationMetadataId: id of the action type, e.g. all SharePoint GetItems actions will have the same operationMetadataId.
- inputs/host/connectionName: this is the reference used to link to the connection references, so this is the key to creating a relationship with the connectionReferences key.
- inputs/host/operationId: the type of api action, e.g. GetItem would include SharePoint Get Item, Excel Get a row, Dataverse Get a row by ID.
- parameters: all of the inputs within the action; this will often not be representative of the UI, so you may select a file name but the input be a file id.
- secureData: for securing inputs/outputs of the action; it has a properties key that is an array for inputs/outputs.
- runtimeConfiguration: for returning arrays (like GetItems/ListItems), only shown if pagination is turned on; shows the minimumItemCount key for how many rows per page.
- retryPolicy: what retries the action performs if it fails (not shown if set to default). Includes a type key (fixed, none, exponential), and each type shows different inputs.
``` javascript
"Get_a_row_by_ID": {
"runAfter": {
"List_rows": [
"Succeeded"
]
},
"metadata": {
"operationMetadataId": "bccfb3ea-e703-468d-a216-0b978ef670d9"
},
"type": "OpenApiConnection",
"inputs": {
"host": {
"apiId": "/providers/Microsoft.PowerApps/apis/shared_commondataserviceforapps",
"connectionName": "shared_commondataserviceforapps",
"operationId": "GetItem"
},
"retryPolicy": {
"type": "fixed",
"count": 1,
"interval": "PT20S"
},
"parameters": {
"entityName": "workflows",
"recordId": "@outputs('List_rows')?['body/value'][0]?['workflowid']"
},
"authentication": "@parameters('$authentication')"
}
},
```
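Since retryPolicy (like several of these keys) is simply omitted when left at its default, reading it back takes a little care; inputs can even be a plain string for operations like Compose. A hedged helper sketch:

```python
def retry_policy(action: dict) -> dict:
    """Return an action's retryPolicy; the key is omitted entirely when
    the default policy is in use, so fall back to a sentinel. Note that
    inputs may be a bare string (e.g. Compose), not a dict."""
    inputs = action.get("inputs")
    if isinstance(inputs, dict):
        return inputs.get("retryPolicy", {"type": "default"})
    return {"type": "default"}

# Trimmed from the Get_a_row_by_ID example above
get_row = {
    "type": "OpenApiConnection",
    "inputs": {"retryPolicy": {"type": "fixed", "count": 1, "interval": "PT20S"}},
}
compose = {"type": "Compose", "inputs": "hello world"}
```

The same absent-means-default pattern applies to runtimeConfiguration and secureData, so any tooling that reads the definition should treat missing keys as "default", not as errors.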
**Containers**
Containers are actions that have internal actions: Scope, Condition, Switch, ApplyToEach, DoUntil. They are all a little different, mainly around the branching logic within them. They also mess with our run order, as they break the rule for the first action within each branch: those run by position within the container, not runAfter (as runAfter is used for the container itself).
- type: what type of container
- actions: contains all of the actions within the container
- runAfter: same as before
- expression (not for Scopes): the input for the container
- else (Condition only): the branch for false; same structure as actions
- cases (Switch only): each switch branch sits within the cases key, with Case, Case 2, etc., and a Default. Each Case has another key, case, for the condition match, plus its own actions
- foreach (ApplyToEach only): the array input
``` javascript
"Condition": {
"actions": {
"Get_a_row_by_ID": {
"type": "OpenApiConnection",
"inputs": {
"parameters": {
"entityName": "accounts",
"recordId": "12345"
},
"host": {
"apiId": "/providers/Microsoft.PowerApps/apis/shared_commondataserviceforapps",
"connectionName": "shared_commondataserviceforapps",
"operationId": "GetItem"
},
"authentication": "@parameters('$authentication')"
}
}
},
"runAfter": {},
"else": {
"actions": {
"Compose": {
"type": "Compose",
"inputs": "hello world"
}
}
},
"expression": {
"and": [
{
"equals": [
"",
""
]
}
]
},
"type": "If"
}
```
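Because containers nest their own actions (and Condition adds else, Switch adds cases), counting or searching actions has to recurse. Below is a sketch of such a walk; it covers the container keys described above, and the sample object is the Condition example trimmed to its structure:

```python
def walk_actions(actions):
    """Yield (name, action) for every action, recursing into the nested
    keys containers use: actions, else/actions, and each case's actions."""
    for name, action in (actions or {}).items():
        yield name, action
        yield from walk_actions(action.get("actions"))
        yield from walk_actions((action.get("else") or {}).get("actions"))
        for case in (action.get("cases") or {}).values():
            yield from walk_actions(case.get("actions"))

# The Condition example above, trimmed to its structure
condition = {
    "Condition": {
        "type": "If",
        "actions": {"Get_a_row_by_ID": {"type": "OpenApiConnection"}},
        "else": {"actions": {"Compose": {"type": "Compose"}}},
    }
}
names = [name for name, _ in walk_actions(condition)]
```

This kind of recursive walk is the basis for most definition tooling, whether you are counting actions, finding connectors, or flagging missing retry policies.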
**Operations**
Operations are the standard actions like setVariable and compose.
- type: type of operation
- runAfter: same as before
- inputs: inputs for the action, specific to each operation
``` javascript
"Select": {
"runAfter": {
"For_each": [
"Succeeded"
]
},
"type": "Select",
"inputs": {
"from": "@outputs('Get_items')?['body/value']",
"select": {
"@{item()?['Title']}": "test"
}
}
}
```
---
I have barely scratched the surface of the definition JSON (and I'm sure I have a few things wrong, as a lot was deduced from nosing around). Understanding it may not be necessary, but I find this knowledge really useful when things don't work as expected, and I also find it kind of cool to get into the nuts and bolts of how things work, and as you made it to the end I suspect you do too 😎 | wyattdave |
1,904,402 | Introducing the New Angular TextArea Component | TLDR: The new Syncfusion Angular TextArea component, available in the 2024 Volume 2 release, enhances... | 0 | 2024-07-08T15:40:07 | https://www.syncfusion.com/blogs/post/new-angular-textarea-component | angular, development, web, ui | ---
title: Introducing the New Angular TextArea Component
published: true
date: 2024-06-28 11:26:20 UTC
tags: angular, development, web, ui
canonical_url: https://www.syncfusion.com/blogs/post/new-angular-textarea-component
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cxflx2ytyhdnrn0kylk2.png
---
**TLDR:** The new Syncfusion Angular TextArea component, available in the 2024 Volume 2 release, enhances the multiline text input experience with features like resizing, floating labels, and extensive customization options. Explore its robust features and learn the steps to get started with it.
We’re thrilled to unveil the new [Angular TextArea](https://www.syncfusion.com/angular-components/angular-textarea "Angular TextArea") component in the [2024 volume 2](https://www.syncfusion.com/forums/188642/essential-studio-2024-volume-2-main-release-v26-1-35-is-available-for-download "Essential Studio 2024 Volume 2") release.
It is a robust and flexible user interface element designed to enhance multiline text input. It offers a wide range of features and customization options, enabling developers to create rich, interactive text areas for their web apps. This component supports advanced functionalities, including character count limits, resizing, placeholder text, and custom styling, making it an essential tool for improving user experience and boosting productivity in web development.
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Angular-TextArea-component.png" alt="Angular TextArea component" style="width:100%">
<figcaption>Angular TextArea component</figcaption>
</figure>
## Key features
The Angular TextArea component offers a variety of features designed to enhance user experience and provide extensive customization options. With these, you can seamlessly input and edit multiline text content.
### Resizable text areas
The Angular TextArea component can be [resized](https://ej2.syncfusion.com/angular/demos/#/material3/textarea/resize "Demo: Resize functionality of the Angular TextArea component") vertically, horizontally, or in both directions by selecting the corresponding resize mode option.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Resizing-the-Angular-TextArea-component-1.gif" alt="Resizing the Angular TextArea component" style="width:100%">
<figcaption>Resizing the Angular TextArea component</figcaption>
</figure>
### Floating label
The Angular TextArea intelligently floats the placeholder text based on the specified [floating label type](https://ej2.syncfusion.com/angular/demos/#/material3/textarea/floating-label "Demo: Floating label functionality of the Angular TextArea component") option. When users start typing, the label transitions elegantly above the text area. This provides users with clear guidance about the required input and enhances usability.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Resizing-the-Angular-TextArea-component-1.gif" alt="Floating label in Angular TextArea component" style="width:100%">
<figcaption>Floating label in Angular TextArea component</figcaption>
</figure>
### Customization options
Developers have complete control over the TextArea component’s appearance and behavior. They can customize the number of rows and columns, specify the maximum length of input, turn the text area on or off, and apply custom CSS styles to match the look and feel of their app.
#### Rows and columns
Easily customize the dimensions of your text area by specifying the desired number of [rows and columns](https://ej2.syncfusion.com/angular/documentation/textarea/rows-columns "Rows and columns in Angular TextArea component") so that it fits seamlessly into any application layout.
#### Maximum length
You can also define the [maximum number of characters](https://ej2.syncfusion.com/angular/documentation/textarea/max-length "Maximum length in Angular TextArea component") users can input in the TextArea.
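As a sketch of how these two options might look together in a component (property names are taken from the linked documentation pages and should be verified against your Syncfusion version):

```js
@Component({
  selector: 'app-root',
  // rows/cols set the visible size; maxLength caps the number of characters.
  template: `<ejs-textarea id='comments' placeholder='Enter your comments'
              [rows]='5' [cols]='40' [maxLength]='250'></ejs-textarea>`
})
export class AppComponent {}
```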
### Accessibility and compatibility
Accessibility is a top priority in modern web development. The Angular TextArea component is designed with accessibility in mind. It ensures that users with disabilities can navigate and interact with the text area using assistive technologies such as screen readers and keyboard shortcuts. Additionally, the component is compatible with all major web browsers, ensuring a seamless experience for users across different platforms.
### Seamless integration
The TextArea component seamlessly integrates with other components and frameworks, making it easy to incorporate into existing web apps.
## Getting started with Angular TextArea component
We’ve explored the user-friendly features of the Angular TextArea component. Let’s see how to integrate it into your app.
1. First, refer to the [Getting started with Angular TextArea component documentation](https://ej2.syncfusion.com/angular/documentation/textarea/getting-started "Getting started with Angular TextArea component").
2. Include the necessary EJ2 scripts and stylesheets in your project.
3. Then, add the Angular TextArea component to your HTML markup.
4. You can then configure the component properties and customize its appearance and behavior according to your requirements. Refer to the following code example.
**[app.component.ts]**
```js
import { Component } from '@angular/core';
@Component({
selector: 'app-root',
// Specifies the template string for the TextArea component.
template: `<div><ejs-textarea id='default' placeholder='Enter your comments' floatLabelType='Auto' resizeMode='Both' ></ejs-textarea></div>`
})
export class AppComponent {
constructor() {
}
}
```
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Integrating-TextArea-component-in-Angular-app.gif" alt="Integrating TextArea component in Angular app" style="width:100%">
<figcaption>Integrating TextArea component in Angular app</figcaption>
</figure>
## Conclusion
Thanks for reading! The [Angular TextArea](https://www.syncfusion.com/angular-components/angular-textarea "Angular TextArea") component is a powerful tool for enhancing multiline text input in web apps. Its advanced features, customization options, and seamless integration empower developers to create rich and interactive text areas that provide an exceptional user experience. Whether you’re building a simple form or a complex text editing app, the Angular TextArea component has you covered.
Ready to take your multiline text input to the next level? Try out the Angular TextArea component today and see the difference it can make in your web apps!
For a detailed overview of this release’s exciting updates, we invite you to visit our [Release Notes](https://help.syncfusion.com/common/essential-studio/release-notes/v26.1.35 "Essential Studio Release Notes") and [What’s New](https://www.syncfusion.com/products/whatsnew/angular "What’s new in Syncfusion Angular UI components") pages.
For our existing customers, the new version of Essential Studio is now available on the [License and Downloads](https://www.syncfusion.com/account/login "Essential Studio license and downloads page") page. If you’re new to Syncfusion, sign up for a [30-day free trial](https://www.syncfusion.com/downloads "Free evaluation of the Essential Studio products") to try our controls yourself.
If you have any questions, you can reach us through our [support forum](https://www.syncfusion.com/forums "Syncfusion Support Forum"), [support portal](https://support.syncfusion.com/ "Syncfusion Support Portal"), or [feedback portal](https://www.syncfusion.com/feedback/ "Syncfusion Feedback Portal"). We’re always happy to assist you!
## Related blogs
- [Syncfusion Essential Studio 2024 Volume 2 Is Here!](https://www.syncfusion.com/blogs/post/syncfusion-essential-studio-2024-vol2 "Blog: Syncfusion Essential Studio 2024 Volume 2 Is Here!")
- [What’s New in Essential JS 2: 2024 Volume 2](https://www.syncfusion.com/blogs/post/whats-new-essential-js-2-2024-vol2 "Blog: What’s New in Essential JS 2: 2024 Volume 2")
- [What’s New in Angular 18?](https://www.syncfusion.com/blogs/post/whats-new-in-angular-18 "Blog: What’s New in Angular 18?")
- [Optimize Blog Management with Angular Gantt Chart](https://www.syncfusion.com/blogs/post/blog-management-angular-gantt-chart "Blog: Optimize Blog Management with Angular Gantt Chart") | jollenmoyani |
1,905,622 | Docker for Node.js Developers: A DevOps Guide to Efficient Deployment | Introduction We're working on "Docker for Node.js Developers: A DevOps Guide to Efficient... | 0 | 2024-07-08T17:29:40 | https://dev.to/rifat87/docker-for-nodejs-developers-a-devops-guide-to-efficient-deployment-294j | docker, node, express, javascript | ##Introduction
We're working on "Docker for Node.js Developers: A DevOps Guide to Efficient Deployment". We'll explore how to master containerization with Docker and transform our Node.js application deployment. We'll learn to optimize workflows, manage dependencies, and deploy to cloud platforms, reducing errors and increasing efficiency. We'll get our application to market faster with Docker.
## Creating a custom image and Dockerfile

1 . First we initialize the Node project: `npm init -y`
2 . Install Express: `npm i express`
3 . Create an `app.js` file for the server:

```javascript
const express = require("express")
const app = express()
app.get("/", (req, res) => {
res.send("<h2> Hi there, I am Tahzib!!! Wait there, I am coming </h2>")
})
const port = process.env.PORT || 3000;
app.listen(port, ()=> console.log(`Listening on port ${port}`));
```
4 . Create a **Dockerfile**,
```
FROM node:22.3-alpine3.19
WORKDIR /app
COPY package.json .
# The (.) means the current directory, which is the same as /app here,
# so package.json is copied into the /app directory.
RUN npm install
# Why copy package.json separately instead of copying the whole directory once?
# Docker images are built in layers: FROM, WORKDIR, COPY and RUN each create a
# layer, and every layer is cached. If package.json has not changed, Docker
# skips this COPY layer and the cached RUN npm install layer, and only the
# final COPY below re-runs when application files change. Copying twice
# therefore speeds up rebuilds considerably.
#--------------------- docker ignore -------------------
# The COPY below copies everything in the build context, but we don't need
# node_modules, and there may be other files and folders we don't need either.
# A .dockerignore file lets us exclude those unnecessary files.
COPY . ./
# The port the server will run on
ENV PORT 3000
EXPOSE $PORT
# Entry point for the container
CMD ["npm", "run", "dev"]
# The dev script runs "nodemon -L app.js": nodemon in legacy watching mode,
# watching app.js and restarting the application when changes are detected.
# NOTE: docker run -v pathtofolderonhost:pathtofolderincontainer -d --name node-app -p 3000:3000 node-app-image
# e.g. docker run -v D:\BackendDevelopment\NodeDocker:/app -d --name node-app -p 3000:3000 node-app-image
```

5 . Now we add a `.dockerignore` file, which lists files that do not need to be copied while building the image.

Here we list node_modules, Dockerfile, .dockerignore, .git and .gitignore. We may add more entries later for other files that should not be copied into the container.
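Spelled out, the `.dockerignore` from the screenshot would contain:

```
node_modules
Dockerfile
.dockerignore
.git
.gitignore
```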
6 . We need to add **nodemon** for automatic restarts on file changes:


In `package.json`, we need to modify the scripts, adding the `start` and `dev` entries shown in the image.
7 . Next we build the image by running:
- `docker build -t node-app-image .`
- Let's break down the docker build command:
1. **docker build:** This is the Docker command to build a Docker image from a Dockerfile.
2. **-t node-app-image:** This option specifies the tag or name for the resulting Docker image. In this case, the image will be named node-app-image. The -t option is short for --tag.
3. **.:** This is the build context, which is the directory that contains the Dockerfile and any other files required for the build process. The dot (.) represents the current working directory, which is D:\BackendDevelopment\NodeDocker in this case.
So, when you run the command `docker build -t node-app-image .`, Docker will:
- Look for a file named Dockerfile in the current directory (D:\BackendDevelopment\NodeDocker).
- Read the instructions in the Dockerfile to build the Docker image.
- Create a new Docker image with the name node-app-image.
- Use the files in the current directory as the build context, which means Docker will copy the files into the image during the build process.

In summary, this command tells Docker to build a new image named node-app-image using the instructions in the Dockerfile located in the current directory (D:\BackendDevelopment\NodeDocker).
8 . We check the image by running `docker images`
## Create Container
1. `docker run -v ${pwd}:/app:ro -v /app/node_modules --env PORT=4000 -d --name node-app -p 3000:4000 node-app-image`
**docker run:** This command is used to run a Docker container from a Docker image.

**-v ${pwd}:/app:ro:** This option mounts a volume from the host machine to the container. Here's what it does:

- **${pwd}** is a variable that represents the current working directory (in this case, D:\BackendDevelopment\NodeDocker).
- **:/app** specifies the directory in the container where the volume will be mounted. In this case, it's /app.
- **:ro** means the volume is mounted as read-only. This means that the container can read files from the host machine, but cannot modify them.

So, this option mounts the current working directory on the host machine to the /app directory in the container, allowing the container to access the files in the current directory, but not modify them.

**-v /app/node_modules:** This option mounts an anonymous volume that stops the container's node_modules directory from being hidden by the bind mount above. This preserves the dependencies installed by npm or yarn inside the image, so they are not overwritten by the (possibly empty) node_modules on the host.

**--env PORT=4000:** This option sets an environment variable inside the container. Here it sets PORT to 4000, which the app reads via process.env.PORT to decide which port to listen on. (Alternatively, --env-file ./.env can load many variables at once from a .env file, keeping sensitive settings such as credentials out of the command line.)

**-d:** This option tells Docker to run the container in detached mode, which means the container will run in the background, and you won't see its output in the terminal.

**--name node-app:** This option gives the container a name, node-app, which can be used to reference the container in other Docker commands.

**-p 3000:4000:** This option maps a port from the host machine to a port in the container. In this case, it maps port 3000 on the host machine to port 4000 in the container. This allows you to access the application running inside the container by visiting http://localhost:3000 in your browser.

**node-app-image:** This is the name of the Docker image that the container will be created from.
So, when you run this command, Docker will:
- Create a new container from the node-app-image image.
- Mount the current working directory on the host machine to the /app directory in the container, allowing the container to access the files in the current directory.
- Mount an anonymous volume at /app/node_modules to preserve the dependencies installed by npm or yarn.
- Set the PORT environment variable to 4000 inside the container.
- Run the container in detached mode.
- Give the container the name node-app.
- Map port 3000 on the host machine to port 4000 in the container.

This command sets up a Node.js application to run in a Docker container, with the application code mounted from the host machine, the PORT environment variable set, and the ability to access the application from the host machine via port 3000.
---
## Deleting State Volumes
1. `docker volume ls` will show all the volumes that we created.

2 . We can delete them using,
- `docker volume rm <id>`
- `docker volume prune`
Or we can delete them while removing the container. For that we can use: `docker rm node-app -fv`
## Docker Compose
As we have seen, the command for running even a single container gets long. With multiple containers it becomes very difficult to type such big commands without making typos. To solve that we have Docker Compose.
#### Dockerfile vs Docker Compose
We need a Dockerfile to create a Docker image, which is a packaged version of our application. The Dockerfile defines how to build the image, including the dependencies and configuration required to run our application.
We need Docker Compose to define and run a multi-container application, which consists of multiple services that work together. Docker Compose makes it easy to manage the dependencies between services, scale individual services, and restart services when they fail.
At first we need a file named **docker-compose.yaml**. The file must have a **.yaml** (or **.yml**) extension.
```yml
services:
node-app:
build: .
ports:
- "3000:3000"
volumes:
- ./:/app
- /app/node_modules
environment:
- PORT=3000
```
Ignore the preview above: in docker compose files the indentation is very important, and we use the same number of spaces (here 4) at every indentation level. Follow the image:

And now the moment of truth, give command,
- `docker compose up`

- We can see the image and running container,

- To **remove** the container we can simply run `docker compose down`, but it will keep the volume data. To remove the volumes as well, we need to run `docker compose down -v`

1 . There is one problem here. After bringing compose down, running **`docker compose up -d`** (`-d` for detached mode) a second time will not pick up changes we made, because docker compose does not rebuild the image automatically. To apply the changes and create a new image, we have to run:

- We change the port: 4000,
`docker compose up -d --build`


- Then we set the port back to 3000 and rebuild the image:

The bind-mounted volume keeps the files in sync between the local host and the container.
---
## Development vs Production configs
So far we have used Docker only for development, but production does not work the same way. We can't simply deploy whatever we have been editing; users should see a clean build. For that we use **multiple docker compose files**, different files for different uses.

3 new files are created,
1 . `docker-compose.dev.yml` ,

```yml
services:
node-app:
volumes:
- ./:/app
- /app/node_modules
environment:
- NODE_ENV=development
command: npm run dev
```
2 . **`docker-compose.prod.yml`**,
```yml
services:
node-app:
environment:
- NODE_ENV=production
command: node app.js
```
3 . **`docker-compose.yml`**,
```yml
services:
node-app:
build: .
ports:
- "3000:3000"
environment:
- PORT=3000
```
and kept the previous docker compose file as docker-compose.backup.yml.
- The most important , we have to use separate commands for production and development,
**Dev:** `docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d --build`
**Production:** `docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build`


We can see lots of docker-compose files which are not needed in the container. We can easily exclude them by adding them to the **`.dockerignore`** file.

[ the `*` wildcard **means any file that matches the preceding portion of the pattern** ]
---
But we have to solve another issue: the RUN command in the **Dockerfile** executes for both development and production builds, and in both cases it runs `npm install` with dev dependencies, which are not needed in production. To solve this issue we need some changes in our Dockerfile.
1.

The new **RUN** command uses **$NODE_ENV**, which is received as a build argument, so we declare it with ARG. We remove the previous RUN command; the new one is a small shell script.
```
ARG NODE_ENV
RUN if [ "$NODE_ENV" = "development" ]; \
then npm install; \
else npm install --only=production; \
fi
```
2 . Next we make changes in the docker-compose.dev.yml file:

We added `build`, `context` and `args`.
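Based on the screenshot, the dev compose file presumably ends up looking something like this (the `args` entry feeds the Dockerfile's `ARG NODE_ENV`; exact values are an assumption from context):

```yml
services:
    node-app:
        build:
            context: .
            args:
                NODE_ENV: development
        volumes:
            - ./:/app
            - /app/node_modules
        environment:
            - NODE_ENV=development
        command: npm run dev
```

The prod file would get the same treatment with `NODE_ENV: production`.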
3 . Same for docker-compose.prod.yml,

And the terminal commands are same for production and development.

But we don't need the `--build` flag any more, because we added the build configuration to the compose file.
## Adding a mongo container

```yml
mongo:
image: mongo
restart: always
environment:
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=password
```
- Add this mongo service, then run this command in the terminal: `docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d`.
- Then open a shell inside the container: `docker exec -it devopswithnodejs-mongo-1 sh`
- Now run `mongosh -u "admin" -p "password"`
And you are successfully logged into the mongodb server.

You can check the database using mongodb database commands.
Keep in mind that from now on we should not use the `-v` flag with `docker compose down`, because that would delete the volume holding the mongo data we want to persist.
## Communication between containers
1. Install mongoose: `npm i mongoose`
2. Bring all the containers down and build again, because of the new package:

- `docker compose -f docker-compose.yml -f docker-compose.dev.yml down`
3 . Now we are going to build the image again,

- ` docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d --build`
Next, we connect to the database.
4 .

In place of `admin:`, use the username we set for the database.
- After the username come `:` and the password we set for mongo, and then the address of the mongo container. But here the networking comes into play: because both the mongo and node containers are on the same Docker network, we don't need the IP; we can use the service name (`mongo`) as the hostname.
- To check the network list

- Here our network name is devopswithnodejs_default. And to inspect it we can give, `docker network inspect devopswithnodejs_default`

## Adding config.js
This part is not mandatory.
1. Create config directory and config.js file.

Inside it, add:
```javascript
module.exports = {
MONGO_IP: process.env.MONGO_IP || "mongo",
MONGO_PORT: process.env.MONGO_PORT || 27017,
MONGO_USER: process.env.MONGO_USER,
MONGO_PASSWORD: process.env.MONGO_PASSWORD
}
```
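Note that `MONGO_USER` and `MONGO_PASSWORD` have no fallback here, so they must be supplied to the node-app container. One way (an assumption, since the article does not show where they are set) is via the compose file, mirroring the mongo service's root credentials:

```yml
services:
    node-app:
        environment:
            - MONGO_USER=admin
            - MONGO_PASSWORD=password
```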
2 . Use them on **app.js**,

```javascript
const express = require("express")
const mongoose = require( "mongoose");
const { MONGO_USER, MONGO_PASSWORD, MONGO_IP, MONGO_PORT } = require("./config/config");
const app = express()
mongoose.connect(`mongodb://${MONGO_USER}:${MONGO_PASSWORD}@${MONGO_IP}:${MONGO_PORT}/?authSource=admin`).then(()=> console.log("Successfully connected to dB")).catch((e) => console.log("Error is: ", e));
app.get("/", (req, res) => {
res.send("<h2> Hi....., I am Tahzib, wait there I am coming </h2>")
})
const port = process.env.PORT || 3000;
app.listen(port, ()=> console.log(`Listening on port ${port}`));
```
Remember to run `docker compose down` and `docker compose up` again.

To check and verify the connection:

You may see some warnings on the mongo connection; these come from using an older version of mongo. If you run into problems, look up how to add the right mongo dependencies. And you are done.
## Container bootup order
We have two containers, one node and the other mongo. If the node container starts first, it will not find the mongo container, which causes an error. So we declare a dependency in the docker-compose file, which controls the start order.

```yml
depends_on: # It means this service depends on the mongo container, so the node container will start after the mongo container.
- mongo
```
For extra safety we can use a function that retries the connection every 5 seconds:

```javascript
const connectWithRetry = () => {
mongoose.connect(`mongodb://${MONGO_USER}:${MONGO_PASSWORD}@${MONGO_IP}:${MONGO_PORT}/?authSource=admin`).then(()=> console.log("Successfully connected to dB")).catch((e) => { console.log("Error is: ", e);
setTimeout(connectWithRetry, 5000);
})
}
connectWithRetry();
```
To check the connection we can run `docker logs devopswithnodejs-node-app-1 -f`

## Building a CRUD application
1 . Added these 2 lines to app.js,
- `const postRouter = require("./routes/postRoutes")`
- `app.use(express.json())`

2 . Now we are going to create 3 folders and inside them 3 files. Like: folder->file,
- controllers->postController.js
- models->postModel.js
- routes->postRoutes.js

Here are the file contents:
#### postController.js
```javascript
const Post = require("../models/postModel")
exports.getAllPosts = async (req, res, next) => {
try {
const posts = await Post.find();
res.status(200).json({
status: "success",
results: posts.length,
data: {
posts,
},
})
} catch(e) {
res.status(400).json({
status: "fail",
})
}
}
//localhost:3000/posts/:id
exports.getOnePost = async(req, res, next) => {
try {
const post = await Post.findById(req.params.id);
res.status(200).json({
status: "success",
data: {
post,
},
})
} catch(e) {
res.status(400).json({
status: "fail",
})
}
}
exports.createPost = async(req, res, next) => {
try {
const post = await Post.create(req.body);
res.status(200).json({
status: "success",
data: {
post,
},
})
} catch(e) {
res.status(400).json({
status: "fail",
})
}
}
exports.updatePost = async(req, res, next) => {
try {
const post = await Post.findByIdAndUpdate(req.params.id, req.body, { new: true, runValidators: true,});
res.status(200).json({
status: "success",
data: {
post,
},
})
} catch(e) {
res.status(400).json({
status: "fail",
})
}
}
exports.deletePost = async(req, res, next) => {
try {
const post = await Post.findByIdAndDelete(req.params.id);
res.status(200).json({
status: "success",
})
} catch(e) {
res.status(400).json({
status: "fail",
})
}
}
```
#### postModel.js
```javascript
const mongoose = require("mongoose");
const postSchema = new mongoose.Schema({
title: {
type: String,
required: [true, "Post must have title"],
},
body: {
type: String,
required: [true, "post must have body"],
}
})
const Post = mongoose.model("Post", postSchema)
module.exports = Post;
```
#### postRoutes.js
```javascript
const express = require("express")
const postController = require("../controllers/postController");
const router = express.Router();
router
.route("/")
.get(postController.getAllPosts)
.post(postController.createPost)
router
.route("/:id")
.get(postController.getOnePost)
.patch(postController.updatePost)
.delete(postController.deletePost)
module.exports = router;
```
## Inserting data to the database
We are going to use **Postman** to insert data.
1.

- Use this API for posting data: `http://localhost:3000/api/v1/posts`
- Use the HTTP method **POST**.
2 . Use the **GET** method to see all the data using the same API:

## Sign up and Login
1. First we create a user model in the **models** directory. Like: models->userModel.js

```javascript
const mongoose = require("mongoose")
const userSchema = new mongoose.Schema({
username: {
type: String,
required: [true, 'User must have a user name'],
unique: true,
},
password: {
type: String,
required: [true, 'User must have a password'],
},
})
const User = mongoose.model("User", userSchema)
module.exports = User
```
2 . Now we create a controller for it, it is **authController.js** in **controllers** directory.

```javascript
const User = require("../models/userModel")
exports.signUp = async(req, res) => {
try{
const newUser = await User.create(req.body)
res.status(201).json({
status: "success",
data: {
user: newUser,
},
})
}catch(e){
res.status(400).json({
status: "fail"
})
}
}
```
3 . Now we need the routes. So we create **userRoutes.js** in **routes** directory.

```javascript
const express = require("express")
const authController = require("../controllers/authController")
const router = express.Router()
router.post("/signup", authController.signUp)
module.exports = router;
```
4 . Lastly, we add the router middleware in **app.js**:

- `const userRouter = require("./routes/userRoutes")`
- `app.use("/api/v1/users", userRouter)`
And we are done; we can check it in **Postman**.
API,

- `http://localhost:3000/api/v1/users/signup`

- Body:
```json
{
"username": "Tahzib",
"password": "password"
}
```
Until now our password is saved as plain text. To hash it we are going to use the **bcrypt** package.
- `npm i bcrypt`

**NOTE: use bcrypt**
Now bring docker compose down and up again with a build:
- ` docker compose -f docker-compose.yml -f docker-compose.dev.yml down`
- `docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d --build`


I officially end my backend journey here; time to explore other tech stacks. Goodbye, and sorry for the sudden end of this post.
| rifat87 |
1,905,842 | Code Transformation with Amazon Q | Introduction An exciting feature of Amazon Q is the concept of agents that autonomously... | 0 | 2024-07-09T09:24:42 | https://dev.to/aws-heroes/code-transformation-with-amazon-q-40df | aws, tutorial, genai, java | ## Introduction
An exciting feature of `Amazon Q` is the concept of agents that autonomously perform a complex, multistep task from a single prompt. One of these agents is the "Developer Agent for Code Transformation" which automates the process of upgrading and transforming Java applications from `Java 8` or `Java 11` to `Java 17`, with more language support on the way.
I have previously demonstrated this capability using a simple `Java 8` example. However, when I stumbled upon an old `Java 11` Spring Boot application with thousands of lines of code, the build failed to compile on `Java 17` with various upgrade issues, and it meant a time-consuming process to manually step through all of the problems.
Now, a few months on, I wanted to see whether any step change improvements had been made to the agent, so dug out the old codebase.
## Setting up the Java 11 Bicycle Licence Application
In March 2020, I presented at an online AWS Tech Talk on new features of `Amazon QLDB`. To bring this to life, we built a quick demo using Java 11 and Spring Boot. You can find the code repository on GitHub [here]( https://github.com/mlewis7127/bicycle-licence-ui-master). The repository does require a QLDB Ledger and table to be set up in the `eu-west-2` region, with more details provided in the associated `README.md` file.
First create a new QLDB ledger in the `eu-west-2` region:

And once created, open up the `PartiQL editor` and run a command to create a new table:

Then we create a new directory by cloning the repository using the following command:
```shell
git clone https://github.com/mlewis7127/bicycle-licence-ui-master.git
```
We will use JetBrains IntelliJ IDEA for this transformation, choosing to open a new project and selecting the folder created in the previous step.

If a pop-up appears, enable annotation processing for Lombok. Make sure to set the project to use version 11 of the `Java SDK`. I am using `Amazon Corretto` for my distribution of the Open Java Development Kit (OpenJDK).

This is set for the project by selecting `File --> Project Structure`, then choosing `Project` under `Project Settings`. Clicking on the SDK dropdown box allows you to select `Download SDK`, where you can specify the version and vendor of the JDK to download.
From here, run a Maven `clean` and then a Maven `compile`. In the screenshot below I am doing this from the Maven plugin, or you can run from the command line. This will compile all of the code successfully as shown below.

Now launch the application by running the `BicycleLicenceApplication` class in a new configuration.

This will start the application, which can be accessed in a browser window on `http://localhost:8080/`. I have put together a short video below showing the application running as a `Java 11` application.
{% embed https://youtu.be/LbmzamV9CZA %}
## Using Amazon Q to automatically upgrade to Java 17
Now we have the application successfully running using `Java 11`, we want to upgrade to `Java 17`. We tell `Amazon Q` that we want to use the Code Transformation agent by opening a chat window and typing `/transform` and selecting the agent.

This launches the `Developer Agent for Code Transformation`. The `bicycle-licence-ui` module is automatically selected, and we press confirm to let the agent know we want to upgrade to `Java 17`.

Once we select transform, the agent takes over. It starts by building the Java module. Then it uploads the project artefacts that it has just built. Once these files have been uploaded, the code transformation job has been accepted, and the agent begins building the code in a secure build environment. Once built, `Amazon Q` begins analysing the code in order to generate a transformation plan.
Once created, you can see a summary of the transformation plan. In this case, we have an application with almost 2500 lines of code, in which we need to replace 2 dependencies, with changes made to 5 files.

A summary is also provided of the planned transformation changes.

The transformation plan generated follows a 3 step process:
1. Update JDK version, dependencies and related code
2. Upgrade deprecated code
3. Finalise code changes and generate transformation summary
This is where the massive improvements in code transformation shone. A few months ago, when the build of the application to `Java 17` failed, the automated process abruptly ended. Now, the agent takes the compilation errors, and looks to make changes to fix the errors, before attempting to build the code again. You can see below it took several attempts, making changes each time, before the code could successfully build on `Java 17`. The key point is this was all automated, with no manual input required.

After just 16 minutes, the code transformation was complete and had succeeded.

`Amazon Q` lets us know about the planned dependency that it updated, with the other identified dependency having been removed.

It also lists a number of additional dependencies that were updated during the upgrade.

A pop-up box allows you to see which files have been changed, and you can select each file individually and run a side-by-side comparison to evaluate the changes.

As an example, I can see that all references to `javax.servlet.http.HttpServletRequest` have been replaced by `jakarta.servlet.http.HttpServletRequest`, as `Spring Boot 3.0` has migrated from `Java EE` to `Jakarta EE` APIs for all dependencies. The agent had also implemented a new interface present in the latest Spring Framework version that had not previously existed.
After this, we accept the automated updates, and run a Maven `clean` and a Maven `compile` before making sure we click on the top left button in the screenshot to reload the latest version of the dependencies:

We can launch and test the application which is now running on `Java 17`.
{% embed https://youtu.be/eKIGflrCQ1s %}
## Final Observations
There has been a **massive improvement** in the capability of the “Developer Agent for Code Transformation” over the past few months. If you had tried and discounted the agent, you should definitely look to give it another go. As the underlying models improve, I think we will see further step change improvements happen in very short timescales on an ongoing basis.
Two areas to call out for me are:
**Unit Testing**
In the video above, the call to verify the digest failed with a `java.lang.NoSuchFieldError` in the logs. This is an example where despite having no compilation errors, we can still experience runtime errors. I have updated the GitHub repository with some unit tests as an example, which demonstrates how Amazon Q executes these tests as part of the code transformation.

We were able to fix the error by correcting a version mismatch between AWS SDK modules. This may not be found by unit tests, unless we were able to interact directly with AWS endpoints as part of these tests, and this is something being worked on. Nevertheless, ensuring there are sufficient unit tests to provide coverage of the core functionality will help to reduce examples of code compiling correctly but failing at runtime.
**AWS SDK Upgrades**
Although a number of libraries and frameworks were upgraded such as Spring and Log4j, the AWS SDK itself remained on Java 1.x. A big reason for this is the upgrade to AWS SDK for Java 2.x is a major rewrite that will typically require custom development to make work.
Watch the video below to see the agent in action and the steps it takes as described throughout this post.
{% embed https://youtu.be/e7Mvek3wP38 %}
| mlewis7127 |
1,906,545 | What is different between undefined and null in Java Script | I introduce that what is different between undefined and null because it is difficult to describe... | 0 | 2024-07-10T11:47:56 | https://dev.to/noah-00/what-is-different-between-undefined-and-null-in-java-script-1mdf | typescript, javascript, webdev, programming | In this post, I explain the difference between `undefined` and `null`, because it is difficult to describe exactly.
## Understanding the Difference between undefined and null
In many programming languages, there is usually one way to represent "no value," such as null.
However, in JavaScript, there are two ways to represent "no value": `null` and `undefined`. This can be surprising and confusing for people coming to JavaScript from other languages.
## 💡Basics
### null
`null` doesn't occur unless the programmer intentionally uses it.
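For example (a small sketch; `findUser` is a hypothetical helper), `null` only shows up because the code assigns or returns it on purpose:

```ts
// null appears only because the code explicitly uses it
let selectedUser = null; // "no user chosen yet", set on purpose

function findUser(id) {
  const users = { 1: "Ada" }; // hypothetical lookup table
  return users[id] ?? null;   // return null to signal "searched, found nothing"
}

console.log(selectedUser); // → null
console.log(findUser(42)); // → null
```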
### undefined
On the other hand, `undefined` occurs naturally, even if you don't use it explicitly.
For example, if you declare a variable and it has no initial value, JavaScript will assign `undefined` to it.
```ts
let test;
console.log(test);
// → undefined
```
When you access a property that doesn't exist on an object, or an element that is not in an array, the result automatically becomes `undefined`.
```ts
const object = {};
console.log(object.foo);
// → undefined
const array = [];
console.log(array[0]);
// → undefined
```
## 🔍JSON
### undefined
When you use `undefined` for the value of an object's property and then convert that object to JSON using JSON.stringify, the property is removed from the JSON output.
```ts
console.log(JSON.stringify({ name: undefined }));
// → {}
```
### null
On the other hand, when the value of a property is `null`, it will be retained.
```ts
console.log(JSON.stringify({ name: null }));
// → {"name": null}
```
## 👀Conclusion
It is easier to standardize on `undefined`, because `undefined` occurs naturally everywhere.
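As a quick recap, here is how the two values compare to each other (including the well-known `typeof null` quirk):

```ts
console.log(null == undefined);  // → true  (loose equality treats them as equal)
console.log(null === undefined); // → false (strict equality also compares types)
console.log(typeof undefined);   // → "undefined"
console.log(typeof null);        // → "object" (a long-standing JavaScript quirk)
```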
Happy Coding☀️ | noah-00 |
1,906,879 | How to build an enterprise chatbot with Amazon Q Business to integrate Confluence data | GenAI tools like ChatGPT are now widely used - for private purposes. In an enterprise context, the... | 0 | 2024-07-09T20:31:06 | https://dev.to/aws-builders/how-to-build-an-enterprise-chatbot-with-amazon-q-business-to-integrate-confluence-data-228j | amazonq, chatbot, genai, confluence | GenAI tools like ChatGPT are now widely used - for private purposes. In an enterprise context, the requirements for building a GenAI chatbot are more complex. Source data must be securely integrated into the chatbot. The data must not be shared with other companies using the same GenAI platform or used for training the model. User permissions must be supported so that restricted data is not shared with employees who should not have access to it.
[Amazon Q Business](https://aws.amazon.com/q/business/) is an AWS service that meets these requirements and enables businesses to easily build chatbots and GenAI assistants. [Connectors](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/connectors-list.html) are available to integrate various enterprise applications with Amazon Q Business. Confluence integration is one of the connectors available out of the box. It can import Confluence data into Q Business while respecting user-specific permissions.
This blog post shows how to integrate Confluence with Amazon Q Business and demonstrates that Q Business really respects permissions - it doesn't leak data to unauthorized users or employees.
## Prerequisites
Amazon Q Business requires AWS IAM Identity Center for managed users. Therefore, set up IAM Identity Center first. The Identity Center instance must be set up in the same region as Q Business. Since Q Business is only available in the us-east-1 and us-west-2 regions, you must create AWS IAM Identity Center in one of these regions. If you have already configured IAM Identity Center in a different region, you must [delete](https://docs.aws.amazon.com/singlesignon/latest/userguide/regions.html#region-data) it and recreate it in one of the matching regions.
In addition, Q and Confluence must use the same user data, otherwise the permissions won't work. Confluence can also be integrated with AWS IAM Identity Center, so you can use single sign-on for Q and Confluence. To do this, first [set up AWS IAM Identity Center as the identity provider](https://dev.to/aws-builders/setting-up-aws-iam-identity-center-as-an-identity-provider-for-confluence-2l8) for Confluence.
## Getting started: Create a Confluence API key
To integrate Confluence with Amazon Q Business, create an API key for one of your Admin users. They will have access to all content necessary to crawl the entire Confluence instance. For production use, you would use a technical user.
If you are using Confluence SaaS, go to the following URL to create a new API key: https://id.atlassian.com/manage-profile/security/api-tokens

Next, copy the API key.

## Copy the API key to Secrets Manager
Q can use API keys that are stored in Secrets Manager. So create a new secret, select "other type of secret". Add the following attributes:
- username: email of your confluence user
- password: API key
- hostUrl: Confluence URL
Enter a secret name that begins with `QBusiness-`. Only secrets that start with this name can be used in Q.

## Create your Amazon Q Business application
Open Q Business in your AWS console and create a new application.

Enter the name of your application, such as `AmazonQ`, and then press "Next".

Configure the retrievers that will fetch data from the data source. If this is a proof of concept, select "Starter" and choose "1" unit.

## Add the Confluence data source
Select the Confluence data source from the list of data sources.

Enter a name for the data source and specify the source type (cloud or on-premises) and URL.

Select "Basic authentication" and choose the secret you created earlier.

Let's create a new service role, which is a recommended option.

Select the sync scope - select "Pages" as the minimum option to transfer Confluence pages to Q.

Next, choose how often you want Q to sync your Confluence data. In the case of a proof of concept, "Run on demand" is sufficient to avoid costs.

## User configuration
Select some AWS IAM Identity Center users who should have access to Q Business and select the subscription. Since there is a monthly fee after the free trial, check the [pricing](https://aws.amazon.com/q/business/pricing/) when selecting the subscription.
Now you are ready to use Amazon Q Business.

## Prepare Confluence test data
To test the permissions feature in Amazon Q, create a Confluence page with restricted access. Only users who have access to the page in Confluence will be able to use the information in Q Business. In this example, only "User One" has access to the page.

Create a sample content for the new page.

## Sync data source
Open the Q Business application, select the data source, and choose "Sync Now" to synchronize the data.

This will take a few minutes - Q Business will show the sync status in the meantime.

## Debugging Amazon Q Business
In the "Sync run history" there is a link to view the sync log in CloudWatch Logs. Again, it takes a few minutes for the log to be available.
The log information is useful in case Q Business doesn't work as expected. In my first test, Q Business only worked for the first user (with the API key). I was able to analyze the behavior in the CloudWatch logs.

Q Business uses API calls like this: https://jumic-q.atlassian.net/wiki/rest/api/content/1310724/restriction/byOperation/read?&limit=200&start=0
In my case, the email attribute was empty because my first user was set up manually. I claimed it in the Confluence organization, then it worked.

## Final test - Does Q Business handle permissions correctly?
First, let's test with "User Two". This user doesn't have access to the sample car information page.

Is it possible to leak the information in Q Business? No - Amazon Q responds that it can't find any relevant information.

Positive test - Can User One ask questions about the company car policy? Yes, it works because he is authorized in Confluence. Q Business works as expected.

## Summary
Amazon Q Business with Confluence works really well. You can create a chatbot / AI assistant without programming - really cool. The security features are also great. It's good to see that Q Business doesn't leak information to unauthorized users. It only shows information when users are authorized in Confluence. In addition, Q Business also cites the source pages in its answers. That's useful for double-checking each result to make sure the answers are correct.
Are there alternatives to Amazon Q Business? Atlassian has introduced its own GenAI functionality in Confluence. It's called [Atlassian Intelligence](https://support.atlassian.com/organization-administration/docs/atlassian-intelligence-features-in-confluence/
). It can summarize pages or answer questions about the Confluence content. If you only need GenAI capabilities for Confluence, this feature might be sufficient.
Why still use Amazon Q Business? Q Business can connect to [many other business applications](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/connectors-list.html) such as MS SharePoint, MS Teams, Slack, databases or even S3 buckets. Users don't need to know in which source system the information is stored in. They can just ask Q Business, which uses information from all these source systems - which is more powerful.
| jumic |
1,907,240 | First STAG Game Post Mortem | Over the past few months the game studio I'm a part of, STAG has created and now released their first... | 0 | 2024-07-11T00:01:54 | https://dev.to/chigbeef_77/first-stag-game-post-mortem-1e8l | gamedev, programming | Over the past few months the game studio I'm a part of, [STAG](https://saythatagaingames.itch.io/) has created and now released their first game, [Cheese Clout](https://saythatagaingames.itch.io/cheese-clout). I only just released it, but I thought I would already do a post mortem, as I feel I have a lot to reflect on.
## Deadlines
There are 4 people in the studio (give or take), which may sound like plenty to develop a game, but we are all extremely busy people, so between us we aren't able to get much work done.
From what I can gather, I have the most free hours out of anyone, but I've learned that just because I have more time doesn't mean more can be done. I was often in a position where I wanted to add more, but didn't have the assets from the team to add it. That's not to say anything bad, people are simply busy, so it's not always that easy. We originally planned to develop the game in a single month, which we did on purpose: we scoped it so that the deadline was possible.
## Error Handling
Something I'm proud of from making Cheese Clout is the error handling (that I know of). I created a module called "stagerror," which behaves differently depending on how the game is being played. I set a flag called `IS_RELEASE` to control this. If we're not in release mode, we want to catch every single error as it comes, so we use `panic` to get the developer's attention. However, as a user, this is a terrible way to play a game, having it crash all the time. Because of this, in release mode we don't hard error, we just write to a log file in the background and try to continue on as best we can. Cheese Clout has a primitive version of this, and I'm still getting better at using this module, but a great example for its use would be images. If the game is missing one image, I don't want it to crash and be unplayable, instead, it should log out that it's missing an image.
## Level Editor
Right now, I haven't released the level editor for Cheese Clout. I will have that out soon (if I remember), especially since it's a great part of the game to be able to make and play your own levels. The level editor isn't that great. It works, but I wish I had spent more time making it nicer, such as making the text boxes more convenient, and just a bunch of other ease of use features. | chigbeef_77 |
1,907,261 | Why are NoSQL Databases beeter at Horizontal Scaling Compared to SQL Databases | The ability of NoSQL databases to horizontally scale more effectively than SQL databases is rooted in... | 0 | 2024-07-08T08:53:12 | https://dev.to/jacktt/why-are-nosql-databases-beeter-at-horizontal-scaling-compared-to-sql-databases-1hk2 | database | The ability of NoSQL databases to horizontally scale more effectively than SQL databases is rooted in their fundamental design principles and architectures. Here's a detailed explanation:
### 1. **Data Model**
- **NoSQL Databases:** Typically use flexible schema designs, such as key-value pairs, document stores, column-family stores, or graph databases. This flexibility allows for the easy distribution of data across multiple nodes.
- **SQL Databases:** Use a fixed schema with tables, rows, and columns. The rigid structure makes it more challenging to distribute data without compromising the integrity of relationships and transactions.
### 2. **Sharding**
- **NoSQL Databases:** Designed with sharding in mind. Sharding involves splitting data across multiple nodes, allowing for efficient distribution and retrieval. Each shard operates independently, making it easier to add or remove nodes as needed.
- **SQL Databases:** Can also use sharding, but it’s more complex to implement. Maintaining ACID (Atomicity, Consistency, Isolation, Durability) properties across shards requires sophisticated mechanisms, making it less straightforward than in NoSQL databases.
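As a rough, database-agnostic sketch (node names and data are made up), hash-based sharding maps each key to a node like this:

```python
# Minimal sketch of hash-based sharding (illustrative, not any specific database).
# Each key is hashed to pick the node that stores it.
import hashlib

class ShardedStore:
    def __init__(self, nodes):
        self.nodes = nodes                       # e.g. ["node-a", "node-b", "node-c"]
        self.data = {node: {} for node in nodes} # each node holds its own partition

    def _node_for(self, key):
        digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.nodes[digest % len(self.nodes)]

    def put(self, key, value):
        self.data[self._node_for(key)][key] = value

    def get(self, key):
        return self.data[self._node_for(key)].get(key)

store = ShardedStore(["node-a", "node-b", "node-c"])
store.put("user:1", {"name": "Ada"})
print(store.get("user:1"))  # → {'name': 'Ada'}
```

Real systems layer replication and consistent hashing on top so that adding a node doesn't remap most keys, but the routing idea is the same.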
### 3. **Consistency Models**
- **NoSQL Databases:** Often favor eventual consistency over strong consistency. This means they can tolerate temporary inconsistencies across nodes, which simplifies horizontal scaling and allows for higher availability and partition tolerance.
- **SQL Databases:** Typically use strong consistency models. Ensuring that all nodes are always in sync (strong consistency) across distributed environments adds complexity to horizontal scaling.
### 4. **Replication**
- **NoSQL Databases:** Often use masterless architectures or master-slave replication that can handle write operations on multiple nodes, making it easier to scale horizontally.
- **SQL Databases:** Generally rely on master-slave replication where only the master node handles writes, creating a bottleneck. While multi-master setups exist, they are complex and harder to manage.
### 5. **Transaction Management**
- **NoSQL Databases:** Designed to handle simpler, denormalized data models without the need for complex transactions. This reduces the overhead and simplifies distribution across nodes.
- **SQL Databases:** Designed for complex transactions involving multiple tables and rows. Ensuring ACID properties in a distributed environment complicates horizontal scaling.
### 6. **Query Patterns**
- **NoSQL Databases:** Optimized for specific query patterns (e.g., key-value lookups, document retrievals) that can be efficiently distributed across nodes.
- **SQL Databases:** Use a general-purpose query language (SQL) that supports complex joins, aggregations, and nested queries, which can be challenging to distribute and scale horizontally.
### Conclusion
The inherent flexibility, simpler consistency models, and architectural designs of NoSQL databases make them better suited for horizontal scaling. SQL databases, with their rigid schema, complex transaction management, and strong consistency requirements, face more challenges when scaling horizontally. | jacktt |
1,907,400 | Boosting Developer Productivity With Pieces | There is a lot of buzz around AI tools in the development world, but very few tools solve the real... | 0 | 2024-07-07T08:46:26 | https://dev.to/pradumnasaraf/boosting-developer-productivity-with-pieces-5fnj | ai, productivity, devops, tutorial | There is a lot of buzz around AI tools in the development world, but very few tools solve the real issues developers face while being functional and practical products.
One such tool is [Pieces](https://pieces.app/?utm_source=blog&utm_medium=cpc&utm_campaign=pradumna) (Yes, it's called Pieces, which sounds like some puzzle or game, right?). I will talk about it in this blog and share a different perspective on how it helped me from coding to content creation (no, not generating content with the help of AI).
## So, What is Pieces?
[Pieces](https://pieces.app/?utm_source=blog&utm_medium=cpc&utm_campaign=pradumna) is an AI-powered on-device productivity tool that serves as a companion and a code manager. It offers various functionalities such as global search, Live Context, saving code snippets, and more. It helps you be more productive and integrates smoothly with your development environment. Most importantly, it helps overcome the issue of continuous context switching, which we developers don't like.
## What I Use Pieces For
I've been using Pieces for a while for multiple purposes, from saving snippets to summarizing the work I've done throughout my day. I will talk in more detail about each use case going forward.
I am a big fan of the native Pieces Desktop [App for Mac](https://docs.pieces.app/installation-getting-started/macos), and there are also extensions and plugins for [Chrome](https://chromewebstore.google.com/detail/pieces-for-developers-cop/igbgibhbfonhmjlechmeefimncpekepm?pli=1), [VS Code](https://marketplace.visualstudio.com/items?itemName=MeshIntelligentTechnologiesInc.pieces-vscode), and [Obsidian](https://docs.pieces.app/extensions-plugins/obsidian). You can find all the plugins [here](https://pieces.app/plugins). Choose whichever you prefer and are comfortable with. Let's explore these features in more detail.
### Saved Materials
As a DevOps professional, there are always some configurations or code snippets we often use in our day-to-day development workflow. The saving snippets feature is super helpful. Not only does it save snippets like a normal GitHub Gist, but it also automatically adds labels, and supporting articles, and creates shareable links. I can also open a Copilot Chat to ask for improvements or explanations with examples.
My favourite feature is that it summarizes what the snippet does as soon as you paste it with the help of AI. We often forget why we saved a piece of code.

Talking about the content creation I mentioned at the start of the blog, whenever I find some interesting code, I often share code snippets on social media. It's easy to lose track of those snippets or forget to save them. Now, with Pieces, I just add a label with **content** so that it helps me filter the snippets I want to share.

### Live Context
With Live Context, Pieces gains the ability to look at the things I was working on before. It can help you pick up right where you left off, fixing a bug in a project you're developing locally or continuing work on a previous task. It's incredibly handy because you don't need to copy and paste all the files and folders or explain everything; it automatically understands everything.
One of my favourite ways to use it is to summarize the things I worked on recently. For me, this is sometimes really hard to do manually.

### Running LLM On-device
We've all encountered the chaos when ChatGPT is down, right? Local LLMs can be a game-changer in those situations. However, running local LLMs can sometimes be tricky, especially for beginners.
This is where the powerful feature of Pieces comes into play—it lets you choose the Copilot Model and runtime. In simple words, you can choose whether you want answers from cloud LLMs like ChatGPT, Gemini, etc., or use on-device models such as Llama 2, Misral, etc.

This is not the end of it; they have more great functionality like Global Search, Workflow Activity, etc.
## Conclusion
I am impressed by the folks at Pieces for bringing this product together and how simple it is to use. At the same time, it tackles the key areas we developers usually face, providing solutions all under one roof. This is especially valuable since we developers don't like context-switching.
I see myself using [Pieces](https://pieces.app) to tackle my daily tasks and I can't recommend it enough—it's definitely worth trying out.
I'll end this blog by providing some feedback and suggestions on the things I like and areas that can be improved.
### Things That Can Be Improved
- The intro screen can be more simplified so that beginners don't find it overwhelming when they first try it out.
- Syntax highlighting in saved snippets for Dockerfile and other file types can be enhanced.
- Downloading snippets as images would be useful so that when I'm sharing code on social media, I don't need to switch to another tool for this purpose.
### Things I Like
- Privacy through on-device processing.
- How simple it is to use and switch between different functionalities.
- The Live Context feature I mentioned above.
- The ability to switch between different models like ChatGPT, PaLM 2, Gemini, etc. This helps prevent vendor lock-in (I hope that’s the right term).
- Support for LLM locally, so you don't need to worry if ChatGPT is down.
- Last but not least, a shoutout to their team. I've met a lot of founders and tools, but the people at Pieces are really passionate about what they are doing. | pradumnasaraf |
1,907,695 | Building a SaaS app using Django | This quick blog will cover the key libraries and tools you need to build your first SaaS project with... | 0 | 2024-07-08T11:45:58 | https://dev.to/paul_freeman/building-a-saas-using-django-45kp | django, saas, webdev, programming | This quick blog will cover the key libraries and tools you need to build your first SaaS project with Django.
## Build an MVP
When creating a SaaS application, it's essential to focus on building a Minimum Viable Product (MVP). An MVP is the most basic version of your product that still delivers core value to your users. Instead of spending months perfecting every feature, an MVP allows you to launch quickly, gather user feedback, and iterate based on real-world usage.
## Build on existing solutions
Don't create everything from scratch, instead see how you can build on well tested existing solutions, libraries etc. This approach not only saves time but also ensures that your application benefits from well-tested and reliable components.
Start from a boilerplate such as the [Django SaaS boilerplate](https://github.com/PaulleDemon/Django-SAAS-Boilerplate)
## Essential Libraries
When working on a Django SaaS app, using essential libraries and tools can really speed things up. They can handle routine tasks like authentication, user management, payments, and background processes, letting you focus on the unique parts of your application.
### 🪪 User Authentication and Management - Django-allauth
Django comes with a built-in user authentication system, but for more advanced features, you might want to use a third-party package like [django-allauth](https://github.com/pennersr/django-allauth).
It also comes with social authentication such as Google auth, Twitter auth, Github auth etc.
### 💳️ Subscription Management - Stripe
Getting paid is very essential to building a sustainable SAAS application, for this you would be required to use a [Payment gateway](https://templates.foxcraft.tech/blog/b/what-are-payment-gateways) such as Stripe, Paypal etc.
I recommend using the official stripe API and stripe python library.
You can read more about [django stripe payment](https://dev.to/paul_freeman/adding-payment-to-django-app-4cc9)
### 📨 Email - AnyMail
Instead of using the default SMTP mail, it's recommended to create an account with an ESP such as SendGrid or Brevo, so you are not stuck with the default 500 emails per day and can instead send scalable, high-deliverability emails.
You can make use of the [anymail](https://github.com/anymail/django-anymail) library, as it's compatible with Django's existing email backend, and setting it up won't take you more than 10 minutes.
You can read more on how to set this up in my other blog about [adding an ESP to Django](https://dev.to/paul_freeman/adding-esp-to-supercharge-your-django-email-4jkp)
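As a quick sketch, a hypothetical SendGrid configuration via Anymail is only a few settings (the API key below is a placeholder):

```python
# settings.py — Anymail + SendGrid sketch (API key is a placeholder)
INSTALLED_APPS = [
    # ... your existing apps ...
    "anymail",
]

EMAIL_BACKEND = "anymail.backends.sendgrid.EmailBackend"

ANYMAIL = {
    "SENDGRID_API_KEY": "<your-sendgrid-api-key>",
}

DEFAULT_FROM_EMAIL = "hello@example.com"  # regular Django setting, used as the default sender
```

After this, Django's normal `send_mail` and `EmailMessage` APIs route through the ESP with no further code changes.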
### ⚙️ API - DRF / Django ninja
When developing a SaaS application, you might choose to create a REST API and use a frontend framework like React.js or Vue.js. In this case, you have two choices: [Django REST framework](https://github.com/encode/django-rest-framework) or [Django Ninja](https://github.com/vitalik/django-ninja)
Django Ninja is designed to be lightweight and efficient, leveraging type hints for data validation and serialization.
**DRF**: Best for projects requiring a robust, feature-rich API with extensive customization options and a large community support. Suitable for complex applications where flexibility and a broad range of features are necessary.
**Django Ninja**: Ideal for projects where performance is critical, and a simple, minimalistic approach is preferred. Suitable for developers looking for a modern API framework with easy integration of type hints and async support.
### ☁️ Storage - Django Storage
When it comes to deploying to production and using cloud storage such as an AWS S3 bucket or Google Cloud Storage, you will want to use a helper such as [django-storages](https://github.com/jschneier/django-storages?tab=readme-ov-file)
### 🔄 Background tasks / schedule tasks - Celery
Background tasks, like sending emails, are best handled asynchronously to avoid blocking the main application processes and to improve the overall performance and responsiveness of your application. A popular tool for managing background tasks in Django is [Celery](https://github.com/celery/django-celery/).
For scheduled tasks, you can use [Django celery beat](https://github.com/celery/django-celery-beat) to manage periodic tasks, ensuring they run at specified intervals.
### 🔔 Websockets - Django Channels
If you are developing live in-app notifications, live chat, etc., you should consider using [WebSockets](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API). In Django, [Django Channels](https://github.com/django/channels) is used for implementing WebSockets, allowing you to handle real-time communications within your application.
That's all! I hope you found this quick blog helpful. If you know of any other essential library, comment down below! | paul_freeman |
1,907,731 | What’s New in Blazor Diagram: 2024 Volume 2 | TL;DR: Let’s explore the new updates in Syncfusion’s Blazor Diagram component, including rulers,... | 0 | 2024-07-08T15:42:35 | https://www.syncfusion.com/blogs/post/whats-new-blazor-diagram-2024-vol-2 | blazor, development, web, ui | ---
title: What’s New in Blazor Diagram: 2024 Volume 2
published: true
date: 2024-07-01 10:11:00 UTC
tags: blazor, development, web, ui
canonical_url: https://www.syncfusion.com/blogs/post/whats-new-blazor-diagram-2024-vol-2
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8a0n3ldohtv15oyry03l.png
---
**TL;DR:** Let’s explore the new updates in Syncfusion’s Blazor Diagram component, including rulers, symbol search, and more. These updates enhance diagram creation and editing, making it more interactive and precise.
Syncfusion [Blazor Diagram](https://www.syncfusion.com/blazor-components/blazor-diagram "Blazor Diagram") component has made significant strides in its development, providing an advanced toolkit for visualizing, creating, and editing interactive diagrams. Whether you’re working on flowcharts, organizational charts, mind maps, floor plans, or BPMN charts, this comprehensive library equips you with the tools to programmatically and interactively bring your ideas to life.
This blog will explore the latest enhancements and features introduced in the [2024 Volume 2](https://www.syncfusion.com/forums/188642/essential-studio-2024-volume-2-main-release-v26-1-35-is-available-for-download "Essential Studio 2024 Volume 2") release for the Blazor Diagram component. Discover how these new capabilities can elevate your diagramming experience and streamline your workflow.
Let’s get started!
## Ruler
The new ruler feature in the Blazor Diagram component provides horizontal and vertical guides for precise measurement, ensuring accuracy when placing, sizing, and aligning shapes and objects within a diagram.
### Adding rulers to the Blazor Diagram component
The [RulerSettings](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.SfDiagramComponent.html#Syncfusion_Blazor_Diagram_SfDiagramComponent_RulerSettings "RulerSettings property of Blazor Diagram component") property of the [SfDiagramComponent](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.SfDiagramComponent.html "SfDiagramComponent class of Blazor Diagram component") controls the ruler’s visibility and appearance in the diagram. The [HorizontalRuler](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.RulerSettings.html#Syncfusion_Blazor_Diagram_RulerSettings_HorizontalRuler "HorizontalRuler property of Blazor Diagram component") and [VerticalRuler](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.RulerSettings.html#Syncfusion_Blazor_Diagram_RulerSettings_VerticalRuler "VerticalRuler property of Blazor Diagram component") properties define and customize the horizontal and vertical rulers, respectively, in the diagram canvas.
The following code demonstrates how to add a ruler to the diagram.
```xml
@using Syncfusion.Blazor.Diagram
<SfDiagramComponent Height="600px">
<RulerSettings>
<HorizontalRuler />
<VerticalRuler />
</RulerSettings>
</SfDiagramComponent>
```
### Customizing the Ruler
The [HorizontalRuler](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.RulerSettings.html#Syncfusion_Blazor_Diagram_RulerSettings_HorizontalRuler "HorizontalRuler property of Blazor Diagram component") and [VerticalRuler](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.RulerSettings.html#Syncfusion_Blazor_Diagram_RulerSettings_VerticalRuler "VerticalRuler property of Blazor Diagram component") properties of the [RulerSettings](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.SfDiagramComponent.html#Syncfusion_Blazor_Diagram_SfDiagramComponent_RulerSettings "RulerSettings property of Blazor Diagram component") class are used to modify the appearance of the rulers in the diagram. The following properties are used to customize the appearance of both the horizontal and vertical rulers.
- [Interval](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.DiagramRuler.html#Syncfusion_Blazor_Diagram_DiagramRuler_Interval "Interval property of Blazor Diagram component") – To specify the number of intervals present on each segment of the horizontal and vertical rulers.
- [IsVisible](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.DiagramRuler.html#Syncfusion_Blazor_Diagram_DiagramRuler_IsVisible "IsVisible property of Blazor Diagram component") – Determines whether the horizontal and vertical rulers are displayed in the diagram. Depending on your requirements, this can be useful for toggling rulers on and off.
- [TickAlignment](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.DiagramRuler.html#Syncfusion_Blazor_Diagram_DiagramRuler_TickAlignment "TickAlignment property of Blazor Diagram component") – Controls the placement of the tick marks (also called hash marks) on the ruler. You can typically choose to have them positioned on the left or right for the vertical ruler and on the top or bottom for the horizontal ruler.
- [MarkerColor](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.DiagramRuler.html#Syncfusion_Blazor_Diagram_DiagramRuler_MarkerColor "MarkerColor property of Blazor Diagram component") – Defines the color of the marker line, also known as the cursor guide. This line appears in the diagram and aligns with the ruler, visually indicating the cursor’s current position.
The following code example illustrates how to customize the ruler in the Blazor Diagram component.
```xml
@using Syncfusion.Blazor.Diagram;
<SfDiagramComponent Height="600px" Nodes="@nodes">
<RulerSettings>
<HorizontalRuler IsVisible="true" Interval="10" TickAlignment="TickAlignment.LeftAndTop" MarkerColor="green" />
<VerticalRuler IsVisible="true" Interval="10" TickAlignment="TickAlignment.RightAndBottom" MarkerColor="red" />
</RulerSettings>
</SfDiagramComponent>
@code{
public DiagramObjectCollection<Node> nodes = new DiagramObjectCollection<Node>();
protected override void OnInitialized()
{
Node node = new Node()
{
OffsetX = 200,
OffsetY = 200,
Width = 100,
Height = 100,
Annotations = new DiagramObjectCollection<ShapeAnnotation>()
{
new ShapeAnnotation()
{
Content = "Node",
Style = new TextStyle()
{
Color = "white",
}
}
},
Style = new ShapeStyle() { Fill = "#6495ED", StrokeColor = "white" }
};
nodes.Add(node);
}
}
```
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Customizing-the-ruler-in-the-Blazor-Diagram-component.gif" alt="Customizing the ruler in the Blazor Diagram component" style="width:100%">
<figcaption>Customizing the ruler in the Blazor Diagram component</figcaption>
</figure>
**Note:** For more details, refer to the ruler in Blazor Diagram component [web demos](https://blazor.syncfusion.com/demos/diagramcomponent/rulers?theme=fluent "Example of rulers in Blazor Diagram component"), [GitHub demos](https://github.com/SyncfusionExamples/Blazor-Diagram-Examples/tree/master/UG-Samples/Ruler/CustomizingRuler "Ruler in Blazor Diagram component GitHub demos"), and [documentation](https://blazor.syncfusion.com/documentation/diagram/rulers "Ruler Settings in Blazor Diagram component documentation").
## Symbol search option in the symbol palette
The symbol palette now allows users to search for and retrieve symbols. Users can find matching symbols by entering a symbol’s ID or search keywords into a text box and then clicking the **Search** button or pressing the **Enter** key. The search results are based on matching the [ID](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.NodeBase.html#Syncfusion_Blazor_Diagram_NodeBase_ID "ID property of Blazor Diagram component") or [SearchTags](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.NodeBase.html#Syncfusion_Blazor_Diagram_NodeBase_SearchTags "SearchTags property of Blazor Diagram component") property with the entered search string. The [ShowSearchTextBox](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.SymbolPalette.SfSymbolPaletteComponent.html#Syncfusion_Blazor_Diagram_SymbolPalette_SfSymbolPaletteComponent_ShowSearchTextBox "ShowSearchTextBox property of Blazor Diagram component") property of the palette controls the visibility of the search text box.
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Symbol-search-option-in-Blazor-Diagram-components-symbol-palette.gif" alt="Symbol search option in Blazor Diagram component’s symbol palette" style="width:100%">
<figcaption>Symbol search option in Blazor Diagram component’s symbol palette</figcaption>
</figure>
**Note:** For more details, refer to the symbols search feature in the Blazor Diagram component’s [web demos](https://blazor.syncfusion.com/demos/diagramcomponent/symbolpalette?theme=fluent "Example of symbol palette in Blazor Diagram component"), [GitHub demos](https://github.com/SyncfusionExamples/Blazor-Diagram-Examples/tree/master/UG-Samples/SymbolPalette/SearchOption "Symbols search feature in the Blazor Diagram component’ GitHub demo"), and [documentation](https://blazor.syncfusion.com/documentation/diagram/symbol-palette/symbol-palette#how-to-enable-symbol-search-option-in-symbol-palette "How to enable symbol search option in symbol palette").
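Based on the `ShowSearchTextBox` and `SearchTags` properties referenced above, enabling the search box might look like the following sketch. The palette contents here are illustrative placeholders modeled on Syncfusion's other symbol-palette samples, not code from this release.

```xml
@using Syncfusion.Blazor.Diagram
@using Syncfusion.Blazor.Diagram.SymbolPalette
<SfSymbolPaletteComponent Height="600px" SymbolHeight="60" SymbolWidth="60"
ShowSearchTextBox="true" Palettes="@palettes">
</SfSymbolPaletteComponent>
@code {
DiagramObjectCollection<Palette> palettes = new DiagramObjectCollection<Palette>();
protected override void OnInitialized()
{
DiagramObjectCollection<NodeBase> shapes = new DiagramObjectCollection<NodeBase>()
{
// SearchTags lets this symbol match queries such as "square" or "box",
// in addition to matching its ID.
new Node()
{
ID = "Rectangle",
SearchTags = new List<string>() { "square", "box" },
Shape = new BasicShape() { Type = NodeShapes.Basic, Shape = NodeBasicShapes.Rectangle }
}
};
palettes = new DiagramObjectCollection<Palette>()
{
new Palette() { ID = "basic", Title = "Basic Shapes", Symbols = shapes }
};
}
}
```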
## Chunk message support for diagram and symbol palette
In the Blazor Diagram component, it’s essential to calculate the bounds of paths, text, images, and SVG data on the JavaScript side using **JsInterop** calls. Connection failure issues can arise when processing large data sets (exceeding 32KB for a single incoming hub message) in a single JS call.
To resolve this, we have introduced the [EnableChunkMessages](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.SfDiagramComponent.html#Syncfusion_Blazor_Diagram_SfDiagramComponent_EnableChunkMessages "EnableChunkMessages property of Blazor Diagram component") property in the Diagram and symbol palette. This property allows large data to be sent in smaller chunks, thereby preventing connection failure issues. Chunk messages facilitate the measurement of paths, images, text, and SVG data without exceeding the maximum size limit for a single incoming hub message (MaximumReceiveMessageSize of 32KB). By default, the EnableChunkMessages property is set to **false.**
**Note:** For more details, refer to the chunk message support in the Blazor Diagram [documentation](https://blazor.syncfusion.com/documentation/diagram/how-to#how-to-enable-the-chunk-message "How to enable the chunk message in the Blazor Diagram component?").
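Enabling the property is a one-line change on the component; a minimal sketch (the diagram's nodes and connectors are omitted):

```xml
@using Syncfusion.Blazor.Diagram
@* Sends large path, text, image, and SVG measurement data to JavaScript
in smaller chunks, keeping each hub message under the 32KB limit. *@
<SfDiagramComponent Height="600px" EnableChunkMessages="true" />
```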
## Connector splitting
When a new node is dropped onto an existing connector, the user can split the connector between two nodes. This action creates new connections between the dropped node and the existing nodes. To enable this feature, set the [EnableConnectorSplitting](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.SfDiagramComponent.html#Syncfusion_Blazor_Diagram_SfDiagramComponent_EnableConnectorSplitting "EnableConnectorSplitting property of Blazor Diagram component") property of the [SfDiagramComponent](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.SfDiagramComponent.html "SfDiagramComponent class of Blazor Diagram component") to **true**. The [AllowDrop](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Diagram.ConnectorConstraints.html#Syncfusion_Blazor_Diagram_ConnectorConstraints_AllowDrop "AllowDrop property of Blazor Diagram component") constraints must also be enabled on the connector to allow node dropping.
The following code example illustrates how to enable the connector splitting feature in the Blazor Diagram component.
```csharp
@using Syncfusion.Blazor.Diagram
<SfDiagramComponent @ref="Diagram" Width="1000px" Height="500px" Nodes="@nodes" Connectors="@connectors" EnableConnectorSplitting="true" />
@code {
//Reference the diagram.
SfDiagramComponent Diagram;
// Initialize the diagram's connector collection.
DiagramObjectCollection<Connector> connectors = new DiagramObjectCollection<Connector>();
// Initialize the diagram's node collection.
DiagramObjectCollection<Node> nodes = new DiagramObjectCollection<Node>();
protected override void OnInitialized()
{
nodes = new DiagramObjectCollection<Node>()
{
// IDs are set explicitly because the connector below references "node1" and "node2".
new Node()
{
ID = "node1",
OffsetX = 100, OffsetY = 100,
Height = 50, Width = 100,
Style = new ShapeStyle(){ Fill = "#6495ED", StrokeColor = "#6495ED",},
Shape = new BasicShape() { Type = NodeShapes.Basic, Shape = NodeBasicShapes.Rectangle }
},
new Node()
{
ID = "node2",
OffsetX = 300, OffsetY = 300,
Height = 50, Width = 100,
Style = new ShapeStyle(){ Fill = "#6495ED", StrokeColor = "#6495ED",},
Shape = new BasicShape() { Type = NodeShapes.Basic, Shape = NodeBasicShapes.Rectangle }
},
new Node()
{
ID = "node3",
OffsetX = 300, OffsetY = 100,
Height = 50, Width = 100,
Style = new ShapeStyle(){ Fill = "#6495ED", StrokeColor = "#6495ED",},
Shape = new BasicShape() { Type = NodeShapes.Basic, Shape = NodeBasicShapes.Rectangle }
}
};
Connector Connector = new Connector()
{
ID = "connector1",
//Source node ID of the connector.
SourceID = "node1",
TargetDecorator = new DecoratorSettings()
{
Style = new ShapeStyle()
{
Fill = "#6495ED",
StrokeColor = "#6495ED",
}
},
//Target node ID of the connector.
TargetID = "node2",
Style = new ShapeStyle()
{
Fill = "#6495ED",
StrokeColor = "#6495ED",
},
// Type of the connector.
Type = ConnectorSegmentType.Straight,
Constraints = ConnectorConstraints.Default | ConnectorConstraints.AllowDrop,
};
connectors.Add(Connector);
}
}
```
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Connector-splitting-support-in-the-Blazor-Diagram-component.gif" alt="Connector splitting support in the Blazor Diagram component" style="width:100%">
<figcaption>Connector splitting support in the Blazor Diagram component</figcaption>
</figure>
**Note:** For more details, refer to the connector splitting in the Blazor Diagram component [GitHub demos](https://github.com/SyncfusionExamples/Blazor-Diagram-Examples/tree/master/UG-Samples/Interaction "Connector splitting feature in the Blazor Diagram component GitHub demo") and [documentation](https://blazor.syncfusion.com/documentation/diagram/connectors/customization#how-to-enable-connector-split "How to enable connector split feature in Blazor Diagram?").
## Conclusion
Thanks for reading! In this blog, we’ve seen the exciting new features added to the Syncfusion [Blazor Diagram](https://www.syncfusion.com/blazor-components/blazor-diagram "Blazor Diagram") component for the [2024 Volume 2](https://www.syncfusion.com/forums/188642/essential-studio-2024-volume-2-main-release-v26-1-35-is-available-for-download "Essential Studio 2024 Volume 2") release. Use them to create and visualize interactive diagrams, and leave your feedback in the comments section below!
Check out our [Release Notes](https://blazor.syncfusion.com/documentation/release-notes/26.1.35?type=all "Essential Studio Release Notes") and [What’s New](https://www.syncfusion.com/products/whatsnew "Essential Studio What’s New page") pages for other 2024 Volume 2 release updates.
The new version of Essential Studio is available for existing customers on the [License and Downloads](https://www.syncfusion.com/account/downloads "Essential Studio License and Downloads page") page. If you are not a Syncfusion customer, try our 30-day [free trial](https://www.syncfusion.com/downloads "Get free evaluation of the Essential Studio products") to check out our available features.
You can also contact us through our [support forum](https://www.syncfusion.com/forums "Syncfusion Support Forum"), [support portal](https://support.syncfusion.com/ "Syncfusion Support Portal"), or [feedback portal](https://www.syncfusion.com/feedback/ "Syncfusion Feedback Portal"). We are always happy to assist you!
## Related blogs
- [Introducing the New Blazor TextArea Component](https://www.syncfusion.com/blogs/post/new-blazor-textarea-component "Blog: Introducing the New Blazor TextArea Component")
- [What’s New in Blazor: 2024 Volume 2](https://www.syncfusion.com/blogs/post/whats-new-blazor-2024-volume-2 "Blog: What’s New in Blazor: 2024 Volume 2")
- [Introducing the New Blazor 3D Charts Component](https://www.syncfusion.com/blogs/post/blazor-3d-charts-component "Blog: Introducing the New Blazor 3D Charts Component")
- [Syncfusion Essential Studio 2024 Volume 2 Is Here!](https://www.syncfusion.com/blogs/post/syncfusion-essential-studio-2024-vol2 "Blog: Syncfusion Essential Studio 2024 Volume 2 Is Here!") | jollenmoyani |
1,907,756 | Cookies, Code, and AI - A Sweet Journey into Backpropagation | Discover how backpropagation empowers AI systems to learn and improve by drawing parallels with baking cookies. Understand its role in making neural networks more intelligent, efficient, and capable of tackling complex tasks. Explore key insights from Artem Kirsanov's video and learn practical applications of this crucial algorithm to enhance decision-making, automation, and personalization in business. Embrace the power of backpropagation and stay ahead in the AI-driven world. | 0 | 2024-07-11T13:56:01 | https://dev.to/dev3l/cookies-code-and-ai-a-sweet-journey-into-backpropagation-df5 | ai, softwaredevelopment, learningculture, generativeai | ---
title: Cookies, Code, and AI - A Sweet Journey into Backpropagation
published: true
description: Discover how backpropagation empowers AI systems to learn and improve by drawing parallels with baking cookies. Understand its role in making neural networks more intelligent, efficient, and capable of tackling complex tasks. Explore key insights from Artem Kirsanov's video and learn practical applications of this crucial algorithm to enhance decision-making, automation, and personalization in business. Embrace the power of backpropagation and stay ahead in the AI-driven world.
tags: AI, SoftwareDevelopment, LearningCulture, GenerativeAI
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hl63rzysxsntpz2i5ml6.png
---
[Originally posted on Dev3loper.ai](https://www.dev3loper.ai/insights/cookies-code-and-ai-a-sweet-journey-into-backpropagation)
Have you ever wondered how computers learn from their mistakes, just like we do when baking cookies? Imagine you've whipped up a batch of too-salty or sweet cookies. You wouldn't give up, right? Instead, you'd tweak your recipe—maybe add more sugar or a pinch less salt until they are perfect. This process of tasting, adjusting, and tasting again is like how computers learn using an incredible **backpropagation** technique.
So, what exactly is backpropagation? Simply put, it's a way for computers to learn from their mistakes. When a computer makes a guess and finds out it's wrong, backpropagation helps it figure out how to improve its guess next time. It's like having a magical cookbook that enables you to adjust your recipe based on how your cookies taste.
To help understand this better, think about the first time you baked cookies. You followed a recipe, but they were too salty when you took a bite. You realize you added too much salt and not enough sugar. The next time, you might reduce the salt by half a teaspoon and increase the sugar by a tablespoon. After another batch, you taste and adjust again. Maybe the cookies are better this time, but they are still imperfect, so more tweaking is needed. Each time you bake and taste, you learn from the previous batch and improve.
Backpropagation in neural networks works similarly. When the network makes a prediction, and it's wrong, it calculates the error—the difference between its prediction and the actual result. This is like tasting your cookies and finding them too salty. The network then adjusts its internal settings (called weights and biases) to reduce this error. Think of these settings as the ingredients in your recipe. A big adjustment is needed if the error is significant, like drastically decreasing salt. If the error is small, a tiny tweak is made, similar to adding just a pinch more sugar.
But why is backpropagation such a big deal? Well, it's the cornerstone of almost every modern machine-learning system. From your voice assistant understanding your commands to self-driving cars recognizing obstacles on the road, backpropagation is critical. It helps machines get more intelligent with each mistake they make, learning faster and performing better over time.
If you think about it, backpropagation is very much like baking cookies. You might start with a basic recipe when you bake, but you'll likely make adjustments depending on how the cookies turn out. Maybe more chocolate chips, less sugar, or an extra minute in the oven. Similarly, backpropagation allows computers to improve their 'recipes' by tweaking their processes based on feedback. Essentially, the computer is continuously learning and adjusting, just as you do with each batch of cookies.
Let's explore this concept using our baking analogy and see how it forms the backbone of many robust AI systems today.
## Summary of the Video by Artem Kirsanov
{% embed https://www.youtube.com/watch?v=SmZmBKc7Lrs %}
In a fascinating video by computational neuroscience student and researcher Artem Kirsanov, we dive deeper into the concept of backpropagation and its pivotal role in machine learning. Artem begins by highlighting that despite the varied applications and architectures within machine learning—from language models like GPT to image generators like Midjourney—all these systems share a standard underlying algorithm: backpropagation.
Artem delves into the historical development of backpropagation, tracing its roots back to the 17th century. However, the modern form of the algorithm was notably formulated by Seppo Linnainmaa in 1970 in his master's thesis. This foundational work was further elaborated in 1986 when David Rumelhart, Geoffrey Hinton, and Ronald Williams demonstrated that backpropagation could efficiently train multi-layered neural networks, which were then able to solve complex problems and recognize intricate patterns.
To explain backpropagation, Artem uses the example of a curve-fitting problem. Imagine you have collected a set of points on a graph and aim to draw a smooth curve that best fits these points. This curve is determined by coefficients, much like the ingredients in a cookie recipe. You desire to find the best combination of these coefficients to minimize the error or loss between the curve and the actual data points.
Artem then breaks down how backpropagation helps in this task. The algorithm calculates the gradient, which indicates the direction and rate at which each coefficient should be adjusted. This process is akin to tasting your cookies and adding more sugar or decreasing the salt to improve the taste. The algorithm iteratively adjusts the coefficients to reduce the loss, leading to a more accurate fit.
A crucial part of the video is explaining the chain rule, a fundamental principle in calculus that allows us to calculate the gradients of complex functions. Just as a recipe combines different ingredients in various steps, complex machine learning models combine various mathematical operations. The chain rule helps us determine how small changes in each step affect the final result, enabling precise adjustments.
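To make the curve-fitting and chain-rule descriptions concrete (this example is mine, not taken from the video), here is a minimal gradient-descent sketch that fits a quadratic to noisy points. The gradient of the squared-error loss with respect to each coefficient is obtained via the chain rule, and each iteration nudges the "ingredients" against their gradients:

```python
import numpy as np

# Data: noisy samples of y = 2x^2 - 3x + 1.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 2 * x**2 - 3 * x + 1 + rng.normal(0, 0.05, x.size)

# Model: y_hat = a*x^2 + b*x + c, with all coefficients starting at zero.
a, b, c = 0.0, 0.0, 0.0
lr = 0.1  # learning rate

for _ in range(2000):
    y_hat = a * x**2 + b * x + c
    error = y_hat - y  # "taste" this batch against the data
    # Chain rule: dL/da = dL/dy_hat * dy_hat/da, and similarly for b and c.
    grad_a = 2 * np.mean(error * x**2)
    grad_b = 2 * np.mean(error * x)
    grad_c = 2 * np.mean(error)
    # Adjust each coefficient in the direction that reduces the loss.
    a -= lr * grad_a
    b -= lr * grad_b
    c -= lr * grad_c

print(a, b, c)  # converges near the true coefficients 2, -3, 1
```

The same recipe scales up: in a neural network, the chain rule is applied through every layer of the computational graph instead of through a single polynomial.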
Artem encapsulates these ideas within the concept of computational graphs. These graphs map out the sequence of calculations in a neural network, allowing backpropagation to compute the necessary adjustments efficiently. Even in large neural networks with millions of parameters, this method remains effective and scalable, making it instrumental in the rise of robust AI systems today.
By the end of the video, Artem beautifully ties these concepts back to their practical implications. Understanding backpropagation allows us to appreciate the elegance and efficiency of modern AI systems, highlighting why this algorithm is fundamental to their success. From recognizing images and translating text to generating creative content, backpropagation is the unsung hero behind many technological advancements.
### Recap:

- **Common Algorithm**: Backpropagation underpins diverse machine learning systems despite different applications.
- **Historical Roots**: Modern backpropagation formulation by Seppo Linnainmaa in 1970, further refined by Rumelhart, Hinton, and Williams in 1986.
- **Curve-Fitting Example**: Explains backpropagation through the analogy of fitting a curve to data points by adjusting coefficients.
- **Gradient Calculation**: The algorithm calculates gradients to determine how to adjust parameters for error minimization.
- **Chain Rule**: Fundamental in computing gradients of complex functions, enabling precise adjustments in neural networks.
- **Computational Graphs**: Visualize the sequence of calculations, allowing efficient adjustments even in large networks.
- **Practical Implications**: Highlights the critical role of backpropagation in the success of modern AI, from image recognition to text translation.
Artem's video serves as an enlightening deep dive into the workings of backpropagation, making complex mathematical concepts accessible and engaging. It underscores the importance of this algorithm in making AI more competent and shaping the future of technology.
## Personal Insights and Practical Applications

Understanding backpropagation isn't just about grasping a nifty algorithm—it's about recognizing its transformative impact on artificial intelligence and how we can leverage it in various applications. As an engineer and developer, I see several key benefits and opportunities for businesses harnessing this technology.
Firstly, backpropagation enhances neural networks' learning capabilities. Imagine trying to teach a child to recognize different animals. Initially, their guesses might be far off. But with each correct and incorrect guess, they get better, learning to distinguish a cat from a dog more accurately. Similarly, backpropagation allows AI systems to refine their understanding and improve their performance over time. This continual learning process makes technologies like facial recognition, voice assistants, and language translators effective.
Efficiency and scalability are other significant advantages. The beauty of backpropagation lies in its ability to handle large datasets and complex models efficiently. For instance, in healthcare, AI models trained via backpropagation can process vast amounts of patient data to predict disease outbreaks or recommend personalized treatment plans. These models become more accurate and reliable as they learn from more data, making them invaluable tools for critical decision-making.
From a business perspective, the applications of backpropagation in AI are vast and varied, offering numerous advantages. Improved decision-making capabilities are a standout benefit. Companies can leverage machine learning models to analyze vast datasets, uncover trends, and make data-driven decisions. For example, AI models in finance can predict market shifts, assess risks, and optimize investment strategies, enhancing profitability and reducing uncertainties.
Another practical application is automating complex tasks. Consider the e-commerce industry, where AI can handle customer service inquiries, manage inventory, and personalize shopping experiences. With backpropagation, these AI systems learn from each interaction, becoming more efficient and effective. This improves customer satisfaction and frees up human resources for more strategic tasks.
Personalization and enhancing customer experience is another area where backpropagation shines. Businesses can use AI to tailor recommendations based on user preferences and behavior. Think of streaming services suggesting movies or e-commerce sites recommending products. Thanks to backpropagation, these personalized experiences are powered by machine learning models that continuously learn and adapt.
Leveraging backpropagation can significantly enhance applications' capabilities in software development. We can create more thoughtful, intuitive software by incorporating machine learning models that learn and adapt. For example, developing a recommendation engine for an app or crafting an intelligent chatbot that provides accurate and relevant responses becomes feasible with backpropagation. Enhancing user experience and driving engagement through intelligent features can be a game-changer in the competitive tech landscape.
Understanding and applying backpropagation ultimately allows us to unlock AI's full potential. It equips us to build solutions that are not only smarter but also more efficient and adaptable. As businesses evolve in the digital age, integrating AI powered by backpropagation can provide a significant competitive edge, paving the way for innovation and growth.
## Conclusion

Understanding backpropagation is like having the secret ingredient that makes AI systems truly intelligent. Just as baking perfect cookies involves continuous testing and adjustment, backpropagation enables neural networks to learn from their errors and improve steadily over time. Through this process, AI becomes more reliable, efficient, and capable of solving increasingly complex problems.
Artem Kirsanov's exploration of backpropagation provides deep insights into the algorithm's fundamental workings, making complex concepts accessible and engaging. His explanation underscores the pivotal role of backpropagation in various AI advancements, from image recognition and natural language processing to more personalized and efficient business applications.
For those developing and implementing AI solutions, grasping how backpropagation works offers a significant advantage. It allows us to build systems that can learn and adapt, enhancing performance and delivering better results. This is crucial in today's competitive landscape, where technology and efficiency drive success.
Backpropagation is more than just an algorithm; it's the engine that powers modern AI. AI systems can achieve remarkable feats by learning from mistakes and continuously improving. Whether it's providing more accurate medical diagnoses, enhancing customer experiences, or optimizing business operations, the applications of backpropagation are as extensive as they are transformative.
---
I encourage you to watch Artem Kirsanov's video to learn more about this fascinating topic and better understand the mechanics behind backpropagation. As AI evolves, staying informed and understanding these foundational algorithms will be critical to leveraging their full potential.
Let's embrace the power of backpropagation and continue innovating, creating more innovative, efficient solutions that enhance our lives and businesses. Feel free to share your thoughts, experiences, or questions about backpropagation and its applications. Let's keep the conversation going and explore the endless possibilities of AI together! | dev3l |
1,907,992 | How to implement Micro Frontend on Salesforce Experience Cloud | Introduction Many companies face challenges related to scalability and code maintenance.... | 0 | 2024-07-10T17:02:04 | https://dev.to/arthurkellermann/how-to-implement-micro-frontend-in-salesforce-experience-cloud-59n3 | salesforce, microfrontend, frontend, ui | ## Introduction
Many companies face challenges related to scalability and code maintenance. Balancing the maintenance of active clients and the implementation of system updates without delays or interruptions highlights the need for proper software planning and the selection of a suitable architecture for product development.
In this context, the adoption of approaches such as the implementation of Micro Frontends has become common. This strategy allows companies to divide an application into smaller, independent components, providing greater flexibility in maintaining and evolving the system, as well as improving the end-user experience.
## What is Micro Frontend (MFE)?
MFEs, or Micro Frontends, are an architectural design approach that divides a frontend application into two or more independent parts. Each part is responsible for a specific functionality and can be developed, tested, and deployed independently. It is similar to adopting a Microservices architecture in a backend application, where an application is divided into small, independent services, each performing a specific function and communicating with each other via APIs.

This approach allows multiple teams to work simultaneously on different parts of an application, increasing efficiency and scalability. The final result is a single application that combines all the parts at runtime.
When consuming Micro Frontends (MFEs), it is generally preferable to use the latest available version, as this ensures access to the latest features, performance improvements, and bug fixes. However, in some cases, it may be necessary to force the use of an older version for stability or compatibility reasons with other systems.
## Web Components
First, to understand how MFEs would work in the Salesforce environment, it is important to highlight the concept of Web Components in the frontend universe.
Modern Web Development increasingly uses methods and approaches that prioritize the encapsulation of user interface functionalities, offering an effective way to create interface components that can be reused in different projects and contexts.
> With Web Components, the developer's work is simplified to merely showing the browser when and where the component will be used.
Thus, it is very common to see Web Components used in a Micro Frontend architecture. Within the Salesforce ecosystem, however, this becomes even easier.
## How to implement MFEs in Salesforce Experience Cloud
Salesforce Experience Cloud is a cloud-based platform that enables companies to create connected, personalized, and scalable experiences for their customers and employees.
Once you break a monolith into smaller parts, it is necessary to plan ahead and choose the best possible approach for the frontend architecture. For implementing MFEs with Salesforce Experience Cloud, the most common and widely used approach is the Shell Application, also known as the Shell Architecture.
### Shell Architecture
In a microfrontends implementation, the Shell Architecture is the main application layer that hosts and manages the microfrontends. Thus, the Shell is responsible for controlling the navigation between different microfrontends, managing their dynamic loading, and facilitating communication between modules through events or messages.
The first step is to identify the different functionalities of the system that can be separated into modules. Each module should have a well-defined interface for communication with other modules. This can include events, APIs, or messaging services.
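Outside of Salesforce, the communication layer that a shell provides can be sketched as a tiny publish/subscribe event bus. The sketch below is purely illustrative (the class and channel names are hypothetical); in this article's setup, the Lightning Message Service plays this role.

```typescript
// Minimal event bus a shell application could expose to its micro frontends.
// Names are illustrative; Salesforce fills this role with the
// Lightning Message Service, shown later in this article.
type Handler<T> = (payload: T) => void;

class EventBus {
  private handlers = new Map<string, Handler<unknown>[]>();

  subscribe<T>(channel: string, handler: Handler<T>): void {
    const list = this.handlers.get(channel) ?? [];
    list.push(handler as Handler<unknown>);
    this.handlers.set(channel, list);
  }

  publish<T>(channel: string, payload: T): void {
    for (const handler of this.handlers.get(channel) ?? []) {
      handler(payload);
    }
  }
}

// One module publishes a lead; another, independent module reacts to it.
const bus = new EventBus();
bus.subscribe<{ name: string }>('leadChannel', (lead) =>
  console.log(`Contact module received lead: ${lead.name}`)
);
bus.publish('leadChannel', { name: 'John Doe' });
```

Because the modules only share the channel name and the message shape, each one can be developed and deployed independently, which is the essence of the approach.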

### Salesforce Experience Cloud Shell Application

## Development with Lightning Web Components (LWC)
Lightning Web Components is a web component development framework introduced by Salesforce that enables the creation of dynamic and efficient user interfaces on the Salesforce platform. It is based on modern web standards such as HTML, CSS, and JavaScript, and leverages the native functionality of browsers, resulting in better performance and lower resource consumption.
Each component should be developed with its own logic and user interface. For communication and integration between them, you can use Salesforce's messaging service, Lightning Message Service. This service allows LWC components to communicate efficiently, regardless of whether they are in the same component tree or not. It is particularly useful in complex architectures like Micro Frontends, where communication between independent modules is essential.
### Lightning Message Service (LMS)
- Publish and Subscribe to Messages: Allows one component to publish a message and another component (or components) to subscribe to that message to receive data or notifications.
- Message Context: Provides the necessary context to send or receive messages.
- Custom Message Channels: Define the types of messages that will be exchanged between components.
**To illustrate, here is a small example of implementation using LWC, Lightning Message Service and Salesforce Experience Cloud.**
The first step is to identify the different functionalities of the system that can be separated into modules. For example, in a CRM, you can have modules for lead management, contact management, and reporting.
**Message Channel**
First, we create the Lead Channel, which is a messaging channel of the Lightning Message Service that you define in your Salesforce project using an XML file.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<MessageChannel xmlns="http://soap.sforce.com/2006/04/metadata">
    <description>Lead Channel Message</description>
    <isExposed>true</isExposed>
    <masterLabel>Lead Channel</masterLabel>
</MessageChannel>
```
**Lead Module**
After that, we create the leadModule component, which publishes messages about new leads using the Lightning Message Service (LMS).
```html
<template>
    <lightning-card title="Lead Module">
        <div class="slds-m-around_medium">
            <lightning-button label="Create Lead" onclick={handleCreateLead}></lightning-button>
        </div>
    </lightning-card>
</template>
```
```javascript
import { LightningElement, wire } from 'lwc';
import { publish, MessageContext } from 'lightning/messageService';
import LEAD_CHANNEL from '@salesforce/messageChannel/LeadChannel__c';

export default class LeadModule extends LightningElement {
    @wire(MessageContext)
    messageContext;

    handleCreateLead() {
        const lead = { name: 'John Doe', company: 'Company Name' };
        publish(this.messageContext, LEAD_CHANNEL, lead);
    }
}
```
```xml
<?xml version="1.0" encoding="UTF-8"?>
<LightningComponentBundle xmlns="http://soap.sforce.com/2006/04/metadata">
    <apiVersion>59.0</apiVersion>
    <isExposed>true</isExposed>
    <targets>
        <target>lightning__AppPage</target>
        <target>lightning__RecordPage</target>
        <target>lightning__HomePage</target>
        <target>lightningCommunity__Default</target>
        <target>lightningCommunity__Page</target>
    </targets>
</LightningComponentBundle>
```
Then, we develop the contactModule component, which subscribes to the LeadChannel message channel and displays information about the received leads.
**Contact Module**
```html
<template>
    <lightning-card title="Contact Module">
        <div class="slds-m-around_medium">
            <template if:true={leadCreated}>
                <p>Lead created: {leadName}, {leadCompany}</p>
            </template>
            <template if:false={leadCreated}>
                <p>There are no leads created.</p>
            </template>
        </div>
    </lightning-card>
</template>
```
```javascript
import { LightningElement, track, wire } from 'lwc';
import { subscribe, MessageContext } from 'lightning/messageService';
import LEAD_CHANNEL from '@salesforce/messageChannel/LeadChannel__c';

export default class ContactModule extends LightningElement {
    subscription = null;
    @track leadCreated = false;
    @track leadName = '';
    @track leadCompany = '';

    @wire(MessageContext)
    messageContext;

    connectedCallback() {
        this.subscription = subscribe(this.messageContext, LEAD_CHANNEL, (message) => this.handleLeadCreated(message));
    }

    handleLeadCreated(message) {
        this.leadCreated = true;
        this.leadName = message.name;
        this.leadCompany = message.company;
    }
}
```
```xml
<?xml version="1.0" encoding="UTF-8"?>
<LightningComponentBundle xmlns="http://soap.sforce.com/2006/04/metadata">
    <apiVersion>59.0</apiVersion>
    <isExposed>true</isExposed>
    <targets>
        <target>lightning__AppPage</target>
        <target>lightning__RecordPage</target>
        <target>lightning__HomePage</target>
        <target>lightningCommunity__Default</target>
        <target>lightningCommunity__Page</target>
    </targets>
</LightningComponentBundle>
```
To integrate these components, we create the appContainer, which organizes leadModule and contactModule into a single interface.
**App Container**
```html
<template>
    <lightning-layout multiple-rows="true">
        <lightning-layout-item size="12" padding="around-small">
            <c-lead-module></c-lead-module>
        </lightning-layout-item>
        <lightning-layout-item size="12" padding="around-small">
            <c-contact-module></c-contact-module>
        </lightning-layout-item>
    </lightning-layout>
</template>
```
This MFE architecture allows each component to be developed and maintained independently, promoting scalability and flexibility.


## Conclusion
The Micro Frontends (MFEs) architecture is evidenced in this implementation through the decomposition of the application into independent components that communicate with each other using the Lightning Message Service.
This implementation not only demonstrates the ability to create and integrate LWC components but also highlights the benefits of the Micro Frontends architecture in the context of Salesforce. The achieved modularity allows for better maintenance and evolution of the system, making it more adaptable to changes and future organizational needs.
---
title: Build a Dynamic Watchlist for Your Web App with Angular & GraphQL (Part 6)
published: true
date: 2024-07-01 13:38:30 UTC
tags: angular, development, web, fullstack
canonical_url: https://www.syncfusion.com/blogs/post/build-dynamic-watchlist-angular-graphql
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q3rblnm2hn293xnf9fyc.jpeg
---
**TL;DR:** Implement a watchlist feature in your Angular application using the AddToWatchlist component. Integrate it with a Watchlist component via GraphQL. Learn step-by-step with code examples and best practices.
In the [previous article](https://www.syncfusion.com/blogs/post/full-stack-app-angular-graphql-5 "Blog: A Full-Stack Web App Using Angular and GraphQL: Adding Login and Authorization Functionalities (Part 5)") of this series, we learned how to add login functionality and implement role-based authorization in our application.
In this article, we will cover how to implement a watchlist feature similar to the wishlist feature of e-commerce apps. When a user visits the movie details page, we will add a feature to display a list of movies from the same genre.
## Create the IWatchlist interface
First, create a new file named **IWatchlist.cs** in the **MovieApp\Interfaces** folder with the following methods.
```csharp
public interface IWatchlist
{
    Task ToggleWatchlistItem(int userId, int movieId);
    Task<string> GetWatchlistId(int userId);
    Task ClearWatchlist(int userId);
}
```
## Create the watchlist data access layer
Second, implement the data access layer for the watchlist functionality in a file named **WatchlistDataAccessLayer.cs** inside the **MovieApp.Server\DataAccess** folder. Add the following code inside it.
```csharp
public class WatchlistDataAccessLayer : IWatchlist
{
    readonly MovieDbContext _dbContext;

    public WatchlistDataAccessLayer(IDbContextFactory<MovieDbContext> dbContext)
    {
        _dbContext = dbContext.CreateDbContext();
    }

    public async Task<string> GetWatchlistId(int userId)
    {
        try
        {
            Watchlist? watchlist = await _dbContext.Watchlists.FirstOrDefaultAsync(x => x.UserId == userId);
            if (watchlist is not null)
            {
                return watchlist.WatchlistId;
            }
            else
            {
                return await CreateWatchlist(userId);
            }
        }
        catch
        {
            throw;
        }
    }

    public async Task ToggleWatchlistItem(int userId, int movieId)
    {
        string watchlistId = await GetWatchlistId(userId);
        WatchlistItem? existingWatchlistItem = await _dbContext.WatchlistItems
            .FirstOrDefaultAsync(x => x.MovieId == movieId && x.WatchlistId == watchlistId);
        if (existingWatchlistItem is not null)
        {
            _dbContext.WatchlistItems.Remove(existingWatchlistItem);
        }
        else
        {
            WatchlistItem watchlistItem = new()
            {
                WatchlistId = watchlistId,
                MovieId = movieId,
            };
            _dbContext.WatchlistItems.Add(watchlistItem);
        }
        await _dbContext.SaveChangesAsync();
    }

    public async Task ClearWatchlist(int userId)
    {
        try
        {
            string watchlistId = await GetWatchlistId(userId);
            List<WatchlistItem> watchlistItems = _dbContext.WatchlistItems.Where(x => x.WatchlistId == watchlistId).ToList();
            if (!string.IsNullOrEmpty(watchlistId))
            {
                foreach (WatchlistItem item in watchlistItems)
                {
                    _dbContext.WatchlistItems.Remove(item);
                    await _dbContext.SaveChangesAsync();
                }
            }
        }
        catch
        {
            throw;
        }
    }

    async Task<string> CreateWatchlist(int userId)
    {
        try
        {
            Watchlist watchlist = new()
            {
                WatchlistId = Guid.NewGuid().ToString(),
                UserId = userId,
                DateCreated = DateTime.Now.Date
            };
            await _dbContext.Watchlists.AddAsync(watchlist);
            await _dbContext.SaveChangesAsync();
            return watchlist.WatchlistId;
        }
        catch
        {
            throw;
        }
    }
}
```
The **WatchlistDataAccessLayer** class implements the **IWatchlist** interface and utilizes an **IDbContextFactory<MovieDbContext>** in its constructor to instantiate a **MovieDbContext**, enabling seamless interaction with the database.
The **GetWatchlistId** is an asynchronous method for retrieving the watchlist ID for a given user ID. If a watchlist for the user doesn’t exist, it creates a new one.
The **ToggleWatchlistItem** is an asynchronous method for adding or removing a movie from a user’s watchlist. If the movie is already on the watchlist, it will be removed; otherwise, it will be added.
The **ClearWatchlist** method asynchronously removes all movies from a user’s watchlist.
The **CreateWatchlist** is a private asynchronous method for creating a new watchlist for a user. It generates a new GUID for the watchlist ID, sets the user ID, and sets the creation date as the current date.
In each method, database changes are saved using the asynchronous **SaveChangesAsync** method, which records all modifications made within the database context. The try-catch blocks within the methods ensure any potential exceptions during database interactions are handled smoothly.
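The toggle semantics described above (remove the movie if it is already on the list, add it otherwise) can be sketched independently of Entity Framework with an in-memory set. This is only an illustration of the logic, not the actual data access code:

```typescript
// Sketch of the ToggleWatchlistItem semantics, using an in-memory Set of
// movie IDs in place of the WatchlistItems database table.
function toggleWatchlistItem(watchlist: Set<number>, movieId: number): Set<number> {
  if (watchlist.has(movieId)) {
    watchlist.delete(movieId); // already on the list: remove it
  } else {
    watchlist.add(movieId);    // not on the list yet: add it
  }
  return watchlist;
}

const list = new Set<number>([1, 2]);
toggleWatchlistItem(list, 3); // adds 3
toggleWatchlistItem(list, 1); // removes 1
console.log([...list]); // → [2, 3]
```

Calling the function twice with the same movie ID leaves the set unchanged, which is exactly the behavior the mutation exposes to the client.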
## Update the IMovie interface
Update the **IMovie** interface by adding the method declaration like in the following code example in the **IMovie.cs** file.
```csharp
public interface IMovie
{
    // Other methods.
    Task<List<Movie>> GetSimilarMovies(int movieId);
    Task<List<Movie>> GetMoviesAvailableInWatchlist(string watchlistId);
}
```
## Update the MovieDataAccessLayer class
Update the **MovieDataAccessLayer** class by implementing the newly added methods of the **IMovie** interface.
```csharp
public async Task<List<Movie>> GetSimilarMovies(int movieId)
{
    List<Movie> lstMovie = new();
    Movie? movie = await _dbContext.Movies.FindAsync(movieId);
    if (movie is not null)
    {
        lstMovie = _dbContext.Movies.Where(m => m.Genre == movie.Genre && m.MovieId != movie.MovieId)
            .OrderBy(u => Guid.NewGuid())
            .Take(5)
            .ToList();
    }
    return lstMovie;
}

public async Task<List<Movie>> GetMoviesAvailableInWatchlist(string watchlistID)
{
    try
    {
        List<Movie> userWatchlist = new();
        List<WatchlistItem> watchlistItems =
            _dbContext.WatchlistItems.Where(x => x.WatchlistId == watchlistID).ToList();
        foreach (WatchlistItem item in watchlistItems)
        {
            Movie? movie = await GetMovieData(item.MovieId);
            if (movie is not null)
            {
                userWatchlist.Add(movie);
            }
        }
        return userWatchlist;
    }
    catch
    {
        throw;
    }
}
```
The asynchronous method **GetSimilarMovies** retrieves a list of movies with the same genre as the selected movie. Identified by the **movieId** parameter, it queries the database to find the corresponding Movie object and stores the result in the movie variable. If the movie object is not null, it performs another query to find all the movies with the same genre but different MovieId, ensuring the list excludes the selected movie itself. The results are then randomly ordered using GUIDs to shuffle the titles. Finally, it selects and returns up to five movies from this shuffled list that are similar to the selected movie.
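The same selection logic can be sketched in TypeScript. The `Movie` shape below is a pared-down stand-in for the article's model, and the random-comparator sort is merely a quick stand-in for the `OrderBy(Guid.NewGuid())` shuffle (it is not a uniform shuffle and should not be used where fairness matters):

```typescript
// Sketch of the GetSimilarMovies selection: filter by genre, exclude the
// movie itself, shuffle, and take up to five results.
interface Movie {
  movieId: number;
  genre: string;
}

function getSimilarMovies(movies: Movie[], movieId: number): Movie[] {
  const current = movies.find((m) => m.movieId === movieId);
  if (!current) return [];
  return movies
    .filter((m) => m.genre === current.genre && m.movieId !== movieId)
    .sort(() => Math.random() - 0.5) // crude stand-in for the GUID shuffle
    .slice(0, 5);
}
```

Whatever the catalog size, the result never contains the selected movie itself and never exceeds five entries.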
The **GetMoviesAvailableInWatchlist** method asynchronously retrieves all movies in a specific watchlist identified by the **watchlistID** parameter. It queries the database to get all **WatchlistItem** objects where the **WatchlistId** matches the provided **watchlistID** and stores the result in the **watchlistItems** list. The method then iterates over each watchlist item in the list, calling the **GetMovieData** method with the **MovieId** of each item to get the corresponding **Movie** object. If the **Movie** object is not null, it adds the movie to the **userWatchlist**. Finally, it returns the **userWatchlist**.
## Add the GraphQL server query resolver for the watchlist
Add a class named **WatchlistQueryResolver.cs** inside the **MovieApp\GraphQL** folder. Refer to the following code example.
```csharp
[ExtendObjectType(typeof(MovieQueryResolver))]
public class WatchlistQueryResolver
{
    readonly IWatchlist _watchlistService;
    readonly IMovie _movieService;
    readonly IUser _userService;

    public WatchlistQueryResolver(IWatchlist watchlistService, IMovie movieService, IUser userService)
    {
        _watchlistService = watchlistService;
        _movieService = movieService;
        _userService = userService;
    }

    [Authorize]
    [GraphQLDescription("Get the user Watchlist.")]
    public async Task<List<Movie>> GetWatchlist(int userId)
    {
        bool user = await _userService.IsUserExists(userId);
        if (user)
        {
            string watchlistid = await _watchlistService.GetWatchlistId(userId);
            return await _movieService.GetMoviesAvailableInWatchlist(watchlistid);
        }
        else
        {
            return new List<Movie>();
        }
    }
}
```
The **WatchlistQueryResolver** class has three private, read-only fields: **\_watchlistService**, **\_movieService**, and **\_userService**. These fields are instances of the **IWatchlist**, **IMovie**, and **IUser** interfaces, respectively, and they are initialized via dependency injection in the constructor. This class extends the **MovieQueryResolver** type, adding additional fields to the existing GraphQL type.
The **GetWatchlist** method checks if a user with the provided **userId** exists by calling the **IsUserExists** method of **\_userService**. The result is stored in the user variable. If the user exists, it gets the ID of the user’s watchlist by calling the **GetWatchlistId** method of **\_watchlistService**. It then retrieves the movies available in the watchlist by calling the **GetMoviesAvailableInWatchlist** method of **\_movieService** with the watchlist ID. If the user does not exist, it returns an empty list of **Movie** objects. This method is decorated with the **Authorize** attribute, which requires the client to be authenticated to call this method.
## Add the GraphQL server mutation resolver for the watchlist
Add a class named **WatchlistMutationResolver.cs** inside the **MovieApp\GraphQL** folder. Refer to the following code example.
```csharp
[ExtendObjectType(typeof(MovieMutationResolver))]
public class WatchlistMutationResolver
{
    readonly IWatchlist _watchlistService;
    readonly IMovie _movieService;
    readonly IUser _userService;

    public WatchlistMutationResolver(IWatchlist watchlistService, IMovie movieService, IUser userService)
    {
        _watchlistService = watchlistService;
        _movieService = movieService;
        _userService = userService;
    }

    [Authorize]
    [GraphQLDescription("Toggle Watchlist item.")]
    public async Task<List<Movie>> ToggleWatchlist(int userId, int movieId)
    {
        await _watchlistService.ToggleWatchlistItem(userId, movieId);
        bool user = await _userService.IsUserExists(userId);
        if (user)
        {
            string watchlistid = await _watchlistService.GetWatchlistId(userId);
            return await _movieService.GetMoviesAvailableInWatchlist(watchlistid);
        }
        else
        {
            return new List<Movie>();
        }
    }

    [Authorize]
    [GraphQLDescription("Delete all items from Watchlist.")]
    public async Task<int> ClearWatchlist(int userId)
    {
        await _watchlistService.ClearWatchlist(userId);
        return userId;
    }
}
```
The **WatchlistMutationResolver** class has three private read-only fields: **\_watchlistService**, **\_movieService**, and **\_userService**. These fields are instances of **IWatchlist**, **IMovie**, and **IUser** interfaces, respectively, and they are initialized via dependency injection in the constructor.
The **ToggleWatchlist** method is an asynchronous method that toggles a movie in a user’s watchlist and then returns an updated list of movies in the watchlist. It’s decorated with the **Authorize** attribute, which means it requires the client to be authenticated to call this method. The **GraphQLDescription** attribute provides a description for this field in the GraphQL schema.
The **ClearWatchlist** method is another asynchronous method that deletes all items from a user’s watchlist and then returns the user’s ID. It’s also decorated with the **Authorize** and **GraphQLDescription** attributes.
## Add a GraphQL server query to get similar movies
Add the following method to the **MovieQueryResolver** class.
```csharp
[GraphQLDescription("Gets the list of movies that belong to the same genre as the movie whose movieId is passed as a parameter.")]
public async Task<List<Movie>> GetSimilarMovies(int movieId)
{
    return await _movieService.GetSimilarMovies(movieId);
}
```
This asynchronous method returns a list of **Movie** objects with the same genre as the given movie, identified by the **movieId** parameter.
## Configure the Program.cs file
Since we have added new GraphQL resolvers, we need to register them in our middleware.
Update the **Program.cs** file with the following code.
```csharp
builder.Services.AddGraphQLServer()
    .AddAuthorization()
    .AddQueryType<MovieQueryResolver>()
    .AddTypeExtension<WatchlistQueryResolver>()
    .AddMutationType<MovieMutationResolver>()
    .AddTypeExtension<AuthMutationResolver>()
    .AddTypeExtension<WatchlistMutationResolver>()
    .AddFiltering()
    .AddErrorFilter(error =>
    {
        return error;
    });
```
We use the **AddTypeExtension** method to register new resolvers of the types **WatchlistQueryResolver** and **WatchlistMutationResolver**.
We will register the transient lifetime of the **IWatchlist** service using the following code.
```csharp
builder.Services.AddTransient<IWatchlist, WatchlistDataAccessLayer>();
```
With the server configuration complete, we can now move to the client side of the app.
## Add the GraphQL query
Add the following GraphQL queries to the **src\app\GraphQL\query.ts** file.
```ts
import gql from 'graphql-tag';

export const GET_SIMILAR_MOVIE = gql`
  query FetchSimilarMovies($movieId: Int!) {
    similarMovies(movieId: $movieId) {
      movieId
      title
      posterPath
      genre
      rating
      language
      duration
      overview
    }
  }
`;

export const GET_WATCHLIST = gql`
  query FetchWatchList($userId: Int!) {
    watchlist(userId: $userId) {
      movieId
      title
      posterPath
      genre
      rating
      language
      duration
    }
  }
`;
```
The code uses the **gql** tag from the **graphql-tag** library to define two GraphQL queries, as described below:
- **GET\_SIMILAR\_MOVIE:** This query is named **FetchSimilarMovies** and it takes a single parameter, **$movieId**, of type **Int!.** The **‘!’** indicates that this parameter is required. The query fetches similar movies to the given movie ID, returning several fields for each similar movie, such as **movieId**, **title**, **posterPath**, **genre**, **rating**, **language**, **duration**, and **overview**.
- **GET\_WATCHLIST:** This query is named **FetchWatchList** and takes a single parameter, **$userId**, of type **Int!**. It fetches the watchlist for the given user ID, returning several fields for each movie in the watchlist, such as **movieId**, **title**, **posterPath**, **genre**, **rating**, **language**, and **duration**.
## Add the GraphQL mutation
Add the following GraphQL mutation in the **src\app\GraphQL\mutation.ts** file.
```ts
import gql from 'graphql-tag';

export const TOGGLE_WATCHLIST = gql`
  mutation toggleUserWatchlist($userId: Int!, $movieId: Int!) {
    toggleWatchlist(userId: $userId, movieId: $movieId) {
      movieId
      title
      posterPath
      genre
      rating
      language
      duration
    }
  }
`;

export const CLEAR_WATCHLIST = gql`
  mutation clearWatchlist($userId: Int!) {
    clearWatchlist(userId: $userId)
  }
`;
```
The code uses the **gql** tag from the **graphql-tag** library to define two GraphQL mutations as described below:
- **TOGGLE\_WATCHLIST:** This mutation is named **toggleUserWatchlist**, and it takes two parameters, **$userId** and **$movieId**, of type **Int!**. The **‘!’** indicates that these parameters are required. The mutation toggles a movie’s watchlist status for a user. If the movie is already on the user’s watchlist, it will be removed. If it’s not, it will be added. The mutation returns several fields for the toggled movie: **movieId**, **title**, **posterPath**, **genre**, **rating**, **language**, and **duration**.
- **CLEAR\_WATCHLIST:** This mutation is named **clearWatchlist** and takes a single parameter, **$userId**, of type **Int!**. It clears the watchlist for the given user ID, removing all movies from it.
## Create the client side model
Add the following type definitions to the **src\app\models\movie.ts** file.
```ts
export type SimilarMovieType = {
  similarMovies: Movie[];
};

export type WatchlistType = {
  watchlist: Movie[];
};

export type ToggleWatchlistType = {
  toggleWatchlist: Movie[];
};
```
## Create the required GraphQL services
### Generate the fetch similar movies service
Run the following command in the **ClientApp** folder to generate a service file.
```
ng g s services\fetch-similar-movies
```
Add the following code to the **fetch-similar-movies.service.ts** file.
```ts
import { Injectable } from '@angular/core';
import { Query } from 'apollo-angular';
import { SimilarMovieType } from '../models/movie';
import { GET_SIMILAR_MOVIE } from '../GraphQL/query';

@Injectable({
  providedIn: 'root',
})
export class FetchSimilarMoviesService extends Query<SimilarMovieType> {
  document = GET_SIMILAR_MOVIE;
}
```
This service performs a GraphQL query to fetch a list of similar movies for a given movie ID.
### Generate the fetch watchlist service
Run the following command to generate a new service file.
```
ng g s services\fetch-watchlist
```
Add the following code to the **fetch-watchlist.service.ts** file.
```ts
import { Injectable } from '@angular/core';
import { Query } from 'apollo-angular';
import { GET_WATCHLIST } from '../GraphQL/query';
import { WatchlistType } from '../models/movie';

@Injectable({
  providedIn: 'root',
})
export class FetchWatchlistService extends Query<WatchlistType> {
  document = GET_WATCHLIST;
}
```
This service performs a GraphQL query to fetch the watchlist for a logged-in user.
### Generate the toggle watchlist service
Run the following command to generate a new service file.
```
ng g s services\toggle-watchlist
```
Add the following code to the **toggle-watchlist.service.ts** file.
```ts
import { Injectable } from '@angular/core';
import { Mutation } from 'apollo-angular';
import { TOGGLE_WATCHLIST } from '../GraphQL/mutation';
import { ToggleWatchlistType } from '../models/movie';

@Injectable({
  providedIn: 'root',
})
export class ToggleWatchlistService extends Mutation<ToggleWatchlistType> {
  document = TOGGLE_WATCHLIST;
}
```
This service performs a GraphQL mutation to toggle the addition and removal of movies from the watchlist.
### Generate the clear watchlist service
Run the following command to generate a new service file.
```
ng g s services\clear-watchlist
```
Add the following code to the **clear-watchlist.service.ts** file.
```ts
import { Injectable } from '@angular/core';
import { CLEAR_WATCHLIST } from '../GraphQL/mutation';
import { Mutation } from 'apollo-angular';

@Injectable({
  providedIn: 'root',
})
export class ClearWatchlistService extends Mutation {
  document = CLEAR_WATCHLIST;
}
```
This service performs a GraphQL mutation to clear the watchlist, i.e., remove all the items from it.
The next step is to update the **SubscriptionService** class in the **src\app\services\subscription.service.ts** file by adding the following two properties.
```ts
export class SubscriptionService {
  // Existing code.
  watchlistItemcount$ = new BehaviorSubject<number>(0);
  watchlistItem$ = new BehaviorSubject<Movie[]>([]);
}
```
Run the following command to generate a new service file.
```
ng g s services\watchlist
```
Add the following code to the **watchlist.service.ts** file.
```ts
import { Injectable } from '@angular/core';
import { SubscriptionService } from './subscription.service';
import { FetchWatchlistService } from './fetch-watchlist.service';
import { ToggleWatchlistService } from './toggle-watchlist.service';
import { map } from 'rxjs';
import { Movie } from '../models/movie';
import { ClearWatchlistService } from './clear-watchlist.service';

@Injectable({
  providedIn: 'root',
})
export class WatchlistService {
  constructor(
    private readonly subscriptionService: SubscriptionService,
    private readonly fetchWatchlistService: FetchWatchlistService,
    private readonly toggleWatchlistService: ToggleWatchlistService,
    private readonly clearWatchlistService: ClearWatchlistService
  ) {}

  getWatchlistItems(userId: number) {
    return this.fetchWatchlistService
      .watch(
        {
          userId: Number(userId),
        },
        {
          fetchPolicy: 'network-only',
        }
      )
      .valueChanges.pipe(
        map((result) => {
          if (result.data) {
            this.setWatchlist(result.data?.watchlist);
          }
        })
      );
  }

  toggleWatchlistItem(userId: number, movieId: number) {
    return this.toggleWatchlistService
      .mutate(
        {
          userId: Number(userId),
          movieId: Number(movieId),
        },
        {
          fetchPolicy: 'network-only',
        }
      )
      .pipe(
        map((result) => {
          if (result.data) {
            this.setWatchlist(result.data?.toggleWatchlist);
          }
        })
      );
  }

  clearWatchlist(userId: number) {
    return this.clearWatchlistService
      .mutate(
        {
          userId: Number(userId),
        },
        {
          fetchPolicy: 'network-only',
        }
      )
      .pipe(
        map((result) => {
          if (result.data) {
            this.setWatchlist([]);
          }
        })
      );
  }

  private setWatchlist(watchList: Movie[]) {
    this.subscriptionService.watchlistItemcount$.next(watchList.length);
    this.subscriptionService.watchlistItem$.next(watchList);
  }
}
```
The constructor of the **WatchlistService** class injects four services into the component: **SubscriptionService**, **FetchWatchlistService**, **ToggleWatchlistService**, and **ClearWatchlistService**. These services manage the user’s watchlist.
The **getWatchlistItems** method fetches the watchlist items for a user by calling the **watch** method on **fetchWatchlistService** with the **userId** and a fetch policy of **network-only**. The watch method returns an **Observable** that emits the result of the GraphQL query. The **map** operator extracts the watchlist from the result and calls **setWatchlist**.
The **toggleWatchlistItem** method toggles the watchlist status of a movie for a user by calling the **mutate** method on **toggleWatchlistService** with the **userId** and **movieId** and a fetch policy of **network-only**. The **mutate** method returns an Observable that emits the result of the GraphQL mutation. The **map** operator extracts the **toggleWatchlist** from the result and calls **setWatchlist**.
The **clearWatchlist** method clears the watchlist for a user by calling the **mutate** method on **clearWatchlistService** with the **userId** and a fetch policy of **network-only**. The mutate method returns an Observable that emits the result of the GraphQL mutation. The **map** operator is used to check if the data property of the result is **true,** and if so, call **setWatchlist** with an empty array.
The private method **setWatchlist** updates the watchlist in the **subscriptionService** by calling the **next** method on two Observables: **watchlistItemcount$** and **watchlistItem$**. The **next** method pushes the new values (the length of the watchlist and the watchlist itself, respectively) to any subscribers of these Observables.
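The BehaviorSubject semantics this relies on, replaying the latest value to new subscribers and then pushing every subsequent value, can be sketched without RxJS. The class below is an illustrative stand-in, not the RxJS implementation:

```typescript
// Minimal BehaviorSubject-style container showing how watchlistItemcount$
// delivers values: a new subscriber immediately receives the current value,
// and next() pushes each update to every subscriber.
class MiniBehaviorSubject<T> {
  private subscribers: Array<(value: T) => void> = [];

  constructor(private current: T) {}

  subscribe(fn: (value: T) => void): void {
    this.subscribers.push(fn);
    fn(this.current); // replay the latest value on subscription
  }

  next(value: T): void {
    this.current = value;
    for (const fn of this.subscribers) fn(value);
  }
}

const count$ = new MiniBehaviorSubject<number>(0);
count$.subscribe((n) => console.log(`watchlist count: ${n}`)); // logs 0 immediately
count$.next(3); // logs 3
```

This replay-on-subscribe behavior is why components that subscribe late (such as a header badge) still render the current watchlist count without waiting for the next update.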
## Create the AddToWatchlist component
Run the following command to create the **AddToWatchlist** component.
```
ng g c components\add-to-watchlist
```
Add the following code to the **src\app\components\add-to-watchlist\add-to-watchlist.component.ts** file.
```ts
import { Component, Input, OnChanges, OnDestroy } from '@angular/core';
import { EMPTY, ReplaySubject, switchMap, takeUntil } from 'rxjs';
import { Movie } from 'src/app/models/movie';
import { SubscriptionService } from 'src/app/services/subscription.service';
import { WatchlistService } from 'src/app/services/watchlist.service';
import { ToastUtility } from '@syncfusion/ej2-notifications';

@Component({
  selector: 'app-add-to-watchlist',
  templateUrl: './add-to-watchlist.component.html',
  styleUrls: ['./add-to-watchlist.component.css'],
})
export class AddToWatchlistComponent implements OnChanges, OnDestroy {
  @Input({ required: true })
  movieId = 0;
  toggle = false;
  buttonText = '';
  iconClass = 'e-zoom-in-2';
  private destroyed$ = new ReplaySubject<void>(1);

  constructor(
    private readonly watchlistService: WatchlistService,
    private readonly subscriptionService: SubscriptionService
  ) {}

  ngOnChanges() {
    this.subscriptionService.watchlistItem$
      .pipe(takeUntil(this.destroyed$))
      .subscribe((movieData: Movie[]) => {
        this.setFavourite(movieData);
        this.setButtonText();
      });
  }

  toggleValue() {
    this.toggle = !this.toggle;
    this.setButtonText();
    this.subscriptionService.userData$
      .pipe(
        switchMap((user) => {
          const userId = user.userId;
          if (userId > 0) {
            return this.watchlistService.toggleWatchlistItem(
              userId,
              this.movieId
            );
          } else {
            return EMPTY;
          }
        }),
        takeUntil(this.destroyed$)
      )
      .subscribe({
        next: () => {
          if (this.toggle) {
            ToastUtility.show({
              content: 'Movie added to your Watchlist.',
              position: { X: 'Right', Y: 'Top' },
              cssClass: 'e-toast-success',
            });
          } else {
            ToastUtility.show({
              content: 'Movie removed from your Watchlist.',
              position: { X: 'Right', Y: 'Top' },
            });
          }
        },
        error: (error) => {
          console.error('Error occurred while setting the Watchlist: ', error);
        },
      });
  }

  private setFavourite(movieData: Movie[]) {
    const favouriteMovie = movieData.find((f) => f.movieId === this.movieId);
    this.toggle = !!favouriteMovie;
  }

  private setButtonText() {
    if (this.toggle) {
      this.buttonText = 'Remove from Watchlist';
      this.iconClass = 'e-zoom-out-2';
    } else {
      this.buttonText = 'Add to Watchlist';
      this.iconClass = 'e-zoom-in-2';
    }
  }

  ngOnDestroy(): void {
    this.destroyed$.next();
    this.destroyed$.complete();
  }
}
```
The **AddToWatchlistComponent** class implements the **OnChanges** and **OnDestroy** interfaces to react to input property changes and to clean up on destruction. The constructor injects **WatchlistService** and **SubscriptionService** into the component. These are used to manage the user’s watchlist.
The **movieId** is an input property that represents the ID of the movie to add to the watchlist. It’s required and defaults to 0.
The **ngOnChanges** lifecycle hook is called when an input property changes. It subscribes to **subscriptionService.watchlistItem$**, and when it emits, it calls **setFavourite** and **setButtonText**.
The **toggleValue** method toggles the movie’s watchlist status. It subscribes to **subscriptionService.userData$**, gets the **userId**, and calls **watchlistService.toggleWatchlistItem** with the **userId** and **movieId**. If the user ID is not greater than 0, it returns **EMPTY**. A toast notification is shown when a movie is added or removed from the watchlist, and any errors are logged to the console.
The private method **setFavourite** sets the toggle value based on whether the movie is on the watchlist. The private method **setButtonText** sets the button text and icon class based on the toggle value.
Add the following code to the **src\app\components\add-to-watchlist\add-to-watchlist.component.html** file.
```xml
<button
ejs-button
class="full-width"
[ngClass]="{ 'e-warning': toggle, 'e-success': !toggle }"
iconCss="e-icons {{ iconClass }}"
mat-raised-button
(click)="toggleValue()"
>
{{ buttonText }}
</button>
```
We have added a button that toggles a movie’s presence in a watchlist. The **ejs-button** attribute indicates that the button is a Syncfusion [Button](https://www.syncfusion.com/angular-components/angular-button "Angular Button"). The **toggleValue** method is invoked when the button is clicked. The **ngClass** directive applies CSS classes dynamically based on a condition: if **toggle** is true, the **e-warning** class is applied; if **toggle** is false, the **e-success** class is applied.
## Create the Watchlist component
Run the following command to create the Watchlist component.
```
ng g c components\watchlist
```
Add the following code to the **src\app\components\watchlist\watchlist.component.ts** file.
```js
import { Component, OnDestroy } from '@angular/core';
import { EMPTY, ReplaySubject, switchMap, takeUntil } from 'rxjs';
import { SubscriptionService } from 'src/app/services/subscription.service';
import { WatchlistService } from 'src/app/services/watchlist.service';
import { ToastUtility } from '@syncfusion/ej2-notifications';
@Component({
selector: 'app-watchlist',
templateUrl: './watchlist.component.html',
styleUrls: ['./watchlist.component.css'],
})
export class WatchlistComponent implements OnDestroy {
private destroyed$ = new ReplaySubject<void>(1);
watchlistItems$ = this.subscriptionService.watchlistItem$;
constructor(
private readonly watchlistService: WatchlistService,
private readonly subscriptionService: SubscriptionService
) {}
clearWatchlist() {
this.subscriptionService.userData$
.pipe(
switchMap((user) => {
const userId = user.userId;
if (userId > 0) {
return this.watchlistService.clearWatchlist(userId);
} else {
return EMPTY;
}
}),
takeUntil(this.destroyed$)
)
.subscribe({
next: () => {
ToastUtility.show({
content: 'Watchlist cleared!!!',
position: { X: 'Right', Y: 'Top' },
});
},
error: (error) => {
console.error(
'Error occurred while deleting the Watchlist : ',
error
);
},
});
}
ngOnDestroy(): void {
this.destroyed$.next();
this.destroyed$.complete();
}
}
```
The constructor of the class **WatchlistComponent** injects two services into the component: **WatchlistService** and **SubscriptionService**. These services manage the user’s watchlist.
The property **watchlistItems$** is an Observable that emits the watchlist items. It’s assigned the value of **subscriptionService.watchlistItem$**.
The **clearWatchlist** method is used to clear the watchlist for the current user by subscribing to **subscriptionService.userData$**, getting the **userId**, and calling **watchlistService.clearWatchlist** with the **userId**. If the user ID is not greater than 0, it returns **EMPTY**, an Observable that emits no items and immediately completes. The **takeUntil** operator automatically unsubscribes from the Observable when **destroyed$** emits a value. A toast notification is shown when the watchlist is successfully cleared and any errors are logged in the console.
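The `destroyed$` / `takeUntil` cleanup used throughout these components can be illustrated in isolation. The following is a dependency-free TypeScript sketch (not part of the app code; `TinySubject` is a hand-rolled stand-in for an RxJS `Subject`, used here only so the example runs on its own) showing why emissions stop once the destroy notifier fires:

```typescript
// Dependency-free sketch of the destroyed$/takeUntil idea:
// a subscription detaches itself once a "destroy" notifier fires.
type Listener<T> = (value: T) => void;

class TinySubject<T> {
  private listeners: Listener<T>[] = [];
  subscribe(fn: Listener<T>): () => void {
    this.listeners.push(fn);
    // Return an unsubscribe function, like an RxJS Subscription.
    return () => {
      this.listeners = this.listeners.filter((l) => l !== fn);
    };
  }
  next(value: T): void {
    for (const listener of [...this.listeners]) listener(value);
  }
}

// takeUntil: forward values from source until the notifier emits once.
function takeUntil<T>(
  source: TinySubject<T>,
  notifier: TinySubject<void>,
  fn: Listener<T>
): void {
  const unsubscribe = source.subscribe(fn);
  const stop = notifier.subscribe(() => {
    unsubscribe(); // stop receiving source values
    stop(); // detach the notifier listener itself
  });
}

const destroyed$ = new TinySubject<void>();
const watchlist$ = new TinySubject<number>();
const received: number[] = [];

takeUntil(watchlist$, destroyed$, (v) => received.push(v));
watchlist$.next(1);
watchlist$.next(2);
destroyed$.next(); // simulates ngOnDestroy
watchlist$.next(3); // ignored: subscription already closed

console.log(received); // [1, 2]
```

In the real components, RxJS’s `takeUntil(this.destroyed$)` plays the role of this helper, and `ngOnDestroy` fires the notifier by calling `this.destroyed$.next()`.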
Add the following code to the **src\app\components\watchlist\watchlist.component.html** file.
```xml
<ng-container *ngIf="watchlistItems$ | async as watchlistItems">
<div class="my-2">
<div
class="title-container p-2 d-flex align-items-center justify-content-between"
>
<h2 class="m-0">My Watchlist</h2>
<div>
<button ejs-button cssClass="e-danger" (click)="clearWatchlist()">
Clear Watchlist
</button>
</div>
</div>
<ng-container *ngIf="watchlistItems.length > 0; else emptyWatchlist">
<div class="e-card">
<div class="e-card-content">
<ejs-grid #grid [dataSource]="watchlistItems">
<e-columns>
<e-column headerText="Poster" width="150">
<ng-template #template let-movieData>
<ejs-tooltip id="tooltip" content="{{ movieData.title }}">
<img
class="my-2"
src="/Poster/{{ movieData.posterPath }}"
/>
</ejs-tooltip>
</ng-template>
</e-column>
<e-column headerText="Title" width="150">
<ng-template #template let-movieData>
<a [routerLink]="['/movies/details/', movieData.movieId]">
{{ movieData.title }}
</a>
</ng-template>
</e-column>
<e-column field="genre" headerText="Genre" width="100"></e-column>
<e-column
field="language"
headerText="Language"
width="100"
></e-column>
<e-column headerText="Operation" width="150">
<ng-template #template let-movieData>
<ejs-tooltip content="Add to or remove from Watchlist">
<app-add-to-watchlist
[movieId]="movieData.movieId"
></app-add-to-watchlist>
</ejs-tooltip>
</ng-template>
</e-column>
</e-columns>
</ejs-grid>
</div>
</div>
</ng-container>
</div>
</ng-container>
<ng-template #emptyWatchlist>
<div class="e-card">
<div class="e-card-content">
<h2 class="m-2">Your watchlist is empty.</h2>
<button
ejs-button
cssClass="e-link"
iconCss="e-icons e-back e-medium"
[routerLink]="['/']"
>
Back to Home
</button>
</div>
</div>
</ng-template>
```
The **ngIf** directive subscribes to the **watchlistItems$** Observable and assigns the emitted value to the local variable **watchlistItems**. The content inside the **ng-container** is only rendered if **watchlistItems$** emits a truthy value.
A Syncfusion Button with the **e-danger** CSS class is added. When clicked, it calls the **clearWatchlist** method from the component.
The **dataSource** input of the Syncfusion DataGrid component is bound to **watchlistItems**, which displays the items in the watchlist. The **e-column** elements define the grid’s columns. Each column has a **headerText** attribute that sets the column header, and some have a **field** attribute that binds the column to a property of the data items. Some columns also have an **ng-template** that customizes how the data is displayed.
The **app-add-to-watchlist** component’s **movieId input** is bound to **movieData.movieId**, so the component knows which movie to add to or remove from the watchlist.
When the watchlist is empty, the **emptyWatchlist** template is rendered. It displays a message and a button that navigates to the home page.
## Create the similar-movies component
Run the following command to create the **SimilarMovies** component.
```
ng g c components\similar-movies
```
Add the following code to the **src\app\components\similar-movies\similar-movies.component.ts** file.
```js
import { Component } from '@angular/core';
import { ActivatedRoute, ParamMap } from '@angular/router';
import { EMPTY, map, switchMap } from 'rxjs';
import { FetchSimilarMoviesService } from 'src/app/services/fetch-similar-movies.service';
@Component({
selector: 'app-similar-movies',
templateUrl: './similar-movies.component.html',
styleUrls: ['./similar-movies.component.css'],
})
export class SimilarMoviesComponent {
readonly similarMovies$ = this.activatedRoute.paramMap.pipe(
switchMap((params: ParamMap) => {
const selectedMovieId = Number(params.get('movieId'));
if (selectedMovieId > 0) {
return this.fetchSimilarMoviesService
.watch(
{
movieId: Number(selectedMovieId),
},
{
fetchPolicy: 'network-only',
}
)
.valueChanges.pipe(map((result) => result?.data?.similarMovies));
} else {
return EMPTY;
}
})
);
constructor(
private readonly activatedRoute: ActivatedRoute,
private readonly fetchSimilarMoviesService: FetchSimilarMoviesService
) {}
}
```
The **SimilarMoviesComponent** class contains an observable, **similarMovies$**, which emits similar movies for a selected movie. It uses the **paramMap** Observable from **ActivatedRoute** to get the **movieId** parameter from the route. It then uses the **switchMap** operator to call **FetchSimilarMoviesService.watch** with the **movieId** and the fetch policy **network-only**. The **watch** method returns an Observable that emits the result of the GraphQL query, and the **map** operator extracts the **similarMovies** from the result. If **movieId** is not greater than 0, it returns **EMPTY**, an Observable that emits no items and immediately completes.
Add the following code to the **src\app\components\similar-movies\similar-movies.component.html** file.
```xml
<ng-container *ngIf="similarMovies$ | async as movies">
<div class="row my-4 g-0">
<div class="col-12 title-container p-2">
<h2 class="m-0">Similar Movies</h2>
</div>
<div class="e-card">
<div class="e-card-content row p-3">
<div class="d-flex justify-content-start flex-wrap">
<div *ngFor="let movie of movies" class="p-1">
<app-movie-card [movie]="movie"></app-movie-card>
</div>
</div>
</div>
</div>
</div>
</ng-container>
```
The **ngIf** directive subscribes to the **similarMovies$** Observable and assigns the emitted value to a local variable **movies**. The async pipe subscribes to an Observable and automatically updates the view whenever a new value is emitted. If **similarMovies$** does not emit a value (i.e., null or undefined), the **ng-container** and its contents will not be rendered.
The **ngFor** directive loops over the **movies** array. For each movie, it creates a new **div** and assigns the movie to the local variable **movie**.
We then display the movie cards using the **app-movie-card** component, binding its **movie** input property to the local **movie** variable.
## Create the PageNotFound component
Run the following command to create the **PageNotFound** component.
```
ng g c components\page-not-found
```
Add the following code to the **src\app\components\page-not-found\page-not-found.component.html** file.
```xml
<div class="e-card">
<div class="e-card-content">
<h2 class="m-2">The resource you are looking for is not found.</h2>
<button
ejs-button
cssClass="e-link"
iconCss="e-icons e-back e-medium"
[routerLink]="['/']"
>
Back to Home
</button>
</div>
</div>
```
This page will be rendered when the user tries to access a route that does not exist. We display a message and a button to redirect the user to the home page.
## Update the MovieDetails component
Update the **MovieDetailsComponent** class in the **src\app\components\movie-details\movie-details.component.ts** file.
```js
import { Component } from '@angular/core';
import { ActivatedRoute, Params } from '@angular/router';
import { SubscriptionService } from 'src/app/services/subscription.service';
export class MovieDetailsComponent {
// Existing code.
userData$ = this.subscriptionService.userData$;
constructor(
// Existing service injection.
private readonly subscriptionService: SubscriptionService
) {}
}
```
Add the selectors for the **SimilarMoviesComponent** and **AddToWatchlistComponent** inside the template of the **MovieDetailsComponent**.
Add the selector for **AddToWatchlistComponent** just after the movie cover image’s **\<div\>** element.
```xml
<ng-container *ngIf="movieDetails$ | async as movie; else noMovieFound">
<!-- Existing code -->
<div *ngIf="userData$ | async as user" class="mt-2 image-width">
<app-add-to-watchlist
*ngIf="user.isLoggedIn"
[movieId]="movie.movieId"
></app-add-to-watchlist>
</div>
<!-- Existing code -->
</ng-container>
```
Insert the selector for the **SimilarMoviesComponent** just before closing the **\<ng-container\>** element.
```xml
<ng-container *ngIf="movieDetails$ | async as movie; else noMovieFound">
<!-- Existing code -->
<app-similar-movies></app-similar-movies>
</ng-container>
```
## Configure app routing
Open the **src\app\app-routing.module.ts** file and add the route for the newly added components under the **appRoutes** array.
```js
const appRoutes: Routes = [
// Existing routes.
{
path: 'watchlist',
component: WatchlistComponent,
canActivate: [authGuard],
},
{ path: '**', component: PageNotFoundComponent },
];
```
The route configuration maps the path **/watchlist** to the **WatchlistComponent**. The **canActivate** option means that the **authGuard** needs to return **true** for the route to be activated.
At the end of the array, we have added the wildcard route that maps any path not matched by the previous routes to the **PageNotFoundComponent**.
**Note:** The order in which we define the application’s routes is very important. The router uses a first-match wins strategy when matching routes. Therefore, more specific routes should be placed above less specific routes. The wildcard route should be placed at the end because it matches every URL. It should be selected only if no other routes match first.
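The first-match-wins idea in the note above can be sketched with a small, framework-free resolver (illustration only — Angular’s actual matcher also handles nested paths, parameters, redirects, and guards):

```typescript
// Minimal first-match-wins route resolution sketch (not Angular's real matcher).
interface Route {
  path: string; // '**' acts as a wildcard
  component: string;
}

function resolve(routes: Route[], url: string): string {
  for (const route of routes) {
    if (route.path === '**' || route.path === url) return route.component;
  }
  throw new Error('No route matched');
}

const routes: Route[] = [
  { path: 'watchlist', component: 'WatchlistComponent' },
  { path: '**', component: 'PageNotFoundComponent' }, // must stay last
];

console.log(resolve(routes, 'watchlist')); // WatchlistComponent
console.log(resolve(routes, 'no-such-page')); // PageNotFoundComponent
```

Moving the `'**'` entry to the top of the array would make every URL resolve to `PageNotFoundComponent`, which is exactly why the wildcard route must stay last.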
## Update the nav-bar component
Update the **NavBarComponent** class under the **src\app\components\nav-bar\nav-bar.component.ts** file.
```js
export class NavBarComponent implements OnInit, OnDestroy {
// Existing code
readonly watchlistItemcount$ = this.subscriptionService.watchlistItemcount$;
constructor(
// Existing service injection
private readonly watchlistService: WatchlistService
) {}
ngOnInit(): void {
this.subscriptionService.userData$
.pipe(
switchMap((user: User) => {
this.authenticatedUser = user;
if (this.authenticatedUser.userId > 0) {
return this.watchlistService.getWatchlistItems(
this.authenticatedUser.userId
);
} else {
return EMPTY;
}
}),
takeUntil(this.destroyed$)
)
.subscribe({
error: (error) => {
console.error('Error occurred while setting the Watchlist : ', error);
},
});
}
// Existing methods
}
```
In this update, we subscribe to the **userData$** Observable from the **SubscriptionService**. The **RxJS** operator **switchMap** cancels the previous Observable and switches to a new one. In this case, it takes the emitted **User** object, assigns it to **this.authenticatedUser**, and then returns a new Observable based on the **userId**. If **userId** is greater than 0, it calls the **getWatchlistItems** method from the **WatchlistService** to get the watchlist items for the user. If **userId** is not greater than 0, it returns **EMPTY**, an Observable that emits no items and immediately completes. If an error occurs, it logs the error message to the console.
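The cancellation behavior of **switchMap** described here can be distilled into a dependency-free sketch (illustrative only, not app code): when a new user emission arrives, the result of any still-pending earlier request is discarded.

```typescript
// Dependency-free sketch of switchMap's "cancel the previous inner" behavior.
// Each user emission starts a (simulated) watchlist request; only the latest
// request is allowed to deliver its result.
type Deliver = (result: string) => void;

let activeRequest = 0;
const results: string[] = [];

function onUserEmitted(userId: number, deliver: Deliver): () => void {
  const requestId = ++activeRequest; // a newer emission supersedes older ones
  // The returned function simulates the request completing later.
  return () => {
    if (requestId === activeRequest) deliver(`watchlist for user ${userId}`);
  };
}

// Two emissions in quick succession: the first is superseded before it resolves.
const finishFirst = onUserEmitted(1, (r) => results.push(r));
const finishSecond = onUserEmitted(2, (r) => results.push(r));

finishFirst(); // discarded: no longer the active request
finishSecond(); // delivered

console.log(results); // ['watchlist for user 2']
```

In the component, RxJS performs this bookkeeping for us: each `userData$` emission unsubscribes from the previous `getWatchlistItems` Observable before subscribing to the new one.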
The next step is to update the **src\app\components\nav-bar\nav-bar.component.html** file by adding a button to navigate to the watchlist page. Add the following HTML template just before the code for the **Login** button.
```xml
<button
ejs-button
[isToggle]="true"
cssClass="e-inherit e-round e-small"
*ngIf="authenticatedUser.isLoggedIn"
[routerLink]="['/watchlist']"
iconCss="e-icons e-bookmark e-medium"
class="watchlist-btn">
<span
class="e-badge e-badge-warning e-badge-notification e-badge-overlap e-badge-circle">{{ watchlistItemcount$ | async }}</span>
</button>
```
This button will render only when the user is authenticated. It will also display a badge to show the number of items on the watchlist. When the button is clicked, it navigates to the **/watchlist** route.
## Set authorization header for API requests
Since we have added authorization checks on the GraphQL server, we need to send the authorization bearer token with each API request. The bearer token authenticates the client with the GraphQL server. When a client sends a request to a server, the server needs a way to verify the client’s identity. This is especially important for operations requiring certain permissions, such as fetching sensitive data or modifying data.
Update the **createApollo** function in the **src\app\graphql.module.ts** file.
```js
// Existing code.
export function createApollo(httpLink: HttpLink): ApolloClientOptions<any> {
const auth = setContext(() => {
const headerToken = localStorage.getItem('authToken');
if (headerToken === null) {
return {};
} else {
return {
headers: {
Authorization: `Bearer ${headerToken}`,
},
};
}
});
const link = ApolloLink.from([auth, httpLink.create({ uri })]);
const cache = new InMemoryCache();
return {
link,
cache,
connectToDevTools: true,
};
}
// Existing code.
```
This function takes an **HttpLink** object as a parameter and returns an **ApolloClientOptions** object. **HttpLink** is used to connect to the GraphQL server. The **setContext** function creates middleware that can modify requests before they are sent. The bearer token is retrieved from local storage and added to the Authorization header of each request. An empty object is returned if the authentication token is null, meaning no headers are added to the request. If the token is not null, an object with an Authorization header is returned. An Apollo Link is created that first applies the **auth** link (which adds the Authorization header) and then the **httpLink** (which sends the request to the server).
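The header-building part of this logic can be distilled into a small pure function (the name `buildAuthContext` is illustrative, not part of the actual module) to make the two cases explicit:

```typescript
// Distilled version of the setContext logic above (illustrative name):
// return an Authorization header only when a token is present.
function buildAuthContext(
  headerToken: string | null
): { headers?: { Authorization: string } } {
  if (headerToken === null) {
    return {}; // no token in local storage: add no headers
  }
  return { headers: { Authorization: `Bearer ${headerToken}` } };
}

console.log(buildAuthContext(null)); // {}
console.log(buildAuthContext('abc123')); // { headers: { Authorization: 'Bearer abc123' } }
```

Keeping this branch pure makes it easy to unit test the header logic without spinning up an Apollo client.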
The term bearer in the bearer token specifies the type of authentication. A bearer token means the bearer of this token is authorized, i.e., whoever presents this token has the right to access the data. This is a common pattern in web development, especially when working with APIs that require authentication.
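On the receiving side, a server typically strips the scheme from the header value before validating the token. The following is a hypothetical sketch of that step — it is not code from the GraphQL server in this series, and `extractBearerToken` is an illustrative name:

```typescript
// Hypothetical server-side sketch: extract the raw token from an
// Authorization header value, or return null when the scheme is not Bearer.
function extractBearerToken(
  authorizationHeader: string | undefined
): string | null {
  if (!authorizationHeader) return null;
  const [scheme, token] = authorizationHeader.split(' ');
  if (scheme !== 'Bearer' || !token) return null;
  return token;
}

console.log(extractBearerToken('Bearer abc123')); // 'abc123'
console.log(extractBearerToken('Basic dXNlcg==')); // null
console.log(extractBearerToken(undefined)); // null
```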
## Execution demo
After executing the previous code examples, we will get output like in the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Build-Dynamic-Watchlist.gif" alt="Build a Dynamic Watchlist for Your Web App with Angular & GraphQL" style="width:100%">
</figure>
## GitHub resource
For more details, refer to the [complete source code](https://github.com/SyncfusionExamples/full-stack-web-app-using-angular-and-graphql "Full stack web app using Angular and Graphql GitHub demo") on GitHub.
## Summary
Thank you for reading this article. In it, we learned how to implement the watchlist feature in our application, which allows the user to add or remove movies from the watchlist. We also added a feature to display similar movies when users visit the movie details page.
In the next and final article of this series, we will learn how to deploy this application to the IIS and Azure App Service.
If you are new to our platform, we extend an invitation to explore our Syncfusion [Angular components](https://www.syncfusion.com/angular-components "Angular components") with the convenience of a [free trial](https://www.syncfusion.com/downloads/angular "Get free evaluation of Essential Studio for Angular"). This trial allows you to experience the full potential of our components to enhance your app’s user interface and functionality.
Our dedicated support system is readily available if you need guidance or have any questions. Contact us through our [support forum](https://www.syncfusion.com/forums "Syncfusion Support Forum"), [support portal](https://support.syncfusion.com/ "Syncfusion Support Portal"), or [feedback portal](https://www.syncfusion.com/feedback/angular "Syncfusion Feedback Portal"). Your success is our priority, and we’re always delighted to assist you on your development journey!
## Related blogs
- [A Full-Stack Web App Using Angular and GraphQL: Part 1](https://www.syncfusion.com/blogs/post/full-stack-web-app-angular-graphql "Blog: A Full-Stack Web App Using Angular and GraphQL: Part 1")
- [A Full-Stack Web App Using Angular and GraphQL: Data Fetching and Manipulation (Part 2)](https://www.syncfusion.com/blogs/post/full-stack-app-angular-graphql-2 "Blog: A Full-Stack Web App Using Angular and GraphQL: Data Fetching and Manipulation (Part 2)")
- [A Full-Stack Web App Using Angular and GraphQL: Perform Edit, Delete, and Advanced Filtering (Part 3)](https://www.syncfusion.com/blogs/post/full-stack-app-angular-graphql-3 "Blog: A Full-Stack Web App Using Angular and GraphQL: Perform Edit, Delete, and Advanced Filtering (Part 3)")
- [A Full-Stack Web App Using Angular and GraphQL: Adding User Registration Functionality (Part 4)](https://www.syncfusion.com/blogs/post/full-stack-app-angular-graphql-4 "Blog: A Full-Stack Web App Using Angular and GraphQL: Adding User Registration Functionality (Part 4)")
- [A Full-Stack Web App Using Angular and GraphQL: Adding Login and Authorization Functionalities (Part 5)](https://www.syncfusion.com/blogs/post/full-stack-app-angular-graphql-5 "Blog: A Full-Stack Web App Using Angular and GraphQL: Adding Login and Authorization Functionalities (Part 5)") | jollenmoyani |
1,913,614 | to specialise or not to specialise? | As a self-taught developer currently working in IT with multiple interests in tech such as web- and... | 0 | 2024-07-08T11:17:47 | https://dev.to/ohchloeho/to-specialise-or-not-to-specialise-ieh | career, discuss | As a self-taught developer currently working in IT with multiple interests in tech such as web- and application-development, machine-learning and IoT, I've recently run into the hurdle of deciding whether or not I should specialise.
A little more information: I've been in the tech industry for a little over a year, which can be considered a junior developer (please correct me if I'm mistaken), and the programming languages that I'm most familiar with include JavaScript and Go. My academic qualifications are unrelated to engineering or computer science, although I am in the process of getting an AWS foundational certificate. I am also currently based in Singapore, if it's of any relevance.
If you're a seasoned developer, please give me some advice on whether I should pick a field or niche in tech to specialise in, and if so, which field(s) have a slightly lower than average barrier to entry.
1,908,261 | Passing erros throught microsservices (Java + Spring) | Hey folks! Here's another quick tip for you: today, we're diving into handling exceptions in Java and... | 0 | 2024-07-11T03:11:10 | https://dev.to/felipejansendeveloper/passing-erros-throught-microsservices-java-spring-3iam | java, spring, exceptions, microservices | Hey folks! Here's another quick tip for you: today, we're diving into handling exceptions in Java and how we can smoothly manage errors across microservices (among other approaches). Let's learn together! 👨💻🚀

Some time ago, I came across an issue where I needed to pass an error to the BFF (Backend-for-frontend). In essence, we have three microservices: one BFF, and others that handle the business logic of the application. Errors triggered within the business logic need to at least return a message to the BFF. This allows the frontend to determine the best error message and the most appropriate way to handle that specific error.

In short, the approach is quite straightforward: the server sends an error code through a header, and both the frontend and BFF are responsible for reading this code and returning a formatted JSON response to the frontend. Here are the steps:
1. Trigger an exception on the server, passing the error code.
2. The ExceptionHandler handles this exception and inserts the error code into a header called "CODE".
3. On the BFF side, this error is captured and a JSON response is constructed to be sent back to the frontend.
**Here's a rough outline of how the code might look:**
```
@Service
public class ServerService {
public Sucesso testMethod(Long id) {
if (id == 1) {
throw new ServerException(HttpStatus.BAD_REQUEST, ERROR_001);
} else if (id == 2) {
throw new ServerException(HttpStatus.BAD_REQUEST, ERROR_002);
} else if (id == 3) {
throw new ServerException(HttpStatus.BAD_REQUEST, ERROR_003);
}
return Sucesso.builder().id(id).build();
}
}
```
**Afterward, on the server side, this is the class responsible for handling triggered errors and adding the error code into the respective header!**
```
@RestControllerAdvice
public class ControlExceptionHandler {
public static final String CODE = "CODE";
@ExceptionHandler({ServerException.class})
public ResponseEntity<Object> serverException(ServerException e) {
HttpHeaders responseHeaders = new HttpHeaders();
responseHeaders.add(CODE, e.getCode());
return ResponseEntity.status(e.getHttpStatusCode()).headers(responseHeaders).build();
}
}
```
**On our BFF side, we use the `.onStatus()` method to check whether an error occurred in the request and take appropriate action (in this case, throwing a `ClientException` based on the obtained response).**
```
@Service
public class ServerService {
public static final String CODE = "CODE";
private final RestClient restClient;
public ServerService(RestClient restClient) {
this.restClient = restClient;
}
public Sucesso testMethod(Long id) {
return restClient
.get()
.uri("http://localhost:8080?id=" + id)
.retrieve()
.onStatus(HttpStatusCode::isError, (HttpRequest request, ClientHttpResponse response) -> {
throw new ClientException(HttpStatus.BAD_REQUEST, Objects.requireNonNull(response.getHeaders().get(CODE)).get(0));
})
.body(Sucesso.class);
}
}
```
**And then, still on the BFF side, we construct the response for the Frontend based on the header captured from the response.**
```
@RestControllerAdvice
public class ControlExceptionHandler {
@ExceptionHandler({ClientException.class})
public ResponseEntity<Object> ClientException(ClientException e) {
return ResponseEntity.status(e.getHttpStatusCode()).body(e.getOnlyBody());
}
}
```
**Finally, this is the result that we expect.**

---
Here's the code on github: https://github.com/FelipeJansenDev/errors-between-microsservices
Follow me on Linkedin for more tips and tricks: https://www.linkedin.com/in/felipe-neiva-jansen/ | felipejansendeveloper |
1,908,383 | Gemini AI inside Chrome DevTools to Debug Errors | Introduction Gemini, formerly known as Bard, is a generative artificial intelligence... | 0 | 2024-07-11T12:01:08 | https://dev.to/shameel/gemini-ai-inside-chrome-devtools-to-debug-errors-3jpe | ai, debug, javascript, productivity | ## Introduction
Gemini, formerly known as Bard, is a generative artificial intelligence chatbot developed by Google.
(Source: Wikipedia)
TLDR: Gemini is another ChatGPT but by Google :)
In this blog, we will be discussing about Gemini - not the usual one but the one that is now integrated in to Chrome DevTools and help us debug right in there; instead of copy pasting the code from console to some AI chatbot like Gemini, ChatGPT or GitHub Copilot Chat.
This feature is called as **Understand console messages with AI**
Official Documentation: https://developer.chrome.com/docs/devtools/console/understand-messages
## Video Explanation
I have made a short and quick video for this explanation:
{% youtube WVSlgxLMojk %}
## Enable the feature
In order for you to enable this feature, you will have to:
- Open Chrome DevTools (Press F12)
- Go to Settings > Preference > Console
- Turn on **"Understand console messages with AI"**

## Understand Console Messages with AI
Now the magic is somewhat similar as what we have been experiencing since the introduction of such tools.
Lets take a look at it.
Open Chrome DevTools on any webpage (preferably an empty page) and clear the logs if there are any.
You can clear the logs by pressing "CTRL + L" or by clicking the second icon at the top left, as shown below:

Now type a code like this:
```js
const a = 1
a = 2
```
This should cause an error in the console like this:

If you hover over the error, you will find an icon of a light bulb at top right:

As you hover over it, it'll expand and show you "Understand this error":

After you click on it, you will be asked about your permission to send the data to Google to understand the context:

Once you click accept, you will be provided with the explanation of the error and potential solution for that as well:

### Explanation and Solution by Gemini in Console
Following is the explanation and solution provided by the Gemini when I tried this for writing the blog:
The error "Uncaught TypeError: Assignment to constant variable" means you're trying to change the value of a variable that was declared as a constant using either const or (in some cases) let within a block where it's already defined.
Here's why you're getting the error and how to fix it:
1. **Using const**:
If you declared the variable a using const earlier in your code, you cannot reassign a new value to it. const creates constants, which are meant to hold values that should not be changed.
```js
const a = 2;
a = 5; // This will cause the error
```
Fix:
If you need to change the value, use let instead of const:
```js
let a = 2;
a = 5; // This is now allowed
```
If the value should truly remain constant, don't try to reassign it.
2. **Redeclaring with let in the same block**:
If you've used let to declare a within a block (like an if statement or a loop), and then try to redeclare it with let again within the same block, you'll get this error.
```js
if (true) {
let a = 2;
let a = 5; // Error: Cannot redeclare 'a' in the same block
}
```
Fix:
Simply assign the new value without using let again:
```js
if (true) {
let a = 2;
a = 5; // This is now correct
}
```
3. **Global Scope and Strict Mode**:
If you're not explicitly declaring a with var, let, or const, and you're in strict mode (using "use strict"), the browser will treat it as an attempt to assign to an undeclared variable, which is also not allowed.
```js
"use strict";
a = 2; // Error in strict mode
```
Fix:
Always declare your variables using var, let, or const to avoid this issue:
```js
"use strict";
let a = 2; // Now it's correct
```
To help me give you a more specific solution, please provide the following:
The surrounding code: Show me the code where you declare and try to modify the variable a.
Whether you're using strict mode: Let me know if you have "use strict" at the top of your JavaScript file.
Once you provide this information, I can give you a tailored solution to fix the error in your web app!
## Conclusion
This is a pretty cool integration of Gemini introduced by Google within Chrome DevTools to help us debug right away.
Hope the article helps :)
Happy coding! 🚀
**Follow me for more such content**:
{% cta https://www.youtube.com/@ShameelUddin123 %} 🚀 Follow on YouTube {% endcta %}
{% cta https://www.linkedin.com/in/shameeluddin/ %} 🚀 Follow on LinkedIn {% endcta %}
{% cta https://github.com/Shameel123 %} 🚀 Follow on GitHub {% endcta %}
| shameel |
1,908,481 | What is ASP.Net MVC? | MVC is one of the architectural pattern for implementing user interfaces. MVC components :- Model :... | 28,030 | 2024-07-10T06:45:05 | https://dev.to/anshuverma/mvc-architectural-pattern-ej6 | mvc | MVC is one of the architectural pattern for implementing user interfaces.
**MVC components :-**
**Model :** Application data and behaviour in terms of its domain and independent of the UI.

**View :** The HTML markup that we display to the user.

**Controller :** Responsible for handling an HTTP request. It gets data from the database through the model, puts it in the view, and returns the view to the client or the browser.

There is one more piece to this architecture which is not in the acronym "MVC", but it is nearly always there: the router. When a request comes into an application, a controller will be selected for handling that request. Selecting the right controller is the responsibility of the router.

Router : Selects the right controller to handle a request.

A router, based on some rules, knows which class should handle a request with a given URL. More accurately, the request should be handled by one of the methods in that class, because a class can have many different methods.
In ASP.NET MVC we refer to the methods of a controller as actions. So it is more accurate to say that an action in a controller is responsible for handling a request.
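The router-to-action dispatch idea can be sketched in a few lines, independent of any framework. This is an illustrative toy, not ASP.NET's actual routing API, and the route table and names below are made up:

```python
# Illustrative sketch (not ASP.NET's real routing API) of how a router
# maps a request URL to a controller and one of its actions (methods).
routes = {
    "/products": ("ProductController", "Index"),
    "/products/details": ("ProductController", "Details"),
}

def dispatch(url):
    route = routes.get(url)
    if route is None:
        return "404 Not Found"
    controller, action = route
    # A real framework would instantiate the controller class and invoke
    # the action method; here we just report which one was selected.
    return f"{controller}.{action}"

print(dispatch("/products"))  # ProductController.Index
```

The point is only that routing is a lookup from a URL pattern to a (controller, action) pair, after which the framework invokes that action.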
**Benefit of MVC :-**
Better separation of concerns : with this architecture, each component has a distinct responsibility, which results in better separation of concerns and a more maintainable application.
| anshuverma |
1,908,566 | Create a CRUD API with Laravel | The CRUD operations (create, read, update, delete) are the basic functionality of any web... | 0 | 2024-07-08T03:14:03 | https://blog.stackpuz.com/create-a-crud-api-with-laravel/ | laravel, crud | ---
title: Create a CRUD API with Laravel
published: true
date: 2024-07-02 07:16:00 UTC
tags: Laravel,CRUD
canonical_url: https://blog.stackpuz.com/create-a-crud-api-with-laravel/
---

The CRUD operations (create, read, update, delete) are the basic functionality of any web application when working with a database. This example will show you how to create the CRUD API with Laravel and use MySQL as a database.
## Prerequisites
- Composer
- PHP 8.2
- MySQL
## Setup project
```
composer create-project laravel/laravel laravel_api 11.0.3
```
Create a testing database named "example" and run the [database.sql](https://github.com/stackpuz/Example-CRUD-Laravel-11/blob/main/database.sql) file to import the table and data.
## Project structure
```
├─ .env
├─ app
│ ├─ Http
│ │ └─ Controllers
│ │ └─ ProductController.php
│ └─ Models
│ └─ Product.php
├─ bootstrap
│ └─ app.php
├─ resources
│ └─ views
│ └─ index.php
└─ routes
├─ api.php
└─ web.php
```
\*This project structure shows only the files and folders that we intend to create or modify.
## Project files
### .env
This file is the Laravel configuration file and we use it to keep the database connection information.
```ini
DB_CONNECTION=mysql
DB_HOST=localhost
DB_PORT=3306
DB_DATABASE=example
DB_USERNAME=root
DB_PASSWORD=
SESSION_DRIVER=file
```
We also set `SESSION_DRIVER=file` to change the session driver from database to file.
### app.php
This file is the Laravel application configuration file, and we only added the API routing file here.
```php
<?php
use Illuminate\Foundation\Application;
use Illuminate\Foundation\Configuration\Exceptions;
use Illuminate\Foundation\Configuration\Middleware;
return Application::configure(basePath: dirname( __DIR__ ))
->withRouting(
web: __DIR__.'/../routes/web.php',
api: __DIR__.'/../routes/api.php',
commands: __DIR__.'/../routes/console.php',
health: '/up',
)
->withMiddleware(function (Middleware $middleware) {
//
})
->withExceptions(function (Exceptions $exceptions) {
//
})->create();
```
### web.php
This file defines the route URL for the Laravel web application. We just changed the default file from welcome.php to index.php.
```php
<?php
use Illuminate\Support\Facades\Route;
Route::get('/', function () {
return view('index');
});
```
### api.php
This file defines the route URL for the Laravel API. We define our API route here.
```php
<?php
use App\Http\Controllers\ProductController;
Route::resource('/products', ProductController::class);
```
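`Route::resource` registers a conventional set of routes for the controller. The verb-and-URL-to-action mapping it produces — at least the subset this article's controller implements — can be summarised as follows (a sketch of the convention for reference, not Laravel's internals; `Route::resource` also registers `create`/`edit` routes not used here):

```python
# The conventional REST routes used by this article's ProductController.
resource_routes = {
    ("GET", "/products"): "index",          # list all products
    ("GET", "/products/{id}"): "show",      # fetch one product
    ("POST", "/products"): "store",         # create a product
    ("PUT", "/products/{id}"): "update",    # update a product
    ("DELETE", "/products/{id}"): "destroy" # delete a product
}

print(resource_routes[("GET", "/products")])  # index
```

This is why the controller below only needs to define those five method names for the whole API to work.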
### Product.php
This file defines the Eloquent Model information that maps to our database table named "Product".
```php
<?php
namespace App\Models;
use Illuminate\Database\Eloquent\Model;
class Product extends Model
{
protected $table = 'Product';
protected $primaryKey = 'id';
protected $fillable = ['name', 'price'];
public $timestamps = false;
}
```
`$fillable = ['name', 'price']` the column list that allows you to insert and update.
`$timestamps = false` does not utilize the auto-generate timestamp feature (create\_at, update\_at columns).
### ProductController.php
This file defines all functions required to handle incoming requests and perform any CRUD operations.
```php
<?php
namespace App\Http\Controllers;
use App\Models\Product;
class ProductController {
public function index()
{
return Product::get();
}
public function show($id)
{
return Product::find($id);
}
public function store()
{
return Product::create([
'name' => request()->input('name'),
'price' => request()->input('price')
]);
}
public function update($id)
{
return tap(Product::find($id))->update([
'name' => request()->input('name'),
'price' => request()->input('price')
]);
}
public function destroy($id)
{
Product::find($id)->delete();
}
}
```
We use an Eloquent model named Product to perform the CRUD operations on the database through its basic methods: `get`, `find`, `create`, `update`, and `delete`. In `update`, wrapping the model in `tap()` makes the call return the updated model itself, rather than the boolean that Eloquent's `update()` normally returns.
### index.php
This file will be used to create a basic user interface for testing our API.
```html
<!DOCTYPE html>
<head>
<style>
li {
margin-bottom: 5px;
}
textarea {
width: 100%;
}
</style>
</head>
<body>
<h1>Example CRUD</h1>
<ul>
<li><button onclick="getProducts()">Get Products</button></li>
<li><button onclick="getProduct()">Get Product</button></li>
<li><button onclick="createProduct()">Create Product</button></li>
<li><button onclick="updateProduct()">Update Product</button></li>
<li><button onclick="deleteProduct()">Delete Product</button></li>
</ul>
<textarea id="text_response" rows="20"></textarea>
<script>
function showResponse(res) {
res.text().then(text => {
let contentType = res.headers.get('content-type')
if (contentType && contentType.startsWith('application/json')) {
text = JSON.stringify(JSON.parse(text), null, 4)
}
document.getElementById('text_response').innerHTML = text
})
}
function getProducts() {
fetch('/api/products').then(showResponse)
}
function getProduct() {
let id = prompt('Input product id')
fetch('/api/products/' + id).then(showResponse)
}
function createProduct() {
let name = prompt('Input product name')
let price = prompt('Input product price')
fetch('/api/products', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({ name, price })
}).then(showResponse)
}
function updateProduct() {
let id = prompt('Input product id to update')
let name = prompt('Input new product name')
let price = prompt('Input new product price')
fetch('/api/products/' + id, {
method: 'PUT',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({ name, price })
}).then(showResponse)
}
function deleteProduct() {
let id = prompt('Input product id to delete')
fetch('/api/products/' + id, {
method: 'DELETE'
}).then(showResponse)
}
</script>
</body>
</html>
```
- Many other articles will use Postman as the HTTP client to test the API, but in this article, I will use JavaScript instead. This will help you understand more details when working with HTTP request on the client-side.
- To keep this file clean and readable, we will only use basic HTML and JavaScript. There are no additional libraries such as the CSS Framework or Axios here.
- All CRUD functions will use the appropriate HTTP method to invoke the API.
- `showResponse(res)` will format the JSON object to make it easier to read.
## Run project
```
php artisan serve
```
Open a web browser and go to http://localhost:8000
## Testing
### Get All Products
Click the "Get Products" button. The API will return all product data.

### Get Product By Id
Click the "Get Product" button and enter "1" for the product id. The API will return a product data.

### Create Product
Click the "Create Product" button and enter "test-create" for the product name and "100" for the price. The API will return a newly created product.

### Update Product
Click the "Update Product" button and enter "101" for the product id and "test-update" for the name and "200" for the price. The API will return an updated product.

### Delete Product
Click the "Delete Product" button and enter "101" for the product id. The API will return nothing, which is acceptable as we do not return anything from our API.

## Conclusion
In this article, you have learned how to set up a Laravel project to create a CRUD API, use an Eloquent model as the ORM to perform CRUD operations on the database, and test the API using JavaScript. I hope you enjoyed the article.
Source code: [https://github.com/stackpuz/Example-CRUD-Laravel-11](https://github.com/stackpuz/Example-CRUD-Laravel-11)
Create a CRUD Web App in Minutes: [https://stackpuz.com](https://stackpuz.com) | stackpuz |
1,908,592 | Agile Methodologies and Best Practices | Agile Methodologies and Best Practices In the fast-paced world of software development,... | 0 | 2024-07-12T10:34:07 | https://dev.to/sumit_01/agile-methodologies-and-best-practices-34md | webdev, agile, learning | ### Agile Methodologies and Best Practices
In the fast-paced world of software development, Agile methodologies have revolutionized the way teams deliver high-quality products. Agile focuses on flexibility, collaboration, and customer satisfaction, making it an essential framework for modern software development. In this article, we'll explore the core principles of Agile, its various methodologies, and best practices for successful Agile implementation.
### Understanding Agile
**What is Agile?**
Agile is an iterative approach to software development and project management that emphasizes incremental delivery, collaboration, and continuous improvement. Unlike traditional methodologies, such as Waterfall, which follow a linear and sequential process, Agile promotes flexibility and responsiveness to change.
**Core Principles of Agile**
Agile is guided by four key principles, as outlined in the Agile Manifesto:
1. **Individuals and Interactions over Processes and Tools**:
- Agile values the contributions and interactions of team members more than rigid adherence to processes and tools.
2. **Working Software over Comprehensive Documentation**:
- The primary measure of progress is working software, not extensive documentation.
3. **Customer Collaboration over Contract Negotiation**:
- Continuous collaboration with customers ensures that the final product meets their needs and expectations.
4. **Responding to Change over Following a Plan**:
- Agile teams welcome and adapt to changes, viewing them as opportunities to improve the product.
### Agile Methodologies
Several methodologies fall under the Agile umbrella, each with its unique practices and frameworks. Here are some of the most popular Agile methodologies:
**Scrum**
Scrum is a lightweight, iterative framework for managing complex projects. It organizes work into small, manageable units called sprints, typically lasting 2-4 weeks.
- **Roles**: Scrum involves three key roles: Product Owner, Scrum Master, and Development Team.
- **Artifacts**: Key artifacts include the Product Backlog, Sprint Backlog, and Increment.
- **Ceremonies**: Important ceremonies are Sprint Planning, Daily Stand-ups, Sprint Review, and Sprint Retrospective.
**Kanban**
Kanban focuses on visualizing work, limiting work in progress, and optimizing flow. It uses a Kanban board to represent work items and their status.
- **Visual Boards**: Tasks are visualized on a Kanban board with columns representing different stages of the workflow.
- **Work in Progress (WIP) Limits**: WIP limits ensure that the team does not take on too much work at once, promoting focus and efficiency.
- **Continuous Delivery**: Unlike Scrum’s fixed sprints, Kanban allows for continuous delivery of work as soon as it’s ready.
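The WIP-limit idea is mechanical enough to sketch in code. The column names, limits, and tasks below are made up purely for illustration:

```python
# Toy sketch of a Kanban WIP limit: refuse to pull new work into a column
# that is already at its limit. Columns, limits, and tasks are illustrative.
wip_limits = {"in_progress": 3, "review": 2}
board = {"in_progress": ["task-1", "task-2", "task-3"], "review": ["task-4"]}

def can_pull(column):
    """Return True if the column has room for one more work item."""
    return len(board[column]) < wip_limits.get(column, float("inf"))

print(can_pull("in_progress"))  # False: already at the limit of 3
print(can_pull("review"))       # True: one slot left
```

On a physical or digital Kanban board the same check is done by eye: if a column is full, the team finishes something there before pulling new work.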
**Extreme Programming (XP)**
XP emphasizes technical excellence and customer satisfaction. It includes practices such as pair programming, test-driven development (TDD), and continuous integration.
- **Pair Programming**: Two developers work together at one workstation, enhancing code quality and knowledge sharing.
- **Test-Driven Development (TDD)**: Developers write tests before writing the actual code, ensuring that the code meets the required functionality.
- **Continuous Integration**: Code changes are frequently integrated and tested, reducing integration issues and ensuring a stable codebase.
### Best Practices for Agile Implementation
Successfully implementing Agile methodologies requires adopting best practices that align with Agile principles. Here are some key practices to consider:
**1. Foster a Collaborative Culture**
Agile thrives on collaboration and open communication. Encourage teamwork and create an environment where team members feel comfortable sharing ideas and feedback. Regularly hold meetings and discussions to keep everyone aligned and informed.
**2. Embrace Continuous Improvement**
Agile is about learning and improving continuously. Conduct regular retrospectives to reflect on what went well, what didn’t, and how processes can be improved. Use feedback loops to iterate and enhance the development process.
**3. Prioritize Customer Feedback**
Engage with customers frequently to gather feedback and understand their needs. Involve them in the development process through regular demos and reviews. This ensures that the product evolves in line with customer expectations.
**4. Maintain a Flexible Backlog**
A well-maintained product backlog is crucial for Agile success. Continuously prioritize and refine backlog items based on changing requirements and feedback. This helps the team stay focused on delivering the most valuable features first.
**5. Implement Automated Testing and Continuous Integration**
Automated testing and continuous integration (CI) are essential for maintaining code quality and ensuring that changes are integrated smoothly. Invest in robust testing frameworks and CI tools to catch issues early and reduce technical debt.
**6. Limit Work in Progress**
Limiting work in progress (WIP) helps teams maintain focus and complete tasks efficiently. Use techniques like Kanban WIP limits to prevent overloading team members and ensure a steady flow of work.
**7. Encourage Regular Releases**
Agile promotes frequent and incremental releases of working software. Aim for small, iterative releases that deliver value to customers quickly. This reduces the risk of large, disruptive changes and allows for faster feedback.
### Conclusion
Agile methodologies offer a dynamic and flexible approach to software development, focusing on collaboration, customer satisfaction, and continuous improvement. By understanding the core principles of Agile and implementing best practices, teams can enhance their productivity, deliver high-quality products, and respond effectively to changing requirements. Whether you choose Scrum, Kanban, XP, or a combination of methodologies, the key to Agile success lies in fostering a collaborative culture and embracing an iterative, customer-focused mindset. | sumit_01 |
1,908,621 | Docker Commands Cheat Sheet | Docker is a platform that allows you to develop, ship, and run applications inside containers. Docker... | 0 | 2024-07-08T10:20:59 | https://dev.to/madgan95/docker-commands-cheat-sheet-29pc | beginners, webdev, docker, javascript | Docker is a platform that allows you to develop, ship, and run applications inside containers. Docker provides better resource management, allowing you to allocate specific resources (CPU, memory, etc.) to each container. This helps in optimizing resource usage and improving performance.
## Containers
A container is a runnable instance of an image. They are lightweight, standalone, and executable packages that contain everything needed to run a piece of software, including the code, runtime, libraries, and dependencies.
## Image
An image is a read-only template with instructions for creating a Docker container.Images are created using a Dockerfile and can be stored in registries like Docker Hub.
**Create an image**
A Dockerfile is a script containing a series of instructions on how to build a Docker image. For example, this is a Dockerfile for a Node.js backend application:
```
# Use an official Node.js runtime as a parent image
FROM node:22
# Set the working directory in the container
WORKDIR /usr/src/app
# Copy package.json and package-lock.json to the working directory
COPY package*.json ./
# Install dependencies
RUN npm install
# Copy the rest of the application to the working directory
COPY . .
# Expose the port the app runs on
EXPOSE 4100
# Define the command to run the app
CMD ["node", "server.js"]
```
## Difference between running locally and using docker?
**Local**
Installing Redis locally involves downloading the Redis binaries or using a package manager (e.g., apt, brew) to install Redis on your machine. You would then start the Redis server using the redis-server command.
**Docker**
Running Redis in Docker involves pulling the Redis image from DockerHub using the docker pull redis command and then running a container using docker run command with specific options, such as port mapping.
## Ease of Use and Deployment:
**Local**
Running Redis locally requires manual setup and configuration. You need to ensure that the Redis server is properly configured and running.
**Docker**
Running Redis in Docker simplifies deployment and management. Docker provides a consistent environment, and you can easily start, stop, and manage Redis containers using Docker commands.
## Important Commands for Docker
Note: open Docker Desktop before executing the commands below, so that the Docker daemon is running.
**Pulling Docker Images**
To get started with Docker, you often need to pull images from a registry like Docker Hub. For example, to pull the official Redis image, you can use the following command:
```
docker pull redis
```
This command fetches the Redis image and makes it available on your local machine.
**Pushing Docker Images**
This command uploads an image to a Docker registry.
```
docker push <repository>/<image>:<tag>
```
**Viewing Docker Images**
This command displays a list of all pulled and present images on your local machine.
```
docker images
```
**Remove a specific image**
```
docker rmi <image_id>
```
**Running Containers**
To run a container from an image, use the docker run command. For example, to start a Redis container in detached mode (running in the background), use:
```
docker run --name <container_name> -p <host_port>:<container_port> -d <image>
```
Here’s what each option does:
--name <container_name>: Assigns a name to your container.
-p <host_port>:<container_port>: Maps a port on your machine to a port inside the container, e.g. 6379:6379.
-d: Runs the container in detached mode.
**Start a container:**
```
docker start <container_id>
```
**Stop a container:**
```
docker stop <container_id>
```
**Remove a specific container:**
```
docker rm <container_id>
```
**Remove all stopped containers:**
```
docker container prune -f
```
**Remove all stopped containers, unused networks, and dangling images:**
```
docker system prune -f
```
**Remove all unused containers, networks, images, and volumes:**
```
docker system prune --all --volumes
```
**Lists all running containers**
```
docker ps
```
**Lists all containers, including those that are stopped**
```
docker ps -a
```
**Cleaning Up Build Cache**
Docker caches intermediate layers during image builds to speed up the process. To remove all build cache and free up space, use:
```
docker builder prune -a
```
**Managing Docker Compose**
Docker Compose allows you to define and run multi-container Docker applications. To build and start services defined in your docker-compose.yml file, use:
```
docker-compose up --build
```
Sample docker-compose.yml file:
```
version: '3'
services:
server:
build:
context: ./server
dockerfile: Dockerfile
ports:
- "4100:4100"
environment:
NODE_ENV: production
```
This command builds images as necessary before starting the services.
-----------------------------------------------------------------
Feel free to reach out if you have any questions or need further assistance. 😊📁✨ | madgan95 |
1,908,732 | The power of embeddings: How numbers unlock the meaning of data | Prelude As I’m focusing a lot on Generative AI, I’m curious about how things work under... | 0 | 2024-07-12T18:32:10 | https://glaforge.dev/posts/2024/07/02/the-power-of-embeddings-how-numbers-unlock-the-meaning-of-data/ | ---
title: The power of embeddings: How numbers unlock the meaning of data
published: true
date: 2024-07-02 07:05:07 UTC
tags:
canonical_url: https://glaforge.dev/posts/2024/07/02/the-power-of-embeddings-how-numbers-unlock-the-meaning-of-data/
---
## Prelude
> As I’m focusing a lot on Generative AI, I’m curious about how things work under the hood, to better understand what I’m using in my gen-ai powered projects. A topic I’d like to focus on more is: **vector embeddings** , to explain more clearly what they are, how they are calculated, and what you can do with them.
>
> A colleague of mine, [André](https://x.com/andreban), was showing me a [cool experiment](https://writer-m4n3dyfjhq-uc.a.run.app/)he’s been working on, to help people prepare an interview, with the help of an AI, to shape the structure of the resulting final article to write.
>
> The idea is to provide: a topic, a target audience, and to describe the goals for the audience. Then, a large language model like [Gemini](https://deepmind.google/technologies/gemini/) prepares a list of questions (that you can update freely) on that topic. Next, it’s your turn to fill in the blanks, answer those questions, and then the LLM generates an article, with a plan following those key questions and your provided answers. I cheated a bit, and asked [Gemini](https://gemini.google.com/) itself those questions, and honestly, I really liked how the resulting article came to be, and I wanted to share with you the outcome below.
>
> It’s a great and simple introduction to vector embeddings! I like how AI can help organize information, shape the structure and the content for an article. **I’m not advocating for letting AI write all your articles** , far from that, but as an author, however, I like that it can help me avoid the blank page syndrome, avoid missing key elements in my dissertation, improve the quality of my written prose.
>
> Generative AI, in its creative aspect, and as your assistant, can be super useful! Use it as **a tool to help drive your creativity**! But **always use your critical sense to gauge the quality and factuality of the content**.
## Introduction: What are vector embeddings?
Imagine you have a vast library filled with books on every topic imaginable. Finding a specific book can be a daunting task, especially if you only know the general subject matter. Now imagine a magical system that can understand the meaning of each book and represent it as a unique code. This code, called a vector embedding, can then be used to quickly find the most relevant books based on your search query, even if you only have a vague idea of what you’re looking for.
This is the power of vector embeddings. They are essentially numerical representations of complex data, like text, images, or audio, that capture the underlying meaning and relationships within the data. These numerical codes, arranged as vectors, allow computers to process and compare data in a way that mimics human understanding.
## From Text to Numbers: The Journey of Embedding Creation
Creating vector embeddings involves a multi-step process that transforms raw data into meaningful mathematical representations. The journey begins with **data preprocessing** , where the data is cleaned, normalized, and prepared for embedding generation. This might involve tasks like removing irrelevant information, standardizing data formats, and breaking text into individual words or subwords (tokenization).
Next comes the heart of the process: **embedding generation**. This step leverages various techniques and algorithms, such as Word2Vec, GloVe, BERT, and ResNet, to convert each data point into a high-dimensional vector. The specific algorithm chosen depends on the type of data being embedded (text, images, or audio) and the intended application.
For instance, Word2Vec uses a neural network to learn relationships between words by analyzing how they co-occur in large text corpora. This results in vector representations for words, where similar words have similar vectors, capturing semantic relationships. Similarly, for images, convolutional neural networks (CNNs) like ResNet can be used to extract features from images, resulting in vectors that represent the visual content.
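The "similar words have similar vectors" idea can be shown with a toy example. The three-dimensional vectors below are hand-made for illustration — they are not real Word2Vec output, which typically has hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hand-made toy vectors (not real embeddings).
king = [0.9, 0.8, 0.1]
queen = [0.88, 0.82, 0.15]
apple = [0.1, 0.2, 0.95]

print(round(cosine_similarity(king, queen), 3))  # close to 1.0
print(round(cosine_similarity(king, apple), 3))  # much lower
```

Cosine similarity is the standard comparison here because it measures the angle between vectors, ignoring their length.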
## Vector Databases: The Power of Storing and Searching Embeddings
Once embeddings are generated, they need a dedicated storage system for efficient retrieval and comparison. This is where **vector databases** come into play. Unlike traditional databases designed for structured data, vector databases are optimized for storing and searching high-dimensional vector data.
Vector databases employ specialized indexing techniques, such as Annoy, HNSW, and Faiss, to create efficient data structures that allow for fast similarity search. This means that when a user submits a query (e.g., a search term, an image), the database can quickly find the most similar data points based on the similarity of their vector representations.
## Embeddings Empower Search: Finding the Needle in the Haystack
The combination of vector embeddings and vector databases revolutionizes search by enabling **semantic search**. This means that instead of relying solely on keyword matching, search engines can understand the meaning behind the data and find relevant results even if the query doesn’t use exact keywords.
For example, imagine searching for “a picture of a dog with a hat.” Traditional keyword-based search might struggle to find relevant images, as the search term might not match the image description. However, with vector embeddings, the search engine can understand the semantic meaning of the query and find images that contain both a dog and a hat, even if those words are not explicitly mentioned in the image description.
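What a vector database does at query time can be sketched as a brute-force nearest-neighbour search. Real systems use approximate indexes like HNSW instead of scanning every vector, and the tiny 2-D vectors and document ids below are invented for illustration:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

# Tiny illustrative "index": document id -> embedding vector.
index = {
    "doc-dog-hat": [0.9, 0.8],
    "doc-cat": [0.7, 0.1],
    "doc-recipe": [0.05, 0.9],
}

def search(query_vector, top_k=2):
    """Rank every stored vector by similarity to the query (brute force)."""
    scored = sorted(index.items(),
                    key=lambda item: cosine_similarity(query_vector, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

print(search([0.85, 0.75]))  # most similar documents first
```

The query vector would come from embedding the user's search text with the same model used to embed the documents — that shared vector space is what makes semantic matching possible.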
## Beyond Search: Expanding the Reach of Embeddings
Vector embeddings are not limited to search applications. They have become essential tools in a wide range of fields, including:
- **Retrieval Augmented Generation (RAG):** This technique combines the power of information retrieval and generative models to create more informative and relevant responses. Embeddings are used to find relevant information in large text corpora, which is then used to augment prompts for language models, resulting in more accurate and context-aware outputs.
- **Data Classification:** Embeddings enable the classification of data points into different categories based on their similarity. This finds application in areas like sentiment analysis, spam detection, object recognition, and music genre classification.
- **Anomaly Detection:** By representing data points as vectors, anomalies can be identified as data points that are significantly different from the majority. This technique is used in various fields, including network intrusion detection, fraud detection, and industrial sensor monitoring.
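The anomaly-detection use can be sketched the same way: embed each item, then flag vectors that sit far from the rest. The 2-D points and threshold below are made up for illustration:

```python
import math

# Illustrative 2-D "embeddings"; the last point is deliberately far away.
points = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [5.0, 5.2]]
centroid = [sum(coords) / len(points) for coords in zip(*points)]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

threshold = 2.0  # chosen by eye for this toy data
anomalies = [p for p in points if distance(p, centroid) > threshold]
print(anomalies)  # [[5.0, 5.2]]
```

Production systems use more robust scores than distance-to-centroid, but the principle — unusual items embed far from normal ones — is the same.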
## Facing the Challenges and Shaping the Future
While vector embeddings have revolutionized data analysis, they still face some challenges. These include the difficulty of capturing polysemy (multiple meanings of a word), contextual dependence, and the challenge of interpreting the meaning behind the high-dimensional vector representations.
Despite these limitations, research continues to push the boundaries of vector embeddings. Researchers are exploring techniques like contextual embeddings, multilingual embeddings, knowledge graph integration, and explainable embeddings to overcome existing limitations and unlock the full potential of these powerful representations.
## Stepping into the World of Embeddings: Resources and Next Steps
For those interested in diving deeper into the world of vector embeddings, a wealth of resources is available. Online courses and tutorials on platforms like Coursera, Fast.ai, and Stanford’s online learning platform provide a solid foundation in the underlying concepts and techniques.
Books like “Speech and Language Processing” by Jurafsky and Martin and “Deep Learning” by Goodfellow, Bengio, and Courville offer in-depth coverage of the field. Additionally, research papers and articles on platforms like arXiv and Medium offer insights into the latest advancements and applications.
To gain practical experience, explore Python libraries like Gensim, spaCy, and TensorFlow/PyTorch. These libraries provide tools for creating and working with embeddings, allowing you to build your own models and experiment with various applications.
The world of vector embeddings is constantly evolving, offering exciting opportunities for innovation and discovery. By understanding the power of these representations, you can unlock new possibilities for data analysis, information retrieval, and artificial intelligence applications. | glaforge | |
1,908,827 | After Effects: The Basics | Introduction I want to provide an introduction to After Effects, before writing about more... | 28,010 | 2024-07-09T22:04:45 | https://dev.to/kocreative/after-effects-the-basics-915 | beginners, tutorial, aftereffects, design | ## Introduction
I want to provide an introduction to After Effects before writing about more advanced topics, as a reference for the fundamental how-tos. So without further ado, here is a beginner's guide to After Effects.
This article will cover:
- How to navigate the After Effects interface
- How to create a composition
- How to create different kinds of layers
- How to create keyframe driven animation
- How to create expression driven animation
- How to render your finished video
First off, what is After Effects, and what is it used for? After Effects is motion graphics software, capable of producing industry-standard animation and visual effects. It is great in combination with vector-based software, such as Illustrator, for bringing artwork to life with movement, or for creating templatable on-screen graphics to be used later in Premiere as .mogrt files. However, it is not made for editing. If you need to edit your video, it is best to export your After Effects creations into more appropriate software (such as Premiere or DaVinci Resolve).
## Navigating the AE Interface
When opening After Effects for the first time, you will likely be greeted with a screen which looks something like this:

Let’s walk through each of these panels, and what they are used for.
**Project Panel**
The project panel is where all the elements of your project can be found. You can import supported file types, make folders to keep things organised, and find all created compositions here.

**Timeline**
This is where your active composition timeline will be displayed. You will see all the elements which make up this composition and have access to their properties to effect and animate, as well as the composition framerate, and its duration.

**Effects & Presets**
By clicking on the title, you will reveal the Effects & Presets panel. As the name suggests, you can search through all possible effects and presets here.

**Properties Panel**
The properties panel is a fairly new addition, but very useful. Here, it will show you all the properties of a selected layer from within your active composition.

**Viewer**
Here you will actually see your active composition.

## Creating A Composition
Now that you’ve opened your software, it is time to create your first composition. A composition is where you will add and create all of your elements to make your finished video (similar to a “sequence” in Premiere).
You can either click the “new composition” button in the middle of the viewer if this is your first composition, or you can go to “composition/new composition.” It will bring up a window like this:

First, name your composition. Next, input the size you would like it to be. HD landscape videos are 1920x1080 pixels, while portrait is the opposite: 1080x1920 pixels. Next, input the frame rate you need. In the UK, most projects will require 25fps. However elsewhere you may find that you need a frame rate of 29.97, 30, 50, or 60. The higher the frame rate, the more images there will be per second of your composition, but this does not affect the overall duration of your composition. Lastly, input the duration of your composition. This is how long you want your video to be. If you’d like, you can also set your composition colour (although this is not a true solid, but instead is the alpha channel of your composition. Colouring it is purely for visibility purposes). Once all these details have been filled out, select “ok.” You will find your composition is now open in your timeline, and has been added to your project panel.
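One way to sanity-check those settings: multiplying the frame rate by the duration gives the total number of frames After Effects will render for the composition.

```python
# Total frames = frame rate (frames per second) x duration (seconds).
fps = 25                # UK/PAL standard
duration_seconds = 10   # composition duration
total_frames = fps * duration_seconds
print(total_frames)  # 250
```

The same 10-second composition at 50fps would contain 500 frames — twice the images, same on-screen duration.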

Now that you have a composition, you can start adding things to it. Here are some of the following types of layers you will likely want to use when building your compositions.
## Creating Assets in AE
**Solid Layers**
A solid layer is what it sounds like. It is a block of a solid colour, ideal for backgrounds, certain effects, and matte layers. To create a solid, with your timeline selected, go to “layer/new/solid…” and you will see this box:

Here you can name your solid and set its size. Press the “make comp size” button to make sure the solid is your composition size. Select your desired colour, and then click “ok”. The solid layer will then appear in your timeline (you may also notice a new “solids” folder appear in your project panel too).

**Shape Layers**
Shape layers allow you to make vector shapes in your composition. These are different from solids, as they are not rasterized, and they have extra animatable parameters that allow for creative expression.
You can create a shape layer in a number of ways. To create an empty shape layer, click “Layer/New/Shape Layer.” Or, you can click and hold the rectangle button on the toolbar to select a shape you would like to draw.

Or by selecting the pen tool next to it, to create a custom shape.

Begin drawing your shape in the composition with nothing else selected to create your shape in your composition (otherwise these tools will create a mask on your selected layer). You can edit fill, stroke, and a myriad of other parameters in the properties panel.

**Text Layers**
Select “Layer/New/Text” to create a new text layer, or, select the text tool. Clicking once inside your composition will create a standard text layer, while a click and drag will create a text box.

Like shape layers, text layers also have extra parameters which can be used to animate them in unique ways. You can add these by clicking the “+add animator” button in the properties panel.

These are a more advanced method of animation, so I’ll go into depth about this another time, but I encourage having a play around with the options here to see what you can create.
**Null Objects**
A Null Object is an empty layer in your composition. It can be used to link other layers together, or for organisational purposes. To create a null object, you can navigate to “Layer/New/Null Object.”

**Adjustment Layer**
The last type of layer I’ll go through in this walkthrough is an adjustment layer. This is a layer where you can apply effects, which will apply to all visible layers beneath it. This is useful when you want to tweak the grade of your finished composition, or apply uniform effects to more than one layer. To create an adjustment layer, navigate to “Layer/New/Adjustment Layer.”

**Importing Files**
As well as creating assets inside of After Effects, you can also import images and other file types by dragging them from your folders into the project panel, or by right clicking on your project panel and selecting “Import/File…”

## Keyframe Animation
Now that we have run through the different types of assets you can use to create your content, we can discuss how to create movement and animation. You can do this in After Effects in a few different ways. The simplest way is through keyframes.
Using keyframe animation, we can tell After Effects what values we want parameters to be at certain points in time. After Effects will then animate between these two points. Let’s look at the Opacity parameter as an example.
Moving your cursor to the start of your composition, create a text layer and select it, so that its properties are visible in the properties panel. Locate “Opacity,” and change the value from 100 to 0. Then, hit the stopwatch button next to it. You will see it turn into a blue diamond. This signifies that you have created a keyframe. In your timeline, you will also see this parameter drop down from your layer with its new keyframe (and the text will no longer be visible, as its opacity is set to 0).

From there, move your cursor to the 1 second point of your timeline. You will see that the diamond shape is no longer blue in your properties panel. This is because we don’t currently have a keyframe set to this time. Change the value “0” to “100.” This will create a new keyframe and set the value of your layer’s opacity to 100 (and your text will reappear!).

Congratulations, you have created your first animation! Drag the cursor back and forth between these two keyframes. You will notice After Effects will animate between these two values, making the text fade up from 0 over the duration of 1 second.
The default way After Effects chooses to animate between 2 different keyframes is using linear interpolation. Put simply, this means After Effects moves evenly between the two keyframes, keeping a consistent speed. This can sometimes look a little stilted. In order to change that, we can tell After Effects to ease between the keyframes instead. This means that After Effects will instead slowly animate in and out, ramping up in the middle. This creates a more pleasing, more natural looking movement.
To ease your keyframes, drag-click to highlight your keyframes in your timeline. From there, right click on either of your keyframes and select “Keyframe Assistant/Easy Ease” or press F9 on your keyboard. You will see the keyframes change shape, from a diamond to an hourglass. Now if you watch your animation back, you will see that the animation is a lot smoother.



Try using keyframes on another layer’s parameter. Let’s use a shape layer’s scale as an example.
Create a shape layer, drag it under your text layer on the timeline, and make sure it is highlighted so it shows in your properties panel. Make sure your scale parameters are unlinked, so you can manipulate the x and y values independently (the symbol for unlinked is a broken chain).

Move your cursor to the start of your timeline. Click the stopwatch to create your first keyframe, and set the x value of scale to 0. Then, move your cursor to the 1 second mark, and set it to 100. This will create 2 keyframes, animating the scale from 0 to 100%. Select both of those keyframes and set the interpolation to easy ease, so the motion is more natural. Play back your animation.
Keep building up your animation in this way, animating different parameters for different durations, until you are happy with the result.

## Expressions
Another way you can animate elements in your composition is with expressions. Expressions run on JavaScript, and allow us to speak to After Effects in code instead of keyframes (or a combination of both). You can find the documentation for [After Effects expression functions here.](https://ae-expressions.docsforadobe.dev/)
Remove the keyframes from your shape layer's scale by selecting them and pressing delete. We will create the same scale animation, this time using expressions instead of keyframes.
Once the keyframes are deleted, navigate to the scale stopwatch (either in your properties panel or in your timeline), and click it once while holding down the alt/option key. You will see the text turn red, and a new box will appear in your timeline where you are able to write your expressions.

Since this is a more advanced way of using After Effects, I will only briefly explain the expression. I will come back in another article to explain in more detail.
Since we want to create an animation which eases from 1 value to another, we will use the ease() function:
```
ease (time, 0, 1, 0, 100);
```
The ease function requires 5 arguments in order to work. The first argument is what parameter the function is remapping (for our example, this is “time.”). The next 2 arguments are the _in-point_ and _out-point_; when the animation starts and ends. The last 2 arguments are the start and end _values_. So, when we use this ease function, we are telling After Effects, “remap the scale value of this layer over time, in between 0 and 1 seconds, from the value 0 to 100.”
However, the scale parameter is _two_ values, an x and a y. So we will need to tell After Effects where we want to apply this ease function by creating an array, like so:
```
x = ease (time, 0, 1, 0, 100);
[x, value[1]]
```
`value[1]` refers to the current y-coordinate value. If you play back your animation, you will see a new animation created for your scale parameter without using keyframes. Exciting!
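To see what `ease()` is doing outside of After Effects, here is a rough Python sketch of the same remapping. This is a hypothetical illustration using a smoothstep curve, not Adobe's exact easing math:

```python
def ease(t: float, t_min: float, t_max: float,
         v_min: float, v_max: float) -> float:
    """Remap t from [t_min, t_max] to [v_min, v_max], easing in and out."""
    # Normalise t to the 0..1 range and clamp outside the animation window
    u = min(max((t - t_min) / (t_max - t_min), 0.0), 1.0)
    # Smoothstep: slow at the start, fast in the middle, slow at the end
    u = u * u * (3 - 2 * u)
    return v_min + (v_max - v_min) * u

print(ease(0.0, 0, 1, 0, 100))  # 0.0   (animation start)
print(ease(0.5, 0, 1, 0, 100))  # 50.0  (halfway)
print(ease(1.0, 0, 1, 0, 100))  # 100.0 (animation end)
```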

## Rendering your animation
Once you are happy with your animation, you can render it to create your video. To do this, go to “Composition/Add to render queue,” or press CTRL/Apple + M. You will see your render queue:

Choose the output file type for your video. The default, “H.264 - Match Render Settings - 15 Mbps” will produce an MP4 file, and is a good choice for any video destined for the web. By clicking the dropdown next to this, you can take a look at the other saved presets After Effects offers:

Choose the right file type for you. If you aren’t sure what file you need, consider where the video needs to play, and check the specifications. You should find the answer you need. If none of the presets fit that specification, you can create your own template by selecting the “Make Template…” option at the bottom.
Once you have set up your video settings, you can set your output location; where you want your video to save. Then, hit “render” to save your video.
And that’s it! I hope this basic introduction to After Effects has been helpful. Go forward and play around with the interface until you are familiar with all the tools outlined in this article.
Please leave a comment if you have any questions.
---
title: Easily Export .NET MAUI DataGrid to Specific PDF Page
published: true
date: 2024-07-02 10:00:00 UTC
tags: dotnetmaui, datagrid, export, maui
canonical_url: https://www.syncfusion.com/blogs/post/export-maui-grid-to-specific-pdf-page
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f6uche1fo3tor45y4uvs.png
---
**TL;DR:** Learn to export the Syncfusion .NET MAUI DataGrid content to a specific page in a PDF document. This guide includes code examples and setup instructions for developers.
Syncfusion [.NET MAUI DataGrid](https://www.syncfusion.com/maui-controls/maui-datagrid ".NET MAUI DataGrid") control displays and manipulates data in a tabular view. Its rich feature set includes different column types, sorting, autofit columns and rows, and styling all elements.
This blog will guide you through the steps to export the .NET MAUI DataGrid to a specific PDF page.
**Note:** If you’re new to our Syncfusion .NET MAUI DataGrid, please refer to the [getting started documentation](https://help.syncfusion.com/maui/datagrid/getting-started "Getting started with .NET MAUI DataGrid").
## Exporting .NET MAUI DataGrid to a specific PDF page
The [Syncfusion.Maui.DataGridExport](https://www.nuget.org/packages/Syncfusion.Maui.DataGridExport "Syncfusion.Maui.DataGridExport NuGet package") NuGet package offers all the PDF exporting functionalities, allowing us to utilize PDF packages to export the DataGrid into PDF formats.
So, include the **Syncfusion.Maui.DataGridExport** package in your app NuGet.
Then, add the Syncfusion .NET MAUI DataGrid control and an **Export To PDF** button to your xaml page.
```xml
<StackLayout>
<Button Text="Export To PDF" WidthRequest="200" HeightRequest="50"
Clicked="OnExportToPDF" />
<syncfusion:SfDataGrid x:Name="dataGrid"
Margin="20"
VerticalOptions="FillAndExpand"
ItemsSource="{Binding OrderInfoCollection}"
GridLinesVisibility="Both"
HeaderGridLinesVisibility="Both"
AutoGenerateColumnsMode="None"
SelectionMode="Multiple"
ColumnWidthMode="Auto">
<syncfusion:SfDataGrid.DefaultStyle>
<syncfusion:DataGridStyle RowBackground="LightBlue" HeaderRowBackground="LightGoldenrodYellow"/>
</syncfusion:SfDataGrid.DefaultStyle>
<syncfusion:SfDataGrid.Columns>
<syncfusion:DataGridNumericColumn Format="D"
HeaderText="Order ID"
MappingName="OrderID">
</syncfusion:DataGridNumericColumn>
<syncfusion:DataGridTextColumn HeaderText="Customer ID"
MappingName="CustomerID">
</syncfusion:DataGridTextColumn>
<syncfusion:DataGridTextColumn MappingName="Customer"
HeaderText="Customer">
</syncfusion:DataGridTextColumn>
<syncfusion:DataGridTextColumn HeaderText="Ship City"
MappingName="ShipCity">
</syncfusion:DataGridTextColumn>
<syncfusion:DataGridTextColumn HeaderText="Ship Country"
MappingName="ShipCountry">
</syncfusion:DataGridTextColumn>
</syncfusion:SfDataGrid.Columns>
</syncfusion:SfDataGrid>
</StackLayout>
```
After executing the above code example, the UI will look like the following image.


All the PDF exporting methods are available in the [DataGridPdfExportingController](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.DataGrid.Exporting.DataGridPdfExportingController.html "DataGridPdfExportingController class of .NET MAUI DataGrid") class. Using the [ExportToPdf](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.DataGrid.Exporting.DataGridPdfExportingController.html#Syncfusion_Maui_DataGrid_Exporting_DataGridPdfExportingController_ExportToPdf_Syncfusion_Maui_DataGrid_SfDataGrid_ "ExportToPdf method of .NET MAUI DataGrid") or [ExportToPdfGrid](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.DataGrid.Exporting.DataGridPdfExportingController.html#Syncfusion_Maui_DataGrid_Exporting_DataGridPdfExportingController_ExportToPdfGrid_Syncfusion_Maui_DataGrid_SfDataGrid_Syncfusion_Maui_Data_ICollectionViewAdv_Syncfusion_Maui_DataGrid_Exporting_DataGridPdfExportingOption_Syncfusion_Pdf_PdfDocument_ "ExportToPdfGrid method of .NET MAUI DataGrid") method, we can export the DataGrid content to PDF.
You can add the required number of pages to the PDF document and mention the page number on which you need to render the DataGrid data using the [StartPageIndex](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.DataGrid.Exporting.DataGridPdfExportingOption.html#Syncfusion_Maui_DataGrid_Exporting_DataGridPdfExportingOption_StartPageIndex "StartPageIndex property of .NET MAUI DataGrid") property.
```csharp
private void OnExportToPDF(object sender, EventArgs e)
{
MemoryStream stream = new MemoryStream();
DataGridPdfExportingController pdfExport = new DataGridPdfExportingController();
var pdfDocument = new PdfDocument()
{
PageSettings =
{
Orientation = PdfPageOrientation.Landscape
}
};
pdfDocument.Pages.Add();
pdfDocument.Pages.Add();
pdfDocument.Pages.Add();
DataGridPdfExportingOption option = new DataGridPdfExportingOption() { StartPageIndex = 1, PdfDocument = pdfDocument};
    var pdfDoc = pdfExport.ExportToPdf(this.dataGrid, option);
    pdfDoc.Save(stream);
    stream.Position = 0;
    SaveService saveService = new();
    saveService.SaveAndView("Export Feature.pdf", "application/pdf", stream);
}
```
Here, we’ll export the .NET MAUI DataGrid to the second page in a PDF document. Refer to the following output image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Exporting-.NET-MAUI-DataGrid-to-a-specific-page-in-a-PDF.png" alt="Exporting .NET MAUI DataGrid to a specific page in a PDF" style="width:100%">
<figcaption>Exporting .NET MAUI DataGrid to a specific page in a PDF</figcaption>
</figure>
**Note:** You can also [export the DataGrid to a specific point in a PDF document](https://help.syncfusion.com/maui/datagrid/export-to-pdf#startpoint "Exporting .NET MAUI DataGrid to a specific starting point on a PDF page").
## GitHub reference
Also, check out the [Exporting .NET MAUI DataGrid to a specific page in a PDF document GitHub demo](https://github.com/SyncfusionExamples/Exporting-datagrid-to-specific-PDF-page-in-.NET-MAUI "Exporting .NET MAUI DataGrid to a specific page in a PDF document GitHub demo").
## Conclusion
Thanks for reading! In this blog, we have seen how to export the Syncfusion [.NET MAUI DataGrid](https://www.syncfusion.com/maui-controls/maui-datagrid ".NET MAUI DataGrid") to a specific page in a PDF. We believe that the steps outlined here have been valuable and enlightening.
If you’re interested in delving deeper into our .NET MAUI framework, you can easily access and evaluate it by downloading [Essential Studio for .NET MAUI](https://www.syncfusion.com/downloads/maui "Free evaluation of the Essential Studio for .NET MAUI") for free. Our esteemed customers can acquire the latest version of Essential Studio from the [License and Downloads ](https://www.syncfusion.com/account/downloads "Essential Studio license and downloads page")page.
Should you need any assistance or have further inquiries, contact us via our [support forum](https://www.syncfusion.com/forums "Syncfusion Support Forum"), [support portal](https://support.syncfusion.com/ "Syncfusion Support Portal"), or [feedback portal](https://www.syncfusion.com/feedback/ "Syncfusion Feedback Portal"). We are committed to providing help and support in every possible way.
## Related blogs
- [Exporting DataGrid to PDF Made Easy in .NET MAUI](https://www.syncfusion.com/blogs/post/export-dotnet-maui-datagrid-to-pdf "Blog: Exporting DataGrid to PDF Made Easy in .NET MAUI")
- [Syncfusion Essential Studio 2024 Volume 2 Is Here!](https://www.syncfusion.com/blogs/post/syncfusion-essential-studio-2024-vol2 "Blog: Syncfusion Essential Studio 2024 Volume 2 Is Here!")
- [Introducing the 12th Set of New .NET MAUI Controls and Features](https://www.syncfusion.com/blogs/post/syncfusion-dotnet-maui-2024-volume-2 "Blog: Introducing the 12th Set of New .NET MAUI Controls and Features")
- [Introducing the New .NET MAUI Digital Gauge Control](https://www.syncfusion.com/blogs/post/dotnetmaui-digital-gauge-control "Blog: Introducing the New .NET MAUI Digital Gauge Control")
# Event storming, and then what?

### References
- [Github Repo](https://github.com/sabatinim/mars-rover-ddd)
- [Event Storming](https://www.eventstorming.com/)
- [Ticino SW craft sessions](https://www.youtube.com/@ticinosoftwarecraft/streams)
### Overview
Some time ago, I read a fascinating book titled *Introducing EventStorming: An Act of Deliberate Collective Learning* by Alberto Brandolini.
This book, filled with concrete examples, discusses the event-storming technique for modeling the business processes underlying a digital product to be implemented through software.
What truly captivated me and sparked my curiosity about this methodology is its ability to model software architecture and reveal **organizational limitations** within a company. In this article, I will demonstrate with a concrete example how event-storming can effectively bridge the gap between business and software development.
### Mars Rover Kata
The exercise I used as a reference is the [mars rover Kata](https://kata-log.rocks/mars-rover-kata).
Its requirements involve implementing software for a rover to receive commands for movement. The rover can move forward, rotate left and right, and has constraints related to the surface it can traverse and potential obstacles it might encounter.
Before starting to implement the software, I conducted an Event Storming session. Of course, this came with significant limitations: I was the only participant, I am not particularly experienced with the technique, and the session was aimed only at implementing the software (event storming can be applied at several levels of abstraction).
I used basic elements to model a business process, including commands, events, aggregates, policies, and projections. The definitions of these components provided in the book are particularly enlightening:
- **Command**: A decision made by the user.
- **Aggregate**: Information necessary for making decisions.
- **Event**: A state transition mapped somewhere.
- **Projections**: Tools to support the decision-making process in the user's brain.
- **Policy**: Triggers that respond whenever something happens
Regarding the flow, we can see how these components interact with each other in the picture below:

Everything starts with a command that triggers an action on a specific aggregate. This generates an event; by listening to the event, we can trigger further commands using policies or build views using projections.
Using these building blocks, I attempted to model the Mars Rover design with Event Storming.
### Event Storming
Below, we can see the first iteration of the process.

First, I thought the rover needed to be powered on and set with a starting point and direction. The aggregate that came to mind here is the *Mars Rover* itself.
Once somebody starts it, it will be in "Started" mode and ready to move. Next, the rover can receive commands to turn right, turn left, and move forward. Depending on the presence of obstacles, the rover can either continue moving or encounter obstacles.
According to the exercise, in case of an obstacle, the rover should "shut down." Thus, I used a **policy** to react to the "ObstacleFound" event with a command instructing the rover to shut down.
From the first iterations, I noticed how intuitive it was to think in terms of Commands, Events, Aggregates, and Policies. I also used Projections to create datasets for future analysis, which I will discuss during the implementation phase.
From a modeling perspective, this technique is extremely useful. I could present the process to any product owner or domain expert or even implement it directly with them (as described in the book).
I am confident that in a very short time, we could create a dictionary of common terms understandable to all stakeholders involved (both business and software). Now, let's move on to the implementation phase.
### Implementation
You can find the solution code [here](https://github.com/sabatinim/mars-rover-ddd/tree/main).
Upon opening the project, you'll find a package named **ddd**. In this package, I have included the basic elements described earlier:
``` python
import dataclasses
from uuid import uuid4


class Command:
pass
class Event:
pass
@dataclasses.dataclass(frozen=True)
class AggregateId:
value: str
@classmethod
def new(cls):
return cls(value=str(uuid4()))
@dataclasses.dataclass
class Aggregate:
id: AggregateId
version: int
class CommandHandler:
def handle(self, command: Command) -> Event:
pass
class Policy:
def apply(self, event) -> Command:
pass
class Projection:
def project(self, event):
pass
```
I reused the same names to create a common base linking the modeling and implementation parts. I also implemented an in-memory repository responsible for loading and saving an Aggregate object, and a very simple in-memory *command dispatcher*.
The *command dispatcher* receives a series of commands as input and applies command handlers, policies, and projections according to how it was constructed.
For this exercise, the implementation is in-memory, but you could consider implementing it with remote queues.
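Before looking at the dispatcher, note that the in-memory repository can be as small as a dictionary keyed by aggregate id. A minimal sketch follows (simplified to plain string ids, unlike the repo's `AggregateId` value object):

```python
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class Aggregate:
    id: str
    version: int = 0


class InMemoryRepository:
    """Loads and saves aggregates, bumping the version on each save."""

    def __init__(self):
        self._storage: Dict[str, Aggregate] = {}

    def get_by_id(self, aggregate_id: str) -> Optional[Aggregate]:
        return self._storage.get(aggregate_id)

    def save(self, aggregate: Aggregate) -> None:
        aggregate.version += 1
        self._storage[aggregate.id] = aggregate


repo = InMemoryRepository()
repo.save(Aggregate(id="rover-1"))
print(repo.get_by_id("rover-1").version)  # 1
```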
Here is our command dispatcher:
```python
from typing import Dict, List, Type


class InMemoryCommandDispatcher:
def __init__(self,
command_handlers: Dict[Type[Command], CommandHandler],
projections: Dict[Type[Event], List[Projection]],
policies: Dict[Type[Event], List[Policy]]):
self.command_handlers = command_handlers
self.projections = projections
self.policies = policies
self.commands: List[Command] = []
def submit(self, commands: List[Command]):
for c in commands:
self.commands.append(c)
def run(self):
while self.commands:
command = self.commands.pop(0)
print(f"[COMMAND] {command}")
event: Event = self.command_handlers[type(command)].handle(command)
if event:
print(f"[EVENT] {event} generated")
event_policies = self.policies.get(type(event), [])
for policy in event_policies:
new_command = policy.apply(event)
if new_command:
self.commands.append(new_command)
for projection in self.projections.get(type(event), []):
projection.project(event)
```
At this point, I have all the necessary components to build the model defined during the Event Storming session. Therefore, I implemented the commands, events, aggregate, and command handlers, as shown in the function defined to build the process.
```python
def create_command_dispatcher(mars_rover_repo: MarsRoverRepository,
mars_rover_storage: List[MarsRoverId],
path_projection_storage: List[Dict],
obstacles_projection_storage: List[Dict]) -> InMemoryCommandDispatcher:
turn_right_command_handler = TurnRightCommandHandler(repo=mars_rover_repo)
turn_left_command_handler = TurnLeftCommandHandler(repo=mars_rover_repo)
move_command_handler = MoveCommandHandler(repo=mars_rover_repo)
start_command_handler = StartMarsRoverCommandHandler(repo=mars_rover_repo)
turn_off_command_handler = TurnOffCommandHandler(repo=mars_rover_repo)
notify_obstacle_command_handler = NotifyObstacleCommandHandler()
rover_path_projection = MarsRoverPathProjection(repo=mars_rover_repo, storage=path_projection_storage)
rover_start_projection = MarsRoverStartProjection(repo=mars_rover_repo,
paths_storage=path_projection_storage,
mars_rover_storage=mars_rover_storage)
rover_obstacles_projection = MarsRoverObstaclesProjection(storage=obstacles_projection_storage)
obstacle_found_policy = NotifyObstacleFoundPolicy()
turn_off_policy = TurnOffPolicy()
return (InMemoryCommandDispatcherBuilder()
.with_command_handler(TurnRight, turn_right_command_handler)
.with_command_handler(TurnLeft, turn_left_command_handler)
.with_command_handler(Move, move_command_handler)
.with_command_handler(StartMarsRover, start_command_handler)
.with_command_handler(TurnOff, turn_off_command_handler)
.with_command_handler(NotifyObstacle, notify_obstacle_command_handler)
.with_projection(MarsRoverStarted, rover_start_projection)
.with_projection(MarsRoverMoved, rover_path_projection)
.with_projection(ObstacleFound, rover_obstacles_projection)
.with_policy(ObstacleFound, obstacle_found_policy)
.with_policy(ObstacleFound, turn_off_policy)
.build())
```
As you can see, this function creates the dispatcher by configuring the process modeled during the Event Storming session. Specifically, it associates commands with their respective command handlers, as well as policies and projections. It is straightforward to understand the actions associated with commands and events.
#### Command Handlers
Let's take a look at how I manage a command using a **command handler** for controlling the Rover's movement.
```python
class MoveCommandHandler(CommandHandler):
def __init__(self, repo: MarsRoverRepository):
self.repo = repo
def handle(self, command: Move) -> MarsRoverMoved:
mars_rover: MarsRover = self.repo.get_by_id(command.id)
event = mars_rover.move()
self.repo.save(mars_rover)
return event
```
The class uses the repository to load the MarsRover aggregate into memory and then calls the **move()** function to change its state.
The event is then emitted and the aggregate's new state is saved.
#### Domain
Regarding the domain part, the figure below shows the objects used to model it.

Let me also show the MarsRover **Aggregate** method contracts:
```python
@dataclasses.dataclass
class MarsRover(Aggregate):
actual_point: Point
direction: Direction
world: World
status: MarsRoverStatus
def start(self) -> MarsRoverStarted:
...
def turn_off(self) -> MarsRoverTurnedOff:
...
def turn_right(self) -> MarsRoverMoved | None:
...
def turn_left(self) -> MarsRoverMoved | None:
...
def move(self) -> MarsRoverMoved | ObstacleFound | None:
...
```
It contains all the actions the rover can perform and the logic to change its internal state, based on its current point and direction and on the grid/world in which the rover is moving.
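As a taste of what hides behind those elided method bodies, turning the rover can be modeled as a lookup in a clockwise ordering of directions. This is an illustrative sketch, not the repository's actual implementation:

```python
from enum import Enum


class Direction(Enum):
    NORTH = "N"
    EAST = "E"
    SOUTH = "S"
    WEST = "W"


# Clockwise order: turning right steps forward, turning left steps back
_CLOCKWISE = [Direction.NORTH, Direction.EAST, Direction.SOUTH, Direction.WEST]


def turn_right(direction: Direction) -> Direction:
    return _CLOCKWISE[(_CLOCKWISE.index(direction) + 1) % 4]


def turn_left(direction: Direction) -> Direction:
    return _CLOCKWISE[(_CLOCKWISE.index(direction) - 1) % 4]


print(turn_right(Direction.NORTH).value)  # E
print(turn_left(Direction.NORTH).value)   # W
```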
#### Services
To orchestrate everything, I implemented a service.
Below you can see the service being used in the end-to-end tests I developed:
```python
class TestE2E(unittest.TestCase):
def test_execute_some_commands(self):
repo = MarsRoverRepository()
paths = []
obstacles = []
mars_rover_ids = []
runner = (
MarsRoverRunner(repository=repo,
path_projection_storage=paths,
obstacles_projection_storage=obstacles,
mars_rover_projection_storage=mars_rover_ids)
.with_initial_point(x=0, y=0)
.with_initial_direction(direction=Direction.NORTH)
.with_world(world_dimension=(4, 4),
obstacles=[])
)
runner.start()
id = mars_rover_ids[0]
runner.run(id, "RMLMM")
actual: MarsRover = repo.get_by_id(MarsRoverId(id))
self.assertEqual("1:2:N", actual.coordinate())
self.assertEqual("MOVING", actual.status.value)
expected_path = ["0:0:N", "0:0:E", "1:0:E", "1:0:N", "1:1:N", "1:2:N"]
self._assert_paths(expected=expected_path, actual=paths)
self.assertListEqual([], obstacles)
```
In this test, I set the starting point, the grid on which the Rover moves, and its initial direction.
After that, the rover is started by calling **runner.start()** and receives commands in string format via **runner.run(id, "RMLMM")**. The rover id must be passed so the aggregate can be retrieved from the repository; the id itself comes from a projection populated when the rover was started.
Regarding assertions, I used the datasets generated by the path projections, which are also implemented in-memory to keep interactions simple.
Below are the logs of commands and events generated during the flow:
```
_e2e.py::TestE2E::test_execute_some_commands
[COMMAND] StartMarsRover(initial_point=Point(x=0, y=0), initial_direction=<Direction.NORTH: 'N'>, world=World(dimension=(4, 4), obstacles=Obstacles(points=[])))
[EVENT] MarsRoverStarted(id=MarsRoverId(value='aea5e69c-4f08-40db-bf44-097b5ae36380'))
[COMMAND] TurnRight(id=MarsRoverId(value='aea5e69c-4f08-40db-bf44-097b5ae36380'))
[EVENT] MarsRoverMoved(id=MarsRoverId(value='aea5e69c-4f08-40db-bf44-097b5ae36380'))
[COMMAND] Move(id=MarsRoverId(value='aea5e69c-4f08-40db-bf44-097b5ae36380'))
[EVENT] MarsRoverMoved(id=MarsRoverId(value='aea5e69c-4f08-40db-bf44-097b5ae36380'))
[COMMAND] TurnLeft(id=MarsRoverId(value='aea5e69c-4f08-40db-bf44-097b5ae36380'))
[EVENT] MarsRoverMoved(id=MarsRoverId(value='aea5e69c-4f08-40db-bf44-097b5ae36380'))
[COMMAND] Move(id=MarsRoverId(value='aea5e69c-4f08-40db-bf44-097b5ae36380'))
[EVENT] MarsRoverMoved(id=MarsRoverId(value='aea5e69c-4f08-40db-bf44-097b5ae36380'))
[COMMAND] Move(id=MarsRoverId(value='aea5e69c-4f08-40db-bf44-097b5ae36380'))
[EVENT] MarsRoverMoved(id=MarsRoverId(value='aea5e69c-4f08-40db-bf44-097b5ae36380'))
```
Basically, the rover receives the command string "RMLMM" as input, and each character is translated into a command as the flow executes.
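The translation itself can be sketched as a simple character-to-command mapping. The `parse_commands` helper below is hypothetical, shown only to illustrate the idea:

```python
from typing import List, Tuple


def parse_commands(rover_id: str, raw: str) -> List[Tuple[str, str]]:
    """Map each character of e.g. 'RMLMM' to a (command name, rover id) pair."""
    mapping = {"R": "TurnRight", "L": "TurnLeft", "M": "Move"}
    return [(mapping[ch], rover_id) for ch in raw]


print(parse_commands("rover-1", "RMLMM"))
# [('TurnRight', 'rover-1'), ('Move', 'rover-1'), ('TurnLeft', 'rover-1'),
#  ('Move', 'rover-1'), ('Move', 'rover-1')]
```

In the actual repo, each pair would instead be one of the `Command` dataclasses submitted to the dispatcher.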
#### Path Projection
I used a path projection to store the coordinates every time the rover moves.
As we saw in the previous section, this lets us assert the entire rover path:
```python
expected_path = ["0:0:N", "0:0:E", "1:0:E", "1:0:N", "1:1:N", "1:2:N"]
self._assert_paths(expected=expected_path, actual=paths)
```
Therefore, for this exercise I implemented an in-memory projection that stores path data in a list of dictionaries:
```python
class MarsRoverPathProjection(Projection):
def __init__(self, repo: MarsRoverRepository, storage: List[Dict]):
self.repo = repo
self.storage = storage
def project(self, event: MarsRoverMoved):
mars_rover: MarsRover = self.repo.get_by_id(event.id)
raw = {"id": mars_rover.id.value, "actual_point": mars_rover.coordinate()}
self.storage.append(raw)
```
This class is triggered by the **MarsRoverMoved** event: it loads the aggregate and builds a specific read model.
In this case, the read model contains the path traveled by the rover. The nice thing about this design is that the projection is completely decoupled from the aggregate's state change and can easily be bound to the specific event that represents that change.
#### Policy
At this point, when the Rover encounters an obstacle, it must automatically shut down.
To achieve this, I implemented a **shutdown Policy**. This policy takes the **ObstacleFound** event as input and generates a **TurnOff** command, which is then handled by its own *command handler*, as we saw earlier.
Below you can see the TurnOffPolicy implementation:
```python
class TurnOffPolicy(Policy):
def apply(self, event: ObstacleFound) -> Command:
return TurnOff(id=event.id)
```
### Additional requirement
I tried to push the design further by considering what would happen if I had an additional requirement, such as sending a notification when the Rover encounters an obstacle.
First, I reviewed the initial design event storming model and I integrated the notification feature, resulting in this new version:

As you can see from the diagram, all I had to do was define a new Policy based on the **ObstacleFound** event, which was responsible for creating the command to notify the obstacle found.
```python
class NotifyObstacleFoundPolicy(Policy):
def apply(self, event: ObstacleFound) -> Command:
return NotifyObstacle(message=f"Rover {event.id.value} hit obstacle")
```
After the command is created, a dedicated command handler manages the notification flow.
Regarding the Policy, it is initialized within the builder of the command dispatcher, which effectively configures the process.
Here, I simply added the policy without needing to change or refactor existing code.
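A minimal sketch of that wiring might look like this; note that the `Dispatcher` builder and its method names are assumptions for illustration, not the exercise's actual API:

```python
# Illustrative sketch: the builder API and names are assumed, not the real codebase.
from collections import defaultdict


class Dispatcher:
    """Routes each event to the policies registered for its type."""

    def __init__(self):
        self._policies = defaultdict(list)

    def register_policy(self, event_type, policy):
        self._policies[event_type].append(policy)
        return self  # builder-style chaining

    def dispatch(self, event):
        # Every policy turns the event into a follow-up command.
        return [policy.apply(event) for policy in self._policies[type(event)]]


class ObstacleFound:
    def __init__(self, rover_id):
        self.id = rover_id


class TurnOff:
    def __init__(self, id):
        self.id = id


class NotifyObstacle:
    def __init__(self, message):
        self.message = message


class TurnOffPolicy:
    def apply(self, event):
        return TurnOff(id=event.id)


class NotifyObstacleFoundPolicy:
    def apply(self, event):
        return NotifyObstacle(message=f"Rover {event.id} hit obstacle")


# Adding the notification behavior is just one more registration:
dispatcher = (
    Dispatcher()
    .register_policy(ObstacleFound, TurnOffPolicy())
    .register_policy(ObstacleFound, NotifyObstacleFoundPolicy())
)
commands = dispatcher.dispatch(ObstacleFound(rover_id="rover-1"))
```

With this shape, the new `NotifyObstacleFoundPolicy` is registered next to the existing one, and no existing handler or policy has to change.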
The test below shows how we can write an end-to-end test that sets up obstacles:
```python
def test_hit_obstacle(self):
repo = MarsRoverRepository()
paths = []
obstacles = []
mars_rover_ids = []
runner = (
MarsRoverRunner(repository=repo,
path_projection_storage=paths,
obstacles_projection_storage=obstacles,
mars_rover_projection_storage=mars_rover_ids)
.with_initial_point(x=0, y=0)
.with_initial_direction(direction=Direction.NORTH)
.with_world(world_dimension=(4, 4),
obstacles=[(2, 2)])
)
runner.start()
id = mars_rover_ids[0]
runner.run(id, "RMMLMMMMMM")
obstacles_found = [o["obstacle"] for o in obstacles]
self.assertEqual([(2, 2)], obstacles_found)
```
It's easy to set up obstacles and assert that they are found by checking the in-memory obstacle projection storage. I created an obstacle projection to collect where obstacles were found by the rover during its travel. It was easy because I had the **ObstacleFound** event and just needed to listen for it and project it.
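The obstacle projection itself is not shown in the article; modeled on the `MarsRoverPathProjection` above, a sketch could look like this (the event fields are simplified assumptions, but the `"obstacle"` key matches the one read in the test):

```python
# Sketch of the obstacle projection, modeled on MarsRoverPathProjection above.
# The ObstacleFound event and its fields are simplified assumptions.
class ObstacleFound:
    def __init__(self, rover_id, coordinate):
        self.id = rover_id
        self.coordinate = coordinate


class MarsRoverObstacleProjection:
    """Listens for ObstacleFound and records where obstacles were hit."""

    def __init__(self, storage):
        self.storage = storage  # a list of dicts, like the path projection

    def project(self, event):
        self.storage.append({"id": event.id, "obstacle": event.coordinate})


obstacles = []
projection = MarsRoverObstacleProjection(storage=obstacles)
projection.project(ObstacleFound(rover_id="rover-1", coordinate=(2, 2)))
```

The test can then read `o["obstacle"]` from this storage, exactly as shown above.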
Below are the logs of the Commands and Events generated during the test. As you can see, the rover found an obstacle during its travel. After that it was turned off and the obstacle was notified.
```
test/test_e2e.py::TestE2E::test_hit_obstacle
[COMMAND] StartMarsRover(initial_point=Point(x=0, y=0), initial_direction=<Direction.NORTH: 'N'>, world=World(dimension=(4, 4), obstacles=Obstacles(points=[Point(x=2, y=2)])))
[EVENT] MarsRoverStarted(id=MarsRoverId(value='649d914f-967c-41ae-b3ca-ffede9ca9e7d'))
[COMMAND] TurnRight(id=MarsRoverId(value='649d914f-967c-41ae-b3ca-ffede9ca9e7d'))
[EVENT] MarsRoverMoved(id=MarsRoverId(value='649d914f-967c-41ae-b3ca-ffede9ca9e7d'))
[COMMAND] Move(id=MarsRoverId(value='649d914f-967c-41ae-b3ca-ffede9ca9e7d'))
[EVENT] MarsRoverMoved(id=MarsRoverId(value='649d914f-967c-41ae-b3ca-ffede9ca9e7d'))
[COMMAND] Move(id=MarsRoverId(value='649d914f-967c-41ae-b3ca-ffede9ca9e7d'))
[EVENT] MarsRoverMoved(id=MarsRoverId(value='649d914f-967c-41ae-b3ca-ffede9ca9e7d'))
[COMMAND] TurnLeft(id=MarsRoverId(value='649d914f-967c-41ae-b3ca-ffede9ca9e7d'))
[EVENT] MarsRoverMoved(id=MarsRoverId(value='649d914f-967c-41ae-b3ca-ffede9ca9e7d'))
[COMMAND] Move(id=MarsRoverId(value='649d914f-967c-41ae-b3ca-ffede9ca9e7d'))
[EVENT] MarsRoverMoved(id=MarsRoverId(value='649d914f-967c-41ae-b3ca-ffede9ca9e7d'))
[COMMAND] Move(id=MarsRoverId(value='649d914f-967c-41ae-b3ca-ffede9ca9e7d'))
[EVENT] ObstacleFound(id=MarsRoverId(value='649d914f-967c-41ae-b3ca-ffede9ca9e7d'), coordinate=(2, 2))
[COMMAND] NotifyObstacle(message='Rover 649d914f-967c-41ae-b3ca-ffede9ca9e7d hit obstacle')
Rover 649d914f-967c-41ae-b3ca-ffede9ca9e7d hit obstacle
[EVENT] ObstacleNotified()
[COMMAND] TurnOff(id=MarsRoverId(value='649d914f-967c-41ae-b3ca-ffede9ca9e7d'))
[EVENT] MarsRoverTurnedOff(id=MarsRoverId(value='649d914f-967c-41ae-b3ca-ffede9ca9e7d'))
```
### Evolutionary Architecture
Regarding the notification, I did not proceed further in the exercise. This is a typical example of what happens every day in our work: we create something that may have follow-up actions if it proves valuable.
Interestingly, the model already decouples this new concept (the notification), which we could potentially develop into a dedicated product line, with its own aggregate. This could become an external system that reads the ObstacleFound event from a queue and generates the notifications.
If we need to scale up because the notification system requires product enhancements, we could create a dedicated team to handle this domain.
This example is just to illustrate how such a design approach not only helps evolve our architecture but also guides it to support product development and organizational changes.
Some time ago, we had [Nicola Moretto](https://www.linkedin.com/in/nicola-moretto-ba197040/) speak at our [meetup](https://www.meetup.com/ticino-software-craft/) about *product development consistency.*

He showed us this slide, which I find very significant. He explained that *Architecture*, *Business*, and *Organization* should go hand in hand with product needs and must be easily modifiable to manage product increments.
How often do we encounter these situations?
- Implement features on legacy architectures.
- Implement features but can't do it end-to-end because we depend on other teams.
- Implement features whose business value is uncertain.
To address these types of situations, the *architect* must act as the point of contact between the business and the organization needed to support it, by implementing an architecture that is most easily adaptable to the problem at hand. | maverick198 |
1,909,071 | 5 Things I Will Never Do Again as a CTO or Senior manager | Becoming a good manager is not only about learning what to do. It is also what not to do. These are... | 0 | 2024-07-09T07:22:06 | https://simplecto.com/5-things-i-will-never-do-again-as-a-cto-or-senior-manager/ | leadership | ---
title: 5 Things I Will Never Do Again as a CTO or Senior manager
published: true
date: 2024-07-02 14:16:31 UTC
tags: leadership
canonical_url: https://simplecto.com/5-things-i-will-never-do-again-as-a-cto-or-senior-manager/
---
Becoming a good manager is not only about learning what to do; it is also about learning what not to do. These are some of my hardest-won lessons, written as a diary to myself.
### I will never again Be the Sole Agent of Culture Creation or Change
As a senior manager or CTO, I've learned that driving company culture isn't my primary responsibility. While I can and should influence the engineering culture within my team, overall company culture should be shaped and driven by the CEO. In a previous role as a non-co-founding CTO, I made the mistake of trying to take on the mantle of culture agent for the entire company. This role belongs to the CEO, and if they don't take it up, it's not my responsibility to fill that gap. My focus should remain on fostering a positive and productive culture within the engineering department, aligning with the broader vision set by the leadership.
### I will never again Start Greenfield Projects with Microservices
For greenfield projects, I’ve learned the hard way that starting with microservices can lead to unnecessary complexity and overhead. Instead, I will always begin with a templated monolith and iterate from there. This approach allows for faster development and easier debugging in the early stages. As the project grows and the need for scalability becomes more apparent, we can then consider breaking the monolith into microservices. This strategy ensures we don't prematurely optimize and can adapt more flexibly to the project’s evolving requirements.
### I will never again Seek Out 10x Engineers
I’ve come to believe that the concept of a "10x engineer" is more myth than reality—hallucinations of Silicon Valley fever dreams and Substack newsletters. Instead of chasing these unicorns, I prefer to take a "Moneyball" approach, akin to Brad Pitt's character in the movie, and focus on building a team of solid performers who can execute consistently and predictably. This approach ensures that the organization can effectively absorb their output, and customers can handle the changes and deliveries we make at a steady, reliable pace. It’s about building a balanced team where each member contributes to a cohesive, well-functioning unit rather than relying on the perceived exceptional output of a few individuals.
### I will never again Dictate How Things Should Be Done
I’ve learned that as a CTO, my role is not to dictate how my team should accomplish tasks, but to clearly define and enforce the desired outcomes. My focus should be on setting clear goals and objectives, ensuring everyone understands what success looks like. This empowers the team to leverage their expertise and creativity to find the best path to achieve those outcomes. By emphasizing results over methods, I foster a more innovative and motivated team that is aligned with our overall vision and goals.
Additionally, I understand that my team might not always know how to achieve these outcomes. Therefore, I support them by providing learning and development opportunities. This could involve buying them books, enrolling them in courses, or offering other resources that help them build the skills needed to reach our goals. Supporting my team in this way ensures they have the knowledge and tools necessary to succeed, and it demonstrates my commitment to their professional growth.
### I will never again Blur the Lines Between Professional and Personal Relationships
I’ve learned the importance of maintaining professional distance with my team members. I will never again become close friends or deeply engage with them on a personal level. While building rapport and understanding is crucial, it is equally important to preserve professional boundaries.
From a psychological perspective, blurring these lines can lead to various issues. Personal friendships within a professional context can create biases, favoritism, and conflict of interest, which can undermine team dynamics and decision-making. It can also make it challenging to provide objective feedback or make tough decisions that may impact those personal relationships.
Moreover, maintaining professional distance helps in preserving authority and respect. When the boundaries are clear, it becomes easier to lead effectively, as team members are less likely to question decisions based on personal feelings.
In essence, while it’s important to be empathetic and supportive as a leader, it’s equally crucial to keep the relationship professional to ensure fairness, objectivity, and the ability to lead without personal entanglements clouding judgment. | heysamtexas |
1,909,291 | Why React App May Be Broken By Google Translate Extension | Introduction While working at my company, I was reviewing Sentry issues that caused the... | 0 | 2024-07-10T06:16:52 | https://dev.to/ivanturashov/preventing-react-crashes-handling-google-translate-5bi0 | react, frontend, webdev, reactjsdevelopment | ## Introduction
While working at my company, I was reviewing Sentry issues that caused the entire React application to crash. I encountered several similar errors such as `Failed to execute 'removeChild' on 'Node': The node to be removed is not a child of this node` and `Failed to execute 'insertBefore' on 'Node': The node before which the new node is to be inserted is not a child of this node`. According to the Sentry logs, these errors occurred for different users on the same pages during similar but fairly ordinary actions. After a little research and performing the same actions as the users while using Google Translate, I was able to reproduce the issue on these pages. In this article, I'll show why this happens and suggest some ways to fix it.
## TLDR;
Google Translate wraps each text node in two `font` elements, which makes the DOM and the Virtual DOM incompatible.
## How Google Translate Does Translation
Let's take a button as an example, which by default renders text and a heart icon. It can also appear as a small button, rendering only the heart icon by removing the text based on the `isSmall` prop.
``` Typescript
// FavoriteButton.tsx
const FavoriteButton: React.FC<{
isSmall?: boolean;
}> = ({ isSmall = false }) => {
return (
<button>
{!isSmall && 'Like it'}
<img
src="/assets/favorite.svg"
alt="favorite-icon"
/>
</button>
);
};
```
Let's create a simple parent container that passes the desired state to our button. To do this, we'll add a simple true/false state switch in the form of a checkbox. The component passes the state to the Button.
``` Typescript
// App.tsx
const App: React.FC = () => {
const [isIconOnly, setIsIconOnly] = useState(false);
return (
<div>
<div>
<label>
<input
type="checkbox"
onChange={() => setIsIconOnly(!isIconOnly)}
/>
Use small state
</label>
</div>
<div>
<FavoriteButton isSmall={isIconOnly} />
</div>
</div>
);
};
```

Now, let's schematically imagine how our example will look in the tree. We'll focus only on the right part, namely the Button component.

Next, let's see how the right part will look in the Virtual DOM. Note, this is not an exact representation of the VDOM, but just a rough scheme. We're interested in what React will check for rendering the text in Button.

Now we'll turn on Google Translate and translate the application into any language. I'll take Estonian.

With the translation enabled, let's try clicking the checkbox. Voilà, we see a white screen and an error in the console: `Failed to execute 'removeChild' on 'Node'`.
## What Happened?
At first glance, it may seem that the translation plugin only replaced the English text with Estonian. If that were the case, React would continue to work correctly, as it's only interested in the presence of a text node, not its content. But if you open the browser console and look at the tree, you'll see that this is not the case. In reality, the translator replaced the text node with `font` elements and added another text node with the translation, thus changing the tree structure. Here is how it looks now:

_Note: I deliberately hid the left branch of the diagram, as it doesn't interest us. The text in the checkbox will also be changed, but from React's point of view all the elements there are static._
Now in the Button component diagram, we see that the text element is no longer a direct child of the button element. A `font` with translated text has taken its place. However, the element still exists in memory. This will happen by default with all text elements. I'll describe below when this will not happen.
Even though the DOM tree has changed, React **doesn't** know about it. For React, the previous scheme with virtual nodes is still relevant. React **still** considers the `VirtualTextNode` a child of the `VirtualButtonNode`. Now we'll click the checkbox and trigger React to re-render the component. When rendering the Button component, React will see that `isSmall` is true and the text needs to be removed. The rendering phase for VDOM will be successful. It will refer to `VirtualButtonNode` and simply remove the link to the text node. But during the commit phase, React will apply changes to the DOM, where it will necessarily call `buttonNode.removeChild(textNode)`, the references to which it keeps in memory. [More about the phases.](https://react.dev/learn/adding-interactivity#render-and-commit) And at this moment, the crash will happen. Because the `textNode` is no longer a child of `buttonNode`.
> Let's imagine that the `img` tag doesn't exist, or that it disappears along with the text under the `isSmall` condition. In that case, the button element becomes empty. React will recognize this during the rendering phase, and during the commit phase it will not execute the `removeChild` method but will simply overwrite the entire inner part of the button element, as if it called `buttonNode.innerHTML = null`. In this case, it doesn't matter what's inside the `TextNode` or `FontNode`: everything will be removed.
## Fix Examples
Let's see how we can fix this. There can be many fix options. I'll present just a few of them.
### Wrap in an Element
If you add a wrapper, such as a span, the problem will go away.
``` Typescript
{!isSmall && <span>Like it</span>}
```
It's very simple. The translator will create `font` elements **inside** the span element. At the same time, the span itself will remain at the same level in the button descendant tree. If `isSmall` becomes true, React will remove the span from the tree along with all its children.
The additional element has its downside - it can complicate the layout, especially if there are already CSS selectors for span elements.
### translate="no"
Do you remember I promised to tell you when the translator doesn't add `font` elements by default? You can add the `translate="no"` HTML attribute to any tag, including the `html` tag (that is, the entire application), thus adding an exception for Google Translate.
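For example (an illustrative snippet, not taken from the original app), the attribute can be applied at either scope:

```html
<!-- Exclude the whole application from translation -->
<html lang="en" translate="no">
  ...
</html>

<!-- Or exclude only a fragile element -->
<button translate="no">Like it</button>
```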
One downside is that users who don't understand the application language will be disappointed.
### Unstable. Replace with an Empty String
I tried replacing the `&&` expression with a ternary that renders a string containing a single space, and the problem went away.
``` Typescript
{isSmall ? ' ' : 'Like it'}
```
I assume that React stops calling element removal and instead only replaces the content of the `TextNode`. Remember, I said that the replaced text node is only removed from the DOM tree but is still kept in application memory, and React still holds a reference to this node, so it can change its text content.
Cons:
1. This is very implicit code, which will require explanations for other developers.
2. Highly dependent on the current layout and may require additional CSS hacks.
### Override Element Removal
In issues on GitHub, I found an [example](https://github.com/facebook/react/issues/11538#issuecomment-417504600) from Dan Abramov, where he rewrote the native removeChild and insertBefore methods. He suggests not calling the original method if the child element is not found. Note his comment that this will affect performance. But this is exactly what the React team could do if they decided to fix the problem on their side.
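The spirit of that workaround can be sketched with a generic wrapper. Note that this is an illustration, not Dan Abramov's exact code; in a browser you would apply it to `Node.prototype`, while a stub class is used here so the sketch runs anywhere:

```javascript
// Sketch of the workaround's idea: make removeChild a no-op (returning the
// child) when the child is no longer attached, instead of throwing.
// In a browser you would patch Node.prototype.removeChild; a stub class is
// used here so the sketch is runnable outside the DOM.
function patchRemoveChild(proto) {
  const original = proto.removeChild;
  proto.removeChild = function (child) {
    if (child.parentNode !== this) {
      // Google Translate moved the node; skip the DOM call instead of crashing.
      return child;
    }
    return original.call(this, child);
  };
}

// Minimal stand-in for a DOM node, just enough to exercise the patch.
class StubNode {
  constructor() {
    this.children = [];
  }
  appendChild(child) {
    this.children.push(child);
    child.parentNode = this;
    return child;
  }
  removeChild(child) {
    const i = this.children.indexOf(child);
    if (i === -1) throw new Error("NotFoundError: node is not a child");
    this.children.splice(i, 1);
    child.parentNode = null;
    return child;
  }
}

patchRemoveChild(StubNode.prototype);

const parent = new StubNode();
const orphan = new StubNode(); // never attached, like a node moved by the translator
const survivor = parent.removeChild(orphan); // no longer throws
```

As Dan Abramov notes in the linked issue, doing this on the real `Node.prototype` adds a check to every removal, which is why it affects performance.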
## Summary
The errors encountered were related to nodes being altered in the DOM by the translator, which caused React Virtual DOM to become inconsistent with the actual DOM. We explored how Google Translate replaces text nodes with `font` elements, leading to crashes when React tries to manipulate these nodes.
As you can guess, this issue is not limited to React and Google Translate but applies to other Virtual DOM libraries and any external interference with the rendering mechanism.
## Final Thoughts
In our case, we personally used `span` elements and the `translate="no"` attribute in specific places to prevent issues caused by Google Translate.
I hope this investigation was interesting and useful, providing insight into handling multilingual support in React applications and preventing crashes due to external changes. If you are interested in reading more about this issue, you should check out [this GitHub issue](https://github.com/facebook/react/issues/11538).
| ivanturashov |
1,909,412 | How to Use AI Code Generation to Enhance Developer Productivity | Let’s say you have to build a carousel component for an e-commerce site. How would you go about... | 0 | 2024-07-08T06:00:00 | https://www.getambassador.io/blog/ai-code-generation-boost-developer-productivity | ai, codegeneration, developer | Let’s say you have to build a carousel component for an e-commerce site. How would you go about that?
One option would be to design it in Figma and then code it in HTML, CSS, and JavaScript. You’d have to create the layout, add navigation, implement transitions, and ensure responsiveness across different devices. Likely, you’ll also have to integrate with an e-commerce backend through an [API](https://www.getambassador.io/kubernetes-glossary/api) and match the styling to your existing website.
With design, frontend, backend, and deployment, this quickly becomes a team endeavor.
Or, you could ask ChatGPT, “Can you create a carousel component for an e-commerce site?” and you’ll have the code in seconds. Now, it may not be a perfect technology quite yet, but it’s still a game changer for many developers.
No wonder more and more teams are opting for the latter option. According to the [2023 Stack Overflow Survey](https://survey.stackoverflow.co/2023/#ai), “70% of all respondents are using or are planning to use AI tools in their development process this year.” AI code generation is completely changing how development works. Here’s a rundown of AI code generation to give you an understanding of how it can fit into your organization.
## What is AI Code Generation?
AI code generation refers to the process where artificial intelligence systems generate computer code to accomplish specific programming tasks. They do this by using advanced large language models (LLMs) to produce code in response to natural language prompts.
If we type “Can you create a carousel component for an e-commerce site?” into an AI service (OpenAI’s ChatGPT, in this case), the AI will respond with the code that best matches our request based on its training data and understanding of coding patterns.
Here is what was produced:
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Product Carousel</title>
<link rel="stylesheet" href="styles.css">
</head>
<body>
<div class="carousel">
<button class="carousel-button prev" onclick="prevSlide()">❮</button>
<div class="carousel-track-container">
<ul class="carousel-track">
<li class="carousel-slide current-slide">
<img class="carousel-image" src="product1.jpg" alt="Product 1">
</li>
<li class="carousel-slide">
<img class="carousel-image" src="product2.jpg" alt="Product 2">
</li>
<li class="carousel-slide">
<img class="carousel-image" src="product3.jpg" alt="Product 3">
</li>
</ul>
</div>
<button class="carousel-button next" onclick="nextSlide()">❯</button>
</div>
<script src="script.js"></script>
</body>
</html>
```
This generated code aims to fulfill the specified requirements, including HTML structure, CSS styling, and JavaScript functionality for a fully interactive carousel component. The AI also produced the code for styles.css and script.js needed by this index.html. Let’s check that it works:

It does, with no changes bar adding an actual image.
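The generated index.html above calls `prevSlide()` and `nextSlide()` from script.js. The AI's actual script.js output isn't reproduced in this post, so the version below is an illustrative sketch that keeps the slide arithmetic pure and only touches the DOM when one is available:

```javascript
// Illustrative sketch of a script.js for the generated markup above; the
// AI's actual output is not reproduced, so treat these names as assumptions.
let currentIndex = 0;
const TOTAL_SLIDES = 3; // matches the three <li class="carousel-slide"> items

// Pure index arithmetic, wrapping around at both ends.
function nextIndex(index, total) {
  return (index + 1) % total;
}
function prevIndex(index, total) {
  return (index - 1 + total) % total;
}

// DOM wiring, guarded so the sketch also runs outside a browser.
function showSlide(index) {
  if (typeof document === "undefined") return;
  const slides = document.querySelectorAll(".carousel-slide");
  slides.forEach((slide, i) => {
    slide.classList.toggle("current-slide", i === index);
  });
}

function nextSlide() {
  currentIndex = nextIndex(currentIndex, TOTAL_SLIDES);
  showSlide(currentIndex);
}

function prevSlide() {
  currentIndex = prevIndex(currentIndex, TOTAL_SLIDES);
  showSlide(currentIndex);
}
```

Separating the wrap-around arithmetic from the DOM updates also makes the carousel logic easy to unit test.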
Currently, there are three common ways developers might use AI code generation. The first is like above, directly from an AI chat interface, such as ChatGPT, Google Gemini, or Anthropic’s Claude. This approach is common when [debugging](https://www.getambassador.io/blog/debug-kubernetes-service-effectively) current code or when the request is small, like a single component.
This first method is the most straightforward and accessible. It allows developers to quickly generate code snippets or solve specific problems without leaving their browsers or integrating new tools into their workflow. It's handy for rapid prototyping, exploring new ideas, or getting unstuck on tricky code. It works more like pair programming, where you have someone help you explore coding options. But, as it exists outside of a developer's usual workflow, it disrupts the flow and requires more manual copy and pasting to get the code to work.
The second is AI code generation services. These can be called via API, such as OpenAI Codex, or embedded directly into IDEs, such as GitHub’s Copilot (powered by Codex) within VS Code.

This approach offers better integration into a developer's workflow. It allows for real-time code suggestions and completions as you type, making it feel like a natural extension of the coding process.
This method allows developers to receive AI-generated code suggestions without switching contexts or interrupting their flow. It's great for:
- Autocompleting repetitive code patterns
- Suggesting function implementations based on comments or function names
- Generating boilerplate code quickly
- Offering alternative ways to solve a problem you're working on
This requires more setup, but the main advantages are the immediate [feedback loop](https://www.getambassador.io/blog/boost-developer-velocity-optimizing-feedback-loops) and the ability to iterate quickly on AI suggestions.
The third way developers use AI code generation is through specific tools for part of their workflow. These might be AI testing tools, [development tools,](https://www.getambassador.io/products/blackbird/earlybird) or design tools that generate particular types of code to optimize parts of the [development process](https://www.getambassador.io/products/blackbird/earlybird). These tools integrate AI code generation into existing workflows without requiring developers to learn entirely new systems or significantly alter their current practices.
These specialized AI-powered tools can generate unit tests, create API endpoints, scaffold application structures, or translate design documents into functional code. By focusing on specific aspects of development, these tools offer targeted benefits while minimizing the learning curve. This approach allows teams to gradually incorporate AI assistance into their projects, picking and choosing where AI can provide the most value without overhauling their entire development methodology.
## How AI Code Generation Works
At its core, AI code generation relies on LLMs trained on vast amounts of code from various sources. These models learn patterns, structures, and relationships within code across multiple programming languages and frameworks.
When a developer inputs a natural language prompt or a partially completed code snippet, the AI model processes this input through its neural network. Based on its training data and the context provided, it then predicts the most likely sequence of tokens (words, symbols, or code elements) that should follow.
The strength is in the model's ability to generalize from its training data. It's not simply regurgitating memorized code snippets but synthesizing new code based on learned patterns and the specific context provided.
For example, suppose you start typing a function definition in Python. In that case, the model recognizes the language and the function structure and can infer potential parameters and return types based on the function name and any docstring you've provided. It might suggest implementing common algorithms or design patterns that fit the context. As AI code generation technology evolves, we're seeing advancements like:
- Multi-modal models that can understand both code and natural language explanations or even images of diagrams
- Improved handling of project-specific conventions and styles
- Better integration with version control systems and collaborative development environments
## The Benefits of AI Code Generation
The main reason for using AI code generation is speed. Without having to type every line of code, the sheer velocity of code production means what might take a developer hours can be generated in seconds.
Beyond just speed, AI code generation helps developers with working in unfamiliar languages, building out boilerplate, integrating APIs, or using design patterns:
- **Unfamiliar languages:** AI code generation works across multiple languages and frameworks, making polyglot programming easier. Developers can generate code in less familiar languages, reducing the learning curve and enabling faster cross-language development. This capability is particularly valuable for teams working on diverse tech stacks or when porting applications between different languages.
- **Boilerplate reduction:** AI significantly reduces the tedium of writing repetitive boilerplate code, often necessary for setting up projects or implementing common structures. By generating this foundational code automatically, developers can focus on more complex and creative aspects of their projects. This saves time and reduces the likelihood of errors in these critical but often overlooked parts of a codebase.
- **Integrating APIs:** AI code generation simplifies the process of integrating third-party APIs. It can generate client code, data models, and example API calls based on API specs or docs, reducing the time spent reading extensive documentation and writing structural code. This capability allows developers to quickly implement and test API integrations, accelerating the development of feature-rich applications.
- **Design patterns:** AI code generation tools are adept at suggesting and implementing common design patterns, helping developers create more robust and maintainable code. By recognizing the context and requirements of a given task, AI can propose appropriate architectural patterns, ensuring that even less experienced developers can benefit from established best practices. This leads to more consistent, scalable, and efficient code across projects.
Because of all this, AI code generation has a huge cost-reduction benefit for businesses. By accelerating development cycles and automating routine coding tasks, companies can significantly reduce labor costs while maintaining or even increasing output. This efficiency allows teams to tackle more projects or features in less time, effectively doing more with fewer resources.
## Challenges of AI Code Generation
The main concern for developers is the quality of AI output. Often, AI produces functional code that solves the immediate problem but lacks the nuanced architecture and optimizations that come from years of programming experience.
This can lead to issues with efficiency, scalability, and maintainability down the line, effectively introducing technical debt into a codebase. Research into “code churn,” the amount of code that needs to be changed or updated within two weeks, shows that these changes are increasing in frequency, suggesting that code quality is decreasing.
This can then negate the time saved in code production. Developers need to spend more time carefully reviewing and refining AI-generated code, as well as fixing bugs that are introduced into production.
Part of the quality issue can come from outdated training data. Software development is a fast-moving field, so the techniques used a few years ago that have become part of the vast datasets of existing code AI are trained on become obsolete. Newer libraries, API versions, or recent releases of languages may not be reflected in the AI's knowledge base. This can result in suboptimal code that doesn't leverage the latest best practices or features.
Developers relying too heavily on AI might use deprecated methods or miss out on more efficient solutions. They might also lose proficiency if they are overly reliant on AI and lack knowledge of the latest techniques and concepts.
## Using AI Code Generation To Enhance Existing Workflows in [Blackbird](https://www.getambassador.io/products/blackbird/earlybird)
With so many teams using AI, the tools underpinning this revolution will improve. Not only will you be able to generate any code on the fly, but you’ll also be able to use these tools to increase developer productivity in specific workflow elements.
[API development](https://www.getambassador.io/blog/api-development-comprehensive-guide) is one such area where this enhancement is happening. Without AI, teams are left to manually mock API endpoints, create documentation, and [debug](https://www.getambassador.io/blog/debug-kubernetes-service-effectively) API errors. With AI,[ API development](https://www.getambassador.io/blog/api-development-comprehensive-guide) becomes a streamlined and efficient process. Tools like our new API development platform, [Blackbird](https://www.getambassador.io/products/blackbird/earlybird), combine the power of AI with expertise in multi-cluster and [cloud-native](https://www.getambassador.io/kubernetes-glossary/cloud-native) tools to offer a cloud and CLI-accessible platform that simplifies and accelerates [API development.](https://www.getambassador.io/blog/api-development-comprehensive-guide)
Using AI-powered elements in your development and combining that with a platform that helps organize and orchestrate your [API development ](https://www.getambassador.io/blog/api-development-comprehensive-guide)lifecycle is the ultimate winning combination.
- **Instant API design and coding:** AI can quickly generate API structures and code based on your specifications, saving hours of manual work.
- **Fast and easy API mocking:** Create realistic API mocks in seconds, allowing for rapid prototyping and testing without waiting for backend implementation.
- **Advanced API testing and [debugging](https://www.getambassador.io/blog/debug-kubernetes-service-effectively):** AI can generate comprehensive test cases, simulate various scenarios, and help identify potential issues before they reach production.
- **Streamlined documentation:** Automatically generate and update API documentation, ensuring it syncs with your actual API implementation.
By leveraging these AI-enhanced tools, development teams can produce high-quality APIs faster and simplify [API management](https://www.getambassador.io/blog/api-management-benefits) across their organization. This accelerates the development process and allows developers to focus on more complex, value-adding tasks rather than getting bogged down in repetitive API-related work. | getambassador2024 |
1,909,467 | Digital accessibility - NVDA tips for web developers | NVDA is currently the most widely used screen reader on desktops, and we often need to use... | 0 | 2024-07-10T00:27:09 | https://dev.to/julioduartedev/acessibilidade-digital-dicas-de-nvda-para-desenvolvedores-web-32g5 | webdev, a11y, productivity, acessibilidade |
NVDA is currently the most widely used screen reader on desktops, and we often need to use a tool like it to test the accessibility of our websites. Based on my experience using NVDA in my day-to-day work, here are my tips:

## Tip 1: Enable the Speech Viewer

The Speech Viewer can help us keep a history of what the reader has said, and it is very useful for showing other people what is happening in your screen reader.

**How to enable it:** _Tools > Speech Viewer_

Here is an example of what the Speech Viewer looks like:



**NOTE:** While using it, you may notice that some things are read more than once in the Speech Viewer even though this does not happen in the reader's actual speech. In those cases, rely on what is being spoken rather than on what is written.

## Tip 2: Mute the screen reader

Sometimes we are changing some code and the reader keeps talking while we are focused on something else, which can be quite annoying. Wouldn't it be nice to have a quick way to silence that voice? To do this, you just need to know which modifier key NVDA is using.

**How to find out which modifier key is set:** _Preferences > Settings > Keyboard > Check which keys are selected under "Select NVDA Modifier Keys"_



There you go - now you know which modifier key is active, also known as the "NVDA key"!

**To mute it:** Press _NVDA key + S_ until you hear "Speech mode: off"

**NOTE:** If you need to turn speech back on, just repeat the previous step until you reach the speech mode you want.

This tip, combined with the first one, lets you test the accessibility of your systems in a friendlier way.

## Tip 3: Enable object highlighting

To better identify what the screen reader has in focus, sighted users can enable an option that visually indicates the focus with a red border.

**How to enable it:** _Preferences > Settings > Vision > Visual Highlight > Check the "Highlight navigator object" option_



## Tip 4: Disable mouse tracking

Although at first it may seem convenient to hover the mouse over an element and have the reader announce what it is, in practice using the computer this way can be quite stressful. It also deprioritizes keyboard navigation (ideally you should navigate everything using only the keyboard, to simulate the real usage of a person with low vision).

**To disable it:** _Preferences > Settings > Mouse > Uncheck the "Enable mouse tracking" option_



## Tip 5: Disable the screen reader on Windows login

If the people with access to your computer do not need NVDA to navigate the operating system day to day, it may be a good idea to disable this setting, which comes enabled by default and can even be confusing at first, until you understand what is going on.

**To disable it:** _Preferences > Settings > Mouse > Uncheck the "Use NVDA during Windows sign-in" option_



## Extra tip: Explore the settings and documentation on your own!

That way you will be able to master this technology and customize it to better fit your workflow, making the experience more rewarding and productive.

The NVDA documentation can be found [at this link](https://www.nvaccess.org/files/nvda/documentation/userGuide.html?) (in English). | julioduartedev
1,909,659 | Setting up Server for the Vehicle Tracking - Free | Effective vehicle tracking is crucial for fleet management, providing real-time data on the location,... | 0 | 2024-07-09T07:24:06 | https://dev.to/fleet_stack_21/setting-up-server-for-the-vehicle-tracking-free-4ena | Effective vehicle tracking is crucial for fleet management, providing real-time data on the location, status, and performance of your vehicles. Setting up a server for vehicle tracking can be done without significant costs, and with the right guidance, it becomes a straightforward process. This article will guide you through the steps to set up a free server for vehicle tracking, ensuring you have a robust and reliable system in place.

## Why Vehicle Tracking is Important
[Vehicle tracking systems](https://fleetstackglobal.com/vehicle-tracking-software) enhance fleet efficiency, improve security, and provide valuable insights through detailed reporting. These systems are vital for industries such as transportation, logistics, delivery services, and more.
## Step-by-Step Guide to Setting Up a Free Server for Vehicle Tracking
## 1. Choose a Free Server Provider
The first step is to select a server provider that offers free or trial services. Some popular options include:

**Amazon Web Services (AWS) Free Tier:** AWS offers a free tier for 12 months, which includes sufficient resources to set up a basic vehicle tracking server.
**Google Cloud Platform (GCP) Free Tier:** GCP provides a free tier with limited usage, perfect for initial setup and testing.
**Microsoft Azure Free Account:** Azure offers free services for 12 months, along with some always-free services.
## 2. Create Your Server Instance
Once you have chosen a server provider, follow these steps to create your server instance:
**Sign Up for an Account:** Register for a free account with your chosen provider.
**Navigate to the Console:** Access the management console or dashboard of the provider.
**Create a New Instance:** Select the option to create a new virtual machine (VM) instance. Choose the free tier options to avoid any charges.
## 3. Configure Your Server
Proper configuration is essential to ensure your server is optimized for vehicle tracking. Here’s how to do it:
**Select the Operating System:** Choose a compatible operating system (e.g., Windows Server or Ubuntu).
**Assign a Static IP Address:** Ensure your server has a static IP address for consistent communication with GPS devices.
**Set Up Security Groups:** Configure security groups to allow traffic on necessary ports (e.g., HTTP, HTTPS, TCP).
## 4. Install Vehicle Tracking Software
With your server configured, the next step is to install the vehicle tracking software. For this guide, we’ll use **[Fleet Stack](https://fleetstackglobal.com)**’s free GPS tracking platform:

**Download the Software:** Visit the **[Fleet Stack](https://fleetstackglobal.com)** website and download the setup files.
**Upload the Setup Files:** Transfer the setup files to your server using a secure method (e.g., SFTP).
**Run the Installation:** Execute the setup file on your server. Follow the on-screen instructions to complete the installation.
## 5. Register and Configure Your Account
After installing the software, you need to register your account and configure the platform:

**Launch the Application:** Open the **Fleet Stack** application installed on your server.
**Complete Registration:** Provide your details to create an account. This step unlocks all features of the GPS tracking platform.
**Configure GPS Devices:** Ensure your GPS devices are set up to communicate with your server using the static IP address.
{% embed https://www.youtube.com/embed/UJ30B4mnxHA?si=DzKPIM4qVDSm-fvZ %}
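To make the "Configure GPS Devices" step concrete: at the network level, a tracking server is just a process listening on your static IP for device reports. The sketch below is not Fleet Stack's actual protocol; it assumes a hypothetical plain-text `device_id,lat,lon` line format and an arbitrary port 5023, purely to illustrate what "devices communicating with your server" means:

```python
import socketserver

def parse_position(line: str) -> dict:
    """Parse one report in the assumed 'device_id,lat,lon' line format."""
    device_id, lat, lon = line.strip().split(",")
    return {"device": device_id, "lat": float(lat), "lon": float(lon)}

class TrackerHandler(socketserver.StreamRequestHandler):
    """Handles one GPS device connection; expects one report per line."""
    def handle(self):
        for raw in self.rfile:
            try:
                pos = parse_position(raw.decode("ascii"))
            except ValueError:
                continue  # skip malformed reports
            print(f"{pos['device']} is at {pos['lat']:.5f}, {pos['lon']:.5f}")

def serve(host: str = "0.0.0.0", port: int = 5023) -> None:
    """Listen on all interfaces; this port must be open in your security group."""
    with socketserver.ThreadingTCPServer((host, port), TrackerHandler) as server:
        server.serve_forever()
```

A real GPS tracker would instead speak a binary vendor protocol, which is exactly the parsing work a platform like Fleet Stack handles for you.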
## 6. Start Tracking
With everything set up, you can now start tracking your vehicles:
**Log In to Your Account:** Use your registered email and password to log in to the platform.
**Monitor Your Fleet:** Begin monitoring your fleet in real-time. The intuitive interface will display the locations of your vehicles, and you can set up features such as geofencing, alerts, and detailed reporting.
## Benefits of Setting Up a Free Server for Vehicle Tracking
**Cost-Effective:** Utilize free tiers from major cloud providers to set up your server without significant costs.
**Scalability:** Easily scale your server resources as your tracking needs grow.

**Reliability:** Cloud providers offer high reliability and uptime, ensuring your tracking system is always available.
**Flexibility:** Customize and configure your server and tracking platform to meet specific needs.
## Conclusion
Setting up a free server for vehicle tracking is a practical and efficient way to enhance your fleet management capabilities.

By following this guide, you can leverage free resources from major cloud providers and set up a robust vehicle tracking system with **Fleet Stack**’s GPS tracking platform. Enjoy the benefits of real-time tracking, improved security, and detailed reporting, all while keeping costs low.
**[Start Using Free Vehicle Tracking Server](https://fleetstackglobal.com/vehicle-tracking-software)** | fleet_stack_21 | |
1,909,675 | Setting Up Unit Testing in an Existing Unity Project | If you’re knee-deep in a Unity project and pulling your hair out trying to set up unit testing,... | 0 | 2024-07-08T19:59:19 | https://dev.to/dutchskull/setting-up-unit-testing-in-an-existing-unity-project-58p2 | unity3d, testing, gamedev, tutorial | If you’re knee-deep in a Unity project and pulling your hair out trying to set up unit testing, you’re not alone. The two biggest challenges are dealing with `Assembly Definitions` and `Editor` folders. Here’s a guide to overcome these issues and make your life easier.
## The Problem
Unity, in its infinite wisdom, doesn’t allow writing tests on the default `Assembly-CSharp` and `Assembly-CSharp-Editor` assemblies. Frustrating, right?

To fix this, you need to add `Assembly Definitions` for both your game code and editor code. Here’s how:
1. **Game Code:**
- Add an `Assembly Definition` at the root of your project.
- Update the dependencies in the Inspector until there are no compiler errors. Off to the races!
2. **Editor Code:**
- Here’s where it gets tricky. In a big project, you’ll have lots of external dependencies and plugins, some with `Assembly Definitions`, some without.
- Without `Assembly Definitions`, Unity sees these `Editor` folders as game code and tries to build them into your final game, which inevitably fails.
## The Solution
1. **Create a Top-Level Editor Folder:**
- At the top level of your project, create an `Editor` folder.
- Add an `Assembly Definition` inside this folder. Name it similarly to your game code `Assembly Definition` but with a `.Editor` postfix (or whatever you prefer).
2. **Link Editor Folders:**
- For each `Editor` folder in your project, create an `Assembly Definition Reference` and link it to the top-level `Editor` folder `Assembly Definition`.
- This process can be incredibly tedious in a large project, so here’s a Python script to save you from hours of mind-numbing work:
```python
import os
import json
import yaml
def get_guid_from_meta(meta_file_path):
with open(meta_file_path, 'r') as meta_file:
meta_content = yaml.safe_load(meta_file)
return meta_content.get('guid')
def create_reference_json(editor_path, source_file_name, guid):
json_content = {
"reference": "GUID:" + guid
}
asmref_file_name = os.path.splitext(source_file_name)[0] + '.asmref'
json_file_path = os.path.join(editor_path, asmref_file_name)
with open(json_file_path, 'w') as json_file:
json.dump(json_content, json_file, indent=4)
print(f"Created {json_file_path} with GUID reference.")
def copy_file_to_editor_folders(unity_project_path, source_file):
if not os.path.isdir(unity_project_path):
print(f"The project path {unity_project_path} does not exist.")
return
meta_file_path = source_file + '.meta'
if not os.path.isfile(meta_file_path):
print(f"The meta file {meta_file_path} does not exist.")
return
guid = get_guid_from_meta(meta_file_path)
source_file_name = os.path.basename(source_file)
for root, dirs, files in os.walk(unity_project_path):
if 'Editor' in dirs:
editor_path = os.path.join(root, 'Editor')
asmdef_exists = any(file.endswith('.asmdef') for file in os.listdir(editor_path))
if not asmdef_exists:
create_reference_json(editor_path, source_file_name, guid)
else:
print(f"A .asmdef file already exists in the folder {editor_path}.")
# Example usage
unity_project_path = r"PATH\TO\ASSETS\FOLDER"
source_file = r"PATH\TO\Editor.asmdef"
copy_file_to_editor_folders(unity_project_path, source_file)
```
This script requires `pyyaml` to run:
```bash
pip install pyyaml
```
- Set `unity_project_path` to your Unity project’s assets folder.
- Set `source_file` to the path of your `Editor` `Assembly Definition`.
The script will:
- Crawl your project for all `Editor` folders.
- Check if an `Assembly Definition` is present.
- Add an `Assembly Definition Reference` if none exists.
After setting this up, follow the [Unity Testing Framework tutorial](https://docs.unity3d.com/Packages/com.unity.test-framework@1.1/manual/getting-started.html) to start writing tests.
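For reference, the `Assembly Definition` and `Assembly Definition Reference` files Unity creates here are plain JSON. A top-level `Editor` folder `Assembly Definition` might look roughly like this (the names are placeholders; the key detail is restricting the assembly to the Editor platform so it is excluded from game builds):

```json
{
    "name": "MyGame.Editor",
    "references": ["MyGame"],
    "includePlatforms": ["Editor"],
    "excludePlatforms": []
}
```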
#### Additional Tips
Consider splitting your code with `Assembly Definitions` to create logical boundaries and improve maintainability. It might be a pain now, but your future self will thank you!
Happy testing! | dutchskull |
1,909,682 | How to Design Utility Classes in Java? | NOTE: This article had been written in 2021. Hello everybody, Today we gonna talk about utility or... | 0 | 2024-07-08T07:29:52 | https://dev.to/mammadyahyayev/how-to-design-utility-classes-in-java-32b3 | java, cleancode, cleancoding | **NOTE: This article had been written in 2021.**
Hello everybody! Today we are going to talk about utility, or helper, classes. I will answer: what is a utility class, and why do we need one? Before diving into utility class implementation in Java, we should first understand what it is.
## What Is a Utility Class?
A utility class is a helper. Let's say we have a small piece of logic that we use throughout the project. Instead of writing this logic over and over in each class, we can create a new class and move the logic there, then call it from wherever we need it. If the logic is 10 lines of code, we can replace those 10 lines with a single line.
## Why Do We Need Utility Classes?
Utility classes help us improve code quality and readability, and they help us write good, clean code that other developers can easily understand. This is the main reason to use them throughout a project. Now let's see a utility class in action.
## Implementing a Utility Class in Our Project
First of all, we use an **IDE** (Integrated Development Environment), which makes our lives easier. I am using IntelliJ IDEA because it is powerful and very popular among Java developers, but of course you can use any IDE you like.
Open your favorite IDE and create a new Java project. Then create a new class; this will be the Main class. Inside this Main class, create a main method and start writing the logic.
Let's assume we want to convert a date into a format we like. To do this, we create a Calendar instance. Why Calendar? Because some of the Date methods are deprecated. Now let's focus on the logic. First of all, we create an instance of Calendar with `Calendar.getInstance()`.
```java
Calendar calendar = Calendar.getInstance();
```
To print this date, we use the following code.
```java
System.out.println(calendar.getTime());
```
The `calendar.getTime()` method gives us a Date object. If I run the Main class, the output will be something like **Sun Jan 03 12:14:54 AZT 2021**. But I want to see this date in the format 03-01-2021 (dd-MM-yyyy). Now let's write the logic.
```java
SimpleDateFormat dateFormat = new SimpleDateFormat("dd-MM-yyyy");
String formattedDate = dateFormat.format(calendar.getTime());
```
In this logic, I create an instance of `SimpleDateFormat` and pass our pattern to its constructor. Then I call the `format()` method of `SimpleDateFormat`, which takes a Date object; in our case we pass `calendar.getTime()`, because that method gives us the Date object.
The `format()` method returns a `String`, so I assign the result to a `String` variable named `formattedDate`. Now print `formattedDate` and see the result.
The result will be **03-01-2021**. In this case we wrote 4 lines of code. I know this is not much, but if you use this logic in every class, your code will keep growing.
Now let's create the **Utility** class and move this logic into it. You can give it any name you like, but I'll call it `DateUtils`, which makes sense because this logic is related to dates. In this class, we create a method that returns a `String`.

Now call this method in the Main Class.

As you can see, we instantiate this class and print the result of the `parseDate()` method; the result is the same as before.
Now let's design our utility class. First, we need to know that such classes contain static final fields and static methods. In Java, we can call a static method by simply typing **ClassName.methodName**.
In our code, we can call `DateUtils.parseDate()` after making the method static. Since this class contains only static methods, we don't need to instantiate it, so we can make the class abstract; in Java, we cannot instantiate an abstract class.

Now let's create a static final field and assign the pattern to it.

That's it! In our simple project we replaced 4 lines of code with 1 line, and we can reuse this logic everywhere by simply calling `DateUtils.parseDate()`.
Also code is available on the [Github](https://github.com/mammadyahyayev/blog-posts/tree/master/utility-class-design/src/com/blogs).
## Conclusion
In this blog post we have seen how to implement and design a utility class. This project is simple, but if you use this approach in a big project, it will make your life easier.
Thank you for reading this article and I hope you try this on your project.
Feel free to reach me via [LinkedIn](https://www.linkedin.com/in/mammadyahya/)
Hope to see you in other articles. Bye-Bye.
| mammadyahyayev |
1,909,754 | The Hunt for a Perfect Headless CMS | We recently decided to leave Webflow (the reasons why will be further explained in our next blog) and... | 0 | 2024-07-03T06:57:50 | https://dev.to/zerodays/the-hunt-for-a-perfect-headless-cms-123h | nextjs, cms, payload, webdev | We recently decided to leave Webflow (the reasons why will be further explained in our next blog) and migrate our company website to a headless content management system. For that reason we decided to thoroughly research the options available and select the best solution for us.
## 🗒️ Requirements (and wishes ✨)
Our main goal was to select a suitable headless CMS for managing the content of a NextJS-based landing page, allowing for full customization and seamless integration. Based on that we developed a list of requirements for the system we wanted, not just for our website, but also for the majority of other landing pages we develop for our clients.
**Must have**:
- Localization support ([RTL](https://en.wikipedia.org/wiki/Right-to-left_script) support in editor is a bonus)
- Good integration with [NextJS](https://nextjs.org/):
- Good [React](https://react.dev/) integration.
- Possible server-side (pre)rendering.
- Supports custom sections or components to emulate a purposely primitive website builder.
- Rich text editor support.
- User-friendly admin interface.
- SEO support.
- Multi-user support.
- Media hosting.
- Easy backup and restore.
- Good reputation.
**Nice to have**:
- Open source.
- Ability to self-host.
- Support for drafts and versioning
- Live preview inside the content editor.
- Admin panel customization options.
- A/B testing support.
**Other considerations**:
- Migration challenges from the existing website.
- Multi-region hosting and caching (can be done on NextJS side).
Based on those requirements we tested and considered lots of CMS options. Let’s jump straight into why the Payload ended up our first choice.
## 🥇 The winner - [Payload CMS](https://payloadcms.com/)
**Reasons for Selection**:
- Excellent integration with NextJS. The latest version (3.0) now runs as part of a NextJS application.
- Native localization support where localization can be done per field, instead of only per-block.
- Open source and widely used.
- Ability to be self-hosted.
- Admin dashboard is easy to extend with custom React components.
- Easy to extend with custom functionalities since CMS code is co-located with the frontend.
- Native TypeScript support.
- Live editing preview option.
**Cons**:
- Version 3, which includes all the sweet features we desire, is still in beta. This is why we are using it only for our own website at the moment and not experimenting on our clients.
## 🥈 Other CMS systems considered
Below is the list of all the other options we’ve tested. This is not to say that none of those could work for us - Strapi was extremely close too, and others might be best suited for your specific use case.
### [Strapi](https://strapi.io/)
**Pros**:
- Open source.
- (Probably) the most popular headless CMS option.
- Self-hosted easily.
- Official plugin to generate Swagger documentation so NextJS integration is trivial (although generated schema doesn’t include localization parameters).
- Native localization support
- Customizable admin dashboard.
**Cons**:
- Locales are still separate: when translating content, all fields need to be set in every locale instead of preserving some (images, slugs, etc.). This might cause some friction for editors.
- Poor TypeScript support.
### [WordPress](https://wordpress.org/)
**Pros**: Very popular option, familiarity.
**Cons**: Requires additional plugins and configuration to achieve full headless functionality. Worse NextJS integration and development experience. Forces you to use different technologies on frontend and backend.
### [Sanity](https://www.sanity.io/)
**Pros**: NextJS support, UI editor integration, flexible localization methods.
**Cons**: No self hosting, high price (per user, per request and bandwidth).
### [Prismic](https://prismic.io/)
**Pros**: NextJS support, live preview editor.
**Cons**: No self-hosting, clunky localization (all locales are totally separate), restrictive pricing plans (max 8 locales in platinum plan).
### [Tina.io](https://tina.io/)
**Pros**: Simple, developer friendly, has live editing capabilities.
**Cons**: Poor i18n support, needs to be handled by developers.
### [Directus](https://directus.io/)
**Cons**: More a dataset editor than a CMS.
### [Contentful](https://www.contentful.com/)
**Cons**: Limited language support without enterprise tier.
### [KeystoneJS](https://keystonejs.com/)
**Cons**: Seems inactive; development progress on the official website has not been updated since 2022.
### [Contentstack](https://www.contentstack.com/)
**Cons**: CMS is a minor part of the product. Company’s focus is elsewhere.
### [Storyblok](https://www.storyblok.com/)
**Cons**: Expensive. Subpar developer experience.
## 🏁 Conclusion
In the end, Payload took the crown for us. What ultimately sold us on it, was its superb integration with Next.js - our technology of choice for web development.
This blog and its underlying research was made by the awesome team at [zerodays.dev](https://zerodays.dev/). | zigapk |
1,909,904 | No Need For Docker Anymore | Introduction No need for Docker anymore I hear many people say, well I'd like to express... | 0 | 2024-07-12T15:42:33 | https://dev.to/peteking/no-need-for-docker-anymore-3nbi | devops, docker, softwareengineering, tooling | ## Introduction
"No need for Docker anymore," I hear many people say. Well, I'd like to express that Docker is incredibly relevant today and into the future - we are not saying goodbye!
## Docker
Docker was and still is a game-changer in software engineering, offering a containerisation approach to application development, deployment and management.
Here's a run-down of its significance and the advantages Docker can bring to you and your team:
- **Consistent Environments:** Docker ensures consistent environments across development, testing, and production stages. By encapsulating applications with their dependencies in isolated containers, Docker eliminates environment-related issues that can plague the development lifecycle.
- **Streamlined Collaboration:** Dev Containers, built on Docker's foundation, provide a standardised development environment for your team. Everyone on the product/project gets an identical container with the necessary tools and libraries, fostering seamless collaboration and eliminating setup discrepancies.
- **Improved Portability:** Docker applications are inherently portable. Containers can run seamlessly on any Linux machine with Docker installed, regardless of the underlying infrastructure. This simplifies development across various environments from local development machines to cloud platforms.
- **Faster Development Cycles:** With Docker, developers can spin up new environments in seconds. This agility translates to faster development cycles, as developers can quickly test and iterate on their code without lengthy setup times.
- **Efficient Resource Utilisation:** Containers share the host operating system's kernel, making them lightweight and resource-efficient compared to virtual machines. This allows you to run more applications on the same underlying hardware, optimising resource utilisation.
- **Simplified Scalability:** Scaling Dockerised applications is a breeze. You can easily add or remove containers based on demand, enabling horizontal scaling for increased processing power or handling spikes in traffic. This leads on to container orchestration platforms such as Kubernetes.
- **Enhanced Reliability:** Docker containers isolate applications from each other and the host system, promoting stability and reliability. If one container malfunctions, it won't impact other containers or the underlying system.
What's not to like hey? 😁
## Learning Curve
I acknowledge that there is a learning curve, but trust me (hopefully you can), the initial investment will pay off in the long term.
We can mitigate it by adopting a strategic approach that focuses on practical experience alongside foundational knowledge.
- **Start with the Basics:** Begin by understanding the core concepts like containers, images, and Docker files. Numerous beginner-friendly tutorials and workshops are available online.
- **Hands-on Learning:** The best way to solidify new knowledge and understanding is through practical application. Work on small projects, maybe personal ones, using Docker. This will help you grasp how Docker functions in real-world applications.
- **Leverage Online Resources:** The Docker community is extensive and supportive. Utilise online forums, communities and of course the official Docker documentation to find answers, troubleshoot issues, and learn from other people's experiences.
- **Focus on Practical Use Cases:** Instead of getting bogged down by every minute detail, concentrate on how Docker can address your specific development needs. This will make the learning process more engaging and goal-orientated.
## Final Thoughts
Docker, along with Dev Containers, streamlines the software development process by ensuring **consistency**, **portability**, **efficiency**, and **scalability**. By adopting these technologies, you and your team can feel empowered to deliver high-quality software faster and more reliably.
## BONUS!
Containerisation is available in different flavours, so to speak. However, Docker, along with Docker Desktop and its offering, is what I would highly recommend if you are doing anything commercially; if you are just doing something yourself, you can get away without it. In fact, for purely personal, non-commercial use you can still use Docker Desktop without any cost!
The benefits of [Docker Desktop](https://www.docker.com/products/docker-desktop/) and [Docker Hub](https://www.docker.com/products/docker-hub/) are immense. When you use base images from external sources, you are relying on trust. That's all well and good; however, you can also extend that trust to Docker Inc. itself: if you enable only Docker Official Images and Verified Publisher images, you are additionally trusting Docker Inc. as well. I see this as a good thing!
It's all about securing your software supply chain; ideally, you want every link in that chain to be trusted, verified and more, and Docker helps you improve in this area directly. Docker Inc. has also been making some strategic moves recently; one company worth mentioning in this space is [Chainguard](https://www.chainguard.dev/). I won't go into much of that here, but in essence [Chainguard](https://www.chainguard.dev/) curates minimal, highly-optimised container images and either reduces the open CVEs or gets the CVE count down to zero! Awesome 😎
One last thing: Docker Inc. has [Docker Scout](https://www.docker.com/products/docker-scout/), which further improves your software supply chain (there's that key phrase again, because it is vitally important and not to be underestimated).
## More Information
- Docker: https://www.docker.com
- Docker Docs: https://docs.docker.com
- Docker Desktop: https://www.docker.com/products/docker-desktop/
- Docker Hub: https://www.docker.com/products/docker-hub/
- Docker Scout: https://www.docker.com/products/docker-scout/
- Chainguard: https://www.chainguard.dev/
- Kubernetes: https://kubernetes.io/
- Nomad: https://www.nomadproject.io/
- Udemy, Docker Mastery: https://www.udemy.com/course/docker-mastery/ | peteking |
1,909,974 | A Complete Handbook on Automatic Speech Recognition (ASR) in Call Centers | Automatic Speech Recognition (ASR), a system that translates spoken language into text, is one such... | 0 | 2024-07-09T06:15:29 | https://dev.to/leadsrain/a-complete-handbook-on-automatic-speech-recognition-asr-in-call-centers-16ek | asr, callcenter, automaticspeechrecognition, leadsrain | Automatic Speech Recognition (ASR), a system that translates spoken language into text, is one such revolutionary breakthrough. ASR has been more popular in the last several years, finding use in anything from chatbots for customer service to personal assistants like Siri and Alexa to transcribing applications.
Therefore, let's take a closer look at this phenomenal voice recognition technology!
We will explore the basics of ASR, and its operation, in this piece of the blog.
## What is Automatic Speech Recognition (ASR)?
ASR, or [Automatic Speech Recognition](https://leadsrain.com/direct-marketing-glossary/predictive-dialer/what-is-automatic-speech-recognition-asr-technology/), is a powerful technology that enables the conversion of speech into text. By using sophisticated algorithms and machine learning methods, ASR analyzes audio inputs and accurately transcribes them into written form. In plain terms, ASR is the process of transforming audio signals into written text.
As an example, callers seeking customer service no longer need to "press one" thanks to automated speech recognition.
By 2030, the global market for speech recognition technologies is projected to expand by $59.62 billion.
## How Does Automatic Speech Recognition Work?
The ASR process involves several steps which rely on complex algorithms and models.
Here's a simplified overview of how ASR works:

**Audio capture.** A microphone captures the sound waves produced by human speech.
**Pre-processing.** The digital signal is filtered to remove noise and other non-speech elements.
**Feature extraction.** The signal is divided into short frames, and each frame is analyzed to extract features that represent the speech signal.
**Acoustic modeling.** The system then maps the audio features to individual speech sounds, or phonemes.
**Language modeling.** Predicts the most likely word sequences based on grammar and context.
**Decoding.** Finally, the system deciphers the spoken words into text.
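The steps above can be condensed into a toy pipeline. This sketch is purely illustrative: real systems extract spectral features (e.g., MFCCs) and use neural acoustic and language models, whereas here every stage is reduced to a trivial stand-in so the data flow between stages is visible.

```python
# Toy sketch of the ASR stages: framing -> features -> acoustic model -> decoding.
# Every function here is a trivial stand-in for a real model (illustrative only).

def frame_signal(samples, frame_size=4):
    """Pre-processing: split captured audio into fixed-size frames."""
    return [samples[i:i + frame_size] for i in range(0, len(samples), frame_size)]

def extract_features(frame):
    """Feature extraction: reduce each frame to a single energy value."""
    return sum(abs(s) for s in frame) / len(frame)

def acoustic_model(energy):
    """Acoustic modeling: map a feature to a coarse frame label."""
    return "speech" if energy > 0.5 else "silence"

def decode(labels):
    """Decoding: count contiguous runs of speech frames, standing in for
    turning per-frame model outputs into a final transcript."""
    segments, prev = 0, "silence"
    for label in labels:
        if label == "speech" and prev != "speech":
            segments += 1
        prev = label
    return segments

samples = [0.1, 0.2, 0.1, 0.0, 0.9, 1.0, 0.8, 0.9, 0.1, 0.0, 0.1, 0.2]
labels = [acoustic_model(extract_features(f)) for f in frame_signal(samples)]
print(labels)          # ['silence', 'speech', 'silence']
print(decode(labels))  # 1
```

Each stage only consumes the previous stage's output, which is why production ASR systems can swap in better acoustic or language models without changing the overall pipeline shape.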
## ASR Applications used in a call center
Let's look at a few applications designed to improve customer satisfaction within call centers.
**Advanced IVR.** Interactive voice response systems with ASR capability can better understand customers' requirements and intents by gathering information about their inquiries. IVR self-service menus may assist users with a variety of common activities, including placing orders, asking for information, scheduling appointments and bookings, and completing transactions using voice commands.
**Quality monitoring.** Transcription of call records for adherence to company regulations, laws, and quality assurance standards is one way that ASRs support quality monitoring. Contact center managers may evaluate and analyze interactions to guarantee compliance with script regulations and resolve issues connected to customer service quality by using ASR systems, which automatically transcribe conversations.
**Efficient call routing.** Call routing is another application for ASR. It improves [FCR (First Call Resolution) rates](https://leadsrain.com/direct-marketing-glossary/predictive-dialer/what-is-first-call-resolution/) and lowers average handle times by directing incoming calls to the most suitable teams, departments, or individual agents depending on the purpose of the call.
**Voice biometrics.** Call centers can use speech recognition applications to analyze characteristics of a caller's voice, such as pitch, tone, and speech patterns, to identify and authenticate callers through voice biometrics.
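The voice-biometrics idea can be illustrated with a small sketch: a "voiceprint" feature vector stored at enrollment is compared against a vector extracted from the live call, and the caller is accepted if the similarity clears a threshold. The vectors and threshold below are invented for illustration; production systems derive speaker embeddings with trained models.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical voiceprints: one stored at enrollment, one from the live call.
enrolled_voiceprint = [0.9, 0.1, 0.4]
incoming_voiceprint = [0.88, 0.12, 0.41]

score = cosine_similarity(enrolled_voiceprint, incoming_voiceprint)
authenticated = score > 0.95  # the threshold is a tuning decision
print(score, authenticated)
```

The threshold trades off false accepts against false rejects, which is why real deployments calibrate it on recorded calls rather than picking a fixed number.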
## Discussing key features of ASR systems in a call center
The mentioned features below collectively enhance the effectiveness and efficiency of call center operations, leading to better CX.
**Real-time transcriptions.** Automatically transcribes spoken words into text so that agents may concentrate on the conversation rather than taking manual notes.
**Natural language understanding.** Analyzes the intent behind spoken words, allowing for more effective customer service and context-aware responses.
**Keyword spotting.** Lets agents quickly address critical problems by emphasizing and identifying relevant terms or phrases during conversations.
**Sentiment analysis.** Examines a caller's tone and mood to gauge their level of satisfaction and notify agents of any potential issues.
**CRM integration.** [Seamlessly integrate with CRM platforms](https://leadsrain.com/direct-marketing-glossary/integration/what-is-call-center-crm/) to provide agents with an in-depth knowledge of previous interactions and customer data.
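Of these features, keyword spotting is the easiest to sketch once ASR has produced a transcript: scan each turn for terms agents should react to. The keyword list and the sample transcript below are hypothetical.

```python
# Minimal keyword spotting over an ASR transcript (illustrative only).
KEYWORDS = {"cancel", "refund", "supervisor"}

def spot_keywords(transcript):
    """Return (turn_index, sorted_keywords) for every turn containing a keyword."""
    hits = []
    for turn, text in enumerate(transcript):
        found = KEYWORDS & set(text.lower().split())
        if found:
            hits.append((turn, sorted(found)))
    return hits

transcript = [
    "Hi, I ordered a phone last week",
    "I want a refund and I want to cancel the plan",
    "Okay let me check that for you",
]
print(spot_keywords(transcript))  # [(1, ['cancel', 'refund'])]
```

Real systems match on normalized word lattices rather than plain strings, but the agent-facing result is the same: flagged turns the agent can jump to.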
## Snapshot of successful implementation of ASR
Highlighting a few sectors that have greatly benefited from automatic speech recognition. Let's look at how they have brought this fantastic ASR technology into their routine tasks.
**#1. Healthcare.** ASR technology is being utilized to improve patient engagement, simplify clinical recordkeeping, and make medical transcription easier. With voice recognition software driven by automatic speech recognition (ASR), doctors may transcribe patient notes, prescriptions, and medical records, minimizing administrative work and improving documentation accuracy.
**#2. Educational sector.** With the use of practice tasks and personalized feedback, ASR-powered language learning apps assist students in developing their vocabulary, pronunciation, and language fluency. Furthermore, ASR-driven lecture transcription services provide searchable, text-based transcripts of lectures and class discussions, improving students' access to educational content.
[](https://leadsrain.com/signup)
## Conclusion
Technology for automatic voice recognition is developing quickly, with new advancements becoming possible. Because they see opportunity and value, savvy companies have integrated ASR into their customer service solutions.
Using ASR contact center solutions to convert audio data into meaningful information will help you automate tasks, maintain revenue by streamlining operations, improve customer and agent satisfaction, track agent performance, automate quality assurance, and adhere to compliance.
| leadsrain |
1,909,975 | Introducing the New Blazor OTP Input Component | TL;DR: The new Syncfusion Blazor OTP Input component enhances app security with customizable OTP... | 0 | 2024-07-08T16:03:47 | https://www.syncfusion.com/blogs/post/new-blazor-otp-input-component | blazor, whatsnew, ui, web | ---
title: Introducing the New Blazor OTP Input Component
published: true
date: 2024-07-03 10:00:00 UTC
tags: blazor, whatsnew, ui, web
canonical_url: https://www.syncfusion.com/blogs/post/new-blazor-otp-input-component
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vx2ychfxrkjbe30r6f3t.png
---
**TL;DR:** The new Syncfusion Blazor OTP Input component enhances app security with customizable OTP entry, making it ideal for authentication and transactions. Let’s explore its user-friendly features in detail!
We’re thrilled to introduce the new Syncfusion [Blazor OTP Input](https://www.syncfusion.com/blazor-components/blazor-otp-input "Blazor OTP Input component") component as part of our [Essential Studio 2024 Volume 2](https://www.syncfusion.com/forums/188642/essential-studio-2024-volume-2-main-release-v26-1-35-is-available-for-download "Essential Studio 2024 Volume 2") release.
The [Blazor OTP Input](https://blazor.syncfusion.com/documentation/otp-input/getting-started "Getting started with Blazor OTP Input component") is a form component designed to streamline the process of entering one-time passwords (OTP) during multi-factor authentication. It offers extensive customization options, making it adaptable to various application needs. Features include configurable input types, adjustable input lengths, multiple styling modes, customizable placeholders and separators, and more.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Blazor-OTP-Input-component.png" alt="Blazor OTP Input component" style="width:100%">
<figcaption>Blazor OTP Input component</figcaption>
</figure>
Let’s explore the new Blazor OTP Input component in detail!
## Use cases
The Blazor OTP Input component is well-suited for various apps:
- **User authentication**: Enhance login security by integrating OTP verification, ensuring only authorized users can access their accounts.
- **Secure transactions**: Include OTP input for confirming transactions in online banking or e-commerce platforms, adding a layer of security for sensitive financial activities.
- **Account verification**: To enhance security and prevent unauthorized access, use OTP Input to verify user identities during account creation, password recovery, and other account-related actions.
## Key features
The key features of the OTP Input component are as follows:
- [Built-in input types](#Built)
- [Adjustable input length](#Adjustable)
- [Multiple styling modes](#Multiple)
- [Customizable placeholder and separators](#Customizable)
- [Setting validation state](#Setting)
- [UI customization](#UI)
### <a name="Built">Built-in input types</a>
The Blazor OTP Input component supports three input types:
- [Number](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Inputs.OtpInputType.html#Syncfusion_Blazor_Inputs_OtpInputType_Number "Number input of Blazor OTP Input component"): The default input type, which restricts entries to digits only.
- [Text](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Inputs.OtpInputType.html#Syncfusion_Blazor_Inputs_OtpInputType_Text "Text input of Blazor OTP Input component"): Allows alphanumeric and special character entries, suitable for more complex OTP requirements.
- [Password](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Inputs.OtpInputType.html#Syncfusion_Blazor_Inputs_OtpInputType_Password "Password input of Blazor OTP Input component"): Similar to the text type but masks the input characters to ensure privacy.
In the following code example, the [Type](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Inputs.SfOtpInput.html#Syncfusion_Blazor_Inputs_SfOtpInput_Type "Type property of Blazor OTP Input component") property is used to define the input type, accepting values from the [OtpInputType](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Inputs.OtpInputType.html "OtpInputType class of Blazor OTP Input component") enumeration.
```xml
@rendermode InteractiveServer
@using Syncfusion.Blazor.Inputs
<div class="otp-input-component">
<div>
<label>Number type</label>
<SfOtpInput Value="1234"></SfOtpInput>
</div>
<div>
<label>Text type</label>
<SfOtpInput Type="OtpInputType.Text" Value="ab12"></SfOtpInput>
</div>
<div>
<label>Password type</label>
<SfOtpInput Type="OtpInputType.Password" Value="ab12"></SfOtpInput>
</div>
</div>
<style>
.otp-input-component {
width: 250px;
display: flex;
flex-direction: column;
gap: 20px;
}
</style>
```
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Built-in-input-types-supported-in-Blazor-OTP-Input-component.png" alt="Built-in input types supported in Blazor OTP Input component" style="width:100%">
<figcaption>Built-in input types supported in Blazor OTP Input component</figcaption>
</figure>
**Note:** For more details, refer to the [input types of Blazor OTP Input component documentation](https://blazor.syncfusion.com/documentation/otp-input/input-types "Input Types in Blazor OTP Input component").
### <a name="Adjustable">Adjustable input length</a>
The Blazor OTP Input component offers flexibility to define the number of input fields, making it adaptable to various OTP length requirements. Whether you need a standard 6-digit OTP or a different length, the input length can be easily adjusted to meet your specific app requirements.
In the following example, the [Length](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Inputs.SfOtpInput.html#Syncfusion_Blazor_Inputs_SfOtpInput_Length "Length property of Blazor OTP Input component") property is set to 8 to define the number of input fields.
```xml
@rendermode InteractiveServer
@using Syncfusion.Blazor.Inputs
<div class="otp-input-component" style="width: 400px;">
<SfOtpInput Length=8 Value="12345678"></SfOtpInput>
</div>
```
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Customizing-the-input-filed-length-in-the-Blazor-OTP-Input-component.png" alt="Customizing the input field length in the Blazor OTP Input component" style="width:100%">
<figcaption>Customizing the input field length in the Blazor OTP Input component</figcaption>
</figure>
**Note:** For more details, refer to [setting the input length of Blazor OTP Input component documentation](https://blazor.syncfusion.com/documentation/otp-input/appearance#setting-input-length "Setting the input length of Blazor OTP Input documentation").
### <a name="Multiple">Multiple styling modes</a>
Customize the appearance of the Blazor OTP Input component with various styling modes to suit your app’s visual design:
- [Outlined](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Inputs.OtpInputStyle.html#Syncfusion_Blazor_Inputs_OtpInputStyle_Outlined "Outlined property of Blazor OTP Input component"): Displays a border around each input field for a clear and structured look.
- [Filled](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Inputs.OtpInputStyle.html#Syncfusion_Blazor_Inputs_OtpInputStyle_Filled "Filled property of Blazor OTP Input component"): Adds a background color to the input fields, enhancing visibility with an included underline.
- [Underlined](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Inputs.OtpInputStyle.html#Syncfusion_Blazor_Inputs_OtpInputStyle_Underlined "Underlined property of Blazor OTP Input component"): Highlights each input field with a sleek underline, providing a modern and minimalist style.
In the following code example, we set the [StylingMode](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Inputs.SfOtpInput.html#Syncfusion_Blazor_Inputs_SfOtpInput_StylingMode "StylingMode property of Blazor OTP Input component") property to specify the desired styling mode using the [OtpInputStyle](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Inputs.OtpInputStyle.html "OtpInputStyle class of Blazor OTP Input component") enumeration.
```xml
@rendermode InteractiveServer
@using Syncfusion.Blazor.Inputs
<input type="radio" checked="@(_inputStyle == OtpInputStyle.Outlined)"
name="InputStyle" title="InputStyle:Outlined" @onchange="@(() => _inputStyle = OtpInputStyle.Outlined)" />
<label>Outlined</label>
<input type="radio" checked="@(_inputStyle == OtpInputStyle.Filled)"
name="InputStyle" title="InputStyle:Filled" @onchange="@(() => _inputStyle = OtpInputStyle.Filled)" />
<label>Filled</label>
<input type="radio" checked="@(_inputStyle == OtpInputStyle.Underlined)"
name="InputStyle" title="InputStyle:Underlined" @onchange="@(() => _inputStyle = OtpInputStyle.Underlined)" />
<label>Underlined</label>
<br />
<br />
<div class="otp-input-component" style="width: 400px;">
<SfOtpInput StylingMode="@_inputStyle" Length=6 Value="123456"></SfOtpInput>
</div>
@code {
private OtpInputStyle _inputStyle = OtpInputStyle.Outlined;
}
```
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Multiple-styling-modes-in-Blazor-OTP-Input-component.gif" alt="Multiple styling modes in Blazor OTP Input component" style="width:100%">
<figcaption>Multiple styling modes in Blazor OTP Input component</figcaption>
</figure>
**Note:** For more details, refer to the [styling modes in the Blazor OTP Input component documentation](https://blazor.syncfusion.com/documentation/otp-input/styling-modes "Styling Modes in Blazor OTP Input component").
### <a name="Customizable">Customizable placeholder and separators</a>
Enhance user guidance and visual clarity with customizable placeholders and separators in the Blazor OTP Input component.
- **Placeholders:** Display hint characters in each input field to indicate the expected value. Define a single character for all fields or customize each field based on its length.
- **Separators:** Specify a character to visually separate input fields tailored to improve the readability and structure of OTP inputs.
Refer to the following code example.
```xml
@rendermode InteractiveServer
@using Syncfusion.Blazor.Inputs
<div class="otp-input-component">
<div>
<label>Placeholders</label>
<br /><br />
<SfOtpInput Placeholder="#" Length="5"></SfOtpInput>
</div>
<div>
<SfOtpInput Placeholder="#OTP#" Length="5"></SfOtpInput>
</div>
<div>
<label>Separators</label>
<br /><br />
<SfOtpInput Separator="/" Length="6"></SfOtpInput>
</div>
<div>
<SfOtpInput Separator="/" Length="6" CssClass="custom-otpinput"></SfOtpInput>
</div>
</div>
<style>
.otp-input-component {
width: 275px;
display: flex;
flex-direction: column;
gap: 20px;
}
.custom-otpinput span.e-otp-separator:nth-of-type(odd) {
display: none;
}
</style>
```
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Customizable-placeholder-and-separators-in-Blazor-OTP-Input-component.png" alt="Customizable placeholder and separators in Blazor OTP Input component" style="width:100%">
<figcaption>Customizable placeholder and separators in Blazor OTP Input component</figcaption>
</figure>
**Note:** For more details, refer to the [placeholders](https://blazor.syncfusion.com/documentation/otp-input/placeholder "Placeholder in Blazor OTP Input component") and [separators](https://blazor.syncfusion.com/documentation/otp-input/separator "Separator in Blazor OTP Input component") in the Blazor OTP Input component documentation.
### <a name="Setting">Setting validation state</a>
The Blazor OTP Input component enables you to set validation states, indicating success, warning, or error based on input validation. This component supports predefined styles that can be applied using the [CssClass](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Inputs.SfOtpInput.html#Syncfusion_Blazor_Inputs_SfOtpInput_CssClass "CssClass property of Blazor OTP Input component") property:
- **e-success**: Indicates a successful action.
- **e-error**: Indicates a negative action.
- **e-warning**: Indicates an action that requires caution.
In the following code example, the [CssClass](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Inputs.SfOtpInput.html#Syncfusion_Blazor_Inputs_SfOtpInput_CssClass "CssClass property of Blazor OTP Input component") property is used to set the validation state using these e-classes.
```xml
@rendermode InteractiveServer
@using Syncfusion.Blazor.Inputs
<div class="otp-input-component">
<div>
<label>Success state</label>
<SfOtpInput CssClass="e-success"></SfOtpInput>
</div>
<div>
<label>Error state</label>
<SfOtpInput CssClass="e-error"></SfOtpInput>
</div>
<div>
<label>Warning state</label>
<SfOtpInput CssClass="e-warning"></SfOtpInput>
</div>
</div>
<style>
.otp-input-component {
width: 250px;
display: flex;
flex-direction: column;
gap: 20px;
}
</style>
```
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Validation-states-in-Blazor-OTP-Input-component.png" alt="Validation states in Blazor OTP Input component" style="width:100%">
<figcaption>Validation states in Blazor OTP Input component</figcaption>
</figure>
**Note:** For more details, refer to the [validation states](https://blazor.syncfusion.com/documentation/otp-input/appearance#cssclass "Validation States in the Blazor OTP Input component") in the Blazor OTP Input component documentation.
### <a name="UI">UI customization</a>
With the Blazor OTP Input component, you can easily customize its appearance, including rounded fields and styling for each field. It offers extensive flexibility and control over the OTP Input presentation, ensuring seamless integration into your app’s design.
The following code example demonstrates how to create a rounded OTP Input and customize the styling of each input field, handling both filled and empty states using the [CssClass](https://help.syncfusion.com/cr/blazor/Syncfusion.Blazor.Inputs.SfOtpInput.html#Syncfusion_Blazor_Inputs_SfOtpInput_CssClass "CssClass property of Blazor OTP Input component") property.
```xml
@rendermode InteractiveServer
@using Syncfusion.Blazor.Inputs
<div class="otp-input-component">
<div>
<label>Rounded OTP Input</label>
<SfOtpInput CssClass="rounded-otp-input"></SfOtpInput>
</div>
<div>
<label>Input field styling</label>
<SfOtpInput CssClass="input-field-styles" Placeholder=" "></SfOtpInput>
</div>
</div>
<style>
.otp-input-component {
width: 250px;
display: flex;
flex-direction: column;
gap: 20px;
}
.rounded-otp-input.e-otpinput .e-otp-input-field {
border-radius: 50%;
height: 50px;
background-color: grey;
color: white;
font-size: 25px;
}
.rounded-otp-input.e-otpinput .e-otp-input-field.e-input.e-otp-input-focus {
border-radius: 50%;
}
.input-field-styles.e-otpinput .e-otp-input-field {
height: 50px;
background-color: lightgray;
font-size: 25px;
border-radius: 0;
border-color: transparent;
}
.input-field-styles.e-otpinput .e-otp-input-field:not(:placeholder-shown) {
background-color: #88d0e7;
border-color: transparent;
}
.input-field-styles.e-otpinput .e-otp-input-field.e-input.e-otp-input-focus {
border-radius: 0;
border-color: lightsalmon;
border-width: 2px;
box-shadow: none;
}
</style>
```
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Customizing-the-appearance-of-the-Blazor-OTP-Input-component.gif" alt="Customizing the appearance of the Blazor OTP Input component" style="width:100%">
<figcaption>Customizing the appearance of the Blazor OTP Input component</figcaption>
</figure>
## References
For more details, refer to our [Blazor OTP Input component demos](https://blazor.syncfusion.com/demos/otp-input/default-functionalities?theme=fluent2 "Blazor OTP Input component demos") and [documentation](https://blazor.syncfusion.com/documentation/otp-input/getting-started "Blazor OTP Input component documentation").
## Supported platforms
The OTP Input component is also available on the following platforms.
<table width="601">
<tbody>
<tr>
<td width="199">
<p><strong>Platform </strong></p>
</td>
<td width="200">
<p><strong>Live Demo </strong></p>
</td>
<td width="202">
<p><strong>Documentation </strong></p>
</td>
</tr>
<tr>
<td width="199">
<p><a title="JavaScript OTP Input Control" href="https://www.syncfusion.com/javascript-ui-controls/js-otp-input" target="_blank" rel="noopener">JavaScript</a></p>
</td>
<td width="200">
<p><a title="JavaScript OTP Input Demos" href="https://ej2.syncfusion.com/demos/#/material3/otp-input/default.html" target="_blank" rel="noopener">JavaScript OTP Input Demos </a></p>
</td>
<td width="202">
<p><a title="Getting Started with JavaScript OTP Input" href="https://ej2.syncfusion.com/documentation/otp-input/getting-started" target="_blank" rel="noopener">Getting Started with JavaScript OTP Input</a></p>
</td>
</tr>
<tr>
<td width="199">
<p><a title="Angular OTP Input Component" href="https://www.syncfusion.com/angular-components/angular-otp-input" target="_blank" rel="noopener">Angular</a></p>
</td>
<td width="200">
<p><a title="Angular OTP Input Demos" href="https://ej2.syncfusion.com/angular/demos/#/material3/otp-input/default" target="_blank" rel="noopener">Angular OTP Input Demos </a></p>
</td>
<td width="202">
<p><a title="Getting Started with Angular OTP Input" href="https://ej2.syncfusion.com/angular/documentation/otp-input/getting-started" target="_blank" rel="noopener">Getting Started with Angular OTP Input</a></p>
</td>
</tr>
<tr>
<td width="199">
<p><a title="React OTP Input Component" href="https://www.syncfusion.com/react-components/react-otp-input" target="_blank" rel="noopener">React</a></p>
</td>
<td width="200">
<p><a title="React OTP Input Demos" href="https://ej2.syncfusion.com/react/demos/#/material3/otp-input/default" target="_blank" rel="noopener">React OTP Input Demos</a></p>
</td>
<td width="202">
<p><a title="Getting Started with React OTP Input" href="https://ej2.syncfusion.com/react/documentation/otp-input/getting-started" target="_blank" rel="noopener">Getting Started with React OTP Input</a></p>
</td>
</tr>
<tr>
<td width="199">
<p><a title="Vue OTP Input Component" href="https://www.syncfusion.com/vue-components/vue-otp-input" target="_blank" rel="noopener">Vue</a></p>
</td>
<td width="200">
<p><a title="Vue OTP Input Demos" href="https://ej2.syncfusion.com/vue/demos/#/material3/otp-input/default.html" target="_blank" rel="noopener">Vue OTP Input Demos</a></p>
</td>
<td width="202">
<p><a title="Getting Started with Vue OTP Input" href="https://ej2.syncfusion.com/vue/documentation/otp-input/getting-started" target="_blank" rel="noopener">Getting Started with Vue OTP Input</a></p>
</td>
</tr>
<tr>
<td width="199">
<p><a title="ASP.NET Core OTP Input Control" href="https://www.syncfusion.com/aspnet-core-ui-controls/otp-input" target="_blank" rel="noopener">ASP.NET Core</a></p>
</td>
<td width="200">
<p><a title="ASP.NET Core OTP Input Demos" href="https://ej2.syncfusion.com/aspnetcore/otpinput/defaultfunctionalities#/material3" target="_blank" rel="noopener">ASP.NET Core OTP Input Demos </a></p>
</td>
<td width="202">
<p><a title="Getting Started with ASP.NET Core OTP Input" href="https://ej2.syncfusion.com/aspnetcore/documentation/otp-input/getting-started" target="_blank" rel="noopener">Getting Started with ASP.NET Core OTP Input</a></p>
</td>
</tr>
<tr>
<td width="199">
<p><a title="ASP.NET MVC OTP Input Control" href="https://www.syncfusion.com/aspnet-mvc-ui-controls/otp-input" target="_blank" rel="noopener">ASP.NET MVC</a></p>
</td>
<td width="200">
<p><a title="ASP.Net MVC OTP Input Demos" href="https://ej2.syncfusion.com/aspnetmvc/otpinput/defaultfunctionalities#/material3" target="_blank" rel="noopener">ASP.NET MVC OTP Input Demos </a></p>
</td>
<td width="202">
<p><a title="Getting Started with ASP.NET MVC OTP Input" href="https://ej2.syncfusion.com/aspnetmvc/documentation/otp-input/getting-started" target="_blank" rel="noopener">Getting Started with ASP.NET MVC OTP Input</a></p>
</td>
</tr>
</tbody>
</table>
## Conclusion
Thanks for reading! In this blog, we have seen the features of the new Syncfusion [Blazor OTP Input](https://www.syncfusion.com/blazor-components/blazor-otp-input "Blazor OTP Input") component that rolled out in our [2024 Volume 2](https://www.syncfusion.com/forums/188642/essential-studio-2024-volume-2-main-release-v26-1-35-is-available-for-download "Essential Studio 2024 Volume 2") release. These features are also listed on our [Release Notes](https://help.syncfusion.com/common/essential-studio/release-notes/v26.1.35 "Essential Studio Release Notes") and the [What’s New](https://www.syncfusion.com/products/whatsnew "Essential Studio What’s New") pages. Try out the component and share your feedback as comments on this blog.
You can also reach us through our [support forums](https://www.syncfusion.com/forums/ "Syncfusion Support Forum"), [support portal](https://support.syncfusion.com/ "Syncfusion Support Portal"), or [feedback portal](https://www.syncfusion.com/feedback/ "Syncfusion Feedback Portal"). Our team is always ready to assist you!
## Related blogs
- [What’s New in Blazor: 2024 Volume 2](https://www.syncfusion.com/blogs/post/whats-new-blazor-2024-volume-2 "Blog: What’s New in Blazor: 2024 Volume 2")
- [Introducing the New Blazor TextArea Component](https://www.syncfusion.com/blogs/post/new-blazor-textarea-component "Blog: Introducing the New Blazor TextArea Component")
- [Introducing the New Blazor 3D Charts Component](https://www.syncfusion.com/blogs/post/blazor-3d-charts-component "Blog: Introducing the New Blazor 3D Charts Component")
- [What’s New in Blazor Diagram: 2024 Volume 2](https://www.syncfusion.com/blogs/post/whats-new-blazor-diagram-2024-vol-2 "Blog: What’s New in Blazor Diagram: 2024 Volume 2") | jollenmoyani |
1,910,348 | Chart of the Week: Creating the .NET MAUI Radial Bar to Visualize Apple’s Revenue Breakdown | TL;DR: Let’s visualize Apple’s revenue distribution data across different products using Syncfusion... | 0 | 2024-07-08T16:07:31 | https://www.syncfusion.com/blogs/post/dotnet-maui-radial-bar-apple-revenue | dotnetmaui, chart, maui, desktop | ---
title: Chart of the Week: Creating the .NET MAUI Radial Bar to Visualize Apple’s Revenue Breakdown
published: true
date: 2024-07-03 13:27:25 UTC
tags: dotnetmaui, chart, maui, desktop
canonical_url: https://www.syncfusion.com/blogs/post/dotnet-maui-radial-bar-apple-revenue
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hxk2c8nkghgttksz3ieq.jpeg
---
**TL;DR:** Let’s visualize Apple’s revenue distribution data across different products using Syncfusion .NET MAUI Radial Bar Chart. We’ll use advanced customization options for a superior data visualization experience. Read the article for complete details!
Welcome to our **Chart of the Week Blog** series!
Today, we’ll visualize the data on Apple’s revenue breakdown for 2022 by creating a stunning [Radial Bar Chart](https://www.syncfusion.com/maui-controls/maui-circular-charts/chart-types/maui-radial-bar-chart ".NET MAUI Radial Bar Chart") with a customized [center view](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.RadialBarSeries.html#Syncfusion_Maui_Charts_RadialBarSeries_CenterView "CenterView property of Radial Bar Charts") using the [Syncfusion .NET MAUI Circular Charts control](https://www.syncfusion.com/maui-controls/maui-circular-charts ".NET MAUI Circular Charts").
## Why use Radial Bar Chart?
A [Radial Bar Chart](https://youtu.be/KO7LfFuIwHE "YouTube video: How to customize the .NET MAUI Radial Bar Chart?") is a doughnut chart representing each segment as a separate circle. This versatile chart type can be used in various scenarios to visualize categorical data in a circular format. Here are a few everyday use cases of it:
- Sales performance
- Market share analysis
- Survey results
- Performance metrics
Imagine you’re a financial analyst looking to present complex revenue data in a visually appealing way. You need a chart that not only showcases the contributions of different product categories like iPhones, Macs, and Services but also has a stunning design. The [Radial Bar Chart](https://help.syncfusion.com/maui/circular-charts/radialbarchart "Radial Bar Chart in .NET MAUI Chart") is ideal for this purpose. It transforms detailed revenue breakdowns into an easy-to-understand and captivating visual, making data interpretation straightforward and efficient.
## **Center view customization**
The .NET MAUI Radial Bar Chart offers extensive customization options; the [center view](https://help.syncfusion.com/maui/circular-charts/radialbarchart#centerview "Center view customization in .NET MAUI Radial Bar Chart") is one such powerful feature. This feature allows you to add any view or key statistics to the center of the Radial Bar Chart, making it a valuable area for sharing additional information about the chart.
In this blog, we will visualize Apple’s revenue for 2022, placing an Apple icon and the total revenue in the center view. This central element not only adds a visual appeal but also provides a quick snapshot of the overall data.
Refer to the following image.

We’ll also explore other chart features such as the chart title, color palettes, series customization, legend, and interactive features like tooltips for further customization.
Let’s get started!
## Step 1: Gathering data from the source
First, we need to gather the revenue data for different product categories from a reliable source. For this example, we are using Apple's revenue data for the year 2022 from [Apple's annual report (page 40)](https://s2.q4cdn.com/470004039/files/doc_financials/2022/q4/_10-K-2022-(As-Filed).pdf "Apple's annual report").
## Step 2: Preparing the data for the chart
Create a **Model** to represent the revenue data for each product category. Refer to the following code example.
```csharp
public class Model
{
public string Category { get; set; }
public double Revenue { get; set; }
public string CategoryImage { get; set; }
public SolidColorBrush Color { get; set; }
public Model(string category, double revenue, string categoryImage, SolidColorBrush color)
{
Category = category;
Revenue = revenue;
CategoryImage = categoryImage + ".png";
Color = color;
}
}
```
Next, create a **ViewModel** to hold the collection of revenue data.
```csharp
public class ViewModel
{
public ObservableCollection<Model> RadialBarData { get; set; }
public ViewModel()
{
var colors = new string[]
{
"#0B77E3",
"#1D5B6F",
"#BD34B7",
"#DE7207",
"#8E4AFC"
};
RadialBarData = new ObservableCollection<Model>()
{
new Model("Services", 78129, "service", CreateBrush(colors[0])),
new Model("Wearables", 41241, "earphone", CreateBrush(colors[1])),
new Model("iPad", 29292, "ipad", CreateBrush(colors[2])),
new Model("Mac", 40177, "mac", CreateBrush(colors[3])),
new Model("iPhone", 205489, "iphone", CreateBrush(colors[4]))
};
}
private SolidColorBrush CreateBrush(string hexColor)
{
return new SolidColorBrush(Color.FromArgb(hexColor));
}
}
```
## Step 3: Configure the Syncfusion .NET MAUI Circular Charts
Let’s configure the Syncfusion [.NET MAUI Circular Charts control](https://www.syncfusion.com/maui-controls/maui-circular-charts ".NET MAUI Circular Charts control") using this [documentation](https://help.syncfusion.com/maui/circular-charts/getting-started "Getting started with .NET MAUI Circular Charts").
Refer to the following code example.
```xml
<?xml version="1.0" encoding="utf-8" ?>
<ContentPage ….
xmlns:chart="clr-namespace:Syncfusion.Maui.Charts;assembly=Syncfusion.Maui.Charts">
<chart:SfCircularChart>
…
</chart:SfCircularChart>
</ContentPage>
```
## Step 4: Binding data to the chart
To design the Radial Bar Chart, we’ll use a Syncfusion [RadialBarSeries](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.RadialBarSeries.html "RadialBarSeries class of .NET MAUI Circular Charts") instance. Make sure to configure the **ViewModel** class to bind its properties to the chart’s **BindingContext**.
Refer to the following code example. Here, in the **RadialBarSeries** , the [ItemsSource](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.ChartSeries.html#Syncfusion_Maui_Charts_ChartSeries_ItemsSource "ItemsSource property of .NET MAUI Charts"), [XBindingPath](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.ChartSeries.html#Syncfusion_Maui_Charts_ChartSeries_XBindingPath "XBindingPath property of .NET MAUI Charts"), and [YBindingPath](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.XYDataSeries.html#Syncfusion_Maui_Charts_XYDataSeries_YBindingPath "YBindingPath property of .NET MAUI Charts") properties are bound to the **RadialBarData** , **Category** , and **Revenue** properties, respectively.
```xml
<?xml version="1.0" encoding="utf-8" ?>
<ContentPage ….
xmlns:local="clr-namespace:AppleRevenueRadialBarChart">
<chart:SfCircularChart>
<chart:SfCircularChart.BindingContext>
<local:ViewModel x:Name="viewModel"/>
</chart:SfCircularChart.BindingContext>
<chart:RadialBarSeries ItemsSource="{Binding RadialBarData}"
                               YBindingPath="Revenue" XBindingPath="Category" />
</chart:SfCircularChart>
</ContentPage>
```
## Step 5: Customizing the Syncfusion .NET MAUI Radial Bar Chart
Let’s enhance the aesthetics and readability of the .NET MAUI Radial Bar Chart by customizing its elements such as title, series, interaction features, center view, and legends.
### Chart title: Setting the stage
With the Syncfusion .NET MAUI Radial Bar Chart, adding a [title](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.ChartBase.html#Syncfusion_Maui_Charts_ChartBase_Title "Title property of .NET MAUI Charts") is simple and intuitive. The title provides quick information about the data being plotted in the chart.
Refer to the following code example to add a title to the chart.
```xml
<chart:SfCircularChart.Title>
<HorizontalStackLayout>
<Image Source="revenue1.png" WidthRequest="{OnPlatform Default='50', iOS='40', MacCatalyst='70'}"/>
<VerticalStackLayout>
<Label Text="Apple's Revenue Breakdown for FY 2022" Margin="5,0,0,0"
HorizontalOptions="CenterAndExpand" HorizontalTextAlignment="Center"
VerticalOptions="CenterAndExpand"
FontSize="{OnPlatform Default='25', iOS='22', MacCatalyst='28'}" FontAttributes="Bold" />
<Label Text="Revenue growth rate calculated by product category" Margin="5,0,0,0"
HorizontalOptions="StartAndExpand" HorizontalTextAlignment="Center"
VerticalOptions="CenterAndExpand"
FontSize="{OnPlatform WinUI='14', Android='12', iOS='14', MacCatalyst='18'}"
TextColor="Gray"/>
</VerticalStackLayout>
</HorizontalStackLayout>
</chart:SfCircularChart.Title>
```
### Center view: Highlighting key information
The [center view](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.RadialBarSeries.html#Syncfusion_Maui_Charts_RadialBarSeries_CenterView "CenterView property of .NET MAUI Charts") lets us share additional information about the Radial Bar Chart. Here, we’re going to add the total revenue Apple generated in 2022, $394.3B, to the center view.
```xml
<chart:RadialBarSeries.CenterView>
<VerticalStackLayout Margin="{OnPlatform WinUI='0, -20, 0, 0', Default='0, -10, 0, 0'}" >
<Image HorizontalOptions="CenterAndExpand" VerticalOptions="StartAndExpand" Source="apple.png"
HeightRequest="{OnPlatform iOS=70, Android=70, WinUI=85, MacCatalyst=120}"
WidthRequest="{OnPlatform iOS=70, Android=70, WinUI=100, MacCatalyst=120}" />
<VerticalStackLayout Margin="{OnPlatform WinUI='0, -10, 0, 0', Default='0, -10, 0, 0'}" >
<Label Text="Total revenue" FontSize="{OnPlatform Android=12, iOS=14, WinUI=12, MacCatalyst=18}"
HorizontalOptions="Center" VerticalOptions="Center" />
<Label Text="$394.3B" FontSize="{OnPlatform Android=14, iOS=18, WinUI=16, MacCatalyst=22}"
FontAttributes="Bold" HorizontalOptions="Center" VerticalOptions="Center" />
</VerticalStackLayout>
</VerticalStackLayout>
</chart:RadialBarSeries.CenterView>
```
### Palette: Making data pop with colors
We can customize the [palette](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.ChartSeries.html#Syncfusion_Maui_Charts_ChartSeries_PaletteBrushes "PaletteBrushes property of .NET MAUI Charts") to apply distinct colors for each category, making it easier to differentiate between them. Each segment of the radial bar represents a product category, with unique colors assigned to Services, Wearables, iPad, Mac, and iPhone. Color coding helps in quick visual identification and comparison of categories.
**XAML**
```xml
<chart:RadialBarSeries PaletteBrushes="{Binding Palette}" />
```
**C#**
```csharp
public class ViewModel
{
public ObservableCollection<Brush> Palette { get; set; }
public ViewModel()
{
var colors = new string[]
{
…
};
Palette = new ObservableCollection<Brush>(colors.Select(CreateBrush));
}
}
```
### Customizing the series appearance
Let’s customize the chart’s series appearance to suit our data presentation needs.
- The [CapStyle](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.RadialBarSeries.html#Syncfusion_Maui_Charts_RadialBarSeries_CapStyle "CapStyle property of .NET MAUI Charts") property adds smooth curves to the start and end of each bar.
- The [Radius](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.CircularSeries.html#Syncfusion_Maui_Charts_CircularSeries_Radius "Radius property of .NET MAUI Charts") property changes the radial bar chart size.
- The [MaximumValue](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.RadialBarSeries.html#Syncfusion_Maui_Charts_RadialBarSeries_MaximumValue "MaximumValue property of .NET MAUI Charts") represents the span of the segment-filled area in the radial bar track.
- The [GapRatio](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.RadialBarSeries.html#Syncfusion_Maui_Charts_RadialBarSeries_GapRatio "GapRatio property of .NET MAUI Charts") defines the space between each segment.
- The [TrackFill](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.RadialBarSeries.html#Syncfusion_Maui_Charts_RadialBarSeries_TrackFill "TrackFill property of .NET MAUI Charts") property customizes the circular bar area behind the radial bar segments.
By leveraging these properties, a visually stunning and highly customizable Radial Bar Chart can be created to communicate data insights effectively.
Refer to the following code example.
```xml
<chart:RadialBarSeries CapStyle="BothCurve"
Radius="{OnPlatform Android=1, iOS=1, Default=0.8}"
MaximumValue="218000"
GapRatio="0.4"
                       TrackFill="#E7E0EC" />
```
### Legend: A quick reference guide
The [legends](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.ChartBase.html#Syncfusion_Maui_Charts_ChartBase_Legend "Legend property of .NET MAUI Charts") provide information about the data points being displayed in the chart. They list all categories with their respective icons, making it easy to interpret the data at a glance without needing to hover over each segment.
Refer to the following code example.
```xml
<chart:SfCircularChart>
<chart:SfCircularChart.Resources>
<local:BrushToColorConverter x:Key="brushToColor" />
<local:BillionConverter x:Key="billionConverter" />
<DataTemplate x:Key="LegendTemplate">
<Grid ColumnDefinitions="Auto,Auto" Margin="{OnPlatform Android='0, -5, 0, 10', Default='20, 0, 0, 0'}">
<Image Grid.Column="0" Source="{Binding Item.CategoryImage}"
WidthRequest="{OnPlatform Default='35', iOS='40', MacCatalyst='50'}"
HeightRequest="{OnPlatform Default='35', iOS='40', MacCatalyst='50'}" />
<VerticalStackLayout Grid.Column="1">
<Label FontSize="{OnPlatform Default='13', iOS='16', MacCatalyst='18'}"
VerticalTextAlignment="Center" Text="{Binding Item.Category}"
TextColor="{Binding IconBrush, Converter={StaticResource brushToColor}}"
Margin="0,2,0,0" HorizontalOptions="StartAndExpand" HorizontalTextAlignment="Center"/>
<Label FontSize="{OnPlatform Default='13', iOS='16', MacCatalyst='18'}"
VerticalTextAlignment="Center"
Text="{Binding Item.Revenue, Converter={StaticResource billionConverter}}"
TextColor="{Binding IconBrush, Converter={StaticResource brushToColor}}"
Margin="0,2,0,0" HorizontalOptions="StartAndExpand" HorizontalTextAlignment="Center"/>
</VerticalStackLayout>
</Grid>
</DataTemplate>
</chart:SfCircularChart.Resources>
<chart:SfCircularChart.Legend>
<chart:ChartLegend Placement="{OnPlatform Android=Right, iOS=Right, Default=Bottom}"
ItemTemplate="{StaticResource LegendTemplate}"/>
</chart:SfCircularChart.Legend>
<chart:RadialBarSeries LegendIcon="Circle">
</chart:RadialBarSeries>
</chart:SfCircularChart>
```
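The legend template above references `BrushToColorConverter` and `BillionConverter`, which aren't shown in the post. A minimal sketch of what they might look like (assuming revenue values are stored in millions of dollars, so `78129` renders as `$78.1B`):

```csharp
// Sketch only: the post doesn't show these converters, so the bodies below
// are assumptions inferred from how they're used in the XAML.
public class BrushToColorConverter : IValueConverter
{
    // Extracts the Color from a SolidColorBrush so it can be bound to
    // properties (like Label.TextColor) that expect a Color.
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return value is SolidColorBrush brush ? brush.Color : Colors.Black;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
        => throw new NotImplementedException();
}

public class BillionConverter : IValueConverter
{
    // Formats a revenue value (assumed to be in millions of dollars)
    // as a compact "$78.1B"-style string for the legend.
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return value is double millions ? $"${millions / 1000:0.#}B" : string.Empty;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
        => throw new NotImplementedException();
}
```

These assume `using System.Globalization;` and the usual .NET MAUI namespaces are in scope.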
### Tooltip: Adding interactivity
We can enhance the chart’s interactivity with [tooltips](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.ChartSeries.html#Syncfusion_Maui_Charts_ChartSeries_EnableTooltip "EnableTooltip property of .NET MAUI Charts") that provide detailed information when a user hovers over or taps on a segment. In this chart, the tooltip shows the category name and its percentage contribution to the total revenue. This interactivity makes the chart more engaging and informative.
Refer to the following code example.
```xml
<chart:SfCircularChart>
<chart:SfCircularChart.Resources>
<local:PercentageConverter x:Key="percentageConverter" />
<DataTemplate x:Key="tooltipTemplate">
<StackLayout Orientation="Vertical" VerticalOptions="Center" Margin="{OnPlatform MacCatalyst='3'}">
<HorizontalStackLayout HorizontalOptions="Center">
<Ellipse WidthRequest="{OnPlatform Default=10, MacCatalyst=15}"
HeightRequest="{OnPlatform Default=10, MacCatalyst=15}"
Stroke="White" StrokeThickness="2"
Background="{Binding Item.Color, Converter={StaticResource brushToColor}}"
HorizontalOptions="Center" VerticalOptions="Center"/>
<Label Text="{Binding Item.Category, StringFormat=' {0}'}"
TextColor="White" FontAttributes="Bold"
FontSize="{OnPlatform Default='12', Android='10', iOS='10', MacCatalyst='18'}"
HorizontalOptions="Center" VerticalOptions="Center"/>
</HorizontalStackLayout>
<BoxView HeightRequest="{OnPlatform Default=0.5, MacCatalyst=1}" Color="White"/>
<HorizontalStackLayout>
<Label Text="Revenue:" TextColor="White" FontAttributes="Bold"
FontSize="{OnPlatform Default='12', Android='10', iOS='10', MacCatalyst='18'}"
HorizontalOptions="Center" VerticalOptions="Center"/>
<Label TextColor="White" FontAttributes="Bold"
Text="{Binding Item.Revenue, StringFormat=' {0}', Converter={StaticResource percentageConverter}}"
FontSize="{OnPlatform Default='12', Android='10', iOS='10', MacCatalyst='18'}"
HorizontalOptions="Center" VerticalOptions="Center"/>
</HorizontalStackLayout>
</StackLayout>
</DataTemplate>
</chart:SfCircularChart.Resources>
<chart:SfCircularChart.TooltipBehavior>
<chart:ChartTooltipBehavior />
</chart:SfCircularChart.TooltipBehavior>
<chart:RadialBarSeries EnableTooltip="True" TooltipTemplate="{StaticResource tooltipTemplate}">
</chart:RadialBarSeries>
</chart:SfCircularChart>
```
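The tooltip template above also uses a `PercentageConverter` that isn't shown. A converter only receives the bound value, so the total has to come from somewhere; this sketch assumes a hard-coded total (the sum of the five categories, 394,328 million), though it could equally be computed from the ViewModel's collection:

```csharp
public class PercentageConverter : IValueConverter
{
    // Assumption: the grand total is hard-coded; 394328 (in millions)
    // is the sum of the five category revenues, i.e. $394.3B.
    private const double TotalRevenue = 394328;

    // Turns a category's revenue into its share of total revenue,
    // e.g. 205489 -> "52.1%".
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return value is double revenue
            ? $"{revenue / TotalRevenue * 100:0.#}%"
            : string.Empty;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
        => throw new NotImplementedException();
}
```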
After executing these code examples, we will get output resembling the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Visualizing-Apples-revenue-data-using-Syncfusion-.NET-MAUI-Radial-Bar-Chart-control.gif" alt="Visualizing Apple’s revenue data using Syncfusion .NET MAUI Radial Bar Chart" style="width:100%">
<figcaption>Visualizing Apple’s revenue data using Syncfusion .NET MAUI Radial Bar Chart</figcaption>
</figure>
## GitHub reference
For more details, refer to the demo on [GitHub](https://github.com/SyncfusionExamples/Creating-the-.NET-MAUI-Radial-Bar-Chart-with-Customized-Center-View "Visualizing Apple’s revenue data using Syncfusion .NET MAUI Radial Bar Chart GitHub demo").
## Conclusion
Thanks for reading! In this blog, we’ve seen how to visualize Apple’s revenue data using the Syncfusion .NET MAUI Radial Bar Chart. We strongly encourage you to follow the steps outlined in this blog, share your thoughts in the comments below, and implement these features in your own projects to create impactful data visualizations.
The existing customers can download the new version of Essential Studio on the [License and Downloads](https://www.syncfusion.com/account "Essential Studio License and Downloads page") page. If you are not a Syncfusion customer, try our 30-day [free trial](https://www.syncfusion.com/downloads "Get the free 30-day evaluation of Essential Studio products") to check out our incredible features.
You can also contact us through our [support forums](https://www.syncfusion.com/forums "Syncfusion Support Forum"), [support portal](https://support.syncfusion.com/ "Syncfusion Support Portal"), or [feedback portal](https://www.syncfusion.com/feedback "Syncfusion Feedback Portal"). We are always happy to assist you!
## Related blogs
- [Easily Export .NET MAUI DataGrid to Specific PDF Page](https://www.syncfusion.com/blogs/post/export-maui-grid-to-specific-pdf-page "Blog: Easily Export .NET MAUI DataGrid to Specific PDF Page")
- [Introducing the New .NET MAUI Digital Gauge Control](https://www.syncfusion.com/blogs/post/dotnetmaui-digital-gauge-control "Blog: Introducing the New .NET MAUI Digital Gauge Control")
- [Chart of the Week: Creating a .NET MAUI Doughnut Chart to Visualize the World’s Biggest Oil Producers](https://www.syncfusion.com/blogs/post/maui-doughnut-chart-oil-producers "Blog: Chart of the Week: Creating a .NET MAUI Doughnut Chart to Visualize the World’s Biggest Oil Producers")
- [Introducing the 12th Set of New .NET MAUI Controls and Features](https://www.syncfusion.com/blogs/post/syncfusion-dotnet-maui-2024-volume-2 "Blog: Introducing the 12th Set of New .NET MAUI Controls and Features") | jollenmoyani |
1,910,383 | 40 Days Of Kubernetes (11/40) | Day 11/40 Multi Container Pod Kubernetes - Sidecar vs Init Container Video... | 0 | 2024-07-08T16:19:38 | https://dev.to/sina14/40-days-of-kubernetes-1140-a5f | kubernetes, 40daysofkubernetes | ## Day 11/40
# Multi Container Pod Kubernetes - Sidecar vs Init Container
[Video Link](https://www.youtube.com/watch?v=yRiFq1ykBxc)
@piyushsachdeva
[Git Repository](https://github.com/piyushsachdeva/CKA-2024/)
[My Git Repo](https://github.com/sina14/40daysofkubernetes)
We're going to look at sidecar and multi-`container` pods.
Let's say we have a `pod` whose main container runs `nginx`. The `pod` can also hold additional containers: a sidecar (helper) container that, for instance, performs a `health-check`, or an `init-container` that runs certain setup tasks for the main container.
The `init-container` runs first. As soon as its tasks complete, the main container (here, `nginx`) starts. All containers in the `pod` also share its resources, such as `memory` and `cpu`.

(Photo from the video)
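The examples below all use init containers; for contrast, a true sidecar is simply a second entry in the pod's `containers` list that runs alongside the main container for the pod's whole lifetime. A minimal sketch (image names and mount paths here are illustrative, not from the video):

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: web                      # main container
      image: nginx
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-shipper              # sidecar: runs for the pod's lifetime
      image: busybox:1.28
      command: ['sh', '-c', 'tail -n+1 -F /var/log/nginx/access.log']
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
  volumes:
    - name: logs
      emptyDir: {}
```

Newer Kubernetes releases (1.28+) also add "native" sidecars, declared as init containers with `restartPolicy: Always`, but the classic pattern above works everywhere.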
#### 1. Create `pod`
```yaml
---
apiVersion: v1
kind: Pod
metadata:
name: myapp
labels:
name: myapp-pod
spec:
containers:
- name: myapp-container
image: busybox:1.28
env:
- name: FIRSTNAME
value: "SINA"
```
```console
root@localhost:~# kubectl apply -f pod-sidecar.yaml
pod/myapp created
root@localhost:~# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp 0/1 Completed 1 (2s ago) 3s
root@localhost:~# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp 0/1 CrashLoopBackOff 1 (13s ago) 14s
```
**Note** Because the container isn't given anything to run, it exits immediately and keeps getting restarted, so the status is `CrashLoopBackOff`.
The events of the creation `pod` is:
```
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m13s default-scheduler Successfully assigned default/myapp to lucky-luke-worker
Normal Pulled 44s (x5 over 2m13s) kubelet Container image "busybox:1.28" already present on machine
Normal Created 44s (x5 over 2m13s) kubelet Created container myapp-container
Normal Started 44s (x5 over 2m13s) kubelet Started container myapp-container
Warning BackOff 15s (x10 over 2m12s) kubelet Back-off restarting failed container myapp-container in pod myapp_default(e3371c0f-ea2a-44b6-81f7-98953931f6ad)
```
#### 2. Create multi `pod`
```yaml
---
apiVersion: v1
kind: Pod
metadata:
name: myapp
labels:
name: myapp-pod
spec:
containers:
- name: myapp-container
image: busybox:1.28
command: ['sh', '-c', 'echo the app is running && sleep 3600']
env:
- name: FIRSTNAME
value: "SINA"
initContainers:
- name: init-myservice
image: busybox:1.28
command: ['sh', '-c']
args: ['until nslookup myservice.default.svc.cluster.local; do echo Wainting for service to be up; sleep 2; done']
```
- Run the `pod`:
```console
root@localhost:~# kubectl apply -f pod-sidecar.yaml
pod/myapp created
root@localhost:~# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp 0/1 Init:0/1 0 4s
root@localhost:~# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp 0/1 Init:0/1 0 16s
root@localhost:~# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp 0/1 Init:0/1 0 42s
```
- Let's see the events of the `pod`:
```
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m5s default-scheduler Successfully assigned default/myapp to lucky-luke-worker
Normal Pulled 2m5s kubelet Container image "busybox:1.28" already present on machine
Normal Created 2m5s kubelet Created container init-myservice
Normal Started 2m5s kubelet Started container init-myservice
```
It waits until `myservice.default.svc.cluster.local` can be resolved.
- Log of the `pod`:
```console
root@localhost:~# kubectl logs pod/myapp
Defaulted container "myapp-container" out of: myapp-container, init-myservice (init)
Error from server (BadRequest): container "myapp-container" in pod "myapp" is waiting to start: PodInitializing
```
- Logs of `init-container`:
```console
root@localhost:~# kubectl logs pod/myapp -c init-myservice
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'myservice.default.svc.local'
Wainting for service to be up
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'myservice.default.svc.local'
Wainting for service to be up
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
...
```
- Create the deployment and service, then watch our `myapp` container switch to `Running`.
```console
root@localhost:~# kubectl create deploy nginx-deploy --image nginx --port 80
deployment.apps/nginx-deploy created
root@localhost:~# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp 0/1 Init:0/1 0 108s
nginx-deploy-7d54cf5979-qrqq4 1/1 Running 0 5s
root@localhost:~# kubectl get pod,deploy,svc
NAME READY STATUS RESTARTS AGE
pod/myapp 0/1 Init:0/1 0 117s
pod/nginx-deploy-7d54cf5979-qrqq4 1/1 Running 0 14s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deploy 1/1 1 1 14s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d2h
root@localhost:~# kubectl expose deploy nginx-deploy --name myservice --port 80
service/myservice exposed
root@localhost:~# kubectl get pod,deploy,svc
NAME READY STATUS RESTARTS AGE
pod/myapp 0/1 Init:0/1 0 2m31s
pod/nginx-deploy-7d54cf5979-qrqq4 1/1 Running 0 48s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deploy 1/1 1 1 48s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d2h
service/myservice ClusterIP 10.96.143.163 <none> 80/TCP 4s
root@localhost:~# kubectl get pod,deploy,svc
NAME READY STATUS RESTARTS AGE
pod/myapp 1/1 Running 0 2m42s
pod/nginx-deploy-7d54cf5979-qrqq4 1/1 Running 0 59s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deploy 1/1 1 1 59s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d2h
service/myservice ClusterIP 10.96.143.163 <none> 80/TCP 15s
root@localhost:~# kubectl logs myapp -c myapp-container
the app is running
```
- Print environment variables of the `pod`
As we can see, our variable `FIRSTNAME` is defined.
```console
root@localhost:~# kubectl exec -it myapp -- printenv
Defaulted container "myapp-container" out of: myapp-container, init-myservice (init)
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=myapp
FIRSTNAME=SINA
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYSERVICE_SERVICE_HOST=10.96.143.163
KUBERNETES_PORT=tcp://10.96.0.1:443
MYSERVICE_PORT=tcp://10.96.143.163:80
MYSERVICE_PORT_80_TCP_ADDR=10.96.143.163
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
MYSERVICE_PORT_80_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT_443_TCP_PORT=443
MYSERVICE_SERVICE_PORT=80
MYSERVICE_PORT_80_TCP=tcp://10.96.143.163:80
MYSERVICE_PORT_80_TCP_PORT=80
TERM=xterm
HOME=/root
```
#### 3. Three-Container Pod
```yaml
---
apiVersion: v1
kind: Pod
metadata:
name: myapp
labels:
name: myapp-pod
spec:
containers:
- name: myapp-container
image: busybox:1.28
command: ['sh', '-c', 'echo the app is running && sleep 3600']
env:
- name: FIRSTNAME
value: "SINA"
initContainers:
- name: init-myservice
image: busybox:1.28
command: ['sh', '-c']
args: ['until nslookup myservice.default.svc.cluster.local; do echo Wainting for Service to be up; sleep 30; done']
- name: init-mydb
image: busybox:1.28
command: ['sh', '-c']
args: ['until nslookup mydb.default.svc.cluster.local; do echo Wainting for DB to be up; sleep 30; done']
```
```console
root@localhost:~# kubectl create deploy nginx-deploy --image nginx --port 80
deployment.apps/nginx-deploy created
root@localhost:~# kubectl expose deploy nginx-deploy --name myservice --port 80
service/myservice exposed
root@localhost:~# kubectl get pod,deploy,svc
NAME READY STATUS RESTARTS AGE
pod/myapp 0/1 Init:0/2 0 14m
pod/nginx-deploy-7d54cf5979-lddqf 1/1 Running 0 49s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deploy 1/1 1 1 49s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d4h
service/myservice ClusterIP 10.96.125.99 <none> 80/TCP 5s
root@localhost:~# kubectl get pod,deploy,svc
NAME READY STATUS RESTARTS AGE
pod/myapp 0/1 Init:0/2 0 14m
pod/nginx-deploy-7d54cf5979-lddqf 1/1 Running 0 57s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deploy 1/1 1 1 57s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d4h
service/myservice ClusterIP 10.96.125.99 <none> 80/TCP 13s
root@localhost:~# kubectl create deploy redis-deploy --image redis --port 6379
deployment.apps/redis-deploy created
root@localhost:~# kubectl expose deploy redis-deploy --name mydb --port 6379
service/mydb exposed
root@localhost:~# kubectl get pod,deploy,svc
NAME READY STATUS RESTARTS AGE
pod/myapp 0/1 Init:1/2 0 16m
pod/nginx-deploy-7d54cf5979-lddqf 1/1 Running 0 2m36s
pod/redis-deploy-6dd4dc84bc-2xcw9 1/1 Running 0 41s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deploy 1/1 1 1 2m36s
deployment.apps/redis-deploy 1/1 1 1 41s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d4h
service/mydb ClusterIP 10.96.147.66 <none> 6379/TCP 5s
service/myservice ClusterIP 10.96.125.99 <none> 80/TCP 112s
root@localhost:~# kubectl get pod,deploy,svc
NAME READY STATUS RESTARTS AGE
pod/myapp 0/1 Init:1/2 0 16m
pod/nginx-deploy-7d54cf5979-lddqf 1/1 Running 0 2m49s
pod/redis-deploy-6dd4dc84bc-2xcw9 1/1 Running 0 54s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deploy 1/1 1 1 2m49s
deployment.apps/redis-deploy 1/1 1 1 54s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d4h
service/mydb ClusterIP 10.96.147.66 <none> 6379/TCP 18s
service/myservice ClusterIP 10.96.125.99 <none> 80/TCP 2m5s
root@localhost:~# kubectl get pod,deploy,svc
NAME READY STATUS RESTARTS AGE
pod/myapp 1/1 Running 0 17m
pod/nginx-deploy-7d54cf5979-lddqf 1/1 Running 0 3m31s
pod/redis-deploy-6dd4dc84bc-2xcw9 1/1 Running 0 96s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deploy 1/1 1 1 3m31s
deployment.apps/redis-deploy 1/1 1 1 96s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d4h
service/mydb ClusterIP 10.96.147.66 <none> 6379/TCP 60s
service/myservice ClusterIP 10.96.125.99 <none> 80/TCP 2m47s
root@localhost:~# kubectl logs pod/myapp -c init-mydb
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Wainting for DB to be up
nslookup: can't resolve 'mydb.default.svc.cluster.local'
nslookup: can't resolve 'mydb.default.svc.cluster.local'
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Wainting for DB to be up
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Wainting for DB to be up
nslookup: can't resolve 'mydb.default.svc.cluster.local'
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Wainting for DB to be up
nslookup: can't resolve 'mydb.default.svc.cluster.local'
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: mydb.default.svc.cluster.local
Address 1: 10.96.147.66 mydb.default.svc.cluster.local
root@localhost:~# kubectl logs pod/myapp -c init-myservice
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'myservice.default.svc.cluster.local'
Wainting for Service to be up
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'myservice.default.svc.cluster.local'
Wainting for Service to be up
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
```
| sina14 |
1,910,464 | Python for Javascript Developers | Why Learn Python in the First Place? In today's world, we have an abundance of options... | 0 | 2024-07-08T14:01:38 | https://dev.to/alexaleoto/python-for-javascript-developers-36cd |

## Why Learn Python in the First Place?
In today's world, we have an abundance of options for languages, and at times it can become overwhelming. So why Python out of all languages? Python is number one on the list of most used programming languages. It is used by companies all around the world.

> Python is used by Intel, IBM, NASA, Pixar, Netflix, Facebook, JP Morgan Chase, Spotify, and a number of other massive companies. It's one of the four main languages at Google, while Google's YouTube is largely written in Python. The same goes for Reddit, Pinterest, and Instagram. The original BitTorrent client was also written in Python. It is used as a scripting language to control Maya, the industry standard 3D modeling and animation tool. ~ Brainstation.io
### Quick History on Python!
Python, one of the world's most popular programming languages, was conceptualized by Guido van Rossum in the late 1980s as a successor to the ABC programming language. Python was designed to be easy to learn and use, with support for exception handling and initially targeted at the Amoeba operating system. The first version, 0.9.0, was released in 1991, featuring essential elements like classes, functions, and data types such as lists and dictionaries. Python 1.0, released in 1994, introduced functional programming tools like lambda, map, filter, and reduce, contributed by other developers. Van Rossum continued developing Python at the Corporation for National Research Initiatives (CNRI) in Virginia, where he promoted programming accessibility through the initiative "Computer Programming for Everyone."
In 2000, the Python development team moved to BeOpen.com, transforming Python into a more open-source project with the release of Python 2.0, which fostered greater community involvement. This transition marked a significant change in Python's development process. Python 3.0, released in December 2008, was a major revision that was not backward compatible with Python 2.x, focusing on removing redundant constructs and modules to provide a more streamlined language. Despite the challenge of transitioning from Python 2 to Python 3, with support for Python 2.7 ending in 2020, Python continues to evolve and remains a highly popular and accessible programming language for beginners and professionals alike. Okay, now that we've got that out of the way, let's talk code!
## Essential Context and Syntax
For JavaScript developers, learning Python can be a smooth transition thanks to similarities in syntax and logic. However, there are also notable differences. Let's explore the essential context and syntax to get you started.
One of the first things I noticed about Python is its readability. Let's take a look at the examples below.
### Javascript Example

### Python Example

---
While both are fairly short pieces of code, the Python example is considerably shorter, and it has the added benefit of reading almost like English.
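Since the examples above are screenshots, here's a text rendition of the kind of comparison they make (my own illustrative snippets, not the exact code from the images). Collecting the squares of the even numbers:

```python
# JavaScript: const squares = [1, 2, 3, 4, 5, 6].filter(n => n % 2 === 0).map(n => n ** 2);
# The Python version reads almost like English:
squares = [n ** 2 for n in [1, 2, 3, 4, 5, 6] if n % 2 == 0]
print(squares)  # → [4, 16, 36]
```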
### Variables and Data Types
Variables in Python do not require a keyword for declaration (like var, let, or const in JavaScript). Plus, Python has dynamic typing, making it flexible and easy to use.
### JavaScript Example:

### Python Equivalent:

### Functions and Loops
Defining functions in Python is straightforward, using the def keyword. Loops in Python are also quite similar but use a different syntax.
### JavaScript Example:

### Python Equivalent:

### Lists and Dictionaries (Arrays and Objects)
Python lists and dictionaries are similar to JavaScript arrays and objects, making data manipulation easy and intuitive.
### JavaScript Example:

### Python Equivalent:

## Conclusion
Learning Python as a JavaScript developer is like adding a new superpower to your coding arsenal. It's simple, readable, and used by companies in every field out there. Python's flexibility, readability, and extensive library support make it an excellent choice for web development, data science, machine learning, automation, and more. Lastly, don't forget that Python is for the girls :)

| alexaleoto | |
1,910,626 | Implementing (Psuedo) Profiles in Git (Part 2!) | As noted in my first Implementing (Psuedo) Profiles in Git post: I'm an automation consultant for... | 0 | 2024-07-09T19:45:32 | https://thjones2.blogspot.com/2024/07/implementing-psuedo-profiles-in-git.html | authentication, git, ssh | ---
title: Implementing (Psuedo) Profiles in Git (Part 2!)
published: true
date: 2024-07-03 18:58:00 UTC
tags: authentication,git,ssh
canonical_url: https://thjones2.blogspot.com/2024/07/implementing-psuedo-profiles-in-git.html
---
As noted in my first [_Implementing (Psuedo) Profiles in Git_](https://www.blogger.com/blog/post/edit/3054063691986932274/8861509966346951193) post:
> I'm an automation consultant for an IT contracting company. Using git is a daily part of my work-life. … Then things started shifting, a bit. Some customers wanted me to use my corporate email address as my ID. Annoying, but not an especially big deal, by itself. Then, some wanted me to use their privately-hosted repositories and wanted me to use identities issued by them.
This led me down a path of setting up multiple git "profiles" that I captured into my first article on this topic. To better support such per-project identities, it's also a good habit to use per-project authentication methods. I generally prefer to do git-over-SSH – rather than git-over-http(s) – when interfacing with remote Git repositories. Because I don't like having to keep re-entering my password, I use an SSH-agent to manage my keys. When one only has one or two projects they regularly interface with, this means a key-agent that is only needing to store a couple of authentication-keys.
Unfortunately, if you have more than one key in your SSH agent, when you attempt to connect to a remote SSH service, the agent will iteratively present keys until the remote accepts one of them. If you've got three or more keys in your agent, the agent could present 3+ keys to the remote SSH server. By itself, this isn't a problem: the remote logs the earlier-presented keys as authentication failures, but otherwise lets you go about your business. However, if the remote SSH server is hardened, it very likely will be configured to lock your account after the third authentication failure. As such, if you've got four or more keys in your agent and the remote requires a key that your agent doesn't present in the first three authentication attempts, you'll find your account for that remote SSH service getting locked out.
What to do? Launch multiple ssh-agent instantiations.
Unfortunately, without modifying the default behavior, when you invoke the ssh-agent service, it will create a (semi) randomly-named UNIX domain-socket to listen for requests on. If you've only got a single ssh-agent instance running, this is a non-problem. If you've got multiple, particularly if you're using a tool like [direnv](https://github.com/direnv/direnv), setting up your `SSH_AUTH_SOCK` in your `.envrc` files is problematic if you don't have predictably-named socket-paths.
How to solve this conundrum? Well, I finally got tired of, every time I rebooted my dev-console, having to run `eval $( ssh-agent )` in per-project Xterms. So, I started googling and ultimately just dug through the man page for ssh-agent. In doing the latter, I found:
```
DESCRIPTION
ssh-agent is a program to hold private keys used for public key authentication. Through use of environment variables the
agent can be located and automatically used for authentication when logging in to other machines using ssh(1).
The options are as follows:
-a bind_address
Bind the agent to the UNIX-domain socket bind_address. The default is $TMPDIR/ssh-XXXXXXXXXX/agent.<ppid>.
```
So, now I can add appropriate command-aliases to my bash profile(s) (which I've already moved to `~/.bash_profile.d/<PROJECT>`) that can be referenced based on where in my dev-console's filesystem hierarchy I am and can set up my `.envrcs`, too. Result: if I'm in `<CUSTOMER_1>/<PROJECT>/<TREE>`, I get attached to an ssh-agent set up for _that_ customer's project(s); if I'm in `<CUSTOMER_2>/<PROJECT>/<TREE>`, I get attached to an ssh-agent set up for _that_ customer's project(s); etc..
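To make that concrete, here's a small illustrative helper (my own sketch, not part of ssh-agent or direnv) that derives a predictable per-project socket path and starts an agent bound to it with `-a` when one isn't already listening:

```python
import pathlib
import subprocess

AGENT_DIR = pathlib.Path.home() / ".ssh" / "agents"

def agent_socket(project: str) -> pathlib.Path:
    # Predictable path, e.g. ~/.ssh/agents/customer1.sock
    return AGENT_DIR / f"{project}.sock"

def ensure_agent(project: str) -> str:
    sock = agent_socket(project)
    AGENT_DIR.mkdir(parents=True, exist_ok=True)
    if not sock.exists():
        # -a binds the agent to our fixed path instead of $TMPDIR/ssh-XXXX/...
        subprocess.run(["ssh-agent", "-a", str(sock)],
                       check=True, stdout=subprocess.DEVNULL)
    return str(sock)

# A project's .envrc would then simply contain:
#   export SSH_AUTH_SOCK=$HOME/.ssh/agents/customer1.sock
```

With the socket path fixed per customer, each `.envrc` can point `SSH_AUTH_SOCK` at the right agent without caring when (or how) that agent was started.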
1,910,826 | Securing Your React Applications: Best Practices and Strategies | Introduction React, a popular JavaScript library for building user interfaces, provides developers... | 0 | 2024-07-08T15:45:58 | https://dev.to/dev_habib_nuhu/securing-your-react-applications-best-practices-and-strategies-1d0i | webdev, javascript, react, node | **Introduction**
React, a popular JavaScript library for building user interfaces, provides developers with powerful tools to create dynamic and interactive applications. However, with great power comes great responsibility, especially when it comes to securing your applications. In this article, we’ll explore essential security practices and strategies to protect your React applications from common vulnerabilities and attacks.
1. **Protecting Against XSS (Cross-Site Scripting) Attacks**
Cross-Site Scripting (XSS) is a common security vulnerability in web applications. It occurs when an attacker injects malicious scripts into a web page, which can then be executed in the context of the user's browser. To protect your React application from XSS attacks, consider the following practices:
- **Escape User Input**: Always escape user input before rendering it in the DOM. Use libraries like DOMPurify to sanitize HTML.
```javascript
import DOMPurify from 'dompurify';
const safeHTML = DOMPurify.sanitize(dangerousHTML);
```
- **Use dangerouslySetInnerHTML Sparingly**: Avoid using dangerouslySetInnerHTML unless absolutely necessary. If you must use it, ensure the content is sanitized.
- **Content Security Policy (CSP)**: Implement CSP headers to restrict the sources from which scripts, styles, and other resources can be loaded.
```html
<meta http-equiv="Content-Security-Policy" content="default-src 'self'; script-src 'self' 'sha256-...';">
```
2. **Preventing CSRF (Cross-Site Request Forgery) Attacks**
CSRF attacks trick users into performing actions on websites where they are authenticated. To mitigate CSRF attacks:
- **Use CSRF Tokens**: Include CSRF tokens in your forms and API requests to ensure that requests are coming from authenticated users.
```javascript
fetch('/api/data', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'CSRF-Token': csrfToken
},
body: JSON.stringify(data)
});
```
- **SameSite Cookies**: Set cookies with the `SameSite` attribute to prevent them from being sent with cross-site requests.
```javascript
document.cookie = "name=value; SameSite=Strict";
```
3. **Securing API Calls**
When your React application interacts with a backend API, secure the communication to protect sensitive data:
- **Use HTTPS**: Always use HTTPS to encrypt data transmitted between the client and server.
- **Authenticate Requests**: Use token-based authentication (e.g., JWT) to ensure that only authorized users can access your API.
```javascript
fetch('/api/protected', {
headers: {
'Authorization': `Bearer ${token}`
}
});
```
- **Rate Limiting**: Implement rate limiting on your API to prevent abuse and protect against DDoS attacks.
4. **Safeguarding Sensitive Data**
Ensure that sensitive data is handled securely in your React application:
- **Environment Variables**: Store sensitive data such as API keys and secrets in environment variables, not in your source code.
`const apiKey = process.env.REACT_APP_API_KEY;`
- **Secure Storage**: Use secure storage mechanisms for sensitive information. Avoid storing sensitive data in local storage or session storage.
5. **Keeping Dependencies Up-to-Date**
Outdated dependencies can introduce security vulnerabilities. Regularly update your dependencies to the latest versions:
- **Audit Dependencies**: Use tools like npm audit to identify and fix security vulnerabilities in your dependencies.
`npm audit`
6. **Implementing Access Control**
Ensure that users only have access to the resources and functionalities they are authorized to use:
- **Role-Based Access Control (RBAC)**: Implement RBAC to manage user permissions and restrict access to sensitive parts of your application.
```javascript
const userRoles = ['admin', 'editor', 'viewer'];
const checkAccess = (role) => {
return userRoles.includes(role);
};
```
- **Secure Routes**: Protect sensitive routes by implementing authentication and authorization checks.
```jsx
const ProtectedRoute = ({ component: Component, ...rest }) => (
<Route
{...rest}
render={props =>
isAuthenticated() ? (
<Component {...props} />
) : (
<Redirect to="/login" />
)
}
/>
);
```
**Conclusion**
Securing a React.js application involves a multi-faceted approach that includes protecting against common web vulnerabilities, safeguarding sensitive data, and implementing robust access controls. By following the best practices outlined in this guide, you can significantly enhance the security of your React applications and protect your users' data.
| dev_habib_nuhu |
1,910,871 | 🔀 Semantic Router w. ollama/gemma2 : real life 10ms hotline challenge 🤯 | ❔ About CCaaS (aka. Contact Center as a Service) Cf Gartner : "Gartner defines contact... | 0 | 2024-07-10T23:27:11 | https://dev.to/adriens/semantic-router-w-ollamagemma2-real-life-10ms-hotline-challenge-1i3f | opensource, ai, showdev, python | ## ❔ About `CCaaS` (aka. Contact Center as a Service)
Cf [Gartner](https://www.gartner.com/reviews/market/contact-center-as-a-service) :
> "Gartner defines contact center as a service (`CCaaS`) as solutions offering SaaS-based applications that enable customer service departments to manage multichannel customer interactions holistically from both a customer-experience and employee-experience perspective. CCaaS solutions offer an adaptive, flexible delivery model with native capabilities across the four functional components of the technology reference model for customer service and support. CCaaS providers also offer productized integrations with partner solutions through application marketplaces."
👉 Today, we'll cover some of these aspects by **focusing on how to efficiently route a huge number and variety of questions into a ridiculously small number of topics** so they can be efficiently managed by the appropriate services.
To achieve that, we'll use 100% Open Source software, on **locally running LLMs**, thanks to the following stack :
- **📘 A custom hand-made** dataset
- [`🦙 ollama`](https://ollama.com/)
- [`🤖 gemma2`](https://huggingface.co/docs/transformers/main/en/model_doc/gemma2)
- [🔀 Semantic Router | Aurelio AI](https://www.aurelio.ai/semantic-router)
## 💰 Expected benefits
- Save a lot of **human time**
- Save a lot of **money** (less LLM/GPU calls)
- **Dispatch tasks** according to their kind or complexity on different channels
- Keep (Kanban-like) swimlanes clean to **get an optimal throughput**
## 🍿 Demo for the impatient
{% youtube QHqti5NprX8 %}
## 📚 Data sources
- **Public Facebook** human moderations
- **Corporate** websites
Below some real life datasources I used:
- [Je déménage et je souhaite que mon fixe et mon Internet me suivent](https://www.opt.nc/particuliers/telephonie-fixe/je-demenage-et-je-souhaite-que-mon-fixe-et-mon-internet-me-suivent)
- [Je déménage et je suis client CCP](https://www.opt.nc/particuliers/services-financiers/je-demenage-et-je-suis-client-ccp)
- [Pour toutes assistance concernant une coupure ou une perturbation de ma ligne Fixe ou Internet](https://www.opt.nc/assistance/assistance-coupure-et-perturbation-internet-fixe)
- [Bien entretenir ma ligne de téléphonie fixe ](https://www.opt.nc/particuliers/telephonie-fixe/bien-entretenir-ma-ligne-de-telephonie-fixe)
## 🗣️ Real life Q&A interactions examples
Below some real life customer Q&As:
> "Je déménage et je souhaite que mon fixe et mon Internet me suivent"
**💡 Answer** : Go to https://www.opt.nc/particuliers/telephonie-fixe/je-demenage-et-je-souhaite-que-mon-fixe-et-mon-internet-me-suivent
> "Je déménage et je suis client CCP"
**💡 Answer** : Go to https://www.opt.nc/particuliers/services-financiers/je-demenage-et-je-suis-client-ccp
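To illustrate the routing idea in isolation, here is a deliberately stripped-down bag-of-words sketch of mine — not the semantic-router API, which scores with real embeddings from models such as gemma2 via ollama — using the utterances from the Q&As above:

```python
import math
from collections import Counter

# Topic routes with example utterances, taken from the Q&As above.
ROUTES = {
    "demenagement_fixe_internet": [
        "Je déménage et je souhaite que mon fixe et mon Internet me suivent",
    ],
    "demenagement_ccp": [
        "Je déménage et je suis client CCP",
    ],
}

def _vec(text: str) -> Counter:
    # Bag-of-words term counts stand in for a real embedding vector.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def route(question: str):
    # Pick the topic whose utterance is most similar to the question.
    best_topic, best_score = None, 0.0
    for topic, utterances in ROUTES.items():
        for u in utterances:
            score = _cosine(_vec(question), _vec(u))
            if score > best_score:
                best_topic, best_score = topic, score
    return best_topic

print(route("Je déménage et je suis client CCP"))  # -> demenagement_ccp
```

With real embeddings in place of word counts, the same scoring loop dispatches each incoming question to the right swimlane before any expensive LLM/GPU call is made.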
## 🔖 Resources
Source code is available below :
[](https://www.kaggle.com/code/adriensales/semantic-router-ollama-gemma2-hotline) | adriens |
1,910,940 | Podman + Windows: Fixing the error "No connection could be made because the target machine actively refused it" | While studying for the post WSL: Managing the distro's disk, I had to restart... | 0 | 2024-07-10T11:47:40 | https://dev.to/poveda/podman-windows-resolvendo-erro-no-connection-could-be-made-because-the-target-machine-actively-refused-it-142l | | While studying for the post [WSL: Managing the distro's disk](https://dev.to/poveda/wsl-gerenciando-o-disco-da-distro-ld1) I had to restart the WSL distros a few times. Among those distros was the VM used by [Podman Desktop](https://podman-desktop.io/).
Imagine my surprise when, the next day, I needed to use podman and got the following message:
> Cannot connect to Podman. Please verify your connection to the Linux system using `podman system connection list`, or try `podman machine init` and `podman machine start` to manage a new Linux VM
Error: unable to connect to Podman socket: failed to connect: dial tcp 127.0.0.1:63708: connectex: No connection could be made because the target machine actively refused it.
At first I tried stopping and starting the VM again with `podman machine stop` and `podman machine start`, but without success. After that I decided to read the error message looking for more clues. The message said to run `podman system connection list` and check whether podman was listening on the mentioned port.
Running the command, the output confirmed that port 63708 was being used by the VM:
```powershell
$ podman system connection list
Name URI Identity Default ReadWrite
podman-machine-default ssh://root@127.0.0.1:64624/run/podman/podman.sock ~\.local\share\containers\podman\machine\machine false true
podman-machine-default-root ssh://root@127.0.0.1:63708/run/podman/podman.sock ~\.local\share\containers\podman\machine\machine true true
```
> Note: the identity paths were shortened for readability
Still with no idea what to do, I decided to look at what else the message said. The other option it suggested was to spin up a new podman VM to solve the problem; however, that was something I didn't want to do right away, since I have some stopped pods and cached images that would take a long time to download again (e.g. SQL Server).
With the scenario laid out, I started researching the problem. I found the issue ["No connection could be made because the target machine actively refused it"](https://github.com/containers/podman/issues/19554) and tried most of the solutions proposed there, but without success. I even saw that [a fix was implemented](https://github.com/containers/podman/pull/19557) in podman, which also didn't solve my problem 😢.
I went back to the information podman had given me and had a flash of insight:
*"Podman Desktop talks to the VM over SSH. Is port 63708 actually enabled in the service?"*
With that suspicion, I logged into the podman distro and opened `/etc/ssh/sshd_config` in a text editor. Going to the end of the file, I confirmed that only port 64624 was enabled for SSH access. So I decided to add port 63708. After editing, the file looked like this:
```vi
...
# override default of no subsystems
Subsystem sftp /usr/libexec/openssh/sftp-server
# Example of overriding settings on a per-user basis
#Match User anoncvs
# X11Forwarding no
# AllowTcpForwarding no
# PermitTTY no
# ForceCommand cvs server
Port 64624
Port 63708
```
After saving the change, the next step was to restart the SSH service with:
```shell
sudo systemctl restart sshd
```
I waited a few seconds and ran `systemctl status sshd` to check that the service had started. The output confirms the service was running without problems:
```sh
● sshd.service - OpenSSH server daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; preset: enabled)
Drop-In: /usr/lib/systemd/system/service.d
└─10-timeout-abort.conf
Active: active (running) since Wed 2024-07-03 21:02:01 -03; 3s ago
Docs: man:sshd(8)
man:sshd_config(5)
Main PID: 7207 (sshd)
Tasks: 1 (limit: 4697)
Memory: 1.3M
CPU: 12ms
CGroup: /system.slice/sshd.service
└─7207 "sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups"
Jul 03 21:02:01 PovedaRyzen systemd[1]: Starting sshd.service - OpenSSH server daemon...
Jul 03 21:02:01 PovedaRyzen (sshd)[7207]: sshd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 03 21:02:01 PovedaRyzen sshd[7207]: Server listening on 0.0.0.0 port 63708.
Jul 03 21:02:01 PovedaRyzen sshd[7207]: Server listening on :: port 63708.
Jul 03 21:02:01 PovedaRyzen sshd[7207]: Server listening on 0.0.0.0 port 64624.
Jul 03 21:02:01 PovedaRyzen sshd[7207]: Server listening on :: port 64624.
Jul 03 21:02:01 PovedaRyzen systemd[1]: Started sshd.service - OpenSSH server daemon.
```
With SSH open on port 63708, I ran a few commands to validate the connection. To my delight, these were the results:
```sh
$ podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5a6ba726847e localhost/podman-pause:5.1.0-dev-4817811cb-1713312000 3 weeks ago Exited (0) 292 years ago 234f8433dbf7-infra
9fecc6f0544a docker.io/library/nginx:latest nginx -g daemon o... 3 weeks ago Exited (0) 292 years ago nginx-nginx
```
```sh
$ podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/podman-pause 5.1.0-dev-4817811cb-1713312000 d3af6c318a9d 3 weeks ago 1.14 MB
docker.io/library/nginx latest 4f67c83422ec 5 weeks ago 192 MB
docker.io/julianopoveda/readonlyapi v1 ac7c4054326e 2 months ago 120 MB
docker.io/kindest/node <none> 09c50567d34e 4 months ago 962 MB
```
### Conclusion
I never managed to discover the real reason the port became unreachable. That made searching for a solution much harder, since it's quite likely someone has already hit this problem and reported how to solve it somewhere.
In the end I had to rely on my own experience and memory to solve the problem, without help from an external source. And that's one of the reasons I've been writing these posts: to have a documented troubleshooting base, both online and offline.
One last curiosity: while looking for information to enrich this post, I discovered I could have switched the default connection to the rootless one with `podman system connection default podman-machine-default`. That command would have solved the problem much more simply, since port 64624 was already open in sshd.
### References
- [Accessing the podman instance from another WSL distro](https://podman-desktop.io/docs/podman/accessing-podman-from-another-wsl-instance)
- [Issue #19554: No connection could be made because the target machine actively refused it](https://github.com/containers/podman/issues/19554)
- [Fix: Implement automatic port reassignment on Windows](https://github.com/containers/podman/pull/19557) | poveda | |
1,911,102 | Introducing DOCSCAN: The Ultimate Global ID Document Scanning API | Introducing the DOCSCAN API: Revolutionizing eKYC with AI-Powered Document Scanning In... | 0 | 2024-07-10T03:04:36 | https://dev.to/vyan/introducing-docscan-the-ultimate-global-id-document-scanning-api-2lo4 | webdev, javascript, beginners, react | ### Introducing the DOCSCAN API: Revolutionizing eKYC with AI-Powered Document Scanning
In today's fast-paced digital landscape, ensuring the authenticity of user identities is crucial for businesses. Enter PixLab's cutting-edge DOCSCAN API, a powerful tool designed to streamline the eKYC (electronic Know Your Customer) process. This AI-powered platform offers robust ID document scanning and data extraction capabilities, making it a game-changer for developers and businesses alike.
#### Key Features of DOCSCAN API
1. **Comprehensive Document Support**
- The DOCSCAN API supports over 11,000 types of ID documents from 197+ countries, including passports, ID cards, driving licenses, visas, birth certificates, and death certificates. No other KYC platform offers such extensive coverage, making DOCSCAN an industry leader.
2. **Advanced Features**
- The API includes highly accurate text scanning and automatic face detection and cropping. This ensures precise extraction of essential details from documents, such as full name, issuing country, document number, address, and expiry date.
3. **Developer-Friendly Integration**
- DOCSCAN is designed with developers in mind. The single REST API endpoint simplifies the integration process, allowing for quick and easy implementation into any application.
Let's dive into how you can integrate this powerful tool into your application.
**Versatile Use Cases**
DOCSCAN is ideal for various industries and applications, including:
**KYC (Know Your Customer):** Enhance security across digital platforms.
**User Verification:** Ensure authenticity in user profiles.
**Financial Services:** Facilitate international market expansion.
**Fraud Detection:** Combat identity theft and fraudulent activities.
**E-commerce:** Prevent chargebacks and combat credit card fraud.
**Healthcare:** Enhance patient care with secure identity verification.
**Travel & Hospitality:** Ensure secure, seamless check-in processes for travelers.
#### Easy Integration with DOCSCAN API
Integrating the DOCSCAN API into your application is straightforward. Here’s a step-by-step guide to get you started:
1. **Get Your API Key**
- First, you need to sign up at [PixLab](https://ekyc.pixlab.io) and generate your API key. This key is essential for authenticating your requests to the DOCSCAN API.
2. **Endpoint and Parameters**
- The primary endpoint for DOCSCAN is `https://api.pixlab.io/docscan`. You can make GET or POST requests to this endpoint, depending on your preference for uploading the document image.
3. **Making a Request**
- Here’s a simple example using JavaScript to scan a passport image:
```javascript
const apiKey = 'YOUR_PIXLAB_API_KEY'; // Replace with your PixLab API Key
const imageUrl = 'http://example.com/passport.png'; // URL of the passport image
const url = `https://api.pixlab.io/docscan?img=${encodeURIComponent(imageUrl)}&type=passport&key=${apiKey}`;
fetch(url)
.then(response => response.json())
.then(data => {
if (data.status !== 200) {
console.error(data.error);
} else {
console.log("Cropped Face URL: " + data.face_url);
console.log("Extracted Fields: ", data.fields);
}
})
.catch(error => console.error('Error:', error));
```
4. **Handling the Response**
- The API responds with a JSON object containing the scanned information. This includes URLs to the cropped face image and detailed extracted fields like full name, issuing country, document number, and more.
#### Additional Code Samples
**Python**

**PHP**

**Ruby**

**Java**
```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import org.json.JSONObject;
public class DocScanExample {
public static void main(String[] args) {
try {
String apiKey = "YOUR_PIXLAB_API_KEY"; // Replace with your PixLab API Key
String imageUrl = "http://example.com/passport.png"; // URL of the passport image
String urlStr = "https://api.pixlab.io/docscan?img=" + java.net.URLEncoder.encode(imageUrl, "UTF-8") + "&type=passport&key=" + apiKey;
URL url = new URL(urlStr);
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("GET");
BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
String inputLine;
StringBuilder content = new StringBuilder();
while ((inputLine = in.readLine()) != null) {
content.append(inputLine);
}
in.close();
conn.disconnect();
JSONObject data = new JSONObject(content.toString());
if (data.getInt("status") != 200) {
System.out.println("Error: " + data.getString("error"));
} else {
System.out.println("Cropped Face URL: " + data.getString("face_url"));
System.out.println("Extracted Fields: " + data.getJSONObject("fields").toString());
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
```
**Comprehensive HTTP Response**
The DOCSCAN API endpoint always returns a JSON object. Below are the fields typically included in the response:
- **status:** HTTP status code (200 indicates success).
- **type:** Type of the scanned document.
- **face_url:** URL to the cropped image of the face from the document.
- **mrz_raw_text:** Extracted raw MRZ text (for Passports and Visas only).
- **fields:** A JSON object containing extracted data such as `fullName`, `issuingCountry`, `documentNumber`, `address`, `dateOfBirth`, `dateOfExpiry`, `sex`, `nationality`, `issuingDate`, `checkDigit`, `personalNumber`, `finalCheckDigit`, `issuingState`, `issuingStateCode`, `religion`.
**How Does the PixLab DocScan Work?**
Do you want to know what happens when you scan a driving license using the DocScan API? Let's understand the process step by step. I'm summarizing the points here, but you can read more about it in their official documentation.
1. First, the user's face is detected using the face detect API.
2. After getting the face coordinates, you can crop and extract the image using PixLab's image processing API.
3. Then, using the DocScan API, PixLab extracts the information about the user.
4. After processing is done, the image is deleted from the server. PixLab doesn't store any of the images for future reference. This is a very good move for privacy.
5. Under the hood, PixLab uses PP-OCR, a practical ultra-lightweight OCR system mainly composed of three parts: text detection, bounding box isolation, and text recognition. That's how PixLab can generate accurate results when scanning a driver's license.
#### Real-World Example
Let's consider a practical example. Suppose you want to verify a user's passport. By using the DOCSCAN API, you can extract all relevant details and store them in your database for future reference. The API also crops the user's face from the passport image, which can be used for profile verification.

```jsx
// components/DocScanComponent.jsx
import { useState } from 'react';
import axios from 'axios';
const DocScanComponent = () => {
const [imageUrl, setImageUrl] = useState('');
const [scanResult, setScanResult] = useState(null);
const [loading, setLoading] = useState(false);
const [error, setError] = useState(null);
const apiKey = 'YOUR_PIXLAB_API_KEY';
const handleScan = async () => {
setLoading(true);
setError(null);
try {
const response = await axios.get('https://api.pixlab.io/docscan', {
params: {
img: imageUrl,
type: 'passport',
key: apiKey,
},
});
if (response.data.status !== 200) {
setError(response.data.error);
} else {
setScanResult(response.data);
}
} catch (err) {
setError('Error scanning document');
} finally {
setLoading(false);
}
};
return (
<div>
<h1>DocScan</h1>
<input
type="text"
placeholder="Enter Image URL"
value={imageUrl}
onChange={(e) => setImageUrl(e.target.value)}
/>
<button onClick={handleScan} disabled={loading}>
{loading ? 'Scanning...' : 'Scan Document'}
</button>
{error && <p style={{ color: 'red' }}>{error}</p>}
{scanResult && (
<div>
<h2>Scan Result:</h2>
<img src={scanResult.face_url} alt="Cropped Face" />
<pre>{JSON.stringify(scanResult, null, 2)}</pre>
</div>
)}
</div>
);
};
export default DocScanComponent;
```
This is a demo passport image. The extracted data using Pixlab Docscan API is listed below.

**Example Output for a Passport Scan**
```json
{
"type": "PASSPORT",
"face_url": "https://s3.amazonaws.com/media.pixlab.xyz/24p5ba822a00df7F.png",
"mrz_img_url": "https://s3.amazonaws.com/media.pixlab.xyz/24p5ba822a1e426d.png",
  "mrz_raw_text": "P<UTOERIKSSON<<ANNA<MARIA<<<<<<<<<<<<<<<<<<<\nL898902C36UTO7408122F1204159ZE184226B<<<<<10",
"fields": {
"fullName": "ERIKSSON ANNA MARIA",
"issuingCountry": "UTO",
"documentNumber": "L898902C36",
"dateOfBirth": "19740812",
"dateOfExpiry": "20120415",
"sex": "F",
"nationality": "UTO"
}
}
```
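The MRZ fields in a response like this can be sanity-checked client-side: ICAO 9303 computes each check digit with repeating weights 7, 3, 1 over the character values (digits keep their value, `A`–`Z` map to 10–35, `<` counts as 0). A quick illustrative verifier — standard MRZ math, not part of the PixLab API:

```python
def mrz_value(ch: str) -> int:
    # Character values per ICAO 9303: 0-9 as-is, A..Z -> 10..35, '<' -> 0.
    if ch.isdigit():
        return int(ch)
    if ch == "<":
        return 0
    return ord(ch) - ord("A") + 10

def check_digit(field: str) -> int:
    # Weighted sum with repeating weights 7, 3, 1, modulo 10.
    weights = (7, 3, 1)
    return sum(mrz_value(c) * weights[i % 3] for i, c in enumerate(field)) % 10

# Values from the sample fields above:
print(check_digit("L898902C3"))  # document number -> 6, i.e. "L898902C36"
print(check_digit("740812"))     # date of birth   -> 2
print(check_digit("120415"))     # date of expiry  -> 9
```

Running the extracted `documentNumber`, `dateOfBirth`, and `dateOfExpiry` through this check is a cheap way to catch OCR misreads before trusting a scan.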
**Key Takeaways:**
- DOCSCAN API supports 11,000+ document types from 197+ countries
- Easy integration with a single REST API endpoint
- Advanced AI for accurate text scanning and face detection
- Streamlines KYC, user onboarding, and fraud prevention processes
- Compatible with multiple programming languages
#### Conclusion
PixLab's DOCSCAN API offers a comprehensive and efficient solution for eKYC processes. With support for a vast array of documents, advanced scanning features, and a developer-friendly interface, integrating this API into your application can significantly enhance your identity verification processes.
To learn more, explore the [DOCSCAN API documentation](https://ekyc.pixlab.io/docsan) and get started today. Revolutionize your eKYC process with PixLab!
For additional resources, code samples, and community-contributed articles, visit the [PixLab GitHub Repository](https://github.com/pixlab-io) and join the conversation with fellow developers.
| vyan |
1,911,980 | My work setup for PHP development | These days the majority of my (programming) work is (in order of SLOC): PHP, Javascript (including... | 0 | 2024-07-10T11:04:37 | https://dev.to/moopet/my-work-setup-for-php-development-4dj8 | php, productivity | ---
title: My work setup for PHP development
published: true
description:
tags: php, productivity
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/if0r9v2pye9o8bhihjai.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-07-04 19:43 +0000
---
These days the majority of my (programming) work is (in order of SLOC): PHP, Javascript (including Node), CSS, HTML, and shell scripts. I do sometimes dip into other languages, but the instances are tiny compared to these main ones.
And this is my story.
## Hardware
I'm writing this on my main me-facing machine. It's running Windows 11, but that's almost entirely for Important Gaming Reasons.
Work provides me with a MacBook Pro, which I need to use for Important Corporate Reasons such as VPN access, and because it has a nicer video meeting experience. But really, I just shell into it and use it as a docker host and network proxy. I don't get along with the Apple keyboard, which is a broken mix of UK ISO and US ANSI and feels like typing on a child's toy. I appreciate the touchpad and screen, which are both impressive - but I much prefer using a mouse. So the Mac is relegated to being a third screen on the corner of my desk.

It's a mixed bag.
I have a laptop I use if I'm working at the kitchen table, which I occasionally do. It's running ChromeOS though I'll probably swap that out for Linux at some point. "Developer" mode in ChromeOS gives me a command line and shell access, so my workflow with browser/cli is pretty much the same.
## Operating system

Everything I do for work is through Ubuntu under WSL on the Windows 11 desktop, SSH on the ChromeOS laptop or Barrier'd into the MacBook Pro.
I know this sounds impractical. There are weird issues, such as Apple's bug where you can't use an IPv4 address in `/etc/hosts` for external connections because the firewall blocks it, but you can if you use IPv6, _even if you're not connecting via IPv6_. Oh, there were fun times figuring that hack out.
But though decade-ago me would have never thought it, nowadays Windows Subsystem for Linux - specifically WSL2 - is actually pretty decent. You can even run X applications if you install an X server, and there are a bunch of those available.
I don't use X for anything though. If it's not in a browser, I don't tend to care about it, and there's no good reason to use a Linux browser via X compared to a native one.
I can't help feeling a little bit _dirty_ for using Windows, but I cram that feeling way down inside and try to ignore it.
## Desktop - [Barrier](https://github.com/debauchee/barrier)
I don't use this too much any more, but Barrier is a fork of Synergy from before Synergy went closed-source. It bills itself as a kind-of-software-KVM, but what it boils down to is that I can move my mouse between different computers exactly the same as I can between the pair of monitors on my PC. If I move the mouse off the right side of my desktop monitor, it appears on my Macbook, and all mouse and keyboard input are seamlessly transferred there, just as if it was all one glorious whole.
Hashtag #gamechanger.
## Terminal - [WezTerm](https://wezfurlong.org/wezterm/index.html)
I have only recently started using WezTerm. "Recently" in terminal-speak means in the last year or two, because things move slowly in this world. I don't tend to change unless there's a good reason.
So what was my good reason? What was wrong with Windows Terminal/iTerm 2? Well, WezTerm is cross-platform, and I can share the same configuration - in Lua! - between hosts. It doesn't make weird restrictions (like the way Suckless will never support tabs), and it's very customisable.
As far as customisation goes, it's mostly font, background image and colourscheme. I usually use [Gruvbox](https://github.com/morhetz/gruvbox/wiki/Gallery)[^1] Dark for most things, though I override a couple of bits depending on the context.
But wait. Background image, you ask? On a terminal? Why? Well, it so happens I have a post about that:
{% embed https://dev.to/moopet/my-one-and-only-terminal-tip-ooa %}
## Shell - zsh
I settled on zsh because it's the default on Macs. I know, I know: I don't really use the Mac much. But it's almost entirely compatible with bash for all the things I care about, and it's available on all the systems I use.
I try to make every script I write POSIX-compliant, and if I use something that's different between BSD and GNU (like `sed -i`) I'll wrap it in a condition.
It's fair to say that the choice of shell doesn't really matter much.
So before you say anything, no, I don't use oh-my-zsh. It doesn't offer me anything I care about that I can't do out-the-box with zsh (or bash for that matter).
Oh, and [I almost never use aliases](https://dev.to/moopet/the-case-against-aliases-2mb1), either.
## Editor - [Neovim](https://neovim.io/)
I've been a Vi/Vim user for a long time, but I switched to neovim a year or two ago. I didn't originally see the point in switching, because all the features people went on about in neovim (and they did oh so go on about them! - async, LSPs, embedded terminals, etc.), well, they were either also available in recent versions of Vim or were stuff I really wasn't interested in. Stuff I didn't think belonged in an editor. Stuff that was trying to make Vim into Emacs.
Well, I still think those same grumpy thoughts, but I decided to try out a few of the neovim "distros" just to see what was out there and have ended up sticking with it. When loaded with plugins, it's certainly buggier than Vim, but it's still also a nice and comfortable editor for me - and it has more of a future, especially now Bram is gone (RIP).
## Productivity
### [Tmux](https://github.com/tmux/tmux/wiki)
Tmux is, as it's always introduced, a "terminal multiplexer", like GNU screen. It lets you close a terminal session and reopen it without losing your work, and it lets you run multiple terminal sessions at the same time.
The upshot of this is that I can leave my work running on the MacBook and connect from other machines via ssh and, by running `tmux attach` I am right back in the thick of things. I don't need to open a bunch of sessions each time, or remember what directory I was in or what services were running, it's all just there.
### [Tmuxinator](https://github.com/tmuxinator/tmuxinator)
The tmuxinator project is a wrapper for tmux, which lets you manage multiple separate tmux sessions.
As an example, here's the list of projects I currently have running on my macbook:
```
❯ tmuxinator ls
tmuxinator projects:
bc biascan ec fabric gce ifpma
leith leith-2023 msgan chickenland ngs ngs-new
ods ren renaissance scramble sf sf-cms
sf-forms sf-myplans sgh srn
```
What's in these projects? Well... they have a lot of specifics in them, but what they mostly boil down to is similar to this random project I picked as an example:
```yaml
windows:
- cms:
- ddev start
- storybook:
panes:
- watcher:
- cd web/storybook
- yarn watch
- middleware:
- workon mw8
- logs:
panes:
- cms:
- workon ngs
- docker exec -it ngs-cms-php drush -y --uri=ngs-cms.shore.signal.sh -r /shore_site/web ws --tail --full --extended --count=1
- new-cms:
- workon ngs-cms-2022
- docker exec -it ngs-cms-2022-app vendor/drush/drush/drush ws --extended
- middleware:
- workon mw8
- docker exec -it ngs-middleware-app vendor/drush/drush/drush ws --extended
- build-services:
panes:
- new-cms:
- workon ngs-cms-2022
- cd web/storybook
- yarn storybook
```
In this example, I have tmux windows for a Drupal 7 project, its Drupal 10 rebuild, some shared middleware, the front-end build services, a storybook server and a metric ton of logging.

I can spin this all up with `tmuxinator <project name>`, and switch between its collection of windows with `Ctrl-A <number>` and to any number of completely different projects with a pop-up menu I get by hitting `Ctrl-A S`.
### [Ddev](https://ddev.com)
Ddev is something that lets you create containerised (docker) environments for projects. It works with PHP and Node and (experimentally) Python.

The thing that manages the stack, basically.
My company used to use a home-grown docker-compose wrapper for this, back when there weren't many options. Ddev is really solid, though, and every time I have to work on a legacy or inherited project I'll convert it to Ddev as soon as I check it out. It doesn't usually take very long, and it means we have a consistent way of working. It makes it incredibly quick and easy to switch between PHP or Node versions, for instance.
Some of its main selling points for me are:
* Uses [Mutagen](https://mutagen.io/) for the file system, so it's fast even on MacOS (where Docker mounts are notoriously slo-o-ow)
* Easily lets you reconfigure PHP and Node versions
* Knows about a bunch of third-party apps - for instance, if you have a database GUI app like TablePlus or DBeaver you can launch it from the command line
* Supports community-maintained add-ons
and so on. It's good.
---
Image credits:
"Man Carrrying Groceries" - [DodgertonSkillhause](https://morguefile.com/p/1001058)
"The Ridiculous Switcherama of the Concorde Cockpit" - Me.
"Statue" - [lauramusikanski](https://morguefile.com/p/1070446)
"Scene from Terminator 2" - Sorry, I can't hear you I'm going through a tunnel
[^1]: I pronounce it the way it's spelt.
| moopet |
1,911,208 | Dynamic watermarking with imgproxy and Apache APISIX | Last week, I described how to add a dynamic watermark to your images on the JVM. I didn't find any... | 27,903 | 2024-07-11T09:02:00 | https://blog.frankel.ch/dynamic-watermarking/2/ | webdev, watermarking, imgproxy, apacheapisix | Last week, I described [how to add a dynamic watermark to your images on the JVM](https://blog.frankel.ch/dynamic-watermarking/1/). I didn't find any library, so I had to develop the feature, or, more precisely, an embryo of a feature, by myself. Depending on your tech stack, you must search for an existing library or roll up your sleeves. For example, Rust offers such an out-of-the-box library. Worse, this approach might be impossible to implement if you don't have access to the source image.
Another alternative is to use ready-made components, namely [imgproxy](https://imgproxy.net/) and [Apache APISIX](https://apisix.apache.org/). I already combined them to [resize images on-the-fly](https://blog.frankel.ch/resize-images-on-the-fly/).
Here's the general sequence flow of the process:

* When APISIX receives a specific pattern, it calls `imgproxy` with the relevant parameters
* `imgproxy` fetches the original image and the watermark to apply
* It watermarks the original image and returns the result to APISIX
Let's say the pattern is `/watermark/*`.
We can define two routes:
```yaml
routes:
- uri: "*" #1
upstream:
nodes:
"server:3000": 1
- uri: /watermark/* #2
plugins:
proxy-rewrite: #3
regex_uri:
- /watermark/(.*)
- /dummy_sig/watermark:0.8:nowe:20:20:0.2/plain/http://server:3000/$1 #4
upstream:
nodes:
"imgproxy:8080": 1 #5
```
1. Catch-all route that forwards to the web server
2. Watermark images route
3. Rewrite the URL...
4. ...with an `imgproxy`-configured route and...
5. ...forward to `imageproxy`
You can find the exact rewritten URL syntax in [imgproxy](https://docs.imgproxy.net/features/watermark) documentation. The watermark itself is configured via a single environment variable. You should buy `imgproxy`'s Pro version if you need different watermarks. As a poor man's alternative, you could also set up different instances, each with its watermark, and configure APISIX to route the request to the desired instance.
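To make the rewrite concrete, here is a small Python mirror of the `proxy-rewrite` rule above (APISIX does this internally; the values are copied straight from the route config):

```python
import re

# Mirror of the APISIX proxy-rewrite rule (values taken from the route config above)
PATTERN = re.compile(r"/watermark/(.*)")
TEMPLATE = "/dummy_sig/watermark:0.8:nowe:20:20:0.2/plain/http://server:3000/{}"

def rewrite(uri: str) -> str:
    """Return the imgproxy URL for a /watermark/* request, or the URI unchanged."""
    match = PATTERN.fullmatch(uri)
    if match is None:
        return uri  # falls through to the catch-all route
    return TEMPLATE.format(match.group(1))

print(rewrite("/watermark/cat.jpg"))
# /dummy_sig/watermark:0.8:nowe:20:20:0.2/plain/http://server:3000/cat.jpg
```

Any URI that doesn't match the `/watermark/` prefix is left alone, which is exactly what the catch-all route handles.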
In this post, we implemented a watermarking feature with the help of `imgproxy`. The more I think about it, the more I think `imgproxy` and APISIX make a match made in Heaven.
The complete source code for this post can be found on GitHub:
{% embed https://github.com/ajavageek/watermark-on-the-fly %}
**To go further:**
* [Digital watermarking](https://en.wikipedia.org/wiki/Digital_watermarking)
* [imgproxy documentation](https://docs.imgproxy.net/)
* [imgproxy interactive demo](https://imgproxy.net/)
<hr>
_Originally published at [A Java Geek](https://blog.frankel.ch/dynamic-watermarking/2/) on July 7<sup>th</sup>, 2024_ | nfrankel |
1,911,283 | AWS Cloud Resume Challenge | Table of Contents 1. Intro 2. Project Initialization 3. S3 4. CloudFront 5. Route 53 6.... | 0 | 2024-07-08T11:44:07 | https://dev.to/aktran321/aws-cloud-resume-challenge-37md | awschallenge | # Table of Contents
- [1. Intro](#1-intro)
- [2. Project Initialization](#2-project-initialization)
- [3. S3](#3-s3)
- [4. CloudFront](#4-cloudfront)
- [5. Route 53](#5-route-53)
- [6. View Counter](#6-view-counter)
- [7. DynamoDB](#7-dynamodb)
- [8. Lambda](#8-lambda)
- [9. Javascript](#9-javascript)
- [10. CI/CD with Github Actions](#10-cicd-with-github-actions)
- [11. Infrastructure as Code with Terraform](#11-infrastructure-as-code-with-terraform)
- [12. Conclusion](#12-conclusion)
- [13. Edits](#13-edits)
## 1. Intro
A few days ago, I decided to take on the Cloud Resume Challenge. This is a great way to expose yourself to multiple AWS Services within a fun project. I'll be documenting how I deployed the project and what I learned along the way. If you're deciding to take on the resume challenge, then hopefully you can use this as resource to get started. Now, lets begin.
## 2. Project Initialization
Setup a project environment and configure a git repo along with it. This will first include a frontend directory with your index.html, script.js, and styles.css.
If you want this done quickly, you could copy and paste your resume into ChatGPT and have it provide you with the 3 files to create a simple static website.
## 3. S3
Create an AWS account. Navigate to the S3 service and create a bucket. The name you choose for your bucket should be unique to your region. Once created, upload your files to the S3 bucket.
## 4. CloudFront
S3 will only host your static website over the HTTP protocol. To use HTTPS, you will have to serve your content over **CloudFront**, a **CDN (Content Delivery Network)**. Not only will this provide secure access to your website, but it will deliver your content with low latency. CloudFront edge locations are global, and will cache your website to serve it fast and reliably from a client's nearest edge location.
Navigate to CloudFront from the AWS console and click "**Create Distribution**". Pick the origin domain (your S3 bucket). If you enabled **Static Website Hosting** on the S3 bucket, a button will appear recommending you use the website endpoint, but skip it for our purposes, since we want CloudFront to access the S3 bucket directly.
Under "**Origin Access Control**", check the "**Origin Access Control Setting (recommended)**". We do this because we only want the bucket accessed by CloudFront and not the public.
Create a new OAC and select it.
Click the button that appears and says "**Copy Policy**".
In another window, navigate back to your S3 bucket and under the "**Permissions**" tab paste the policy under the "**Bucket Policy**" section.
It should look something like this:
```
{
"Version": "2008-10-17",
"Id": "PolicyForCloudFrontPrivateContent",
"Statement": [
{
"Sid": "AllowCloudFrontServicePrincipal",
"Effect": "Allow",
"Principal": {
"Service": "cloudfront.amazonaws.com"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::<Your Bucket Name>/*",
"Condition": {
"StringEquals": {
"AWS:SourceArn": "arn:aws:cloudfront::<Some Numbers>:distribution/<Your CloudFront Distribution>"
}
}
}
]
}
```
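If you later script your setup, the same policy can be generated instead of hand-edited. A sketch with hypothetical placeholder values (substitute your own bucket name, account ID, and distribution ID):

```python
import json

def cloudfront_bucket_policy(bucket: str, account_id: str, distribution_id: str) -> str:
    """Build the same policy the console's "Copy Policy" button produces."""
    policy = {
        "Version": "2008-10-17",
        "Id": "PolicyForCloudFrontPrivateContent",
        "Statement": [{
            "Sid": "AllowCloudFrontServicePrincipal",
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {"StringEquals": {
                "AWS:SourceArn": f"arn:aws:cloudfront::{account_id}:distribution/{distribution_id}"
            }},
        }],
    }
    return json.dumps(policy, indent=4)

# Hypothetical values for illustration only
print(cloudfront_bucket_policy("my-resume-bucket", "123456789012", "E2EXAMPLE"))
```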
In the CloudFront window, finish the configuration by enabling HTTPS under "**Viewer Protocol Policy**" and finally leave the rest of the options default and create the distribution.
When the distribution is created, make sure the default root object is the index.html file. At this point, you should be able to open the **Distribution domain name** with your resume website up and running.
**IMPORTANT** You will now have a CloudFront distribution URL. Since your bucket is not public in our current configuration, you can only access the HTML, CSS, and JS files through that CloudFront distribution. Your HTML link and script tags will need to be updated.
For example, my script tag was
```
<link rel="stylesheet" href="styles.css">
```
and changed to
```
<link rel="stylesheet" href="https://d2qo85k2yv6bow.cloudfront.net/styles.css">
```
Once you update your link and script tags, re-upload your HTML file. You will also have to create an **Invalidation Request** in your CloudFront distribution so that it updates its own cache. When you create the request, simply input "**/***". This makes sure that CloudFront serves the latest version of your files (if you are constantly making changes and want to see them immediately on the website, then you will have to repeatedly make invalidation requests).
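If you find yourself doing this repeatedly, the invalidation can also be issued from code. Here is a sketch that builds the parameters for `boto3`'s `create_invalidation` call; the call itself is left as a comment because it needs boto3 installed and AWS credentials configured:

```python
import time

def invalidation_request(distribution_id: str, paths=("/*",)) -> dict:
    """Build the parameters for a CloudFront invalidation request."""
    return {
        "DistributionId": distribution_id,
        "InvalidationBatch": {
            "Paths": {"Quantity": len(paths), "Items": list(paths)},
            # CallerReference must be unique per request; a timestamp works
            "CallerReference": str(time.time()),
        },
    }

params = invalidation_request("E2EXAMPLE")  # hypothetical distribution ID
# boto3.client("cloudfront").create_invalidation(**params)
print(params["InvalidationBatch"]["Paths"])
# {'Quantity': 1, 'Items': ['/*']}
```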
## 5. Route 53
Your next step will be to route your own DNS domain name to the CloudFront distribution. Since I already had a domain name, I only needed to navigate to my hosted zone in Route53 and create an A record, switch on "**alias**", select the dropdown option "**Alias to CloudFront distribution**", select my distribution, keep it as a simple routing policy, and save.
Also, within the CloudFront distribution's settings, you have to request and configure an SSL certificate associated with your domain and attach it.
And with that, your website should be up and running!
## 6. View Counter
To set up a view counter, we will now have to incorporate DynamoDB and Lambda, as well as write some JavaScript for our HTML. The idea is that when someone views our resume, the JavaScript sends a request to the Lambda function URL. The Lambda function is some Python code that retrieves and updates data in the DynamoDB table and returns it to your JavaScript.
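Before touching any AWS services, the whole round trip can be sketched locally with an in-memory stand-in for the table; the real DynamoDB and Lambda versions follow in the next sections:

```python
import json

# In-memory stand-in for the DynamoDB table, just to trace the flow locally
fake_table = {}

def handle_view(table: dict) -> dict:
    """Mimic the Lambda handler: read the count, increment it, write it back."""
    views = int(table.get("1", {}).get("views", 0)) + 1
    table["1"] = {"id": "1", "views": views}
    return {"statusCode": 200, "body": json.dumps({"views": views})}

for _ in range(3):
    response = handle_view(fake_table)
print(response["body"])  # {"views": 3}
```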
## 7. DynamoDB
Navigate to the DynamoDB service and create a table.
Go to "**Actions**" --> "**Explore Items**" and create an item.
Set the id (partition key) value to 1.
Create a number attribute, label it "views", and give it a value of 0.
## 8. Lambda
Next, we will create the Lambda function that can retrieve the data from DynamoDB and update it.
When creating the Lambda function in the AWS console, I chose Python3.12.
Enable **function URL** and set the AUTH type to None. Doing so allows your Lambda function to be invoked by anyone that obtains the function URL. I chose to set the Lambda function up this way so I can test the functionality of the Lambda function with my project without setting up API Gateway at the moment.
Here is my Lambda function code:
```
import json
import boto3
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("cloud-resume-challenge")
def lambda_handler(event, context):
try:
# Get the current view count from DynamoDB
response = table.get_item(Key={
"id": "1"
})
if 'Item' in response:
views = int(response["Item"]["views"])
else:
views = 0 # Default to 0 if the item doesn't exist
# Increment the view count
views += 1
# Update the view count in DynamoDB
table.put_item(Item={
"id": "1",
"views": views
})
# Return the updated view count
return {
"statusCode": 200,
"body": json.dumps({"views": views})
}
except Exception as e:
print(f"Error: {e}")
return {
"statusCode": 500,
"body": json.dumps({"error": str(e)})
}
```
Finally, in the "**Configuration**" tab, we need an execution role that has permission to invoke the DynamoDB table. To do this, you would navigate to IAM and create a new role. This role will need the "**AmazonDynamoDBFullAccess**" permission. Once created, attach the role to your Lambda function.
## 9. Javascript
Then, write some code into your script.js file. Something like this:
```
async function updateCounter() {
try {
let response = await fetch("Lambda Function URL");
let data = await response.json();
const counter = document.getElementById("view-count");
counter.innerText = data.views;
} catch (error) {
console.error('Error updating counter:', error);
}
}
updateCounter();
```
The code sends a request to the Lambda function URL, parses the JSON response into the `data` variable, then finds the `<span>` with `id="view-count"` and sets its text to `data.views`, the view count returned by the Lambda function.
## 10. CI/CD with Github Actions
We can create a CI/CD pipeline with Github Actions. Doing so will automatically update our S3 bucket files whenever code changes are pushed to Github.
To summarize, you have to create a directory "**.github**" and within it will be another directory "**workflows**". Create a YAML file inside.
This is my "**frontend-cicd.yaml**" file:
```
on:
push:
branches:
- main
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- uses: jakejarvis/s3-sync-action@master
with:
args: --follow-symlinks --delete
env:
AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_REGION: 'us-east-1' #optional: defaults to us-east-1
SOURCE_DIR: 'frontend' # optional: defaults to entire repository
```
Your GitHub repo will now have a new action, but you still have to set up your environment variables, such as AWS_S3_BUCKET, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY.
Within your Github repo, you would have to navigate to Settings → Secrets and variables (under the Security Section on the left side of the screen) → Actions
These access keys are associated with your AWS user and will need to be retrieved from the AWS console.
## 11. Infrastructure as Code with Terraform
So far, we've been clicking around in the AWS console, setting permissions and configurations for multiple AWS services. It can all get confusing and messy very quickly. Terraform allows us to set up our infrastructure in a code based format. This allows us to roll back configurations through versioning, and easily replicate our setup.
This was my first time using Terraform. For now, I just used it to create an API Gateway and re-create my Lambda function. So instead of my Javascript hitting the public function URL of my Lambda Function, I can have it hit my API Gateway, which will invoke my Lambda function. API Gateway has much better security, providing:
* Authentication and Authorization through IAM, Cognito, API Keys
* Throttling and Rate Limiting
* Private Endpoints
* Input Validation
After downloading Terraform onto my machine, I created a "**terraform**" folder in the root directory of my project. Then I created two files:
* provider.tf
* main.tf
Here is my provider.tf:
```
terraform {
required_providers {
aws = {
version =">=4.9.0"
source = "hashicorp/aws"
}
}
}
provider "aws" {
access_key = "*****"
secret_key = "*****"
region = "us-east-1"
}
```
I've made sure to omit this from my Github using a .gitignore file, since it would expose my AWS user's access key and secret key.
This file basically configures the provider which Terraform will use. In our case, it is AWS.
Next the main.tf:
```
data "archive_file" "zip_the_python_code" {
type = "zip"
source_file = "${path.module}/aws_lambda/func.py"
output_path = "${path.module}/aws_lambda/func.zip"
}
resource "aws_lambda_function" "myfunc" {
filename = data.archive_file.zip_the_python_code.output_path
source_code_hash = data.archive_file.zip_the_python_code.output_base64sha256
function_name = "myfunc"
role = "arn:aws:iam::631242286372:role/service-role/cloud-resume-views-role-bnt3oikr"
handler = "func.lambda_handler"
runtime = "python3.12"
}
resource "aws_lambda_permission" "apigateway" {
statement_id = "AllowAPIGatewayInvoke"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.myfunc.function_name
principal = "apigateway.amazonaws.com"
source_arn = "arn:aws:execute-api:us-east-1:${data.aws_caller_identity.current.account_id}:${aws_apigatewayv2_api.http_api.id}/*/GET/views"
}
resource "aws_apigatewayv2_api" "http_api" {
name = "views-http-api"
protocol_type = "HTTP"
}
resource "aws_apigatewayv2_integration" "lambda_integration" {
api_id = aws_apigatewayv2_api.http_api.id
integration_type = "AWS_PROXY"
integration_uri = aws_lambda_function.myfunc.invoke_arn
integration_method = "POST"
payload_format_version = "1.0" # Explicitly set payload format version
}
resource "aws_apigatewayv2_route" "default_route" {
api_id = aws_apigatewayv2_api.http_api.id
route_key = "GET /views"
target = "integrations/${aws_apigatewayv2_integration.lambda_integration.id}"
}
resource "aws_apigatewayv2_stage" "default_stage" {
api_id = aws_apigatewayv2_api.http_api.id
name = "$default"
auto_deploy = true
}
output "http_api_url" {
value = aws_apigatewayv2_stage.default_stage.invoke_url
}
data "aws_caller_identity" "current" {}
```
The archive_file data source zips the Python code (func.py) into func.zip. The aws_lambda_function resource creates the Lambda function using this zip file. The aws_lambda_permission resource grants API Gateway permission to invoke the Lambda function. The aws_apigatewayv2_api, aws_apigatewayv2_integration, and aws_apigatewayv2_route resources set up an HTTP API Gateway that integrates with the Lambda function, and aws_apigatewayv2_stage deploys this API. The output block provides the API endpoint URL. Additionally, data "aws_caller_identity" "current" retrieves the AWS account details.
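The `archive_file` step is, in effect, a zip of the source plus a hash. A rough stdlib Python equivalent (done in memory here, whereas Terraform writes `func.zip` to disk and hashes it to detect code changes):

```python
import io
import zipfile

def zip_lambda_source(source_name: str, source_code: str) -> bytes:
    """Roughly what the archive_file data source does: package func.py into a zip."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
        archive.writestr(source_name, source_code)
    return buffer.getvalue()

package = zip_lambda_source("func.py", "def lambda_handler(event, context): ...")
print(len(package) > 0)  # True
```

Because `source_code_hash` is derived from this archive, the Lambda is only re-deployed when the code in the zip actually changes.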
Before initializing and applying the terraform code, I created another folder called "**aws_lambda**" and within it created a file func.py. This is where the Lambda function code from earlier is pasted in.
With that in place, run the commands:
* terraform init
* terraform plan
* terraform apply
After a few moments, my services and settings were configured in AWS.
One thing to note with this project, we can update the code for the frontend, commit and push to Github, invalidate the CloudFront cache, and see the changes applied. However, the Lambda function requires the Terraform commands to be executed for the changes to be applied.
## 12. Conclusion
I still have some updates to make with Terraform to configure the rest of the services I am utilizing, but I feel confident about what I've been able to build so far. This challenge has significantly deepened my understanding of AWS, providing me with hands-on experience in managing and automating cloud infrastructure. The skills and knowledge I’ve gained will be invaluable as I continue to build scalable, secure, and efficient cloud architectures in my career. I am excited to further refine my setup and explore additional AWS services and Terraform capabilities.
And if you want to check out my project, [click here](https://andytran.click)!
## 13. Edits
So, my counter stopped working...
**Access to fetch at 'https://g6thr4od50.execute-api.us-east-1.amazonaws.com/views' from origin 'https://andytran.click' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.**
My browser is sending a request to the API Gateway, which is invoking my Lambda function, but my Lambda function isn't responding with the necessary CORS headers. The browser saw that the response didn't include the Access-Control-Allow-Origin header and blocked the response, resulting in a CORS error.
So I updated the Lambda function here with this in both return statements:
```
"headers": {
"Content-Type": "application/json",
"Access-Control-Allow-Origin": "*"
}
```
So the updated Lambda function looks like:
```
import json
import boto3
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("cloud-resume-challenge")
def lambda_handler(event, context):
try:
# Get the current view count from DynamoDB
response = table.get_item(Key={
"id": "1"
})
if 'Item' in response:
views = int(response["Item"]["views"])
else:
views = 0 # Default to 0 if the item doesn't exist
# Increment the view count
views += 1
# Update the view count in DynamoDB
table.put_item(Item={
"id": "1",
"views": views
})
# Return the updated view count with headers
return {
"statusCode": 200,
"headers": {
"Content-Type": "application/json",
"Access-Control-Allow-Origin": "*"
},
"body": json.dumps({"views": views})
}
except Exception as e:
print(f"Error: {e}")
return {
"statusCode": 500,
"headers": {
"Content-Type": "application/json",
"Access-Control-Allow-Origin": "*"
},
"body": json.dumps({"error": str(e)})
}
```
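As an aside, one way to make sure no return statement can ever miss those headers again is a small wrapper. This is a refactor sketch, not the code I actually deployed:

```python
import json

CORS_HEADERS = {
    "Content-Type": "application/json",
    # Could be locked down to "https://andytran.click" instead of "*"
    "Access-Control-Allow-Origin": "*",
}

def respond(status: int, payload: dict) -> dict:
    """Wrap every Lambda return value so the CORS headers are always present."""
    return {"statusCode": status, "headers": dict(CORS_HEADERS), "body": json.dumps(payload)}

print(respond(200, {"views": 42})["statusCode"])  # 200
```

Both the success and error paths would then become one-liners, e.g. `return respond(200, {"views": views})` and `return respond(500, {"error": str(e)})`.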
Figured I might as well add burst/rate limiting to my API Gateway with my main.tf file:
```
resource "aws_apigatewayv2_stage" "default_stage" {
api_id = aws_apigatewayv2_api.http_api.id
name = "$default"
auto_deploy = true
default_route_settings {
throttling_burst_limit = 10
throttling_rate_limit = 5
}
}
``` | aktran321 |
1,911,341 | How to Enable Night Light in Windows 11? | Windows 11 Night Light: Night Mode in Windows 11 is a valuable feature designed to reduce eye strain... | 0 | 2024-07-12T11:08:23 | https://winsides.com/how-to-enable-night-light-in-windows-11/ | windows11, beginners, tutorials, tips | ---
title: How to Enable Night Light in Windows 11?
published: true
date: 2024-07-04 05:20:19 UTC
tags: Windows11,beginners, tutorials, tips
canonical_url: https://winsides.com/how-to-enable-night-light-in-windows-11/
cover_image: https://winsides.com/wp-content/uploads/2024/07/Enable-Night-Light-in-Windows-11.webp
---
Windows 11 Night Light: Night Mode in Windows 11 is a valuable feature designed to **reduce eye strain** and **improve sleep quality** by adjusting the **screen’s color temperature** to warmer tones during evening and night hours. This feature, also known as Night Light, helps mitigate the impact of **blue light** emitted from screens, which can interfere with the natural sleep cycle. This article provides detailed steps on how to enable Night Light in Windows 11, how to customize it, and more. Let’s get into the steps.
- Open **Windows Settings** using the Key Combination <kbd>Win Key</kbd> + <kbd>I</kbd>.
- From the left pane, click on **System**.

_System Settings_
- Now, click on **Display**.

_Display_
- Under **Brightness & Color** , you can find **Night Light**.
- Toggle the Night Light Switch to **ON**.

_Enable Light in Windows 11_
- That is it. Night Light is now enabled in Windows 11.
## Customize Night Light in Windows 11:
Windows 11 offers different customization options for Night Light like **Strength** , and **Scheduled Night Night**. In this section, we will find more about them.
- Under **System > Display > Night light** , where we are currently, the **Strength** option slider allows us to adjust the intensity of the Night Light, The higher the slider position is, the stronger the Warmness in the Display blocking the Blue light.

_Adjust the strength of the Night light_
- The next option is to **Schedule Night light** —Toggle Schedule Night Light to turn on this option in Windows 11.

_Schedule Night Light in Windows 11_
- There are two options here. **Sunset to sunrise (12:00 AM – 12:00 AM)**. You can either choose this, or you can **Set Hours** according to your convenience.
Please make sure to **turn on Location Services** to schedule Night Light at sunset. It is mandatory.
### Understanding Blue Light and its Effects on Eyes:

_Visible Light Spectrum_
Blue light is a part of the **visible light spectrum** , with wavelengths ranging from approximately **380 to 500 nanometers**. It has the **shortest wavelengths** and the highest energy among visible-light colors. Sources include **Sun Light** , **Digital Screens** , **LED** , **Fluorescent lighting** , and more.
1. **Spending long hours** in front of screens can lead to digital eye strain or computer vision syndrome, characterized by discomfort, dryness, and fatigue.
2. The intensity of the **high-energy blue light scatters** more easily than other visible light, making it harder for the eyes to focus. This constant refocusing effort can cause strain.
3. Blue light exposure, especially **before bedtime** , can interfere with the production of **melatonin** , a hormone that regulates sleep. Reduced melatonin can lead to difficulty falling and staying asleep.
4. Excessive blue light exposure at night can disrupt the **natural sleep-wake cycle** , affecting overall sleep quality and leading to **daytime fatigue**. | vigneshwaran_vijayakumar |
1,911,562 | Knight of Knowledge: How Copilot’s Generative Answers Illuminate Data | Intro: Imagine a tool that not only converses with you but also delves into the depths of... | 26,301 | 2024-07-08T06:53:26 | https://dev.to/balagmadhu/knight-of-knowledge-how-copilots-generative-answers-illuminate-data-28el | copilotstudio, powerplatform | ## Intro:
Imagine a tool that not only converses with you but also delves into the depths of your own curated knowledge repositories to bring forth answers tailored to your context. This is the promise of Copilot’s new feature, a testament to the evolution of conversational AI that transcends the need for rigid dialog trees and manual scripting. It’s a leap towards a future where AI assists us by understanding our documents as well as we do, if not better.
## Decoding Co-pilotstudio - conceptual flow:
So here is my take on how information flows from the initial conversation input through various stages of processing to generate an appropriate response. The system uses the identified intent and topics to determine the best response logic to apply, whether it’s a standard dialogue, a knowledge-based answer, or a custom-tailored response.
- **Conversation (Utterance)**:
This is where the interaction begins, with the user providing input through speech or text.
- **Triggers**:
These are events or conditions that initiate the system’s processes.
- **Intent Match**:
The system interprets the user’s intent from their utterance. This involves understanding what the user wants to achieve through their input. The match could come from generic/default system topics, or we could orchestrate the interaction so the user picks from pre-defined choices.
- **Topics**:
The system categorizes the user’s intent into specific topics (for example, Topic 1 might be a greeting, while Topic 2 might be a rule-based option that narrows the response for the conversation). Each topic corresponds to a different subject area that the system can handle.
- **Response Logic**: This includes different methods for generating responses:
1. **Standard Dialogue**: Pre-configured responses for common interactions.
2. **Ground Knowledge**: Alternative grounding for queries related to the knowledge base.
3. **Custom Connector**: Specialized responses that may involve connecting to external systems or databases.

## Demo:
Let’s take a look at how this great feature allows us to be flexible and active in responding to user questions, making sure that the system’s answers are accurate and relevant to the situation.
Here the Generative Answer plugin searches the knowledge base which is PDF and provides the citation of the response.

The key is to design the topic navigation and ground each topic in the right knowledge base. Just as important is how well we write the prompt for the LLM to generate the user response.

## Product Documentation:
[Generative answers](https://learn.microsoft.com/en-us/microsoft-copilot-studio/nlu-boost-conversations)
[CoPilot Studio](https://learn.microsoft.com/en-us/microsoft-copilot-studio/guidance/) | balagmadhu |
1,911,769 | 40 Days Of Kubernetes (12/40) | Day 12/40 Kubernetes Daemonset Explained - Daemonsets, Job and Cronjob in... | 0 | 2024-07-10T17:00:47 | https://dev.to/sina14/40-days-of-kubernetes-1240-5g6k | kubernetes, 40daysofkubernetes | ## Day 12/40
# Kubernetes Daemonset Explained - Daemonsets, Job and Cronjob in Kubernetes
[Video Link](https://www.youtube.com/watch?v=kvITrySpy_k)
@piyushsachdeva
[Git Repository](https://github.com/piyushsachdeva/CKA-2024/)
[My Git Repo](https://github.com/sina14/40daysofkubernetes)
This section is about `cronjob`, `job` and `daemonset`.
By definition, `DaemonSets` are `Kubernetes` API objects that allow you to run `Pods` as a `daemon` on each of your `Nodes`. New Nodes that join the `cluster` will automatically start running `Pods` that are part of a `DaemonSet`.
`DaemonSets` are often used to run long-lived background services such as Node monitoring systems and log collection agents. To ensure complete coverage, it’s important that these apps run a `Pod` on every Node in your `cluster`. [source](https://spacelift.io/blog/kubernetes-daemonset#what-is-a-kubernetes-daemonset)
---
#### 1. Create a `daemonset`:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    env: demo
spec:
  template:
    metadata:
      labels:
        env: demo
      name: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          ports:
            - containerPort: 80
  selector:
    matchLabels:
      env: demo
```
```console
root@localhost:~# kubectl apply -f daemonset.yaml
daemonset.apps/nginx-ds created
root@localhost:~# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-ds-mf4gz 1/1 Running 0 11s
nginx-ds-rslrm 1/1 Running 0 11s
root@localhost:~# kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ds-mf4gz 1/1 Running 0 19s 10.244.2.9 lucky-luke-worker2 <none> <none>
nginx-ds-rslrm 1/1 Running 0 19s 10.244.1.12 lucky-luke-worker <none> <none>
```
As you can see, our custom `workload` deployed as a `daemonset` is running on all `worker` nodes. Because it is not a `control-plane` component and the `control-plane` node doesn't tolerate custom `workloads`, it doesn't run on the `control-plane` node (but we can change that).
If one of these pods is removed, the `DaemonSet` will start another one on that `node`:
```console
root@localhost:~# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-ds-mf4gz 1/1 Running 0 9m40s
nginx-ds-rslrm 1/1 Running 0 9m40s
root@localhost:~# kubectl delete pod nginx-ds-mf4gz
pod "nginx-ds-mf4gz" deleted
root@localhost:~# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-ds-946m4 1/1 Running 0 5s
nginx-ds-rslrm 1/1 Running 0 10m
```
- See all `daemonset` on our `cluster`:
```console
root@localhost:~# kubectl get ds --all-namespaces
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
default nginx-ds 2 2 2 2 2 <none> 10m
kube-system kindnet 3 3 3 3 3 kubernetes.io/os=linux 3d
kube-system kube-proxy 3 3 3 3 3 kubernetes.io/os=linux 3d
```
---
#### 2. Cronjobs
See the [crontab guru](https://crontab.guru/) website (from Cronitor) to understand how cron schedules are configured for tasks that run at specific times.

(Photo from the video)
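As a rough illustration (not part of the original tutorial), the five fields of a standard cron expression can be split out with a few lines of JavaScript. `parseCron` here is a hypothetical helper for explanation only, not anything Kubernetes provides:

```javascript
// Minimal sketch: split a standard 5-field cron expression into named fields.
// parseCron is a hypothetical helper for illustration only.
function parseCron(expression) {
  const fields = expression.trim().split(/\s+/);
  if (fields.length !== 5) {
    throw new Error("expected 5 fields: minute hour day-of-month month day-of-week");
  }
  const [minute, hour, dayOfMonth, month, dayOfWeek] = fields;
  return { minute, hour, dayOfMonth, month, dayOfWeek };
}

// "*/5 * * * *" means: every 5 minutes, every hour, every day.
const schedule = parseCron("*/5 * * * *");
console.log(schedule.minute); // "*/5"
```

Reading the fields left to right is usually enough to sanity-check a `schedule:` value before putting it in a CronJob manifest.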
- Sample from `Kubernetes` [website](https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#creating-a-cron-job):
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox:1.28
              imagePullPolicy: IfNotPresent
              command:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
```
#### 3. Jobs
A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up the Pods it created. Suspending a Job will delete its active Pods until the Job is resumed again. [source](https://kubernetes.io/docs/concepts/workloads/controllers/job/)
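The retry behavior described above — keep recreating Pods until the task succeeds, but give up after `backoffLimit` failures — can be sketched in plain JavaScript. This is a conceptual model only, not how the actual Job controller is implemented:

```javascript
// Conceptual sketch of a Job's retry loop: run a task until it succeeds,
// giving up after `backoffLimit` consecutive failures.
function runJob(task, backoffLimit) {
  let failures = 0;
  while (failures <= backoffLimit) {
    try {
      return { succeeded: true, result: task(), failures };
    } catch (err) {
      failures += 1; // a failed Pod counts toward the backoff limit
    }
  }
  return { succeeded: false, failures };
}

// A task that fails twice, then succeeds:
let attempts = 0;
const flaky = () => {
  attempts += 1;
  if (attempts < 3) throw new Error("pod failed");
  return "done";
};
console.log(runJob(flaky, 4)); // { succeeded: true, result: 'done', failures: 2 }
```

With `backoffLimit: 4` as in the manifest below, the Job tolerates up to four failed runs before being marked failed.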
- Sample from `Kubernetes` [website](https://kubernetes.io/docs/concepts/workloads/controllers/job/#running-an-example-job)
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
        - name: pi
          image: perl:5.34.0
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
```
| sina14 |
1,911,846 | Upload videos to Vimeo using NodeJS, ExpressJS and TypeScript | There are many services that offer you the opportunity to store your videos or images in them, some... | 0 | 2024-07-10T17:17:13 | https://dev.to/remy349/upload-videos-to-vimeo-using-nodejs-expressjs-and-typescript-3oip | node, express, typescript, api | There are many services that offer you the opportunity to store your videos or images in them, some of these can be Cloudinary, Amazon S3 and more. But the protagonist of this tutorial is Vimeo.
## What is Vimeo?
Vimeo is a video hosting and distribution platform that allows users to upload, share and view high-definition videos. Vimeo is known for its focus on video quality and for offering advanced tools for content creators, such as customization options, detailed analytics and privacy controls.
## Requirements to develop the project
- NodeJS installed
- Vimeo account
## Project development
**1. External links**
- [Project repository - Github](https://github.com/Remy349/upload-videos-vimeo-node)
**2. Initial setup**
The first thing is to create a folder where all the necessary content and code will be stored, then create our package.json to install all the necessary dependencies:
```Shell
$ mkdir project-name
$ cd project-name/
$ npm init -y
```
Now that we have our package.json file, we are going to define the necessary file and folder structure so we can focus on the code. We will create a `src` folder in the root of our project; inside `src` is where we will store our files. The structure is the following:
```Shell
src
├── app.ts
└── routes
└── videos.ts
```
**3. Installing necessary dependencies and typescript setup**
It is time to install the necessary packages for the development of the project, in the terminal of our project we will execute the following:
```Shell
$ npm i express axios dotenv multer
```
These will be the basic dependencies of our project. Among the packages we have "multer", a Node.js middleware for handling multipart/form-data that is mainly used to upload files; "dotenv", to handle environment variables; "axios", for HTTP requests; and "express", which we will use to develop our API.
Now we will proceed to install the development dependencies and packages needed to configure typescript in our project:
```Shell
npm i typescript ts-node morgan nodemon @types/express @types/morgan @types/multer @types/node -D
```
Once everything is installed we will configure typescript and nodemon, nodemon is a tool that helps develop Node.js based applications by automatically restarting the node application when file changes in the directory are detected.
We will create a tsconfig.json file to configure typescript:
```Json
// tsconfig.json
{
  "compilerOptions": {
    "target": "ES6",
    "module": "commonjs",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  },
  "include": ["src/**/*.ts"],
  "exclude": ["node_modules"]
}
```
In our package.json we will add a new command for nodemon configuration in our project, also a command to compile our typescript and one more to run in production mode:
```Json
// package.json
{
  "name": "upload-videos-vimeo-node",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "dev": "nodemon src/app.ts",
    "build": "npx tsc",
    "start": "node src/app.js"
  },
  ...
}
```
**4. Basic server setup**
Let's go to our app.ts file to start the basic configuration of our server:
```Typescript
// app.ts
import "dotenv/config";
import express from "express";
import morgan from "morgan";
const app = express();
const PORT = 3000;
app.use(morgan("dev"));
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
app.listen(PORT, () => {
console.log(`Server is running at http://localhost:${PORT}`);
});
```
After that we simply execute the following command to run our server:
```Shell
$ npm run dev
> upload-videos-vimeo-node@1.0.0 dev
> nodemon src/app.ts
[nodemon] 3.1.4
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: ts,json
[nodemon] starting `ts-node src/app.ts`
Server is running at http://localhost:3000
```
**5. Create api path for uploading videos**
Let's go to our videos.ts file to create the path we will use to process the video upload to vimeo:
```Typescript
// videos.ts
import { Router, Request as ExRequest, Response as ExResponse } from "express";

const videoRouter = Router();

videoRouter.post("/upload", async (req: ExRequest, res: ExResponse) => {
  try {
    res.status(201).json({ message: "Video successfully uploaded" });
  } catch (error) {
    res.status(500).json({ message: "Internal server error" });
  }
});

export default videoRouter;
```
In the code we have written we are declaring the method and the path we will use to process the video upload to Vimeo. The logic lives inside a try/catch block to handle possible errors that may occur while processing the upload.
At the end we are exporting our `videoRouter` variable so we can use it in our app.ts file to register this route in our API. Let's go into our app.ts file to update the code and use the `videoRouter` variable:
```Typescript
// app.ts
import "dotenv/config";
import express from "express";
import morgan from "morgan";
// This is new
import videoRouter from "./routes/videos";
...
app.use(express.urlencoded({ extended: true }));
// This is new
app.use("/api/videos", videoRouter);
app.listen(PORT, () => {
console.log(`Server is running at http://localhost:${PORT}`);
});
```
If you visit the path `http://localhost:3000/api/videos/upload` using the `POST` method you should see the following:

**6. Upload videos to vimeo using its api**
It is time to start integrating vimeo in our api, in order to upload videos to vimeo we will make use of its api. We must go to the developers section of vimeo using the link `https://developer.vimeo.com/`:

Something very important that we need to be able to upload videos to vimeo is that we need an access token. For that we will click on the `Create an app` button:

After clicking on the button to `create an app`, a form will appear with basic information that we will have to fill out in order to get our access token:

Once the form is filled out, it will redirect us to the panel to obtain our access token. In the `Generate Access Token` section we will click on `Authenticated(you)`:

To be able to use the vimeo api without problems we must allow our token to edit and upload videos. Click on the `Private`, `Edit` and `Upload` checkboxes.
Once we have everything we need, we will go to the vimeo api documentation to start the process of uploading our videos to the platform `https://developer.vimeo.com/api/upload/videos`.
Before continuing with the development of our api we must create an `.env` file in the root of our project to save the access token we obtained:
```Shell
// .env
VIMEO_ACCESS_TOKEN=your_access_token_here
```
In our videos.ts file we will start configuring multer so that our api is able to receive and process files. In this case we are interested in videos:
```Typescript
// videos.ts
import { Router, Request as ExRequest, Response as ExResponse } from "express";
import multer from "multer";

const videoRouter = Router();

const storage = multer.memoryStorage();
const upload = multer({ storage });

videoRouter.post("/upload", upload.single("file"), async (req: ExRequest, res: ExResponse) => {
  try {
    res.status(201).json({ message: "Video successfully uploaded" });
  } catch (error) {
    res.status(500).json({ message: "Internal server error" });
  }
});

export default videoRouter;
```
The new code we have written lets our API process the files it receives. Using `multer.memoryStorage` we tell multer to store the file temporarily in memory, and by passing `upload.single("file")` to the `post` route we tell it to process a single file sent under the field name `file`.
After having configured multer we will make a call to the vimeo api using the `post` method to create the process of uploading the video to our panel in our vimeo account:
```Typescript
// videos.ts
...

videoRouter.post("/upload", upload.single("file"), async (req: ExRequest, res: ExResponse) => {
  try {
    const VIMEO_ACCESS_TOKEN = process.env.VIMEO_ACCESS_TOKEN;
    const file = req.file;

    if (!file) {
      return res.status(400).json({ message: "File not uploaded" });
    }

    const response = await axios.post(
      "https://api.vimeo.com/me/videos",
      {
        upload: {
          approach: "tus",
          size: `${file.size}`,
        },
      },
      {
        headers: {
          Authorization: `Bearer ${VIMEO_ACCESS_TOKEN}`,
          "Content-Type": "application/json",
          Accept: "application/vnd.vimeo.*+json;version=3.4",
        },
      },
    );

    res.status(201).json({ message: "Video successfully uploaded" });
  } catch (error) {
    res.status(500).json({ message: "Internal server error" });
  }
});

...
```
In the code we added we are creating a variable to store our access token and another one to handle the file information we are receiving through the request. After this we create a small validation in case an empty request is sent without a file to process.
After that we create the `post` request to the Vimeo API endpoint `https://api.vimeo.com/me/videos`. We send the size of the file as data (using `file.size`) and attach our access token in the headers. The data we receive after making the request is stored in a variable, in this case called `response`.
Now that we have created the request to upload our video to Vimeo, the last thing we need to do is complete the upload. To do that we must make a `patch` request to the Vimeo API, sending as data the content of the file in its binary (buffer) format, which we get from our `file` variable via its `file.buffer` property.
```Typescript
// videos.ts
...

videoRouter.post("/upload", upload.single("file"), async (req: ExRequest, res: ExResponse) => {
  try {
    ...

    const uploadLink: string = response.data.upload.upload_link;

    await axios.patch(uploadLink, file.buffer, {
      headers: {
        "Content-Type": "application/offset+octet-stream",
        "Upload-Offset": "0",
        "Tus-Resumable": "1.0.0",
      },
    });

    res.status(201).json({ message: "Video successfully uploaded" });
  } catch (error) {
    res.status(500).json({ message: "Internal server error" });
  }
});

...
```
The `uploadLink` variable will be used to complete the video upload, it is a link that we receive as a response from our `response` variable after completing the `post` request.
As a final point we make the patch request to the vimeo api sending as parameter the link `uploadLink` and as data in the body of the request the content of the file in its buffer format using `file.buffer`. With this our video will be successfully uploaded to vimeo.
We send the file to our server using the format `multipart/form-data`:

And after having done the whole process correctly we should receive a success response if the video is successfully uploaded to vimeo:

The complete code should look like this:
```Typescript
// videos.ts
import axios from "axios";
import { Router, Request as ExRequest, Response as ExResponse } from "express";
import multer from "multer";
const videoRouter = Router();
const storage = multer.memoryStorage();
const upload = multer({ storage });
videoRouter.post("/upload", upload.single("file"), async (req: ExRequest, res: ExResponse) => {
try {
const VIMEO_ACCESS_TOKEN = process.env.VIMEO_ACCESS_TOKEN;
const file = req.file;
if (!file) {
return res.status(400).json({ message: "File not uploaded" });
}
const response = await axios.post(
"https://api.vimeo.com/me/videos",
{
upload: {
approach: "tus",
size: `${file.size}`,
},
},
{
headers: {
Authorization: `Bearer ${VIMEO_ACCESS_TOKEN}`,
"Content-Type": "application/json",
Accept: "application/vnd.vimeo.*+json;version=3.4",
},
},
);
const uploadLink: string = response.data.upload.upload_link;
await axios.patch(uploadLink, file.buffer, {
headers: {
"Content-Type": "application/offset+octet-stream",
"Upload-Offset": "0",
"Tus-Resumable": "1.0.0",
},
});
res.status(201).json({ message: "Video successfully uploaded" });
} catch (error) {
res.status(500).json({ message: "Internal server error" });
}
},
);
export default videoRouter;
``` | remy349 |
1,911,922 | Introduction to Functional Programming in JavaScript: Immutability #6 | Immutability is a key concept in functional programming and is crucial for writing reliable,... | 0 | 2024-07-08T22:00:00 | https://dev.to/francescoagati/introduction-to-functional-programming-in-javascript-immutability-6-3bfg | javascript | Immutability is a key concept in functional programming and is crucial for writing reliable, maintainable, and predictable code. By ensuring that data objects do not change after they are created, immutability helps to eliminate side effects and makes it easier to reason about the state of your application.
#### What is Immutability?
Immutability means that once an object is created, it cannot be changed. Instead of modifying an object, you create a new object with the desired changes. This contrasts with mutable objects, which can be modified after they are created.
Immutability can be applied to various types of data, including numbers, strings, arrays, and objects. Primitive values (numbers, strings, booleans) are inherently immutable in JavaScript, but complex data structures like arrays and objects are mutable by default.
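To make that last point concrete, here is a small demonstration (not from the original article) of how primitive values resist mutation while arrays and objects accept it by default:

```javascript
// Strings are primitives: string methods return NEW strings and never
// modify the original value.
let s = "abc";
const upper = s.toUpperCase(); // returns a new string
console.log(s);     // "abc" — original unchanged
console.log(upper); // "ABC"

// Arrays and objects are mutable by default: the same reference changes.
const arr = [1, 2, 3];
arr.push(4);
console.log(arr); // [1, 2, 3, 4]

const obj = { a: 1 };
obj.a = 2;
console.log(obj.a); // 2
```

This is why the techniques below focus on arrays and objects: primitives already behave immutably.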
#### Why is Immutability Important?
1. **Predictability**: Immutable data ensures that objects do not change unexpectedly, making the behavior of your program more predictable and easier to understand.
2. **Debugging**: When data is immutable, you can be confident that once it is created, it remains unchanged, which simplifies debugging and tracing the flow of data through your application.
3. **Concurrency**: Immutability helps avoid issues related to concurrent modifications of shared data, which is particularly important in multi-threaded or asynchronous environments.
4. **Time-Travel Debugging**: With immutable data, you can easily implement features like time-travel debugging, where you can step back and forth through the history of state changes in your application.
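As a rough sketch of the time-travel idea (an illustration, not a full implementation): because every update produces a brand-new state object, keeping a history is just keeping references.

```javascript
// Each update produces a NEW frozen state object, so history is just an
// array of past references — "traveling back" is picking an earlier index.
const history = [];
let state = Object.freeze({ count: 0 });
history.push(state);

function update(changes) {
  state = Object.freeze({ ...state, ...changes });
  history.push(state);
  return state;
}

update({ count: 1 });
update({ count: 2 });

console.log(state.count);      // 2 (current)
console.log(history[1].count); // 1 (one step back)
console.log(history[0].count); // 0 (initial state, untouched)
```

With mutable state this would not work: every entry in `history` would point at the same, already-changed object.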
#### Achieving Immutability in JavaScript
While JavaScript does not enforce immutability by default, there are several techniques and libraries you can use to achieve immutability in your code.
1. **Using `const` for Primitive Values**
```javascript
const x = 42;
// x = 43; // This will cause an error because x is immutable
```
Declaring a variable with `const` ensures that the variable cannot be reassigned, making it immutable.
2. **Immutable Arrays**
To achieve immutability with arrays, you can use methods that do not mutate the original array, such as `map`, `filter`, `concat`, and the spread operator.
```javascript
const arr = [1, 2, 3];
// Using map
const doubled = arr.map(x => x * 2);
// Using filter
const evens = arr.filter(x => x % 2 === 0);
// Using concat
const extended = arr.concat([4, 5]);
// Using spread operator
const withNewElement = [...arr, 4];
console.log(arr); // [1, 2, 3]
console.log(doubled); // [2, 4, 6]
console.log(evens); // [2]
console.log(extended); // [1, 2, 3, 4, 5]
console.log(withNewElement); // [1, 2, 3, 4]
```
3. **Immutable Objects**
For objects, you can use `Object.assign` and the spread operator to create new objects with updated properties.
```javascript
const obj = { a: 1, b: 2 };
// Using Object.assign
const updatedObj = Object.assign({}, obj, { b: 3 });
// Using spread operator
const updatedObj2 = { ...obj, b: 3 };
console.log(obj); // { a: 1, b: 2 }
console.log(updatedObj); // { a: 1, b: 3 }
console.log(updatedObj2); // { a: 1, b: 3 }
```
4. **Deep Immutability**
For deeply nested structures, achieving immutability can be more challenging. Libraries like [Immutable.js](https://immutable-js.github.io/immutable-js/) and [Immer](https://immerjs.github.io/immer/) provide tools for creating and managing immutable data structures.
```javascript
const { Map } = require('immutable');
const obj = Map({ a: 1, b: 2 });
const updatedObj = obj.set('b', 3);
console.log(obj.toObject()); // { a: 1, b: 2 }
console.log(updatedObj.toObject()); // { a: 1, b: 3 }
```
```javascript
const produce = require('immer').produce;

const obj = { a: 1, b: 2 };

const updatedObj = produce(obj, draft => {
  draft.b = 3;
});

console.log(obj); // { a: 1, b: 2 }
console.log(updatedObj); // { a: 1, b: 3 }
```
5. **Object.freeze**
You can use `Object.freeze` to make an object immutable. However, this is a shallow freeze, meaning nested objects can still be modified.
```javascript
const obj = Object.freeze({ a: 1, b: { c: 2 } });
// obj.a = 3; // This will cause an error
obj.b.c = 3; // This will not cause an error because the freeze is shallow
console.log(obj); // { a: 1, b: { c: 3 } }
```
To achieve deep immutability, you can use recursive freezing:
```javascript
function deepFreeze(obj) {
  Object.keys(obj).forEach(prop => {
    if (typeof obj[prop] === 'object' && obj[prop] !== null) {
      deepFreeze(obj[prop]);
    }
  });

  return Object.freeze(obj);
}

const obj = deepFreeze({ a: 1, b: { c: 2 } });
// obj.a = 3; // This will cause an error
// obj.b.c = 3; // This will also cause an error
console.log(obj); // { a: 1, b: { c: 2 } }
```
| francescoagati |
1,911,942 | Accelerate Couchbase-Powered RAG AI Application With NVIDIA NIM/NeMo and LangChain | Today, we’re excited to announce our new integration with NVIDIA NIM/NeMo. In this blog post, we... | 0 | 2024-07-11T10:32:29 | https://www.couchbase.com/blog/accelerate-rag-ai-couchbase-nvidia/ | couchbase, rag, nvidia, langchain | ---
title: Accelerate Couchbase-Powered RAG AI Application With NVIDIA NIM/NeMo and LangChain
published: true
date: 2024-07-04 16:49:03 UTC
tags: Couchbase,RAG,NVIDIA,Langchain
canonical_url: https://www.couchbase.com/blog/accelerate-rag-ai-couchbase-nvidia/
---
Today, we’re excited to announce our new integration with NVIDIA NIM/NeMo. In this blog post, we present a solution concept of an interactive chatbot based on a _Retrieval Augmented Generation_ (RAG) architecture with Couchbase Capella as a Vector database. The retrieval and generation phases of the RAG pipeline are accelerated by NVIDIA NIM/NeMo with just a few lines of code.
Enterprises across various verticals strive to offer the best customer service to their customers. To achieve this, they are arming their frontline workers such as ER nurses, store sales associates, and help desk representatives, with AI-powered interactive question-and-answer (QA) chatbots to retrieve relevant and up-to-date information quickly.
Chatbots are usually based on [RAG](https://www.couchbase.com/blog/an-overview-of-retrieval-augmented-generation/), an AI framework used for retrieving facts from the enterprise’s knowledge base to ground LLM responses in the most accurate and recent information. It involves three distinct phases, which starts with the retrieval of the most relevant context using [vector search](https://www.couchbase.com/products/vector-search/), augmentation of the user’s query with the context, and, finally, generating relevant responses using an LLM.
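The three phases can be sketched in a few lines of JavaScript. All names here (`vectorSearch`, `buildPrompt`, `generate`, the sample corpus) are hypothetical stand-ins for illustration — in the real stack these roles are played by Capella vector search, LangChain, and a NIM-hosted LLM:

```javascript
// Conceptual sketch of the three RAG phases: retrieve, augment, generate.
function vectorSearch(query, corpus) {
  // Stand-in "retrieval": pick documents sharing a word with the query.
  const words = query.toLowerCase().split(/\s+/);
  return corpus.filter(doc =>
    words.some(w => doc.toLowerCase().includes(w))
  );
}

function buildPrompt(query, context) {
  // "Augmentation": ground the user's question in retrieved context.
  return `Context:\n${context.join("\n")}\n\nQuestion: ${query}`;
}

function generate(prompt) {
  // "Generation" placeholder — the real step calls an LLM endpoint.
  return `LLM answer based on: ${prompt.slice(0, 30)}...`;
}

const corpus = ["Capella supports vector search", "Totally unrelated note"];
const query = "How does vector search work?";
const context = vectorSearch(query, corpus);
const prompt = buildPrompt(query, context);
console.log(context.length); // 1 — only the relevant document was retrieved
```

The point of the sketch is the data flow: only documents relevant to the query reach the prompt, which is what keeps the LLM's answer grounded.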
The problem with existing RAG pipelines is that calls to the embedding service in the retrieval phase for converting user prompts into vectors can add significant latency, slowing down applications that require interactivity. Vectorizing a document corpus consisting of millions of PDFs, docs, and other knowledge bases can take a long time to vectorize, increasing the likelihood of using stale data for RAG. Further, users find it challenging to accelerate inference (tokens/sec) cost-efficiently to reduce the response time of their chatbot applications.
Figure 1 depicts a performant stack that will enable you to easily develop an interactive customer service chatbot. It consists of the StreamLit application framework, LangChain for orchestration, Couchbase Capella for indexing and searching vectors, and NVIDIA NIM/NeMo for accelerating the retrieval and generation stages.
[](https://www.couchbase.com/blog/wp-content/uploads/2024/07/image1-1.png)
Figure 1: Conceptual Architecture of a QA Chatbot built using Capella and NVIDIA NIM/NeMo
Couchbase Capella, a high-performance database-as-a-service (DBaaS), allows you to get started quickly with storing, indexing, and querying operational, vector, text, time series, and geospatial data while leveraging the flexibility of JSON. You can easily integrate Capella for [vector search](https://www.couchbase.com/products/vector-search/) or semantic search without the need for a separate vector database by integrating an orchestration framework such as [LangChain](https://www.langchain.com/) or [LlamaIndex](https://www.llamaindex.ai/) into your production RAG pipeline. It offers the [hybrid search](https://www.couchbase.com/blog/hybrid-search/) capability, which blends vector search with traditional search to improve search performance significantly. Further, you can extend vector search to the edge using Couchbase mobile for edge AI use cases.
Once you have configured Capella Vector Search, you can proceed to choose a performant model from the [NVIDIA API Catalog](https://build.nvidia.com/explore/discover), which offers a broad spectrum of foundation models that span open-source, NVIDIA AI foundation, and custom models, optimized to deliver the best performance on NVIDIA accelerated infrastructure. These models are deployed as [NVIDIA NIM](https://developer.nvidia.com/blog/nvidia-nim-offers-optimized-inference-microservices-for-deploying-ai-models-at-scale/?ref=blog.langchain.dev) either on-prem or in the cloud using easy-to-use prebuilt containers via a single command. NeMo Retriever, a part of NVIDIA NeMo, offers information retrieval with the lowest latency, highest throughput, and maximum data privacy.
The chatbot that we have developed using the aforementioned stack will allow you to upload your PDF documents and ask questions interactively. It uses _NV-QA-Embed_, a GPU-accelerated text embedding model used for question-answer retrieval, and [Llama 3 – 70B](https://build.nvidia.com/meta/llama3-70b), which is packaged as a NIM and accelerated on NVIDIA infrastructure. The [langchain-nvidia-ai-endpoints](https://python.langchain.com/v0.2/docs/integrations/chat/nvidia_ai_endpoints/) package contains LangChain integrations for building applications with models on NVIDIA NIM. Although we have used NVIDIA-hosted endpoints for prototyping purposes, we recommend that you consider using self-hosted NIM by referring to the [NIM documentation](https://docs.nvidia.com/nim/large-language-models/latest/introduction.html?nvid=nv-int-tblg-432774) for production deployments.
You can use this solution to support use cases that require quick information retrieval such as:
- Enabling ER nurses to speed up triaging by quick access to relevant healthcare information for alleviating overcrowding, long waits for care, and poor patient satisfaction.
- Helping customer service agents discover relevant knowledge quickly via an internal knowledge-base chatbot to reduce caller wait times. This will not only help boost CSAT scores but also allow for managing high call volumes.
- Helping sales associates inside a store to quickly discover and recommend items in a product catalog similar to the picture or description of the item requested by a shopper but is currently out of stock (stockout), to improve the shopping experience.
In conclusion, you can develop an interactive GenAI application, like a chatbot, with grounded and relevant responses using Couchbase Capella-based RAG and accelerate it using NVIDIA NIM/NeMo. This combination provides scalability, reliability, and ease of use. In addition to deploying alongside Capella for a DBaaS experience, NIM/NeMo can be deployed with on-prem or self-managed Couchbase in public clouds within your VPC for use cases that have stricter requirements for security and privacy. Additionally, you can use [NeMo Guardrails](https://developer.nvidia.com/blog/building-safer-llm-apps-with-langchain-templates-and-nvidia-nemo-guardrails/) to control the output of your LLM for content that your company deems objectionable.
The details of the chatbot application can be found in the Couchbase [Developer Portal](https://github.com/couchbase-examples/couchbase-tutorials/blob/141424e68c18233c4ed47cc6321d38540ab4ca54/tutorial/markdown/python/nvidia-nim-llama3-pdf-chat/nvidia-nim-llama3-pdf-chat.md) along with the [complete code](https://github.com/couchbase-examples/nvidia-rag-demo/blob/main/chat_with_pdf.py). Please sign up for a [Capella trial account](https://cloud.couchbase.com/sign-up), free [NVIDIA NIM account](https://build.nvidia.com/explore/discover?signin_corporate=false&signin=false), and start developing your GenAI application.
The post [Accelerate Couchbase-Powered RAG AI Application With NVIDIA NIM/NeMo and LangChain](https://www.couchbase.com/blog/accelerate-rag-ai-couchbase-nvidia/) appeared first on [The Couchbase Blog](https://www.couchbase.com/blog). | brianking |
1,911,958 | Introduction to Functional Programming in JavaScript: Monad and functors #7 | Monads and functors are advanced concepts in functional programming that provide powerful... | 0 | 2024-07-09T22:00:00 | https://dev.to/francescoagati/introduction-to-functional-programming-in-javascript-monad-and-functors-7-1l6l | javascript | Monads and functors are advanced concepts in functional programming that provide powerful abstractions for handling data transformations, side effects, and composition. While they originate from category theory in mathematics, they have practical applications in programming languages like JavaScript.
#### What is a Functor?
A functor is a data type that implements a `map` method, which applies a function to the value inside the functor and returns a new functor with the transformed value. In essence, functors allow you to apply a function to a wrapped value without changing the structure of the container.
##### Example of Functor
```javascript
class Box {
  constructor(value) {
    this.value = value;
  }

  map(fn) {
    return new Box(fn(this.value));
  }
}

// Usage
const box = new Box(2);
const result = box.map(x => x + 3).map(x => x * 2);
console.log(result); // Box { value: 10 }
```
In this example, `Box` is a functor. The `map` method applies the function `fn` to the value inside the box and returns a new `Box` with the transformed value.
#### What is a Monad?
A monad is a type of functor that implements two additional methods: `of` (or `return` in some languages) and `flatMap` (also known as `bind` or `chain`). Monads provide a way to chain operations on the contained value while maintaining the context of the monad.
##### Properties of Monads
1. **Unit (of)**: A method that takes a value and returns a monad containing that value.
2. **Bind (flatMap)**: A method that takes a function returning a monad and applies it to the contained value, flattening the result.
##### Example of Monad
```javascript
class Box {
  constructor(value) {
    this.value = value;
  }

  static of(value) {
    return new Box(value);
  }

  map(fn) {
    return Box.of(fn(this.value));
  }

  flatMap(fn) {
    return fn(this.value);
  }
}

// Usage
const box = Box.of(2);
const result = box
  .flatMap(x => Box.of(x + 3))
  .flatMap(x => Box.of(x * 2));

console.log(result); // Box { value: 10 }
```
In this example, `Box` is both a functor and a monad. The `of` method wraps a value in a `Box`, and the `flatMap` method applies a function to the contained value and returns the resulting monad.
#### Practical Use Cases of Monads and Functors
Monads and functors are not just theoretical constructs; they have practical applications in real-world programming. Let's explore some common use cases.
##### Handling Optional Values with Maybe Monad
The Maybe monad is used to handle optional values, avoiding null or undefined values and providing a safe way to chain operations.
```javascript
class Maybe {
constructor(value) {
this.value = value;
}
static of(value) {
return new Maybe(value);
}
isNothing() {
return this.value === null || this.value === undefined;
}
map(fn) {
return this.isNothing() ? this : Maybe.of(fn(this.value));
}
flatMap(fn) {
return this.isNothing() ? this : fn(this.value);
}
}
// Usage
const maybeValue = Maybe.of('hello')
.map(str => str.toUpperCase())
.flatMap(str => Maybe.of(`${str} WORLD`));
console.log(maybeValue); // Maybe { value: 'HELLO WORLD' }
```
In this example, the `Maybe` monad safely handles the optional value, allowing transformations only if the value is not null or undefined.
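The real payoff appears when a chain starts from a missing value: every subsequent `map` and `flatMap` call is skipped instead of throwing. A small sketch reusing the `Maybe` class above (redefined here so the snippet runs on its own; the `findUser` lookup is an invented example, not part of the article's code):

```javascript
class Maybe {
  constructor(value) { this.value = value; }
  static of(value) { return new Maybe(value); }
  isNothing() { return this.value === null || this.value === undefined; }
  map(fn) { return this.isNothing() ? this : Maybe.of(fn(this.value)); }
  flatMap(fn) { return this.isNothing() ? this : fn(this.value); }
}

// A lookup that may fail: returns undefined for unknown keys
const users = { alice: 'Alice' };
const findUser = name => Maybe.of(users[name]);

const greeting = findUser('bob')          // Maybe { value: undefined }
  .map(n => n.toUpperCase())              // skipped: would throw on undefined
  .flatMap(n => Maybe.of(`Hello, ${n}`)); // skipped as well

console.log(greeting.isNothing()); // true
```

Without `Maybe`, the `toUpperCase()` call on a missing user would raise a `TypeError`; here the chain simply short-circuits.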
##### Handling Asynchronous Operations with Promise Monad
Promises in JavaScript are monads that handle asynchronous operations, providing a way to chain operations and handle errors.
```javascript
const fetchData = url => {
return new Promise((resolve, reject) => {
setTimeout(() => {
resolve(`Data from ${url}`);
}, 1000);
});
};
// Usage
fetchData('https://api.example.com')
.then(data => {
console.log(data); // 'Data from https://api.example.com'
return fetchData('https://api.example.com/2');
})
.then(data => {
console.log(data); // 'Data from https://api.example.com/2'
})
.catch(error => {
console.error(error);
});
```
Promises allow you to handle asynchronous operations in a clean and composable manner, chaining operations and handling errors gracefully.
Monads and functors are powerful abstractions in functional programming that enable you to work with data transformations, side effects, and composition in a more structured and predictable way.
While the mathematical foundations of monads and functors can be complex, their practical applications are highly valuable in real-world programming. Whether you are handling optional values with the Maybe monad or managing asynchronous operations with Promises, these functional programming techniques can help you create more robust and reliable applications.
| francescoagati |
1,911,960 | Introduction to Functional Programming in JavaScript: Monoids, Applicatives, and Lenses #8 | Functional programming offers a rich set of tools and patterns that can help you write more... | 0 | 2024-07-10T22:00:00 | https://dev.to/francescoagati/introduction-to-functional-programming-in-javascript-monoids-applicatives-and-lenses-8-1gjb | javascript | Functional programming offers a rich set of tools and patterns that can help you write more expressive, modular, and maintainable code. Among these tools are monoids, applicatives, and lenses. These advanced concepts can initially seem daunting, but they provide powerful abstractions for dealing with data and computations.
#### Monoids
A monoid is a type with a binary associative operation and an identity element. This might sound abstract, but many common data types and operations form monoids.
##### Monoid Properties
1. **Associativity**: \( (a \cdot b) \cdot c = a \cdot (b \cdot c) \)
2. **Identity Element**: There exists an element \( e \) such that \( a \cdot e = e \cdot a = a \)
##### Example: String Concatenation
```javascript
const concat = (a, b) => a + b;
const identity = '';
console.log(concat('Hello, ', 'World!')); // 'Hello, World!'
console.log(concat(identity, 'Hello')); // 'Hello'
console.log(concat('World', identity)); // 'World'
```
String concatenation with an empty string as the identity element is a monoid.
##### Example: Array Concatenation
```javascript
const concat = (a, b) => a.concat(b);
const identity = [];
console.log(concat([1, 2], [3, 4])); // [1, 2, 3, 4]
console.log(concat(identity, [1, 2])); // [1, 2]
console.log(concat([1, 2], identity)); // [1, 2]
```
Array concatenation with an empty array as the identity element is also a monoid.
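Because a monoid supplies both the combining operation and a safe starting value, it pairs naturally with `reduce`: the identity element serves as the seed, so folding even an empty list is well-defined. A small sketch:

```javascript
const concatStrings = (a, b) => a + b;
const concatArrays = (a, b) => a.concat(b);

// The identity element doubles as the seed for reduce
console.log(['Hello, ', 'World', '!'].reduce(concatStrings, '')); // 'Hello, World!'
console.log([[1, 2], [3], [4, 5]].reduce(concatArrays, []));      // [1, 2, 3, 4, 5]

// Folding an empty list returns the identity instead of throwing
console.log([].reduce(concatStrings, '')); // ''
```

Note that calling `reduce` on an empty array *without* an initial value throws a `TypeError`, which is exactly the gap the identity element closes.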
#### Applicatives
Applicatives are a type of functor that allow for function application lifted over a computational context. They provide a way to apply functions to values that are wrapped in a context, such as Maybe, Promise, or arrays.
##### Applicative Properties
1. **Identity**: \( A.of(x => x).ap(v) \equiv v \)
2. **Homomorphism**: \( A.of(f).ap(A.of(x)) \equiv A.of(f(x)) \)
3. **Interchange**: \( u.ap(A.of(y)) \equiv A.of(f => f(y)).ap(u) \)
##### Example: Applying Functions with Applicatives
```javascript
class Maybe {
constructor(value) {
this.value = value;
}
static of(value) {
return new Maybe(value);
}
map(fn) {
return this.value === null || this.value === undefined
? Maybe.of(null)
: Maybe.of(fn(this.value));
}
ap(maybe) {
return maybe.map(this.value);
}
}
const add = a => b => a + b;
const maybeAdd = Maybe.of(add);
const maybeTwo = Maybe.of(2);
const maybeThree = Maybe.of(3);
const result = maybeAdd.ap(maybeTwo).ap(maybeThree);
console.log(result); // Maybe { value: 5 }
```
In this example, the `ap` method is used to apply the function inside the `Maybe` context to the values inside other `Maybe` instances.
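A common convenience built on `ap` is a helper that lifts an ordinary curried two-argument function over two wrapped values. The `liftA2` name below is borrowed from Haskell and is an assumption, not part of the article's `Maybe` class (redefined here so the snippet is self-contained):

```javascript
class Maybe {
  constructor(value) { this.value = value; }
  static of(value) { return new Maybe(value); }
  map(fn) {
    return this.value === null || this.value === undefined
      ? Maybe.of(null)
      : Maybe.of(fn(this.value));
  }
  ap(maybe) { return maybe.map(this.value); }
}

// Lift a curried binary function over two applicative values
const liftA2 = (fn, a, b) => Maybe.of(fn).ap(a).ap(b);

const add = a => b => a + b;
console.log(liftA2(add, Maybe.of(2), Maybe.of(3))); // Maybe { value: 5 }
```

This hides the repetitive `of(...).ap(...).ap(...)` plumbing behind a single call.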
#### Lenses
Lenses are a functional programming technique for focusing on and manipulating parts of data structures. They provide a way to get and set values in immutable data structures.
##### Basic Lens Implementation
A lens is typically defined by two functions: a getter and a setter.
```javascript
const lens = (getter, setter) => ({
get: obj => getter(obj),
set: (val, obj) => setter(val, obj)
});
const prop = key => lens(
obj => obj[key],
(val, obj) => ({ ...obj, [key]: val })
);
const user = { name: 'Alice', age: 30 };
const nameLens = prop('name');
const userName = nameLens.get(user);
console.log(userName); // 'Alice'
const updatedUser = nameLens.set('Bob', user);
console.log(updatedUser); // { name: 'Bob', age: 30 }
```
In this example, `prop` creates a lens that focuses on a property of an object. The lens allows you to get and set the value of that property in an immutable way.
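Alongside `get` and `set`, lens libraries typically offer an `over` operation that modifies the focused value with a function instead of replacing it outright. A sketch built on the same `lens`/`prop` helpers, redefined so the snippet is self-contained (the `over` name follows common lens-library convention):

```javascript
const lens = (getter, setter) => ({
  get: obj => getter(obj),
  set: (val, obj) => setter(val, obj)
});

const prop = key => lens(
  obj => obj[key],
  (val, obj) => ({ ...obj, [key]: val })
);

// over: read through the lens, transform, and write back immutably
const over = (ln, fn, obj) => ln.set(fn(ln.get(obj)), obj);

const user = { name: 'Alice', age: 30 };
const ageLens = prop('age');

const older = over(ageLens, n => n + 1, user);
console.log(older); // { name: 'Alice', age: 31 }
console.log(user);  // { name: 'Alice', age: 30 } (original untouched)
```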
#### Combining Lenses
Lenses can be composed to focus on nested data structures.
```javascript
const addressLens = prop('address');
const cityLens = lens(
obj => obj.city,
(val, obj) => ({ ...obj, city: val })
);
const userAddressCityLens = {
get: obj => cityLens.get(addressLens.get(obj)),
set: (val, obj) => addressLens.set(cityLens.set(val, addressLens.get(obj)), obj)
};
const user = {
name: 'Alice',
address: {
city: 'Wonderland',
zip: '12345'
}
};
const userCity = userAddressCityLens.get(user);
console.log(userCity); // 'Wonderland'
const updatedUser = userAddressCityLens.set('Oz', user);
console.log(updatedUser); // { name: 'Alice', address: { city: 'Oz', zip: '12345' } }
```
By composing lenses, you can focus on and manipulate nested properties in complex data structures.
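The hand-written `userAddressCityLens` above can be produced by a generic composition helper, so any two lenses combine the same way. A sketch follows; the `composeLens` name is an invented convention for this example, not a standard API:

```javascript
const lens = (getter, setter) => ({
  get: obj => getter(obj),
  set: (val, obj) => setter(val, obj)
});

const prop = key => lens(
  obj => obj[key],
  (val, obj) => ({ ...obj, [key]: val })
);

// Compose an outer lens with an inner lens into one lens over the nested focus
const composeLens = (outer, inner) => lens(
  obj => inner.get(outer.get(obj)),
  (val, obj) => outer.set(inner.set(val, outer.get(obj)), obj)
);

const userCityLens = composeLens(prop('address'), prop('city'));

const user = { name: 'Alice', address: { city: 'Wonderland', zip: '12345' } };
console.log(userCityLens.get(user));       // 'Wonderland'
console.log(userCityLens.set('Oz', user)); // { name: 'Alice', address: { city: 'Oz', zip: '12345' } }
```

Because composition returns another lens, deeper paths can be built by chaining `composeLens` calls.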
Monoids, applicatives, and lenses are advanced functional programming patterns that enable you to write more expressive and maintainable JavaScript code. Monoids provide a way to combine values in a structured manner, applicatives allow for function application within a context, and lenses offer a powerful way to access and update immutable data structures.
By incorporating these patterns into your programming toolkit, you can handle complex data transformations, manage side effects, and maintain immutability in your applications. | francescoagati |
1,912,105 | Building your first ROSA🌹 with Red Hat and AWS | “When life throws thorns, hunt for roses.” – Anonymous When market trends, pessimistic forecasts,... | 0 | 2024-07-11T04:55:43 | https://dev.to/aws-builders/building-your-first-rosa-with-red-hat-and-aws-3jjd | redhat, openshift, aws, 5g | “When life throws thorns, hunt for roses.” – Anonymous
When market trends, pessimistic forecasts, and global economics throw companies and us developers thorns (on many levels), hunt for a ROSA ("rosa" being the Spanish word for "rose"). Jokes aside, in this ever-changing market, choosing the most suitable solution can make the difference between a successful business and a "what if we had done this differently". In this blog entry, I introduce a solution that combines the best of both worlds, pairing a top-notch cloud services provider (and market leader) with the most complete container management platform offered by Red Hat: Red Hat OpenShift enterprise Kubernetes platform on AWS.
## Table of Contents
1. [Definition](#1-definition)
2. [Architecture](#2-architecture)
3. [Pre-requisites to create ROSA](#3-pre-requisites-to-create-rosa)
4. [ROSA cluster implementation](#4-rosa-cluster-implementation)
5. [Configure Developer Self-service](#5-configure-developer-self-service)
6. [ROSA Storage](#6-rosa-storage)
7. [ROSA Networking](#7-rosa-networking)
8. [ROSA Logging](#8-rosa-logging)
## 1. Definition
ROSA (Red Hat OpenShift Service on AWS) is a platform service offered by Red Hat as part of the Red Hat Cloud Services portfolio.
Why is it important? It helps companies spend more time building and deploying applications and less time managing infrastructure.
## 2. Architecture
The architecture of ROSA consists of several key components:
1. **Control Plane**: Managed by Red Hat, it includes the OpenShift API server, controller manager, scheduler, etcd, and other core services.
2. **Worker Nodes**: Deployed in your AWS account, running your containerized applications (Compute and Storage Volumes).
3. **Infrastructure Nodes**: Nodes where OpenShift components such as the ingress controller, image registry, and monitoring are deployed.
4. **Networking**: Utilizes AWS VPCs, subnets, security groups, and other networking services to manage communication.
5. **Storage**: Integrates with AWS storage services like EBS and S3 for persistent and object storage.
6. **Identity and Access Management**: Uses AWS IAM for permissions and OpenShift RBAC for fine-grained access control.
Red Hat OpenShift Service on AWS (ROSA) offers two cluster topologies:
1. **Hosted Control Plane (HCP)**: In this topology, the control plane is managed and hosted in a Red Hat account, while the worker nodes are deployed within the customer's AWS account.
2. **Classic**: Both the control plane and the worker nodes are deployed within the customer's AWS account.
In the chapters below, I will explain the Classic architecture, leaving the HCP topology for future discussions. Find below the AWS topology for the Classic architecture. Reference: [ROSA Architecture](https://docs.openshift.com/rosa/architecture/rosa-architecture-models.html#rosa-classic-architecture_rosa-architecture-models)

## 3. Pre-requisites to create ROSA
Before you can create your first ROSA cluster, ensure the following:
- **AWS Account and IAM User**:
- You need an AWS account with an IAM user.
- Since you subscribe to ROSA through the AWS Marketplace, your IAM user must have AWS Marketplace permissions. If you lack these permissions, contact your AWS account administrator to grant you access.
- For more details on troubleshooting ROSA enablement errors, review the documentation in the reference section.
- **AWS Service Quotas**:
- Your AWS account must have sufficient AWS service quotas to create ROSA clusters.
- Use the `rosa` command to verify these quotas.
- Review the documentation in the reference section for a list of required quotas.
- **Red Hat Account**:
- You need a Red Hat account to access the Hybrid Cloud Console.
- The cluster creation process links your Red Hat account with your AWS account, allowing you to manage your ROSA clusters from the OpenShift Cluster Manager web interface.
### How to Add OpenShift to Your AWS Account
Subscribing to ROSA through the AWS Marketplace is straightforward. Follow these steps to enable ROSA in your AWS account:
1. **Log in to the AWS Management Console**:
- Visit [AWS Management Console](https://console.aws.amazon.com/).
2. **Navigate to the ROSA Service**:
- Go to **Services** > **Containers** > **Red Hat OpenShift Service on AWS**.
3. **Get Started with ROSA**:
- Click **Get started** to reach the Verify ROSA prerequisites page.
4. **Check Your Subscription Status**:
- If you see the "You previously enabled ROSA" checkmark, you are already subscribed.
5. **Enable ROSA (if not already subscribed)**:
- Select **I agree to share my contact information with Red Hat**.
- Click **Enable ROSA**.
After following these steps, this should be the final result:

### Install and Configure CLI
- Install the aws command on your system. The tool is available at https://aws.amazon.com/cli/.
- Run the aws configure command to provide your IAM user credentials and to select your AWS Region.
```
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: <Enter>
```
- Download and install the ROSA CLI from [Red Hat OpenShift Downloads](https://console.redhat.com/openshift/downloads).
- Execute the `rosa login` command to log in to your Red Hat account. This command will prompt you to generate an access token.
```
$ rosa login
To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa
? Copy the token and paste it here:
```
## 4. ROSA cluster implementation
The following steps explain how to install a ROSA cluster using the CLI. Installation through the UI is also available but is not discussed in this blog.
### Create Account Roles
To create ROSA clusters, you must first set up specific IAM roles and policies in your AWS account. These roles grant the necessary permissions for the ROSA cluster creation process to create AWS resources, such as EC2 instances.
Steps:
1. Log in to your AWS and Red Hat accounts using `aws configure` and `rosa login` commands.
2. Run `rosa create account-roles` to create the IAM resources.
- Use `--mode auto` to automate role and policy creation via the AWS API.
- Add `--yes` to skip confirmation prompts.
Example:
```
$ rosa create account-roles --mode auto --yes
...output omitted...
I: Creating account roles
I: Creating roles using 'arn:aws:iam::...:user/mgonzalez@example.com-fqppg-admin'
I: Created role 'ManagedOpenShift-Installer-Role' ...
I: Created role 'ManagedOpenShift-ControlPlane-Role' ...
I: Created role 'ManagedOpenShift-Worker-Role' ...
I: Created role 'ManagedOpenShift-Support-Role' ...
I: To create a cluster with these roles, run the following command:
rosa create cluster --sts
```
### Create a ROSA Cluster
Once your cloud environment is prepared, you can create a ROSA cluster.
To do this, open a command-line terminal and run `rosa create cluster`. This command starts the cluster creation process and exits immediately, allowing the installation to proceed unattended on AWS.
By default, `rosa create cluster` runs in interactive mode. You only need to specify the cluster name and can accept the default values suggested for other parameters.
```
$ rosa create cluster
I: Enabling interactive mode
? Cluster name: openshiftmarco
? Deploy cluster using AWS STS: Yes
W: In a future release STS will be the default mode.
W: --sts flag won't be necessary if you wish to use STS.
W: --non-sts/--mint-mode flag will be necessary if you do not wish to use STS.
? OpenShift version: 4.12.14
I: Using arn:...:role/ManagedOpenShift-Installer-Role for the Installer role
I: Using arn:...:role/ManagedOpenShift-ControlPlane-Role for the ControlPlane role
I: Using arn:...:role/ManagedOpenShift-Worker-Role for the Worker role
I: Using arn:...:role/ManagedOpenShift-Support-Role for the Support role
? External ID (optional): <Enter>
? Operator roles prefix: openshiftmarco-p5k3 1
? Multiple availability zones (optional): No
? AWS region: us-east-1
? PrivateLink cluster (optional): No
...output omitted...
I: Creating cluster 'openshiftmarco'
I: To create this cluster again in the future, you can run: 2
rosa create cluster --cluster-name openshiftmarco --sts --role-arn arn:aws:iam::452954386616:role/ManagedOpenShift-Installer-Role --support-role-arn arn:aws:iam::452954386616:role/ManagedOpenShift-Support-Role --controlplane-iam-role arn:aws:iam::452954386616:role/ManagedOpenShift-ControlPlane-Role --worker-iam-role arn:aws:iam::452954386616:role/ManagedOpenShift-Worker-Role --operator-roles-prefix openshiftmarco-p5k3 --region us-east-1 --version 4.12.14 --compute-nodes 2 --compute-machine-type m5.xlarge --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 --host-prefix 23
I: To view a list of clusters and their status, run 'rosa list clusters'
I: Cluster 'openshiftmarco' has been created.
I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information.
...output omitted...
I: Run the following commands to continue the cluster creation: 3
rosa create operator-roles --cluster openshiftmarco
rosa create oidc-provider --cluster openshiftmarco
I: To determine when your cluster is Ready, run 'rosa describe cluster -c openshiftmarco'.
I: To watch your cluster installation logs, run 'rosa logs install -c openshiftmarco --watch'.
```
A simpler, more direct way to deploy a specific Red Hat OpenShift cluster, defining the above items plus the EC2 instance size, would be:
`rosa create cluster --cluster-name openshiftmarco --region us-east-1 --multi-az=false --compute-machine-type m5.2xlarge --replicas 2 --sts --mode auto`
### Monitor ROSA Cluster Creation Process
The `rosa describe cluster --cluster [cluster_name]` command shows the deployment status.
```
$ rosa describe cluster --cluster mycluster
...output omitted...
State: installing
...output omitted...
```
```
$ rosa describe cluster --cluster mycluster
...output omitted...
State: ready
...output omitted...
```
### Describe ROSA Cluster
Use `rosa describe cluster -c [cluster_name]` to display the cluster information.
```
>rosa describe cluster -c openshiftmarco
WARN: The current version (1.2.39) is not up to date with latest released version (1.2.40).
WARN: It is recommended that you update to the latest version.
Name: openshiftmarco
Domain Prefix: openshiftmarco
Display Name: openshiftmarco
ID: 2bqn7jb8ts39iushkqantla77o3ic1sl
External ID:
Control Plane: Customer Hosted
OpenShift Version:
Channel Group: stable
DNS: Not ready
AWS Account: 615956341945
API URL:
Console URL:
Region: us-east-1
Multi-AZ: false
Nodes:
- Control plane: 3
- Infra: 2
- Compute: 2
Network:
- Type: OVNKubernetes
- Service CIDR: 172.30.0.0/16
- Machine CIDR: 10.0.0.0/16
- Pod CIDR: 10.128.0.0/14
- Host Prefix: /23
EC2 Metadata Http Tokens: optional
Role (STS) ARN: arn:aws:iam::615956341945:role/ManagedOpenShift-Installer-Role
Support Role ARN: arn:aws:iam::615956341945:role/ManagedOpenShift-Support-Role
Instance IAM Roles:
- Control plane: arn:aws:iam::615956341945:role/ManagedOpenShift-ControlPlane-Role
- Worker: arn:aws:iam::615956341945:role/ManagedOpenShift-Worker-Role
Operator IAM Roles:
- arn:aws:iam::615956341945:role/openshiftmarco-t2j5-openshift-cloud-network-config-controller-cl
- arn:aws:iam::615956341945:role/openshiftmarco-t2j5-openshift-machine-api-aws-cloud-credentials
- arn:aws:iam::615956341945:role/openshiftmarco-t2j5-openshift-cloud-credential-operator-cloud-cr
- arn:aws:iam::615956341945:role/openshiftmarco-t2j5-openshift-image-registry-installer-cloud-cre
- arn:aws:iam::615956341945:role/openshiftmarco-t2j5-openshift-ingress-operator-cloud-credentials
- arn:aws:iam::615956341945:role/openshiftmarco-t2j5-openshift-cluster-csi-drivers-ebs-cloud-cred
Managed Policies: No
State: waiting (OIDC Provider not found: operation error STS: AssumeRoleWithWebIdentity, https response error StatusCode: 400, RequestID: 0956a1b9-92dd-4270-b654-4143dc650624, InvalidIdentityToken: No OpenIDConnect provider found in your account for https://oidc.op1.openshiftapps.com/2bqn7jb8ts39iushkqantla77o3ic1sl)
Private: No
Delete Protection: Disabled
Created: Jun 11 2024 03:17:42 UTC
User Workload Monitoring: Enabled
Details Page: https://[URL]
OIDC Endpoint URL: https://[URL] (Classic)
```
```
>rosa create admin --cluster=openshiftmarco
WARN: The current version (1.2.39) is not up to date with latest released version (1.2.40).
WARN: It is recommended that you update to the latest version.
INFO: Admin account has been added to cluster 'openshiftmarco'.
INFO: Please securely store this generated password. If you lose this password you can delete and recreate the cluster admin user.
INFO: To login, run the following command:
oc login https://api.openshiftmarco.b3b3.p1.openshiftapps.com:6443 --username cluster-admin --password 3HgZ3-wN495-RLc3v-7sLaU
INFO: It may take several minutes for this access to become active.
```
There you go! You have your brand new Red Hat Openshift cluster available.
Let's check the AWS resources being created:
AWS EC2

AWS Route53


AWS Load Balancer


AWS EIP

#### Delete ROSA Cluster
Deleting a ROSA cluster is even easier than creating one. Follow these simple steps:
1) Log in to the Red Hat Hybrid Cloud Console and select your cluster. Then select the option "Delete cluster".

2) Confirm the delete request by entering the cluster name

3) Confirm Resources are deleted from Red Hat Openshift Console


## 5. Configure Developer Self-service
Loading...
Please wait while I update data ⏳
## 6. ROSA Storage
Loading...
Please wait while I update data ⏳
## 7. ROSA Networking
Loading...
Please wait while I update this data ⏳
## 8. ROSA Logging
Loading...
Please wait while I update data ⏳
| mgonzalezo |
1,912,179 | Day 1 of 50 days DevOps Tools Series: Importance of Networking in DevOps | ** The Importance of Networking in DevOps ** Introduction Networking is the backbone of... | 0 | 2024-07-08T03:41:23 | https://dev.to/shivam_agnihotri/day-1-of-50-days-devops-tools-series-importance-of-networking-in-devops-edd | devops, network, networking, development | ## The Importance of Networking in DevOps
**Introduction**
Networking is the backbone of modern IT infrastructure, enabling communication between systems, applications, and users. In DevOps, networking plays a pivotal role in ensuring that software development, deployment, and operations are seamless and efficient. This blog will delve into the importance of networking in DevOps, highlighting key networking tools and their commands, and explaining how they empower DevOps engineers to achieve their goals.
**Why Networking is Crucial in DevOps**
**Continuous Integration and Continuous Deployment (CI/CD):** Networking facilitates the seamless integration and deployment of code across various environments. Reliable network connectivity ensures that code changes are propagated quickly and efficiently from development to production.
**Infrastructure as Code (IaC):** Networking is integral to IaC, allowing DevOps teams to define and manage network configurations programmatically. This approach enhances consistency, reduces errors, and speeds up deployment processes.
**Monitoring and Logging:** Effective networking ensures that monitoring tools can collect data from various parts of the infrastructure. This data is crucial for identifying issues, optimizing performance, and ensuring system reliability.
**Security:** Networking tools help in implementing security measures such as firewalls, VPNs, and intrusion detection systems. These tools protect the infrastructure from unauthorized access and cyber threats.
**Scalability:** Networking is key to scaling applications and services. It enables load balancing, traffic routing, and the efficient distribution of resources, ensuring that applications can handle varying loads and user demands.
**Essential Networking Tools for DevOps Engineers**
Wireshark
Nmap
cURL
Netcat
Telnet
Ping
Firewall Management
**1. Wireshark**
Wireshark is a network protocol analyzer that captures and displays data packets flowing through a network. It helps in troubleshooting network issues, analyzing performance, and ensuring security.
Key Commands:
```
wireshark #Launches the Wireshark GUI.
tshark -i <interface> #Captures packets on a specified interface using the terminal.
```
**Importance for DevOps:**
Wireshark helps DevOps engineers monitor network traffic, identify bottlenecks, and diagnose connectivity issues, ensuring smooth CI/CD processes and reliable application performance.
**2. Nmap**
Nmap (Network Mapper) is a powerful tool for network discovery and security auditing. It scans networks to discover hosts and services and identifies potential security vulnerabilities.
Key Commands:
```
nmap <target> #Scans the target IP or hostname.
nmap -sP <network> #Performs a ping scan on a specified network.
```
**Importance for DevOps:**
Nmap allows DevOps engineers to map network topologies, identify open ports, and assess security risks, helping maintain secure and well-organized infrastructure.
**3. cURL**
cURL (Client URL) is a command-line tool for transferring data with URLs. It supports various protocols such as HTTP, HTTPS, FTP, and more.
Key Commands:
```
curl <URL> #Fetches the content of the specified URL.
curl -X POST -d @data.json <URL> #Sends a POST request with data from a JSON file.
```
**Importance for DevOps:**
cURL is essential for testing API endpoints, automating HTTP requests, and verifying the availability and performance of web services.
**4. Netcat**
Netcat is a versatile networking tool for reading from and writing to network connections using TCP or UDP. It can be used for port scanning, data transfer, and debugging.
Key Commands:
```
nc -zv <host> <port> #Checks the availability of a port on a specified host.
nc -l <port> #Listens for incoming connections on a specified port.
```
**Importance for DevOps:**
Netcat helps DevOps engineers troubleshoot network connectivity, test port availability, and perform data transfers, enhancing their ability to manage and debug networked applications.
**5. Telnet**
Telnet is a network protocol used to provide a command-line interface for communication with a remote device or server. Though less secure than modern alternatives, it's useful for debugging and testing.
Key Commands:
```
telnet <host> <port> #Connects to a specified host and port.
Ctrl + ] #Exits the Telnet session.
```
**Importance for DevOps:**
Telnet enables DevOps engineers to test connectivity and communicate with remote servers directly, which is useful for troubleshooting network services and verifying port configurations.
**6. Ping**
Ping is a simple command-line utility used to test the reachability of a host on an IP network and measure the round-trip time for messages sent to the destination.
Key Commands:
```
ping <hostname> #Sends ICMP echo requests to the specified hostname or IP address.
ping -c <count> <hostname> #Sends a specified number of ping requests.
```
**Importance for DevOps:**
Ping helps DevOps engineers quickly check network connectivity, diagnose network issues, and measure latency between different network nodes.
**7. Firewall Management**
Firewalls are security devices that monitor and control incoming and outgoing network traffic based on predetermined security rules. Proper firewall management is crucial for protecting infrastructure from unauthorized access and attacks.
Key Commands:
```
sudo ufw status #Checks the status of the firewall (for UFW).
sudo ufw allow <port> #Allows traffic on a specified port (for UFW).
sudo iptables -L #Lists current iptables rules.
```
**Importance for DevOps:**
Firewall management ensures that only authorized traffic can access network resources, protecting against security threats and maintaining a secure and compliant infrastructure.
**Conclusion**
Networking is a foundational element of DevOps, enabling seamless communication, efficient deployment, and robust monitoring. Tools like Wireshark, Nmap, cURL, Netcat, Telnet, Ping, and effective firewall management equip DevOps engineers with the capabilities to manage and optimize their network infrastructure effectively. By mastering these tools and understanding their commands, DevOps professionals can ensure the reliability, security, and scalability of their applications and services.
📣 Subscribe to our blog to get notifications on upcoming posts of this series. | shivam_agnihotri |
1,912,285 | How to Lock Your Screen in Windows 11 (If Disabled)? | Key Points to Lock Your Screen In Windows 11 On your keyboard, use the shortcut Winkey +... | 0 | 2024-07-11T16:20:43 | https://winsides.com/4-easy-ways-to-lock-your-screen-in-windows-11/ | tutorials, beginners, windows11, lockscreen | ---
title: How to Lock Your Screen in Windows 11 (If Disabled)?
published: true
date: 2024-07-05 04:59:27 UTC
tags: tutorials,Beginners,Windows11,LockScreen
canonical_url: https://winsides.com/4-easy-ways-to-lock-your-screen-in-windows-11/
cover_image: https://winsides.com/wp-content/uploads/2024/07/How-to-Lock-Screen-in-Windows-11.png
---
> ## Key Points to Lock Your Screen In Windows 11
>
> - On your keyboard, use the shortcut `Winkey + L`
> - It will instantly lock your screen & take you out to the login screen.
## Method 1: Using Start Menu:
This is one of the **old, unique, and straightforward** methods to lock your screen on your Windows 11 device. However, this method is suitable for those who prefer using the **GUI** and a **mouse**.
- **Step 1:** Click the **Start** button located at the bottom-left corner of your screen.

_Clicking the Start Menu Button_
- **Step 2:** Now you need to click on your **profile icon** at the top of the Start menu.
- **Step 3:** From the dropdown menu, select **Lock**. It will instantly lock your screen.

_Lock your Screen using Start Menu_
This is a fast and simple method to lock your screen. You can secure your device in just a few clicks.
## Method 2: Using CTRL + ALT + Del:
This method is one of the classic ways to lock your screen. Those who are familiar with **Windows XP, Vista, 7, 8, and 10** definitely know this method to lock the screen.
- **Step 1:** You need to Press **`Ctrl + Alt + Delete`** keys on your keyboard simultaneously.
- **Step 2:** From the screen that appears, click on **Lock**. It will bring you to the lock screen.

_Lock your Screen using CTRL + ALT + Del_
## Method 3: Using Power User Menu:
- **Step 1:** First, **right-click** the **Start** button or simply use the keyboard shortcut **`Winkey + X`**.
- **Step 2:** From the list of options, hover over **Shut down or sign out**.
- **Step 3:** Now simply click the **Lock** option to lock your screen instantly.

_Lock Screen Preview_
## Method 4: Using Run Command:
This is a unique method to lock your screen using the Run dialog box. It is mostly suitable for tech-savvy people.
- Firstly, you need to open Run window using the keyboard shortcut **`Winkey + R`**
- Now enter the following command `rundll32.exe user32.dll,LockWorkStation`
- Hit the **Enter** button or click the **OK** button to lock your screen.

_Using RUN Window to Lock your Screen_
## Method 5: Using Screensaver (Automated Lock Screen):
This unique method automates the lock process. All you have to do is set a screensaver with a timeout of a minute or longer.
- **Step 1:** First, right-click on the desktop and choose **Personalize**.

_Choose Personalize_
- **Step 2:** Now, Go to **Lock screen** settings.

_Choose Lock Screen option_
- **Step 3:** Scroll down and click on **Screen saver settings**.

_Choose ScreenSaver option_
- **Step 4:** Once you click the option, the **Screensaver Settings window** will be displayed on your screen.
- **Step 5:** Here, I choose the **3D Text** option as a screensaver and set the time to **1 minute**. Check the option “ **On resume, display logon screen** ” which will ask you to re-enter the login password to sign in.
- **Step** 6: Once you have done all the settings, click the **Apply** and **OK** buttons.

_Setting Screensaver to Lock your Screen_
- After a minute, your computer will launch the screensaver and lock your screen automatically, as shown in the image below.

_ScreenSaver Preview_

_Logon Screen Preview_
## Take Away:
Locking my screen in Windows 11 is **essential for maintaining security and privacy** , especially when I need to step away from my computer. I’ve found that there are multiple ways to do this, each offering convenience and reliability. Whether I use the Start menu, the classic **Ctrl + Alt + Delete** method, or even set up an automatic lock through screen saver settings, I have several options to choose from. By using these methods, I can ensure that my personal information and work remain secure. Find more informative tutorials on our Blog [WinSides.com](https://winsides.com/) | vigneshwaran_vijayakumar |
1,912,330 | Top 5 AI-Powered VSCode Extensions Every Developer Needs | In the ever-evolving world of development, finding tools that streamline your workflow is essential.... | 0 | 2024-07-09T12:46:20 | https://dev.to/enodi/top-5-ai-powered-vscode-extensions-every-developer-needs-59cf | ai, productivity, programming, vscode | In the ever-evolving world of development, finding tools that streamline your workflow is essential. AI is transforming how we code, enhancing productivity and efficiency. In this article, we'll dive into 5 must-have AI-powered VSCode extensions that can help you write better code faster.
### 1. GitHub Copilot

If you're a developer, you've likely heard of [GitHub Copilot](https://github.com/features/copilot). This AI-powered code completion tool from GitHub acts as an AI pair programmer, providing smart auto-complete suggestions to boost your productivity. Simply describe what you need in plain English, and Copilot will generate the code for you.
While it’s a paid service, GitHub offers a 30-day free trial to test its capabilities. Some features of GitHub Copilot include:
a. **Code completions:** Copilot provides autocomplete-style suggestions as you code, offering relevant suggestions as you type.
b. **Chat:** A chat interface for asking questions, useful for troubleshooting bugs or building new features.
c. **Participants:** The ability to tag domain experts in a chat, ask questions, and get responses.
d. **Slash commands:** Shortcuts to specific functionality via slash commands.
e. **Language support:** Compatible with languages like Java, PHP, Python, JavaScript, Ruby, Go, C#, C++, and more.
As of June 2024, GitHub Copilot has [1.3 million paid subscribers](https://www.ciodive.com/news/github-copilot-subscriber-count-revenue-growth/706201/#:~:text=The%20AI%2Dpowered%20coding%20assistant,to%2050%2C000%20developers%20this%20year).
### 2. Tabnine

[Tabnine](https://www.tabnine.com) is another powerful AI code assistant that provides AI chat and code completion for various languages such as JavaScript, Python, TypeScript, PHP, Go, and more.
Tabnine is also a paid product but offers a generous 90-day free trial. Some features of Tabnine include:
a. **Code Completion:** Speeds up coding by suggesting the next line or block of code, making your workflow smoother and more efficient.
b. **Write Test Cases:** Ask Tabnine to create tests for a specific function or piece of code in your project. It can generate actual test cases, implementations, and assertions. Tabnine can also use existing tests in your project and suggest new tests that align with your testing framework.
c. **Write Documentation:** Generate documentation for specific sections of your code to enhance readability and make it easier for other team members to understand.
d. **Chat:** Allows developers to generate code by asking questions in natural language, making it easy to get help and suggestions.
e. **Custom Model:** Ability to choose your own model tailored to your specific needs and preferences. Some models available to choose from include the Tabnine model, Codestral, GPT-4o, Claude 3.5 Sonnet, etc.
f. **Language/Library Support:** Tabnine provides extensive support for languages and libraries, including JavaScript, TypeScript, Python, Java, C, C++, C#, Go, PHP, Ruby, Kotlin, Dart, Rust, React/Vue, HTML 5, CSS, Lua, Perl, YAML, Cuda, SQL, Scala, Shell (bash), Swift, R, Julia, VB, Groovy, Matlab, Terraform, ABAP, and more.
g. **Supported IDEs:** Tabnine works with various IDEs such as VS Code, JetBrains IDEs (IntelliJ, PyCharm, WebStorm, PhpStorm, Android Studio, GoLand, CLion, Rider, DataGrip, RustRover, RubyMine, DataSpell, Aqua, AppCode), Eclipse, Visual Studio 2022, and others.
Tabnine has over 7 million downloads.
### 3. Mintlify Doc Writer

[Mintlify Doc Writer](https://www.mintlify.com/) is a tool designed to simplify the process of writing documentation. It uses AI to generate clear, concise documentation directly from your code, saving you time and effort.
Mintlify Doc Writer is available as a paid service, but it offers a free trial to get you started. Key features include:
a. **Automated Documentation:** Generates documentation for your functions, classes, and modules by analyzing your code, ensuring that your documentation is always up-to-date.
b. **Code Comments:** Converts your inline code comments into well-structured documentation, making it easier for others to understand your code.
c. **Customization:** Allows you to customize the generated documentation to match your project's style and requirements.
d. **Integrations:** Works seamlessly with popular IDEs and code editors, allowing you to generate documentation directly within your development environment.
e. **Multi-language Support:** Supports a variety of programming languages, including JavaScript, Python, Java, C#, and more.
Mintlify Doc Writer helps you maintain high-quality documentation with minimal effort, improving code readability and team collaboration.
### 4. Red Hat Dependency Analytics
[Red Hat Dependency Analytics](https://marketplace.visualstudio.com/items?itemName=redhat.fabric8-analytics) is an AI-powered extension that helps developers manage dependencies and security vulnerabilities in their projects. It provides detailed insights and recommendations to ensure your code is secure and up-to-date.
Red Hat Dependency Analytics offers a range of features, including:
a. **Vulnerability Detection:** Automatically scans your project for known vulnerabilities in dependencies and provides suggestions for safer alternatives.
b. **Dependency Management:** Analyzes your project's dependencies and suggests updates to keep your software secure and compliant.
c. **Actionable Insights:** Offers detailed reports on the security and health of your dependencies, helping you make informed decisions.
d. **Integration:** Seamlessly integrates with popular IDEs like VS Code, allowing you to manage dependencies directly within your development environment.
e. **Multi-language Support:** Supports various programming languages and ecosystems, including JavaScript, Python, Java, and more.
Red Hat Dependency Analytics helps you maintain a secure and reliable codebase by providing critical insights into your project's dependencies.
### 5. GPT Commit
[GPT Commit](https://marketplace.visualstudio.com/items?itemName=DmytroBaida.gpt-commit) is an AI tool that assists developers in writing better commit messages. It leverages machine learning to analyze your code changes and suggest clear, concise, and informative commit messages.
GPT Commit is available as a paid service with a free trial option. Key features include:
a. **Commit Message Suggestions:** Automatically generates commit messages based on your code changes, ensuring that your commit history is clear and informative.
b. **Consistency:** Helps maintain consistency in your commit messages, making your project's history easier to understand.
c. **Customization:** Allows you to customize the format and style of the generated commit messages to match your project's conventions.
d. **Integration:** Works with popular version control systems like Git, integrating seamlessly into your existing workflow.
e. **Language Support:** Supports commit message generation for projects written in various programming languages, including JavaScript, Python, Java, C#, and more.
GPT Commit enhances your version control practices by ensuring that your commit messages are always well-written and informative, improving the overall quality of your project's history.
These tools showcase the power of AI in enhancing productivity and efficiency in software development. By integrating these AI-powered extensions into your workflow, you can streamline your coding process and produce high-quality code more efficiently.
I hope you've enjoyed this article. Comment below with any other tools you’ve found helpful. Happy coding :)
| enodi |
1,912,544 | Give your MacBook a new developer life with ChromeOS Flex & CDEs | In 2007, during the last days of my studies (wow, that was a long time ago 😅), I bought my first... | 0 | 2024-07-11T08:46:53 | https://dev.to/zenika/give-your-macbook-a-new-developer-life-with-chromeos-flex-cdes-2kle | cloud, cde, development | In 2007, during the last days of my studies (wow, that was a long time ago 😅), I bought my first MacBook. No more gaming PCs, this MacBook was going to be my work computer. After using it to develop and experiment different technologies, tools, and frameworks, I was forced to buy a new one, more powerful, to continue my development and photo and video editing activities.
I kept my first MacBook for smaller and less demanding activities until 2020, but I used it less and less as it became obsolete. After a while, even browsers like Chrome could no longer be updated, reducing my usage more and more until it ended up in a corner of my house.
## Switch OS?
To use my MacBook a little longer, it was possible to install another OS, smaller than macOS, like Linux. In my company, I was working on Ubuntu and it was a very good option. I didn’t take the time to install this distribution on my old MacBook for personal reasons and priorities.
## ChromeOS Flex installation
A couple of months ago, I decided to take my old MacBook out of the closet and look for options other than Linux. I found Chrome OS. I had already heard about it, and as I work with Google solutions a lot, I decided to try it. And spoiler alert: I'm thinking of renewing my laptop for development, and Google products are enough for me.
Installing Chrome OS Flex is easy and quick.
You just need a computer to download the iso image and build it on a USB key.
To do this, no matter what your OS is, you can configure your USB key from the command line or through an application.
More information here: [https://support.google.com/chromeosflex/answer/11541904?hl=en](https://support.google.com/chromeosflex/answer/11541904?hl=en)
In my case, I had created a USB key with this documentation but I decided to use an "official" key that belonged to [Julien Landuré](https://twitter.com/jlandure).


After booting this USB Key on my computer, which takes a few minutes, I have a boot screen where my Google account is requested:

After answering a few questions, the computer is ready to be used. A bar like a macOS dock is available with several Google applications like Chrome, YouTube, Google Agenda, etc.

The "marketplace" offers a list of applications adapted to Chrome OS. Even though the basic idea was to install as few tools as possible, there are many applications covering needs you might otherwise fill on other operating systems. Really interesting!

## Developing with CDEs
Now that my old computer is back up and running, what do I need to do to turn it into a developer's laptop?
Installing an IDE could be an option. For over 3 years, I've been using CDEs for personal, open-source and Zenika projects, and I'm not going to reset this old MacBook to install tools on it. So I'm taking my browser and picking up my two current favorite tools: Gitpod and IDX.
With the Gitpod Chrome extension, the “Open with Gitpod” button present on GitHub and GitLab is very useful to open a workspace.
Since Google Next 24, Google’s CDE, IDX, is now available for everyone and offers interesting features. I will speak about IDX in a future blog post.
CDEs share a common goal: to delocalize their configuration to the cloud. This approach offers several benefits, including faster laptop setup when someone comes into a new company or a project, and ultimately, longer computer lifetime.
Let’s start testing these tools on this "new" computer. Whether with IDX or Gitpod, the response times are very good and the navigation and application preview works just as well as on my M1 MacBook Pro.

_Screeenshot of IDX on a Zenika project available on GitHub_

_Screenshot of Gitpod and my personal blog hosted on GitLab_
After several features or fixes made with these CDEs, the only things that bother me are related to my MacBook Pro: I miss the keyboard backlight and the Mac shortcuts. But that's not related to IDX or Gitpod at all 😀
With Chrome OS Flex and CDEs as IDX, and Gitpod, my 2017 MacBook is running like new again! 🤘
This [recent blog post](https://www.zdnet.com/article/google-might-abandon-chromeos-flex-next-heres-why/) mentions the possibility that ChromeOS Flex will be discontinued...
| jphi_baconnais |
1,912,637 | The Ultimate Guide to the OTTO API for Developers | APIs are very important in eCommerce for enabling various functions and integrations. eCommerce... | 0 | 2024-07-09T17:22:22 | https://dev.to/api2cartofficial/the-ultimate-guide-to-the-otto-api-for-developers-31gl | ottoapi, api, developer, webdev | [APIs](https://www.ibm.com/topics/api) are very important in eCommerce for enabling various functions and integrations. eCommerce businesses can use APIs to integrate many elements of their operations, such as payment processing, order fulfillment, inventory management, and customer relationship management. One of the popular APIs in eCommerce is the OTTO API.
In this article, we will explain the OTTO API's meaning, you will explore its main features and essential steps to get started. You will also discover the possibilities of integrating the OTTO API with a third-party solution with the help of API2Cart.
## What is OTTO API?
Otto API is a powerful tool that allows developers to easily integrate their applications with OTTO, a popular online marketplace. With OTTO API, developers can leverage OTTO's capabilities and access e-store data.
The OTTO API allows businesses to integrate these services directly into their systems, providing a seamless customer experience.
The OTTO API supports various HTTP methods, such as GET, POST, PUT, and DELETE, depending on the action you want to perform. For example, if you're going to retrieve a list of all product listings, you would make a GET request to the "products" endpoint. If you want to create a new product listing, you will make a POST request to the same endpoint but with the necessary data in the request body.
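As a sketch of that request pattern, the snippet below assembles authenticated GET and POST requests in Python. Note that the base URL, endpoint path, and payload fields here are illustrative placeholders, not OTTO's documented values; consult the official API portal for the real ones.

```python
import json
from urllib import request

# Placeholder host: OTTO's real base URL and endpoint paths are documented
# on the official API portal and are NOT reproduced here.
BASE_URL = "https://api.example.com/v1"

def build_request(method: str, endpoint: str, token: str, payload=None):
    """Assemble an authenticated HTTP request object without sending it."""
    data = json.dumps(payload).encode("utf-8") if payload is not None else None
    return request.Request(
        url=f"{BASE_URL}/{endpoint}",
        method=method,
        data=data,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# GET: retrieve product listings
list_req = build_request("GET", "products", token="demo-token")

# POST: create a new listing; the request body carries the product data
create_req = build_request(
    "POST", "products", token="demo-token",
    payload={"sku": "ABC-123", "title": "Example product"},
)

print(list_req.get_method(), list_req.full_url)
```

The same `build_request` helper covers PUT and DELETE as well, since only the `method` argument changes.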
## The Key Features of OTTO API
[OTTO API](https://api.otto.de/portal/) provides many features and functionalities that enable developers to create highly interactive and personalized user experiences.
The OTTO Marketplace API caters to developers looking to integrate their software or apps with OTTO, a prominent European eCommerce marketplace. Here's a breakdown of its core functionalities:
**• Product Management:**
o Listing and updating product information on OTTO Marketplace.
o Uploading product descriptions and images.
o Managing product availability and stock levels.
**• Order Processing:**
o Receiving and managing orders placed through OTTO Marketplace.
o Updating order status and shipment details.
**• Inventory Management:**
o Synchronizing product inventory between your system and OTTO Marketplace.
o Keeping track of stock levels to avoid overselling.
These functionalities are likely facilitated through various API endpoints that allow developers to interact with OTTO's systems programmatically.
**Integration with Third-Party Solutions:** OTTO API seamlessly integrates with a wide range of third-party services and platforms, allowing developers to enhance the functionalities of their applications. Integrating weather APIs, social media platforms, or eCommerce services provides a convenient way to connect with external systems and leverage their capabilities.
## Getting Started with Otto API
To get started with OTTO API, developers need to follow a few simple steps:
• **Sign up for an account:** Visit the OTTO API website and sign up for an account. This will give you access to the API documentation and, most importantly, allow you to generate API keys. These keys are necessary for making API requests.
• **Familiarize yourself with the documentation:** The API documentation is your gateway to understanding and harnessing the power of OTTO API. Review it, as it provides detailed information on the available endpoints, request/response formats, and authentication methods.
• **Obtain API keys:** Once you have signed up for an account, you can generate API keys. These keys play a crucial role in authenticating your requests and, importantly, ensuring the security of your application. You can rest assured that your application is protected.
• **Set up the development environment:** Before making API requests, you must set up your environment. This may involve installing software libraries or SDKs required to interact with the OTTO API. The documentation will provide instructions specific to your programming language or platform.
• **Make your first API call:** With your development environment and API keys in hand, you can make your first API call. Start by exploring the available API endpoints. Depending on the functionality you want to implement, you could retrieve data, send a message, or perform a specific action. The possibilities are endless, so dive in and start experimenting!
## Integrate OTTO via a Third-Party Solution
Businesses are empowered with various options for integrating OTTO with third-party solutions. By connecting OTTO with other tools and systems, these integrations can streamline processes and improve efficiency, putting you in control of your operations.
Integrating OTTO with other platforms allows businesses to automate many processes, saving valuable time and ensuring high accuracy. This accuracy improvement reduces the chances of errors in inventory management, order fulfillment, and shipping processes. And it also instills confidence in the reliability of your operations, reassuring your team and customers alike.
API2Cart provides a user-friendly interface and a comprehensive set of API methods that make integrating with OTTO's API seamless and efficient. With API2Cart, businesses can easily connect their systems with OTTO's platform and access real-time order data from multiple shopping carts. It eliminates the need to switch between different platforms or manually enter data, saving time and reducing errors.
[API2Cart integration with OTTO](https://api2cart.com/supported-platforms/otto-integration/?utm_source=devto&utm_medium=referral&utm_campaign=ottoapia.nem) benefits businesses that rely on eCommerce platforms and opens up a world of possibilities in order management. Its seamless integration allows companies to streamline operations, provide a better customer experience, and boost sales.
| api2cartofficial |
1,912,871 | Enhancing the SQL Interval syntax: A story of Open Source contribution | There are many reasons why developers dive into the world of Open Source contributions. Contributing... | 0 | 2024-07-09T21:54:11 | https://dev.to/etolbakov/enhancing-the-sql-interval-syntax-a-story-of-open-source-contribution-1441 | rust, beginners, opensource, sql |
There are many reasons why developers dive into the world of Open Source contributions.
Contributing can be a way to give back to the community and use your skills for the greater good. It's a fantastic environment that allows you to network with talented developers, build relationships, and potentially find mentors or collaborators. For those seeking career advancement, contributions become a public portfolio showcasing your skills and experience. Sometimes, it's about a personal itch! You might encounter a bug or limitation in a project you rely on, and contributing a fix not only solves your frustration but benefits everyone who uses that software.
The recipe for a successful Open Source contribution goes beyond just code. A strong desire to learn fuels the journey, as you navigate unfamiliar codebases and tackle new challenges. But learning flourishes best within a supportive environment.
A responsive community acts as a safety net, offering guidance and feedback to help you refine your contributions. Ideally, you can also "dogfood" the tool you contribute to, using it in your work or personal projects. This firsthand experience provides valuable context for your contributions and ensures they address real-world needs. With these elements in place, you're well on your way to making a lasting impact on the Open Source community.
While I've been building my software development skills for a while now, Rust still feels like a whole new world to explore. This “newbie” feeling might make some shy away from contribution, fearing their code won't be good enough. I use silly mistakes as stepping stones to improve my skills, not a reason to feel discouraged.
Over a year of contributing to [GreptimeDB](https://github.com/GreptimeTeam/greptimedb) has been a rewarding journey filled with learning experiences. Today, I'd like to walk you through one of those. Let's get our hands dirty (or should I say, claws? 🦀)
## Motivation
This time I chose [a task](https://github.com/GreptimeTeam/greptimedb/issues/4168) to enhance the Interval syntax by allowing a shortened version. The SQL standard specifies a syntax format such as:
```sql
select interval '1 hour';
```
My objective was to ensure the functionality of the following alternative(_shortened_) format:
```sql
select interval '1h';
```
Diving right into the code, I discovered that the core functionality for handling transformations already exists. The scope of a change boils down to adding a new rule specifically for the [Interval](https://docs.rs/sqlparser/latest/sqlparser/ast/struct.Interval.html) data type: intervals with abbreviated time formats will be automatically expanded to their full versions. Let's take a closer look at a specific section of the code that does the logic
```rust
fn visit_expr(&self, expr: &mut Expr) -> ControlFlow<()> {
match expr {
Expr::Interval(interval) => match *interval.value.clone() {
Expr::Value(Value::SingleQuotedString(value)) => {
if let Some(data) = expand_interval_name(&value) {
*expr = create_interval_with_expanded_name(
interval,
single_quoted_string_expr(data),
);
}
}
Expr::BinaryOp { left, op, right } => match *left {
Expr::Value(Value::SingleQuotedString(value))=> {
if let Some(data) = expand_interval_name(&value) {
let new_value = Box::new(Expr::BinaryOp {
left: single_quoted_string_expr(data),
op,
right,
});
*expr = create_interval_with_expanded_name(interval, new_value);
}
............
}
```
## Code Review
An experienced Rust developer, [Dennis](https://x.com/killme20082) (CEO & Co-Founder @ Greptime), quickly identified an area for improvement and [suggested the fix](https://github.com/GreptimeTeam/greptimedb/pull/4220#discussion_r1655250624):

Code review shines as a learning tool.
Beyond simply accepting the suggested change (though the rationale for “efficiency” was clear!), I decided to take a deep dive. Analyzing the proposed improvement and explaining it to myself(in the form of this post), helped me better understand the Rust code and its recommended practices.
### Avoiding unnecessary cloning and ownership transfers
Originally I used `.clone()` on `interval.value`:
```rust
match *interval.value.clone() { ... }
```
Cloning here creates a new instance of the data each time, which can be inefficient if the data structure is large or cloning is expensive. The suggested version avoids this by using references:
```rust
match &*interval.value { ... }
```
Matching on a reference `(&*interval.value)` eliminates the cost of cloning. The same improvement is applied to the match on `left` in the binary operation case:
```rust
match &**left { ... }
```
This one is slightly more involved: it uses a double dereference to get a reference to the value inside a `Box`, which is more efficient than cloning.
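As a stripped-down illustration (a toy `Expr`, not GreptimeDB's actual type), matching through the `Box` by reference borrows the inner `String` instead of cloning it, and the original value stays fully usable afterwards:

```rust
// Toy types for illustration only; not GreptimeDB's actual `Expr`.
#[derive(Debug, Clone, PartialEq)]
enum Expr {
    Value(String),
    Empty,
}

// `&**boxed` peels the outer reference and the `Box`, yielding `&Expr`,
// so the inner `String` is merely borrowed, never cloned.
fn name_by_ref(boxed: &Box<Expr>) -> Option<&str> {
    match &**boxed {
        Expr::Value(s) => Some(s.as_str()),
        Expr::Empty => None,
    }
}

fn main() {
    let e = Box::new(Expr::Value("1 hour".to_string()));
    assert_eq!(name_by_ref(&e), Some("1 hour"));
    // Nothing was moved or cloned, so `e` is still fully usable:
    assert_eq!(*e, Expr::Value("1 hour".to_string()));
}
```

Clippy would normally suggest taking `&Expr` instead of `&Box<Expr>`; the boxed parameter is kept here only to mirror the shape of the pattern-bound `left` in the snippet above.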
### Clearer Pattern Matching
Using references in pattern matching emphasizes the intention of only reading data, not transferring ownership:
```rust
match &*interval.value { ... }
```
This shows explicitly that the matched value is not being moved. It helps with reasoning about the code, especially in a context with complex ownership and borrowing rules.
### Reduced Cloning Inside the Binary Operation
In the original code, the `op` and `right` fields of the `Expr::BinaryOp` variant are cloned unconditionally.
```rust
let new_value = Box::new(Expr::BinaryOp {
left: single_quoted_string_expr(data),
op,
right,
});
```
However, they only need to be cloned if the `left` field is an `Expr::Value` variant with a string value. The suggested enhancement moves the cloning inside the `if let` block, so it only happens when necessary.
```rust
let new_value = Box::new(Expr::BinaryOp {
left: single_quoted_string_expr(data),
op: op.clone(),
right: right.clone(),
});
```
### Using references instead of cloning:
In the original code, `expand_interval_name(&value)` is used, which borrows `value`. ~~However, value is of type `String`, which implements the `AsRef<str>` trait. This means that it can be automatically dereferenced to a `&str` when necessary~~. However, value is of type `String`, which implements the `Deref<Target=str>` trait (more information can be found [here](https://doc.rust-lang.org/std/ops/trait.Deref.html#deref-coercion)). This means that it can be automatically dereferenced to a `&str` when necessary. The improved version uses `expand_interval_name(value)`, which avoids the need for an explicit reference.
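A minimal sketch of that coercion, using a toy stand-in for `expand_interval_name` (the real GreptimeDB function recognizes far more interval abbreviations):

```rust
// Toy stand-in for the real helper; the actual GreptimeDB function
// recognizes many more interval abbreviations.
fn expand_interval_name(name: &str) -> Option<String> {
    match name {
        "1h" => Some("1 hour".to_owned()),
        "2d" => Some("2 days".to_owned()),
        _ => None,
    }
}

fn main() {
    let value: String = "1h".to_string();
    // `&value` is a `&String`; deref coercion turns it into `&str`
    // at the call site, so no `.as_str()` call or clone is needed.
    assert_eq!(expand_interval_name(&value).as_deref(), Some("1 hour"));
    assert_eq!(expand_interval_name("3mo"), None);
}
```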
## Summary
So in the context of this change “efficiency” stands for:
• Avoiding unnecessary cloning, reducing overhead.
• Making the borrowing and ownership patterns clearer and safer.
• Enhancing overall readability and maintainability.
This is how the `visit_expr` function looks like after all suggestions have been applied:
```rust
fn visit_expr(&self, expr: &mut Expr) -> ControlFlow<()> {
match expr {
Expr::Interval(interval) => match &*interval.value {
Expr::Value(Value::SingleQuotedString(value))
| Expr::Value(Value::DoubleQuotedString(value)) => {
if let Some(expanded_name) = expand_interval_name(value) {
*expr = update_existing_interval_with_value(
interval,
single_quoted_string_expr(expanded_name),
);
}
}
Expr::BinaryOp { left, op, right } => match &**left {
Expr::Value(Value::SingleQuotedString(value))
| Expr::Value(Value::DoubleQuotedString(value)) => {
if let Some(expanded_name) = expand_interval_name(value) {
let new_expr_value = Box::new(Expr::BinaryOp {
left: single_quoted_string_expr(expanded_name),
op: op.clone(),
right: right.clone(),
});
*expr = update_existing_interval_with_value(interval, new_expr_value);
}
}
_ => {}
},
.....................
}
```
Open Source contribution has been an incredible way to accelerate my Rust learning curve. My efforts on this **#GoodFirstIssue** serve as an illustration of how to improve skills through collaborations. Depending on your feedback, I'm excited to share more of these learning experiences in future posts!
A huge thanks to the entire [Greptime](https://github.com/GreptimeTeam/greptimedb) team, especially [Dennis](https://x.com/killme20082), for their support and guidance! Let’s keep the contributing/learning going!
**UPD**: Thanks to @tisonkun for the thorough review!
| etolbakov |
1,912,881 | The Decorator Pattern with Dragon Ball Z. | Imagine a base class called Character that represents a generic character. When we change its... | 0 | 2024-07-08T14:03:00 | https://dev.to/missa_eng/el-decorator-pattern-con-dragon-ball-z-5aj4 | Imagine a base class called Character that represents a generic character. When we change its hair, we add new abilities and traits. This is how the Decorator Pattern allows adding functionality to objects in a flexible, dynamic way, without altering the original structure. | missa_eng |
1,912,903 | Logging using OSLog | OSLog is a Swift API that provides a unified logging system for all Apple platforms. It is a... | 0 | 2024-07-10T06:11:30 | https://wesleydegroot.nl/blog/OSLog | swift, oslog | ---
title: Logging using OSLog
published: true
date: 2024-07-05 15:06:22 UTC
tags: Swift,OSLog
canonical_url: https://wesleydegroot.nl/blog/OSLog
---
OSLog is a Swift API that provides a unified logging system for all Apple platforms.
It is a replacement for the older `print` and `NSLog` functions.
OSLog is a more efficient and secure logging system that provides better performance and privacy protection.
In this article, we will explore the features of OSLog and how to use it in your Swift applications.
## Installation
To use `OSLog` in your Swift application, you need to import the `OSLog` module.
You can do this by adding the following import statement to your Swift file:
```swift
import OSLog
```
Then you can use the `Logger` class to create a logger instance.
The `Logger` class is a wrapper around the `OSLog` API that provides a more convenient way to log messages.
```swift
let logger = Logger(
subsystem: "nl.wesleydegroot.demoApp",
category: "myCategory"
)
```
The `subsystem` parameter is a string that identifies the subsystem that the logger belongs to.
The `category` parameter is a string that identifies the category of the logger.
## Sample Use
```swift
import OSLog
let logger = Logger(
subsystem: "nl.wesleydegroot.demoApp",
category: "myCategory"
)
logger.fault("This is a fault") // Shows in red
logger.error("This is a error") // Shows in yellow
logger.warning("This is a warning") // Shows in yellow
logger.info("Information") // Shows in default color
logger.debug("Debug message") // Shows in default color
logger.trace("Trace message") // Shows in default color
```
## Caveats
Unfortunately, `OSLog` is only available on Apple platforms, and logging calls cannot be written directly inside a SwiftUI view's `body`, because the result builder only accepts expressions that produce views.
If you need to log messages within a SwiftUI view, use the view extension below:
```swift
import SwiftUI
extension View {
func log(_ closure: () -> Void) -> some View {
_ = closure()
return self
}
}
```
Now you can log messages within a SwiftUI view like this:
```swift
import SwiftUI
import OSLog
struct ContentView: View {
let logger = Logger(
subsystem: "nl.wesleydegroot.exampleapp",
category: "MyCategory"
)
var body: some View {
Text("Hello, World!")
.log {
logger.info("Hello, World!")
}
}
}
```
## Displaying Log Messages (in app)
If you want to display log messages in your app, you can use the `OSLog` API to log messages to the console.
I've created a Swift Package to make this easier, you can find it [here (OSLogViewer)](https://github.com/0xWDG/OSLogViewer).
### How to use `OSLogViewer`:
```swift
import SwiftUI
import OSLogViewer
struct ContentView: View {
var body: some View {
NavigationView {
// The default configuration will show the log messages.
OSLogViewer()
// Custom configuration
// OSLogViewer(
// subsystem: "nl.wesleydegroot.exampleapp",
// since: Date().addingTimeInterval(-7200) // 2 hours
// )
}
}
}
```
[Download Swift Playground](https://wesleydegroot.nl/resources/OSLog.swiftpm.zip)
### OSLogViewer Features
#### Simple Interface

#### Beautiful export
```
This is the OSLog archive for exampleapp
Generated on 2/6/2024, 11:53
Generator https://github.com/0xWDG/OSLogViewer
Info message
ℹ️ 2/6/2024, 11:53 🏛️ exampleapp ⚙️ nl.wesleydegroot.exampleapp 🌐 myCategory
Error message
❗ 2/6/2024, 11:53 🏛️ exampleapp ⚙️ nl.wesleydegroot.exampleapp 🌐 myCategory
Error message
❗ 2/6/2024, 11:53 🏛️ exampleapp ⚙️ nl.wesleydegroot.exampleapp 🌐 myCategory
Critical message
‼️ 2/6/2024, 11:53 🏛️ exampleapp ⚙️ nl.wesleydegroot.exampleapp 🌐 myCategory
Log message
🔔 2/6/2024, 11:53 🏛️ exampleapp ⚙️ nl.wesleydegroot.exampleapp 🌐 myCategory
Log message
🔔 2/6/2024, 11:53 🏛️ exampleapp ⚙️ nl.wesleydegroot.exampleapp 🌐 myCategory
```
## Wrap up
OSLog is a more efficient and secure logging system that provides better performance and privacy protection.
It is a great replacement for the older `print` and `NSLog` functions, that provides a unified logging system for all Apple platforms.
Resources:
- [https://github.com/0xWDG/OSLogViewer](https://github.com/0xWDG/OSLogViewer)
- [https://developer.apple.com/documentation/os/logging](https://developer.apple.com/documentation/os/logging) | 0xwdg |
1,912,975 | Optimizing Angular Performance with `trackBy` in `ngFor` | In any dynamic web application, managing and displaying lists efficiently is crucial for performance.... | 0 | 2024-07-08T07:30:00 | https://dev.to/manthanank/optimizing-angular-performance-with-trackby-in-ngfor-1gil | webdev, angular, programming, beginners | In any dynamic web application, managing and displaying lists efficiently is crucial for performance. Angular's `ngFor` directive is a powerful tool for iterating over lists and rendering items in the DOM. However, when dealing with large or frequently changing lists, performance can become a concern. This is where Angular's `trackBy` function comes into play.
#### What is `trackBy`?
The `trackBy` function is used with the `ngFor` directive to help Angular uniquely identify items in a list. By default, Angular uses object identity to track changes, which can be inefficient. Using `trackBy`, you can specify a unique identifier for each item, enabling Angular to optimize DOM manipulations and improve performance.
#### Why Use `trackBy`?
Without `trackBy`, Angular will recreate DOM elements for the entire list whenever it detects changes, even if only one item has changed. This can lead to unnecessary re-rendering and degraded performance, especially with large lists. `trackBy` allows Angular to track items by a unique identifier, minimizing DOM updates to only the items that have changed.
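To see why identity-based tracking forces a full re-render, here is a small standalone TypeScript sketch. This is not Angular's actual differ, just an illustration of the two tracking strategies: when a list is rebuilt (for example, from an HTTP response), every object reference is new, so identity tracking treats every item as changed, while key-based tracking only flags genuinely new items.

```typescript
// Hypothetical sketch (not Angular's internal differ): count how many items
// are considered "changed" when tracking by object identity vs. by a key.
type Item = { id: number; name: string };

// Tracking by identity: an item is reused only if the exact same object
// reference appears in the new list.
function changedByIdentity(oldList: Item[], newList: Item[]): number {
  const oldSet = new Set(oldList);
  return newList.filter((item) => !oldSet.has(item)).length;
}

// Tracking by key: an item is reused if an item with the same id existed before.
function changedByKey(oldList: Item[], newList: Item[]): number {
  const oldIds = new Set(oldList.map((i) => i.id));
  return newList.filter((item) => !oldIds.has(item.id)).length;
}

const before: Item[] = [
  { id: 1, name: 'Item 1' },
  { id: 2, name: 'Item 2' },
];
// A fresh array of fresh objects, as you would get from an HTTP response.
const after: Item[] = [
  { id: 1, name: 'Item 1' },
  { id: 2, name: 'Item 2' },
  { id: 3, name: 'Item 3' },
];

console.log(changedByIdentity(before, after)); // 3 — every object reference is new
console.log(changedByKey(before, after));      // 1 — only the item with id 3 is new
```

The same intuition applies to the DOM: with `trackBy`, only the one new item would need a freshly created element.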
#### Implementing `trackBy` in Angular
Let's walk through a simple example to demonstrate how to use `trackBy` in an Angular application.
##### Step 1: Define the Component
First, create a component that will display and update a list of items.
```typescript
import { Component } from '@angular/core';
@Component({
selector: 'app-track-by-example',
template: `
<div>
<button (click)="updateList()">Update List</button>
<ul>
<li *ngFor="let item of items; trackBy: trackById">
{{ item.id }} - {{ item.name }}
</li>
</ul>
</div>
`,
styles: []
})
export class TrackByExampleComponent {
items = [
{ id: 1, name: 'Item 1' },
{ id: 2, name: 'Item 2' },
{ id: 3, name: 'Item 3' }
];
updateList() {
this.items = [
{ id: 1, name: 'Updated Item 1' },
{ id: 2, name: 'Updated Item 2' },
{ id: 3, name: 'Updated Item 3' },
{ id: 4, name: 'New Item 4' }
];
}
trackById(index: number, item: any): number {
return item.id;
}
}
```
##### Step 2: Add the Component to a Module
Ensure that the component is declared in a module, such as `app.module.ts`:
```typescript
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { AppComponent } from './app.component';
import { TrackByExampleComponent } from './track-by-example.component';
@NgModule({
declarations: [
AppComponent,
TrackByExampleComponent
],
imports: [
BrowserModule
],
providers: [],
bootstrap: [AppComponent]
})
export class AppModule { }
```
[Stackblitz Link](https://stackblitz.com/edit/stackblitz-starters-bzmdu7?file=src%2Fmain.ts)
#### How It Works
1. **Component Template**: The template uses an `ngFor` directive to iterate over the `items` array. The `trackBy` function is specified as `trackById`.
2. **Component Class**:
- `items` is an array of objects, each with a unique `id` and a `name`.
- The `updateList` method updates the list, simulating a scenario where the list changes dynamically.
- The `trackById` function returns the `id` of each item, providing a unique identifier for Angular to track.
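As a side note, the trackBy callback can be given a precise item type instead of `any`; Angular also exports a `TrackByFunction<T>` type from `@angular/core` that matches this shape. A minimal standalone sketch (no Angular import needed to illustrate the idea):

```typescript
// Standalone sketch of a typed trackBy function. In a real app you could
// declare it as TrackByFunction<Item> from '@angular/core'.
interface Item {
  id: number;
  name: string;
}

function trackById(index: number, item: Item): number {
  // The returned value is the identity Angular uses to match old and new items.
  return item.id;
}

console.log(trackById(0, { id: 42, name: 'Answer' })); // 42
```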
#### Benefits of Using `trackBy`
1. **Improved Performance**: By uniquely identifying each item, Angular can avoid unnecessary re-rendering, leading to faster updates and smoother user experiences.
2. **Efficient DOM Manipulation**: Only the items that have changed are updated in the DOM, reducing the workload for the browser.
3. **Scalability**: As the application grows and the lists become larger, `trackBy` ensures that performance remains optimal.
To demonstrate the performance difference between using `trackBy` and not using it, we'll build two components in a single Angular app. Each component will display a list of items that gets updated when a button is clicked. One component will use `trackBy` to optimize performance, while the other will not.
### Step-by-Step Guide: Comparing With and Without `trackBy`
#### 1. Set Up the Angular Project
First, create a new Angular project if you don't already have one:
```bash
ng new trackby-example
cd trackby-example
```
#### 2. Generate Components
Generate two components, one for each example:
```bash
ng generate component without-trackby
ng generate component with-trackby
```
#### 3. Implement the Components
##### Component Without `trackBy`
Edit the `without-trackby.component.ts` and `without-trackby.component.html` files as follows:
**without-trackby.component.ts:**
```typescript
import { Component } from '@angular/core';
@Component({
selector: 'app-without-trackby',
templateUrl: './without-trackby.component.html',
styleUrls: ['./without-trackby.component.css']
})
export class WithoutTrackbyComponent {
items = [
{ id: 1, name: 'Item 1' },
{ id: 2, name: 'Item 2' },
{ id: 3, name: 'Item 3' }
];
updateList() {
this.items = [
{ id: 1, name: 'Updated Item 1' },
{ id: 2, name: 'Updated Item 2' },
{ id: 3, name: 'Updated Item 3' },
{ id: 4, name: 'New Item 4' }
];
}
}
```
**without-trackby.component.html:**
```html
<div>
<h2>Without trackBy</h2>
<button (click)="updateList()">Update List</button>
<ul>
<li *ngFor="let item of items">
{{ item.id }} - {{ item.name }}
</li>
</ul>
</div>
```
##### Component With `trackBy`
Edit the `with-trackby.component.ts` and `with-trackby.component.html` files as follows:
**with-trackby.component.ts:**
```typescript
import { Component } from '@angular/core';
@Component({
selector: 'app-with-trackby',
templateUrl: './with-trackby.component.html',
styleUrls: ['./with-trackby.component.css']
})
export class WithTrackbyComponent {
items = [
{ id: 1, name: 'Item 1' },
{ id: 2, name: 'Item 2' },
{ id: 3, name: 'Item 3' }
];
updateList() {
this.items = [
{ id: 1, name: 'Updated Item 1' },
{ id: 2, name: 'Updated Item 2' },
{ id: 3, name: 'Updated Item 3' },
{ id: 4, name: 'New Item 4' }
];
}
trackById(index: number, item: any): number {
return item.id;
}
}
```
**with-trackby.component.html:**
```html
<div>
<h2>With trackBy</h2>
<button (click)="updateList()">Update List</button>
<ul>
<li *ngFor="let item of items; trackBy: trackById">
{{ item.id }} - {{ item.name }}
</li>
</ul>
</div>
```
#### 4. Update the App Module
Update the `app.module.ts` to include the new components:
```typescript
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { AppComponent } from './app.component';
import { WithoutTrackbyComponent } from './without-trackby/without-trackby.component';
import { WithTrackbyComponent } from './with-trackby/with-trackby.component';
@NgModule({
declarations: [
AppComponent,
WithoutTrackbyComponent,
WithTrackbyComponent
],
imports: [
BrowserModule
],
providers: [],
bootstrap: [AppComponent]
})
export class AppModule { }
```
#### 5. Update the Main App Component
Update `app.component.html` to display both components:
```html
<div style="text-align:center">
<h1>Angular trackBy Example</h1>
</div>
<app-without-trackby></app-without-trackby>
<app-with-trackby></app-with-trackby>
```
#### 6. Run the Application
Run the application to see both components in action:
```bash
ng serve
```
Navigate to `http://localhost:4200` in your browser. You will see two sections, one without `trackBy` and one with `trackBy`. Click the "Update List" button in each section and observe the differences in performance and DOM updates.
[Stackblitz Link](https://stackblitz.com/edit/stackblitz-starters-ua4tmn?file=src%2Fmain.ts)
#### Conclusion
By implementing these two components, you can observe how using `trackBy` helps Angular to optimize DOM manipulations and improve performance. This is particularly noticeable with larger lists or more complex applications, where the efficiency gains become more significant.
Feel free to expand this example with more complex data or additional functionalities to see how `trackBy` can benefit your Angular projects.
Using `trackBy` with `ngFor` is a simple yet powerful way to optimize the performance of your Angular applications. By uniquely identifying items in a list, you can minimize DOM manipulations and ensure that your application remains responsive and efficient, even as it scales. Implementing `trackBy` is straightforward and can make a significant difference, particularly for applications that handle large or frequently changing lists.
Start using `trackBy` in your Angular projects today and experience the performance benefits for yourself!
--- | manthanank |
1,912,992 | Embrace simple tech stacks and code generation in DevOps and data engineering | DevOps, data engineering, and other platform engineering teams must recognize that the choices they... | 0 | 2024-07-08T12:48:00 | https://dev.to/panasenco/how-complex-tech-stacks-make-organizations-unproductive-225 | devops, dataengineering, sitereliabilityengineering, operations | DevOps, data engineering, and other platform engineering teams must recognize that the choices they make with regards to their tech stacks have huge effects on the rest of the organization. While adding a tool to the tech stack may boost the productivity of the platform engineering team, it could negatively impact the overall productivity of the organization. This is due to the [law of leaky abstractions](https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/), which states that no abstraction can completely hide the underlying technologies from engineers. Platform engineers' sense of productivity must shift from building increasingly complex tech stacks to iterating faster on simple ones using code generation tools like LLMs.
## The benefits and 'leakiness' of abstractions
Over the past decades, computer engineers have generally been able to achieve greater productivity through higher levels of abstraction. Operating systems provided useful tools like process schedulers and freed programmers from having to worry about hardware specifics. High-level programming languages freed programmers from worrying about allocating and freeing memory. Libraries and frameworks further allow programmers to interact with databases, distributed systems, and an unlimited number of other objects without having to worry about low-level implementation details. The 1975 book [_The Mythical Man-Month_](https://archive.org/details/MythicalManMonth/page/n107/mode/2up?q=high) states: "Programming productivity may be increased as much as five times when a suitable high-level language is used." DevOps engineers, data engineers, and other platform engineers are no different in that they can achieve greater productivity through more layers of abstraction.
In theory, abstractions are supposed to hide low-level implementation details from us and allow us to focus on solving high-level problems. Reality is messier. Though process scheduling has been abstracted away by the operating system, you still need to have at least a basic understanding of scheduling if you want to make an informed decision about multithreading vs multiprocessing vs asyncio for your application. Though memory allocation has been abstracted away by your programming language, you still have to have a basic understanding of garbage collection to avoid memory leaks in your program. Generally speaking, you can only make informed engineering choices if you have at least a basic understanding of all the layers your application is built on. Joel Spolsky popularized this idea in his 2002 article ["The Law of Leaky Abstractions"](https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/). The idea is that there's no such thing as a perfect or 'non-leaky' abstraction that perfectly hides away all underlying details in such a way that an engineer using it never has to worry about them. Some underlying detail will always have a huge impact on the performance and correctness of your program.
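The garbage-collection leak mentioned above is easy to sketch. In this hypothetical example (not from any particular codebase), a collector can only free objects that nothing references, so an unbounded cache quietly defeats the memory-management abstraction:

```typescript
// Sketch: a garbage collector frees only unreachable objects, so this
// ever-growing cache "leaks" even though the language manages memory for us.
const cache = new Map<string, number[]>();

function handleRequest(key: string): number[] {
  // Without an eviction policy, every distinct key stays reachable forever.
  if (!cache.has(key)) {
    cache.set(key, new Array(1000).fill(0));
  }
  return cache.get(key)!;
}

for (let i = 0; i < 100; i++) {
  handleRequest(`request-${i}`);
}
console.log(cache.size); // 100 — none of these entries can ever be collected
```

An engineer who treats "the language handles memory" as a perfect abstraction will not spot this; one who understands reachability will.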
## How leaky abstractions affect platform engineers
Platform engineers such as DevOps and data engineers must develop the understanding and the empathy that the implementation engineers they serve want to be able to make informed decisions and build performant and bug-free applications. The implementation engineers can only achieve this goal by understanding the underlying layers of the platform they're building on. In other words, the more layers platform engineers build, the more layers the implementation engineers have to learn. Therefore, platform engineers must consciously limit the number of tools and layers of abstraction they introduce for the good of the organization as a whole, even if it keeps that particular team from reaching peak productivity.
The most common objection to this line of thought is along the lines of "web engineers shouldn't have to know any DevOps" and "data scientists shouldn't have to know any data engineering", and that these teams should just submit tickets if they need help. I believe that in most organizations, this is a short-sighted approach that's bad for everyone, for the following reasons:
1. It's bad for the platform engineering teams because the implementation engineers now can't make a single decision without asking the platform teams first, drowning everyone in meetings.
2. It's bad for the implementation engineers because they lose the ability to make informed decisions and to debug their own issues. Any code the web engineers and data scientists write will never be able to take full advantage of the underlying technologies. Any assumptions they make will always have a chance of blowing up in their faces.
3. It's bad for the organization as a whole because it creates a culture of "not my problem", "throw it over the fence", and "we must have ten meetings before we can make a decision", reducing everyone's feelings of trust, productivity, and satisfaction.
I'd like to acknowledge that some organizations have needs so complex that they require this complexity and specialization, even at the cost of overall productivity. However, unless you have clear evidence that your own organization is such a behemoth, you must assume that it only requires a simple platform until proven otherwise.
## A starting point for conversations: 1-2 abstractions over the minimum
Discovering the truth about your organization's platform needs starts with affirming that platform choices affect everyone. All the business needs must be made explicit and the voices of all engineering teams must be heard before arriving at the best path forward.
Use this starting point: **Platform engineers must introduce only one or two additional layers of abstraction in their platform architecture over what the implementation engineers have to know at the minimum to use the platform.**
Let's dissect this statement:
- Implementation engineers have to know how to use the outermost interface of the platform. Learning how to use the platform is a non-negotiable part of their jobs.
- If the sets of knowledge needed to use the platform and to develop the platform are basically the same (difference of zero layers of abstraction), then the platform engineers are leaving productivity on the table. There will almost certainly be a tool that could boost their productivity by introducing an additional layer of abstraction without being too much for the implementation engineers to learn just the basics of.
- On the other hand, if the platform engineers introduce three or more layers of abstraction over the minimum needed to use the platform, then that could become too much for most implementation engineers to learn in addition to their own jobs.
Use 1-2 layers over the minimum as a starting point for your design decisions and conversations. Only add complexity if there is a clear business need for it, or if all the implementation engineering teams are willing to invest extra time into learning a more complex and productive tech stack. The goal is to get to a point where the platform engineers get big productivity gains while implementation engineers can still understand the platform well enough to innovate and debug mostly on their own.
## How to keep platform engineers engaged?
Limiting DevOps engineers and data engineers to one or two layers of abstraction over the minimum can be good for the entire organization, but can leave these engineers feeling unsatisfied. The best engineers like to feel productive, and going up in layers of abstraction is generally how computer engineers increase their productivity. The brightest platform engineers will see ways to improve their own productivity with more layers of abstraction, but won't be able to act on these insights. How can we keep the best platform engineers from feeling bored and unsatisfied with a tech stack that average implementation engineers can learn on the side?
I believe that the future of platform engineering - in DevOps, data, site reliability engineering, analytics, and everything else - lies in building platforms that are simple enough for implementation engineers to understand, but then iterating on them faster with code generation tools, such as templating engines and AI large language models. Anyone can use LLMs, but the best platform engineers will be challenged to figure out how to use them while maintaining code quality, consistency, and security. The reduced number of requirements will make the pool of job candidates wider, making it easier to look for the best ones. Platform engineering repositories will become more democratic too, with engineers of all levels able to contribute code. The best platform engineers will just be able to leverage code generation to contribute code 5x-10x faster.
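As a toy illustration of the templating side of this idea (the service names and YAML shape here are hypothetical, not any specific CI system's schema), a few lines of code can expand a short list of services into the kind of repetitive pipeline config that platform teams otherwise maintain by hand:

```typescript
// Hypothetical sketch: generate repetitive deploy-job YAML from a list of
// service names instead of hand-maintaining near-identical blocks.
const services = ['auth', 'billing', 'search'];

function deployJob(service: string): string {
  return [
    `deploy-${service}:`,
    `  stage: deploy`,
    `  script:`,
    `    - ./deploy.sh ${service}`,
  ].join('\n');
}

const pipeline = services.map(deployJob).join('\n\n');
console.log(pipeline);
```

The generated output stays simple enough for any implementation engineer to read, while the generator lets the platform team iterate quickly; LLMs extend the same pattern to less mechanical code.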
Let's take a look at the relevant factors from StackOverflow's survey ["What makes developers happy at work"](https://stackoverflow.blog/2022/03/17/new-data-what-makes-developers-happy-at-work/#h2-937641e7b9500) and how they can still be satisfied with this new paradigm:
- Strong sense of productivity: Platform engineers will feel more productive if they can solve more problems without getting bogged down in meetings while also feeling that they're empowering their coworkers rather than being bottlenecks for them.
- Many growth opportunities: Platform engineers will be challenged to use code generation and AI tools while maintaining code quality, still leading to theoretically unlimited growth.
- Visible, direct impact: Producing output faster will likely lead to more visible and direct impact as compared to trying to build up more layers of abstraction.
- Able to solve problems my way: Platform engineers will be encouraged to explore new ways to solve problems with existing tools as well as to pursue as many new code generation approaches as they want.
- Positive, healthy work relationships: Platform engineers will be able to speak a common language with each other and other engineering teams, hopefully feeling more connected and included rather than siloed. | panasenco |