id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,898,256 | Understanding Props, Parent, and Child Components in React Native using TypeScript. | In React Native, we often need to pass data from one component to another. This is where the concept... | 0 | 2024-06-24T01:17:02 | https://nehirugue.medium.com/understanding-props-parent-and-child-components-in-react-native-using-typescript-f91baaac88da | In React Native, we often need to pass data from one component to another. This is where the concept of props (short for properties) comes into play. Props are a way to pass data from a parent component to a child component.
Let’s start by creating a simple app that displays a list of posts.
**Step 1: Set up a new React Native project with TypeScript**
```bash
npx expo-cli init my-app --template expo-template-blank-typescript
```
This command will create a new React Native project with TypeScript support using the Expo CLI.
**Step 2: Create a Parent Component**
Create a new file called `App.tsx` in the root directory of your project and add the following code:
```tsx
import React from 'react';
import { View } from 'react-native';
import FeedPost from './FeedPost';
// Define the type for the data object
type DataObject = {
id: number;
title: string;
content: string;
};
const App = () => {
// Define the data array
const data: DataObject[] = [
{ id: 1, title: 'Post 1', content: 'This is the first post.' },
{ id: 2, title: 'Post 2', content: 'This is the second post.' },
// ... more data objects
];
return (
<View>
{data.map(item => (
<FeedPost
key={item.id}
id={item.id}
title={item.title}
content={item.content}
/>
))}
</View>
);
};
export default App;
```
In this example, we define a `DataObject` type that represents the structure of each data object. We then create an array `data` of `DataObject` types, which holds our sample data.
The `App` component maps over the `data` array and renders a `FeedPost` component for each item, passing down the `id`, `title`, and `content` properties as props.
**Step 3: Create a Child Component**
Create a new file called `FeedPost.tsx` in the same directory and add the following code:
```tsx
import React from 'react';
import { View, Text } from 'react-native';
// Define the prop types for the FeedPost component
type FeedPostProps = {
id: number;
title: string;
content: string;
};
const FeedPost = ({ id, title, content }: FeedPostProps) => {
return (
<View>
<Text>{title}</Text>
<Text>{content}</Text>
</View>
);
};
export default FeedPost;
```
In this file, we define a `FeedPostProps` type that represents the props that the `FeedPost` component expects to receive from its parent component. The `FeedPost` component receives these props as parameters and renders the `title` and `content` properties.
**Step 4: Run the App**
Now, you can run the app using the Expo CLI:
```bash
npm start
```
This will start the Expo development server and open the app in a simulator or on your physical device.
**Explanation**
In this tutorial, we have created a parent component `App` that holds the data as an array of objects. Each object represents a post with an `id`, `title`, and `content` property.
The `App` component maps over this `data` array and renders a `FeedPost` component for each item, passing down the `id`, `title`, and `content` properties as props.
The `FeedPost` component is a child component that receives these props and renders the `title` and `content` properties.
By using TypeScript, we define the types for the data objects and the props, which helps catch potential type errors during development and improves code maintainability.
In this example, we pass down the `id` prop to the `FeedPost` component, even though it's not used within the component itself. This demonstrates that you should pass down only the necessary props to the child component, but in some cases, you might need to pass additional data for identification, updating, or other purposes.
This tutorial covers the basics of how to create a parent component that holds data, pass that data as props to a child component, and how to define and use types in TypeScript to ensure type safety. You can build upon this foundation to create more complex applications and component hierarchies in React Native.
| nehi_rugue | |
1,898,255 | Cutting Down Docker Image Sizes: Next.js Standalone Mode for Easy Kubernetes Deployments on EKS and AKS | A Developer's Deployment Dilemma So, you are all geared up and brimming with excitement to... | 0 | 2024-06-24T01:12:08 | https://dev.to/alessandrorodrigo/cutting-down-docker-image-sizes-nextjs-standalone-mode-for-easy-kubernetes-deployments-on-eks-and-aks-4pno | ## A Developer's Deployment Dilemma
So, you are all geared up and brimming with excitement to launch your newest and greatest super cool Next.js app. Except that Docker images are just too bloated, and everything becomes slow. How frustrating deployment delays are, correct? What if I told you there's a way to trim that Docker image fat and get your app running smoothly on Kubernetes, whether you're using AWS EKS or Azure AKS? Say hello to Next.js standalone mode!
## My Experience with Standalone Mode
I experienced firsthand the impact of Next.js standalone mode while working on a project at my last company. Our Docker images for Next.js were over 1 GB, which caught my manager's attention. I was tasked with investigating and optimizing the images. Implementing standalone mode drastically reduced the size of our Docker images, leading to faster builds and smoother deployments. This improvement was crucial in streamlining our deployment process and resolving the issues with oversized images.
### Getting to Know Next.js Standalone Mode
#### The Lean, Mean, Deployment Machine
Next.js standalone mode is like that magical suitcase that only packs what you need. This excellent feature helps to create a minimal build of your app, cutting all the unneeded stuff and keeping the Docker image lean and mean.
#### What's the Story with Standalone Mode?
It's all about efficiency. Standalone mode strips your Next.js build down to what's essential for production.
To begin, just update your next.config.js to look like this:
```javascript
module.exports = {
output: 'standalone',
}
```
### How It Works
When you run the next build, Next.js compiles an absolute bare minimum into a .next/standalone folder.
#### Why Standalone Mode Rocks for Docker Deployments
##### Bye-Bye, Bloat!
Let's think about your Docker image like you feel about your carry-on luggage. The less you pack, the easier your travels. Without any extras, Standalone mode makes your Docker image much more minor.
- Smaller images mean faster builds and quicker deployments. Less waiting, more doing.
- Without extra dependencies, your build process gets a serious boost. You're not just building faster—you're deploying smarter.
- Keep It Secure: Fewer files and dependencies mean fewer potential security issues. It's like having a smaller, safer surface area.
##### Crafting the Perfect Dockerfile
Your Dockerfile is like a recipe for your app. And with standalone mode, you streamline it for the best results. Here is a hyper-optimized Dockerfile setup:
```dockerfile
# Stage 1: Build
FROM node:18-alpine AS builder
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
RUN yarn build
# Stage 2: Production Image
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/static ./public/static
EXPOSE 3000
CMD ["node", "server.js"]
```
This configuration utilizes a multi-stage build, which keeps your final image clean and small, making the end users happy. With your streamlined Docker image, it’s a breeze.
#### Here's how you can set it up on EKS or AKS:
##### Deployment YAML Example
```yaml
Copy code
apiVersion: apps/v1
kind: Deployment
metadata:
name: nextjs-app
spec:
replicas: 3
selector:
matchLabels:
app: nextjs
template:
metadata:
labels:
app: nextjs
spec:
containers:
- name: nextjs
image: <your-registry>/nextjs-app:latest
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: nextjs-service
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 3000
selector:
app: nextjs
```
##### The Benefits in Action
Once your app's up and running, you'll see the real magic.
- Resource-Friendly: Smaller images mean you spend fewer resources, just what is needed to cut costs.
- Faster, More Efficient Updates: Smaller images pull faster, and updates go faster. Say goodbye to those long waits during deployment.
- Network-Friendly: Your slim images use less bandwidth, making the whole process efficient.
##### Conclusion: Your Deployment Superpower
Deploying apps does not have to be a headache. This improves your Docker image size and build times to allow you to ship faster with Next.js standalone mode. Standalone mode can be combined with your secret weapon in the toolkit during deployment. So next time, when deploying, don't forget: Standalone mode for compact, highly effective, and quickly implemented Docker deployments. Try it out and see how it changes your deployment process!
##### Go on learning more about Next.js:
- [Next.js Standalone Mode Docs](https://nextjs.org/docs/pages/api-reference/next-config-js/output#standalone)
- [Getting Started with AWS EKS](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html)
- [Deploying on Azure AKS](https://learn.microsoft.com/en-us/azure/aks/) | alessandrorodrigo | |
1,898,097 | Sincronizando data entre tabs | Una de las cosas mas complicadas de entender y manejar en una aplicación es el estado de una... | 0 | 2024-06-24T01:10:26 | https://dev.to/dezkareid/sincronizando-data-entre-tabs-5951 | webdev, frontend, javascript | Una de las cosas mas complicadas de entender y manejar en una aplicación es el estado de una aplicación. Aunque tenemos herramientas como Redux, Zustand o cualquier otra biblioteca JS que salga en los próximos minutos que te tome leer este post.
Pero este post no se trata de bibliotecas, se trata de la plataforma y como sincronizamos datos entre las diferentes tabs que pueden estar abiertas en el navegador.
El navegador administra la memoria de una tab de manera individual pero la capa de datos se administra por dominio. Tenemos varias alternativas para almacenar datos:
* [LocalStorage](https://developer.mozilla.org/es/docs/Web/API/Window/localStorage)
* Cookies
* [Cache API](https://developer.mozilla.org/en-US/docs/Web/API/Cache)
* [IndexedDB](https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API)
* File System Access API
Hay muchas formas para almacenar datos y cada una de estas tiene un propósito. Ejemplo, la Cache API esta hecha para almacenar la respuesta de requests de red, lo cual nos sirve features como el modo offline.
Antes de hablarles de como sincronizar datos entre tabs, me gustaría dejar claro algunos puntos.
1. No hay mecanismo que el browser brinde de manera nativa una sincronización entre tabs.
2. No toda la información debe sincronizarse.
3. Normalmente usamos el servidor como fuente de la verdad y esta bien pero hacer multiples requests para tener la misma información que ya tienes en una tab me parece innecesario.
Para ello exploraremos 2 approaches:
* LocalStorage
* Cualquier Storage que uno quiera + [Service Workers](https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API)
## LocalStorage
Esta opción es la menos recomendada porque es síncrona y ocupa el thread principal de ejecución de JS, pero siendo honestos es uno de los storages mas utilizados.
El objeto window tiene asociado un evento "storage", el cual se ejecuta cada vez que el storage cambia. Este se ejecutará en cada uno de nuestros tabs entonces con simplemente suscribirnos al evento podemos saber si algo cambió (se ejecuta en todos menos en el tab donde se ejecutó el cambio).
Vamos a hacer una página simple donde tengamos un contador y un botón para incrementar. El requerimiento es que cada vez que de click el contador se incremente se refleje en todas las tabs que tengamos abiertas.

Nota: Sugiero usar un folder y serve como servidor estático
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Sync local storage</title>
</head>
<body>
<h1 id="counter">0</h1>
<button id="increase">Increase</button>
<script>
document.getElementById("increase").addEventListener("click", () => {
const counter = +localStorage.getItem("counter", 0) + 1;
document.getElementById("counter").textContent = counter;
localStorage.setItem("counter", counter);
});
window.addEventListener("storage", (event) => {
if (event.key !== "counter") return;
document.getElementById("counter").textContent = +localStorage.getItem(
"counter",
0
);
});
window.addEventListener("load", () => {
document.getElementById("counter").textContent = +localStorage.getItem(
"counter",
0
);
});
</script>
</body>
</html>
```
La parte importante de esta página es suscribirse al evento de cambio del localStorage.
```javascript
window.addEventListener('storage', (event) => {
if (event.key !== 'counter') return;
document.getElementById('counter').textContent = +localStorage.getItem('counter', 0);
});
```
El [evento storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/storage_event) tiene una propiedad llamada key, mediante la cual podemos saber que cambió, este paso es crucial porque el storage es solo uno. Entonces esta al alcance de extensiones o cualquier otro script que se ejecute en el contexto de la aplicación. En este evento podríamos poner dispatchers o lo que queramos que reflejo el cambio que queremos a nuestro state manager.
La ventaja de este approach es que es fácil, ya tenemos un sistema de eventos incluido que evita la redundancia (evita disparar el evento en la tab donde se ejecutó)
La desventaja ... es el localStorage, todos tienen acceso y es síncrono, además de que solo puedes guardar strings así que debes hacer parseos para guardar estados complejos y si quieres ejecutar acciones solo para ciertas partes de tu state manager tendrás que verificar que solo se disparen cambios si la data cambió (algunos state managers ya incluyen esta funcionalidad).
## Cualquier Storage + Service Workers
Este approach es un poco más complicado pero en lo personal lo prefiero, por la simple y sencilla razón de que no es el local storage.
Para esto necesitamos una pieza de software centralizada que actue como un pubsub para la administración de eventos. En este caso usaremos un service worker ya que es una instancia compartida por todas las tabs (clients).
Las tareas del service worker serán las siguientes:
1. Suscribirse a un evento que nos diga que se quiere hacer un incremento
2. Hacer un update en nuestra base de datos (en este caso usaremos indexeddb)
3. Mandar un evento que le indique a todos los tabs(clients) que hubo un incremento.
Este service worker cumple con ese objetivo
```javascript
self.addEventListener('message', (event) => {
if (event.data === 'request_increment') {
incrementValueInDB();
}
});
function incrementValueInDB() {
const request = indexedDB.open('APPDB', 1);
request.onupgradeneeded = function(event) {
const db = event.target.result;
db.createObjectStore('count', { keyPath: 'id' });
};
request.onsuccess = function(event) {
const db = event.target.result;
const transaction = db.transaction(['count'], 'readwrite');
const objectStore = transaction.objectStore('count');
const request = objectStore.get(1);
request.onsuccess = function(event) {
let count = event.target.result?.count || 0;
count++;
const requestUpdate = objectStore.put({ count, id: 1 }, 1);
requestUpdate.onsuccess = function() {
self.clients.matchAll().then((clients) => {
clients.forEach((client) => client.postMessage('has_increment'));
});
};
};
};
}
```
El primer paso consiste en suscribirse a los mensajes y filtrarlos, ya que al ser una instancia compartida muchos mensajes pueden llegar.
```javascript
self.addEventListener('message', (event) => {
if (event.data === 'request_increment') {
incrementValueInDB();
}
});
```
La segunda parte es el código donde se actualizará la base de datos. Los service workers tienen acceso a IndexedDB así que ahí hacemos el incremento (aquí depende de la API que quieran usar, por eso no hay código)
La tercera parte consiste en avisar a los demas que hubo un incremento.
```javascript
self.clients.matchAll().then((clients) => {
clients.forEach((client) => client.postMessage('has_increment'));
});
```
Listo, con esto terminamos el service worker y ahora les comparto el código del HTML/JS
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Sync local storage</title>
</head>
<body>
<h1 id="counter">0</h1>
<button id="increase">Increase</button>
<script>
document.getElementById("increase").addEventListener("click", () => {
navigator.serviceWorker.controller.postMessage('request_increment');
});
if ("serviceWorker" in navigator) {
window.addEventListener("load", function () {
const request = indexedDB.open('APPDB', 1);
request.onupgradeneeded = function(event) {
const db = event.target.result;
db.createObjectStore('count', { keyPath: 'id' });
};
request.onsuccess = function(event) {
const db = event.target.result;
const transaction = db.transaction('count', 'readwrite');
const store = transaction.objectStore('count');
const getRequest = store.get(1);
getRequest.onsuccess = function(event) {
const data = event.target.result;
const count = data ? data.count : 0;
document.getElementById('counter').textContent = count;
};
};
navigator.serviceWorker.register("/sw.js").then(
function (registration) {
navigator.serviceWorker.addEventListener('message', function(event) {
if (event.data === 'has_increment') {
const request = indexedDB.open('APPDB', 1);
request.onsuccess = function(event) {
const db = event.target.result;
const transaction = db.transaction('count', 'readwrite');
const store = transaction.objectStore('count');
const getRequest = store.get(1);
getRequest.onsuccess = function(event) {
const data = event.target.result;
const count = data ? data.count : 0;
document.getElementById('counter').textContent = count;
};
};
}
});
},
function (err) {
console.log("ServiceWorker registration failed: ", err);
}
);
});
}
</script>
</body>
</html>
```
En la aplicación el código mas importante es la parte de enviar el evento de incrementar
```javascript
document.getElementById("increase").addEventListener("click", () => {
navigator.serviceWorker.controller.postMessage('request_increment');
});
```
Y suscribirnos al evento "has_increment" para tomar el dato del local storage
```javascript
navigator.serviceWorker.addEventListener('message', function(event) {
if (event.data === 'has_increment') {
const request = indexedDB.open('APPDB', 1);
request.onsuccess = function(event) {
const db = event.target.result;
const transaction = db.transaction('count', 'readwrite');
const store = transaction.objectStore('count');
const getRequest = store.get(1);
getRequest.onsuccess = function(event) {
const data = event.target.result;
const count = data ? data.count : 0;
document.getElementById('counter').textContent = count;
};
};
}
})
```
Nota: Si quieres hacer alguna modificación se cauteloso porque los service workers son difíciles de depurar
La ventaja de este enfoque es que no se bloquea el main thread de JS ya que la parte de actualización de datos se hace directamente en el worker y que es hasta cierto punto agnóstica ya que podemos usar cualquier store accesible por un worker.
La desventaja es que tiende a ser un poco complicado
## Conclusión
Ambos enfoques funcionan, dependerá de la arquitectura de la aplicación el decidir cual usar
| dezkareid |
1,898,254 | Technical Dos attacks | Ethical Hacking - Dos attacks on different services. https://github.com/samglish/technicalDos The... | 0 | 2024-06-24T01:07:56 | https://dev.to/samglish/technical-dos-attacks-2982 | cybersecurity, ddos | Ethical Hacking - Dos attacks on different services.
[https://github.com/samglish/technicalDos](https://github.com/samglish/technicalDos)
**The different tools**
1. Metasploit
2. Nmap NSE
3. Exploit database
4. Scapy
**DOS/DDOS categories**
- Session abuse.
- Attacks based on packet volume.
- Protocol-based attacks.
- Attacks based on the application layer.
**The tools we are going to use**
- Low Orbit Ion Cannon
- THC SSL DOS
- Scapy
- Slowloris
- https://upordown.ultrawebhosting.com/
**let's try it**
_1st tool is a website : https://upordown.ultrawebhosting.com/_

I will check if my site is available or not following service denial attacks. https://samglishinc.000webhostapp.com

we see that the website is available.
**THC SSL DOS**
```
thc-ssl-dos
```
```
______________ ___ _________
\__ ___/ | \ \_ ___ \
| | / ~ \/ \ \/
| | \ Y /\ \____
|____| \___|_ / \______ /
\/ \/
http://www.thc.org
Twitter @hackerschoice
Greetingz: the french underground
./thc-ssl-dos [options] <ip> <port>
-h help
-l <n> Limit parallel connections [default: 400]
```
how to use : thc-ssl-dos ip_target --accept
i want to test my website: let's see ip adress
run this command
```
dmitry samglishinc.000webhostapp.com
```
Output
```
Deepmagic Information Gathering Tool
"There be some deep magic going on"
HostIP:145.14.145.210
HostName:samglishinc.000webhostapp.com
Gathered Inet-whois information for 145.14.145.210
---------------------------------
```
```bash
thc-ssl-dos 145.14.145.210 --accept
```
Output
```bash
Waiting for script kiddies to piss off................
The force is with those who read the source...
Handshakes 0 [0.00 h/s], 1 Conn, 0 Err
Handshakes 4[4.310 h/s], 2 Conn, 0 Err
```
# Scapy
```bash
scapy
```
Output
```bash
INFO: Can't import PyX. Won't be able to use psdump() or pdfdump().
aSPY//YASa
apyyyyCY//////////YCa |
sY//////YSpcs scpCY//Pp | Welcome to Scapy
ayp ayyyyyyySCP//Pp syY//C | Version 2.4.4
AYAsAYYYYYYYY///Ps cY//S |
pCCCCY//p cSSps y//Y | https://github.com/secdev/scapy
SPPPP///a pP///AC//Y |
A//A cyP////C | Have fun!
p///Ac sC///a |
P////YCpc A//A | Craft packets like I craft my beer.
scccccp///pSP///p p//Y | -- Jean De Clerck
sY/////////y caa S//P |
cayCyayP//Ya pY/Ya
sY/PsY////YCc aC//Yp
sc sccaCY//PCypaapyCP//YSs
spCPY//////YPSps
ccaacs
using IPython 8.18.1
>>>
```
We will send a packet with a TTL 0, it is a malformed packet which will create confusion for the target server then a service denial we will send millions of requests.
Format `end(dst="ip", ttl=0)/TCP(),iface="",count=2000)`
see your ip_adress and nerwork interface
```bash
ifconfig
```
`adresse cible` :
`malformed packet` : use TTL 0
`packet volume`: 2000
`interface` : wlo1
```bash
INFO: Can't import PyX. Won't be able to use psdump() or pdfdump().
aSPY//YASa
apyyyyCY//////////YCa |
sY//////YSpcs scpCY//Pp | Welcome to Scapy
ayp ayyyyyyySCP//Pp syY//C | Version 2.4.4
AYAsAYYYYYYYY///Ps cY//S |
pCCCCY//p cSSps y//Y | https://github.com/secdev/scapy
SPPPP///a pP///AC//Y |
A//A cyP////C | Have fun!
p///Ac sC///a |
P////YCpc A//A | Craft packets like I craft my beer.
scccccp///pSP///p p//Y | -- Jean De Clerck
sY/////////y caa S//P |
cayCyayP//Ya pY/Ya
sY/PsY////YCc aC//Yp
sc sccaCY//PCypaapyCP//YSs
spCPY//////YPSps
ccaacs
using IPython 8.18.1
>>>send(IP(dst="145.14.145.210", ttl=0)/TCP(),iface="wlo1",count=2000)
```
```
................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
Sent 2000 packets.
```
for more information visit: http://sdz.tdct.org/sdz/manipulez-les-paquets-reseau-avec-scapy.html
# Low Orbit Ion Cannon (LOIC)
Install LOIC
create folder `Loic`
```
mkdir Loic
```
```
cd Loic/
```
```
git clone https://github.com/nicolargo/loicinstaller.git
```
```
cd loicinstaller/
```
```
./loic.sh
```
Usage: ./loic.sh <install|update|run>
```
./loic.sh install
```
run
```
./loic.sh run
```


Output
```
New configuration template added to /home/samglish/.siege
Run siege -C to view the current settings in that file
SIEGE 4.0.7
Usage: siege [options]
siege [options] URL
siege -g URL
Options:
-V, --version VERSION, prints the version number.
-h, --help HELP, prints this section.
-C, --config CONFIGURATION, show the current config.
-v, --verbose VERBOSE, prints notification to screen.
-q, --quiet QUIET turns verbose off and suppresses output.
-g, --get GET, pull down HTTP headers and display the
transaction. Great for application debugging.
-p, --print PRINT, like GET only it prints the entire page.
-c, --concurrent=NUM CONCURRENT users, default is 10
-r, --reps=NUM REPS, number of times to run the test.
-t, --time=NUMm TIMED testing where "m" is modifier S, M, or H
ex: --time=1H, one hour test.
-d, --delay=NUM Time DELAY, random delay before each request
-b, --benchmark BENCHMARK: no delays between requests.
-i, --internet INTERNET user simulation, hits URLs randomly.
-f, --file=FILE FILE, select a specific URLS FILE.
-R, --rc=FILE RC, specify an siegerc file
-l, --log[=FILE] LOG to FILE. If FILE is not specified, the
default is used: /var/log/siege.log
-m, --mark="text" MARK, mark the log file with a string.
between .001 and NUM. (NOT COUNTED IN STATS)
-H, --header="text" Add a header to request (can be many)
-A, --user-agent="text" Sets User-Agent in request
-T, --content-type="text" Sets Content-Type in request
-j, --json-output JSON OUTPUT, print final stats to stdout as JSON
--no-parser NO PARSER, turn off the HTML page parser
--no-follow NO FOLLOW, do not follow HTTP redirects
Copyright (C) 2018 by Jeffrey Fulmer, et al.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE.
```
```
siege samglishinc.000webhostapp.com
```
```
{ "transactions": 9842,
"availability": 99.93,
"elapsed_time": 442.90,
"data_transferred": 9.92,
"response_time": 1.06,
"transaction_rate": 22.22,
"throughput": 0.02,
"concurrency": 23.49,
"successful_transactions": 7646,
"failed_transactions": 7,
"longest_transaction": 38.89,
"shortest_transaction": 0.35
}
``` | samglish |
1,898,252 | Day 5: Collaborating with GitHub for DevOps | Welcome to Day 5 of our 90 Days of DevOps journey! Today, we'll explore GitHub collaboration, master... | 0 | 2024-06-24T01:02:14 | https://dev.to/arbythecoder/day-5-collaborating-with-github-for-devops-5oo | github, git, devops, cloud | Welcome to Day 5 of our 90 Days of DevOps journey! Today, we'll explore GitHub collaboration, master essential Git commands, and tackle real-life challenges in a straightforward manner.
#### Git Essentials for DevOps
**Mastering Git Commands:**
- **git init:** Start a new Git repository.
- **git add:** Add changes to the staging area.
- **git commit:** Record changes to the repository with a descriptive message.
- **git push:** Upload local repository changes to a remote repository.
- **git pull:** Fetch and integrate changes from a remote repository.
- **git branch:** Manage branches for parallel development.
- **git merge:** Combine changes from different branches.
- **git checkout:** Switch branches or restore working tree files.
- **git clone:** Create a local copy of a remote repository.
#### Collaborating Efficiently on GitHub
**Setting Up Multiple GitHub Accounts:**
- Organize your projects by creating separate GitHub accounts for personal, work, and testing purposes. This helps maintain clarity and prevents accidental mix-ups.
**Inviting Collaborators:**
- Learn how to invite others to contribute to your GitHub repository and manage their access permissions effectively.

**Forking and Cloning Repositories:**
- Fork repositories to experiment with changes without affecting the original project. Clone repositories locally to work on them using Git commands.
**Creating Pull Requests:**
- Submit your modifications to the original repository via pull requests. Use this feature for code review and approval before merging changes.
#### Addressing Real-life Challenges
**Challenge 1: Managing GitHub Accounts**
- Balancing multiple GitHub accounts can be tricky. Ensure you use different credentials and configure Git properly to avoid confusion.
**Solution:**
- Set up SSH keys and configure Git to use specific credentials for each repository. This ensures you push changes to the correct repository without errors.
**Challenge 2: Collaborative Workflow**
- Handling merge conflicts and divergent code branches when collaborating with a team.
**Solution:**
- Establish clear branching strategies and merge policies. Conduct regular code reviews and automate tests to maintain code quality. Effective communication through comments and pull request discussions is key.
#### Fun and Learning Together
Navigating GitHub and Git commands can be both challenging and rewarding in your DevOps journey. Embrace the learning process, experiment with different workflows, and always strive for continuous improvement.
See you on Day 6 as we dive into Docker containerization! It's been a journey, and we're just getting started. Keep pushing forward—remember, it's not always easy, but we keep moving forward!
| arbythecoder |
1,898,251 | [Game of Purpose] Day 36 | Today I played around with sample projects: Lyria and Matrix. I wanted to see how professional game... | 27,434 | 2024-06-24T00:58:15 | https://dev.to/humberd/game-of-purpose-day-36-38ko | gamedev | Today I played around with sample projects: Lyria and Matrix. I wanted to see how professional game devs structure their code/blueprints. And oh boy I've seen so many nodes I can't wrap my head around. It took a lot of time for me to understand a function with 30 nodes, which would take me a minute if it was code.
Anyway, I learned that configuring Blueprints is done by Data Assets, where you can put many properties of different type. It's just like a cpp struct. | humberd |
1,898,247 | Dev challenge - Algorithms | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-24T00:47:34 | https://dev.to/marimnz/dev-challenge-algorithms-ldi | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
Algorithms are like a list with the necessary step-by-step instructions to solve a problem in the best way. It's like a good cake recipe, in which you can't bake it without first adding the ingredients
## Additional Context
Simple concept, but fundamental to all code | marimnz |
1,898,242 | Introduction to Game AI Development | Introduction In recent years, the development of artificial intelligence (AI) has... | 0 | 2024-06-24T00:34:33 | https://dev.to/kartikmehta8/introduction-to-game-ai-development-3oa4 | webdev, javascript, beginners, programming | ## Introduction
In recent years, the development of artificial intelligence (AI) has revolutionized the gaming industry. AI is used to create intelligent and lifelike characters in video games, making them more challenging and engaging. Game AI development involves using algorithms and techniques to simulate human-like behavior and decision-making in games. In this article, we will explore the advantages and disadvantages of game AI development and its features.
## Advantages of Game AI Development
One of the biggest advantages of game AI development is that it enhances the overall gaming experience. AI-powered characters can adapt and respond to the player's actions, making the gameplay more immersive and unpredictable. It also allows developers to create more complex and challenging game levels, keeping players engaged for longer periods. AI can also improve the replay value of games by creating different outcomes for each playthrough.
## Disadvantages of Game AI Development
One of the major disadvantages of game AI development is the cost and time involved. Developing sophisticated AI systems requires a significant amount of resources and expertise. Additionally, AI can also have bugs and glitches, which can negatively impact the gaming experience.
## Features of Game AI Development
Game AI development has several features that make it an essential aspect of game development. These include pathfinding, decision-making, and learning abilities.
1. **Pathfinding:** Pathfinding algorithms allow AI characters to navigate through game environments efficiently. This is crucial for creating realistic movements and tactics.
```python
# Example of a simple pathfinding algorithm in Python
def find_path(start, goal, grid):
path = []
current = start
while current != goal:
# Simplified example: move one step towards the goal
current = (current[0] + (goal[0] - current[0]) // abs(goal[0] - current[0] if current[0] != goal[0] else 0),
current[1] + (goal[1] - current[1]) // abs(goal[1] - current[1] if current[1] != goal[1] else 0))
path.append(current)
return path
```
2. **Decision-Making:** Decision-making algorithms simulate human-like responses and actions, enhancing the realism of AI behaviors.
```python
# Pseudocode for a decision-making algorithm
if enemy_close():
if health_low():
retreat()
else:
attack()
else:
patrol()
```
3. **Learning Abilities:** Learning abilities enable AI to adapt and improve based on the player's behavior, making the game more challenging and engaging.
```python
# Example of learning ability using a simple reinforcement learning model
update_strategy_based_on_outcome(previous_action, outcome)
```
## Conclusion
Overall, game AI development has revolutionized the gaming industry and continues to evolve with advancements in technology. While it has its own set of challenges, the benefits of game AI far outweigh the drawbacks. With the constant development in this field, we can expect even more lifelike and intelligent game characters in the future.
| kartikmehta8 |
1,897,702 | Important things to know about the anchor tag <a> | Most of us are familiar with using the anchor tag to link to other pages on the web, but there's so... | 0 | 2024-06-23T23:35:39 | https://dev.to/douiri/important-things-to-know-about-the-anchor-tag-2hhi | html, webdev, learning, beginners | Most of us are familiar with using the anchor tag to link to other pages on the web, but there's so much more to this versatile element that often goes unnoticed by beginners. In this article, we'll explore some of the lesser-known features and functionalities of the anchor tag that can enhance your HTML skills and web development projects.
## href attribute
The URL that the hyperlink points to. which can be one of these schemes:
- HTTP URL:
```html
<a href="https://douiri.org">read more</a>
```
- targeting specific id by using # sign:
```html
<a href="#content">skip to main content</a>
<main id="content">
</main>
```
- a piece of media using media fragments
```html
<a href="https://example.com/video.mp4#t=30,60">Watch from 30 to 60 seconds</a>
```
- a text fragment with this syntax
```
https://example.com#:~:text=[prefix-,]textStart[,textEnd][,-suffix]
```
try this example to see how it works in action: [https://douiri.org/blog/css-floating-label/#:~:text=support](https://douiri.org/blog/css-floating-label/#:~:text=support)
- telephone, email, or sms
```html
<a href="mailto:drisspennywise@gmail.com">send email</a>
<a href="tel:+212651501766">call me</a>
<a href="sms:+212651501766">send SMS</a>
```
## download attribute
The `download` attribute instructs the browser to download the linked resource instead of navigating to its URL, provided the resource is from the same origin or uses `:blob` or `:data` schemes. You can either specify the desired file name or allow the browser to determine the appropriate name and extension.
```html
<a href="/videos/video.mp4" download>download video</a>
<a href="/cat-4321.png" downalod="cat.png">download image</a>
```
## rel attribute
The rel attribute accepts multiple values and can be used with various elements. While [you can view the full list here](https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/rel), I want to focus on the values that control search engine crawlers: `nofollow`, `ugc`, and `sponsored`.
```html
<a href="https://example.com" rel="nofollow">some link</a>
```
- `nofollow`: indicates that the link should not pass ranking credit (i.e., Non-Endorsed)
- `ugc`: indicates that the link is user-generated content (i.e., comments, posts...)
- `sponsored`: indicates that the link is a sponsored content.
| douiri |
1,898,150 | Meet Chappy: Your Friendly, Quirky Chat Buddy - Next.js, Twilio API, Google Gemini, Assembly AI, & MongoDB, Hosted on Vercel | This is a submission for Twilio Challenge v24.06.12 Welcome to Chappy, your versatile chat... | 0 | 2024-06-24T00:08:58 | https://dev.to/ketanrajpal/chappy-your-friendly-and-quirky-chat-buddy-14pl | devchallenge, twiliochallenge, ai, twilio | *This is a submission for [Twilio Challenge v24.06.12](https://dev.to/challenges/twilio)*
Welcome to Chappy, your versatile chat companion! You can access Chappy through your web browser or directly via WhatsApp, enjoying seamless synchronization from WhatsApp to WebApp.
Chappy is a true multitasker. Not only can it read images, text, and voice chats, but it also excels in providing detailed and accurate responses to your enquiries. Whether you snap a photo of a document, send a voice message, or type out a question, Chappy can interpret and understand the content, delivering helpful information in return.
Moreover, Chappy is designed to hold meaningful conversations. It remembers the context of your previous interactions, which allows it to offer more personalized and relevant responses. This context-awareness ensures that Chappy can follow complex discussions and provide coherent answers, making your experience smoother and more intuitive.
### The WhatsApp Bot is crafted with:
- Twilio WhatsApp API
- Next.js API
- Transcribe API for deciphering voice notes
- Vercel for hosting
- MongoDB for storing all your chats
- Google Gemini 1.5 Flash Model for advanced capabilities
### The Web Application shines with:
- Next.js for a smooth user experience
- Twilio Verify API for secure login
- Twilio WhatsApp API to send chats from the web app to WhatsApp
- Vercel for reliable hosting
- MongoDB to keep all your conversations safe
- Google Gemini 1.5 Flash Model for enhanced functionality
- Speech recognition and text-to-speech conversion, powered by Polly
## Demo
_**Please be patient with the chatbot as it may take some time to respond. I’m utilizing freely available AI models and haven’t subscribed to any paid services.**_
WhatsApp Twilio Sandbox

Web App: <https://chappy-lyart.vercel.app>
Github Repo: <https://github.com/ketanrajpal/chappy>
### Screenshots





## Twilio and AI
We leveraged Twilio's capabilities in conjunction with advanced AI technologies to create an efficient and seamless chat experience with Chappy. Here's how:
- **Twilio WhatsApp API:** The Twilio WhatsApp API is at the core of Chappy's communication framework. It enables smooth and reliable messaging services between users and Chappy on WhatsApp. By integrating this API, Chappy can send and receive text, images, and voice chats, ensuring that users can interact with Chappy in a versatile and intuitive manner.
- **Transcribe API:** The Transcribe API is crucial for deciphering voice notes. When a user sends a voice message, the API transcribes the audio into text, which Chappy can then process and respond to accurately. This feature enhances accessibility and convenience, allowing users to communicate with Chappy using voice.
- **Google Gemini 1.5 Flash Model:** The integration of the Google Gemini 1.5 Flash Model equips Chappy with advanced AI capabilities. This model enhances Chappy's ability to understand and process natural language, making responses more accurate and contextually relevant. It also powers features like speech recognition and text-to-speech conversion, enriching the overall user experience.
- **Speech Recognition and Text-to-Speech Conversion:** Powered by Polly, these features allow Chappy to convert spoken language into text and vice versa. Users can send voice messages that Chappy transcribes and responds to in text, or they can receive spoken responses from Chappy, making interactions more dynamic and accessible.
By combining Twilio's robust messaging APIs with cutting-edge AI technologies, we've created a powerful, versatile, and user-friendly chat companion in Chappy.
## Additional Prize Categories
- **Twilio Times Two**: Integrates Twilio WhatsApp API for messaging and Twilio Verify API for secure login, showcasing versatile use of Twilio's capabilities.
- **Impactful Innovators**: Enhances user experience through seamless communication across WhatsApp and WebApp platforms, potentially benefiting accessibility and usability.
- **Entertaining Endeavors**: While primarily functional, its advanced AI integration could lead to creative user interactions and applications, potentially qualifying it for innovative and engaging user experiences. | ketanrajpal |
1,900,589 | Bye Copilot - How to Create a Local AI Coding Assistant for Free | TLDR: Create your own local AI Coding Assistant that integrates with VS Code. AI Coding... | 0 | 2024-06-26T23:22:03 | https://www.davegray.codes/posts/bye-copilot-how-to-create-a-local-ai-coding-assistant-for-free | ai, githubcopilot, codingassistant, codeassistant | ---
title: Bye Copilot - How to Create a Local AI Coding Assistant for Free
published: true
date: 2024-06-24 00:00:00 UTC
tags: ai,copilot,codingassistant,codeassistant
canonical_url: https://www.davegray.codes/posts/bye-copilot-how-to-create-a-local-ai-coding-assistant-for-free
cover_image: https://raw.githubusercontent.com/gitdagray/my-blogposts/main/images/bye-copilot-how-to-create-a-local-ai-coding-assistant-for-free.png
---
**TLDR:** Create your own local AI Coding Assistant that integrates with VS Code.
## AI Coding Assistants
I didn't jump onboard the AI Coding Assistant train at first.
However, I now open up a chat with [ChatGPT](https://chatgpt.com/) as often as I do MDN.
I've found it's often a quicker reference for exactly what I need.
I have considered paying the monthly fee for [GitHub Copilot](https://github.com/features/copilot), but what if you could use open source large language models (LLMs) to create your own AI Coding Assistant?
Now you can.
## Install Ollama
Start by going to [Ollama.com](https://ollama.com/) and downloading the version for your operating system.
Note: As I write this, the Windows version is still considered a "preview". After installing on Windows, I had to restart my computer for `ollama` to be available at the command line.
Open a terminal window and type `ollama --help` to confirm you have Ollama installed and your computer can find it.
## Pick an Open Source LLM
Visit the [EvalPlus Leaderboard](https://evalplus.github.io/leaderboard.html) where the performance of many models are compared.
There are currently a couple of options in the EvalPlus Top 5 to consider: `DeepSeek-Coder-v2` and `CodeQwen1.5`.
Before choosing, go back to [Ollama.com](https://ollama.com/) and search the models to look at their details.
I personally decided to go with [codeqwen](https://ollama.com/library/codeqwen). It is a 4.2GB download and `deepseek-coder-v2` is 8.9GB.
You can try out several LLMs if you want to.
After choosing at least one, copy the `ollama` command and run it in your terminal window to download your LLM of choice.
For example, the command on the [codeqwen](https://ollama.com/library/codeqwen) page is `ollama run codeqwen`.
## Add the Continue VS Code Extension
Open up VS Code and click the extensions menu icon.
Search for `continue`.
You should find `Continue` by [continue.dev](https://www.continue.dev/).
Install the extension and you should find it by icon or name in the activity bar afterwards.
Click it to open and you should see a splash screen.
The screen will confirm Ollama is installed and provide other recommendations that you can ignore.
Click the "Local" button and the "next / continue" button.
You should now have a chat screen that opened over your filetree (if your filetree is on the left side).
## Configuring Continue
At the bottom of the chat window, select `Ollama - codeqwen:latest` from the menu.
Click the settings icon to open up the `config.json` file.
Look for the `tabAutocompleteModel` setting.
Change both the `title` and `model` values to `codeqwen`.
You will also see a `custom commands` setting like this:
```json
"customCommands": [
{
"name": "test",
"prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
"description": "Write unit tests for highlighted code"
}
],
```
You can add your own custom commands here. In this example, if you highlight a function in your code and type `test` in the chat window, it will execute the prompt you see.
Here's an example of a custom command I added:
```json
"customCommands": [
{
"name": "test",
"prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
"description": "Write unit tests for highlighted code"
},
{
"name": "step",
"prompt": "{{{ input }}}\n\nExplain the selected code step by step.",
"description": "Code explanation"
}
],
```
## Getting Started:
Click the help icon at the bottom right of the chat window.
It provides a link to a tutorial, web resources and keyboard shortcuts.
A couple of quick things to try:
- Select some existing code in your file, then press `Ctrl+L` to start a chat on the selected code.
- Select some existing code in your file, then type `test` in the chat window, and press enter to generate unit tests for that code.
- Create a new empty code file, press `Ctrl+i` and type instructions for code generation. Then watch your AI Code Assistant generate the code.
- Start typing code in a file and look for the autocompletion suggestions. Press tab to use them.
<hr />
## Let's Connect!
Hi, I'm Dave. I work as a full-time developer, instructor and creator.
If you enjoyed this article, you might enjoy my other content, too.
**My Stuff:** [Courses, Cheat Sheets, Roadmaps](https://courses.davegray.codes/)
**My Blog:** [davegray.codes](https://www.davegray.codes/)
**YouTube:** [@davegrayteachescode](https://www.youtube.com/davegrayteachescode)
**X:** [@yesdavidgray](https://x.com/yesdavidgray)
**GitHub:** [gitdagray](https://github.com/gitdagray)
**LinkedIn:** [/in/davidagray](https://www.linkedin.com/in/davidagray/)
**Patreon:** [Join my Support Team!](patreon.com/davegray)
**Buy Me A Coffee:** [You will have my sincere gratitude](https://www.buymeacoffee.com/davegray)
Thank you for joining me on this journey.
Dave | gitdagray |
1,898,130 | Iteration - a Stream Generator for Recursive Minds | The Iterators For anyone having been in programming a couple of years, chances are you've... | 0 | 2024-06-23T23:54:23 | https://dev.to/fluentfuture/iteration-a-stream-generator-m75 | java, stream, functional | ## The Iterators
For anyone having been in programming a couple of years, chances are you've run into an interview question or an assignment like [iterator for binary tree](https://stackoverflow.com/questions/12850889/in-order-iterator-for-binary-tree).
There are variations like for the "pre-order", "post-order" or "in-order" traversal; or sometimes you got a n-ary tree instead of a bnary tree.
A reason this kind of questions are used in interviews is because they require some meticulous bookkeeping, and there are loads of edge cases to bite you if you aren't careful.
Fine, it's 2024 and we have `Stream` now. How about having some fun and creating an _infinite_ stream to generate the Fibonacci sequence?
The one thing in common? They are all **boringly painful** to write and read. Playing with them for the interview is cool; but having to read them in real life is no fun.
## Recursion is easy
Imagine you are writing in-order traversal the most naive way:
```java
void traverseInOrder(Tree tree) {
if (tree.left != null) traverseInOrder(tree.left);
System.out.println(tree.value);
if (tree.right != null) traverseInOrder(tree.right);
}
```
```java
void fibonacci(int a, int b) {
System.out.println(a);
fibonacci(b, a + b);
}
fibonacci(0, 1);
```
Except the hardcoding of `System.out.println()`, and never mind the stack overflow caused by the infinite recursion, at least it's easy, right?
Why can't we have some meat - scratch that - why can't we keep the simplicity and just fix the hardcoding and the stack overflow?
## Design the API
This is my favorite part: to dream about what I would really really want, if everything just magically works.
Let's see... I don't want the hardcoding, so maybe this?
```java
void traverseInOrder(Tree tree) {
if (tree.left != null) traverseInOrder(tree.left);
generate(tree.value);
if (tree.right != null) traverseInOrder(tree.right);
}
```
```java
void fibonacci(int a, int b) {
generate(a);
fibonacci(b, a + b);
}
fibonacci(0, 1);
```
The imagined `generate()` call will put the elements in the final stream.
It doesn't solve the stack overflow though.
The very nature of an infinite stream means that it has to be lazy. The caller could decide to use `.limit(50)` to only print the first 50 numbers and the recursive call shouldn't progress beyond the first 50 numbers.
So I need something that delays the recursion call. The idiomatic way in Java to model laziness is to wrap it in a callback interface. So let's create one:
```java
interface Continuation {
void evaluate();
}
```
And then I need a method similar to the `generate()` call but accepts a `Continuation`. The syntax should look like:
```java
void traverseInOrder(Tree tree) {
if (tree.left != null) {
yield(() -> traverseInOrder(tree.left));
}
generate(tree.value);
if (tree.right != null) {
yield(() -> traverseInOrder(tree.right));
}
}
```
I stole the `yield` keyword from other languages like C# with native generator support (it means to yield control to the runtime and also yield the evaluation result into the stream). Recent Java versions have made it a reserved word so you'll likely need to call it using `this.yield()` instead.
By wrapping the recursive call in a lambda passed to `yield()`, I should get the effect of lazy evaluation. The implementation's job will be to ensure the right order between the directly generated elements and the lazily generated ones.
In summary, the draft API we have come up with so far:
```java
class Iteration<T> {
void generate(T element);
void yield(Continuation continuation);
/** Turns the Iteration into a stream */
Stream<T> iterate();
}
```
In case it wasn't obvious, we need the `iterate()` method to package it all up as a lazy stream.
The final syntax we want:
```java
class InOrderTraversal<T> extends Iteration<T> {
InOrderTraversal<T> traverseInOrder(Tree tree) {
if (tree.left != null) {
this.yield(() -> traverseInOrder(tree.left));
}
generate(tree.value);
if (tree.right != null) {
this.yield(() -> traverseInOrder(tree.right));
}
return this;
}
}
Stream<T> stream =
new InOrderTraversal<T>().traverseInOrder(tree).iterate();
```
```java
class Fibonacci extends Iteration<Integer> {
Fibonacci startFrom(int a, int b) {
generate(a);
this.yield(() -> startFrom(b, a + b));
return this;
}
}
Stream<Integer> stream =
new Fibonacci().startFrom(0, 1).iterate();
```
## Making it
It's nice to have a dream once in a while. We'll be able to use such API to turn many recursive algorithms _trivially_ into lazy streams.
But aside from polishing up an API _surface_ I like, there is nothing in concrete. How do I make it actually work?
First of all, if the thing only needs to handle `generate()`, it'd be a pointlessly trivial wrapper around a `Queue<T>`.
But we need to handle `yield()` with its lazy evaluation. Imagine we are effectively running this sequence:
```java
generate(1);
generate(2);
yield(() -> { // line 3
generate(3);
generate(4);
});
generate(5); // line 7
```
At line 3, can we enqueue the Continuation into the same queue before moving on to line 7?
If I do so, after line 7, the queue will look like `[1, 2, Continuation, 5]`.
Now if we call `iterate()` and start to consume elements from the stream, `1` and `2` will be popped out, and then the `Continuation` object, with the number `5` still remaining in the queue.
Once the `Continuation` object is consumed, it needs to be evaluated, which will in turn generate `3` and `4`. The question is where to put the two numbers?
We can't keep enqueueing them after the number `5` because it'll be out of order; we can't treat the queue as a stack and push them in FILO order because then we'll get `[4, 3, 5]`.
## Stack or Queue?
There are several ways to go about this.
One possibility is to create a stack of queues, where each time a `Continuation` is to be evaluated, push a new queue onto the stack. The stream will always consume elements from the top queue of the stack.
The downside is that you might end up creating and discarding many instances of `ArrayDeque`, which can be wasteful.
With some micro-optimization in mind, another approach is to use the [two-stacks-as-a-queue](https://stackoverflow.com/questions/69192/how-to-implement-a-queue-using-two-stacks) trick.
The are two stacks: the `inbox` for writing and `outbox` for reading. When either `generate()` or `yield()` is called, we push into the
`inbox` stack; when the stream tries to consume, it flushes everything out of `inbox` into the `outbox` and then consumes one-by-one from the `outbox`.
To put it in context, upon seeing `[Continuation, 5]` in the `outbox`, the `Continuation` is evaluated, which puts `[4, 3]` in the `inbox` stack.
On the other side, the stream tries to consume. It pops and pushes `[4, 3]` onto the `outbox` stack, resulting in `[3, 4, 5]`.
Implmentation-wise, we get to allocate no more than two `ArrayDeque`s.
```java
public class Iteration<T> {
private final Deque<Object> outbox = new ArrayDeque<>();
private final Deque<Object> inbox = new ArrayDeque<>(8);
public final <T> void generate(T element) {
if (element instanceof Continuation) {
throw new IllegalArgumentException("Do not stream Continuation objects");
}
inbox.push(element);
}
public final void yield(Continuation continuation) {
inbox.push(continuation);
}
private T consumeNextOrNull() {
for (; ; ) {
Object top = poll();
if (top instanceof Continuation) {
((Continuation) top).evaluate();
} else {
return (T) top;
}
}
}
private Object poll() {
Object top = inbox.poll();
if (top == null) { // nothing in inbox
return outbox.poll();
}
// flush inbox -> outbox
for (Object second = inbox.poll();
second != null;
second = inbox.poll()) {
outbox.push(top);
top = second;
}
return top;
}
}
```
The `poll()` private method does the "flush inbox into outbox" logic mentioned above.
The `consumeNextOrNull()` method consumes the next element from the two-stack-queue, and evaluates `Continuation` when it sees one.
## Wrap it all up
If you are with me so far, all we are missing is the `iterate()` method that wraps it all up as a `Stream`.
I'll cheat by just using Mug's [`whileNotNull()`](https://google.github.io/mug/apidocs/com/google/mu/util/stream/MoreStreams.html#whileNotNull(java.util.function.Supplier)) convenience method. But it's not hard to create your own by implementing a `Spliterator`.
```java
public Stream<T> iterate() {
return MoreStreams.whileNotNull(this::consumeNextOrNull);
}
```
And with that, our little generic recursive stream generator is complete.
## Use it for real
Before we call it the day, let's see if we can use it to solve a more realistic problem.
Say, you are calling a `ListAssets` API that supports pagination. Request and response definitions are:
```
ListAssetsRequest {
string userId;
int page_size;
string page_token; // start from this page
}
ListAssetsResponse {
List<Asset> assets;
string next_page_token; // empty if no more
}
```
If we were to naively fetch all pages and print them, it'll be as simple as sending request over and over again until the `response.next_page_token` is empty:
```java
void showAllAssets(String userId) {
ListAssetsRequest.Builder request =
ListAssetsRequest.newBuilder().setUserId(userId);
for (; ;) {
var response = assetsApi.listAssets(request.build());
for (Asset asset : response.getAssets()) {
System.out.println(asset);
}
if (response.getNextPageToken().isEmpty()) {
return; // no more pages
}
request.setPageToken(response.getNextPageToken());
}
}
```
But we can do better! Let's wrap it up as a lazy `Stream<Asset>` to give callers more flexibility. For example they can consume any number of assets and stop when needed without over-fetching pages unnecessarily.
```java
Stream<Asset> listAssets(String userId) {
class Pagination extends Iteration<ListAssetsResponse> {
Pagination startFrom(String pageToken) {
var response = assetsApi.listAssets(
ListAssetsRequest.newBuilder()
.setUserId(userId)
.setPageSize(100)
.setPageToken(pageToken)
.build());
generate(response);
if (!response.getNextPageToken().isEmpty()) {
this.yield(() -> startFrom(response.getNextPageToken()));
}
return this;
}
}
return new Pagination().startFrom("").iterate()
.flatMap(response -> response.getAssets().stream());
}
```
I trust you can read it alright, my friend. :)
| fluentfuture |
1,898,229 | AI Journal App with WhatsApp Integration | This is a submission for Twilio Challenge v24.06.12 What I Built A mobile app that uses... | 0 | 2024-06-23T23:41:06 | https://dev.to/preveenraj/journal-app-with-ai-capabilities-and-whatsapp-integration-23dh | devchallenge, twiliochallenge, ai, twilio | *This is a submission for [Twilio Challenge v24.06.12](https://dev.to/challenges/twilio)*
## What I Built
A mobile app that uses user-provided text, emojis and photos through WhatsApp to generate personalised daily journals. Journals are illustrated with user-uploaded photos or AI-generated images based on the user's input. Users can edit the AI-generated text and adjust the source material to refine their summaries.
### Target Users
People who want a creative way to capture and reflect on their daily experiences.
## Demo
<!-- Share a link to your app and include some screenshots here. -->
{% embed https://www.youtube.com/watch?v=6dZHY8tdwJA %}
## Screenshots
* Whatsapp Chat with texts

* Journal App Home

* Journal View AI generated story

* All Journals

* Edit AI Generated Journal

* Edit User Messages

## Twilio and AI
<!-- Tell us how you leveraged Twilio’s capabilities with AI -->
Our app utilizes Twilio's WhatsApp and SMS API to seamlessly receive your daily messages. These messages are then aggregated and analyzed by Google's Gemini AI model through the Vertex AI SDK. This AI magic transforms your WhatsApp conversations into a beautiful, personalized daily journal, accessible directly within the app.
## Additional Prize Categories
* Impactful Innovators
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
### Contributors:
Preveen Raj: https://dev.to/preveenraj
Roshan Shibu: https://dev.to/roshanshibu
| preveenraj |
1,898,230 | Join Me in Building an Exciting Tic-Tac-Toe Web App! | Hi everyone, I'm developing a React-based Tic-Tac-Toe web app and looking for passionate graphic... | 0 | 2024-06-23T23:36:24 | https://dev.to/mstechgeek/join-me-in-building-an-exciting-tic-tac-toe-web-app-3m0o | Hi everyone,
I'm developing a **React-based Tic-Tac-Toe web app** and looking for passionate graphic designers and front-end developers with JavaScript and React experience to join me!
### Current Features:
- 🎮 Basic game logic for 3x3 Tic-Tac-Toe
- 🌐 Progressive Web App (PWA) implementation
- 🔊 Sound effects
- 🏆 Score tracker
### Upcoming Features:
- 🤖 Player vs. computer mode
- 🧑🤝🧑 UI screen for player selection
- 🕹️ Different game boards like 4x4 and 5x5
Check out the app here: [Tic-Tac-Toe Web App](https://tic-tac-toe-ms-tech-geek.netlify.app/)
This project is a fantastic opportunity to **learn, share knowledge, and build something awesome together**. If you're excited about contributing and gaining hands-on experience, let's team up!
**Feel free to reach out to me directly via [LinkedIn](https://www.linkedin.com/in/mayank-sethi/).**
Comment below or DM me if you're interested.
Best,
Mayank | mstechgeek | |
1,897,385 | ⏱️ Mobitag Go Hackathon 2024-06-22 week-end 🤓 | 📢 Context Last week, optnc published the following post : ... | 27,823 | 2024-06-23T23:25:19 | https://dev.to/adriens/mobitag-go-hackathon-2024-06-22-week-end-2n16 | api, hackathon, showdev, go | ## 📢 Context
Last week, [optnc](https://dev.to/optnc) published the following post :
{% embed https://dev.to/optnc/mobitagnc-25-ans-plus-tard-des-sms-en-saas-via-apigee-2h9e %}
👉 This new blog post is all about **the inspiration it brought me the very next days.**
## 🎯 Week-end Hackathon pitch
I really love to the find the most efficient way to pitch an idea, as often **visual management is a great way to achieve this**:
{% twitter 1804325979093102830 %}
## ⏱️ hackathon, 2h. timeboxing & office hours
After that, on the next saturday morning, I launched myself within a **2 hours hackathon** to **estimate what could be achieved during a very strict and short timeboxed session so I could organize one later on office hours**, and how I could animate that session.
## 🍿 Demo
{% youtube yVoMg7CXgaM %}
## 🤔 Learning paths considerations
After the first working prototype I felt happy with that but... I
started to dig a bit further around Go best practices, tooling, libs.
> _"The more I saw possibilities the more it made me want to learn more."_
So my conclusion would be that using genAI to start dirty-coding from scratch a new programming language is really great as **it makes things work within a very short amount of time** : perfect for a hackathon.
It helps produce _"something that runs"_ for demo purpose.
Then it made me thing about two categories:
- **For those who just want to discover a programming language**, build a PoC and make their opinion : it's a great way, for example to teach to code... **while producing something that does more than "Hello World"**
- **For those who felt curious** and want to go further in discovering : it's very fun
In my case I felt the need to make things better looking, automating and discovering Go language and benefits. I think I will :
- **Go on discovering `Go`** language and tools
- Probably test new programming languages, **go on coding new ideas from scratch with GenAi too and see what happens**
## 🔖 Resources
GitHub milestone : [⏱️ Hackathon week-end du 2024-06-22 🤓](https://github.com/opt-nc/mobitag/milestone/1) | adriens |
1,892,580 | How to Build a Vue App To Show Your GitHub Repositories | Less than a year ago, I was a novice in programming with no background in computer science which is... | 0 | 2024-06-23T23:19:35 | https://dev.to/sheisbukki/how-to-build-a-vue-app-to-show-your-github-repositories-jef | vue, javascript, ionic, github | Less than a year ago, I was a novice in programming with no background in computer science which is why building this app was challenging but fulfilling. So, if you’re a beginner like me and everything seems overwhelming, I hope this motivates you to keep reading, practising, and building!
Prerequisites for building along:
1. Have at least a basic understanding of HTML, CSS, and JavaScript
2. Know how to use a code editor
3. Be familiar with the command line and terminal
4. Have Node.js installed on your machine — Vite, the build tool we will use to build this project, runs on Node.js
5. Know how to use a package manager like npm, pnpm, yarn, or bun
That said, if you are not a complete beginner with Vue.js, you can jump right into the [section where we build the app](#lets-build-our-mobile-responsive-vue-app).
---
## A Comprehensive Guide to Setting Up a Vue Project
Vue.js is a flexible JavaScript framework that gives developers the freedom to build projects in different ways.
You can start writing Vue code or set up a Vue project using any of these methods:
1. In an HTML file with the content delivery network (CDN)
2. Through the Vue CLI (Webpack/Babel)
3. Scaffolding a Vue project with Vite
### Quick Start: Writing Vue Code in an HTML File
You can create an index.html file and add a CDN script to learn and get familiar with Vue syntax.
- Create a folder in your pc
- Create an index.html file
- Open the file, add the HTML boilerplate, and an empty `<script>` tag that will contain your Vue code block
- Then add the following script: `<script src=“https://unpkg.com/vue@3/dist/vue.global.js”></script>`
Adding this CDN script turns your HTML file into a kind of Vue file, so you can use it to play around with Vue code.
Sample Vue code in an HTML file:
```html
<!-- CDN script to start using Vue without any build tools -->
<script src="https://unpkg.com/vue@3/dist/vue.global.js"></script>
<div id="app">
<p>{{ message }}</p>
<p>My name is {{ name }}</p>
</div>
<script>
Vue.createApp({
data() {
return {
name: "Tolu",
message: "Hello Vue!",
};
},
}).mount("#app");
</script>
```
Note: it’s not advisable to try to build a real Vue project using this approach because writing Vue in an HTML file has some limitations. One of which is that this approach doesn’t take advantage of the modularity Vue offers.
### Writing Vue Code Using the Vue CLI (Webpack/Babel)
[](https://cli.vuejs.org)
Although the Vue CLI is currently in maintenance mode, you can use it to learn about creating and reusing components in a Vue project.
Here’s how to set up a Vue project using the Vue CLI:
- For this, you have to install Vue globally into your machine. Use this command `npm install -g @vue/cli` OR `yarn global add @vue/cli`
- Create a directory where you will set up the project. You can do this from your terminal with the following command — `mkdir {insert a new folder}` OR `cd {an existing folder}`
- Open the directory using the following command — `cd {insert the new folder}` if you just created a new folder
- Use this command in your terminal — `vue create {insert-your-project-name}`
This will scaffold a new Vue CLI project with the name you choose. Here’s the folder tree of a sample Vue CLI project with the name ‘vue-cli-app’.

While your project directory is open in your terminal, use the command; `npm run serve` to see the newly created demo project in the browser.
### Scaffolding a Vue Project With Vite
If you want to enjoy the full benefits of the Vue.js framework, particularly for building larger projects, it’s better to write Vue code in single file components (SFCs). Each SFC will be written inside a .vue file. Writing Vue code in .vue files will give you access to a better development environment.
Regardless of the Vue API you use to build a Vue project, all Vue files have the same elements:
1. A `<template>` tag that will contain your typical HTML content
2. A `<script>` tag that contains your Vue logic
3. A `<style>` tag that will contain your CSS styling
You might be wondering why we need to use Vite to build a Vue SFCs project, and there’s a perfectly good answer for your curiosity. The browser can only understand .html, .css, and .js files. Any code not written in those three coding languages must be compiled into HTML, CSS and JavaScript for the browser to run the code. Vite is a build tool that helps us to compile our code before sending it to the browser.
Now, let’s scaffold a new Vue project using Vite. Similar to the Vue CLI approach, we will also do this from the terminal using the following commands:
- `mkdir {insert a new directory name}` OR `cd {insert an existing directory}` in your terminal — this is where the project will live
- `cd {insert the new directory name}` if you just created the directory
- `npm create vue@latest` OR `pnpm create vue@latest` OR `yarn create vue@latest` OR `bun create vue@latest`
- You will be prompted to provide the project name, and you can choose a name like ‘my-vue-app'
- You will then be prompted to answer No or Yes questions for options such as ‘Add TypeScript?’ and ‘Add Pinia for state management?’. Once you become more familiar with Vue, you will know which extra technologies/tools to add to your Vue project. But for now, choose ‘No’ for all the options
Once that is done, your terminal will prompt you to run the following commands:
- `cd {insert-your-project-name}` — this command opens your project folder
- `npm install` — this command installs the required dependencies to make your Vue project work
- `npm run dev` — and this command starts up a development server showing the scaffolded demo app provided by Vite in the browser, which you can see via localhost:5173

Ensure to run these commands in the same order. First, `cd` to your project folder, then `npm install` if you used npm to initiate the project. If you used yarn, bun, or pnpm, use their commands instead.
Your project directory will have a similar folder tree:

You can look around and delete files you will not be using to build this project such as the files in the components and the assets folders. You can also delete the content of the App.vue and main.js files to write your code in them.
---
## Options API vs Composition API
There are two different ways to write Vue code, the Options API and the Composition API. While you can use both the Options API and Composition API in the same project, it’s not conventional to mix both APIs. Besides, sticking to one of the approaches in a project will make your code more readable than switching back and forth between Options API and Composition API.
### Example

### What difference do you notice between both code blocks?
First, you will see that the `<script>` tag in the Composition API code has a `setup` attribute, and we also imported `ref` and `onErrorCaptured`. On the other hand, the Options API code is wrapped in an `export default` code block. Additionally, the Options API code has a `data` property. There are other differences to note between both APIs, and you will see more as we build this GitHub repository project.
### Which API Should You Use to Build Your Vue Project?
If you’re wondering which API is better to build your Vue project, the truth is that you can use anyone. Start by learning one of the APIs and build a simple project like the one we will build in this article. Then, learn the other API, and also build a simple project with it or recreate an existing Options API Vue project with Composition API and vice versa.
Another concern you might have, particularly about the Options API, is whether it will be deprecated in the near future. Here’s what Evan You, the creator of Vue.js, has to say about it.

So, while you can use the Options API to build your project, it helps to be familiar with both APIs. This article will show you how to build this project with both APIs. And remember not to use both APIs in the same project.
---
## Let’s Build Our Mobile Responsive Vue App
The elements and functionalities our Vue app will have are:
- A home page
- A ‘NavBar’ — for both large viewports and small viewports
- A ‘NotFound’ page, which is similar to an error 404 page
- Error boundary to catch and report errors
- Pagination
- Routing and nested routing using params
- Fetching data from the GitHub API and displaying the content
### Creating the Vue Instance and Mounting the App
We write the following code in the main.js file:
```js
import { createApp } from ‘vue’
import App from ‘./App.vue’
const app = createApp(App)
app.mount(‘#app’)
```
The first line imports the Vue createApp instance.
The second line imports the App.vue component. The App.vue is the main component that will house all other components because unlike HTML, Vue apps can only have one page. All other pages within a Vue app are displayed to the browser through routing.
The third line creates the Vue instance variable in our app, and the fourth line mounts the app. Note that we can create the Vue instance and mount our app in one line of code using this `createApp(App).mount(‘#app’)`. But this can make our code messy as we add more code to the main.js file. Instead, we use lines three and four, which give us more flexibility.
The `app.mount(‘#app’)` line must always be the last line of code in the main.js file of a Vue project because the app gets mounted once you write that line. Any lines of code after it will not reflect in the app.
### Create the Components for the App
This app will have the following components:
- A HomePage.vue
- A NavBar.vue
- A NotFound.vue
- An ErrorBoundary.vue
- A component for displaying all the repositories — RepoCards.vue, you can use any name you like
- A component for displaying a single repository — SingleRepo.vue, you can use any name you like. This component will be a nested route in the RepoCards component
Create all these files inside the components folder, located in the src folder of the project.
Note: if you don’t have enough GitHub repositories to build this project, fetching data from the GitHub API might be pointless. Instead, you can fetch data from a dummy API such as [RandomDataAPI](https://random-data-api.com). You also don't have to create the RepoCards.vue and SingleRepo.vue components, you can create DemoCards.vue and singleContent.vue components.
---
### Setting Up the Vue Router in Your App
The NavBar component will link to different URLs within the app, and the app will also have a page that has a nested route. We need to set up the Vue router to enable this routing functionality. To use the Vue router in your project, install it into the project directory using this command: `npm install vue-router@4`
Once installed, create a folder with the name 'router' inside your components folder, and create an index.js file inside the router folder.
Write the following code inside the index.js file:
```js
import { createRouter, createWebHistory } from ‘vue-router’
import HomePage from '../components/HomePage.vue'
import ErrorBoundary from '../components/ErrorBoundary.vue'
import NotFound from '../components/NotFound.vue'
import RepoCards from '../components/RepoCards.vue'
import SingleRepo from ‘../components/SingleRepo.vue'
const router = createRouter({
history: createWebHistory(),
routes: [
{
path: '/',
alias: '/home',
component: HomePage,
name: 'Home',
meta: { title: 'Home Page', description: 'Home page' }
},
{
path: '/errorBoundary',
component: ErrorBoundary,
name: 'ErrorBoundary Page',
meta: { title: 'ErrorBoundary', description: 'Test error boundary' }
},
{
path: '/:pathMatch(.*)*',
alias: '/error404',
component: NotFound,
name: 'NotFound',
meta: { title: 'NotFound', description: `The page doesn't exist` }
},
{
path: '/repoCards',
component: RepoCards,
name: 'Repository Cards',
meta: {
title: 'Repository Cards',
description: 'All repositories'
}
},
{
path: '/singleRepo/:name',
component: SingleRepo,
name: 'Single Repository',
meta: {
title: 'Expanded Repository',
description: 'A repository in view'
}
}
]
})
export default router
```
Breakdown of the code:
- The first line imports the `createRouter` and `createWebHistory` from the Vue router. `createWebHistory` allows the router to keep track of the web history which is how you can go back and forward within a web app on the browser.
- Next, we import all the components we need to route with the Vue router.
- Then, we create the router variable with all the routes it will house. The `path` is the URL for each page, and some paths have aliases. The `name` is the name we gave each component. The `meta` adds SEO for each component as a page in the browser.
- The HomePage’s `path` is the default `path` — the default view, of the app, it will be the page users will land on upon visiting the web app URL.
- The NotFound’s `path` is a `params` of regular expression (regex) that matches all page routing errors to capture when users try to visit a page that doesn’t exist.
- You will notice that the `path` for the SingleRepo component is different from the rest, that’s because it’s a nested route that takes a params we defined as `:name`.
- Finally, we export the router to make it accessible to other files in the project.
We can’t use our router just yet. We need to import it into the main.js file. Edit the main.js file with the following code:
```js
import router from ‘./router’
const app = createApp(App)
app.use(router)
app.mount(‘#app’)
```
Now, we can use the router across the components in the project.
### Adding the Ionic Framework as a Dependency To Your Vue App
At this point, we need to bring in our UI framework, and I will be using the Ionic framework. So let’s add it to the project as a dependency. You can use any UI framework you prefer. Or you can skip this part and create the UI & functionality with HTML elements and pure CSS. Don’t worry, the app will function with or without a third-party UI framework. However, as a frontend developer, it helps to be familiar with using third-party UI frameworks like Ionic, Chakra, and ShadCN.
According to the Ionic framework [documentation](https://ionicframework.com/docs/intro/cdn#ionic--vue), here’s how to add Ionic to our existing Vue project:
- Install Ionic into the project folder via the terminal using this command — `npm install @ionic/vue @ionic/vue-router`
- Next, import it into your main.js file like so:
```js
import { IonicVue } from '@ionic/vue';
import App from './App.vue';
import router from './router';
const app = createApp(App).use(IonicVue)
app.use(router);
router.isReady().then(() => {
app.mount('#app');
});
```
- Since Ionic requires us to import routing dependencies from `@ionic/vue-router` instead of `vue-router`, we will also edit the existing routing code. Go to the index.js file in your router folder and edit it with the following:
```js
import { createRouter, createWebHistory } from '@ionic/vue-router';
const router = createRouter({
history: createWebHistory(process.env.BASE_URL),
routes: [
// routes go here
];
});
export default router;
```
Note: leave your routes array unchanged from the setup earlier. The only changes: we now import the `createRouter` and `createWebHistory` from `@ionic/vue-router`, and we add `process.env.BASE_URL` as an argument in the `createWebHistory`.
Finally, let’s import the necessary CSS provided by Ionic into the main.js file
```js
/* Core CSS required for Ionic components to work properly */
import '@ionic/vue/css/core.css';
/* Basic CSS for apps built with Ionic */
import '@ionic/vue/css/normalize.css';
import '@ionic/vue/css/structure.css';
import '@ionic/vue/css/typography.css';
/* Optional CSS utils that can be commented out */
import '@ionic/vue/css/padding.css';
import '@ionic/vue/css/float-elements.css';
import '@ionic/vue/css/text-alignment.css';
import '@ionic/vue/css/text-transformation.css';
import '@ionic/vue/css/flex-utils.css';
import ‘@ionic/vue/css/display.css';
```
I didn’t use any of the Optional CSS utils for this project, so you can delete them. I also didn’t use the `import @ionic/vue/css/structure.css` for this project, and you can also delete it. And we are set to use Ionic elements and styling going forward.
Remember, if you don’t want to use the Ionic framework, don’t do anything in this section to avoid breaking your code during build.
---
### Building a Responsive NavBar For Your Vue App
The first component we will import into the App.vue is the 'NavBar'. Open the NavBar.vue component in the components folder, and add the `<script>`, `<template>` and `<style>` tags to the NavBar file. We will create the Options API and Composition API versions of this component.
### Options API Version `<script>` Element of the NavBar Component
Write the following code inside the `<script>` tag of the NavBar
```js
import { IonIcon } from '@ionic/vue'
export default {
components: {
IonIcon
},
data() {
return {
windowWidth: window.innerWidth,
windowHeight: window.innerHeight,
isOpen: false,
ionIconStyle: {
fontSize: '64px',
color: '#000',
'--ionicon-stroke-width': '16px'
}
}
},
mounted() {
window.addEventListener('resize', this.handleWindowSizeChange)
},
beforeUnmount() {
window.removeEventListener('resize', this.handleWindowSizeChange)
},
methods: {
handleWindowSizeChange() {
this.windowWidth = window.innerWidth
this.windowHeight = window.innerHeight
}
}
}
```
A brief breakdown of the code:
- The first line imports an icon from Ionic, which we will use in the `<template>`. Then we add it as a component to the export default code block.
- The `data()` property contains logic that will help us create a responsive menu bar in place of the navbar when the user’s viewport is less than 768px.
- Next, we use the `mounted` and `beforeUnmount` lifecycle hooks to add and remove `eventListeners` that will take effect as the window width changes.
- In Vue Options API, the `methods` property contains functions, and in this code block, we added a function that handles the window size changes.
#### Composition API Version `<script>` Element of the NavBar Component
```js
<script setup>
import { ref, onMounted, onBeforeUnmount } from 'vue'
import { IonIcon } from '@ionic/vue'
const windowWidth = ref(window.innerWidth)
const windowHeight = ref(window.innerHeight)
const isOpen = ref(false)
const ionIconStyle = ref({
fontSize: '64px',
color: '#000',
'--ionicon-stroke-width': '16px'
})
const handleWindowSizeChange = () => {
windowWidth.value = window.innerWidth
windowHeight.value = window.innerHeight
}
onMounted(() => {
window.addEventListener('resize', handleWindowSizeChange)
})
onBeforeUnmount(() => {
window.removeEventListener('resize', handleWindowSizeChange)
})
</script>
```
While the Composition API is more flexible and remains the future of Vue.js, it doesn’t give us access to built-in features such as the lifecycle hooks without importing them. So, you will see that we had to import the lifecycle hooks in the Composition API version of the NavBar component. Additionally, we import a `ref` component that does the work of the `data()` property with a twist — we don’t use the ‘this’ keyword.
#### Finally, Let’s Add Content to the NavBar `<template>` Tag
```js
<template>
<div class="navBarContainer">
<nav v-if="windowWidth > 768" class="largeViewportNav">
<header>
<h1>GitHub Repo Explorer</h1>
</header>
<ul class="navLink">
<li><router-link class="navButton" to="/">Home</router-link></li>
<li><router-link class="navButton" to="/error404">Test NotFound</router-link></li>
<li>
<router-link class="navButton" to="/errorBoundary">Test ErrorBoundary</router-link>
</li>
</ul>
</nav>
<nav v-else class="smallViewportNav">
<header>
<h1>GitHub Repo Explorer</h1>
<ion-icon
@click="isOpen = !isOpen"
src="/menu-outline.svg"
aria-label="MenuBar"
aria-hidden="true"
size="large"
name="'menu-outline'"
:style="ionIconStyle"
></ion-icon>
</header>
<transition name="fade" appear>
<ul v-if="isOpen" class="navMenuItems">
<li>
<router-link class="navButton" to="/">Home</router-link>
</li>
<li>
<router-link class="navButton" to="`/error404`">Test Not Found</router-link>
</li>
<li>
<router-link class="navButton" to="/errorBoundary">Test Error Boundary</router-link>
</li>
</ul>
</transition>
</nav>
</div>
</template>
```
You can see that instead of using the `<a>` tag to add relative paths, we used the `<router-link>` provided by the Vue-router. The Vue-router also gives us a `<routerView>` tag we can use to display the components on the browser, and we will see how it works when we edit the App.vue file next.
I used scoped styling in all the components and global styling in the App.vue file. If you want to see the style rulesets I used in the NavBar.vue component and the rest, you can check the source code on [GitHub](https://github.com/SheIsBukki/GitHub-Repo-Vue-App).
#### Add the NavBar Component to the App.vue
```js
<!-- OPTIONS API VERSION -->
<script>
import NavBar from './components/NavBar.vue'
export default {
components: {
'nav-bar': NavBar
}
}
</script>
<!-- COMPOSITION API VERSION -->
<!-- <script setup>
import NavBar from './components/NavBar.vue'
</script> -->
<template>
<div>
<nav-bar />
<!-- <NavBar /> -->
</div>
</template>
```
We can’t see our NavBar in the browser just yet because the App.vue doesn’t have a default view. Remember we made the HomePage component the default page of the app. To make the App.vue display the default view in the browser, we need to add the routerView to the App.vue `<template>` tag like so:
```js
<template>
<div>
<!-- OPTIONS API VERSION —>
<nav-bar />
<router-view></router-view>
<!-- COMPOSITION API VERSION —>
<!-- <NavBar /> —>
<!-- <routerView></routerView> -->
</div>
</template>
```
---
### Fetching and Displaying Data From the GitHub API in a Vue App
The RepoCards.vue component, which will display my public GitHub repositories, will be imported into the HomePage component. Before we do that, let’s write the code. We will also write the Options API and Composition API versions of this component.
#### Options API Version `<script>` Element of the RepoCards Component
```js
<!-- OPTIONS API VERSION -->
<script>
import {
IonButton,
IonCard,
IonCardContent,
IonCardHeader,
IonCardSubtitle,
IonCardTitle
} from '@ionic/vue'
export default {
components: { IonButton, IonCard, IonCardContent, IonCardHeader, IonCardSubtitle, IonCardTitle },
data() {
return {
repos: [],
currentPage: 1,
reposPerPage: 2,
windowWidth: window.innerWidth,
windowHeight: window.innerHeight,
displayBlock: {
display: 'block'
},
displayGrid: {
display: 'grid',
'grid-template-columns': '1fr 1fr'
}
}
},
methods: {
async fetchRepos() {
try {
const response = await fetch('https://api.github.com/users/sheisbukki/repos')
this.repos = await response.json()
} catch (error) {
console.log('Error fetching repositories:', error)
throw error
}
},
previousPageButton() {
if (this.currentPage !== 1) this.currentPage--
},
nextPageButton() {
if (this.currentPage !== Math.ceil(this.repos.length / this.reposPerPage)) this.currentPage++
},
paginationNumbers(pageNumber) {
this.currentPage = pageNumber
},
handleWindowSizeChange() {
this.windowWidth = window.innerWidth
this.windowHeight = window.innerHeight
}
},
mounted() {
this.fetchRepos()
window.addEventListener('resize', this.handleWindowSizeChange)
},
beforeUnmount() {
window.removeEventListener('resize', this.handleWindowSizeChange)
},
computed: {
paginatedRepos() {
const indexOfLastRepo = this.currentPage * this.reposPerPage
const indexOfFirstRepo = indexOfLastRepo - this.reposPerPage
return this.repos.slice(indexOfFirstRepo, indexOfLastRepo)
},
pageNumbers() {
const pageNumbers = []
for (let i = 1; i <= Math.ceil(this.repos.length / this.reposPerPage); i++) {
pageNumbers.push(i)
}
return pageNumbers
}
}
}
</script>
```
A brief breakdown of the code:
- We import a few components from Ionic to create UI elements for this component such as the `IonCard` which will create card elements where we will display each repository.
- The `repos` in the `data()` property is an empty array that will store the repositories fetched from the GitHub API.
- The `currentPage` and `reposPerPage` variables help create the pagination element for the cards. The `currentPage` defines the page the pagination functionality will start from, while the `reposPerPage` defines the number of repositories each page should have.
- The remaining variables in the `data()` property help us make the cards mobile responsive.
- The `async fetchRepos` in the `methods` property is an asynchronous function that will try to fetch data from the GitHub API, particularly fetch my public repositories from GitHub. If successful, the data will be sent into the `repos` array, otherwise, it will throw an error.
- The `previousPageButton`, `nextPageButton` functions handle the pagination buttons, while the `paginationNumbers` function defines the page number of the current page for the cards.
- The `handleWindowSizeChange` function in the `methods` property handles the window size changes. The `mounted` and `beforeUnmount` lifecycle hooks add and remove `eventListeners` that will take effect as the window width changes. We also call the `fetchRepos` inside the `mounted` lifecycle hook.
- The `computed` property contains two functions, `paginatedRepos` and `pageNumbers` which depend on the variables created in the `data` property — `repos`, `currentPage`, and `reposPerPage`. The `paginatedRepos` computed property returns a new array of paginated repositories, which we loop through to create the repository cards in the `<template>` element later. While the `pageNumbers` computed property returns an array of page numbers for each page of the pagination.
The `computed` property, although similar to the `methods` property, are used like the `data` property but is dynamic. The `computed` property updates automatically when a dependency changes. For example, if the size of the `repos` in this project increases or reduces, the `pageNumbers` returned from the computed property `pageNumbers` will also change. Similarly, if you have more than 15 public repositories, you can assign the `reposPerPage` key the value of 5, and this will reflect in both computed properties.
#### Composition API Version `<script>` Element of the RepoCards Component
```js
<!-- COMPOSITION API VERSION -->
<script setup>
import { ref, onMounted, onBeforeUnmount, computed } from 'vue'
import {
IonButton,
IonCard,
IonCardContent,
IonCardHeader,
IonCardSubtitle,
IonCardTitle
} from '@ionic/vue'
const repos = ref([])
const currentPage = ref(1)
const reposPerPage = ref(2)
const windowWidth = ref(window.innerWidth)
const windowHeight = ref(window.innerHeight)
const displayBlock = ref({
display: 'block'
})
const displayGrid = ref({
display: 'grid',
'grid-template-columns': '1fr 1fr'
})
const fetchRepos = async function () {
try {
const response = await fetch('https://api.github.com/users/sheisbukki/repos')
repos.value = await response.json()
} catch (error) {
console.log('Error fetching repositories:', error)
throw error
}
}
const previousPageButton = () => {
if (currentPage.value !== 1) currentPage.value--
}
const nextPageButton = () => {
if (currentPage.value !== Math.ceil(repos.value.length / reposPerPage.value)) currentPage.value++
}
const paginationNumbers = (pageNumber) => {
currentPage.value = pageNumber
}
const handleWindowSizeChange = () => {
windowWidth.value = window.innerWidth
windowHeight.value = window.innerHeight
}
onMounted(() => {
fetchRepos()
})
onBeforeUnmount(() => {
window.removeEventListener('resize', handleWindowSizeChange)
})
const paginatedRepos = computed(() => {
const indexOfLastRepo = currentPage.value * reposPerPage.value
const indexOfFirstRepo = indexOfLastRepo - reposPerPage.value
return repos.value.slice(indexOfFirstRepo, indexOfLastRepo)
})
const pageNumbers = computed(() => {
const pageNumbers = []
for (let i = 1; i <= Math.ceil(repos.value.length / reposPerPage.value); i++) {
pageNumbers.push(i)
}
return pageNumbers
})
</script>
```
#### Finally, Let’s Add Content to the RepoCards `<template>` Tag
```js
<template>
<main>
<p v-if="!repos">Loading...</p>
<div v-else>
<section :style="windowWidth > 768 ? displayGrid : displayBlock" class="repoCardsContainer">
<ion-card class="repoCard" color="dark" v-for="repo in paginatedRepos" :key="repo.id">
<ion-card-header>
<ion-card-title>{{ repo.name }}</ion-card-title>
<ion-card-subtitle> Main language: {{ repo.language }} </ion-card-subtitle>
</ion-card-header>
<ion-card-content>{{ repo.description }}</ion-card-content>
<ion-button fill="clear">
<router-link :to="`/singleRepo/${repo.name}`">View more</router-link>
</ion-button>
</ion-card>
</section>
<section class="reposPagination">
<ul class="paginationButtonsContainer">
<ion-button
class="paginationButton"
aria-label="Previous page"
fill="outline"
shape="round"
@click="previousPageButton"
>«</ion-button
>
<li class="paginationButton" v-for="number in pageNumbers" :key="number">
<ion-button fill="outline" shape="round" @click="paginationNumbers(number)">{{
number
}}</ion-button>
</li>
<ion-button
class="paginationButton"
aria-label="Next page"
fill="outline"
shape="round"
@click="nextPageButton"
>»</ion-button
>
</ul>
</section>
</div>
</main>
</template>
```
A brief breakdown of the code:
- The `v-if=“!repos”` attribute and value is a v-if directive that will check if the `repos` array is null/falsy, and will return the `<p>` element if so. Otherwise, the `v-else` directive which is also added like an attribute in the `<div>` element will execute.
- For the repository cards, I used the Ionic `<ion-card>` element and used the `v-for` directive to loop through the `paginatedRepos` created earlier and return each repository in a card. The shorthand v-bind `:` is used to bind the `key` attribute written as `:key=“repo.id”`, giving each repository card a unique ID, the same as the one provided by GitHub.
- There’s an `<ion-button>` element in the `<ion-card>` element that contains a nested `<router-link>`. The `<router-link>` has a shorthand v-bind directive which binds the `to` attribute written as `:to=“`/singleRepo/${repo.name}`”`. This is the nested route within each RepoCards component, defined in the router `path: ‘/singleRepo/:name’`, and the custom params for the nested route is `${repo.name}`.
- The `<section class=“reposPagination>` element creates the UI for the pagination functionality we created earlier. The `<ul>` element in it holds the pagination buttons using the Ionic `<ion-button>` elements which contain the `previousPageButton` and `nextPageButton`. The `<ul>` element also contains a `<li>` element which loops through the `pageNumbers` earlier.
#### Add the RepoCards Component to the HomePage.vue
```js
<!-- OPTIONS API VERSION -->
<script>
import RepoCards from ‘./RepoCards.vue'
import ErrorBoundary from './ErrorBoundary.vue'
export default {
components: {
'repo-cards': RepoCards,
errorBoundary: ErrorBoundary
}
}
</script>
<!-- COMPOSITION API VERSION -->
<!-- <script setup>
import RepoCards from ‘./RepoCards.vue'
import ErrorBoundary from './ErrorBoundary.vue'
</script> -->
<template>
<ErrorBoundary>
<repo-cards />
</ErrorBoundary>
</template>
```
Note: we imported the ErrorBoundary component, why? We used it to wrap the RepoCards component, to monitor and report if there are any errors.
By now, your app should have a functional NavBar, and the HomePage should display the data you fetched from the GitHub API or RandomDataAPI in cards.
### Implementing Nested Routes in a Vue Project
The SingleRepo component is already nested in the RepoCards but, it’s not functional yet, so let’s fix that. We will also write the Options API and Composition API versions.
#### Options API Version `<script>` Element of the SingleRepo Component
```js
<!-- OPTIONS API VERSION -->
<script>
import {
IonButton,
IonCard,
IonCardContent,
IonCardHeader,
IonCardSubtitle,
IonCardTitle,
IonLabel,
IonItem,
IonList
} from '@ionic/vue'
import ErrorBoundary from './ErrorBoundary.vue'
export default {
components: {
IonButton,
IonCard,
IonCardContent,
IonCardHeader,
IonCardSubtitle,
IonCardTitle,
IonLabel,
IonItem,
IonList,
errorBoundary: ErrorBoundary
},
data() {
return {
repo: null
}
},
methods: {
fetchSingleRepo() {
fetch(`https://api.github.com/repos/sheisbukki/${this.$route.params.name}`)
.then((response) => response.json())
.then((data) => {
this.repo = data
})
.catch((error) => {
console.error(error)
})
},
////THIS WORKS, JUST DECIDED TO USE THE ONE ABOVE
// async fetchSingleRepo() {
// try {
// const response = await fetch(
// `https://api.github.com/repos/sheisbukki/${this.$route.params.name}`
// )
// this.repo = await response.json()
// } catch (error) {
// console.log('Error fetching repositories:', error)
// throw error
// }
// },
regularDate(dateValue) {
return new Date(dateValue).toLocaleDateString('en-uk', {
year: 'numeric',
month: 'short',
day: 'numeric'
})
}
},
mounted() {
this.fetchSingleRepo()
}
}
</script>
```
A brief breakdown of the code:
- The ErrorBoundary component is also imported to wrap the SingleRepo component, to monitor and report if there are any errors.
- Any time the ‘View more’ button in the RepoCards component is clicked, the `fetchSingleRepo` function in the `methods` property of the SingleRepo component will indeed fetch the data of the specific repository clicked. Why two functions? Well, how else can we learn how to fetch API data using different approaches?
- Notice any difference with the API the SingleRepo component is fetching from? It is also the GitHub API, but this time it uses the custom route params we defined in the router and specified in the RepoCards component to fetch data from my repositories.
- The `regularDate` function inside the `methods` property converts the ISO date returned from GitHub to a date format people can easily understand.
#### Composition API Version `<script>` Element of the SingleRepo Component
```js
<!-- COMPOSITION API VERSION -->
<script setup>
import { ref, onMounted } from 'vue'
import { useRoute } from 'vue-router'
import {
IonButton,
IonCard,
IonCardContent,
IonCardHeader,
IonCardSubtitle,
IonCardTitle,
IonLabel,
IonItem,
IonList
} from '@ionic/vue'
import ErrorBoundary from './ErrorBoundary.vue'
const repo = ref(null)
const route = useRoute()
const fetchIndividualRepo = function () {
fetch(`https://api.github.com/repos/sheisbukki/${route.params.name}`)
.then((response) => response.json())
.then((data) => {
repo.value = data
})
.catch((error) => {
console.error(error)
})
}
onMounted(() => {
fetchIndividualRepo()
})
const regularDate = (dateValue) => {
return new Date(dateValue).toLocaleDateString('en-uk', {
year: 'numeric',
month: 'short',
day: 'numeric'
})
}
</script>
```
Note: Unlike the Options API, you have to import the built-in `useRoute` component from Vue-router to enable custom route params in Composition API.
#### Finally, Let’s Add Content to the SingleRepo `<template>` Tag
```js
<template>
<ErrorBoundary>
<main>
<p v-if="!repo">Loading...</p>
<div v-else>
<h1>Repository</h1>
<section>
<ion-card color="dark">
<ion-card-header>
<ion-card-title>{{ repo.name }}</ion-card-title>
<ion-card-subtitle>
<strong>Main language:</strong> {{ repo.language }}
</ion-card-subtitle>
</ion-card-header>
<ion-card-content>
{{ repo.description }}
</ion-card-content>
<ion-card-content>
<ion-list>
<ion-item>
<em>Created on: </em>
<ion-label> {{ regularDate(repo.created_at) }}</ion-label>
</ion-item>
<ion-item>
<em>Pushed on: </em>
<ion-label> {{ regularDate(repo.pushed_at) }}</ion-label>
</ion-item>
<ion-item>
<em>Last updated on: </em>
<ion-label>{{ regularDate(repo.updated_at) }}</ion-label>
</ion-item>
</ion-list>
</ion-card-content>
<div class="cardFooter">
<ion-button fill="clear">
<a :href="repo.html_url">View source code</a>
</ion-button>
<em v-if="!repo.homepage">No live site</em>
<ion-button v-else fill="clear">
<a :href="repo.homepage">Visit live site</a>
</ion-button>
</div>
</ion-card>
</section>
<footer :style="{ 'text-align': 'center' }">
<ion-button fill="outline" shape="round" size="small"
><router-link to="/">Go back</router-link></ion-button
>
</footer>
</div>
</main>
</ErrorBoundary>
</template>
```
You can see that the `<ErrorBoundary>` element wraps the SingleRepo’s `<template>` element. This is enabled because the ErrorBoundary passes a `<slot>` in place of any components it is used to wrap.
---
### Error Handling in Vue Using the ErrorBoundary.vue and NofFound.vue Components
First, let’s write the code for the ErrorBoundary Component, and this will also include the Options API and Composition API versions.
#### Options API Version `<script>` Element of the ErrorBoundary Component
```js
<!-- OPTIONS API VERSION -->
<script>
export default {
data() {
return {
error: null,
errorInfo: '',
errorInstance: null
}
},
errorCaptured(error, instance, info) {
this.error = error
this.errorInfo = info
this.errorInstance = instance
console.log('error: ', error)
console.log('component Instance: ', instance)
console.log('errorSrcType: ', info)
return false
}
}
</script>
```
The `errorCaptured` is a lifecycle hook we can use to track errors that happen in a child component, which is why the ErrorBoundary component uses `<slot>` to represent the child components. Developers can use create ErrorBoundary and errorHandler components such as this to log errors or display errors to users.
#### Composition API Version `<script>` Element of the ErrorBoundary Component
```js
<!-- COMPOSITION API VERSION -->
<script setup>
import { ref, onErrorCaptured } from 'vue'
const error = ref(null)
const errorInfo = ref('')
const errorInstance = ref(null)
onErrorCaptured((error, instance, info) => {
error.value = error
errorInfo.value = info
errorInstance.value = instance
console.log('error: ', error)
console.log('component Instance: ', instance)
console.log('errorSrcType: ', info)
return false
})
</script>
```
#### Finally, Let’s Add Content to the ErrorBoundary `<template>` Tag
```js
<template>
<main>
<div v-if="error">
<h1>Something went wrong...</h1>
<pre>{{ error }}</pre>
<pre>{{ errorInstance }}</pre>
<pre>{{ errorInfo }}</pre>
</div>
<div v-else>
<slot></slot>
</div>
</main>
</template>
```
The ErrorBoundary component uses `<slot>` to pass down content to the SingleRepo and RepoCards components. Hence, if there’s an error in either child component, the component will display the passed down content defined with the `v-if=“error”` directive in the ErrorBoundary `<template>`.
We can also track routing errors and general errors in the app by adding the following code to the main.js file:
```js
router.onError((error) => {
console.log('Router error:', error)
})
app.config.errorHandler = (error, compInstance, info) => {
console.error('Error:', error)
console.error('Component Instance:', compInstance)
console.error('Error Info:', info)
}
```
#### The NotFound Component
This component only has the `<template>` and `<style scoped>` elements. Check the [source code](https://github.com/SheIsBukki/GitHub-Repo-Vue-App) of this project to see the styling for this component and the rest.
```js
<template>
<main>
<h1>Error 404</h1>
<p>Oops! Go back to <router-link to="/">Home</router-link></p>
</main>
</template>
```
And with this final touch, we have a fully functional Vue app.
---
### Final Thoughts
Building a Vue app can be a rewarding experience, especially for beginners. This article guided you through the essentials, from setting up a Vue project from scratch using different methods to implementing both Options and Composition APIs.
To recap, these are the concepts this article covers:
- Options API
- Composition API
- Scaffolding a Vue Project
- Vue Router
- Ionic Vue Framework
- Pagination with Vue
- Fetching data from API using JavaScript Fetch, and Async/Await
- Creating Reusable Vue Components
By completing this project, you will not only build a functional, mobile-responsive Vue app but also gain valuable skills that will serve you in future development endeavours. I sure learnt a lot from building this project.
Shout out to you for building along with me. You can check out the live app [here](https://git-hub-repo-vue-app.vercel.app). 🎉 | sheisbukki |
1,897,462 | Building BellyFull: A Meal Suggestion Bot to Help Satisfy Your Cravings | This is a submission for the Twilio Challenge What I Built Ever had a craving for a... | 0 | 2024-06-23T23:15:07 | https://dev.to/sands_44/building-bellyfull-a-meal-suggestion-bot-to-help-satisfy-your-cravings-2oii | devchallenge, twiliochallenge, ai, twilio | *This is a submission for the [Twilio Challenge ](https://dev.to/challenges/twilio)*
## What I Built
Ever had a craving for a particular meal, but had no idea where to find it or who sells it at a reasonable price?
Where I'm from, it's easy to find meals such as pizza, or maybe a burger and french fries, but harder to find which restaurant serves a great T-Bone steak.
To help to solve this issue, I created BellyFull, a simple AI powered chatbot that recommends meals from restaurants around the island of Nassau, Bahamas based off a user's input, using the `WhatsApp Sandbox mode` in `Twilio Programmable Messaging` along with the `Gemini AI API`.
### Technologies/Frameworks used
* **Twilio Programmable Messaging** along with a **Whatsapp Sandbox** serves as the communication medium for the end user and chatbot.
* **Gemini** is used to parse through restaurant menus to find meals, as well as maintain the state of the user/chatbot conversation.
* **Express.js** serves the Twilio webhook that's triggered when a user sends a message to the chatbot.
* **Supabase** is used to store restaurant information, menus, location and addresses.
* **Vercel** is used to host the chatbot
## Demo
### Here's a video of how it works
{% embed https://www.youtube.com/watch?v=Wa-bLQT-lXo %}
## How to try it out
You can either whatsapp the number `+14155238886` with the message `join-something-season` or scan the QR code below to get started.

Once you're connected, start typing to start the conversation.
When you're finished, type "stop" to disconnect from the sandbox.
## Twilio and AI
<!-- Tell us how you leveraged Twilio’s capabilities with AI -->
Twilio makes it very easy to create interactive experiences through various messaging platforms. With the help of Gemini AI, I was able to create a product that can create dynamic responses on the fly, based off of a user input. Instead of having to craft specific conditional statements to answer end users, I was able to structure a prompt and let generative AI handle the rest!
## Additional Prize Categories
I don't feel as if this submission meets any of the additional categories. I'd nonetheless be honoured if nominated for any of the other prizes. | sands_44 |
1,889,943 | Learning about CustomPaint in Flutter | Something like a week ago I started to research about "how I can manually draw things in the Flutter... | 0 | 2024-06-23T23:09:54 | https://dev.to/josuestuff/learning-about-custompaint-in-flutter-4j69 | flutter, ui, beginners, learning | Something like a week ago I started to research about "how I can manually draw things in the Flutter canvas" and found information about `CustomPaint` and `CustomPainter`, which seems to be the main Flutter mechanism to allow devs draw specific things.
I'm not sure if there're more options or alternatives to achieve that, so for now let's just talk about my experience with Flutter's `CustomPaint` widget.
## The Flutter art workshop, an analog perspective
Since the very first moment I started to learn Flutter, I inmediately noticed that it is, essentially, just an OpenGL canvas with a bunch of abstractions. I personally don't like that, not because of the abstractions themselves, but because using an OpenGL canvas doesn't feel... Native... Like, it's really easy to notice that.
And you may think: "Well, yeah, but if you think about that, every user interface is just an intangible abstraction, so... Who cares?". And yeah, you're right and I also agree with that, it's just a personal opinion and I've been learning how that approach for making user interfaces brings some benefits.
With that in mind, I ended up to the conclusion that Flutter it's likely an art workshop: it gives you canvas and a bunch of different brushes and tools to draw interfaces in a fast, simple and easy way. With this perspective, using an OpenGL canvas makes much more sense and helps bit to understand how `CustomPaint` works and it's more easy to explain it.
## Alright, let's move on a bit more
So, you are working on a Flutter project and there's something specific you want to do, but Flutter doesn't have a widget for that and there's only one solution: create it by yourself. Well, the way to do that is by using `CustomPaint`, a widget that gives you direct access to the canvas, allowing you to draw things, but with one requirement: you need to give it a `CustomPainter`, which is basically Flutter telling you "Fine, you can draw here, but you must give an specialized artist for this very specific job!".
## Creating a `CustomPainter`
At this point, you may be wondering what's a `CustomPainter` and how to create them, because if you try giving a literal `CustomPainter` to a `CustomPaint`, you'll get an error. That's because `CustomPainter` is an abstract class and we have to create a class that `extends` from `CustomPainter` instead. So, let's make it:
```dart
final class WavePainter extends CustomPainter {
@override
void paint(Canvas canvas, Size size) {
}
@override
bool shouldRepaint(CustomPainter oldDelegate) => false;
}
```
Now, let me explain this:
- Our class is named `WavePainter` because we're going to make a simple audio waveform widget (as in the banner of this article)
- All classes that `extends` from `CustomPainter` must override 2 methods:
- `paint`: this one is where we're going to draw
- `shouldRepaint`: and this one handles if `paint` should be called again
Pretty simple, right? The `paint` methods recieves a `Canvas` and a `Size` object for the drawing stuff and `shouldRepaint` recieves an old version of our `CustomPainter` to check for changes and, if so, return true to tell Flutter to re`paint` the widget, but we usually do this only if our widget has mutable data that is expected to change during the lifetime of our app.
## Starting to draw
Alright, so our `CustomPainter` looks good so far, we can already give it to a `CustomPaint`, however, it will not draw anything because we haven't code what to draw yet, so let's start by creating a _brush_ for our drawing:
```dart
final class WavePainter extends CustomPainter {
@override
void paint(Canvas canvas, Size size) {
// The Paint class is essentially a brush, so...
final brush = Paint()
// We set the color,
..color = Color(0xFFFAFAFA)
// the thickness of the line
..strokeWidth = 3
// and how we want them to end
..strokeCap = StrokeCap.round;
}
@override
bool shouldRepaint(CustomPainter oldDelegate) => false;
}
```
At this point, you should check out the [API reference for the Paint](https://api.flutter.dev/flutter/dart-ui/Paint-class.html) class to know what else you can do with it. Now, our brush is ready to use, so it's time to finish our drawing:
```dart
final class WavePainter extends CustomPainter {
@override
void paint(Canvas canvas, Size size) {
final brush = Paint()
..color = Colors.white
..strokeWidth = 3
..strokeCap = StrokeCap.round;
var shift = 0.0;
final verticalCenter = size.height / 2;
final values = List<double>.generate(100, (_) {
return Random().nextDouble() * verticalCenter;
});
for (var i = 0; i < values.length && shift < size.width; i++) {
canvas.drawLine(
Offset(shift, verticalCenter - values[i]),
Offset(shift, verticalCenter + values[i]),
brush
);
shift += 6;
}
}
@override
bool shouldRepaint(CustomPainter oldDelegate) => false;
}
```
If we put that inside of a `CustomPaint`, we should get something like this:

## Understanding what's going on
Now, again, I'm gonna explain these changes a bit more, but first, we need to understand how exactly positioning works in this case, which is pretty simple because we can understand it by just looking at this image:

As you can see, we start drawing at the top left corner of the given canvas, so if we don't want to draw outside of it, we should use positive numbers. That said, we will want to find the vertical center of that canvas and that's exactly what we did with `size.height / 2`, we're gonna use that number for both: positioning our bars and limit it's size.
By default, all custom paints gets a canvas of 300x300, but we can change that by putting our custom paint into another widget that could help to set a different size. In this case, I created a canvas of 400x100 by putting my custom paint inside of a `Container`, so our vertical center is 50 (unless we change the height of the `Container`).
That said, how we're gonna use that number as the limit of our bars? Well, that's pretty simple and we already did it in this line:
```dart
return Random().nextDouble() * verticalCenter;
```
You see, since `nextDouble` will always return a number between 0.0 and 1.0, mutiplying that result by the vertical center will never exceed half the height of the canvas, so it's perfectly fine to do that. Now this section in the code we wrote makes more sense:
```dart
canvas.drawLine(
Offset(shift, verticalCenter - values[i]),
Offset(shift, verticalCenter + values[i]),
brush
);
```
`drawLine` requires an starting point and an end point for drawing lines, so we're doing this:

Pretty simple, right? Now, you might be thinking: "What about `shift`? What it does?". Well, `shift` is even more easy: it controls the horizontal position of our bars. Since we're not only putting them side by side, but also leaving a small space between them, we set `shift` to the width of the line (3) + space we want to have between bars (also 3) and we just keep incrementing it after every use.
Finally, to prevent our drawing to go out of the size of the given canvas, we put an additional condition to our `for` loop: `shift < size.width`. With that simple condition, no matter the width of the given canvas or how much data we have.
## Final words
To be honest, learning how to do this was a really cool thing, I enjoyed the process and had a fun moment so far, despite I wasn't able to implement it in the project I'm working on.
Here is the full code for this article: https://gist.github.com/Miqueas/b66297d8de4a29000e9cb4d3f9cdc3f5
Anyways, I hope you enjoyed this too, feel free to share, have a nice day and see you another day 👋. | josuestuff |
1,887,878 | FastAPI for Data Applications: Dockerizing and Scaling Your API on Kubernetes. Part II | Hey there, data enthusiasts and API aficionados! 🎉 If you joined us for the first part of this... | 27,835 | 2024-06-23T23:05:36 | https://dev.to/felipe_de_godoy/fastapi-for-data-applications-dockerizing-and-scaling-your-api-part-ii-4a53 | llmops, fastapi, eks, docker | Hey there, data enthusiasts and API aficionados! 🎉 If you joined us for the first part of this series, you already have your FastAPI application running smoothly on your local machine. Today, we're going to elevate that setup by deploying it to Amazon's Elastic Kubernetes Service (EKS). Get ready to scale those RESTful routes to the cloud! 🌩️
https://media.dev.to/cdn-cgi/image/width=1000,height=420,fit=cover,gravity=auto,format=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohr4jjoaytjmulsyxk72.png
## Prerequisites
1. Basic knowledge of FastAPI (check out our previous post if you need a refresher).
2. Docker installed.
3. An AWS account.
4. `kubectl` and `eksctl` configured.
## Step 1: Dockerize Your FastAPI Application
### The FastAPI Application
Let's assume you have a simple FastAPI application named `main.py` (as shown in the initial post):
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List
import uvicorn
app = FastAPI()
# Estrutura dos dados
class Item(BaseModel):
id: int
name: str
description: str = None
# Banco de dados fictício (em memória)
fake_db: List[Item] = []
@app.get("/")
async def root():
return {"message": "Hello World"}
@app.get("/items/", response_model=List[Item])
async def get_items():
return fake_db
@app.get("/items/{item_id}", response_model=Item)
async def get_item(item_id: int):
for item in fake_db:
if item.id == item_id:
return item
raise HTTPException(status_code=404, detail="Item not found")
@app.post("/items/", response_model=Item)
async def create_item(item: Item):
# Verifica se o item já existe
for existing_item in fake_db:
if existing_item.id == item.id:
raise HTTPException(status_code=400, detail="Item already exists")
fake_db.append(item)
return item
@app.put("/items/{item_id}", response_model=Item)
async def update_item(item_id: int, updated_item: Item):
for idx, existing_item in enumerate(fake_db):
if existing_item.id == item_id:
fake_db[idx] = updated_item
return updated_item
raise HTTPException(status_code=404, detail="Item not found")
if __name__ == "__main__":
uvicorn.run(app, host="127.0.0.1", port=8000)
```
In the end, we'll have a FastAPI application with a `Dockerfile` and Kubernetes deployment/service YAML files. Here's the file structure:
```plaintext
project_folder
├── Dockerfile
├── README.md
├── deployment.yaml
├── run_local.sh
├── main.py
├── requirements.txt
├── service.yaml
```
### Creating the Dockerfile
Now let's build a Dockerfile to containerize our application:
```dockerfile
# Dockerfile
# Use the official Python image from the Docker Hub
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8
COPY ./main.py /app/main.py
# Set the working directory inside the Docker container
WORKDIR /app
# Copy the requirements.txt file into the Docker image
COPY requirements.txt .
# Install the dependencies specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of your FastAPI code into the Docker image
# COPY . .
# Expose the port your FastAPI app runs on (default is 8000)
# EXPOSE 8000
# Command to run the FastAPI application using Uvicorn
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```
### Create the `requirements.txt` file
Feel free to include any package you could need in your process, in our case we'll need:
``` plaintext
fastapi
uvicorn
pydantic
```
### Building and Running the Docker Image Locally
Use the following script to build and run your Docker container locally (`run_local.sh`):
```bash
#!/bin/bash
set -e
DOCKER_IMAGE_NAME="my-fastapi-app"
echo "Building Docker image..."
docker build -t $DOCKER_IMAGE_NAME .
echo "Running Docker container locally..."
docker run -d -p 8000:8000 --name my-fastapi-container $DOCKER_IMAGE_NAME
echo "FastAPI application is now running at http://localhost:8000"
echo "To stop the container, run: docker stop my-fastapi-container"
echo "To remove the container, run: docker rm my-fastapi-container"
```
With the relevant commands in the file, you can run the script:
```sh
chmod +x run_local.sh
./run_local.sh
```


## Step 2: Set Up an EKS Cluster
```sh
aws sts get-caller-identity
eksctl create cluster --name=minimal-cluster --region=us-east-1 --nodegroup-name=minimal-nodes --node-type=t3.micro --nodes=1 --nodes-min=1 --nodes-max=2 --node-volume-size=10 --managed
```
This command sets up a basic EKS cluster with minimal nodes. Super neat, right?
You can follow the progress in the console of cloudformation: eksctl creates 2 stacks, which have many assets to be built. This may take 10 minutes or longer. It's time to take a breath :)



## Step 3: Push Docker Image to AWS ECR
First, create an ECR repository:
```sh
aws ecr create-repository --repository-name my-fastapi-app --region us-east-1
```
Next, build, tag, and push your image to ECR:
```sh
docker build -t my-fastapi-app .
docker tag my-fastapi-app:latest 836090608262.dkr.ecr.us-east-1.amazonaws.com/my-fastapi-app:latest
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 836090608262.dkr.ecr.us-east-1.amazonaws.com
docker push 836090608262.dkr.ecr.us-east-1.amazonaws.com/my-fastapi-app:latest
```
## Step 4: Deploy to EKS
Create a `deployment.yaml`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: fastapi-app-deployment
spec:
replicas: 2
selector:
matchLabels:
app: fastapi-app
template:
metadata:
labels:
app: fastapi-app
spec:
containers:
- name: fastapi-app
image: 836090608262.dkr.ecr.us-east-1.amazonaws.com/my-fastapi-app:latest
ports:
- containerPort: 8000
```
And a `service.yaml`:
```yaml
apiVersion: v1
kind: Service
metadata:
name: fastapi-app-service
namespace: default
spec:
type: LoadBalancer
selector:
app: fastapi-app
ports:
- protocol: TCP
port: 80
targetPort: 8000
```
Apply them to your cluster:
```sh
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
```
## Verify Deployment
Run the following commands to check your deployment and services:
```sh
kubectl get deployments
kubectl get services
```
Grab the external IP provided and open it in your browser:


Does the response look familiar? That's right, the API we built in the previous post is now accessible via the internet through our Kubernetes cluster! 🎉 (Of course, since we're using a dummy database, it only shows the most recent data entry. 🤷♂️)
Feel free to run your tests and deploy some exciting features!
### Cleaning Up Resources
Don't forget to clean up all the resources created during this process, including EC2 instances, EBS volumes, load balancers, Elastic IPs, resource groups, ECR repositories, EKS clusters, node groups, CloudFormation stacks, VPCs, subnets, NAT gateways, and network interfaces. Here's an order I recommend:
1. Node groups
2. EKS cluster
3. ECR repositories
4. Security groups
5. Load balancers
6. DHCP option sets
7. NAT gateways
Although you can use `aws-nuke` to delete resources easily, beware: if used incorrectly, it can delete critical resources. Therefore, I recommend manually cleaning up the resources.
For a better understanding of the resources created, check out AWS CloudTrail. It will show you every command executed by the stack, providing valuable insights into the automated processes.
## What's Next? 🚀
Now that your FastAPI application is successfully deployed on AWS EKS, it's time to think about enhancing and scaling your setup further. Here are some next steps to take your deployment to the next level:
1. **Authentication and Authorization**: Implement robust authentication mechanisms like OAuth2 or JWT to secure your APIs.
2. **Improving Autoscaling**: Leverage Kubernetes' Horizontal Pod Autoscaler (HPA) to dynamically adjust the number of pod replicas based on real-time metrics like CPU usage or custom application metrics.
3. **Observability**: Integrate monitoring and logging solutions like Prometheus and Grafana for metrics, and use tools like Fluentd and AWS CloudWatch for logging to ensure you have full visibility into your application's performance.
4. **High Availability**: Design your system for high availability by deploying your application across multiple availability zones and using Kubernetes Ingress controllers for better load balancing and failover capabilities.
In the dynamic world of modern tech, the principles of observability, scalability, and security are paramount, especially when deploying data-centric applications. By implementing these best practices, you're not only future-proofing your infrastructure but also ensuring a seamless experience for end-users.
Moreover, the ability to efficiently deploy machine learning models and data products is becoming increasingly crucial for data scientists and engineers. Leveraging tools like EKS for MLOps (Machine Learning Operations) and LLMOps (Large Language Model Operations) can enable you to deliver robust and scalable AI solutions. Every model deployment can benefit from the high availability, enhanced observability, and seamless scaling these platforms provide. In an era where data is the cornerstone of innovation, mastering these advanced deployment techniques will set you apart.
Stay tuned for future tutorials where we'll dive deeper into these topics, covering advanced Kubernetes features, CI/CD pipelines, and integrating more sophisticated data workflows. Until then, happy deploying! 🎉
Github repo:
https://github.com/felipe-de-godoy/FastAPI-for-Data-Delivery
Credit for Cover image https://dev.to/bravinsimiyu | felipe_de_godoy |
1,898,168 | AI Taxi Dispatch Operator | This is a submission for Twilio Challenge v24.06.12 What I Built Optimizing Business... | 0 | 2024-06-23T22:57:27 | https://dev.to/alaba_mustapha/ai-taxi-dispatch-operator-2kop | devchallenge, twiliochallenge, ai, twilio | *This is a submission for [Twilio Challenge v24.06.12](https://dev.to/challenges/twilio)*
## What I Built
<!-- Share an overview about your project. -->
Optimizing Business Efficiency with AI Operators
Many businesses spend hours daily taking orders over the phone from prospective clients, leading to less productive employee time and increased operational costs.
AI Operators can revolutionize this process by intuitively collecting orders from prospective clients and notifying the business through email or external APIs when an order is ready for fulfillment. This automation will enhance employee productivity and reduce business costs by streamlining the manual order collection process.
A compelling proof of concept for such AI Operators is the AI Taxi Dispatch Operator. This solution enables taxi companies to automate the booking process and dispatch drivers and vehicles using the company's dispatch system via APIs or manual assignments.
In this implementation, Twilio WhatsApp messaging is utilized to create an AI operator built with OpenAI technology. The AI operator gathers all necessary information from the user, sends out dispatch requests through the company's dispatch API, and notifies both the company and the user when a driver has been dispatched.
By integrating AI Operators, businesses can significantly improve efficiency, reduce costs, and provide better service to their clients.
## Demo
<!-- Share a link to your app and include some screenshots here. -->
### Github Repo
{% embed https://github.com/alabamustapha/twilio-taxi-operator %}
### Youtube Demo and Screenshots
#### Demo 1
{% embed https://www.youtube.com/watch?v=cQwJxFGeb_E %}
#### Demo 2: App and Explanation
{% embed https://youtu.be/pQlmuRrEmpI %}




## Twilio and AI
<!-- Tell us how you leveraged Twilio’s capabilities with AI -->
Twilio WhatsApp Messaging was used as the primary communication medium for the users. Messages sent to the WhatsApp number are processed through webhook by a web app built on Laravel. These incoming messages are sent to an AI assistant powered by the OpenAI GPT4 model. The AI assistance is tasked to handle the conversation until all required details to request a booking are provided after which the booking details is sent to the database for dispatching followed by email notification to admin and dispatch notification to the user with drivers details
## Additional Prize Categories
<!-- Does your submission qualify for any additional prize categories (Twilio Times Two, Impactful Innovators, Entertaining Endeavors)? Please list all that apply. -->
### Twilio Times Two
I used the Twilio WhatsApp Messaging feature for the interaction between the user and the operator. Twilio SMS Feature is used for failover notifications.
### Impactful Innovators
This product will help companies reduce operation costs, save time, and increase lead generation.
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image (if you want). -->
<!-- Thanks for participating! → --> | alaba_mustapha |
1,896,348 | Simplify Phone Screening with Twilio and AI Automation | This is a submission for the Twilio Challenge What I Built I explored several ideas for... | 0 | 2024-06-23T22:52:58 | https://dev.to/bibekkakati/simplify-phone-screening-with-twilio-and-ai-automation-3e8e | devchallenge, twiliochallenge, ai, twilio | *This is a submission for the [Twilio Challenge ](https://dev.to/challenges/twilio)*
## What I Built
I explored several ideas for this challenge and did some research. Then, I found a LinkedIn post about how recruiters handle phone interviews for screening and the problems they face, like coordinating schedules between recruiters and candidates. Often, these calls are unplanned for the candidates, causing nervousness and a poor first impression. This inspired me to create a product that **automates the process using IVR to conduct phone interviews for screening**. The product would evaluate the call recordings and provide a summary of the entire conversation to the recruiter.
**Flow of the system:**
- The recruiter will enter the candidate's details into the system through a form.
- The candidate will receive an SMS notification with a link included in the message.
- When the candidate clicks the link, the system will automatically dial their phone number.
- An IVR call will guide the candidate through the interview and ask the necessary questions.
- The candidate's responses will be recorded and transcribed by the system. AI will then use the transcription to provide context for the call recording.
**Future Scopes/Improvements:**
- Develop a comprehensive multi-tenant dashboard for recruiters to manage records efficiently.
- Integrate the system with job boards for seamless operation.
- Implement an AI model to evaluate candidates' responses and rank them based on job requirements.
## Demo
**Walkthrough video**
{% embed https://youtu.be/ooiqpuqpMow %}
**Demo application link**
- Application URL is https://ai-interview-call-8568-dev.twil.io/add-candidate.html
- IVR call might not work for you as it is a demo account and only verified phone numbers are allowed.
**Github link**
https://github.com/bibekkakati/ai-interview-call
## Twilio and AI
Let's discuss how I used Twilio and AI in the project.
- **Twilio Verify** is used to validate the candidate's phone number and provide an internationally formatted result.
- **Twilio Programmable Messaging** is used to send an SMS to the candidate with the interview call link.
- **Twilio Voice** is used to make a call to the candidate, gather information, and record the candidate's responses.
- **Twilio Voice Intelligence** is used to transcribe the call recordings.
- **Gemini AI** is used to extract key points from the candidate's responses (transcriptions).
- **Twilio Serverless** functions are used to deploy the functions and assets (a form to capture candidate details).
## Additional Prize Categories
My submission qualifies for the following additional prize categories:
- **Twilio Times Two**: Multiple Twilio features are used in the project.
- **Impactful Innovators**: This product will greatly help recruiters and companies manage their time and resources better.
| bibekkakati |
1,898,166 | A Journey into Microservices — Part 1 | Microservices have revolutionized how we build and manage software, but to fully appreciate their... | 0 | 2024-06-23T22:49:36 | https://dev.to/gervaisamoah/a-journey-into-microservices-part-1-4gck | microservices, architecture, softwaredevelopment, softwareengineering | Microservices have revolutionized how we build and manage software, but to fully appreciate their impact, we need to look at where we started and how we got here.
## The Journey from Monoliths to Microservices
### The Monolithic Era
In the beginning, there were monolithic applications. These were large, single-tiered systems with interconnected and interdependent components. While they could be robust and functional, monolithic systems had significant drawbacks:
- **Time-consuming**: Building, testing, and deploying could take a long time.
- **Coupling issues**: The tight coupling of components made implementing changes or scaling specific application parts difficult.
### Enter Service-Oriented Architecture (SOA)
Service-Oriented Architecture (SOA) aimed to address some of these issues by decomposing applications into smaller modules or services. However, SOA brought its own set of challenges:
- **Complexity**: Managing multiple services added a layer of complexity.
- **Performance overhead**: Additional communication layers could slow down performance.
- **Governance issues**: Ensuring consistent standards and practices across services was difficult.
### The Rise of Microservices
Microservices take the concept of modularization to the next level. They break down applications into even smaller, independently deployable services. This approach brings more agility and fits well with modern, cloud-native environments. However, microservices are not without their challenges, which we’ll explore further.
## What Are Microservices?
At its core, microservices architecture is about decomposing a system into smaller, more manageable units that can be developed, deployed, and scaled independently. Here’s how it works:
- **Modular Design**: Microservices solve big problems by breaking them into smaller, modular problems.
- **Decoupled Systems**: Each service operates independently, reducing dependencies.
- **Protocol-Aware Communication**: Services communicate using a common protocol, usually REST. This allows for polyglot development — different services can be written in different languages or frameworks, as long as they can communicate through the defined protocol.
### Benefits of Microservices
This structure brings several benefits.
- **Flexibility**: Teams can work in their preferred languages and frameworks.
- **Agility**: Faster development and deployment cycles.
- **Scalability**: Services can be scaled independently, leading to more efficient use of resources.
### Service Size and Domain-Driven Design
The size of a microservice isn’t about lines of code but about the scope of functionality, it should be **Domain-Focused**: a microservice should handle a single set of related functions within a well-defined domain. This aligns with the principles of Domain-Driven Design (DDD).
### Cloud-Native Compatibility
Cloud-native architecture involves designing systems to run in a cloud-based infrastructure. Microservices are particularly well-suited for this environment because they can be easily transitioned to cloud-native patterns. However, it’s important to note that cloud-native and microservices are distinct concepts — you can have monolithic cloud-native applications too.
## Communication Between Services
One critical aspect of microservices is how they communicate.
### Interservice Communication
Microservices communicate over HTTP using protocols like REST or GraphQL. This standardized communication allows for a heterogeneous environment where different services, regardless of the technology stack, can interact seamlessly. However, this flexibility comes with its own set of challenges:
- **Agility**: Teams can deliver code quicker and more efficiently, but must manage the complexity of numerous interservice communications.
- **Orchestration**: Proper orchestration and versioning strategies are crucial to prevent issues when services call each other. For example, deploying a new version of a service should not cause failures in calls from other services.
## Distribution and Scalability
The ability to distribute and scale services is another key advantage of microservices.
### Global Distribution
Microservices can be deployed across different geographic locations, making it easier to provide global availability and reduce latency for users in different regions.
### Elastic Scalability
One of the significant advantages of microservices is the ability to scale services independently. For instance, if a service handling customer data experiences a spike in traffic, only that service needs to be scaled, rather than the entire application. This elasticity allows systems to handle varying traffic loads efficiently.
## Risks and Challenges of Microservices
While microservices offer many benefits, they also come with certain risks and challenges.
- **Complexity**: The biggest cost of adopting microservices is increased complexity. Managing multiple services, each with its lifecycle can be daunting. If your processes are not streamlined, the overhead can outweigh the benefits.
- **Distribution Tax**: With multiple services communicating over the network, latency and potential congestion can increase. Each service call adds a slight delay, and the cumulative effect can impact performance.
- **Reliability**: Microservices rely on the health of individual services. If one service fails, it can affect others, making reliability a critical consideration. Systems need to be designed to handle partial failures gracefully.
## Conclusion
Microservices offer a modern approach to building scalable, flexible, and efficient systems. While they bring numerous benefits, they also introduce new challenges that must be carefully managed. In our next article, we’ll dive deeper into strategies for managing microservices effectively. Stay tuned! | gervaisamoah |
1,887,777 | The Magical World of Machine Learning at Hogwarts (Part #1) | 🌟✨ Welcome, young wizards and witches, to the mystical realm of machine learning! I am Professor Leo,... | 0 | 2024-06-23T22:43:23 | https://dev.to/gerryleonugroho/the-magical-world-of-machine-learning-at-hogwarts-part-1-2jp4 | machinelearning, ai, algorithms, beginners | 🌟✨ Welcome, young **wizards and witches**, to the **mystical realm** of **machine learning**! I am Professor Leo, a close friend of the great **Albus Dumbledore** and your guide on this magical journey through the wonders of machine learning. My son, **Gemika Haziq Nugroho**, is just like you — a budding wizard full of curiosity and excitement, learning the enchanting arts at **Hogwarts School of Witchcraft and Wizardry**. Together, we shall explore how machine learning is akin to the magic we practice every day. So, grab your wands and get ready for a spellbinding adventure! 🧙♂️🧙♀️
**Machine learning** is like the spells we learn in our classes; it helps us understand and **predict the world** around us. Just as we memorize incantations to make things happen, **machines learn patterns from data** to **make predictions** and **decisions**. Imagine your spell book, filled with various enchantments, each for a different purpose. **Machine learning** algorithms are much like these spells, each designed to solve a specific problem. Let’s dive into this enchanted spell book and discover the magic within! 📖✨
In this first post, we will explore three fascinating realms of machine learning. We begin with "**The Sorting Hat's Wisdom: Classification Spells**," where we'll discover how the ancient Sorting Hat’s magic can be mirrored in algorithms that categorize data into distinct houses. Next, we delve into "**Predicting the Future with Crystal Balls: Regression Charms**", uncovering the power of predicting future events with the same accuracy as Professor Trelawney's crystal ball. Finally, we journey through "**The Marauder’s Map: Finding Hidden Paths with Clustering Enchantments**", where we reveal the magic of discovering hidden patterns, much like the Marauder’s Map unveils secret passages. Ready your wands, dear readers, and let the enchantment begin! 🪄🔮📜
---
## 1. The Sorting Hat's Wisdom: Classification Spells

🔮🧙♂️ Join me young witches and wizards, to the grand hall of Hogwarts, where the **Sorting Hat’s wisdom** reveals your true house! Just like the **Sorting Hat**, **machine learning classification** spells can determine where you belong based on your unique traits. Let’s uncover the magic behind these classification algorithms! 🔮✨ It decides whether a student belongs in _Gryffindor_, _Hufflepuff_, _Ravenclaw_, or _Slytherin_. But how does it do that? It's much like a **classification spell in machine learning**! 🧙♀️🔮
### 1.1 Decision Trees 🌳
Imagine a magical tree that asks you questions to determine your destiny. "_Are you brave?", "Do you value knowledge?_" Each question leads to a branch, guiding you to the right house. A **decision tree in machine learning** works the same way. It asks questions about the data and leads you down a path to classify it correctly. For example, when the Sorting Hat places Harry in _Gryffindor_, it might have asked questions about his bravery and courage to make the right decision. Imagine standing before the Sorting Hat, feeling the tingle of its ancient magic.
The hat asks questions about your _bravery_, _loyalty_, _intelligence_, and _ambition_. Each question leads you down a different path, much like the branches of a magical tree. This is how a **decision tree** works in **machine learning**. It asks **a series of questions about the data**, each answer branching off into more questions until it arrives at a final decision. For instance, when the Sorting Hat placed Harry Potter in Gryffindor, it likely asked, “_Is he brave?_” “_Does he value courage?_” Each answer narrowed down the possibilities until the hat confidently shouted, “_Gryffindor!_” 🎩🦁
In real life, **decision trees** help us classify things just as precisely. Imagine Professor Sprout using a **decision tree** to determine which magical plant fits best in the **Hogwarts** greenhouse. She might ask, “_Does it need sunlight?_” “_Is it prone to cold?_” Each answer helps her find the perfect spot for every magical herb and fungus. 🌱✨
###1.2 K-Nearest Neighbors (KNN) 🤝
Now, think about how the Sorting Hat might also consider your friends. If most of your friends are in _Hufflepuff_, it might place you there too. **K-Nearest Neighbors** is like this. It looks at the closest "_neighbors_" or data points to decide how to classify new data. If you have traits similar to a group of **Gryffindors**, the algorithm will classify you as a **Gryffindor** too. 🦁🦡🦅🐍
Picture another magical method the Sorting Hat might use. It looks at the students you’re most similar to—your closest friends and companions. If most of your friends are in Hufflepuff, the hat might place you there too. This is the essence of **K-Nearest Neighbors**, or **KNN** for short. It examines the ‘neighbors’ closest to the data point to make a **classification**. If a new student has many traits similar to **Gryffindors**, **KNN** will classify them as a **Gryffindor**. 🦡🦅🐍
In the magical world of Hogwarts, KNN might help in sorting magical creatures. Imagine Hagrid using KNN to classify a new beast he’s discovered. By comparing it to similar creatures in the Forbidden Forest, he can determine if it’s a friendly **Bowtruckle** or a mischievous **Niffler**. 🧚♂️✨
These classification spells are more than just algorithms — they are the essence of how we understand and organize our magical world. From **sorting students into their rightful houses** to **classifying mystical creatures**, **classification magic ensures that everything finds its proper place** in the enchanted halls of Hogwarts. 🌟📚✨
In real-life applications, **classification algorithms** help us identify things like whether an email is spam or not, just as the Sorting Hat identifies the best house for each student. Imagine a magical email system at Hogwarts that filters out the Howlers (nasty letters) so only the nice messages get through. That's classification magic in action! 📧✨
---
##2. Predicting the Future with Crystal Balls: Regression Charms 🔮✨

🔮✨ Step right into Professor Trelawney's Divination classroom, where crystal balls reveal the future! Just like how we predict events with crystal balls, **regression algorithms** in **machine learning predict future** outcomes based on past data. Let’s delve into these enchanting charms! 🔮✨
###2.1 Linear Regression 📈
Picture **Professor Trelawney** peering into her crystal ball, where a straight, shining line of events stretches into the future. Linear regression is like this — **the algorithm finds the best-fitting straight line through a series of data points**, helping us predict what happens next. For instance, imagine predicting the number of **Bertie Bott's Every Flavor Beans** that will be sold at the next **Quidditch** match **based on sales from previous matches**. The algorithm uses **past sales data** to draw a **straight line that points to future sales**. It's as if the crystal ball is revealing the future, one bean at a time! 🍬✨
In the magical world of Hogwarts, **linear regression** might help **predict the number of house points** Gryffindor will earn in the next week based on their performance in the past month. **Professor McGonagall** could use this spell **to foresee whether Gryffindor has a chance to win** the House Cup. With the magic of **linear regression**, we can see the future more clearly and prepare for what's to come! 🏆✨
###2.2 Polynomial Regression 🌀
Now, what if the future isn’t a straight line but a swirling pattern of events? **Polynomial regression** uses curves instead of straight lines to make predictions. It’s like seeing a more complex vision in the crystal ball, where events twist and turn in magical patterns. Suppose we want **to predict the number of broomsticks sold** at varying speeds of the Nimbus 2000. **Polynomial regression** captures the more **intricate relationship between speed and sales**, giving us a magical insight into future trends. 🧹✨
In Hogwarts, this spell might help **predict how many chocolate frogs will be consumed during the Halloween feast**, considering the different factors like the **number of students** and their **fondness for sweets**. With **polynomial regression**, the predictions are as delightful and complex as the treats themselves. 🍫🐸
In our everyday magical lives, these _regression charms_ help us foresee important events. Imagine **Dumbledore using these spells to predict the outcomes of Quidditch matches**, plan the school's budget for potion ingredients, or even anticipate the arrival of a new student. With these powerful prediction charms, Hogwarts is always prepared for the future, ensuring that the magic never fades. With these powerful prediction charms, we’re always a step ahead in our magical endeavors! 🌟✨
---
## 3. The Marauder’s Map: Finding Hidden Paths with Clustering Enchantments

🗺️✨ "_I solemnly swear that I am up to no good!_" Welcome to the wondrous world of the Marauder’s Map, a magical artifact that reveals every nook and cranny of Hogwarts. Clustering algorithms in machine learning are like the spells that unveil hidden paths and secrets on this map. Let’s explore these mystical enchantments! 🗺️✨
###**3.1 K-Means Clustering** 🌟
Imagine you're holding the Marauder’s Map, watching in awe as it reveals groups of students in various parts of the castle. **K-Means Clustering** is a spell that groups similar data points together, much like how the map shows clusters of students in different Hogwarts locations. For instance, it might reveal a group of students studying diligently in the library, another group practicing spells in the courtyard, and a secret gathering in the Room of Requirement. Each group is a **"cluster"** discovered by this magical algorithm. 🧙♂️📚
In practical terms, **K-Means Clustering** can help identify groups of similar items. Imagine Hagrid using this spell to group magical creatures based on their habits and habitats. He could discover clusters of creatures that prefer the Forbidden Forest, those that thrive in the Black Lake, and those that love the skies around Hogwarts. By understanding these clusters, Hagrid can take better care of his magical friends. 🦉🦄
###**3.2 Hierarchical Clustering** 🏰
Now, imagine the **map revealing a hierarchy of secret passages and hidden rooms**, showing not just groups but also **how these groups are connected**. **Hierarchical clustering** works similarly by building a tree of clusters, from the most general to the most specific. It’s like uncovering layers of secrets, one by one. This algorithm can help us find the most hidden and connected parts of Hogwarts, just as the Marauders did. 🗝️✨
For example, Professor Snape might use hierarchical clustering to **categorize potions based on their ingredients and effects**. By creating **a tree of potions**, he can see which potions share similar properties and which ones are unique. This makes it easier for him to teach his students the subtle art of potion-making, guiding them through the intricate connections between different brews. 🧪🔮
In real-life applications, clustering algorithms help organize information, such as grouping similar spells in the library or identifying patterns in potion ingredients. Imagine the Hogwarts library using these spells to organize its vast collection of books, making it easier for students to find the right spell or potion recipe. With clustering enchantments, the hidden magic in our data is revealed, making **Hogwarts** an even more wondrous place! ✨📚🪄
---
As our magical exploration draws to a close for this first post, we have glimpsed the profound wisdom that machine learning holds, akin to the arcane knowledge safeguarded within the halls of Hogwarts. 🏰✨ "**The Sorting Hat's Wisdom: Classification Spells**" has shown us the art of categorizing and understanding our data, echoing the Sorting Hat's unparalleled ability to place students where they truly belong. In "**Predicting the Future with Crystal Balls: Regression Charms,**" we learned the magical art of forecasting, harnessing the predictive power akin to Professor Trelawney's prophetic visions. And in "**The Marauder’s Map: Finding Hidden Paths with Clustering Enchantments,**" we uncovered the secrets of clustering, mirroring the magical map's ability to reveal hidden pathways and connections.
This journey is but the beginning, dear students, of a series that will delve deeper into the magical world of machine learning. Together, we will [continue to explore and unravel](https://dev.to/gerryleonugroho/the-magical-world-of-machine-learning-at-hogwarts-part-2-5b37) the mysteries of this powerful field, blending the enchantments of our wizarding world with the marvels of modern data science. Stay tuned for the next chapter in our magical series, where more wondrous spells and incantations await. Until then, may your minds remain curious, your wands ever-ready, and your hearts full of the wonder of discovery! 🌟🧙♂️📚
| gerryleonugroho |
1,898,163 | Ieiejdj | Why does the chicken cross the road cause it had to get the other side ha ha ha | 0 | 2024-06-23T22:42:37 | https://dev.to/koesterslisahotmailcom/ieiejdj-191k | Why does the chicken cross the road cause it had to get the other side ha ha ha | koesterslisahotmailcom | |
1,898,158 | Sometimes all we need is a good safe word! Even ChatGPT! | Using Safe Words with ChatGPT to Solve Complex Problems: A Must-Have Technique When... | 0 | 2024-06-23T22:27:49 | https://dev.to/mbround18/sometimes-all-we-need-is-a-good-safe-word-even-chatgpt-2e82 | chatgpt, rust, devops, productivity | ## Using Safe Words with ChatGPT to Solve Complex Problems: A Must-Have Technique
When dealing with complex problems, communication clarity is crucial. In my recent experience, I found that using safe words with ChatGPT significantly enhanced my problem-solving process. Let me share how this simple technique can be a game-changer for developers and problem solvers alike.
### The Concept of Safe Words
Safe words are pre-defined signals used in various contexts to indicate a stop or pause, in the case of AI its to get it to stop responding or talking while you have multiple things you want to explain!
### My Experience with ChatGPT
In a recent coding project, I was faced with a complex issue involving a { [Rust-based game server for Valheim](https://github.com/mbround18/valheim-docker). I needed ChatGPT’s assistance to navigate through multiple steps without losing track of the conversation’s flow or getting overwhelmed by the details. Here’s how I did it:
#### Setting the Rules
I started by defining a clear rule with ChatGPT: I would feed it information, and it would respond with "Ok" until I used the safe word "Pineapple." This ensured that I could provide all necessary details before ChatGPT processed and responded with a solution.
#### The Interaction
Here’s a snapshot of our conversation to illustrate this technique:
**User:**
>I have a complex problem I am trying to solve; I will give you information and you will respond with Ok until I respond with Pineapple. Once I say Pineapple in my prompt you will use all the information I have given you and your knowledge base to come up with a potential solution, do you understand?
**ChatGPT:**
> Ok.
**User:**
> I have a UDP and TCP based game server wrapper build in Rust for a game called Valheim.
**ChatGPT:**
> Ok.
**User:**
> I need a way to monitor traffic on ports
> - "2456:2456/udp"
> - "2457:2457/udp"
> - "2458:2458/udp"
> When it detects no activity it will pause a process by a process name. I have a way to get the process name.
**ChatGPT:**
> Ok.
**User:**
- **NOTE:** This was a pretty long output. Truncated for the article. it's just a tree of the src folder in the project. I pasted it directly into the chat with GPT so it could have its context.
```
Here is my src folder:
src
├── huginn
│ ├── Cargo.toml
│ ├── README.md
│ ├── main.rs
│ └── routes
│ ├── metrics.rs
│ ├── mod.rs
│ └── status.rs
├── odin
│ ├── Cargo.toml
│ ├── README.md
│ ├── build.rs
│ ├── cli.rs
│ ├── commands
│ │ ├── backup.rs
│ │ ├── configure.rs
│ │ ├── install.rs
│ │ ├── install_mod.rs
│ │ ├── logs.rs
│ │ ├── mod.rs
│ │ ├── notify.rs
│ │ ├── start.rs
│ │ ├── status.rs
│ │ ├── stop.rs
│ │ └── update.rs
│ ├── constants.rs
TRUNCATED
```
**ChatGPT:**
> Ok.
**User:**
> src/odin/server/process.rs has this functionality.
**ChatGPT:**
> Ok.
**User:**
> The process is started via this file src/odin/server/startup.rs.
**ChatGPT:**
> Ok.
**User:**
> Pineapple.
**ChatGPT:**
> Given the requirements and the provided code structure, here's a potential solution for monitoring traffic on the specified UDP ports and pausing a process when no activity is detected...
#### The Benefits
1. **Controlled Flow of Information**: By using a safe word, I controlled when ChatGPT should start processing the data, ensuring it had all the context needed.
2. **Reduced Overwhelm**: Breaking down the information into manageable chunks made it easier for both me and ChatGPT to handle complex tasks without missing critical details.
3. **Enhanced Focus**: This method allowed ChatGPT to focus on providing a well-thought-out solution rather than responding prematurely to incomplete information.
### Implementing Safe Words in Your Workflow
To incorporate this technique in your workflow, follow these simple steps:
1. **Define a Safe Word**: Choose a unique word that you will use to signal ChatGPT to process the information and respond. Ensure it is a word unlikely to appear in your regular communication to avoid accidental triggers.
2. **Set the Context**: Inform ChatGPT at the beginning of your session about the safe word and the process. This helps in aligning the AI's responses to your communication style.
3. **Feed Information Gradually**: Provide the necessary details step-by-step, confirming each piece of information with a simple "Ok" response from ChatGPT.
4. **Use the Safe Word**: Once you have provided all the required information, use the safe word to signal ChatGPT to start processing and provide a solution.
### Conclusion
Now, I will not say this was all daisies. In true GPT fashion it yelled pineapple after a few prompts when it thought it understood everything. It gave me a partial answer which was incorrect, and I had to remind it:
> I didn't say pineapple, you did, hold your response until I say pineapple. is that understood?
However, using safe words with ChatGPT is a powerful technique to solve complex problems effectively. It ensures clear communication, reduces overwhelm, and enhances the quality of the solutions provided by the AI. By implementing this strategy, developers and problem solvers can leverage ChatGPT more efficiently, leading to better outcomes and smoother workflows.
Try it out in your next complex problem-solving session and experience the difference it makes!
| mbround18 |
1,898,157 | Enhance Your React Applications with cards-slider-react-lib : A Feature-Rich Card Slider Library | cards-slider-react-lib: A Customizable and Responsive React Slider Component The... | 0 | 2024-06-23T22:26:08 | https://dev.to/victor_ajadi_21b5913f79f6/enhance-your-react-applications-with-cards-slider-react-lib-a-feature-rich-card-slider-library-m6m | webdev, javascript, beginners, programming | **cards-slider-react-lib: A Customizable and Responsive React Slider Component**
The `card-slider-react-lib` library provides a powerful and versatile React component for creating interactive and visually appealing card sliders in your web applications. This article delves into the features, usage, and customization options offered by `card-slider-react-lib`.
Key Features:
- **Responsive Design**: `card-slider-react-lib` automatically adapts the layout to display an optimal number of cards per view based on the user's screen size. This ensures a seamless experience across various devices.
- **Highly Customizable**: The library allows you to define the content of your cards, customize the number of cards displayed per view, and configure various other properties to match your specific requirements. You can even create custom card components for a truly unique look and feel.
- **Effortless Card Management**: `card-slider-react-lib` efficiently arranges your cards dynamically based on the current index, creating a smooth and continuous sliding effect.
- **Interactive Navigation**: The component comes with built-in navigation buttons that enable users to easily navigate through the cards, either forward or backward. You can also customize the color of these buttons for better integration with your application's design.
**Properties (Props) for Customization:**
**array (Required)**: This is a mandatory property that takes an array of objects. Each object represents the content of a single card in your slider. The properties within these objects will be available as props in your custom card component.
**cardNumPerView (Optional)**: This prop allows you to control the number of cards displayed per view. It provides manual control over the layout.
**autoArrange (Optional, default: false)**: When enabled, this prop automatically adjusts the number of cards displayed based on the screen size, ensuring responsiveness. It overrides cardNumPerView when set to true.
**buttonColor (Optional, default: '#000000')**: This prop lets you customize the color of the navigation buttons to match your application's color scheme.
**buttonWidth (Optional, default: '54px')**: Set the width of the navigation buttons using CSS measurements (e.g., 'px', 'em').
**buttonHeight (Optional, default: '54px')**: Set the height of the navigation buttons using CSS measurements.
**CustomCard**: This is where you define your custom card component. It receives any props you pass within the `<CardSlider>`tag. Refer to the implementation of your CustomCard component for specific prop usage.
**LeftSvgIcon (Optional)**: This prop allows you to override the default left navigation button with your custom SVG icon component.
**RightSvgIcon (Optional)**: Similarly, you can override the default right navigation button with your custom SVG icon component.
**slideTimeInterval (Optional, default: 3240)**: This prop sets the interval (in milliseconds) at which the slider auto-slides in an infinite loop.
**allowSlidePerInterval (Optional)**: When enabled (true), this prop allows the slider to auto-slide without requiring users to click the navigation buttons. It also pauses auto-sliding when the user hovers over the slider, improving performance.
**New Update**
**cardSpacing**: this is use to give space to each card or image-card for customized display "gap"
**buttonPosition**: used to position the button around the slider, `middle` or `middle-bottom` or `middle-top` , default positioned at the `right and left end` of the slider container.
**buttonPositionGap**: when a value is passed here, it gives space to the navigation button relative to the `buttonPosition`
**Custom Card Example**
The provided example demonstrates the structure of a custom card component (`CustomCard`) that you can use with `cards-slider-react-lib`. This component renders the card content based on the props received from the `CardSlider`component.
Basic Usage:
The usage section showcases how to integrate the `CardSlider` component into your React application. It demonstrates how to define card data, customize props, and create a custom card component.
Installation:
To install `cards-slider-react-lib`, use npm or yarn:
`Bash
npm install cards-slider-react-lib`
or
`Bash
yarn add cards-slider-react-lib`
Basic Usage:
Here's an example of integrating `CardSlider` into your React application:
```
import React from 'react';
import { CardSlider } from 'cards-slider-react-lib';
import CustomCard from './CustomCard'; // Your custom card component
const cardData = [
{ id: 1, title: 'Card 1', content: 'Content 1' },
{ id: 2, title: 'Card 2', content: 'Content 2' },
{ id: 3, title: 'Card 3', content: 'Content 3' },
// ... more card data
];
function App() {
return (
<div className="App">
<CardSlider
`array={cardData}`
`cardNumPerView={3}` // Or use autoArrange={true} for auto-
adjustment
`buttonColor="#ff5733"`
`buttonWidth="50px"`
`buttonHeight="50px"`
`CardComponent={(props) => <CustomCard {...props}
additionalProp="1" />}`
`cardSpacing={'30px'}` //this is use to give space to each card or
image-card for customized display "gap"
`buttonPosition={'middle'}` or middle-bottom or middle-top, default
positioned at the end of the slider container
`buttonPositionGap={'10px'}` //when a value is passed here, it
gives space to the navigation button relative to the
buttonPosition
// Other customization options...
/>
</div>
);
}
```
**Comparison to Other Card Slider Libraries:**
Several React libraries offer card slider functionalities. Here's a brief comparison of `cards-slider-react-lib` with two popular alternatives:
`cs = cards-slider-react-lib
rsp=React Swiper
rs=React Slick
`
```
Feature cs rsp rs
Responsiveness Yes Yes Yes
Customization High High High
Navigation Btn Built-in Optional Optional
Autoplay Optional Yes Yes
Touch Support Yes Yes Yes
Documentation Moderate Good Good
Ease of Use Moderate Moderate Moderate
```
**Additional Details:**
Dynamic Screen Size Handling: As mentioned earlier, `card-slider-react-lib` employs breakpoints to automatically adjust the number of cards displayed per view based on screen size. Here's a breakdown of the default breakpoints:
**Extra Small**: 1 card
**Small**: 2 cards
**Medium**: 3 cards
**Large**: 4 cards
By leveraging `card-slider-react-lib` , you can create visually appealing and interactive card sliders that enhance the user experience in your React applications. Its flexibility and customization options allow you to tailor the component to your specific design needs and data structures. | victor_ajadi_21b5913f79f6 |
1,898,149 | PNG vs AVIF: Understanding the Differences and Usage | What Are the Differences Between PNG and AVIF? PNG (Portable Network... | 0 | 2024-06-23T22:01:07 | https://dev.to/msmith99994/png-vs-avif-understanding-the-differences-and-usage-2j00 | ### What Are the Differences Between PNG and AVIF?
### PNG (Portable Network Graphics)
PNG is a raster-graphics file format that supports lossless data compression. It was created as an improved, non-patented replacement for GIF (Graphics Interchange Format).
Here are some key characteristics of PNG:
**- Lossless Compression:** PNG compresses files without losing any data, making it ideal for images that require high quality and precision, such as logos and text images.
**- Transparency:** PNG supports transparency and alpha channels, which allows for varying levels of opacity, making it suitable for images that need to blend seamlessly with backgrounds.
**- Widespread Support**: PNG is widely supported across all major web browsers and graphic software, ensuring compatibility and ease of use.
### AVIF (AV1 Image File Format)
AVIF is a relatively new image format based on the AV1 video codec. It offers several advanced features that make it a strong contender in the image format arena:
**- Superior Compression:** AVIF provides better compression rates than PNG, reducing file sizes significantly while maintaining high image quality. This makes it ideal for web use where loading times and bandwidth are crucial.
**- High Dynamic Range (HDR):** AVIF supports HDR, which allows for a broader range of colors and greater contrast, producing more visually striking images.
**- Modern Features:** AVIF includes support for features like animation, transparency, and color management, making it a versatile choice for various applications.
## Where Are They Used?
### PNG Usage
**- Web Graphics:** PNG is commonly used for web graphics, particularly for images that require transparency, such as logos, icons, and illustrations.
**- Print Media:** Due to its lossless compression, PNG is suitable for high-quality prints where image fidelity is critical.
**- Digital Art:** Artists and designers use PNG for detailed digital art and illustrations that require high resolution and color accuracy.
### AVIF Usage
**- Web Performance:** AVIF's superior compression makes it ideal for web use, as smaller file sizes lead to faster loading times and reduced bandwidth usage.
**- Photography:** AVIF's support for HDR and high color fidelity makes it a good choice for professional photography and high-quality image storage.
**- Animations:** With its support for animation, AVIF can be used for animated graphics, offering an alternative to formats like GIF and APNG.
## What Are Their Benefits and Drawbacks?
### PNG Benefits
**- High Quality:** Lossless compression ensures that no data is lost, maintaining the original quality of the image.
**- Transparency:** Full support for transparency and alpha channels.
**- Compatibility:** Broad compatibility across different platforms and software.
### PNG Drawbacks
**- File Size:** Larger file sizes compared to other compressed formats like JPEG and AVIF, which can affect web performance.
**- No Animation:** Limited support for animations compared to formats like GIF or AVIF.
### AVIF Benefits
**- Efficient Compression:** Significantly smaller file sizes with minimal loss in quality, improving web performance.
**- Advanced Features:** Support for HDR, transparency, and animations.
**- Modern Codec:** Utilizes modern compression techniques for better image quality and efficiency.
### AVIF Drawbacks
**- Compatibility:** Being a newer format, AVIF is not yet universally supported across all browsers and devices, although this is rapidly changing.
**- Processing Power:** AVIF decoding can require more processing power, which may impact performance on lower-end devices.
## When Should You Use Each One?
**Use PNG When:**
- You need high-quality, lossless images.
- Transparency is required for logos, icons, or web graphics.
- Broad compatibility with older browsers and graphic software is essential.
- You are working with digital art or illustrations that require precise color accuracy.
**Use AVIF When:**
- Web performance is a priority, and you need smaller file sizes for faster loading times.
- You are dealing with high-quality photography that benefits from HDR support.
- You want to incorporate animations with modern features.
- You are preparing for the future with a format that is likely to gain broader support over time.
## Final Words
Both [PNG and AVIF](https://cloudinary.com/tools/png-to-avif) have their distinct advantages and are suited to different use cases. PNG remains a staple for high-quality, lossless images with transparency needs, while AVIF offers a modern alternative with superior compression and advanced features. Understanding the strengths and limitations of each format will help you make informed decisions about which to use based on your specific requirements.
As technology evolves, so do the tools at our disposal. Staying updated with the latest advancements like AVIF ensures that you are leveraging the best options available to optimize both the quality and performance of your digital imagery.
By understanding the key differences and optimal use cases for PNG and AVIF, you can ensure that your images are both visually appealing and efficient, enhancing the overall experience for your audience. | msmith99994 | |
1,854,427 | Dev: Web | A Web Developer is a professional responsible for designing, developing, and maintaining websites and... | 27,373 | 2024-06-23T22:00:00 | https://dev.to/r4nd3l/dev-web-4f7k | webdev, developer | A **Web Developer** is a professional responsible for designing, developing, and maintaining websites and web applications. Here's a detailed description of the role:
1. **Front-End Development:**
- Web Developers focus on front-end development, which involves creating the user interface, layout, and visual elements of a website or web application.
- They use HTML (Hypertext Markup Language), CSS (Cascading Style Sheets), and JavaScript to build responsive and interactive web interfaces that are compatible with various devices and browsers.
- Front-end frameworks and libraries such as Bootstrap, React, Angular, and Vue.js are commonly used by Web Developers to streamline front-end development tasks and enhance user experience.
2. **Back-End Development:**
- In addition to front-end development, Web Developers also work on back-end development, which involves server-side programming, database management, and server configuration.
- They use server-side languages and frameworks such as PHP, Python, Ruby on Rails, Node.js, and Django to implement server-side logic, handle data processing, and interact with databases.
- Web Developers design and develop RESTful APIs (Application Programming Interfaces) to enable communication between the front-end and back-end components of web applications.
3. **Full-Stack Development:**
- Some Web Developers specialize in full-stack development, which involves working on both the front-end and back-end aspects of web development.
- Full-stack Developers have proficiency in both client-side and server-side technologies, allowing them to build end-to-end web solutions from the user interface to the server infrastructure.
4. **Responsive Design and Cross-Browser Compatibility:**
- Web Developers ensure that websites and web applications are responsive, meaning they adapt seamlessly to different screen sizes and devices, such as desktops, laptops, tablets, and smartphones.
- They test websites across multiple browsers and platforms to ensure cross-browser compatibility and consistent user experience across different environments.
5. **User Experience (UX) and User Interface (UI) Design:**
- Web Developers collaborate with UX/UI designers to implement user-centric design principles and create intuitive, visually appealing interfaces.
- They optimize the user experience by improving website navigation, accessibility, and interactivity to enhance user engagement and satisfaction.
6. **Web Performance Optimization:**
- Web Developers optimize website performance by minimizing page load times, reducing file sizes, and implementing caching strategies.
- They utilize techniques such as lazy loading, image optimization, code minification, and content delivery network (CDN) integration to improve website speed and responsiveness.
7. **Security and Data Protection:**
- Web Developers implement security best practices to protect websites and web applications from security threats, vulnerabilities, and cyber attacks.
- They use encryption protocols, secure authentication mechanisms, and input validation techniques to safeguard user data and prevent unauthorized access to sensitive information.
8. **Version Control and Collaboration:**
- Web Developers use version control systems such as Git and collaboration platforms like GitHub and GitLab to manage code repositories, track changes, and collaborate with team members.
- They follow agile development methodologies and participate in sprint planning, code reviews, and continuous integration (CI) and continuous delivery (CD) practices to ensure project success and timely delivery.
9. **Continuous Learning and Professional Development:**
- Web Developers stay updated with the latest web technologies, trends, and best practices through continuous learning, online courses, workshops, and professional certifications.
- They actively participate in web development communities, forums, and meetups to exchange knowledge, share experiences, and stay connected with the broader developer community.
10. **Client Communication and Project Management:**
- Web Developers collaborate closely with clients, project managers, and other stakeholders to understand project requirements, provide technical insights, and deliver solutions that meet business objectives.
- They communicate effectively, provide regular project updates, and address client feedback and concerns to ensure client satisfaction and project success.
In summary, Web Developers play a crucial role in building dynamic, interactive, and engaging websites and web applications that cater to the needs of users and businesses in today's digital age. By leveraging their technical expertise, creativity, and problem-solving skills, they contribute to the development of innovative web solutions that drive online presence, brand visibility, and business growth. | r4nd3l |
1,893,030 | AI-Powered Loyalty Rewards Assistant: Amazon Bedrock Agent in Action | Generative AI is no longer a buzzword. It has become an integral part of our everyday conversations.... | 0 | 2024-06-23T21:48:30 | https://dev.to/girishmukim/generative-ai-assistant-for-loyalty-reward-system-using-amazon-bedrock-knowledge-bases-agent-1670 | aws, tutorial, ai, api | Generative AI is no longer a buzzword. It has become an integral part of our everyday conversations. Typically, new technologies spark excitement within the technical circle; however, Generative AI transcended that boundary and quickly became mainstream. Enterprises are also exploring its practical applications from a business perspective. You can love it or hate it, but one thing is clear - you can't ignore it. So I decided to try it myself. This blog post is about my experimentation with Amazon Bedrock's capabilities.
Let's use the example of building a **RewardBot** to illustrate the process, and if you're interested in learning how to build a chatbot using AWS services then this blog post is perfect for you.
_Retail Rewards Inc.,_ a fictitious retail company, is taking generative AI to its customers. The functionalities of the first version of RewardBot for customers are to check their loyalty card balance, and help them fully understand the benefits of the loyalty program. Our goal is to create a pilot that we can iterate and improve over time.
Key Features of RewardBot:
- **Membership Enrollment:** Guide users through the process of becoming loyalty card members.
- **Balance Inquiry:** Allow users to check their loyalty card balance in real-time.
- **FAQs and Support:** Provide answers to common questions and support for loyalty program issues.
_Membership Enrollment feature will be launched in next version._
Now that we’ve set the stage, it’s time to dive deeper into the technical details of our Generative AI Assistant. First, let's look at few important concepts-
**Retrieval-Augmented Generation (RAG)**
Imagine you're asking a large language model a question. Generally, it would just give you an answer based on what it's learned from its training data. However, it won't know anything about your customer data, or about your business policies around loyalty reward programs. That's where Retrieval-Augmented Generation (RAG) comes in. It is a technique to leverage your enterprise-specific data to enhance the responses of large language models (LLMs) <u>without retraining a model</u>.
**Knowledge Bases for Amazon Bedrock**
Amazon Bedrock Knowledge Bases are centralized repositories for structured and unstructured data, allowing AI models to access up-to-date information. They enhance the accuracy of AI responses without needing frequent retraining.
In a retail company's Loyalty Reward System, an FAQ document stored in Amazon S3 can be part of the knowledge base. When a customer asks the AI assistant a policy question, it retrieves the relevant information from this document to provide an accurate answer.
**Agents for Amazon Bedrock**
Agents for Amazon Bedrock plan and execute multistep tasks and help orchestrate interactions between foundation models, knowledge bases, and Lambda functions to securely execute APIs. An agent analyzes the user request and automatically calls the necessary APIs and data sources to fulfill the request. Thus, agents reduce significant development efforts for developers and speed up generative AI application deployments.
**System Architecture:**

Implementation Steps -
1. Create a Bedrock agent
2. Create Knowledge Base
3. Associate Knowledge Base with the agent
4. Create an action group
5. API Gateway and backend Lambda
You should test agent at every step to understand the concepts of Knoweldge base and an action group.
Let's prepare couple of test prompts to use during our testing and compare results as we move through our implementation steps.
Prompt 1: Do my loyalty points expire?
Prompt 2: What is my reward balance?
**Deployment Instructions**
**Step1: Create a Bedrock agent**
Where the magic happens. The Bedrock agent orchestrates interactions between foundation models, knowledge bases, and action groups. An action group defines actions that the agent can help the user perform. For example, you could define an action group called "ManageLoyaltyPoints" that helps users carry out actions that you can define, such as getting a point balance or redeeming a balance.
**Steps to Create a Bedrock Agent:**
- Log in to the AWS Management Console.
- Navigate to Amazon Bedrock.

- Select Agents under "Builder tools" on the left and click "Create Agent".

- Fill in the necessary details such as the agent's name and description.

- Create a new or choose existing agent IAM role. Select model and provide agent instructions.

Make sure you save changes.
- "Prepare" the agent. This is important before you could test.

- Test agents with prompts


You can click on 'show trace' to understand the trace of the agent's actions.
**Step2. Create a Knowledge Base**
The Knowledge Base is a repository of information that the Bedrock agent will use to respond to user queries. This can include FAQs, documentation, and other relevant data.
**Steps to Create a Knowledge Base:**
- Knowledge base in this case would be used to host the Loyalty Program FAQ file. So, first create an S3 bucket and upload the FAQ document to it. I'll provide the Git repo for all the files and code used in this tutorial.

-
In the Amazon Bedrock console, navigate to "Knowledge Bases." under Builder tools and click on "Create Knowledge Base."





With above recommended selection, KB creates OpenSearch Index as a vector database. Titan Embeddings model is used to convert FAQ document into embeddings and store in vector database. You can create a new vector database or used existing one. Here I will create a new one.
Review your selections and create knowledge base.
SYNC data source in Knowledge base.

**Vector Database:**

Make sure you have access to the Tital Embeddings model; otherwise, you may encounter the error.
Click on "Model Access" in the bottom left corner and request access to the model. Usually access will be granted almost immediately.

**Step3: Associate Knowledge Base with the agent**
Associate the Knowledge Base with the agent to enable the agent to leverage its contents for more accurate and context-aware interactions.



Prepare agent again. Click on Prepare.
Try same prompts. Those are our testcases.

If you noticed, both queries are answered based on the knowledge base. Therefore, the response to the second query is general information from the FAQ document and not tailored to the customer's account.
**Step4: Create an action group**
Bedrock Action Group defines specific tasks an agent can perform to help users, like managing loyalty points or answering account questions.
**Steps to Create an action group:**
- Click "Edit in Agent Builder" and go to the action group section.

_THree things you would need here -_
**1.1** OpenAPI schema. Either you can refer to a file with OpenAPI specification from S3 or use visual editor. Here I have a file in S3.
**1.2** DynamoDB table **Users** to query Loyalty Points for Users.

**1.3** Lambda function which agent will invoke through an action group. This is a function (action-grp-business-function) where business logic is written to pull loyalty points balance for a user based on UserID.
**I'll provide Git repository for OpenAPI specification file and the Lambda code.**
One important consideration is, the resource permission for the agent and Bedrock service to invoke this lambda function.
On Lambda function console page -


OK, Let's get back to Action group creation with OpenAPI specification file and Lambda function ready for us.


Also ensure, Lambda Execution role has access to DynamoDB table. I have given full access to DynamoDB to simplify (AmazonDynamoDBFullAccess).

Click on Save and Prepare so that agent is ready with last changes.

Optionally, you can go back to agent page and create an Alias.
An alias points to a specific version of your Agent.


Let's check our prompts again -

**Step5: API Gateway and backend Lambda**
Awesome, the user can now interact with RewardBot to get their loyalty point balance. Next, let's build an API Gateway to expose the **/getloyaltypoints** API. The Lambda function (**call-agent**) code will be in the Git repo. Ensure the Lambda execution role has Bedrock access and increase the timeout from the default 3 seconds to a higher value.

**API Gateway**
- **API:** REST API **GenAIService** with Lambda Integration
- **Resource:** getloyaltypoints (path /)
- **Method:** GET.
The most of the settings used are default except below changes -
_Method request_

_Integration request_

{
"prompt":"$input.params('prompt')",
"session_id":"$input.params('session_id')"
}
Using postman to invoke API **/getloyaltypoints**.


[Github Repository](https://github.com/awslearn-repo/RewardBot
)
| girishmukim |
1,898,147 | The Growing Importance of Prompt Engineering | Introduction In the rapidly evolving landscape of artificial intelligence (AI), the role... | 27,673 | 2024-06-23T21:48:22 | https://dev.to/rapidinnovation/the-growing-importance-of-prompt-engineering-42df | ## Introduction
In the rapidly evolving landscape of artificial intelligence (AI), the role of
prompt engineering has emerged as a crucial element in harnessing the full
potential of AI technologies, particularly in the realm of language models. As
AI continues to integrate into various sectors, the ability to effectively
communicate with and guide AI systems has become increasingly significant.
## What is Prompt Engineering?
Prompt engineering involves crafting inputs that guide AI models to generate
the most accurate and relevant outputs. This practice is crucial because the
quality and specificity of the input significantly influence the AI's
response, impacting the effectiveness of the model in real-world applications.
## Why Hire a Prompt Engineer?
Hiring a prompt engineer is becoming increasingly crucial as businesses and
organizations continue to integrate AI into their operations. A prompt
engineer specializes in designing and refining the inputs given to AI models
to optimize their outputs for specific tasks.
## How to Hire a Prompt Engineer?
Hiring a prompt engineer requires a strategic approach to ensure that the
right talent is brought on board. This involves identifying the correct skill
set, including technical skills and creative thinking, and knowing where to
find qualified candidates.
## Types of Prompt Engineers
Prompt engineering is emerging as a specialized field within AI and machine
learning. There are AI-focused prompt engineers, blockchain-focused prompt
engineers, and hybrid roles that blend skills from different fields.
## Benefits of Hiring a Prompt Engineer
Hiring a prompt engineer can significantly benefit organizations by enhancing
AI interaction quality, improving efficiency and accuracy of AI systems, and
providing tailored blockchain solutions.
## Challenges in Hiring a Prompt Engineer
One of the primary difficulties in hiring a prompt engineer is finding
candidates with the right mix of skills in AI, linguistics, and software
development. Additionally, the rapid pace of technological change in AI
requires prompt engineers to continuously update their skills and knowledge.
## Future of Prompt Engineering
The future of prompt engineering looks promising, with increasing
opportunities for innovation in how prompts are structured and optimized. As
AI models become more sophisticated, the expertise required to interact with
these models effectively will become more specialized.
## Real-World Examples
Real-world examples of technology impacting business are abundant. For
instance, Amazon's use of robotics and AI in their fulfillment centers and
Tesla's incorporation of AI in its vehicles for autonomous driving features
showcase the significant impact of advanced technology in traditional sectors.
## Conclusion
Prompt engineering has emerged as a crucial field in the era of AI-driven
technologies. The role of a prompt engineer is pivotal in shaping the way AI
systems interact with human queries, ensuring that the output is accurate,
contextually relevant, and ethically sound. As AI continues to evolve, the
field of prompt engineering is expected to grow in both relevance and
complexity.
📣📣Drive innovation with intelligent AI and secure blockchain technology! Check
out how we can help your business grow!
[Blockchain App Development](https://www.rapidinnovation.io/service-
development/blockchain-app-development-company-in-usa)
[Blockchain App Development](https://www.rapidinnovation.io/service-
development/blockchain-app-development-company-in-usa)
[AI Software Development](https://www.rapidinnovation.io/ai-software-
development-company-in-usa)
[AI Software Development](https://www.rapidinnovation.io/ai-software-
development-company-in-usa)
## URLs
* <https://www.rapidinnovation.io/post/hiring-a-prompt-engineer-what-you-should-know>
## Hashtags
#PromptEngineering
#AIInnovation
#BlockchainTech
#AIandBlockchain
#FutureOfAI
| rapidinnovation | |
1,894,033 | How to create LazyColumn with drag and drop elements in Jetpack Compose. Part 1. | Hello there! The subject of drag and drop elements isn't new and you can find many decisions of this... | 27,782 | 2024-06-23T21:45:47 | https://dev.to/mardsoul/how-to-create-lazycolumn-with-drag-and-drop-elements-in-jetpack-compose-part-1-4bn5 | android, compose | Hello there! The subject of drag and drop elements isn't new and you can find many decisions of this problem in the Internet. But the decision isn't so obviously like a creating drag and drop lists by RecyclerView. And I suppose my step-by-step guide will be useful.
#### Sources:
- [Make it Easy: How to implement Drag and Drop List Item in Jetpack Compose](https://www.youtube.com/watch?v=jVL6Ze46III)
- [LazyColumnDragAndDropDemo.kt](https://cs.android.com/androidx/platform/frameworks/support/+/androidx-main:compose/foundation/foundation/integration-tests/foundation-demos/src/main/java/androidx/compose/foundation/demos/LazyColumnDragAndDropDemo.kt)
If you just want to solve your problem you can get code from [LazyColumnDragAndDropDemo.kt](https://cs.android.com/androidx/platform/frameworks/support/+/androidx-main:compose/foundation/foundation/integration-tests/foundation-demos/src/main/java/androidx/compose/foundation/demos/LazyColumnDragAndDropDemo.kt) and implement it to your project.
So, I'm getting a solution from [Make it Easy: How to implement Drag and Drop List Item in Jetpack Compose](https://www.youtube.com/watch?v=jVL6Ze46III). I'll try to explain step-by-step what happens. Next, I'll solve issues and do refactoring. At the end of refactoring I wanna get something like [LazyColumnDragAndDropDemo.kt](https://cs.android.com/androidx/platform/frameworks/support/+/androidx-main:compose/foundation/foundation/integration-tests/foundation-demos/src/main/java/androidx/compose/foundation/demos/LazyColumnDragAndDropDemo.kt).
_At the beginning I wanna say that the most interesting will be in next parts. You can think of this part as a preview._
OK, let's get started.
### About start commit of app
[Starter code](https://github.com/MarDSoul/example-android-draganddroplazycolumn/tree/69375902142800e650e4ae443446c4a90735a332)
So, we have three layers in our app:
- **UI** - consists of one screen with two buttons (Add user and Clear list) and `class AppViewModel` and `data class UserEntityUi`
- **Domain** - consists of `data class UserEntity` and `interface UserRepository`
- **Data** - consists of database (created by Room) and `class UserRepositoryImpl`
Also we have Hilt for DI.
ViewModel is subscribed on changes of `Flow<List<UserEntity>>` and the screen `collectAsState()` this `StateFlow`. _nothing much..._
### Step 1
First of all we need to add a reaction on gestures
```
LazyColumn(
modifier = Modifier
.fillMaxSize()
.weight(1f)
.pointerInput(Unit) {
detectDragGesturesAfterLongPress(
onDrag = { },
onDragStart = { },
onDragEnd = { },
onDragCancel = { }
)
}
)
```
And reaction we add by [Modifier.pointerInput](https://developer.android.com/reference/kotlin/androidx/compose/ui/input/pointer/package-summary#(androidx.compose.ui.Modifier).pointerInput(kotlin.Any,kotlin.coroutines.SuspendFunction1)) and [detectDragGesturesAfterLongPress](https://developer.android.com/reference/kotlin/androidx/compose/foundation/gestures/package-summary#(androidx.compose.ui.input.pointer.PointerInputScope).detectDragGesturesAfterLongPress(kotlin.Function1,kotlin.Function0,kotlin.Function0,kotlin.Function2)). You can read about an understanding gestures in [Understand gestures | Google Developers](https://developer.android.com/develop/ui/compose/touch-input/pointer-input/understand-gestures) by yourself. I just don't want to copy text of official guides and documentation it's not correctly.
### Step 2
As usual in Compose we need some state for drag and drop item. Definitely inside this state we need to have [LazyListState](https://developer.android.com/reference/kotlin/androidx/compose/foundation/lazy/LazyListState) and some action for moving element `onMove: (Int, Int) -> Unit`.
OK, let's create it.
```
class DragAndDropListState(
val lazyListState: LazyListState,
private val onMove: (Int, Int) -> Unit
)
```
And of course we need to remember our state.
```
@Composable
fun rememberDragAndDropListState(
lazyListState: LazyListState,
onMove: (Int, Int) -> Unit
): DragAndDropListState {
return remember { DragAndDropListState(lazyListState, onMove) }
}
```
And let's create some extension for moving in a `mutableList`
```
fun <T> MutableList<T>.move(from: Int, to: Int) {
if (from == to) return
val element = this.removeAt(from)
this.add(to, element)
}
```
Next let's inject them into our screen
```
is UiState.Success -> {
val users = state.data.toMutableStateList()
val dragAndDropListState =
rememberDragAndDropListState(lazyListState) { from, to ->
users.move(from, to)
}
/* LazyColumn etc. */
}
```
`toMutableStateList()` function create `SnapshotStateList<T>` that can be observed and snapshot.
### Step 3
So, we see 4 params in `detectDragGesturesAfterLongPress` which we should implement: `onDrag`, `onDragStart`, `pnDragEnd`, `onDragCancel`. Go back to our `class DragAndDropListState` and start...
#### onStart
The first of all we need to know which element we dragged. It needs some variables:
```
private var initialDraggingElement by mutableStateOf<LazyListItemInfo?>(null)
var currentIndexOfDraggedItem by mutableStateOf<Int?>(null)
```
And now we can implement `onDragStart` function for setting initial values of current element.
But first let's create an extension to getting offset of end of element:
```
private val LazyListItemInfo.offsetEnd: Int
get() = this.offset + this.size
```
```
fun onDragStart(offset: Offset) {
lazyListState.layoutInfo.visibleItemsInfo
.firstOrNull { item -> offset.y.toInt() in item.offset..item.offsetEnd }
?.also {
initialDraggingElement = it
currentIndexOfDraggedItem = it.index
}
}
```
So, let's paste `onStart()` into `LazyColumn`
```
LazyColumn(
modifier = Modifier
.fillMaxSize()
.weight(1f)
.pointerInput(Unit) {
detectDragGesturesAfterLongPress(
onDrag = { },
onDragStart = { offset ->
dragAndDropListState.onDragStart(offset)
},
onDragEnd = { },
onDragCancel = { }
)
}
){
/*Composable content*/
}
```
#### onDrag
What do we wanna do?
- we want to move an element along the Y coordinate
- when we moved an element to the place of another element it needs to change indexes of both elements
OK, let's create a variable for distance of offset
```
private var draggingDistance by mutableFloatStateOf(0f)
fun onDrag(offset: Offset) {
draggingDistance += offset.y
}
```
Next we need to get offsets of our initial element
```
private var draggingDistance by mutableFloatStateOf(0f)
private val initialOffsets: Pair<Int, Int>?
get() = initialDraggingElement?.let { Pair(it.offset, it.offsetEnd) }
fun onDrag(offset: Offset) {
draggingDistance += offset.y
initialOffsets?.let { (top, bottom) ->
val startOffset = top.toFloat() + draggingDistance
val endOffset = bottom.toFloat() + draggingDistance
}
}
```
Next I gonna directly get the element. In LazyList (LazyColumn or LazyRow) index of element isn't equal a position on a screen. So, we need some extension for getting the element by index.
```
private fun LazyListState.getVisibleItemInfo(itemPosition: Int): LazyListItemInfo? {
return this.layoutInfo.visibleItemsInfo.getOrNull(itemPosition - this.firstVisibleItemIndex)
}
```
```
private val currentElement: LazyListItemInfo?
get() = currentIndexOfDraggedItem?.let {
lazyListState.getVisibleItemInfo(it)
}
```
Next our task is changing indexes - we have to change the index of element to which we moved the dragged element, and we have to change index of dragged element.
```
fun onDrag(offset: Offset) {
draggingDistance += offset.y
initialOffsets?.let { (top, bottom) ->
val startOffset = top.toFloat() + draggingDistance
val endOffset = bottom.toFloat() + draggingDistance
currentElement?.let { current ->
lazyListState.layoutInfo.visibleItemsInfo
.filterNot { item ->
item.offsetEnd < startOffset || item.offset > endOffset || current.index == item.index
}
.firstOrNull { item ->
val delta = startOffset - current.offset
when {
delta < 0 -> item.offset > startOffset
else -> item.offsetEnd < endOffset
}
}
}?.also { item ->
currentIndexOfDraggedItem?.let { current ->
onMove.invoke(current, item.index)
}
currentIndexOfDraggedItem = item.index
}
}
}
```
`onDrag` function is completed. Let's paste it to `LazyColumn`.
```
detectDragGesturesAfterLongPress(
onDrag = { change, offset ->
change.consume()
dragAndDropListState.onDrag(offset)
/*some code below*/
```
But it's not all because if we try to use it we can see that list doesn't scroll behind dragged item. For scrolling list we have to add over scroll checker and some CoroutineJob for the `.scrollBy()` function.
Add over scroll checker function to `class DragAndDropListState`
```
fun checkOverscroll(): Float {
return initialDraggingElement?.let {
val startOffset = it.offset + draggingDistance
val endOffset = it.offsetEnd + draggingDistance
return@let when {
draggingDistance > 0 -> {
(endOffset - lazyListState.layoutInfo.viewportEndOffset).takeIf { diff -> diff > 0 }
}
draggingDistance < 0 -> {
(startOffset - lazyListState.layoutInfo.viewportStartOffset).takeIf { diff -> diff < 0 }
}
else -> null
}
} ?: 0f
}
```
Add Coroutine Scope and Coroutine Job to the top of list
```
val coroutineScope = rememberCoroutineScope()
var overscrollJob by remember { mutableStateOf<Job?>(null) }
```
And add `scrollBy()` to onDrag
```
onDrag = { change, offset ->
change.consume()
dragAndDropListState.onDrag(offset)
if (overscrollJob?.isActive == true) return@detectDragGesturesAfterLongPress
dragAndDropListState
.checkOverscroll()
.takeIf { it != 0f }
?.let {
overscrollJob = coroutineScope.launch {
dragAndDropListState.lazyListState.scrollBy(it)
}
} ?: kotlin.run { overscrollJob?.cancel() }
}
```
#### onDragEnd, onDragCancel
Here we just reset variables in `class DragAndDropListState`
```
fun onDragInterrupted() {
initialDraggingElement = null
currentIndexOfDraggedItem = null
draggingDistance = 0f
}
```
### Step 4
There is one thing what we have to do. We should define the modifier of element.
Let's create a variable inside `class DragAndDropListState`
```
val elementDisplacement: Float?
get() = currentIndexOfDraggedItem?.let {
lazyListState.getVisibleItemInfo(it)
}?.let { itemInfo ->
(initialDraggingElement?.offset ?: 0f).toFloat() + draggingDistance - itemInfo.offset
}
```
And our element's modifier looks like this:
```
ItemCard(
userEntityUi = user,
modifier = Modifier
.composed {
val offsetOrNull =
dragAndDropListState.elementDisplacement.takeIf {
index == dragAndDropListState.currentIndexOfDraggedItem
}
Modifier.graphicsLayer {
translationY = offsetOrNull ?: 0f
}
}
)
```
### Preliminary results
Now we have the same code like in [Make it Easy: How to implement Drag and Drop List Item in Jetpack Compose](https://www.youtube.com/watch?v=jVL6Ze46III) video.
You can get the code by this link: [Part 1 code](https://github.com/MarDSoul/example-android-draganddroplazycolumn/tree/99c493a5beaeac77d49e3ac79044667fbe94457f)
And this even works :)

#### Issues
- If we add a new user our list back to the starter order of elements. It happens we add user to a database, from the database we receive a new users list, obviously our `uiState` also have a new value, next recomposition.
- If we add a new user and try to interact with last elements we can catch **Index Of Bounds Exception**. It happens because we have no any keys for update `DragAndDropListState`.
---
_Refactoring and resolving issues will be in next Part 2 (place for link)._ | mardsoul |
1,897,915 | How to Create a Dark Mode in Figma and Not Die Trying | Alright, alright, you've been working on your design and prototyping in Figma for months, and due to... | 0 | 2024-06-23T21:42:41 | https://dev.to/amanda_montero/how-to-create-a-dark-mode-in-figma-and-not-die-trying-20pm | figma, uidesign, uxdesign, frontend |
Alright, alright, you've been working on your design and prototyping in Figma for months, and due to the nature of your project, a dark mode wasn't planned from the beginning. It's okay, it happens to all of us sometimes. Come on, you already had enough problems on your plate.
Don't worry, it's time to buckle up and get to work.
##
**How can we do it in the most organized and clean way possible?**
Here are my 3 key tips.
**1-Define your primary color palette**. It's obvious, right? But sometimes due to lack of time or poor team organization, our project ends up with an overwhelming number of colors, making our work more complicated and causing developers to hate us more than necessary.

**2- Create your palette**. Once our exercise of personal redemption is done and everything is in order, we assign a palette equivalent to the one already created in light mode. ⭐Here comes the magic⭐: create palettes that preserve the brand identity of the app. Be original; dark mode doesn't have to be black. In the example case, the brand has a very soft light aesthetic with pastel pink tones. Dark mode was quite a challenge.

Hey! Remember colors shouldn't just be pretty, **create a color palette with contrast to make your design accessible**, remember you have tools like https://www.w3.org/WAI/standards-guidelines/wcag/glance/es to help you with this task.
If you work with Figma, you also have plugins that can help you automate the process, such as A11y, Able etc.
Alright, this is an example of how we can structure the use of the color palette in our project. Personally, I think it's a way to organize and streamline the whole process that we are going to undertake afterwards.
**This part is tedious, but trust me, it will save you a ton of work in the future.**

Okay, everything is set. Now let's focus on the two elements that will make our lives easier when creating a dark mode, or simply speeding up any design changes in our project.
**1-Components:** Always work with components. Seriously, if you're a junior UI designer, you might still be working with groups. It may seem obvious, but if you're just starting out, I highly recommend getting the hang of components as soon as possible.
-
**Advantages of using components:** You can apply changes in all the design in seconds. You can create variables, to make dynamic components, create variations of the same component, to be able to create a functional UI Kit and to be able to reuse it in different projects.
**Remember,** these components can help us in creating the dark mode in the sense that sometimes we need to be prepared for the issues we can encounter with dark mode shades. In the example case, product cards need a small white border, which is not typical of dark mode, but it's what the project required.

-
**Disadvantages, NONE, don't be lazy** and work with components, you will take a little time at the beginning but really you will thank yourself.

**2-Variants**: are not really necessary for a UI but also facilitate a lot of work and are a crucial tool to generate the dark mode in a dynamic way.

In this way, we will be able to assign color variants from our original palette (light) to our variant or variants of palettes. It is important that the colors of the variants are assigned. That is to say, if we have a button with a color that we have called X, that button has to have the color X, not its hexadecimal tone, so when we assign the dark variable, the color will change automatically. This allows us to also create eventual design changes or automatically change a color throughout the project.
##
**That's all.**
Today's hard work will be tomorrow's rest. If you follow these tips, the changes in the project will be very pleasant.

| amanda_montero |
1,898,144 | Advanced Text Search Mastery with Apache Lucene: A Full Guide | Apache Lucene is an esteemed search library celebrated for its advanced text search capabilities. It... | 0 | 2024-06-23T21:31:29 | https://devtoys.io/2023/10/29/advanced-text-search-mastery-with-apache-lucene-a-full-guide/ | tutorial, backend, devtoys, java | ---
canonical_url: https://devtoys.io/2023/10/29/advanced-text-search-mastery-with-apache-lucene-a-full-guide/
---
Apache Lucene is an esteemed search library celebrated for its advanced text search capabilities. It is a vital resource for developers, data analysts, and SEO professionals, providing a robust query syntax for crafting precise and complex search queries. This guide aims to unravel the intricacies of Lucene’s query syntax, enabling you to maximize the potential of Apache Lucene in your projects.
## Understanding Lucene Query Syntax: Simple vs. Full
Lucene Query Syntax comes in two flavors: Simple and Full. Both serve to create powerful search queries but differ in terms of complexity and capability.
## Simple Lucene Query Syntax
**Purpose**: Designed for ease of use and quick setup.
**Capabilities**: Supports basic text searches, including single and multiple term searches, as well as wildcard and fuzzy searches.
**Limitations**: Lacks the advanced features and precision of Full Lucene
Query Syntax.
Usage Scenario: Ideal for straightforward search requirements where speed and simplicity are prioritized.
```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
public class SimpleLuceneExample {
public static void main(String[] args) throws Exception {
// Setup: Create an index
StandardAnalyzer analyzer = new StandardAnalyzer();
Directory index = new RAMDirectory();
// Add documents to the index (omitted for brevity)
// Simple search example
String queryStr = "apple";
Query query = new QueryParser("content", analyzer).parse(queryStr);
// Search the index
IndexReader reader = DirectoryReader.open(index);
IndexSearcher searcher = new IndexSearcher(reader);
TopDocs results = searcher.search(query, 10);
// Display search results
for (ScoreDoc hit : results.scoreDocs) {
Document doc = searcher.doc(hit.doc);
System.out.println(doc.get("content"));
}
reader.close();
index.close();
}
}
```
## Full Lucene Query Syntax
**Purpose**: Offers an extensive set of features for complex and precise search queries.
**Capabilities**: Includes all features of Simple Lucene Query Syntax, plus advanced options like Boolean operators, range searches, boosting terms, proximity searches, and field-specific queries.
**Usage Scenario**: Best suited for complex search requirements that demand a high degree of precision and customization.
```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
public class FullLuceneExample {
public static void main(String[] args) throws Exception {
// Setup: Create an index
StandardAnalyzer analyzer = new StandardAnalyzer();
Directory index = new RAMDirectory();
// Add documents to the index (omitted for brevity)
// Full query example
String queryStr1 = "apple";
String queryStr2 = "banana";
Query query1 = new QueryParser("content", analyzer).parse(queryStr1);
Query query2 = new QueryParser("content", analyzer).parse(queryStr2);
BooleanQuery.Builder booleanQuery = new BooleanQuery.Builder();
booleanQuery.add(query1, BooleanClause.Occur.MUST);
booleanQuery.add(query2, BooleanClause.Occur.MUST_NOT);
// Search the index
IndexReader reader = DirectoryReader.open(index);
IndexSearcher searcher = new IndexSearcher(reader);
TopDocs results = searcher.search(booleanQuery.build(), 10);
// Display search results
for (ScoreDoc hit : results.scoreDocs) {
Document doc = searcher.doc(hit.doc);
System.out.println(doc.get("content"));
}
reader.close();
index.close();
}
}
```
## Grasping the Basics of Lucene Query Syntax
**Single and Multiple Term Searches**
## Single Term Search
Type the term in the search box. For instance, apple fetches all documents containing “apple”.
```java
String queryStr = "apple";
Query query = new QueryParser("content", analyzer).parse(queryStr);
```
## Multiple Term Search
Inputting terms like apple banana retrieves documents with either “apple”, “banana”, or both.
```java
String queryStr = "apple banana";
Query query = new QueryParser("content", analyzer).parse(queryStr);
```
## Phrase Searches
To find an exact phrase, enclose it in double quotes: "apple pie".
```java
String queryStr = "\"apple pie\"";
Query query = new QueryParser("content", analyzer).parse(queryStr);
```
## Wildcard Searches
Utilize * for multiple character wildcards and ? for single character wildcards: appl*, app?e.
```java
String queryStr = "appl*";
Query query = new QueryParser("content", analyzer).parse(queryStr);
String queryStr = "app?e";
Query query = new QueryParser("content", analyzer).parse(queryStr);
```
## Fuzzy Searches
Add ~ to a term for a fuzzy search: apple~.
```java
String queryStr = "apple~";
Query query = new QueryParser("content", analyzer).parse(queryStr);
```
## Mastering Boolean Operators
Boolean operators enable the creation of complex search queries:
AND: apple AND banana returns documents with both “apple” and “banana”.
```java
String queryStr = "apple AND banana";
Query query = new QueryParser("content", analyzer).parse(queryStr);
OR: apple OR banana retrieves documents with either “apple” or “banana”.
String queryStr = "apple OR banana";
Query query = new QueryParser("content", analyzer).parse(queryStr);
NOT: apple NOT banana fetches documents with “apple” but not “banana”.
String queryStr = "apple NOT banana";
Query query = new QueryParser("content", analyzer).parse(queryStr);
```
## Implementing Range Searches in Lucene
Range searches are crucial for finding documents with terms within a specific range:
**Inclusive Range Searches**
Use square brackets []: date:[20230101 TO 20231231], price:[10 TO 50].
```java
String queryStr = "date:[20230101 TO 20231231]";
Query query = new QueryParser("content", analyzer).parse(queryStr);
```
**Exclusive Range Searches**
Use curly brackets {}: price:{10 TO 50}.
```java
String queryStr = "price:{10 TO 50}";
Query query = new QueryParser("content", analyzer).parse(queryStr);
```
## When to Use Exclusive Range Searches
Filtering Results: Exclude edge values.
Avoiding Duplication: Useful in paginated search results.
Precise Numerical Filters: For accurate numerical filtering.
## 👀 Check out the full tutorial here! ===> [Advanced Text Search Mastery with Apache Lucene: A Full Guide - DevToys.io](https://devtoys.io/2023/10/29/advanced-text-search-mastery-with-apache-lucene-a-full-guide/)
| 3a5abi |
1,898,143 | AI is just excited to be here | Any software developer worth their salt can tell AI is just an excited pair programmer. In the early... | 0 | 2024-06-23T21:24:42 | https://dev.to/maurijhn/ai-is-just-excited-to-be-here-f10 | ai, computerscience | Any software developer worth their salt can tell AI is just an excited pair programmer.
In the early stages of ChatGPT, everybody was luring me into using the chatbot to help me as a software developer. Everything I heard about it was "heaven-like". When I started using it, I did feel its versatility for developers; even went the extra mile and purchased the premium version. Everybody who was or knew a software developer told stories about how powerful the bot was when it had access to the internet. After less than a week of using the paid version, I failed to see big differences between the paid and non-paid answers.
I then tried out GitHub's Copilot. I found this tool extremely powerful for developers. Not because of the solutions it provided, but because I didn't need to go back to the browser to copy and then paste them in my editor. However, I started to feel an uneasy loss of control over my thinking. It wasn't me programming anymore. I was just trying out snippets until the program worked. Eventually, I configured Copilot to be toggled and disabled by default. I only use it when I don't want to rewrite something or when I'm writing unit tests.
This post is more of a venting outlet for me. I was laid off last year and with all the talks of AI taking over tech jobs and nobody batting an eye at Sam Altman being all excited about it, I think people hyping AI need a reality check. It's a tool, a new software tool, 1s and 0s, and tools are made for us to _"amplify the inherent abilities that we have to spectacular magnitudes"_ _- Steve Jobs_ | maurijhn |
1,898,129 | Enhancing Technical Skills by Writing: Learn in Public | Overview In this article, I will share the reasons why I write technical blog posts and... | 0 | 2024-06-23T21:20:03 | https://dev.to/godinhojoao/enhancing-technical-skills-by-writing-learn-in-public-42b7 | beginners, writing | ## Overview
- In this article, I will share the reasons why I write technical blog posts and my approach. I hope this will motivate and guide you to start writing about topics you are studying or already familiar with!
## Table of Contents
- [Reasons why I write technical blog posts](#reasons-why-i-write-technical-blog-posts)
- [Learn in Public vs Content Creator](#learn-in-public-vs-content-creator)
- [Do I use ChatGPT or other AIs when writing? Yes, and no.](#do-i-use-chatgpt-or-other-ais-when-writing-yes-and-no)
- [Steps I follow to create my technical blog posts and suggestions for you](#steps-i-follow-to-create-my-technical-blog-posts)
## Reasons why I write technical blog posts
- To learn/improve technical knowledge.
- To learn/improve storytelling.
- To learn/improve writing.
- To learn/improve English.
## Learn in Public vs Content Creator
- These terms are different. As a learner in public, you share what you've been studying technically.
- The purpose of a learner in public is to document and share their studies.
- The purpose of a content creator is to entertain. (This is not my focus, and probably not yours if you are here.)
- Some advantages of learning in public:
- Builds accountability and consistency.
- Reinforce your learning through teaching and practicing.
- Provides opportunities for feedback and improvement.
## Do I use ChatGPT or other AIs when writing? Yes, and no.
- No, AI does not create my content.
- **If AI did, I wouldn't learn anything during this process**, and it would be a **complete waste of time**, which I can't afford.
- Yes, I use ChatGPT and AI to review my English and minimize typos. I also use GPT to improve my markdown, as I write all my posts using a `file.md` on VsCode.
- But I still try to maintain my writing style, even if it means committing "mistakes" and not being the best English writer.
- "Mistakes" because I'm referring to my writing style, which may not be perfect, **but it's mine, not an AI's**.
- "Mistakes", because I'm talking about the writing style, that's probably not the best, **but it's mine** not of an AI.
- As I mentioned before: **My main purpose is to learn** and to do that the content creation follows [certain steps](#steps-i-follow-to-create-my-technical-blog-posts).
## Steps I follow to create my technical blog posts:
- There are some types of posts that I used to write, some more informal, others more formal. Sometimes practical tutorials, and sometimes theoretical content about something that I'm studying for the first time or reviewing.
- But in all of them I follow a "basic algorithm":
- 1. **Select the subject you will learn or review**
- Here you will select the content, but more than that, ensure it is something you are genuinely interested in or that solves a problem you have or have encountered before.
- 2. **Write the article with your own words and understanding of the content**
- How do you know if you understand something? By teaching others and practicing.
- To teach others, you can't be reading something and writing at the same time, because you will just copy the words you've read without realize. Instead, try to do it naturally, read first, practice, create examples for yourself, and then write just what you've learned.
- But don't get too attached to it, sometimes it's important to use the actual definition and avoid your own words, be sure to do this at the right moment.
- 3. **Review written content, typos, and also markdown.**
- Ensure that what you've written is not a misunderstanding of the subject.
- In the future, if you see something wrong you can re-write about this subject, why not?
- For typos, you can use AI, Grammarly, read again, or even hire a professional. (Probably paying someone it's not your idea, right?)
- To review markdown and improve formatting you can use AI.
## Thanks for Reading!
- Feel free to reach out if you have any questions, feedback, or suggestions. Your engagement is appreciated!
## Contacts
- You can find this and more content on:
- [My website](https://godinhojoao.com/)
- [GitHub](https://github.com/godinhojoao)
- [LinkedIn](https://www.linkedin.com/in/joaogodinhoo/)
- [Dev Community](https://dev.to/godinhojoao)
| godinhojoao |
1,898,128 | [NestJS] API DockerHub + IA + PASETO (Local) | Proyecto en NestJS usando los endpoints de DockerHub (para buscar repositorios), utilizando... | 0 | 2024-06-23T21:13:43 | https://dev.to/jkdevarg/nestjs-api-dockerhub-ia-paseto-local-2n4n | nestjs, javascript, api, ai | Proyecto en NestJS usando los endpoints de DockerHub (para buscar repositorios), utilizando autenticación con Paseto y con la ayuda de IA (Geminis).
**Introducción:**
Es un proyecto que arme estando aburrido, mediante los endpoints de DockerHub obtengo datos de los repositorios, con ellos y usando IA Geminis creo un docker-compose.yml listo para ser ejecutado.
Si se envia un parametro tipo boleano en la query ej:
`http://localhost:3000/api/v1/docker/find?id=bitnami/laravel&execute=true`
ejecuta el docker compose para crear las imagenes y el contenedor.
Si todo esta bien la respuesta de la API será correcta devolviendo, la respuesta de DockerHub, el contenido del docker-compose.yml el comando que se puede ejecutar, y el output del docker-compose a la hora de crearse.
**Capturas**






Luego puedes configurar cualquier proyecto o repositorio.
**Extras**
También se uso la libreria de PASETO para autenticación con token.
Para más info [sitio web](https://paseto.io/)
Si ven que no funciona a la primera o sale error 400, vuelven a ejecutar o sino pueden ir al directorio raíz donde se creo el composer y lo pueden volver a generar.
Código del proyecto
[https://github.com/JkDevArg/API-NestJS-DockerHub](https://github.com/JkDevArg/API-NestJS-DockerHub)⭐⭐⭐
| jkdevarg |
1,898,127 | We were tired of bureaucracy, so we built an opensource repo for the best guides | Today we are launching Tramitit, a shared database on getting all those poor-UX local procedures done... | 0 | 2024-06-23T21:09:11 | https://ricardobatista.me/posts/we-were-tired-of-bureaucracy/ | Today we are launching [Tramitit](https://tramitit.com/), a shared database on getting all those poor-UX local procedures done in a much simpler way!
- **How it works**: The community provides detailed walkthroughs on every possible bureaucratic process you might go through.
- **For the community by the community**: The content on this website is curated, verified, and rated by the community.
- **Welcoming providers that can help our users**: Although our guides are as simple as possible, users may still prefer to delegate tasks to a curated provider.
We are in a bit a chicken & egg problem: for the traffic to pick up and have contributors, we need to start by adding some content - some of which we have already gone through and fixed it, other not yet there. We will get there over time, and work with providers that can also bring the quality up.
You can find our [GitHub repo here](https://github.com/tramitit/guides), where you can contribute as well!
Our [website is also open-source](https://github.com/tramitit/tramitit.github.io), so feel free to add anything. | rbatista19 | |
1,898,125 | Custom Emails with Supertokens, Resend, and React Email | At Cerebral Valley we use Supertokens for authentication into our platform. Supertokens comes with a... | 0 | 2024-06-23T21:05:59 | https://dev.to/iporollo/custom-emails-with-supertokens-resend-and-react-email-2mi1 | At [Cerebral Valley](https://cerebralvalley.ai) we use [Supertokens](https://supertokens.com/) for authentication into our platform.
Supertokens comes with a default email template / design that is sent to users upon account creation, email confirmation, and other actions.
We wanted to customize emails sent out from Supertokens to our users to keep the brand aesthetic, so we used [Resend](https://resend.com/) and [React Email](https://react.email/) to do so.
I wrote up this post to give step by step instructions of how to combine the three technologies and showcase their simplicity.
## Pre-requisites
Your project will need to be using [Supertokens](https://supertokens.com/) as the method of authentication.
In this walkthrough, I am working out of a Typescript project with an Express backend.
## Resend setup
First, you will need to create a [Resend](https://resend.com/) account. Resend is a new platform to send emails programmatically. Think [Sendgrid](https://sendgrid.com/) but modern and easier to use.
[Sign up] (https://resend.com/signup) and go through the onboarding flow to get an API key. By default, the first key that is created is called "Onboarding"

Add your API key to your `.env` file, something like
```
RESEND_API_KEY="<your_api_key>"
```
Then, add the domain you want to send your emails from.

You will have to add a few records to your domain which Resend will walk you through.
Next, create a file in your project called `smpt.ts` with the following contents
```
const smtpSettings = {
host: 'smtp.resend.com',
authUsername: 'resend',
password: process.env.RESEND_API_KEY,
port: 465,
from: {
name: '<your_email_sender_name>',
email: '<your_email_account>',
},
secure: true,
};
export { smtpSettings };
```
## React Email setup
[React Email](https://react.email/) is a library built by the founder of Resend. Since emails support HTML, the library allows you to customize emails with React components and then compiles them down to HTML before sending.
To install React Email, run
```
yarn add react @react-email/components @react-email/render
```
Create your a file named `Email.tsx` for your React Email components. The file contents would be something like this
```
import * as React from 'react';
import { render } from '@react-email/render';
import {
Body,
Button,
Container,
Head,
Heading,
Hr,
Html,
Img,
Link,
Preview,
Section,
} from '@react-email/components';
const emailSubject = 'Email Subject';
const EmailHtml = (content: string): string => {
return render(<Email content={content} />);
};
// Component
interface EmailProps {
content: string;
}
function Email(props: EmailProps): JSX.Element {
const { content } = props;
return (
<Html>
<Head />
<Preview>{'What the user sees in the preview'}</Preview>
<Body style={main}>
<Container style={container}>
<Img
src={`https://yourlogo.com/logo.png`}
width="42"
height="42"
alt="Logo"
style={logo}
/>
<Heading style={heading}>Click the button below</Heading>
<Section style={buttonContainer}>
<Button style={button} href={content}>
My button
</Button>
</Section>
</Container>
</Body>
</Html>
);
}
// Styling
const logo = {
borderRadius: 21,
width: 42,
height: 42,
};
const main = {
backgroundColor: '#090909',
fontFamily:
'-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif',
// ui-sans-serif, system-ui, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji"
};
const container = {
margin: '0 auto',
padding: '20px 0 48px',
maxWidth: '560px',
};
const heading = {
fontSize: '24px',
letterSpacing: '-0.5px',
lineHeight: '1.3',
fontWeight: '400',
color: '#fff',
padding: '17px 0 0',
};
const paragraph = {
margin: '0 0 15px',
fontSize: '15px',
lineHeight: '1.4',
color: '#3c4149',
};
const buttonContainer = {
padding: '27px 0 27px',
};
const button = {
backgroundColor: '#fff',
borderRadius: '3px',
fontWeight: '600',
color: '#000',
fontSize: '15px',
textDecoration: 'none',
textAlign: 'center' as const,
display: 'block',
padding: '11px 23px',
};
const reportLink = {
fontSize: '14px',
color: '#b4becc',
};
const hr = {
borderColor: '#dfe1e4',
margin: '42px 0 26px',
};
export { EmailHtml, emailSubject };
```
Note that the above code is for a showing a link in the email. Your content may be different.
## Supertokens Config
Supertokens allows you to use your own domain / SMTP server ([link to docs](https://supertokens.com/docs/thirdpartypasswordless/email-delivery/about#method-2-use-your-own-domain--smtp-server)). We take advantage of this method to plug in Resend and the created React component.
In your Supertokens config, override the smtp settings as described in their docs [here](https://supertokens.com/docs/thirdpartypasswordless/email-delivery/smtp/change-email-content).
Don't forget to import your smtpSettings and Email component.
Your Supertokens config would look something like this
```
import supertokens from "supertokens-node";
import Passwordless from "supertokens-node/recipe/passwordless";
import Session from "supertokens-node/recipe/session";
import { SMTPService } from "supertokens-node/recipe/passwordless/emaildelivery";
import EmailVerification from "supertokens-node/recipe/emailverification"
import { SMTPService as EmailVerificationSMTPService } from "supertokens-node/recipe/emailverification/emaildelivery";
import { smtpSettings } from './smtp';
import { EmailHtml, emailSubject } from './Email';
supertokens.init({
appInfo: {
apiDomain: "...",
appName: "...",
websiteDomain: "..."
},
recipeList: [
Passwordless.init({
emailDelivery: {
service: new SMTPService({
smtpSettings,
override: (originalImplementation): any => {
return {
...originalImplementation,
getContent: async function (input): Promise<any> {
const {
isFirstFactor,
codeLifetime, // amount of time the code is alive for (in MS)
email,
urlWithLinkCode, // magic link
} = input;
if (isFirstFactor) {
return {
body: EmailHtml(urlWithLinkCode),
isHtml: true,
subject: emailSubject,
toEmail: email,
};
} else {
return {
body: EmailHtml(urlWithLinkCode),
isHtml: true,
subject: emailSubject,
toEmail: email,
};
}
},
};
},
}),
},
}),
Session.init()
]
});
```
In the config above, we override the email delivery service with our own definition of the `SMPTService`, where we set the previously defined `smptSettings` and a new `getContent` function definition. In the new `getContent`, we get the defined React Component as the body of the email. The component is compiled into HTML, so we can use it in the email.
With those changes, you can start the server and run through the authentication flow. When you receive the email from Supertokens, you should see your new component design as the body of the email.
## Wrapping up
If you're using Supertokens for your authentication, I highly recommend using Resend and React Email to customize your emails. It's very simple to setup and a pleasant developer experience.
If you have any questions, send me a DM in our community [Slack](https://cerebralvalley.ai/slack).
Check out what we're building at [Cerebral Valley](https://cerebralvalley.ai).
| iporollo | |
1,898,124 | Small Forms Bundle | I've just released small/forms-bundle 1.0.0 and small/forms 1.1.2. small/forms provide input data... | 0 | 2024-06-23T21:03:31 | https://dev.to/sebk69/small-forms-bundle-3dc5 | php, symfony, showdev | I've just released small/forms-bundle 1.0.0 and small/forms 1.1.2.
small/forms provide input data validation and transformation.
small/forms-bundle implement it in Symfony normalizer.
git : [https://git.small-project.dev/lib/small-forms-bundle](https://git.small-project.dev/lib/small-forms-bundle)
packagist : [https://packagist.org/packages/small/forms-bundle](https://packagist.org/packages/small/forms-bundle) | sebk69 |
1,898,123 | "Traverse" in Computer Science | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-23T21:03:29 | https://dev.to/snipertomcat/traverse-in-computer-science-1mh9 | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
The definition of the verb 'traverse' is to "travel across or through", and in the context of computer science simply refers to the read, write, or execution upon the members of a compositional structure, broken down to operate on a granular level.
## Additional Context
No additional context is needed - the Explainer is self-explanatory
| snipertomcat |
1,898,122 | "Traverse" in Computer Science | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-23T21:03:23 | https://dev.to/snipertomcat/traverse-in-computer-science-24lj | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
The definition of the verb 'traverse' is to "travel across or through", and in the context of computer science simply refers to the read, write, or execution upon the members of a compositional structure, broken down to operate on a granular level.
## Additional Context
No additional context is needed - the Explainer is self-explanatory
| snipertomcat |
1,898,118 | Wie man Wordpress lokal mit docker compose aufsetzt | Wieso so? Um eine reproduzierbare Entwicklungs - Umgebung zu haben, ist es gut das... | 0 | 2024-06-23T20:55:00 | https://dev.to/rubenvoss/wie-man-wordpress-lokal-mit-docker-compose-aufsetzt-3m33 | ## Wieso so?
Um eine reproduzierbare Entwicklungs - Umgebung zu haben, ist es gut das Aufsetzen dieser Umgebung automatisch tun zu können. Dazu kann man docker compose benutzen. Hier zeige Ich euch ein docker-compose.yml, welches ein guter Startpunkt ist für eine reproduzierbare Entwicklungsumgebung.
Wir Benutzen hier als Stack
```
- wordpress:6.5.4-php8.1-apache
- mariadb:10.6.4-focal
- traefik:v3.0.3
```
Die `docker-compose.yml` können wir mit dem wordpress - Service anfangen:
```
services:
wordpress:
# Am Besten die Wordpress - Version festsetzen
image: wordpress:6.5.4-php8.1-apache
volumes:
# Im wp_data Volume liegt der Wordpress - Code,
# So kannst du ihn mit anderen Containern teilen
- wp_data:/var/www/html
restart: always
environment:
# Hier kannst du durch deine eigenen Werte ersetzen
- WORDPRESS_DB_HOST=db
- WORDPRESS_DB_USER=wordpress
- WORDPRESS_DB_PASSWORD=wordpress
- WORDPRESS_DB_NAME=wordpress
labels:
# Weiter unten konfigurieren wir dann traefik als Reverse Proxy
- traefik.enable=true
- traefik.http.routers.mywordpress.rule=Host(`localhost`)
networks:
# Füge dein Wordpress dem proxy Netzwerk hinzu:
- proxy
# Optional kannst du dir noch den richtigen Hostname geben,
# sonst beschwert sich Wordpress
hostname: localhost
```
Als nächstes kannst du die Datenbank hinzufügen:
```
db:
# Das mariadb image unterstützt amd64 & arm64 architecture
image: mariadb:11.4.2-noble
command: '--default-authentication-plugin=mysql_native_password'
volumes:
# Deine Datenbank ist unter dem db_data Volume gespeichert
- db_data:/var/lib/mysql
restart: always
environment:
# Hier kannst du durch deine eigenen Werte ersetzen
- MYSQL_ROOT_PASSWORD=somewordpress
- MYSQL_DATABASE=wordpress
- MYSQL_USER=wordpress
- MYSQL_PASSWORD=wordpress
expose:
- 3306
- 33060
networks:
- proxy
```
Jetzt müssen wir das ganze nur noch um unseren traefik Reverse - Proxy erweitern:
```
traefik:
image: traefik:v3.0.3
ports:
- "80:80"
- "8080:8080"
networks:
- proxy
volumes:
- /var/run/docker.sock:/var/run/docker.sock
command:
# Lokal reicht das http - Setup
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
# Mit whoami kannst du traefik debuggen
whoami:
image: traefik/whoami:v1.8
networks:
- proxy
labels:
- traefik.enable=true
- traefik.http.routers.mywhoami.rule=Host(`whoami.localhost`)
networks:
proxy:
name: proxy
volumes:
letsencrypt:
name: letsencrypt
db_data:
wp_data:
```
Nun musst du nur noch `docker compose up` Auf deinem Terminal ausführen, und dein Wordpress wird unter localhost bei dir erreichbar sein!
Happy Coden!
Dein Ruben
[Mein Blog](https://rubenvoss.de/?p=48)
| rubenvoss | |
1,896,867 | WordPress Classic vs. Block Themes | WordPress, powering over 40% of the web, has seen significant transformations in its theme... | 0 | 2024-06-23T20:52:13 | https://dev.to/mikevarenek/wordpress-classic-vs-block-themes-4pi1 | webdev, wordpress, beginners | WordPress, powering over 40% of the web, has seen significant transformations in its theme development approach. From the early days of Classic themes to the revolutionary Block themes introduced with Full Site Editing (FSE), WordPress has continuously adapted to meet the needs of developers and users alike. As we stand at the crossroads of these two approaches, it's essential to understand their differences and decide which path to take for your next project.
## Section 1: Understanding WordPress Themes
**Definition and Purpose**
WordPress themes are collections of templates and stylesheets used to define the appearance and display of a WordPress-powered website. They control the layout, design, and overall aesthetic of a site, enabling developers and users to create a unique look and feel without altering the core WordPress software. Themes can include template files, custom functions, images, and stylesheets, providing a robust framework for building a diverse range of websites.
**The primary purposes of WordPress themes are:**
- Separation of Design and Content: Themes allow users to modify the appearance of their website without affecting the content. This separation ensures that updates to the design can be made independently of the site's content.
- Customization: Themes offer various customization options, enabling users to tailor the design, layout, and functionality of their site to meet specific needs and preferences.
- Consistent Design: By using themes, websites maintain a consistent design across all pages and posts, enhancing the user experience and making the site more professional.
- Extensibility: Themes can be extended with custom plugins and widgets, adding new features and functionalities without altering the core theme files.
## Section 2: WordPress Classic Themes
WordPress Classic themes are the traditional themes that have been the foundation of WordPress websites for years. These themes are built using PHP templates, HTML, CSS, and JavaScript, providing a robust framework for designing and customizing websites.
Classic themes leverage a template-based system, where different PHP files control various parts of the website, such as the header, footer, and individual pages. This approach has been instrumental in the widespread adoption of WordPress, allowing developers to create custom websites with extensive control over design and functionality.
**Structure**
The structure of Classic themes is based on a combination of PHP files, stylesheets, and other assets. Key components include:
**Template Files:** These PHP files define the layout and structure of different parts of the website. Common template files include:
- header.php: Contains the HTML code for the header section.
- footer.php: Contains the HTML code for the footer section.
- index.php: The main template file that serves as a fallback for all other templates.
- single.php: Used to display individual blog posts.
- page.php: Used to display individual pages.
- archive.php: Used to display archive pages such as category or tag listings.
- sidebar.php: Contains the HTML code for the sidebar section.
**Template Hierarchy:** WordPress uses a template hierarchy system to determine which template file to use for a given request. This hierarchy allows themes to include multiple template files for different types of content, ensuring that the appropriate layout is applied based on the context.
**functions.php:** This file is used to add custom functionality to the theme. It can include custom functions, register widget areas, enqueue scripts and styles, and define theme support features such as custom logos and post thumbnails.
**Style.css:** The main stylesheet that controls the visual presentation of the theme. This file is also used to provide theme information, such as the name, author, and version.
**Other Assets:** Classic themes can include additional assets such as JavaScript files, images, and fonts to enhance the design and functionality of the site.
**Customization**
Customization in Classic themes can be achieved through various methods, allowing developers to tailor the theme to meet specific requirements:
**Child Themes:** A child theme inherits the functionality and styling of a parent theme while allowing modifications. By creating a child theme, developers can customize the design and functionality without altering the original theme files, ensuring that updates to the parent theme do not overwrite customizations.
**Custom CSS:** Developers can add custom CSS to modify the appearance of the theme. This can be done directly in the style.css file or through the WordPress Customizer, which provides a user-friendly interface for adding CSS code.
**Hooks and Filters:** WordPress provides a system of hooks (actions and filters) that allow developers to insert custom code at specific points in the theme. Actions allow code to be executed at certain points, while filters modify data before it is displayed. This system provides a powerful way to extend and customize theme functionality.
**Theme Options and Customizer:** Many Classic themes include custom theme options panels or integrate with the WordPress Customizer, allowing users to change settings such as colors, fonts, and layout options through a graphical interface.
## Pros and Cons
**Pros:**
- Established: Classic themes have been around for a long time, with a mature ecosystem and extensive community support.
- Wide Community Support: There are numerous tutorials, forums, and resources available for learning and troubleshooting Classic themes.
- Extensive Documentation: Detailed documentation exists for theme development, making it easier for developers to get started and find solutions to common problems.
- Compatible with Most Plugins: Classic themes are compatible with a vast array of plugins, enhancing functionality without requiring extensive custom code.
**Cons:**
- Can be Complex: Developing and customizing Classic themes can be complex, requiring knowledge of PHP, HTML, CSS, and JavaScript.
- Less Flexibility Without Custom Coding: While customization is possible, achieving advanced customizations often requires writing custom code.
- Limited to Template Hierarchy: The template-based system, while powerful, can be less flexible than newer approaches like block-based editing, especially for users who prefer a more visual editing experience.
- Classic themes have played a crucial role in the growth and success of WordPress, providing a solid foundation for building custom websites. However, as web development continues to evolve, new approaches like Block themes offer additional flexibility and ease of use, representing the next step in the evolution of WordPress theming.
## Section 3: WordPress Block Themes
WordPress Block themes represent a modern approach to theme development, introduced alongside the Full Site Editing (FSE) capabilities of the Gutenberg editor. Unlike Classic themes, which rely on a combination of PHP templates and custom code, Block themes use blocks as the fundamental building units.
This allows for a more visual and intuitive design process, where users can construct and customize their entire site layout directly within the WordPress editor. Full Site Editing enables comprehensive site-wide changes, offering unprecedented flexibility and control over both content and design.
**Structure**
The structure of Block themes differs significantly from Classic themes, focusing on blocks, templates, and configuration files:
**Block Templates: **These are HTML files that define the layout and structure of different parts of the site using blocks. For example:
- index.html: The main template file, serving as a fallback for other templates.
- single.html: Used to display individual blog posts.
- page.html: Used to display individual pages.
- archive.html: Used to display archive pages such as category or tag listings.
- 404.html: Used to display the 404 error page.
**Template Parts:** These are reusable blocks of code that can be included in multiple templates. Common template parts include:
- header.html: Defines the header section of the site.
- footer.html: Defines the footer section of the site.
- sidebar.html: Defines the sidebar section of the site.
- theme.json: This configuration file is a cornerstone of Block themes, allowing developers to define global styles and settings for their theme. It includes settings for typography, colors, spacing, and more, enabling consistent design across the site. The theme.json file also allows for the customization of block styles and properties, ensuring a cohesive and unified appearance.
**Customization**
Customization in Block themes is designed to be user-friendly and highly flexible, leveraging the Gutenberg editor and its associated features:
**Gutenberg Editor:** The Gutenberg editor provides a block-based interface for creating and editing content. Users can add, rearrange, and customize blocks to build complex layouts without writing code. The editor offers a wide range of blocks for different content types, such as text, images, galleries, and widgets.
**Reusable Blocks:** Reusable blocks allow users to create a block once and reuse it across multiple pages or posts. This is particularly useful for consistent elements like call-to-action buttons or promotional banners. Any changes made to a reusable block are reflected wherever it is used, ensuring consistency and saving time.
**Block Patterns:** Block patterns are predefined block layouts that can be inserted into a page or post with a single click. These patterns can include combinations of blocks arranged in specific ways, providing a quick and easy way to create complex layouts. Themes can include custom block patterns to offer users tailored design options.
**Pros and Cons**
**Pros:**
- Highly Flexible: Block themes allow for granular control over every aspect of a site's design and layout through the use of blocks and the Gutenberg editor.
- User-Friendly: The visual nature of the block editor makes it accessible to users with little to no coding experience, enabling them to create and customize layouts easily.
- Live Editing: Changes made in the block editor are displayed in real-time, providing an immediate preview of how the final design will look.
- Modern Approach: Block themes align with current web development trends, offering a more modular and scalable approach to theme design.
Read also: [The Best WordPress Free Themes](https://spacema-dev.com/the-best-wordpress-free-themes/)
**Cons:**
- Relatively New: Block themes and Full Site Editing are relatively new additions to WordPress, which means the ecosystem is still maturing. Some features and best practices are still evolving.
- Limited Compatibility with Some Plugins: Not all plugins are fully compatible with Block themes and FSE. Developers may need to ensure that their plugins support the new paradigm.
- Learning Curve for Traditional Developers: Developers accustomed to Classic themes may face a learning curve when transitioning to Block themes and the new block-based approach.
Block themes represent a significant shift in how WordPress sites are built and customized. By embracing a block-based, visual design approach, they offer powerful new tools for both developers and users. While there are challenges associated with the transition, the potential benefits make Block themes a compelling option for modern WordPress development.
## Section 4: Key Differences
**Editing Experience**
- The Classic editor provides a straightforward, text-based interface reminiscent of traditional word processors. It relies on a combination of HTML and shortcodes to format content.
- Editing content involves switching between the Visual and Text (HTML) tabs, which can be cumbersome for users unfamiliar with HTML.
- Customizing layouts and adding complex elements often requires the use of shortcodes or custom HTML, which can be limiting and less intuitive.
- The Gutenberg block editor offers a visual, block-based interface, allowing users to build content by adding and arranging blocks.
- Each block represents a different type of content (e.g., paragraph, image, gallery, video), providing a more intuitive and flexible editing experience.
- Real-time editing allows users to see changes as they make them, streamlining the design process and reducing the need for previewing.
**Customization Flexibility**
- Customization in Classic themes is heavily reliant on editing PHP and CSS files. Developers use the template hierarchy to control the layout and structure of different parts of the site.
- The `functions.php` file allows for adding custom functionality and theme-specific features, while custom CSS is used for styling.
- Child themes are commonly used to make modifications without altering the parent theme’s core files, ensuring updates do not overwrite customizations.
- While powerful, this approach requires a good understanding of PHP, HTML, and CSS, making it less accessible for non-developers.
**Block Editor and theme.json (Block Themes):**
- Block themes utilize the Gutenberg editor for customization, allowing users to add, remove, and rearrange blocks visually. This eliminates the need for extensive coding for basic customizations.
- The theme.json file is a central configuration file that defines global styles and settings, such as colors, typography, and spacing. This ensures a consistent design across the site and simplifies theme customization.
- Reusable blocks and block patterns offer additional flexibility, enabling users to create complex layouts and design elements that can be reused across the site.
- While highly flexible and user-friendly, this new approach requires users to learn the block editor and understand the theme.json configuration, presenting a new learning curve.
**Learning Curve**
**Established Methods (Classic Themes):**
- Classic themes follow a well-documented and established methodology. The extensive use of PHP and CSS means there are numerous resources, tutorials, and community support available.
- Developers familiar with traditional web development practices find Classic themes more straightforward, as the learning curve is primarily related to WordPress-specific functions and template hierarchy.
- The extensive documentation and community knowledge base make troubleshooting and learning easier.
**New Paradigm (Block Themes):**
- Block themes introduce a new way of thinking about theme development, centered around the Gutenberg block editor and Full Site Editing.
- The block-based approach and theme.json configuration require developers to adapt to new tools and methodologies, which can be challenging for those accustomed to Classic themes.
- While resources and documentation for Block themes are growing, they are not as extensive as those for Classic themes, potentially making the learning process more challenging.
- Developers must also consider compatibility and best practices for integrating existing plugins and custom code with the block-based system.
Read also: [The Best WordPress Gutenberg Plugins for Web Developers](https://spacema-dev.com/the-best-wordpress-gutenberg-plugins-for-web-developers/)
**Community and Support**
**Established Classic Theme Resources:**
- Classic themes benefit from a long-standing community with extensive resources, including forums, documentation, tutorials, and third-party tools.
- The large user base means that most issues and questions have been addressed, providing a wealth of knowledge for developers to draw from.
- Plugins and themes built for the Classic system are abundant, offering a wide range of functionality and customization options.
**Growing Block Theme Support:**
- Block themes are relatively new, and while support and resources are expanding, they are not yet as comprehensive as those for Classic themes.
- The WordPress community is actively developing and sharing knowledge about Block themes, with more tutorials, documentation, and tools becoming available over time.
- As Full Site Editing gains traction, more plugins and themes are being updated or created to be compatible with Block themes, enhancing the ecosystem and support network.
- The continuous development and updates to the Gutenberg editor and FSE features indicate a growing and dynamic support environment, which will likely become as robust as that for Classic themes in the near future.
## Section 5: Use Cases
**Classic Themes: Suitable Projects**
Classic themes are well-suited for a variety of projects, particularly those that require extensive customization and complex functionality. Here are some specific use cases where Classic themes shine:
**Complex Sites Requiring Custom Functionality:**
- E-commerce Platforms: Sites like online stores often need advanced customization to handle unique product displays, custom checkout processes, and intricate inventory management. Classic themes, with their robust PHP and template system, allow developers to create highly customized solutions tailored to specific business needs.
- Membership Sites: Platforms that offer subscription services or member-exclusive content benefit from the deep customization capabilities of Classic themes. Custom PHP code can manage user roles, access levels, and personalized content delivery.
- Large Corporate Websites: Enterprises with specific branding requirements and complex content structures (e.g., multiple departments, services, and internal resources) often rely on the flexibility of Classic themes to implement bespoke designs and functionalities.
**Legacy Projects:**
- Established Websites: Existing sites built with Classic themes are best maintained using the same approach to ensure continuity and compatibility. Migrating to a new theme type might introduce unnecessary risks and complications.
- Long-term Projects: Projects that have been in development for many years typically have a lot of custom code and integrations. Sticking with Classic themes allows developers to leverage the existing codebase and make incremental improvements without a complete overhaul.
**Custom Development Agencies:**
Tailored Solutions: Agencies specializing in custom WordPress development often prefer Classic themes because they offer more granular control over the site’s design and functionality. This control is crucial for delivering unique, client-specific solutions that go beyond the capabilities of standard themes.
**Block Themes: Suitable Projects**
Block themes excel in scenarios where ease of use, flexibility, and real-time customization are paramount. Here are some ideal use cases for Block themes:
**Content-Focused Websites:**
Blogs and News Sites: Sites that frequently publish content benefit from the Gutenberg editor’s block-based approach, allowing editors to create diverse layouts without needing developer intervention. Block patterns make it easy to maintain a consistent look while adding variety to posts.
Portfolios and Personal Sites: Creative professionals can use Block themes to showcase their work with visually appealing layouts. The ease of customization allows them to highlight their content dynamically and attractively.
**Sites Needing Frequent Updates and Design Changes:**
- Marketing Websites: Businesses that run frequent marketing campaigns or need to update their site’s appearance regularly can leverage the flexibility of Block themes. The ability to quickly adjust layouts and designs helps keep the site aligned with current marketing strategies.
- Landing Pages: Creating and modifying landing pages for various campaigns is straightforward with Block themes. The visual editor allows for rapid prototyping and deployment of new designs.
**Small Businesses and Startups:**
- Quick Setup and Deployment: Small businesses and startups often need to get their website up and running quickly with minimal development costs. Block themes, with their user-friendly editor, enable business owners to set up and customize their sites without extensive coding knowledge.
- Iterative Design: As these businesses grow, their website needs may evolve. Block themes make it easy to iterate on the design and functionality, adapting to new requirements without significant redevelopment.
**Educational and Non-Profit Organizations:**
Ease of Use for Non-Technical Users: Organizations that rely on volunteers or staff with limited technical expertise benefit from the intuitive block editor. Content updates and site maintenance can be handled in-house, reducing the need for ongoing developer support.
Dynamic Content Presentation: Block themes allow these organizations to present their content (e.g., event announcements, updates, and resources) in engaging and visually appealing ways, enhancing their communication and outreach efforts.
| mikevarenek |
1,898,117 | My 2024 Software Development Goals Update | It's been 170 days since I set my 2024 resolutions. It's been tough but the grind is worth it. At... | 0 | 2024-06-23T20:52:10 | https://melbite.com/melbite/My-2024-Software-Development-Goals-Update | career, programming, productivity, beginners | It's been 170 days since I set [my 2024 resolutions](https://melbite.com/Evans-Nyamai/Navigating-2024-with-Software-Development,-Tech-Training,-and-Open-Source-Aspirations). It's been tough but the grind is worth it.
At the beginning of the year, I promised self-growth, this includes:
- Growing [melbite](https://melbite.com) platform with over 100,000 users by the end of the year. 😊
- Growing my [YouTube channel](https://www.youtube.com/@codewithevans) to 10k subs.
- 😊 Become the best tech trainer/instructor in Africa
- Grow my software development course "[progskill](https://progskill.com/courses)" and impact thousands of software developers across the globe.
- And finally do 300 days of code, everything documented on my [GitHub profile](https://github.com/Evans-mutuku)
With this kind of goals, it seems hard to achieve, Yes it might be but once you take your first step, most of these goals will align.
### Here is my update on each in no particular order:
- **300DaysOfCode challenge** - I have been able to beat all the odds and coding challenges in which have done [200DaysOfCode - 100DaysToGo](https://github.com/Evans-mutuku).This achievement has made me feel better about myself. I can really achieve anything. 😊
- **Growing my [YouTube Channel](https://www.youtube.com/@codewithevans)** - At the beginning of the year, we have grown [YouTube subscribers](https://www.youtube.com/@codewithevans) from 150 subscribers to 420 subscribers. Motivate me by subscribing to my tech [YouTube channel](https://www.youtube.com/@codewithevans). 😊
- **Being the best trainer In Africa** - this year, I have been able to train over 6,000 people across the globe on how to create software through my [YouTube channel](https://www.youtube.com/@codewithevans) with different programming languages including Python development, [Javascript Development](https://progskill.com/courses/javascript-basics), [ReactJs](https://progskill.com/courses/react-basics), [Typescript Course](https://progskill.com/courses/typescript-for-beginners), and [Machine Learning](https://progskill.com/pro/intermediate-machine-learning).
- **Progskill Growth**- [Progskill](https://progskill.com) has been able to train over 2,000 people in software development. Kindly check out some of our courses.
With all that said, what have you been able to achieve this year on your New Year's resolutions? Leave a comment below.
Thank you for reading! ✨ | evansifyke |
1,898,115 | Exploring the Power of CSS Variables | CSS variables, also known as custom properties, are a powerful feature that can significantly enhance... | 0 | 2024-06-23T20:49:09 | https://dev.to/kevin_asogwa/exploring-the-power-of-css-variables-1hn4 | webdev, css | CSS variables, also known as custom properties, are a powerful feature that can significantly enhance the way you write and manage your stylesheets. They allow you to store values in one place and reuse them throughout your CSS, making your code cleaner, more maintainable, and easier to read. Let's dive into some interesting aspects of using CSS variables.
**Declaring and Using CSS Variables**
CSS variables are declared within a CSS rule that is scoped to a specific element or globally within the `:root` pseudo-class. Here's a basic example:
```css
:root {
--primary-color: #3498db;
--secondary-color: #2ecc71;
--font-family: 'Arial, sans-serif';
--base-padding: 10px;
}
```
In this example, we declare four variables: `--primary-color`, `--secondary-color`, `--font-family`, and `--base-padding`. These variables can now be used throughout your CSS:
```css
body {
font-family: var(--font-family);
padding: var(--base-padding);
}
h1 {
color: var(--primary-color);
}
button {
background-color: var(--secondary-color);
padding: var(--base-padding);
}
```
**Benefits of Using CSS Variables**
1. Consistency: By using variables, you ensure that your styles remain consistent across your entire website. If you need to change a color or a font, you only need to update the variable's value in one place.
2. Maintainability: CSS variables make your code more maintainable. Instead of searching through your entire stylesheet to find and replace values, you can simply update the variable.
3. Readability: Variables make your CSS more readable. Instead of seeing a hex color code or a font name repeated throughout your stylesheet, you see meaningful variable names that describe their purpose.
4. Dynamic Styling: CSS variables can be updated dynamically using JavaScript, allowing for more interactive and responsive designs.
**Advanced Usage**
CSS variables can also be used in more advanced scenarios, such as theming and responsive design.
Theming: You can create different themes for your website by changing the values of your variables.
```css
:root {
--primary-color: #3498db;
--secondary-color: #2ecc71;
}
.dark-theme {
--primary-color: #2c3e50;
--secondary-color: #1abc9c;
}
body {
color: var(--primary-color);
background-color: var(--secondary-color);
}
```
By adding the `dark-theme` class to the body, you can switch to a dark theme:
```html
<body class="dark-theme">
<!-- Content -->
</body>
```
**Responsive Design**: CSS variables can be used in media queries to adjust styles based on screen size.
```css
:root {
--base-padding: 10px;
}
@media (min-width: 768px) {
:root {
--base-padding: 20px;
}
}
.container {
padding: var(--base-padding);
}
```
In this example, the padding of the `.container` class will adjust based on the screen size.
**Conclusion**
CSS variables are a versatile and powerful tool that can greatly improve the way you write and manage your stylesheets. They promote consistency, maintainability, and readability, and they open up new possibilities for dynamic and responsive design. By incorporating CSS variables into your workflow, you can create cleaner, more efficient, and more scalable CSS.
 | kevin_asogwa |
1,896,659 | Vue.js | Introduction Vue.js is a JavaScript framework used for creating user interfaces. Using... | 0 | 2024-06-23T20:40:45 | https://dev.to/allyn/vuejs-ffg | vue, javascript, beginners | ## Introduction
Vue.js is a JavaScript framework used for creating user interfaces. Using Vue.js, you can create components to efficiently build your program and enhance HTML with the template syntax provided that reflects the state of your component. When your state changes, Vue will automatically update the DOM upon said change. Let's go over the basics.
There are many ways to create components for your Vue projects and one way to do it is to develop single-file components, or SFCs. These SFCs contain all of the logic for the component with JS, the template with HTML, and the styling with CSS all in the same file. Vue SFCs use a `*.vue` file extension and are the [recommended way to create Vue components](https://vuejs.org/guide/introduction.html#single-file-components).
## Starting your Vue application
All Vue applications start out with an application instance that you get from invoking the `createApp` function. The `createApp` function takes an object for an argument that will be the root component of the application. Normally with SFCs, you can import the component you intend to have as your root component, and you pass that as your argument to `createApp`, similarly to how one would with an application using React.
## Viewing your Vue application
Once you have your application instance, you'll need to be able to view your application. To do this, Vue provides the `mount` method that renders your application instance. The `mount` method's argument will either be a selector string or a DOM element, and the root component will be rendered inside the argument. One of the caveats of the `mount` method is that you should only invoke it once your application is totally configured and assets are fully registered.
## The application instance
The application instance is more than just a return value of `createApp`; it allows you to add functionality across your entire application. One of the perks of the application instance is the `config` object that allows you to set up ["app-level options"](https://vuejs.org/guide/essentials/application.html#app-configurations), and the application instance provides a way to create assets for your application. As mentioned before, any assets or additional functionality you want to provide to your application must be done before mounting the app.
## Templating
Vue templates are based on HTML and are comparable to AngularJS templates in how they "extend" the HTML syntax and are used to reflect the component's data to the client. Behind the scenes, Vue compiles the templates figures out the fewest components needed for a re-render, and manipulates the DOM on app state changes as minimally as possible.
For developers who like to use JSX in their applications, Vue supports JSX. You can write render functions with JSX instead of templates, however, you run the risk of compromising the compile-time of your components.
## Data Binding
There are 2 forms of data binding in Vue: text interpolation and attribute bindings. Text interpolation, or "Mustache syntax", in Vue uses double curly braces in between HTML tags. The data inside the double curly braces will be interpreted and evaluated in plain text. Attribute bindings are used with the `v-bind` directive. The `v-bind` directive takes an argument, likely an HTML attribute, and will bind the value of that attribute to the component's specified property or any other specified value.
## Directives
Directives are used to apply updates to the DOM and add special behavior to DOM elements. These directives are put inside the DOM element with the attributes and are prefixed with `v-`. Directives can perform a number of different operations, like looping, registering event handlers, updating HTML text, and many more. The syntax for a directive looks like this:
```
<element v-directive:argument="value" > ... </element>
```
Vue also provides syntactic sugar for their `v-bind` and `v-on` directives. The shorthand syntax for `v-bind` lets you omit `v-bind` and only use the colon with the argument and its value following right after.
```
<element :argument="value"> ... </element>
```
Or if you have the argument has the same name as the value it is being bound to, you also can shorten the directive even more.
```
<element :argument> ... </element>
```
The `v-on` directive is used to register event handlers to DOM elements and the shorthand for `v-on` uses the `@` instead of the colon to come before arguments.
```
<element @argument> ... </element>
```
You have probably noticed by now that the arguments for these directives are static, but Vue provides a way for you to use dynamic arguments. These arguments are still prefixed by the colon but are wrapped in square brackets, and they come with a couple of constraints. Dynamic arguments are supposed to evaluate either to a string or null; if the argument is evaluated as null, this can remove your binding. Also, even if your argument evaluates to a string, the string must be able to be a valid HTML attribute, since that is what the argument of directives is, meaning using some characters will throw warnings. You should also be mindful of the casing of your dynamic argument because the browser will adjust the casing behind the scenes and the code will not work as you expect.
Vue allows you to customize your directives with modifiers that specify how the directive should be bound. Modifiers are attached to the directives argument with a dot.
This encapsulates the complete directive syntax for Vue applications.
```
<element v-directive:argument.modifier="value"> ... </element>
```
Vue provides lifecycle hooks as well for your application. These hooks cover the basic parts of the lifecycle, mounting, updating, unmounting, and since all Vue applications aren't server-side applications, there are hooks for other use cases of Vue.
## Conclusion
Vue provides a streamlined way to create dynamic and interactive user interfaces for applications and I look forward to using Vue in the future. | allyn |
1,898,113 | Deploying Django in Production. | This 5-step tutorial will guide you through deploying a Django application using Gunicorn behind a... | 0 | 2024-06-23T20:40:38 | https://dev.to/wassef911/deploying-django-in-production-b1p | django, nginx, webdev, python |

This 5-step tutorial will guide you through deploying a Django application using Gunicorn behind a reverse proxy (such as Nginx).
Its important to note that this greatly depends on the needs of your project, but for most people, this is intended to be a minimal/initial setup. This configuration ensures that your application is served efficiently and securely, straight to production 🚀.
## Prerequisites
Before starting, ensure you have the following:
- A domain name. (duh)
- A server with root access (e.g. an Ubuntu server).
- A Django application ready for deployment. (eg. being under **/opt/project_name**)
- A virtual environment. (eg. being under **/opt/project_name/venv**)
- Basic knowledge of Linux command-line operations.
- Nginx installed on your server.
## Step 0: Setting Up Your Domain and DNS
Before deploying your Django application, ensure your domain points to your server and DNS settings are correctly configured. Here’s a brief guide on how to do this:
1. **Domain Registration:**
- Register your domain with a domain registrar.
2. **Obtain Server IP Address:**
- Get the public IP address of your server. This is the address that clients will use to access your application.
3. **DNS Configuration:**
- Log in to your domain registrar’s control panel and find the DNS settings for your domain.
- Add an A record to point your domain to your server's IP address:
- **Type:** A
- **Name:** @ (this represents your domain, e.g., `your_domain.com`)
- **Value:** [Your Server's IP Address]
- **TTL:** Default or 3600 seconds (1 hour)
- If you want to use `www.your_domain.com`, add a CNAME record:
- **Type:** CNAME
- **Name:** www
- **Value:** your_domain.com
- **TTL:** Default or 3600 seconds (1 hour)
4. **Propagation Time:**
- DNS changes may take some time to propagate (up to 48 hours, but usually within a few hours). You can use tools like `whatsmydns.net` to check the propagation status.
5. **Verify DNS Settings:**
- After DNS propagation, you should be able to verify your domain points to your server by using tools like `ping` or `dig`:
```bash
ping your_domain.com
```
- You should see responses from your server's IP address.
6. **Firewall and Security Group Configuration:**
- Ensure your server's firewall and any cloud provider security groups allow traffic on ports 80 (HTTP) and 443 (HTTPS).
Once your DNS is set up and pointing to your server, you can proceed with the deployment steps outlined in the tutorial. This setup ensures that visitors accessing your domain will be directed to your server, where Nginx and Gunicorn will handle the requests.
## Step 1: Install Gunicorn
First, activate your virtual environment and install Gunicorn:
```bash
source /opt/project_name/venv/bin/activate
python3 -m pip install gunicorn
```
## Step 2: Configure Gunicorn
Create a Gunicorn configuration file (optional but recommended) to manage settings like worker processes. Create a file named `gunicorn_config.py` in your project directory:
```python
# gunicorn_config.py
bind = "127.0.0.1:8000"
workers = 3
```
## Step 3: Adjust Django Settings
Update your Django settings to handle the proxy setup correctly. Edit your `settings.py` file to include the following:
```python
# settings.py
# Static files (CSS, JavaScript, Images)
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
# Media files (Uploaded by users)
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
# Specifies a list of valid host/domain names for the Django site, providing protection against HTTP Host header attacks.
ALLOWED_HOSTS = ['your_domain.com', 'www.your_domain.com', 'another_domain.com']
# Tells Django to use the X-Forwarded-Host header from the proxy, allowing it to know the original host requested by the client.
USE_X_FORWARDED_HOST = True
# Tells Django to use the X-Forwarded-Port header from the proxy, indicating the port number used by the client.
USE_X_FORWARDED_PORT = True
# Instructs Django to trust the X-Forwarded-Proto header, which is set by the proxy server, to determine whether the request is secure (HTTPS).
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
# Forces all HTTP requests to be redirected to HTTPS.
SECURE_SSL_REDIRECT = True
# Ensures that the CSRF cookie is only sent over HTTPS connections.
CSRF_COOKIE_SECURE = True
# Ensures that the session cookie is only sent over HTTPS connections.
SESSION_COOKIE_SECURE = True
# Enables HTTP Strict Transport Security (HSTS) for the specified duration (in seconds), forcing browsers to only connect via HTTPS.
SECURE_HSTS_SECONDS = 31536000
# Applies HSTS policy to all subdomains.
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
# Allows the domain to be included in browsers' HSTS preload list, ensuring maximum protection.
SECURE_HSTS_PRELOAD = True
# Enables the X-Content-Type-Options header, preventing browsers from MIME-sniffing a response away from the declared content-type.
SECURE_CONTENT_TYPE_NOSNIFF = True
# Controls the information sent in the Referer header, improving privacy and security by not sending the referrer from HTTPS to HTTP.
SECURE_REFERRER_POLICY = 'no-referrer-when-downgrade'
# SECURE_BROWSER_XSS_FILTER = True
# [Deprecated on 4.0]
# Enables the X-XSS-Protection header, which tells browsers to block detected cross-site scripting (XSS) attacks.
```
##### Collect Static Files
Django comes with a handy command to collect all static files into the directory specified in STATIC_ROOT.
```bash
python3 -m /opt/project_name/manage.py collectstatic
```
## Step 4: Create a Systemd Service for Gunicorn
Create a systemd service file to manage the Gunicorn process. Create a file named `gunicorn.service` in `/etc/systemd/system/`:
```ini
# /etc/systemd/system/gunicorn.service
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=yourusername
Group=www-data
WorkingDirectory=/opt/project_name
ExecStart=/opt/project_name/venv/bin/gunicorn --config /opt/gunicorn_config.py project_name.wsgi:application
[Install]
WantedBy=multi-user.target
```
Replace `/opt/project_name`, `/opt/project_name/venv`, and `project_name` with your actual project paths and names.
Reload the systemd daemon and start the Gunicorn service:
```bash
sudo systemctl daemon-reload
sudo systemctl start gunicorn
sudo systemctl enable gunicorn
```
## Step 5: Configure Nginx
Create an Nginx configuration file to proxy requests to Gunicorn. Create a file named `project_name` in `/etc/nginx/sites-available/`:
```nginx
# /etc/nginx/sites-available/project_name
server {
listen 80;
server_name your_domain.com www.your_domain.com another_domain.com;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
alias /opt/project_name/static/; # Must be the same as STATIC_ROOT
}
location /media/ {
alias /opt/project_name/media/; # Must be the same as MEDIA_ROOT
access_log /var/log/nginx/media_access.log;
error_log /var/log/nginx/media_error.log;
}
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://127.0.0.1:8000;
access_log /var/log/nginx/django_access.log;
error_log /var/log/nginx/django_error.log;
}
listen [::]:80;
}
```
Create a symbolic link to enable the site, and ensure the static and media directories, as well as the log directories, have the correct permissions.
You need to make sure that the user running Nginx has the necessary read/write permissions.
```bash
# Create directories if they don't exist
sudo mkdir -p /opt/project_name/static /opt/project_name/media /var/log/nginx
# Set ownership (assuming nginx user and group) and permissions
sudo chown -R nginx:nginx /opt/project_name/static /opt/project_name/media /var/log/nginx
sudo chmod -R 755 /opt/project_name/static /opt/project_name/media /var/log/nginx
sudo ln -s /etc/nginx/sites-available/project_name /etc/nginx/sites-enabled
```
Test the Nginx configuration and restart the service:
```bash
sudo nginx -t
sudo systemctl restart nginx
```
## Step 6: Configure HTTPS
It's highly recommended to secure your application with HTTPS. Use Certbot to obtain a free SSL certificate:
```bash
sudo apt-get install certbot python3-certbot-nginx
sudo certbot --nginx -d your_domain.com -d www.your_domain.com -d another_domain.com
```
Follow the prompts to configure SSL.
## Conclusion
Your Django application is now deployed using Gunicorn behind an Nginx reverse proxy.
However, there are several ways you could customize and enhance this deployment:
- **Load Balancing**: If you anticipate high traffic, consider setting up multiple Gunicorn instances behind a load balancer to distribute the load evenly.
- **Containerization**: Using Docker to containerize your application can simplify deployment and provide greater consistency across different environments.
- **Advanced Security Configurations**: Additional security measures such as setting up a Web Application Firewall (WAF), regular security audits, and implementing stricter Content Security Policies (CSP) can further protect your application.
Be sure to explore these options and refer to the official documentation for Gunicorn, Nginx, and Django for further customization and advanced configurations. By continuously refining your deployment setup, you can ensure your Django application remains secure, efficient, and scalable.
| wassef911 |
1,898,102 | Introducing the Schengen Area Calculator: Plan Your European Travels Seamlessly | Are you planning a trip to Europe and worried about the 90/180-day Schengen visa rule? Meet the... | 0 | 2024-06-23T20:19:22 | https://dev.to/dany_trakhtenberg/introducing-the-schengen-area-calculator-plan-your-european-travels-seamlessly-398k | nextjs, react, tooling, firebase | Are you planning a trip to Europe and worried about the 90/180-day Schengen visa rule? Meet the [Schengen Area Calculator](https://schengenareacalc.web.app) – your ultimate tool for tracking stays and ensuring compliance with the Schengen visa regulations.
{% embed https://www.youtube.com/embed/TNb4ESWQzTQ?si=GuN3fdYYNzD3Q-Yt %}
**What is the Schengen Area Calculator?**
The Schengen Area Calculator is a web-based tool designed to help travelers like tourists or digital nomads to track their stays within the Schengen Area. This tool ensures that you comply with the 90/180-day rule, preventing overstaying, fines, or even bans from the Schengen countries.
**Key Features**
Stay Tracking: Log your entry and exit dates for every trip to the Schengen Area. The calculator keeps track of your stays and alerts you if you are close to violating the 90/180-day rule.
Violation Alerts: Receive instant notifications if your planned stays exceed the allowed duration, helping you avoid potential issues with visa regulations.
Visual Timeline: Our interactive Schengen Chart offers a visual representation of your stays, giving you a clear overview of your travel history and future plans.
Country Information: Access detailed information about each Schengen country, including visa requirements and exemptions.
**How It Works**
Input Your Stays: Enter the dates of your stays within the Schengen Area, including the start and end dates of each stay and the country visited.
Track Your Stays: The calculator will automatically track your stays and calculate the total duration of your stays within the last 180 days.
Receive Alerts: If you are close to or have exceeded the allowed 90 days, the tool will alert you, ensuring you can take action to avoid any issues.
**New Feature: Days Left Calculator**
We recently added a feature to calculate the last day you can stay in the Schengen Area if you want to maximize your 90 days. This feature helps you plan your trip efficiently by showing the remaining days starting from a specific stay and the next possible entry date after the 180-day period.
**Why Use the Schengen Area Calculator?**
Avoid Overstaying: Stay within the legal limits and avoid fines or bans.
Plan Efficiently: Use the tool to plan your trips and make the most of your stay in Europe.
Peace of Mind: Travel with confidence, knowing that you are complying with visa regulations.
[Try It Now](https://schengenareacalc.web.app/)
Planning a trip to Europe? Check out the Schengen Area Calculator and start tracking your stays today!
**Technical Implementation**
**Frontend**
The frontend is built using React and Next.js, which allows for server-side rendering and static site generation, improving SEO and performance.
React: Used for building the user interface, ensuring a responsive and interactive experience.
Next.js: Utilized for its server-side rendering capabilities, which enhance SEO and load times.
React Bootstrap: For styling and responsive design, making the tool accessible on both desktop and mobile devices.
Firebase Authentication: Implemented for user login and data persistence, allowing users to save their travel data securely.
**Backend**
The backend leverages Firebase for both hosting and database services.
Firebase Firestore: Stores user data, including travel dates and country information, ensuring secure and scalable data management.
Moment.js: For date manipulation and formatting, making it easy to calculate durations and validate dates.
react-dates: Provides a robust date picker component, ensuring users can easily select their travel dates.
react-helmet: Manages meta tags for SEO, improving the visibility of the tool on search engines.
**Feedback and Support**
We'd love to hear your feedback! If you have any questions or suggestions, please feel free to reach out. | dany_trakhtenberg |
1,899,086 | Headlamp - k8s Lens open source alternative | Since Lens is not open source, I tried out monokle, octant, k9s, and headlamp1. Among them,... | 0 | 2024-06-27T03:16:28 | https://avilpage.com/2024/06/headlamp-k8s-lens-open-source-alternative.html | devops, kubernetes | ---
title: Headlamp - k8s Lens open source alternative
published: true
date: 2024-06-23 20:18:02 UTC
tags: devops,kubernetes
canonical_url: https://avilpage.com/2024/06/headlamp-k8s-lens-open-source-alternative.html
---

Since Lens is not open source, I tried out monokle, octant, k9s, and headlamp<sup id="fnref:headlamp"><a href="https://avilpage.com/2024/06/headlamp-k8s-lens-open-source-alternative.html#fn:headlamp">1</a></sup>. Among them, headlamp UI & features are closest to Lens.
#### Headlamp
Headlamp is CNCF sandbox project that provides cross-platform desktop application to manage Kubernetes clusters. It auto-detects clusters and provides cluster wide resource usage by default.
It can also be installed inside the cluster and can be accessed using a web browser. This is useful when we want to access the cluster from a mobile device.
```
$ helm repo add headlamp https://headlamp-k8s.github.io/headlamp/
$ helm install headlamp headlamp/headlamp
```
Lets port-forward the service & copy the token to access it.
```
$ kubectl create token headlamp
# we can do this via headlamp UI as well
$ kubectl port-forward service/headlamp 8080:80
```
Now, we can access the headlamp UI at [http://localhost:8080](http://).

#### Conclusion
If you are looking for an open source alternative to Lens, headlamp is a good choice. It provides a similar UI & features as Lens and it is accessible via mobile devices as well.
* * *
1. [https://headlamp.dev/](https://headlamp.dev/) [↩](https://avilpage.com/2024/06/headlamp-k8s-lens-open-source-alternative.html#fnref:headlamp "Jump back to footnote 1 in the text") | chillaranand |
1,897,962 | Pitch-Tonic | we're making a cool app for a hackathon : everyone is welcome to join us ! 🎤 Pitch Tonic: Your... | 0 | 2024-06-23T17:17:01 | https://dev.to/tonic/pitch-tonic-500a | whisper, python, webdev, hackathon | we're making a cool app for a hackathon : everyone is welcome to join us !
🎤 **Pitch Tonic: Your Ultimate Pitch Training App!** 🎤
Elevate your pitching game with **Pitch Tonic**, the open-source app designed to perfect your live pitches and investor Q&A training. Here's how it works:
1. **Interlocutor Recording & Tips** 📹: Record your interlocutor and get real-time tips to improve your pitch.
2. **Pitch Deck Training** 🎓: Record yourself practicing your pitch deck and receive constructive feedback.
3. **Knowledge Testing** 🧠: Test your pitch knowledge by answering questions and get instant feedback. Choose from easy, medium, or hard levels to challenge yourself.
💡 **We Need You!** 💡
Pitch Tonic is fully open-source under the MIT license, and we're looking for passionate contributors to help us grow! Whether you're a developer, designer, or pitch enthusiast, join our community and make an impact.
🤝 **Get Involved** 🤝
Contribute to the code, suggest new features, or provide feedback. Let's make Pitch Tonic the go-to tool for startups everywhere!
🚀 **Join Us Today!** 🚀
🔗 [Contribute on our Gitlab](https://git.tonic-ai.com/tonic-ai/pitch-tonic)
👥 **Get in Touch** 👥
Have questions or ideas? request access!
💬 [Discord](https://discord.gg/ughK8cNF) | tonic |
1,898,099 | How to Write Tests | In this lab, we will learn about writing tests in Rust using attributes, macros, and assertions. | 27,834 | 2024-06-23T20:12:17 | https://labex.io/tutorials/rust-how-to-write-tests-100415 | rust, coding, programming, tutorial |
## Introduction
Welcome to **How to Write Tests**. This lab is a part of the [Rust Book](https://doc.rust-lang.org/book/). You can practice your Rust skills in LabEx.
In this lab, we will learn about writing tests in Rust using attributes, macros, and assertions.
## How to Write Tests
Tests are Rust functions that verify that the non-test code is functioning in the expected manner. The bodies of test functions typically perform these three actions:
- Set up any needed data or state.
- Run the code you want to test.
- Assert that the results are what you expect.
Let's look at the features Rust provides specifically for writing tests that take these actions, which include the `test` attribute, a few macros, and the `should_panic` attribute.
## The Anatomy of a Test Function
At its simplest, a test in Rust is a function that's annotated with the `test` attribute. Attributes are metadata about pieces of Rust code; one example is the `derive` attribute we used with structs in Chapter 5. To change a function into a test function, add `#[test]` on the line before `fn`. When you run your tests with the `cargo test` command, Rust builds a test runner binary that runs the annotated functions and reports on whether each test function passes or fails.
Whenever we make a new library project with Cargo, a test module with a test function in it is automatically generated for us. This module gives you a template for writing your tests so you don't have to look up the exact structure and syntax every time you start a new project. You can add as many additional test functions and as many test modules as you want!
We'll explore some aspects of how tests work by experimenting with the template test before we actually test any code. Then we'll write some real-world tests that call some code that we've written and assert that its behavior is correct.
Let's create a new library project called `adder` that will add two numbers:
```bash
$ cargo new adder --lib
Created library $(adder) project
$ cd adder
```
The contents of the `src/lib.rs` file in your `adder` library should look like Listing 11-1.
Filename: `src/lib.rs`
```rust
#[cfg(test)]
mod tests {
1 #[test]
fn it_works() {
let result = 2 + 2;
2 assert_eq!(result, 4);
}
}
```
Listing 11-1: The test module and function generated automatically by `cargo new`
For now, let's ignore the top two lines and focus on the function. Note the `#[test]` annotation \[1\]: this attribute indicates this is a test function, so the test runner knows to treat this function as a test. We might also have non-test functions in the `tests` module to help set up common scenarios or perform common operations, so we always need to indicate which functions are tests.
The example function body uses the `assert_eq!` macro \[2\] to assert that `result`, which contains the result of adding 2 and 2, equals 4. This assertion serves as an example of the format for a typical test. Let's run it to see that this test passes.
The `cargo test` command runs all tests in our project, as shown in Listing 11-2.
```bash
$ cargo test
Compiling adder v0.1.0 (file:///projects/adder)
Finished test [unoptimized + debuginfo] target(s) in 0.57s
Running unittests src/lib.rs (target/debug/deps/adder-
92948b65e88960b4)
1 running 1 test
2 test tests::it_works ... ok
3 test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
4 Doc-tests adder
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
```
Listing 11-2: The output from running the automatically generated test
Cargo compiled and ran the test. We see the line `running 1 test` \[1\]. The next line shows the name of the generated test function, called `it_works`, and that the result of running that test is `ok` \[2\]. The overall summary `test result: ok.` \[3\] means that all the tests passed, and the portion that reads `1 passed; 0 failed` totals the number of tests that passed or failed.
It's possible to mark a test as ignored so it doesn't run in a particular instance; we'll cover that in "Ignoring Some Tests Unless Specifically Requested". Because we haven't done that here, the summary shows `0 ignored`. We can also pass an argument to the `cargo test` command to run only tests whose name matches a string; this is called _filtering_ and we'll cover it in "Running a Subset of Tests by Name". Here we haven't filtered the tests being run, so the end of the summary shows `0 filtered out`.
The `0 measured` statistic is for benchmark tests that measure performance. Benchmark tests are, as of this writing, only available in nightly Rust. See the documentation about benchmark tests at *https://doc.rust-lang.org/unstable-book/library-features/test.html* to learn more.
The next part of the test output starting at `Doc-tests adder` \[4\] is for the results of any documentation tests. We don't have any documentation tests yet, but Rust can compile any code examples that appear in our API documentation. This feature helps keep your docs and your code in sync! We'll discuss how to write documentation tests in "Documentation Comments as Tests". For now, we'll ignore the `Doc-tests` output.
Let's start to customize the test to our own needs. First, change the name of the `it_works` function to a different name, such as `exploration`, like so:
Filename: `src/lib.rs`
```rust
#[cfg(test)]
mod tests {
#[test]
fn exploration() {
let result = 2 + 2;
assert_eq!(result, 4);
}
}
```
Then run `cargo test` again. The output now shows `exploration` instead of `it_works`:
running 1 test
test tests::exploration ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
Now we'll add another test, but this time we'll make a test that fails! Tests fail when something in the test function panics. Each test is run in a new thread, and when the main thread sees that a test thread has died, the test is marked as failed. In Chapter 9, we talked about how the simplest way to panic is to call the `panic!` macro. Enter the new test as a function named `another`, so your `src/lib.rs` file looks like Listing 11-3.
Filename: `src/lib.rs`
```rust
#[cfg(test)]
mod tests {
#[test]
fn exploration() {
assert_eq!(2 + 2, 4);
}
#[test]
fn another() {
panic!("Make this test fail");
}
}
```
Listing 11-3: Adding a second test that will fail because we call the `panic!` macro
Run the tests again using `cargo test`. The output should look like Listing 11-4, which shows that our `exploration` test passed and `another` failed.
running 2 tests
test tests::exploration ... ok
1 test tests::another ... FAILED
2 failures:
---- tests::another stdout ----
thread 'main' panicked at 'Make this test fail', src/lib.rs:10:9
note: run with `RUST_BACKTRACE=1` environment variable to display
a backtrace
3 failures:
tests::another
4 test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
error: test failed, to rerun pass '--lib'
Listing 11-4: Test results when one test passes and one test fails
Instead of `ok`, the line `test tests::another` shows `FAILED` \[1\]. Two new sections appear between the individual results and the summary: the first \[2\] displays the detailed reason for each test failure. In this case, we get the details that `another` failed because it `panicked at 'Make this test fail'` on line 10 in the `src/lib.rs` file. The next section \[3\] lists just the names of all the failing tests, which is useful when there are lots of tests and lots of detailed failing test output. We can use the name of a failing test to run just that test to more easily debug it; we'll talk more about ways to run tests in "Controlling How Tests Are Run".
The summary line displays at the end \[4\]: overall, our test result is `FAILED`. We had one test pass and one test fail.
Now that you've seen what the test results look like in different scenarios, let's look at some macros other than `panic!` that are useful in tests.
## Checking Results with the assert! Macro
The `assert!` macro, provided by the standard library, is useful when you want to ensure that some condition in a test evaluates to `true`. We give the `assert!` macro an argument that evaluates to a Boolean. If the value is `true`, nothing happens and the test passes. If the value is `false`, the `assert!` macro calls `panic!` to cause the test to fail. Using the `assert!` macro helps us check that our code is functioning in the way we intend.
In Listing 5-15, we used a `Rectangle` struct and a `can_hold` method, which are repeated here in Listing 11-5. Let's put this code in the `src/lib.rs` file, then write some tests for it using the `assert!` macro.
Filename: `src/lib.rs`
```rust
#[derive(Debug)]
struct Rectangle {
width: u32,
height: u32,
}
impl Rectangle {
fn can_hold(&self, other: &Rectangle) -> bool {
self.width > other.width && self.height > other.height
}
}
```
Listing 11-5: Using the `Rectangle` struct and its `can_hold` method from Chapter 5
The `can_hold` method returns a Boolean, which means it's a perfect use case for the `assert!` macro. In Listing 11-6, we write a test that exercises the `can_hold` method by creating a `Rectangle` instance that has a width of 8 and a height of 7 and asserting that it can hold another `Rectangle` instance that has a width of 5 and a height of 1.
Filename: `src/lib.rs`
```rust
#[cfg(test)]
mod tests {
1 use super::*;
#[test]
2 fn larger_can_hold_smaller() {
3 let larger = Rectangle {
width: 8,
height: 7,
};
let smaller = Rectangle {
width: 5,
height: 1,
};
4 assert!(larger.can_hold(&smaller));
}
}
```
Listing 11-6: A test for `can_hold` that checks whether a larger rectangle can indeed hold a smaller rectangle
Note that we've added a new line inside the `tests` module: `use super::*;` \[1\]. The `tests` module is a regular module that follows the usual visibility rules we covered in "Paths for Referring to an Item in the Module Tree". Because the `tests` module is an inner module, we need to bring the code under test in the outer module into the scope of the inner module. We use a glob here, so anything we define in the outer module is available to this `tests` module.
We've named our test `larger_can_hold_smaller` \[2\], and we've created the two `Rectangle` instances that we need \[3\]. Then we called the `assert!` macro and passed it the result of calling `larger.can_hold(&smaller)` \[4\]. This expression is supposed to return `true`, so our test should pass. Let's find out!
running 1 test
test tests::larger_can_hold_smaller ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
It does pass! Let's add another test, this time asserting that a smaller rectangle cannot hold a larger rectangle:
Filename: `src/lib.rs`
```rust
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn larger_can_hold_smaller() {
--snip--
}
#[test]
fn smaller_cannot_hold_larger() {
let larger = Rectangle {
width: 8,
height: 7,
};
let smaller = Rectangle {
width: 5,
height: 1,
};
assert!(!smaller.can_hold(&larger));
}
}
```
Because the correct result of the `can_hold` function in this case is `false`, we need to negate that result before we pass it to the `assert!` macro. As a result, our test will pass if `can_hold` returns `false`:
running 2 tests
test tests::larger_can_hold_smaller ... ok
test tests::smaller_cannot_hold_larger ... ok
test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
Two tests that pass! Now let's see what happens to our test results when we introduce a bug in our code. We'll change the implementation of the `can_hold` method by replacing the greater-than sign with a less-than sign when it compares the widths:
--snip--
impl Rectangle {
fn can_hold(&self, other: &Rectangle) -> bool {
self.width < other.width && self.height > other.height
}
}
Running the tests now produces the following:
running 2 tests
test tests::smaller_cannot_hold_larger ... ok
test tests::larger_can_hold_smaller ... FAILED
failures:
---- tests::larger_can_hold_smaller stdout ----
thread 'main' panicked at 'assertion failed:
larger.can_hold(&smaller)', src/lib.rs:28:9
note: run with `RUST_BACKTRACE=1` environment variable to display
a backtrace
failures:
tests::larger_can_hold_smaller
test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
Our tests caught the bug! Because `larger.width` is `8` and `smaller.width` is `5`, the comparison of the widths in `can_hold` now returns `false`: 8 is not less than 5.
## Testing Equality with the assert_eq! and assert_ne! Macros
A common way to verify functionality is to test for equality between the result of the code under test and the value you expect the code to return. You could do this by using the `assert!` macro and passing it an expression using the `==` operator. However, this is such a common test that the standard library provides a pair of macros---`assert_eq!` and `assert_ne!`---to perform this test more conveniently. These macros compare two arguments for equality or inequality, respectively. They'll also print the two values if the assertion fails, which makes it easier to see _why_ the test failed; conversely, the `assert!` macro only indicates that it got a `false` value for the `==` expression, without printing the values that led to the `false` value.
In Listing 11-7, we write a function named `add_two` that adds `2` to its parameter, then we test this function using the `assert_eq!` macro.
Filename: `src/lib.rs`
```rust
pub fn add_two(a: i32) -> i32 {
a + 2
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn it_adds_two() {
assert_eq!(4, add_two(2));
}
}
```
Listing 11-7: Testing the function `add_two` using the `assert_eq!` macro
Let's check that it passes!
running 1 test
test tests::it_adds_two ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
We pass `4` as the argument to `assert_eq!`, which is equal to the result of calling `add_two(2)`. The line for this test is `test tests::it_adds_two ... ok`, and the `ok` text indicates that our test passed!
Let's introduce a bug into our code to see what `assert_eq!` looks like when it fails. Change the implementation of the `add_two` function to instead add `3`:
```rust
pub fn add_two(a: i32) -> i32 {
a + 3
}
```
Run the tests again:
running 1 test
test tests::it_adds_two ... FAILED
failures:
---- tests::it_adds_two stdout ----
1 thread 'main' panicked at 'assertion failed: `(left == right)`
2 left: `4`,
3 right: `5`', src/lib.rs:11:9
note: run with `RUST_BACKTRACE=1` environment variable to display
a backtrace
failures:
tests::it_adds_two
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
Our test caught the bug! The `it_adds_two` test failed, and the message tells us that the assertion that failed was `assertion failed:`(left == right)\``[1] and what the`left`[2] and`right`[3] values are. This message helps us start debugging: the`left`argument was`4`but the`right`argument, where we had`add_two(2)`, was`5\`. You can imagine that this would be especially helpful when we have a lot of tests going on.
Note that in some languages and test frameworks, the parameters to equality assertion functions are called `expected` and `actual`, and the order in which we specify the arguments matters. However, in Rust, they're called `left` and `right`, and the order in which we specify the value we expect and the value the code produces doesn't matter. We could write the assertion in this test as `assert_eq!(add_two(2), 4)`, which would result in the same failure message that displays `assertion failed:`(left == right)\`\`.
The `assert_ne!` macro will pass if the two values we give it are not equal and fail if they're equal. This macro is most useful for cases when we're not sure what a value _will_ be, but we know what the value definitely _shouldn't_ be. For example, if we're testing a function that is guaranteed to change its input in some way, but the way in which the input is changed depends on the day of the week that we run our tests, the best thing to assert might be that the output of the function is not equal to the input.
Under the surface, the `assert_eq!` and `assert_ne!` macros use the operators `==` and `!=`, respectively. When the assertions fail, these macros print their arguments using debug formatting, which means the values being compared must implement the `PartialEq` and `Debug` traits. All primitive types and most of the standard library types implement these traits. For structs and enums that you define yourself, you'll need to implement `PartialEq` to assert equality of those types. You'll also need to implement `Debug` to print the values when the assertion fails. Because both traits are derivable traits, as mentioned in Listing 5-12, this is usually as straightforward as adding the `#[derive(PartialEq, Debug)]` annotation to your struct or enum definition. See Appendix C for more details about these and other derivable traits.
## Adding Custom Failure Messages
You can also add a custom message to be printed with the failure message as optional arguments to the `assert!`, `assert_eq!`, and `assert_ne!` macros. Any arguments specified after the required arguments are passed along to the `format!` macro (discussed in "Concatenation with the + Operator or the format! Macro"), so you can pass a format string that contains `{}` placeholders and values to go in those placeholders. Custom messages are useful for documenting what an assertion means; when a test fails, you'll have a better idea of what the problem is with the code.
For example, let's say we have a function that greets people by name and we want to test that the name we pass into the function appears in the output:
Filename: `src/lib.rs`
```rust
pub fn greeting(name: &str) -> String {
format!("Hello {name}!")
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn greeting_contains_name() {
let result = greeting("Carol");
assert!(result.contains("Carol"));
}
}
```
The requirements for this program haven't been agreed upon yet, and we're pretty sure the `Hello` text at the beginning of the greeting will change. We decided we don't want to have to update the test when the requirements change, so instead of checking for exact equality to the value returned from the `greeting` function, we'll just assert that the output contains the text of the input parameter.
Now let's introduce a bug into this code by changing `greeting` to exclude `name` to see what the default test failure looks like:
```rust
pub fn greeting(name: &str) -> String {
String::from("Hello!")
}
```
Running this test produces the following:
running 1 test
test tests::greeting_contains_name ... FAILED
failures:
---- tests::greeting_contains_name stdout ----
thread 'main' panicked at 'assertion failed:
result.contains(\"Carol\")', src/lib.rs:12:9
note: run with `RUST_BACKTRACE=1` environment variable to display
a backtrace
failures:
tests::greeting_contains_name
This result just indicates that the assertion failed and which line the assertion is on. A more useful failure message would print the value from the `greeting` function. Let's add a custom failure message composed of a format string with a placeholder filled in with the actual value we got from the `greeting` function:
#[test]
fn greeting_contains_name() {
let result = greeting("Carol");
assert!(
result.contains("Carol"),
"Greeting did not contain name, value was `{result}`"
);
}
Now when we run the test, we'll get a more informative error message:
---- tests::greeting_contains_name stdout ----
thread 'main' panicked at 'Greeting did not contain name, value
was `Hello!`', src/lib.rs:12:9
note: run with `RUST_BACKTRACE=1` environment variable to display
a backtrace
We can see the value we actually got in the test output, which would help us debug what happened instead of what we were expecting to happen.
## Checking for Panics with should_panic
In addition to checking return values, it's important to check that our code handles error conditions as we expect. For example, consider the `Guess` type that we created in Listing 9-13. Other code that uses `Guess` depends on the guarantee that `Guess` instances will contain only values between 1 and 100. We can write a test that ensures that attempting to create a `Guess` instance with a value outside that range panics.
We do this by adding the attribute `should_panic` to our test function. The test passes if the code inside the function panics; the test fails if the code inside the function doesn't panic.
Listing 11-8 shows a test that checks that the error conditions of `Guess::new` happen when we expect them to.
// src/lib.rs
pub struct Guess {
value: i32,
}
impl Guess {
pub fn new(value: i32) -> Guess {
if value < 1 || value > 100 {
panic!(
"Guess value must be between 1 and 100, got {}.",
value
);
}
Guess { value }
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
#[should_panic]
fn greater_than_100() {
Guess::new(200);
}
}
Listing 11-8: Testing that a condition will cause a panic!
We place the `#[should_panic]` attribute after the `#[test]` attribute and before the test function it applies to. Let's look at the result when this test passes:
running 1 test
test tests::greater_than_100 - should panic ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
Looks good! Now let's introduce a bug in our code by removing the condition that the `new` function will panic if the value is greater than 100:
// src/lib.rs
--snip--
impl Guess {
pub fn new(value: i32) -> Guess {
if value < 1 {
panic!(
"Guess value must be between 1 and 100, got {}.",
value
);
}
Guess { value }
}
}
When we run the test in Listing 11-8, it will fail:
running 1 test
test tests::greater_than_100 - should panic ... FAILED
failures:
---- tests::greater_than_100 stdout ----
note: test did not panic as expected
failures:
tests::greater_than_100
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0
filtered out; finished in 0.00s
We don't get a very helpful message in this case, but when we look at the test function, we see that it's annotated with `#[should_panic]`. The failure we got means that the code in the test function did not cause a panic.
Tests that use `should_panic` can be imprecise. A `should_panic` test would pass even if the test panics for a different reason from the one we were expecting. To make `should_panic` tests more precise, we can add an optional `expected` parameter to the `should_panic` attribute. The test harness will make sure that the failure message contains the provided text. For example, consider the modified code for `Guess` in Listing 11-9 where the `new` function panics with different messages depending on whether the value is too small or too large.
// src/lib.rs
--snip--
impl Guess {
pub fn new(value: i32) -> Guess {
if value < 1 {
panic!(
"Guess value must be greater than or equal to 1, got {}.",
value
);
} else if value > 100 {
panic!(
"Guess value must be less than or equal to 100, got {}.",
value
);
}
Guess { value }
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
#[should_panic(expected = "less than or equal to 100")]
fn greater_than_100() {
Guess::new(200);
}
}
Listing 11-9: Testing for a `panic!` with a panic message containing a specified substring
This test will pass because the value we put in the `should_panic` attribute's `expected` parameter is a substring of the message that the `Guess::new` function panics with. We could have specified the entire panic message that we expect, which in this case would be `Guess value must be less than or equal to 100, got 200`. What you choose to specify depends on how much of the panic message is unique or dynamic and how precise you want your test to be. In this case, a substring of the panic message is enough to ensure that the code in the test function executes the `else if value > 100` case.
To see what happens when a `should_panic` test with an `expected` message fails, let's again introduce a bug into our code by swapping the bodies of the `if value < 1` and the `else if value > 100` blocks:
// src/lib.rs
--snip--
if value < 1 {
panic!(
"Guess value must be less than or equal to 100, got {}.",
value
);
} else if value > 100 {
panic!(
"Guess value must be greater than or equal to 1, got {}.",
value
);
}
--snip--
This time when we run the `should_panic` test, it will fail:
running 1 test
test tests::greater_than_100 - should panic ... FAILED
failures:
---- tests::greater_than_100 stdout ----
thread 'main' panicked at 'Guess value must be greater than or equal to 1, got
200.', src/lib.rs:13:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
note: panic did not contain expected string
panic message: `"Guess value must be greater than or equal to 1, got
200."`,
expected substring: `"less than or equal to 100"`
failures:
tests::greater_than_100
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out;
finished in 0.00s
The failure message indicates that this test did indeed panic as we expected, but the panic message did not include the expected string `'Guess value must be less than or equal to 100'`. The panic message that we did get in this case was `Guess value must be greater than or equal to 1, got 200`. Now we can start figuring out where our bug is!
## Using Result\<T, E\> in Tests
Our tests so far all panic when they fail. We can also write tests that use `Result<T, E>`! Here's the test from Listing 11-1, rewritten to use `Result<T, E>` and return an `Err` instead of panicking:
Filename: `src/lib.rs`
```rust
#[cfg(test)]
mod tests {
#[test]
fn it_works() -> Result<(), String> {
if 2 + 2 == 4 {
Ok(())
} else {
Err(String::from("two plus two does not equal four"))
}
}
}
```
The `it_works` function now has the `Result<(), String>` return type. In the body of the function, rather than calling the `assert_eq!` macro, we return `Ok(())` when the test passes and an `Err` with a `String` inside when the test fails.
Writing tests so they return a `Result<T, E>` enables you to use the question mark operator in the body of tests, which can be a convenient way to write tests that should fail if any operation within them returns an `Err` variant.
You can't use the `#[should_panic]` annotation on tests that use `Result<T, E>`. To assert that an operation returns an `Err` variant, _don't_ use the question mark operator on the `Result<T, E>` value. Instead, use `assert!(value.is_err())`.
Now that you know several ways to write tests, let's look at what is happening when we run our tests and explore the different options we can use with `cargo test`.
## Summary
Congratulations! You have completed the How to Write Tests lab. You can practice more labs in LabEx to improve your skills.
---
## Want to learn more?
- 🚀 Practice [How to Write Tests](https://labex.io/tutorials/rust-how-to-write-tests-100415)
- 🌳 Learn the latest [Rust Skill Trees](https://labex.io/skilltrees/rust)
- 📖 Read More [Rust Tutorials](https://labex.io/tutorials/category/rust)
Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx) ! 😄 | labby |
1,898,098 | AI is really here? | I've recently seen a craze about AI. Every application promises integrated AI. But for me, many are... | 0 | 2024-06-23T20:06:16 | https://dev.to/miplle_player1/ai-is-realy-here-1b62 | I've recently seen a craze about AI. Every application promises integrated AI. But for me, many are simply using it as marketing strategy. I tried using Jira, and it has an AI feature that supposedly just requires me to describe what I need in the search. But instead, I still have to set up the query filter in the same way.
A friend enthusiastically told me he was looking at a washing machine with AI. I asked: What AI features does it have?
He said: You can connect it to your phone, and it will notify you when the washing cycle is finished, things like that.
Really? Is that what people understand as AI? And the funny thing is, this guy is an excellent software architect! He's in the field! How could he fall for it so easily?
Okay, maybe there are marketing powers to which we are not immune. But I still find it absurd.
| miplle_player1 | |
1,893,891 | Battleship Game in RUST | Hey! Amir here! 🌟 First of all: Thank you so much for the incredible interaction I had had when I... | 0 | 2024-06-23T20:03:38 | https://dev.to/bekbrace/battleship-game-in-rust-43ie | rust, gamedev, cli, programming |
Hey! Amir here! 🌟
First of all: Thank you so much for the incredible interaction I had had when I shared my Rust Full Course for Beginners last month.
I'm glad to know that it was helpful to you, even if just a bit.
Every opportunity to provide value and help you learn is something I cherish deeply.
I'll always be ready to support your learning journey, guys.
Today, I’m super excited to share my latest project with you all: a classic Battleship game implemented in Rust.
This isn’t just a throwback to those childhood days of sinking ships on graph paper, but also a neat little journey through the awesomeness of Rust's standard library for I/O handling and random number generation.
So, let me show you what I have for you today, and maybe even contribute to this Battleship game.
# Features
Let's kick things off by highlighting some cool features of our Battleship game:
1. Random Ship Placement: Ships are placed randomly ensuring no overlaps or out-of-bounds positioning.
2. User Input: You can fire at coordinates through basic user input.
3. Game Board Display: The game board displays hits, misses, and ships using different symbols.
4. Simple Game Loop: The game runs in a turn-based loop.
5. Game Over Detection: It checks for game over conditions, so you know when you've won (or lost)!
{% youtube arBO1lK3tgQ %}
# Installation
First things first, you need to have Rust installed.
If you haven’t got it yet, head over to rust-lang.org and get it.
Once Rust is all set up, cloning the repository is your next step. Here’s how you do it:
```bash
git clone https://github.com/BekBrace/rust-bship-game.git
cd battleship-rust
cargo build
cargo run
```
Here's the game tutorial if you prefer to watch it :-)
# Game Rules
For those who need a quick refresher on how Battleship works, here are the basics:
- Each player has a 10x10 board.
- Ships of different sizes (5, 4, 3, 3, 2) are randomly placed on the board.
- Players take turns firing at each other’s boards by entering coordinates.
- Hits are marked with a red dot (●), and misses with a blue dot (·).
- The game continues until all ships of one player are sunk.

And there you have it, folks! A full Battleship CLI game in Rust.
Whether you're a Rustacean or just someone looking to play a fun game, I hope this project brings you as much joy as it brought me creating it.
Stay safe, happy coding, and may the best captain win :-)
Catch you next time,
Amir 🚀 | bekbrace |
1,898,096 | "Med-AI: Transforming Healthcare with AI Innovations" | This is a submission for Twilio Challenge v24.06.12 What I Built Med-AI is an innovative... | 0 | 2024-06-23T20:02:38 | https://dev.to/jyotika6221/med-ai-transforming-healthcare-with-ai-innovations-2e12 | twiliochallenge, ai, twilio, devchallenge | *This is a submission for [Twilio Challenge v24.06.12](https://dev.to/challenges/twilio)*
## What I Built
Med-AI is an innovative AI-driven healthcare application that harnesses artificial intelligence to enhance medical diagnostics, treatment planning, and healthcare delivery. It focuses on improving patient care and operational efficiency through advanced AI technologies.
## Key Features:
### 📅 Appointment Booking:
- Streamlined appointment scheduling system for patients to book and manage appointments with healthcare providers efficiently.
- After booking, patients receive a confirmation call. [Demo of call received](https://youtu.be/6s9T8U6Je-E?si=t8UF8E6Fm0hYZpvN)
### 📝 Prescription Summary:
- AI-powered summarization of medical prescriptions to provide patients with clear and concise information about their medications.
### 💊 Medicine Reminders:
- Personalized medication reminders based on patient-specific schedules and dosage requirements, ensuring adherence to treatment plans through timely messages.
### 📈 Report Generation:
- Automated generation of medical reports and summaries, facilitating comprehensive documentation and analysis for healthcare professionals. By using `puppeteer` and `handlebars`, we generate the report that are being saved to MongoDB database when patients set reminders.
### 🌐 Multi-Language Support:
- Ensures accessibility and inclusivity by providing support for multiple languages, allowing users worldwide to interact with the application in their preferred language.
## Technologies Used:
- *Frontend:* Vite + React
- *Backend:* Node.js, Express
- *Testing:* Postman
### Dependencies:
- @google/generative-ai
- axios
- express
- handlebars
- mongoose
- nodemon
- puppeteer
- twilio
### Gemini Models:
- gemini-1.5-flash
- @google/generative-ai
### Twilio APIs:
- *SMS*
- *Call*
## Demo
<!-- Share a link to your app and include some screenshots here. -->
[Website link](https://med-ai-alpha.vercel.app/)
[Frontend repository](https://github.com/jyotika6221/med-ai-fe.git)
[Backend repository](https://github.com/pooranjoyb/med-ai-be.git)










## Twilio and AI
<!-- Tell us how you leveraged Twilio’s capabilities with AI -->
## AI Capabilities Integration:
To leverage AI capabilities in our Med-AI application, we integrated Twilio's APIs for both SMS and voice calls, alongside AI models such as Gemini. Here’s how we utilized these capabilities:
### 1. SMS Integration:
Twilio's SMS API is used to send personalized medication reminders to patients. These reminders are tailored based on AI-generated insights from patient-specific schedules and dosage requirements, ensuring adherence to treatment plans.
### 2. Voice Call Integration:
Twilio's voice call API automates confirmation calls for appointment bookings. After patients schedule an appointment through our application, Twilio initiates a voice call to confirm the details, enhancing user engagement with real-time communication.
### 3. AI-Powered Prescription Summaries:
Leveraging Gemini-1.5-Flash model, our application provides real-time AI-powered summarization of medical prescriptions. This feature ensures that patients receive clear and concise information about their medications, reducing errors and improving patient understanding.
## Additional Prize Categories
<!-- Does your submission qualify for any additional prize categories (Twilio Times Two, Impactful Innovators, Entertaining Endeavors)? Please list all that apply. -->
### Twilio Times Two:
We integrated two Twilio APIs: SMS and call functionalities. These APIs are utilized for sending medication reminders via SMS and confirming appointments through automated calls, enhancing patient engagement and operational efficiency.
### Impactful Innovators:
Med-AI drives significant social impact by leveraging AI to improve healthcare accessibility and efficiency. It enhances patient outcomes through accurate medication reminders, simplifies appointment bookings, and provides clear prescription summaries. This initiative aims to reduce medication errors, improve patient understanding, and support healthcare professionals with automated report generation, ultimately advancing healthcare delivery and patient care.
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
{% embed https://dev.to/pooranjoyb %}
| jyotika6221 |
1,898,091 | Concurrency and Parallelism in Ruby | Concurrency and Parallelism in Ruby In programming, concurrency and parallelism are... | 0 | 2024-06-23T19:46:42 | https://dev.to/francescoagati/concurrency-and-parallelism-in-ruby-1b9p | ruby, concurrency, parallelism, threads | ### Concurrency and Parallelism in Ruby
In programming, concurrency and parallelism are essential techniques for improving the performance and efficiency of code. Ruby, a popular programming language, offers various tools to handle these concepts. Let's explore these techniques using a simple example.
#### Synchronous Code
Synchronous code executes tasks one after the other. Here's an example:
```ruby
puts "Synchronous Code"
(1..5).each do |i|
puts i
sleep 1
end
```
In this code, numbers from 1 to 5 are printed with a 1-second delay between each number. The tasks run sequentially, meaning each number is printed only after the previous task (including the sleep) is completed.
#### Threads
Threads allow multiple sequences of instructions to run concurrently within the same program. Here's how we can use threads in Ruby:
```ruby
puts "\nThreads"
threads = []
(1..5).each do |i|
threads << Thread.new do
puts i
sleep 1
end
end
threads.each(&:join)
```
In this example, each number from 1 to 5 is printed by a separate thread. All threads run concurrently, and the `join` method ensures that the main program waits for all threads to finish before proceeding. This makes the tasks run in parallel, potentially reducing the total execution time.
#### Fork
The `fork` method creates a new process, which is a separate instance of the Ruby interpreter:
```ruby
puts "\nFork"
(1..5).each do |i|
pid = fork do
puts i
sleep 1
end
Process.wait(pid)
end
```
In this code, `fork` creates a new process for each number. The parent process waits for each child process to complete using `Process.wait`. Each process runs independently, providing true parallelism on multi-core systems.
#### Fibers
Fibers are lightweight concurrency primitives that enable cooperative multitasking:
```ruby
puts "\nFibers"
fibers = []
(1..5).each do |i|
fibers << Fiber.new do
puts i
sleep 1
Fiber.yield
end
end
fibers.each(&:resume)
```
Each fiber runs a block of code and can be paused and resumed. This example prints numbers 1 to 5, pausing after each number. Although fibers provide concurrency, they do not run in parallel; the main program controls when each fiber resumes.
#### Ractor (Ruby 3)
Ractors, introduced in Ruby 3, enable true parallel execution by running code in isolated compartments:
```ruby
puts "\nRactor"
ractors = (1..5).map do |i|
Ractor.new(i) do |i|
sleep 1
puts i
end
end
ractors.each(&:take)
```
In this example, each number from 1 to 5 is printed by a separate ractor. Ractors can run in parallel, making full use of multi-core processors. The `take` method waits for each ractor to finish and return its result.
### When to Use Concurrency and Parallelism Techniques
Understanding when to use each concurrency and parallelism technique is crucial for optimizing performance in Ruby applications. Here's a guide:
#### Threads and Fibers
- **Threads**: Use threads when you need to handle I/O-bound tasks, such as reading and writing files or making network requests. Threads can run concurrently but are limited by Ruby's Global Interpreter Lock (GIL), which means only one thread executes Ruby code at a time. Threads are heavier than fibers, requiring more resources, but they are suitable for tasks that involve waiting for external data.
- **Fibers**: Fibers are even lighter than threads and are used for cooperative multitasking. A single thread can manage multiple fibers, making fibers ideal for managing multiple I/O-bound tasks without creating additional threads. Fibers need explicit control for yielding and resuming, which provides fine-grained control over task execution.
#### Fork and Ractor
- **Fork**: Use `fork` for CPU-bound tasks that require significant computation and can benefit from true parallelism. Forking creates a new process, allowing it to run on a separate CPU core without being limited by the GIL. This is useful for heavy computations but incurs more overhead due to process creation and inter-process communication.
- **Ractor**: Introduced in Ruby 3, ractors provide a way to achieve parallelism while ensuring thread safety. Ractors are ideal for heavy computations that can be distributed across multiple CPU cores. Unlike threads, ractors do not share memory and communicate via message passing, avoiding issues with the GIL and improving performance on multi-core systems.
Ruby provides multiple ways to handle concurrency and parallelism, each suited for different scenarios. Synchronous code is simple but sequential. Threads and fibers allow for concurrent execution within a single process, with threads offering parallelism on multi-core systems. Forking creates new processes for true parallelism, while ractors offer a modern and thread-safe way to achieve parallel execution in Ruby 3. Understanding these techniques helps developers write efficient and performant Ruby programs. | francescoagati |
1,896,839 | Mood-Based Music: A WhatsApp Chatbot That Curates Personalized Playlists | This is a submission for the Twilio Challenge What I Built I built a WhatsApp chatbot... | 0 | 2024-06-23T20:00:00 | https://dev.to/irensaltali/mood-based-music-a-whatsapp-chatbot-that-curates-personalized-playlists-n51 | devchallenge, ai, twilio, twiliochallenge | *This is a submission for the [Twilio Challenge ](https://dev.to/challenges/twilio)*
## What I Built
I built a WhatsApp chatbot using Twilio that generates a personalized playlist of 5 songs based on the user's current mood. When a user sends a message to +1 (856) 975-0130 starting with "Hi", the bot prompts them to share how they are feeling. It then analyzes the sentiment of their response and curates a playlist to match their emotional state.
## Demo
You can try out the chatbot by sending a WhatsApp message to +1 (856) 975-0130 (You can use QR code below). Start your message with "Hi" and then share your current mood when prompted. The bot will respond with a playlist of 5 songs tailored to your emotional state.

{% youtube ECwNNLt56k0 %}
{% youtube RbzXDnEa6Oc %}
## Twilio and AI
This project leverages Twilio's WhatsApp API to enable interactive chat and Twilio Studio & Twilio Functions to handle the chatbot logic. When a message is received from the user, it is forwarded to Cloudflare Workers AI for sentiment analysis.
Cloudflare Workers AI is used to perform sentiment analysis on the user's mood description. The AI model detects the sentiment of the text as positive, negative or neutral. Based on the detected mood, an appropriate search query is generated and sent to the Spotify API to retrieve a playlist of songs matching that emotional state. The playlist is then returned to the Twilio Function, which sends it back to the user via WhatsApp messages.
Twilio enabled quick development of the chat interface, while Cloudflare Workers AI allowed seamless analysis of the user's mood without needing to manage infrastructure. By leveraging AI capabilities, relevant Spotify queries could be generated to curate personalized playlists. Integrating these technologies resulted in an engaging, mood-based music recommendation experience delivered through WhatsApp.
### Twilio Studio Flow

### Twilio Functions Code
[GitHub Repository](https://github.com/irensaltali/dev.to-twilio-challenge-spotify)
## Additional Prize Categories
- Entertaining Endeavors: This chatbot provides a fun and interactive way to discover music that resonates with your current mood.
- Twilio Times Two: This app uses Twilio's WhatsApp API, Twilio Studio for workflow automation, Twilio Functions to create a conversational experience that generates personalized playlists based on the user's mood, and Twilio CLI for building and deploying the project.
### Update
25.06.2024 - Facebook suspended my WhatsApp Account. I'm trying to recover. | irensaltali |
1,898,094 | Facebook System Design Frontend | A post by Bidisha Das | 0 | 2024-06-23T19:58:54 | https://dev.to/officialbidisha/facebook-system-design-frontend-3i0j |

| officialbidisha | |
1,898,093 | Quick tip: Using picoGPT in the SingleStore portal | Abstract picoGPT is a simplified and minimal implementation of the GPT model. It... | 0 | 2024-06-23T19:56:06 | https://dev.to/singlestore/quick-tip-using-picogpt-in-the-singlestore-portal-24lf | singlestoredb, gpt2, picogpt | ## Abstract
[picoGPT](https://github.com/jaymody/picoGPT) is a simplified and minimal implementation of the GPT model. It demonstrates the core principles of the GPT architecture without the requirement for a full-scale implementation. Written in Python and consisting of a small quantity of code, picoGPT doesn't implement many of the optimisations and enhancements found in comprehensive implementations. In this short article, we'll convert the original Python code to a Jupyter notebook and test it in the SingleStore portal.
The notebook file used in this article is available on [GitHub](https://github.com/VeryFatBoy/picogpt).
## Introduction
In an article titled "[GPT in 60 Lines of NumPy](https://jaykmody.com/blog/gpt-from-scratch/)" published in 2023, the author describes a very compact GPT solution. The article contains a wealth of implementation details and step-by-step instructions. Using the Python code provided on [GitHub](https://github.com/jaymody/picoGPT), we'll convert the standalone implementation to run in a Jupyter notebook. We'll then test the notebook in the SingleStore portal.
The code can be broken down into three main parts:
1. **encoder:** A GPT-2 encoder from OpenAI
2. **utils:** Several helper functions for GPT-2 model setup
3. **gpt2:** A transformer for text generation similar to a GPT-2 model
We'll make the minimal changes required to get the code working for us in the SingleStore portal.
## Create a SingleStoreDB Cloud account
A [previous article](https://dev.to/singlestore/quick-tip-using-dbt-with-singlestoredb-161g) showed the steps to create a free SingleStoreDB Cloud account. We'll use the following settings:
- **Workspace Group Name:** picoGPT Demo Group
- **Cloud Provider:** AWS
- **Region:** US East 1 (N. Virginia)
- **Workspace Name:** picogpt-demo
- **Size:** S-00
## Import the notebook
We'll download the notebook from [GitHub](https://github.com/VeryFatBoy/picogpt).
From the left navigation pane in the SingleStore cloud portal, we'll select **DEVELOP > Data Studio**.
In the top right of the web page, we'll select **New Notebook > Import From File**. We'll use the wizard to locate and import the notebook we downloaded from GitHub.
## Run the notebook
After checking that we are connected to the SingleStore workspace, we'll select **Run > Run All Cells**.
We have two examples programmed into the notebook.
First, the example text from the original implementation:
```python
result = main("Alan Turing theorized that computers would one day become")
print(result)
```
Example output:
```
the most powerful machines on the planet.
The computer is a machine that can perform complex calculations, and it can perform these calculations in a way that is very similar to the human brain.
```
Second, some example text from the [Ollama website](https://ollama.com/blog/embedding-models):
```python
result = main("Llamas are members of the camelid family meaning")
print(result)
```
Example output:
```
they are the only members of the family that live in the desert.
The camelid family is a group of animals that live in the desert. The camelid family is a group of animals
```
So, some mixed results.
## Summary
In this short article, we've implemented and run picoGPT in the SingleStore portal. Performance may not be great and results seem somewhat mixed, but implementing the GPT in such few lines of code is very impressive. | veryfatboy |
1,898,092 | The History of Large Language Models (LLM) | Large Language Models (LLMs) have evolved from simple N-Gram models to sophisticated transformers... | 0 | 2024-06-23T19:48:45 | https://dev.to/sgaglione/the-history-of-large-language-models-llm-82f | llm, python, ai | _Large Language Models (LLMs) have evolved from simple N-Gram models to sophisticated transformers like GPT-3, revolutionizing natural language processing. This article traces their development, highlighting key advancements such as Recurrent Neural Networks (RNNs) and the Transformer model, with practical Python examples._
###
Large Language Models (LLM) are at the core of many innovations in artificial intelligence (AI) today. They have the ability to understand and generate natural language impressively. But how did we get here? This article guides you through the history of LLMs, from their beginnings to their current applications, using simple explanations and concrete examples.
## The Beginnings: N-Gram Models
1. N-Gram Models The first language models were based on n-grams, a simple yet effective technique for modeling text. An n-gram is a sequence of n elements, usually words or letters. For example, in the sentence “I eat an apple”, the bigrams (n=2) would be: “I eat”, “eat an”, “an apple”.
Example in Python:
```python
from collections import Counter
def generate_ngrams(text, n):
words = text.split()
ngrams = zip(*[words[i:] for i in range(n)])
return [" ".join(ngram) for ngram in ngrams]
text = "I eat an apple"
bigrams = generate_ngrams(text, 2)
print(Counter(bigrams))
```
## The Advent of Neural Networks
2. Recurrent Neural Networks (RNN) RNNs marked a major advancement by allowing models to retain some memory of past information. This makes them particularly suited for text processing, where context is crucial.
Example in Python with TensorFlow:
```python
import tensorflow as tf
from tensorflow.keras.layers import SimpleRNN, Embedding, Dense
model = tf.keras.Sequential([
Embedding(input_dim=10000, output_dim=32),
SimpleRNN(32),
Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```
Transformers: A Revolution
3. The Transformer Model Introduced by Vaswani et al. in 2017, the Transformer model revolutionized natural language processing. It uses an attention mechanism that allows processing all positions in a sequence in parallel, making the model much more efficient.
Example of Attention in Python:
```python
import tensorflow as tf
def scaled_dot_product_attention(query, key, value):
matmul_qk = tf.matmul(query, key, transpose_b=True)
dk = tf.cast(tf.shape(key)[-1], tf.float32)
scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)
attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)
output = tf.matmul(attention_weights, value)
return output
query = tf.random.normal(shape=[1, 60, 512])
key = tf.random.normal(shape=[1, 60, 512])
value = tf.random.normal(shape=[1, 60, 512])
output = scaled_dot_product_attention(query, key, value)
print(output.shape)
```
## Large Language Models (LLM)
4. GPT (Generative Pre-trained Transformer) GPT, developed by OpenAI, is one of the most well-known LLMs. It is pre-trained on a vast amount of text and then fine-tuned for specific tasks. GPT-3, for example, has 175 billion parameters, allowing it to generate very coherent and contextual text.
Example of Using GPT-3 with OpenAI API:
```python
response = openai.Completion.create(
engine="text-davinci-003",
prompt="Explain the importance of language models in AI.",
max_tokens=150
)
print(response.choices[0].text.strip())
```
## Conclusion
Language models have come a long way, from simple n-grams to powerful transformers like GPT-3. These advancements enable incredible applications today, from automatic translation to content generation.
Key Points:
N-Gram: Simple text modeling technique.
RNN: Introduction of memory in sequential processing.
Transformer: Use of attention for efficient parallel processing.
GPT: Powerful language models capable of understanding and generating coherent text.
With these basics, you can start exploring the wonders of language models and their impact on our world.
If you have any questions or would like to delve deeper into a particular point, feel free to let me know in the comments.
| sgaglione |
1,893,622 | Creating a Material Spinner with Pure and Simple CSS | Everyone has seen it hundreds, if not thousands of times, and it seems like this loader is very... | 0 | 2024-06-23T19:47:28 | https://dev.to/alekseiberezkin/creating-a-material-spinner-with-pure-and-simple-css-1b60 | webdev, css, material | {%codepen https://codepen.io/wvtyubnf-the-selector/pen/MWpGbre %}
Everyone has seen it hundreds, if not thousands of times, and it seems like this loader is very easy and natural. However, if you try to create it from scratch, you'll find it surprisingly challenging. The first problem is simply understanding the motions involved.
## Rotation and... what?
After observing it for a while, you may notice there are two motions, simple rotation, and something weird happening with the arc ends. Let's remove the first and slow down the second:
{%codepen https://codepen.io/wvtyubnf-the-selector/pen/vYwdvvr %}
Much clearer now. First, the arc's beginning moves forward, then its end catches up with the beginning. But how is this possible to achieve?
## SVG circle and `stroke-dasharray`
The solution relies on the SVG [`stroke-dasharray`](https://developer.mozilla.org/en-US/docs/Web/SVG/Attribute/stroke-dasharray) property. When applied to an SVG element, such as `<circle>`, it converts the solid stroke into a dashed one. For example, `stroke-dasharray: 10px 20px 30px 40px` means the first dash is 10px, the next is a 20px gap, then a 30px stroke, then a 40px gap. This pattern repeats until the full circle is complete:
{%codepen https://codepen.io/wvtyubnf-the-selector/pen/jOozPRm %}
You might have noticed a usability issue, though: the value items are the arcs lengths in [user-space](https://developer.mozilla.org/en-US/docs/Web/SVG/Attribute/viewBox) pixels. It can be in some other CSS unit such as `%` or `em`, but this is still obscure because we humans measure arcs and angles in degrees. We understand 90° but not 90px.
Fortunately, it's easy to fix this issue by calculating the `--1deg` custom property, which represents the length of one degree, and using it as a unit:
```css
circle {
--r: 47px;
--1deg: calc(2 * pi * var(--r) / 360);
stroke-dasharray:
calc(40 * var(--1deg))
calc(80 * var(--1deg));
}
```
The result resembles the trace of helicopter blades:
{%codepen https://codepen.io/wvtyubnf-the-selector/pen/dyEmYPB %}
We now have a convenient tool to define arc ends in human-readable units — degrees. The next picture shows the animation phases of the spinner and their corresponding arc values.
{%codepen https://codepen.io/wvtyubnf-the-selector/pen/JjqpwVr %}
Conventions:
* Gaps are shown in light gray; these are not actual gaps but are used to make them visible.
* The first and last arcs are not to scale, they are slightly more than 2° for the same reason.
Now we are ready to write it in CSS. Note the additional leading zero, which is needed to skip the initial stroke and start with a gap.
```css
@keyframes dash-anim {
0% {
stroke-dasharray:
0
0
calc(2 * var(--1deg))
calc(358 * var(--1deg));
}
50% {
stroke-dasharray:
0
calc(35 * var(--1deg))
calc(290 * var(--1deg))
calc(35 * var(--1deg));
}
100% {
stroke-dasharray:
0
calc(358 * var(--1deg))
calc(2 * var(--1deg));
}
}
circle {
animation: dash-anim 5000ms ease-in-out infinite;
}
```
This renders the already familiar slow arc animation:
{%codepen https://codepen.io/wvtyubnf-the-selector/pen/vYwdvvr %}
## Fix the jumping arc, or tolerate it?
Because the arc cannot cross the origin of `stroke-dasharray`, there's a noticeable “jump” between `100%` and `0%` phases. It is theoretically possible to fix this:
```CSS
@keyframes dash-anim {
/* ... */
100%: {
/* ... */
transform: rotate(2deg);
}
}
```
Unfortunately it doesn't work very smooth in Firefox — there is annoying blinking in the `0%`/`100%` phase. However, a 2° jump is not that significant. The jump is barely noticeable at full speed and can be safely tolerated.
## Implementation notes
### stroke-dashoffset
If you inspect the [MUI implementation](https://mui.com/material-ui/react-progress/), you will notice they use a 2-component `stroke-dasharray` together with a negative [`stroke-dashoffset`](https://developer.mozilla.org/en-US/docs/Web/SVG/Attribute/stroke-dashoffset). The latter effectively functions similarly to a leading zero in `stroke-dasharray`, creating the leading gap. However, I'm using `stroke-dasharray` with a leading zero because I find it easier to understand.
The following fragment shows the same animation rewritten with `stroke-dashoffset`, like in MUI:
```CSS
@keyframes dash-anim {
0% {
stroke-dasharray:
calc(2 * var(--1deg))
calc(358 * var(--1deg));
stroke-dashoffset: 0;
}
50% {
stroke-dasharray:
calc(290 * var(--1deg))
calc(358 * var(--1deg));
stroke-dashoffset: calc(-35 * var(--1deg));
}
100% {
stroke-dasharray:
calc(2 * var(--1deg))
calc(358 * var(--1deg));
stroke-dashoffset: calc(-358 * var(--1deg));
}
}
```
You may also notice that they do not bother to adjust the `stroke-dasharray` values that exceed the full circle. For example, in the `50%` phase, only 35° of a 358° gap will be visible, and everything else will be trimmed away.
### Precalculated values
The MUI spinner doesn't use CSS variables and calculations — all values are precalculated. You may do the same if you target older browsers. The following code represents the same animation given `r: 47px`.
```CSS
@keyframes dash-anim {
0% {
stroke-dasharray: 2px 293px;
stroke-dashoffset: 0;
}
50% {
stroke-dasharray: 238px 293px;
stroke-dashoffset: -29px;
}
100% {
stroke-dasharray: 2px 293px;
stroke-dashoffset: -293px;
}
}
```
Totally cryptic!
### Material Web Components
The official Google implementation of the [Material spinner](https://material-web.dev/components/progress/) is much more sophisticated. It doesn't contain any SVG; instead, its arcs are made with two empty containers using `border-radius: 50%`, along with four rotation animations. This complexity pays off — the result looks very clean and smooth, and it runs perfectly in all browsers.
## And finally
It's time to complete our spinner: speed it up, reintroduce rotation, and let's spice it up with a blurry shadow and fancy colors!
{%codepen https://codepen.io/wvtyubnf-the-selector/pen/GRaxZJP %} | alekseiberezkin |
1,898,090 | Computer science | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-23T19:30:45 | https://dev.to/wafaberr/computer-science-1eaa | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
<!-- Explain a computer science concept in 256 characters or less. -->
Computer science is the study of computers, including their theoretical foundations, algorithms, hardware, and software. It focuses on computation, information, and automation using computational systems. Algorithms, which provide instructions for tasks, play a central role in computer science
## Additional Context
These are the more important area of computer science:
Data Structures: Organizing and managing data (e.g., arrays, linked lists, trees).
Algorithms: Step-by-step instructions for solving problems (e.g., sorting, searching).
Programming Languages: Tools for writing software (e.g., Python, Java, C++).
Operating Systems: Managing hardware resources (e.g., memory, processes).
Databases: Storing and retrieving structured data (e.g., SQL databases).
Artificial Intelligence: Creating intelligent systems (e.g., machine learning, neural networks).
Computer Networks: Connecting devices (e.g., TCP/IP, routers).
Software Engineering: Developing reliable, maintainable software.
Theory of Computation: Understanding what computers can and cannot do (e.g., Turing machines, complexity theory).
<!-- Please share any additional context you think the judges should take into consideration as it relates to your One Byte Explainer. -->
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image to your post (if you want). -->
<!-- Thanks for participating! --> | wafaberr |
1,898,088 | Ondoarding new developers | Hey Devs! In our previous discussion, we highlighted the importance of a great onboarding process... | 0 | 2024-06-23T19:29:48 | https://dev.to/jwtiller_c47bdfa134adf302/ondoarding-new-developers-kmf | dotnet, onboarding | Hey Devs!
In our previous discussion, we highlighted the importance of a great onboarding process and shared some of your best experiences and practices. Today, let's delve deeper into what makes an onboarding process truly exceptional and how tools like [RazorSharp](https://razorsharp.dev) can enhance this crucial phase, ensuring your developers are empowered and engaged.
## Key Elements of a Great Onboarding Process
1. **Structured Orientation Programs**: A well-defined schedule that includes an introduction to the company culture, processes, and tools sets the tone for a smooth transition.
2. **Accessible Documentation**: Comprehensive and up-to-date documentation is essential. It helps new hires understand workflows, systems, and standards without constant hand-holding.
3. **Mentorship and Support**: Pairing new developers with experienced mentors can accelerate learning and foster a supportive environment.
4. **Hands-On Training**: Practical sessions where new hires can work on actual projects with guidance ensure they gain confidence and competence quickly.
5. **Feedback Mechanisms**: Regular check-ins and feedback loops help identify and address any issues early, ensuring continuous improvement of the onboarding process.
## How RazorSharp Enhances Onboarding
RazorSharp APM is designed to make the onboarding process smoother and more effective for developers, especially in .NET environments. Here’s how:
1. **Real-Time Performance Monitoring**: New developers can quickly understand the performance dynamics of the applications they work on. RazorSharp provides immediate insights into performance bottlenecks, helping them learn faster and contribute effectively from day one.
2. **AI-Powered Automated Documentation**: RazorSharp’s AI-driven documentation tools automatically generate and update documentation based on real-time data. This ensures that new hires always have access to the latest information, reducing the learning curve and enhancing productivity.
3. **Simplified Debugging**: RazorSharp simplifies the debugging process, allowing new developers to quickly identify and fix issues. This not only boosts their confidence but also ensures they can contribute to critical tasks without extensive oversight.
4. **Enhanced Security with RazorSharp Guard**: Security is a crucial aspect of any development process. RazorSharp Guard actively monitors for potential security threats like SQL injection attacks, providing new developers with a secure environment to work in and learn from.
5. **Seamless Integration**: RazorSharp can be integrated into your existing .NET applications with minimal setup. This ensures that new developers can start using the tool almost immediately, without having to navigate complex configurations.
6. **Comprehensive Visualizations**: Tools like the topology map and integration with platforms like OpenTelemetry provide new developers with a clear overview of system architecture and performance metrics. This holistic view aids in understanding complex systems quickly.
## Empowering Developers to Reduce Turnover and Boost Engagement
Using tools that empower your developers to understand and troubleshoot issues is crucial. When developers can quickly identify and resolve problems, they feel more competent and motivated. This empowerment reduces frustration and demotivation, which are significant factors in high turnover rates. Keeping your developers engaged and satisfied with their work environment leads to better retention and a more productive team.
## Future of RazorSharp: Customizable AI-Driven Onboarding
In the future, RazorSharp will further empower developers by offering customizable AI-driven onboarding processes. Imagine being able to tailor the onboarding experience based on a new developer's seniority, tech stack experience, and specific needs. Here’s how it could work:
1. **Personalized Onboarding Plans**: Using AI, RazorSharp will analyze a new developer’s background and create a customized onboarding plan that focuses on their specific needs and knowledge gaps.
2. **Adaptive Learning Paths**: The AI-driven system will adapt the onboarding process in real-time based on the developer’s progress, providing additional resources or adjusting the complexity of tasks as needed.
3. **Integrated Feedback Mechanisms**: Continuous feedback loops will help refine the onboarding process, ensuring it remains relevant and effective for each new hire.
4. **Enhanced Collaboration Tools**: Future integrations will include advanced collaboration tools, making it easier for new developers to communicate with their mentors and peers, fostering a more inclusive and supportive onboarding environment. | jwtiller_c47bdfa134adf302 |
1,898,087 | Mastering Debouncing in JavaScript: Improve Performance with Ease | Debouncing is a simple yet powerful technique in JavaScript that helps optimize performance by... | 0 | 2024-06-23T19:27:39 | https://dev.to/dev_habib_nuhu/mastering-debouncing-in-javascript-improve-performance-with-ease-1n4p | webdev, javascript, programming, react |

Debouncing is a simple yet powerful technique in JavaScript that helps optimize performance by limiting the rate at which a function is executed. This is especially useful for handling events like window resizing, scrolling, or input field changes, where frequent triggers can slow down your application. In this article, we'll explore how debouncing works and how you can easily implement it to enhance your web projects.
Let's dive into how you can implement debouncing in a React application. We’ll create an example where a search input field updates the displayed results as the user types, but using debouncing to avoid making a request on every single keystroke.
**1. Setup Your React App**
First, make sure you have a React app set up. If you don’t already have one, you can create it using Create React App:
```
npx create-react-app debounce-example
cd debounce-example
npm start
```
**2. Create a Debounce Function**
Create a utils.js file (or any other name you prefer) in the src folder for the debounce function:
```
export function debounce(func, wait) {
let timeout;
return function(...args) {
const context = this;
clearTimeout(timeout);
timeout = setTimeout(() => func.apply(context, args), wait);
};
}
```
The debounce function takes a func and wait time in milliseconds. It returns a new function that delays the execution of func until after wait milliseconds have passed since the last time it was invoked.
**3. Implement the Debounced Search Component**
Now, let's create a component that uses this debounce function. In this example, we will create a `Search` component that updates the displayed search results after the user stops typing for a specified period.
```
import React, { useState, useMemo } from 'react';
import { debounce } from './utils';
const Search = () => {
const [query, setQuery] = useState('');
const [results, setResults] = useState([]);
const handleSearch = (query) => {
// Simulating an API call with a timeout
console.log(`Searching for: ${query}`);
setResults([`Result 1 for "${query}"`, `Result 2 for "${query}"`]);
};
const debouncedSearch = useMemo(
() => debounce(handleSearch, 500),
[]
);
const handleChange = (e) => {
const { value } = e.target;
setQuery(value);
debouncedSearch(value);
};
return (
<div>
<input
type="text"
value={query}
onChange={handleChange}
placeholder="Search..."
/>
<ul>
{results.map((result, index) => (
<li key={index}>{result}</li>
))}
</ul>
</div>
);
};
export default Search;
```
We use `useMemo` to create a memoized version of the debounced handleSearch function. This ensures that the debounce function isn’t recreated on every render.
**4. Use the Search Component**
Finally, use the `Search` component in your main `App` component.
```
import React from 'react';
import Search from './Search';
const App = () => {
return (
<div>
<h1>Debounced Search Example</h1>
<Search />
</div>
);
};
export default App;
```
When you run the app and type in the search input, you'll notice that the search function (`handleSearch`) is not called immediately on every keystroke. Instead, it’s called only after you stop typing for 500 milliseconds, thanks to the debounce function. This reduces the number of times the search function is executed, improving the performance of your application.
This example shows how debouncing can be effectively used in a React application to manage high-frequency events efficiently. | dev_habib_nuhu |
1,898,086 | AI assistant/chatbot for use/support | This is a submission for the Twilio Challenge What I Built I built an SMS/whatsapp/call... | 0 | 2024-06-23T19:25:14 | https://dev.to/tophepzz1/ai-assistantchatbot-for-usesupport-jei | devchallenge, twiliochallenge, ai, twilio | *This is a submission for the [Twilio Challenge ](https://dev.to/challenges/twilio)*
## What I Built
<!-- Share an overview about your project. -->
I built an SMS/whatsapp/call Assistant that you can chat with and it will still retain context, a conversational AI.
it keeps an history of the conversation so in as much as you use it for AI chatting purposes, it can also be used as a support system as it has a chat channel for the admin to make use of and the admin can check message History between the bot and the user then the admin can take over, the prompt to allow an admin take over is
on the frontend too, it contains a page for the admin to send forth personalized bulk SMS to all their contacts, it has a way of managing contacts and a preview of all the contacts
## Demo
<!-- Share a link to your app and include some screenshots here. -->
[Github url](https://github.com/twilio-hackathon)
A [demo link](https://twilio.spartapp.ng/admin) for this demo, there is a page to set your env and details then you can set the webhook to https://twilio.spartapp.ng/webhook/sms-whatsapp for both whatsapp and SMS
https://twilio.spartapp.ng/webhook/sms-whatsapp
https://twilio.spartapp.ng/webhook/voice/incoming for the voice call
to set environment variables go to https://twilio.spartapp.ng/environment
please set ENV variables before testing because i ran into problem with the whatsapp sender.i couldn't set it up
## Twilio and AI
<!-- Tell us how you leveraged Twilio’s capabilities with AI -->
i added the functionality of AI powered auto-responder based on previous message so it still retains context, it also includes twilios API for direct messaging and bulk messaging, also for the voice autoresponder which automatically responds to calls from users
## Additional Prize Categories
<!-- Does your submission qualify for any additional prize categories (Twilio Times Two, Impactful Innovators, Entertaining Endeavors)? Please list all that apply. -->
1. Twilio times two i used both voice, whatsapp and sms API
2. Impactful Innovators: with this when advanced, company and organizations can take control of their customer service while giving the users opportunity to interact with AI which can drive sales
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image (if you want). -->
Generate Messaage
(https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b8a5m5ykmsu62wmex92u.png)
the generated message
(https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x86mbgua4q0r99y2puu7.png)
the sent message
(https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gazkx3wgx234v5x3c8uv.jpg)
the contacts management section
(https://dev-to-uploads.s3.amazonaws.com/uploads/articles/352cnrl2f7e79cri982s.png)
<!-- Thanks for participating! → | tophepzz1 |
1,898,085 | Elevate Your Dining Experience with Padded Dining Chairs | Description Transform your dining area into a haven of comfort and style with our exquisite [padded... | 0 | 2024-06-23T19:22:17 | https://dev.to/mani_dia_8c53345aa4e95886/elevate-your-dining-experience-with-padded-dining-chairs-1ojl | Description
Transform your dining area into a haven of comfort and style with our exquisite [padded dining chairs] https://www.elegantcollections.com.au/products/copy-of-set-of-2-aleah-velvet-black-rubberwood-upholstered-dining-chairs-tufted-back. These chairs are not just a seat but an experience, offering unparalleled support and elegance that makes every meal a delight.
Unmatched Comfort:
Our padded dining chairs feature high-density foam cushions that cradle your body, ensuring maximum comfort during long meals and gatherings. Say goodbye to the discomfort of hard, unforgiving seating and hello to the bliss of luxurious padding.
Stylish Design:
Available in a variety of chic designs and fabrics, our padded dining chairs are the perfect blend of modern aesthetics and timeless elegance. Whether your style is contemporary, classic, or eclectic, you’ll find a design that complements your decor beautifully.
Durable and Long-Lasting:
Crafted from premium materials, these chairs are built to last. The sturdy frames and high-quality upholstery are designed to withstand daily use while maintaining their pristine appearance. Enjoy years of comfort and style without compromise.
Versatile and Functional
Perfect for both casual family meals and sophisticated dinner parties, our padded dining chairs add a touch of luxury to any occasion. Their versatile design makes them suitable for dining rooms, kitchens, and even home offices.
Easy Maintenance:
Our chairs are designed for easy care, with stain-resistant fabrics and durable finishes. A simple wipe-down keeps them looking fresh and new, ensuring your dining area always impresses guests.
Why Choose Our Padded Dining Chairs?
Enhanced Comfort:
Soft padding provides superior comfort and support.
Elegant Styles:
A variety of designs to match any interior decor.
Durable Construction:
Built to withstand the test of time.
Versatile Use:
Ideal for multiple settings and occasions.
Easy to Maintain:
Simple cleaning for long-lasting beauty.
Elevate your dining experience with the ultimate in comfort and style. Browse our collection of padded dining chairs today and find the perfect addition to your home. Make every meal a moment of luxury!
FAQs About Our Padded Dining Chairs
1. What materials are used in the padding of the chairs?
**Our padded dining chairs feature high-density foam cushions for maximum comfort and durability, covered in premium upholstery fabrics for a stylish finish.
2. How do I clean and maintain the chairs?
Our chairs are designed with easy maintenance in mind. Simply wipe the fabric with a damp cloth to remove any spills or stains. For deeper cleaning, refer to the care instructions provided when making your purchase.
3. Are the chairs suitable for heavy use?
Yes, our padded dining chairs are built with sturdy frames and high-quality materials, making them ideal for daily use. They are designed to withstand frequent use without losing their comfort or aesthetic appeal.
4. What styles and colours are available?
We offer a wide range of styles and colours to suit various tastes and decor themes. From contemporary to classic designs, you'll find options in neutral tones as well as vibrant hues to match your dining space perfectly.
##
5. Do the chairs come pre-assembled?
Our padded dining chairs come with simple assembly instructions. Most models require minimal assembly, typically involving attaching the legs to the seat. All necessary tools and hardware are included in the package. | mani_dia_8c53345aa4e95886 | |
1,897,934 | STR Fasa 3 Tarikh dan Jumlah | Pengenalan STR Fasa 3 Sumbangan Tunai Rahmah (STR Fasa 3) untuk rakyat Malaysia telah... | 0 | 2024-06-23T16:23:02 | https://dev.to/str2024/str-fasa-3-5fie | ## Pengenalan STR Fasa 3
Sumbangan Tunai Rahmah ([STR Fasa 3](https://semakanstr.com/str-fasa-3/)) untuk rakyat Malaysia telah diumumkan dan dijangka akan diagihkan tidak lama lagi. Berikut adalah maklumat terkini:
STR Fasa 3 adalah sebahagian daripada inisiatif Payung Rahmah yang bertujuan untuk membantu golongan yang kurang berkemampuan dalam menghadapi cabaran kos sara hidup. Bagi tahun 2024, bantuan ini telah ditambahbaik dengan mewujudkan Sumbangan Asas Rahmah, iaitu bantuan bulanan/tambahan khusus untuk golongan miskin, miskin tegar, B40, dan M40.
## Cara Semakan Status STR Fasa 3
Status semakan boleh dibuat melalui pautan berikut:
1. Klik "Lok Masuk MySTR"
2. Masukkan nombor Mykad pemohon
3. Pengesahan kata kunci keselamatan
4. Maklumat yang akan dipaparkan:
- Nama pemohon
- No Mykad pemohon
- Status permohonan
- Maklumat pembayaran Fasa 1 (jika layak)
- Klik menu "Semakan Pembayaran Fasa 1"
- Masukkan kata laluan
## Jumlah Pembayaran Mengikut Kategori
Pembayaran STR Fasa 3 akan berbeza-beza mengikut kategori. Berikut adalah jumlah yang diterima oleh penerima STR mengikut kategori:
- Rumah isi dengan pendapatan kurang daripada RM2,500: RM500
- Rumah isi dengan pendapatan antara RM2,501 dan RM5,000: RM100-RM300
- Individu yang tidak berkahwin dan orang tua tanpa suami: RM150 dan RM100, masing-masing.
## Cara Membaca Status STR Terkini
Untuk memahami status STR, sila rujuk panduan di bawah:
1. Klik "Lok Masuk MySTR"
2. Masukkan nombor Mykad pemohon
3. Pengesahan kata kunci keselamatan
4. Maklumat yang akan dipaparkan:
- Nama pemohon
- No Mykad pemohon
- Status permohonan
- Maklumat pembayaran Fasa 1 (jika layak)
- Klik menu "Semakan Pembayaran Fasa 1"
- Masukkan kata laluan
## Tarikh Pembayaran
Tarikh pembayaran STR Fasa 3 belum diumumkan secara rasmi, tetapi dijangka akan diagihkan tidak lama lagi. Anda boleh memantau perkembangan terkini melalui laman web rasmi Kementerian Kewangan Malaysia.
## Cara Membuat Permohonan
Permohonan STR boleh dibuat secara dalam talian atau melalui borang kertas. Berikut adalah langkah-langkah untuk membuat permohonan secara dalam talian:
1. Kunjungi laman web rasmi Kementerian Kewangan Malaysia.
2. Klik "Permohonan Baharu STR 2024".
3. Masukkan maklumat diri, termasuk nama, umur, status pekerjaan, pendapatan bulanan, status perkahwinan, maklumat akaun bank, dan lain-lain.
4. Muat naik dokumen tambahan yang diperlukan (jika ada).
5. Klik "Teruskan" dan "Hantar" untuk menghantar permohonan.
## Pautan Rasmi
Anda boleh memantau perkembangan terkini dan membuat status status melalui pautan berikut:
- [https://manfaat.mof.gov.my/](https://bantuantunai.hasil.gov.my/)
- [https://manfaat.mof.gov.my/](https://manfaat.mof.gov.my/)
## Peringatan
Anda disarankan untuk menonton pautan rasmi dan tidak mengikuti pautan yang tidak sah daripada pihak-pihak yang tidak bertanggungjawab. | str2024 | |
1,897,763 | Importance of Soft Skills for Job Interviews | Most of the time, while preparing for job interviews, most candidates try to demonstrate their... | 0 | 2024-06-23T19:21:07 | https://dev.to/m_midas/the-importance-of-soft-skills-5 | webdev, beginners, programming, career | Most of the time, while preparing for job interviews, most candidates try to demonstrate their technical abilities, or hard skills. Hard skills are no doubt essential, but soft skills play an equally important role and can literally make or break an opportunity. Learning and working on soft skills will not only help in acing an interview but will also lay the foundation for long-term success in any career field.
<img width="100%" style="width:100%" src="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExb3B5Mmo5bmI5em41eTAybTk3eXNldXo4cXA2aTYxZGtxdGxvMjVlaCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/8IGuMMq3Aka8Zq9Kax/giphy.gif">
### What Are Soft Skills?
Soft skills, along with being referred to as interpersonal or people skills, are personal attributes and a set of behaviors that facilitate a person to relate and interact effectively and harmoniously with other people. These skills do not emanate from a particular job or profession; rather, they are universally useful in any workplace. Some of the basic examples of soft skills include the following:
- **Communication**: the ability to clearly convey information to others and listen attentively;
- **Teamwork:** the capability to work in cooperation with others for some common objective.
- **Problem-solving**: ability in spotting issues, and arriving at resolutions that work.
- **Time management**: distribute the use of time wisely according to priorities.
- **Adaptability**: be ready to change or adapt to new situations and challenges.
- **Leadership**: art of guiding and encouraging others to fulfill a goal.
### Why are Soft Skills Important?
1. **Better collaboration**: The modern workplace calls for collaboration on every level. Effective communication and team working skills are the doors to better collaboration, hence improved productivity, and enhanced atmosphere at the workplace.
2. **Enhanced problem-solving:** Employers are looking for candidates who can think creatively and seriously about problems at hand. With good problem-solving skills, you will take lesser time trying to sort the issues effectively.
3. **Better Adaptability:** The modern workplace is constantly in flux. Any employee who will be able to adapt to new technologies, processes, or roles is invaluable to any organization.
4. **Increased Leadership Potential:** To be a leader means not only to manage people but also to inspire and motivate your team. Good leaders possess strong soft skills that help build trust and move their teams forward toward success.
### Soft Skills vs. Hard Skills
While hard skills are discrete, teachable abilities or knowledge sets, relatively easy to measure like coding, data analysis, etc. Soft skills are more subjective and harder to quantify. Both types of skills are imperative, and it is often the soft skills that become differentiators. The reasons why this is so are expounded in the following points: Soft skills don't prove easy for one to undervalue or belittle over hard skills.
- **Interpersonal Interaction:** Even the most technically skilled individual will not perform so well in a role if he/she lacks the ability to communicate or collaborate with colleagues and clients.
- **Adaptability to Change:** That's the thing: hard skills can get outdated as technology and methodologies change. Soft skills, however, stay the same no matter what.
- **Holistic Development:** The employer seeks people who have depth and have the not only technical ability to get the job done but could also enhance company culture.
- **Ease of Teaching:** Hard skills are always easier to be taught by senior colleagues, whereas soft skills—like effective communication, teamwork, and adaptability—require more of a fundamental change in behavior and mindset; hence, they are extremely hard to inculcate in a work environment. Hence, people with already high quotient soft skills have a considerable edge over others.
### Developing Soft Skills
Developing soft skills takes time and conscious efforts. Some of the strategies to enhance your soft skills are enumerated below:
1. **Self-Assessment and Feedback:** Begin by knowing one's weaknesses, asking peers, mentors, and supervisors about the changes they would like them to develop. Self-assessment tools and personality tests prove quite helpful in this regard.
2. **Active Listening**: Listen actively by giving full attention to the speaker, asking questions for clarification, and summarizing their points to be certain that one has understood them.
3. **Effective Communication**: Verbal and non-verbal communication is important. It encapsulates clear articulation, good body language, and active engagement in conversations.
4. **Conflict Resolution**: Learn techniques of conflict resolution to handle disagreements or misunderstandings professionally.
5. **Time Management**: Tools and techniques for managing time will consist of to-do lists, calendars, and priority frameworks.
6. **Continuous Learning**: Participate in workshops, seminars, and courses to develop soft skills. Read books and articles on leadership, communication, and personal development.
7. **Networking**: Engage in networking activities to make relations and test one's interpersonal skills in different environments. Attend meetups, join conferences, and take part in other sector-specific events that will help you introduce yourself to colleagues. These contacts will assist not only in widening the circle of professional society but will also work towards developing communications and social skills.
### Conclusion
Soft skills are very vital in a job interview and for your all-round career development. Whereas hard skills show that you are competent technically, soft skills reflect one's inter-relationship abilities, flexibility to change, and leadership skills. Much of your employability and potential career growth depends on the investment you put into building your soft skills. After all, it is not what you know but how you have to galvanize it within a team-oriented environment at work, where things seem to go at a blurry pace. | m_midas |
1,898,059 | A Comprehensive Guide to Effective Backlink Strategies | Introduction Embarking on an SEO journey can be both exciting and challenging. Backlinks... | 0 | 2024-06-23T19:18:56 | https://dev.to/gohil1401/a-comprehensive-guide-to-effective-backlink-strategies-36mm | webdev, beginners, tutorial, seo |
## Introduction
Embarking on an SEO journey can be both exciting and challenging. Backlinks are a crucial component of SEO, helping to improve your website's authority and search engine ranking. In this comprehensive guide, we'll explore various backlink strategies, tools like the Moz Bar Chrome extension, and different submission techniques to enhance your SEO efforts.
## Understanding Backlinks
Backlinks are hyperlinks from one website to another. They play a critical role in SEO by signaling to search engines that other websites vouch for your content. High-quality backlinks can enhance your website's authority, leading to better search engine rankings and increased traffic.
## Types of Backlinks
- **Natural Backlinks:** These are earned without any direct effort, usually when someone finds your content valuable and links to it.
- **Manually Built Backlinks:** These are acquired through deliberate actions, such as asking influencers to link to your site.
- **Self-Created Backlinks:** These are created by adding links in forums, blog comments, or online directories.
**Dofollow vs. Nofollow Backlinks**
**Dofollow Backlinks**
- **Definition:** Pass on link juice, enhancing the linked site's authority.
- **Impact on SEO:** Positively impacts SEO and search engine rankings.
**Nofollow Backlinks**
- **Definition:** Do not pass on link juice.
- **Impact on SEO:** Do not directly impact search engine rankings but can drive traffic and increase visibility.
**How to Identify Dofollow and Nofollow Links**
- **Inspecting the HTML Code:** Right-click on the link and select "Inspect" to view the HTML code. Dofollow links will appear as normal `<a href="URL">Link Text</a>` tags, while nofollow links will include the `rel="nofollow"` attribute.
## Domain Authority and Page Authority
**Domain Authority (DA)**
- **Explanation:** A metric developed by Moz to predict a website's ability to rank on search engines.
- **Importance:** A higher DA indicates a stronger potential for ranking.
**Page Authority (PA)**
- **Explanation:** Measures the strength of a single page.
- **Importance:** A higher PA indicates a better chance of ranking for that specific page.
**Using Moz Bar Chrome Extension**
- **Installation:** Download and install the Moz Bar from the Chrome Web Store.
- **Usage:** Once installed, activate the Moz Bar to view DA and PA scores for any website you visit. This tool is invaluable for assessing the strength of potential backlink sources.
## Link Building Strategies
Link building involves acquiring hyperlinks from other websites to your own. It’s a fundamental SEO strategy aimed at increasing a site's authority and visibility.
**Best Practices for Link Building**
- **Focus on Quality Over Quantity:** Aim for backlinks from high-authority sites within your niche.
- **Build Relationships:** Network with influencers and webmasters to create opportunities for backlinks.
- **Create High-Quality Content:** Produce content that is valuable, shareable, and link-worthy.
## Search Engine Submission
Submitting your website to search engines is an essential step in SEO.
**Benefits**
- **Indexing:** Ensures your site is indexed by search engines.
- **Visibility:** Increases the likelihood of appearing in search results.
**How to Submit Your Site**
- **Google Search Console:**
- Sign in to Google Search Console.
- Add your site by clicking "Add a property."
- Verify ownership by following the provided methods.
- Submit your sitemap under the "Sitemaps" section.
- **Bing Webmaster Tools:**
- Sign in to Bing Webmaster Tools.
- Add your site by clicking "Add a site."
- Verify ownership and submit your sitemap.
## Free Classified Submission
Free classified submission involves posting ads on classified sites to generate backlinks and traffic.
**Examples**
- **Craigslist**
- **Gumtree**
**How to Use It Effectively**
- **Create Compelling Ads:** Ensure your ads are detailed and include relevant keywords.
- **Regular Updates:** Update your ads regularly to keep them fresh and relevant.
- **Include Backlinks:** Embed links back to your website in your ad descriptions.
## Image Submission
Submitting images to high-quality websites can boost your SEO.
**Best Practices**
- **Use Relevant Keywords:** Include keywords in image titles, descriptions, and tags.
- **Ensure High Quality:** Use high-resolution images to attract more attention and shares.
**Popular Platforms**
- **Flickr**
- **Pinterest**
**How to Submit**
- **Create an Account:** Sign up on the platform.
- **Upload Images:** Follow the site’s guidelines for image uploads.
- **Optimize Descriptions:** Use keyword-rich descriptions and tags.
## PDF/PPT Submission
Sharing PDF and PPT files on document-sharing sites can enhance your backlink profile.
**Benefits**
- **Diversification:** Adds variety to your backlink sources.
- **Increased Reach:** Extends the reach of your content.
**Platforms to Use**
- **SlideShare**
- **Scribd**
**How to Submit**
- **Create an Account:** Sign up on the chosen platform.
- **Upload Documents:** Follow the site’s guidelines for document uploads.
- **Optimize Metadata:** Include keywords in titles and descriptions.
## Social Bookmarking
Social bookmarking involves saving and sharing web pages on social bookmarking sites.
**Benefits**
- **Traffic:** Drives traffic to your website.
- **Indexation:** Helps search engines index your content faster.
**Popular Platforms**
- **Reddit**
- **StumbleUpon**
**How to Use**
- **Sign Up:** Create an account on the platform.
- **Bookmark Your Content:** Save your web pages with relevant tags and descriptions.
- **Engage with the Community:** Participate in discussions to increase visibility.
## Forum Posting
Engaging in forums relevant to your niche can provide valuable backlinks and traffic.
**Best Practices**
- **Be Genuine:** Provide value in your posts and avoid spamming.
- **Include Backlinks Naturally:** Integrate links into your posts in a way that adds value to the discussion.
**How to Find Forums**
- **Search Online:** Use keywords related to your niche to find relevant forums.
- **Join Active Communities:** Participate in forums with high activity and engagement.
## Guest Posting
Guest posting involves writing articles for other websites, providing backlinks to your site.
**Benefits**
- **Authority:** Builds your authority in your niche.
- **Audience Reach:** Reaches new audiences and drives traffic to your site.
**How to Find Opportunities**
- **Search for Blogs in Your Niche:** Look for blogs that accept guest posts.
- **Offer High-Quality Content:** Pitch unique, high-quality articles to blog owners.
**Writing Tips**
- **Research the Blog’s Audience:** Tailor your content to fit the audience.
- **Include Backlinks Wisely:** Ensure your backlinks are relevant and add value.
## Press Release
Publishing press releases on reputable sites can improve your SEO and brand visibility.
**Benefits**
- **High-Quality Backlinks:** Generates backlinks from reputable sources.
- **Brand Credibility:** Enhances your brand’s credibility and visibility.
**How to Write a Press Release**
- **Keep it Newsworthy:** Focus on newsworthy content.
- **Include Keywords:** Use relevant keywords in the title and body.
- **Follow a Standard Format:** Use a clear, concise format with a strong headline.
**Submission Sites**
- **PR Newswire**
- **PRWeb**
## Infographic Submission
Creating and submitting infographics can attract backlinks from diverse sources.
**Benefits**
- **Visual Appeal:** Engages audiences visually and encourages sharing.
- **High Shareability:** Infographics are often shared widely, increasing backlink potential.
**How to Create Infographics**
- **Use Design Tools:** Tools like Canva and Piktochart can help you create professional infographics.
- **Include Data:** Use accurate and relevant data to support your points.
**Submission Sites**
- **Visual.ly**
- **Infographic Journal**
## Video Submission
Submitting videos to platforms like YouTube can drive traffic and improve SEO.
**Best Practices**
- **Optimize Titles and Descriptions:** Use keywords
in your video titles and descriptions.
- **High-Quality Content:** Ensure your videos are high quality and provide value.
**Popular Platforms**
- **YouTube**
- **Vimeo**
**How to Submit**
- **Create a Channel:** Set up your channel on the platform.
- **Upload Videos:** Follow the site’s guidelines for video uploads.
- **Promote Your Videos:** Share your videos on social media and other platforms for maximum reach.
## Conclusion
Starting an SEO journey involves understanding and implementing various backlink strategies. From dofollow and nofollow links to different submission techniques, each method contributes to a robust backlink profile. Embrace these strategies, and watch your website's authority and traffic grow.
| gohil1401 |
1,898,036 | Throttling in JS | Throttling Throttling in JavaScript is a technique used to control the rate at which a... | 0 | 2024-06-23T19:16:10 | https://dev.to/margish288/throttling-in-js-j44 | throttling, javascript, webdev, learndev | ## Throttling
-
**Throttling** in JavaScript is a technique used to control the rate at which a function is executed.
- This is especially useful in scenarios where a function could be called frequently, such as during **scroll events**, **window resizing**, or **handling user input** in real-time.
- By throttling a function, you ensure that it is not executed more often than a specified interval, thereby improving performance and responsiveness of the application.

Ok ok we will not go into too much of theories and stuff but just remember that throttling means **preventing a functional call which might called continuously**.
### Here's how throttling works...
- So we have to just wrap our function into custom function which is implemented on top of `setTimeout()`
### Have you ever work around closure concept of Javascript ?
if no then this might look confusing...
### Step 1 : Main body of our wrapper function.
```
function throttle(func, delay) {
return function (...args) {
return func(...args);
};
}
```
we will call this wrapper function like this :
```
function callApi() {
// here comes the api call logic or any work you want to throttle.
}
const throttledFunction = throttle(callApi, 3000) // 3 in seconds
```
### Step 2 : Now we add timer to this function ⏰
- In next step we will understand why we are getting this timestamp.
```
function throttle(func, delay) {
return function (...args) {
const now = new Date().getTime(); // getting the timestamps
return func(...args);
};
}
```
### Step 3 : Some logic
- In here we have something interesting. I mean how do we compare old time with new one ? we probably need some logic right ?
- so here the actual part which is helping the wrapper function to call our passed function only and only if certain delay is passed.
- We have to keep track of `lastCall` which is initially **0**.
- we update this lastCall every time with current time, in our case we set it with `now`.
```
function throttle(func, delay) {
let lastCall = 0;
return function (...args) {
const now = new Date().getTime(); // getting the timestamps
lastCall = now;
return func(...args);
};
}
```
### step 4 : Putting some condition which adds throttle behaviour
- we must have to compare our old time and new timer with passed `delay` time and if it is less than passed `delay` time then we have to return our function right away so out actual function will not call this time.
- it will wait for another time when this throttled function fires.
```
function throttle(func, delay) {
let lastCall = 0;
return function (...args) {
const now = new Date().getTime(); // getting the timestamps
if(now - lastCall < delay) {
return;
}
lastCall = now;
return func(...args);
};
}
```
**_NOTE_**: Sometime you might mess up around comparing timing in `if` statement so i would suggest you to wrap your subtraction line with `abs(now - lastCall) < delay` So we are on same line.
**TIP of the day 💡** : Its always good idea to write readable code so i would highly prefer you to give appropriate name and naming convention should be constant to your entire code base.
### Conclution 🍫 🍩
Yo buddy, **Congratulations ! 🎉** you just made your own throttling function. Go and use it in your code base.
Reach out to me if you have any queries. I would love to answer you questions.
| margish288 |
1,898,058 | Happy to be a part | Just Singned up and and excited to be part of this community. Im currently working as full stack web... | 0 | 2024-06-23T19:15:49 | https://dev.to/shadabfalak/happy-to-be-a-part-4bna | Just Singned up and and excited to be part of this community. Im currently working as full stack web developer and recently made website [77Links](https://77links.com/).
Im also interested in developing Mobile Apps. My colllegyes introduced me Flutter framework so im here to follow flutter giants to learn more this new technology.
Happy coding! | shadabfalak | |
1,898,057 | An Intro to blockchain | Blockchain Blockchain is a distributed immutable ledger that is completely transparent. Let's break... | 0 | 2024-06-23T19:14:48 | https://dev.to/arsh_the_coder/an-intro-to-blockchain-32h1 | **Blockchain**
Blockchain is a distributed immutable ledger that is completely transparent. Let's break that down. Distributed means it is not available on a single machine, hence not centrally controlled. Immutable refers to the fact that it can't be changed (we'll see that later). Ledger means it contains a record of all the transactions that have occurred.
Uses of blockchain technology are quite popular, like in cryptocurrencies, cross-border payments, etc. Let me show you something interesting.
**Types of blockchain: public & private**
Let me cut and paste ChatGPT's answer (because it's from ChatGPT that I learned it).
Public and private blockchains are two primary types of blockchain networks, each with distinct characteristics, advantages, and use cases. Here’s a detailed comparison:
**Public Blockchain**
Definition: A public blockchain is a decentralized network accessible to anyone who wants to participate. It is open-source and allows anyone to join the network, validate transactions, and maintain the shared ledger.
**Characteristics:**
Decentralization: No single entity controls the network.
Transparency: All transactions are visible to everyone on the network.
Security: High level of security due to consensus mechanisms like Proof of Work (PoW) or Proof of Stake (PoS).
Anonymity: Participants can remain pseudonymous.
Immutability: Once data is recorded, it cannot be altered.
**Examples:**
Bitcoin: The first and most well-known public blockchain.
Ethereum: A public blockchain that supports smart contracts and decentralized applications (dApps).
**Advantages:**
Trustless: No need for participants to trust each other or a central authority.
Open Access: Anyone can participate and contribute to the network.
Transparency: Enhances trust through visibility of all transactions.
**Disadvantages:**
Scalability: Typically slower transaction processing times and higher costs due to the need for widespread consensus.
Energy Consumption: High energy usage, especially with PoW consensus.
**Private Blockchain**
Definition: A private blockchain is a restricted network where access is limited to specific participants. It is typically used by organizations for internal purposes and is controlled by a single entity or consortium.
**Characteristics:**
Controlled Access: Only authorized participants can join the network and validate transactions.
Privacy: Transactions are visible only to authorized participants.
Centralization: Managed by one or a few organizations.
Efficiency: Faster transaction processing due to fewer nodes and simpler consensus mechanisms.
Governance: The controlling entity or consortium can alter the rules or data.
**Examples:**
Hyperledger Fabric: A modular framework for building private blockchain applications.
Corda: Designed for business use, focusing on privacy and scalability.
**Advantages:**
Scalability: Higher transaction speeds and lower costs.
Privacy: Greater control over who can see and validate transactions.
Customizability: Can be tailored to specific organizational needs.
Energy Efficiency: Lower energy consumption due to simpler consensus mechanisms.
**
Disadvantages:**
Trust: Requires trust in the central authority or consortium.
Limited Transparency: Not all transactions are visible to all participants, which may reduce trust.
Centralization: Potential single points of failure and control.
**Architecture**
Each block of blockchain has the following components:
- Block number
- Data
- Prev Hash
Prev Hash is the hash of the previous block. For the 1st block (also called the genesis block), the hash is 0. Using hash makes it immutable as if you change one block's data, its hash changes, thus making a new block, thus depicting data has been changed. The common hashing algorithm used is SHA256.
Using this blockchain technology, we can construct various things, one such is cryptocurrency like Bitcoin, Ethereum, Dogecoin, etc.
Now let us move to the world of cryptocurrency. Till now we have understood most of the cryptocurrency-backed tech. One thing to know is not each cryptocurrency is the same. For example, Bitcoin is not Turing complete while Ethereum is. Referring to Cointelegraph, "In computer science and blockchain technology, the term 'Turing completeness' describes a system's ability to carry out any computation that a Turing machine is capable of." In Bitcoin's world, the computation we cannot perform is a for loop. Seems intriguing? Well, the founder of Bitcoin thought that if we run a for loop, we can run into an infinite loop. However, the founder of Ethereum came up with a very genius idea stating that if each transaction costs some unit (called GAS), then we can terminate the operation if it costs more GAS than estimated. Thus each operation will be completed only if the GAS used for it is less than the GAS estimated.
**Good To Know**
GAS is not the same as Ether. GAS is a unit, its cost is in gwei which is in Ether.
The rate of GAS depends upon the transaction. (Refer to the opcode manual to know how much it costs for an operation).
And that's all folks. Stay tuned for more.
Signing off
Peace | arsh_the_coder | |
1,898,056 | code error need help plz | <!DOCTYPE html> My Website - Help Needed! <!-- Oops, forgot to... | 0 | 2024-06-23T19:13:02 | https://dev.to/shadabfalak/code-error-need-help-plz-2fdd | <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>My Website - Help Needed!</title>
<link rel="stylesheet" href="styles.css">
<script src="script.js"></script>
<!-- Oops, forgot to close the head tag -->
<body>
<header>
<h1>Welcome to My Website</h1>
<!-- Navbar with missing closing div tag -->
<nav>
<ul>
<li><a href="#home">Home</a></li>
<li><a href="#about">About</a></li>
<li><a href="#contact">Contact</a></li>
<!-- Missing closing li tag -->
</ul>
</header>
<main>
<section id="home">
<h2>Home Page</h2>
<p>Welcome to my website! Here's some content.</p>
<!-- Forgot to close the section tag -->
<section id="about">
<h2>About Page</h2>
<p>About me and what I do.</p>
</section>
<section id="contact">
<h2>Contact Page</h2>
<form action="submit.php" method="post">
<label for="name">Name:</label>
<input type="text" id="name" name="name">
<label for="email">Email:</label>
<input type="email" id="email" name="email">
<textarea id="message" name="message" rows="4" cols="50"></textarea>
<!-- Missing submit button -->
</form>
</section>
</main>
<footer>
<p>© 2024 My Website</p>
<!-- Missing closing footer tag -->
</body>
</html>
its not working one of its page is missing click [here](https://77links.com/) | shadabfalak | |
1,898,055 | 18 dicas para se destacar como engenheiro de software | Postado originalmente no Dev na Gringa Substack. Quer receber futuros artigos no seu e-mail? Assine... | 0 | 2024-06-23T19:12:51 | https://dev.to/lucasheriques/18-dicas-para-se-destacar-como-engenheiro-de-software-junior-4857 | braziliandevs, career, beginners | Postado originalmente no [Dev na Gringa Substack](https://devnagringa.substack.com/p/18-dicas-para-se-destacar-na-carreira?utm_source=devto). Quer receber futuros artigos no seu e-mail? [Assine gratuitamente aqui](https://devnagringa.substack.com/subscribe?utm_source=devto).
---
Nesse artigo, vou falar sobre dicas que você pode aplicar para se destacar no seu trabalho. Essas são coisas que eu aprendi ao longo dos anos e venho aplicando diariamente. Até então, tenho recebido bons feedback na empresa que eu entrei e completei 1 ano recentemente. Todos os itens que vou listar têm sido importantes para isso.
Caso esteja com pressa, recomendo apenas ler os títulos. E, se ficar alguma dúvida, sinta-se à vontade para deixar um comentário ou me perguntar diretamente.
---
Você acabou de entrar numa empresa nova. Essa empresa tem um plano de carreira onde você pode crescer para pleno, sênior, e às vezes até mais. Mas, quais são os principais pontos que você pode aplicar para crescer nessa direção?
Nas minhas primeiras oportunidades, eu sempre pensei que era sobre programar mais, ter conhecimento profundo em tecnologias, e resolver problemas de maneira otimizada.
Esses itens são importantes, mas as habilidades técnicas (_hard skills_) não são o único item relevante. Especialmente numa empresa grande.
[Engenharia de software é uma atividade social](https://lucasfaria.dev/bytes/social-side-of-software-engineering). Mesmo que você seja o único programador de uma empresa, você ainda vai programar para os seus usuários. Você precisa entendê-los, saber o que os motiva, e como você pode ajudar.
Dito isso, vamos para as dicas.

## 1. Foque em uma única linguagem, um projeto, uma stack
Você terá muitas oportunidades de aprender novas tecnologias no futuro. A computação está sempre mudando. Mas, no começo, **profundidade é melhor que amplitude.**
Saiba muito bem uma linguagem e todas as suas peculiaridades, antes de partir para outra.
Quando buscar outra linguagem, aprenda-a porque ela resolve melhor algum problema específico que você tem. Exemplo:
- Desenvolvimento de _web services_ para alta performance. **Go** pode ser uma boa escolha.
- Implementação de um sistema de recomendação. **Python** provavelmente irá ganhar aqui.
- Criação de interfaces interativas e/ou visualização de dados. **JavaScript**, sendo a linguagem padrão para navegadores, irá cair bem.
## 2. Aprenda a ler código muito bem
Engenheiros sênior entendem que códigos são muito mais vezes lidos, movidos, copiados e deletados do que escritos. Portanto, você precisa ser bom em ler código. Você simplesmente fará isso muito mais vezes na sua carreira.
O que fazer então? **Faça code reviews.** Não apenas entregue suas tarefas. Saiba o que outras pessoas do seu time estão fazendo.
## 3. Teve algum incidente em produção? Participe, mesmo que não saiba como resolver.
Familiarize-se com as ferramentas de debugging que o seu time usa. Só de estar junto com as pessoas e entender o seu raciocínio para depurar os sistemas é aprendizado enorme.
Saiba usar as ferramentas de observabilidade que sua empresa usa. Exemplos:
- Performance e monitoramento, como Honeycomb, DataDog, New Relic.
- Replay de sessões de usuário, como PostHog e LogRocket.
## 4. Mantenha sempre todos os envolvidos atualizados do seu progresso nas suas iniciativas
Se o seu gestor e/ou o gerente de produto precisam te perguntar o status de alguma tarefa sua, isso quer dizer que você pode fazer isso de maneira melhor.
Faça o seu trabalho em público. Poste updates regulares no canal do seu projeto e/ou do seu time. Deixe que toda informação fique disponível para todos. Isso vai deixar apenas o seu trabalho mais fácil. E te ajudar a construir visibilidade sobre o que você faz.
## 5. **Terminou um projeto? Mande num canal público pra todo mundo saber.**
Explique porque a _feature_ que você fez é importante. Qual problema ela resolveu. Quais serão os planos para o futuro.
Se possível, inclua métricas e dados. Pense em como você pode medir se os usuários estão usando ela. Qual foi o impacto que ela teve.
Exemplos de como escrever isso:
- Implementei uma otimização na nossa busca que aumentou a conversão final dos clientes em 20%.
- Adicionei um novo índice no banco de dados que diminuiu o tempo de resposta médio da API em 30%.
Números deixam qualquer update mais relevante e fácil de ser digerido.
## 6. Coordene com o seu gestor para ele te dar a liderança de projetos de escopo pequeno
Imagine que você é uma API e seu gestor é o cliente dela.
Seu objetivo é ser uma API otimizada e confiável. Onde seu gestor pode fazer a requisição para você, e você entregar o que ele busca.
Isso é uma relação baseada primariamente em confiança.
Como aumentar essa confiança?
Pegue tarefas maiores. Aos poucos, peça oportunidades para liderar projetos.
## 7. Se você não souber algo, tente resolver inicialmente
Não tenha medo de mexer em código que você não conhece. Falamos antes: ler código é uma habilidade importante.
## 8. Cole no sênior ou _tech lead_ do seu time
Provavelmente são pessoas ocupadas. E há chance de que elas queiram fazer algo, mas não tem tempo. Exemplos:
1. Atualizar instruções de _on-call_.
2. Escrever documentos sobre a arquitetura.
3. Melhorar processos de _onboarding_.
Se mostre como alguém que pode ajudar a completar esses objetivos.
## 9. Externalize todo o conhecimento que você adquiriu ao longo do tempo
Especialmente quando você estiver entrando. Anote tudo o que você queria saber, como descobriu e por que é importante. Anexe isso nos documentos de _onboarding_ do seu time.
## 10. Seja amigo do seu designer e product manager
Entenda a visão de mundo deles. E qual a missão do seu produto no longo prazo. Participe de conversas com clientes, se possível.
Está gostando do conteúdo até agora? Considere se inscrever para receber futuros artigos no seu e-mail.
## 11. Expresse gratidão (_kudos_)
Faça isso sem esperar um retorno. Você só tem a ganhar com isso. E não apenas você: a maioria das pessoas gosta de se sentir apreciadas pelo seu trabalho.
Agradeça publicamente, de preferência.
Isso é bom não apenas para você, mas para a cultura da sua empresa.
## 12. Tenha opiniões. Questione as coisas se você não concordar
Isso faz parte do processo de aprendizado.
Mas, depois que uma decisão foi tomada, aceite. Mesmo que você discorde.
E faça o seu melhor para que seja uma decisão acertada, também.
[Pratique o _disagree and commit._](https://en.wikipedia.org/wiki/Disagree_and_commit)
## 13. Escreva um [brag document](https://jvns.ca/blog/brag-documents/)
Ao longo do ano, você vai esquecer o que você já fez.
O _brag document_ serve para você conseguir, facilmente, se lembrar.
Especialmente importante para momentos de promoções e _feedback_.
É basicamente a munição que o seu gestor precisa para mostrar que você merece ser promovido.
## 14. Faça reflexões com si mesmo
Procure aprender uma coisa nova toda semana, por menor que seja.
E, mensalmente, escreva algum impacto que você teve.
Se, após um mês, você não tiver nada para escrever, é um sinal de que algo precisa mudar, e rápido.
## 15. Escreva de forma clara e concisa
Use palavras simples e frases curtas.
A capacidade de explicar as coisas de maneira rápida e concisa é muito mais importante do que um texto no estilo redação do ENEM.
As pessoas são ocupadas. Otimize seu texto para que possam ler ele rapidamente.
## 16. Seja específico quando pedir por feedback
_Feedback_ é muito importantes para um crescimento acelerado.
Porém, muitas pessoas pedem *feedback* de forma genérica. Um exemplo: "Você pode me falar se tem algo que eu poderia fazer melhor?".
Se você acabou de entregar um projeto, pergunte no lugar: "como foi minha comunicação nesta última tarefa? Existe algo que você gostaria que eu fizesse diferente?".
A verdade é que é muito difícil dar um conselho genérico, pois o seu gestor também não vai lembrar de tudo que você fez.
Ser específico facilita o trabalho dele de lhe dar um bom *feedback*.
## 17. Não fique na defensiva ao receber _feedback_ construtivo
Receber _feedback_ é algo que pode ser raro. Feedback construtivo pode ser ainda mais.
Portanto, não tente justificar logo de cara se alguém lhe dizer algo que não concorde.
Procure entender o ponto de vista da pessoa.
Discuta o _feedback_ de maneira receptiva. Obtenha clareza do porquê a pessoa lhe disse algo.
## 18. Não tenha medo de errar
Essa é uma das mais importantes.
Você vai puxar um código que vai quebrar produção eventualmente.
**Não tem problema.** Aprenda com isso. E ajude para que outras pessoas no futuro não cometam o mesmo erro.
—
Espero que tenha gostado da edição dessa semana do Dev na Gringa.
Recentemente, comecei a construir uma comunidade aqui para o Dev na Gringa.
O objetivo é ter um espaço seguro onde possamos compartilhar ideias, aprendizados e conhecimentos.
Semana passada, tivemos nosso primeiro encontro para prática de conversação em inglês. Provavelmente vamos repetir isso de novo essa semana.
Se tiver interesse em participar, ou apenas de conversar sobre qualquer assunto, [entre em nosso servidor no Discord](https://discord.gg/n9m9ebfEnk). | lucasheriques |
1,898,054 | Marquage vehicule Essonne | Le marquage des véhicules est une technique publicitaire de plus en plus prisée par les entreprises... | 0 | 2024-06-23T19:12:31 | https://dev.to/esrcompany08/marquage-vehicule-essonne-3nfc | Le marquage des véhicules est une technique publicitaire de plus en plus prisée par les entreprises cherchant à accroître leur visibilité et à renforcer leur image de marque. À Paris et en Essonne, cette méthode offre une opportunité unique de toucher un large public grâce à une publicité mobile. Cet article explore les différents aspects du marquage des véhicules, ses avantages, et les spécificités du marquage publicitaire dans la région parisienne, avec un focus particulier sur l'agence ESR spécialisée dans ce domaine.
Le Marquage des Véhicules : Un Moyen Efficace de Se Faire Connaître
Le marquage des véhicules consiste à apposer des visuels, des logos, des slogans, ou des informations de contact sur la carrosserie d'un véhicule. Cette technique présente plusieurs avantages pour les entreprises :
Visibilité accrue : Les véhicules marqués circulent dans différentes zones, exposant ainsi la marque à un large public. Chaque déplacement devient une opportunité de publicité.
Publicité mobile : Contrairement aux panneaux publicitaires fixes, les véhicules marqués se déplacent et peuvent atteindre des zones variées et diversifiées.
**_[Marquage vehicule Essonne](https://esr5.com)_**
Coût-efficacité : Le marquage des véhicules est généralement moins coûteux que d'autres formes de publicité à long terme, comme les spots télévisés ou les annonces en ligne.
Renforcement de la marque : Un véhicule bien marqué renforce la notoriété de la marque et donne une image professionnelle et sérieuse de l'entreprise.
Le Marquage des Véhicules à Paris
À Paris, la densité de population et le flux constant de circulation rendent le marquage des véhicules particulièrement efficace. Voici quelques éléments à prendre en compte pour le marquage des véhicules dans la capitale :
Conception et design : Dans une ville aussi visuellement saturée que Paris, il est crucial de créer un design accrocheur et mémorable. Utiliser des couleurs vives, des polices lisibles et des images percutantes peut faire toute la différence.
Réglementations locales : Paris a des réglementations strictes concernant la publicité et l'affichage. Il est essentiel de se conformer aux lois locales pour éviter des amendes ou des sanctions.
Zones de forte affluence : Cibler les zones à forte circulation, comme les quartiers commerciaux, les zones touristiques et les grands axes routiers, permet de maximiser l'impact de la publicité mobile.
Le Marquage des Véhicules en Essonne
L'Essonne, située au sud de Paris, offre un contexte différent pour le marquage des véhicules. Moins dense que la capitale, le département présente néanmoins de nombreuses opportunités pour les entreprises locales.
Ciblage local : Le marquage des véhicules en Essonne permet de toucher une clientèle locale de manière efficace. Les petites et moyennes entreprises peuvent ainsi accroître leur notoriété dans leur zone de chalandise.
Zones rurales et urbaines : L'Essonne combine des zones rurales et urbaines. Adapter le design et le message en fonction de ces environnements peut augmenter l'efficacité de la publicité.
Réseaux de transport : Utiliser les principaux axes de transport de l'Essonne, tels que les routes départementales et les gares, permet de maximiser la visibilité du marquage.
Le Flocage des Véhicules : Une Technique Prisée
Le flocage des véhicules, une forme de marquage, est particulièrement populaire. Il s'agit d'apposer un film adhésif sur la carrosserie du véhicule. Voici quelques avantages spécifiques au flocage :
Personnalisation : Le flocage permet une grande flexibilité en termes de design et de personnalisation. Les entreprises peuvent choisir des visuels qui reflètent leur identité de marque.
Protection de la carrosserie : En plus d'être esthétique, le film de flocage protège la carrosserie contre les rayures et les intempéries.
Facilité de retrait : Contrairement à la peinture, le flocage peut être retiré sans endommager la surface du véhicule, ce qui est idéal pour les campagnes publicitaires temporaires.
Le Flocage des Véhicules à Paris
Paris, avec son environnement urbain dense, est un lieu idéal pour le flocage des véhicules. Le flocage permet aux entreprises de se démarquer dans un paysage visuel souvent saturé. Les zones touristiques, les quartiers d'affaires et les grandes artères sont des lieux privilégiés pour maximiser l'impact du flocage.
L'Agence ESR : Spécialiste du Marquage Publicitaire à Paris
L'agence ESR est une référence en matière de marquage publicitaire à Paris. Voici ce qui distingue cette agence :
Expertise : Avec une équipe de designers et de techniciens expérimentés, ESR garantit des résultats de haute qualité.
Technologie avancée : L'agence utilise des matériaux et des techniques de pointe pour assurer la durabilité et l'attrait visuel du marquage.
Service personnalisé : ESR offre un service sur mesure, adapté aux besoins spécifiques de chaque client, qu'il s'agisse de grandes entreprises ou de petites structures locales.
Marquage Publicitaire à Paris : L'Impact de l'Image de Marque
Le marquage publicitaire contribue significativement à l'image de marque d'une entreprise. Voici comment :
Cohérence visuelle : Un design cohérent avec les autres supports de communication de l'entreprise renforce la reconnaissance de la marque.
Professionnalisme : Un véhicule marqué professionnellement donne une impression de sérieux et de fiabilité, ce qui peut attirer de nouveaux clients.
Différenciation : Dans un marché concurrentiel, un marquage distinctif aide à se démarquer des concurrents et à rester dans l'esprit des consommateurs.
Engagement environnemental : Utiliser des véhicules électriques ou hybrides pour le marquage peut également renforcer l'image d'une entreprise soucieuse de l'environnement.
Conclusion
Le marquage des véhicules est une stratégie publicitaire puissante pour les entreprises de toutes tailles à Paris et en Essonne. Il offre une visibilité accrue, renforce l'image de marque et permet de toucher un large public de manière coût-efficace. En s'appuyant sur des designs accrocheurs, une application professionnelle et une connaissance des réglementations locales, les entreprises peuvent maximiser l'impact de leur publicité mobile et se démarquer dans un marché concurrentiel. L'agence ESR, avec sa gamme de solutions de marquage de haute qualité, est un partenaire idéal pour toutes les entreprises souhaitant exploiter le potentiel du marquage des véhicules. | esrcompany08 | |
1,898,053 | Magical Stories for All: Journey into Fantasy with Our StoryTeller Bot! | This is a submission for Twilio Challenge v24.06.12 Ever wanted to go back to your childhood, when... | 0 | 2024-06-23T19:11:43 | https://dev.to/shubrah_gupta_107/get-enchanted-by-the-good-old-storyteller-5g6h | devchallenge, twiliochallenge, ai, twilio | *This is a submission for [Twilio Challenge v24.06.12](https://dev.to/challenges/twilio)*
Ever wanted to go back to your childhood, when grandma told you stories about animals and birds, adding her own little twists while you sat in wonder about what would happen next? Ever felt bored and wished someone would read you magical stories that could transport you to another realm? Well, we've got just the thing for you!
## What We Built
We(@khemraj_bawaskar_f283a984 and I) have created a storyteller bot for WhatsApp that reads stories to you over a call. Simply provide the genre and maturity level, and our bot will deliver an engaging storytelling experience tailored to your preferences.
## Demo
{% embed https://youtu.be/s7uwEDdPI9k %}
## Twilio and AI
We have leveraged the WhatsApp Sandbox feature of Twilio to create a storyteller bot that uses a webhook link for a Flask server. By integrating Azure OpenAI LLM for advanced AI capabilities and Twilio's voice call feature, our bot can call users and tell them a story in their desired genre.
We have a command '/story' to generate the response from the bot.
To get into the sandbox, follow this:

This link can help you get started with the Twilio Whatsapp sandbox: [https://www.twilio.com/docs/whatsapp/sandbox](https://www.twilio.com/docs/whatsapp/sandbox)
When the sandbox starts, the user can get started with '/start' command, which throws the following message:
*Hello there! I'm your Storyteller Bot, here to whisk you away on incredible adventures. Whether you seek epic fantasies, heartwarming tales, or thrilling mysteries, I have a story for every mood and moment.*
*Please use '/story' tag followed with the genre and age group for which you want to hear the story.*
*Following template can be used:*
*'/story tell me a romantic story for age group of 24-26 years*
The user can accordingly use the commands and get responses.
**/story:** Generates a story using Azure OpenAI from the genre and the maturity level provided by the user, and calls the user to read them a beautiful story.
Checkout our Github repo: [StoryTellerBot repository]({% embed https://github.com/shubrahgupta/story-twilio %})
## Additional Prize Categories
<!-- Does your submission qualify for any additional prize categories (Twilio Times Two, Impactful Innovators, Entertaining Endeavors)? Please list all that apply. -->
Our team believes that our submission of the fitness bot qualifies for two of the categories:
1. **Twilio Times Two:** We have utilized two APIs provided by Twilio: the voice calling API and the WhatsApp Sandbox messaging API.
2. **Entertaining Endeavors:** This project offers a wonderfully entertaining experience that can calm, soothe, or entertain users with their requested stories. Children often enjoy listening to stories while eating or sleeping, making this a great tool for them, akin to having their grandparents read to them. It is also beneficial for grown-ups who desire stories that are intense, thrilling, or chilling.
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image (if you want). -->
| shubrah_gupta_107 |
1,898,052 | Your personal fitness trainer at your service! | This is a submission for Twilio Challenge v24.06.12 I used to be hesitant about fitness, worrying... | 0 | 2024-06-23T19:11:41 | https://dev.to/shubrah_gupta_107/your-personal-fitness-trainer-at-your-service-4144 | devchallenge, twiliochallenge, ai, twilio | *This is a submission for [Twilio Challenge v24.06.12](https://dev.to/challenges/twilio)*
I used to be hesitant about fitness, worrying about how much effort it would take and what might happen if I did an exercise wrong. I thought it would be too much work, but then I just went for it. Guess what? It's awesome! Fitness is the best way to prolong your life, live to your maximum potential, and keep yourself free from illness. And we've created just the thing you need to get started!
## What We Built
We(@khemraj_bawaskar_f283a984 and I) have developed a fitness bot on Whatsapp designed to support and enhance your fitness journey. Our fitness bot offers motivational fitness quotes, assists in planning your workouts and diet, and schedules reminders to ensure you stay hydrated and eat your meals on time. You can also ask the bot any fitness-related queries, and it will provide the best possible answers and advice.
## Demo
{% embed https://youtu.be/BWltYXFmZmo %}
## Twilio and AI
We have leveraged the WhatsApp Sandbox feature of Twilio to create a bot that utilizes a webhook link for a Flask server. By integrating Azure OpenAI LLM for advanced AI capabilities and Twilio's voice call feature, our bot can call users to remind them about their scheduled reminders.
We have certain commands such as '/tip' or '/dietplan' or '/workoutplan' or '/reminder' or '/query' to generate the response from the bot.
To get into the sandbox, follow this:

This link can help you get started with the Twilio Whatsapp sandbox: [https://www.twilio.com/docs/whatsapp/sandbox](https://www.twilio.com/docs/whatsapp/sandbox)
When the sandbox starts, the user can get started with '/start' command, which throws the following message:
*Hi, start your wellness journey now. please use '/tip' or '/dietplan' or '/workoutplan' or '/reminder' or '/query' tag along with the information needed.*
*These templates can be used:*
*'/dietplan weight: 50Kg, height: 5 feet, purpose: muscle-enhancement, non-veg food'*
*'/workoutplan weight: 50Kg, height: 5 feet, purpose: leg-muscles-enhancement, exercise mode: mid'*
*'/query I am unable to feel my back-muscle while doing lat-pull downs. What should I do to improve?'*
*'/reminder Please set an call reminder for lunch at 2 PM on 25/06/2024'*
*'/tip'*
The user can accordingly use the commands and get responses.
1. **/tip:** Provides motivational fitness/workout quotes to keep you inspired.
2. **/reminder:** Helps to schedule a reminder with a time and description, and a voice call comes at the scheduled time to remind the user.
3. **/query:** The bot answers the query with the best possible suggestions.
4. **/workoutplan:** The bot creates a workout plan for the user for a day according to the given weight, height, purpose of the workout, and intensity of the exercise mode.
5. **/dietplan:** The bot creates a diet plan for the user for a day according to the given weight, height, purpose of the diet(for bulking/cutting/normal muscle growth), and preference of the food(veg/non-veg).
Checkout our Github repo: [FitnessBot repository](https://github.com/shubrahgupta/fitness-bot)
## Additional Prize Categories
<!-- Does your submission qualify for any additional prize categories (Twilio Times Two, Impactful Innovators, Entertaining Endeavors)? Please list all that apply. -->
Our team believes that our submission of the fitness bot qualifies for two of the categories:
1. **Twilio Times Two:** We have utilized two APIs provided by Twilio: the voice calling API and the WhatsApp Sandbox messaging API.
2. **Impactful Innovators:** We believe this project can drive a positive impact on society by promoting fitness. Beginners can overcome their nervousness by asking the bot relevant questions, helping them become more fit and start their fitness journey with confidence.
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image (if you want). -->
| shubrah_gupta_107 |
1,898,050 | Tired of Messy Code?🥴 | Want to Make Your Code Look Pretty and Well-Organized? Try the 𝗣𝗿𝗲𝘁𝘁𝗶𝗲𝗿 extension in VS Code!💡 𝗪𝗵𝗮𝘁... | 0 | 2024-06-23T19:10:56 | https://dev.to/aurnab990/tired-of-messy-code-36h0 | vscode, prettier, vscodextention, vscodesettings | Want to Make Your Code Look Pretty and Well-Organized?
Try the 𝗣𝗿𝗲𝘁𝘁𝗶𝗲𝗿 extension in VS Code!💡
𝗪𝗵𝗮𝘁 𝗶𝘀 𝗣𝗿𝗲𝘁𝘁𝗶𝗲𝗿?
↳ Prettier is a tool that makes your code look neat and consistent.
↳ It works by checking your code and fixing its style according to set rules.
↳ Prettier supports many programming languages and works with most code editors, including Visual Studio Code (VS Code).
↳ By using Prettier, your code will always be clean, easy to read, and free of style issues.
𝗛𝗼𝘄 𝘁𝗼 𝗜𝗻𝘀𝘁𝗮𝗹𝗹 𝗣𝗿𝗲𝘁𝘁𝗶𝗲𝗿 𝗶𝗻 𝗩𝗦 𝗖𝗼𝗱𝗲:
1️⃣ Open 𝗩𝗦 𝗖𝗼𝗱𝗲.
2️⃣ Go to the 𝗘𝘅𝘁𝗲𝗻𝘀𝗶𝗼𝗻𝘀 𝘃𝗶𝗲𝘄 by clicking on the Extensions icon in the Activity Bar on the side of the window or by pressing 𝗖𝘁𝗿𝗹+𝗦𝗵𝗶𝗳𝘁+𝗫.
3️⃣ Search for "𝗣𝗿𝗲𝘁𝘁𝗶𝗲𝗿 - 𝗖𝗼𝗱𝗲 𝗳𝗼𝗿𝗺𝗮𝘁𝘁𝗲𝗿" in the search bar.
4️⃣ Select the first one which appears in the search results, Click on the 𝗶𝗻𝘀𝘁𝗮𝗹𝗹 𝗯𝘂𝘁𝘁𝗼𝗻.
5️⃣ Once installed, you can 𝗰𝗹𝗼𝘀𝗲 𝘁𝗵𝗲 𝗘𝘅𝘁𝗲𝗻𝘀𝗶𝗼𝗻𝘀 𝘃𝗶𝗲𝘄.
𝗛𝗼𝘄 𝘁𝗼 𝗘𝗻𝗮𝗯𝗹𝗲 𝗔𝘂𝘁𝗼 𝗦𝗮𝘃𝗲 𝗮𝗻𝗱 𝗔𝘂𝘁𝗼 𝗙𝗼𝗿𝗺𝗮𝘁 𝘄𝗶𝘁𝗵 𝗣𝗿𝗲𝘁𝘁𝗶𝗲𝗿:
1️⃣ Open the Command Palette by pressing 𝗖𝘁𝗿𝗹+𝗦𝗵𝗶𝗳𝘁+𝗣.
2️⃣ Type and select Preferences: 𝗢𝗽𝗲𝗻 𝗦𝗲𝘁𝘁𝗶𝗻𝗴𝘀 (𝗝𝗦𝗢𝗡).
3️⃣ Add the following settings to your 𝘀𝗲𝘁𝘁𝗶𝗻𝗴𝘀.𝗷𝘀𝗼𝗻 file:
{
"editor.formatOnSave": true,
"editor.defaultFormatter":
"esbenp.prettier-vscode"
}
4️⃣ Save the 𝘀𝗲𝘁𝘁𝗶𝗻𝗴𝘀.𝗷𝘀𝗼𝗻 file.
Now, every time you save your any file in VS Code, Prettier will automatically format your code according to its rules.✨ | aurnab990 |
1,892,028 | Understanding Dispose and Garbage Collection in .NET 🗑️ | Dispose vs Close | Garbage... | 0 | 2024-06-23T19:06:53 | https://dev.to/ipazooki/understanding-dispose-and-garbage-collection-in-net-3ach | dotnet, csharp, gc, tutorial | {% embed https://youtu.be/HnvOllctapI?si=XwXVtCbp1ers_2lc %}
## Introduction
Hello, tech enthusiasts! 🌟 Today, we're diving into the fascinating world of memory management in .NET, focusing on the `Dispose` method and the Garbage Collector (GC). These concepts are crucial for ensuring your applications run efficiently without wasting precious resources. Let's explore how .NET manages memory and what we can do to keep our applications running smoothly. 🚀
## The Life Cycle of a Variable
Imagine you've created a string variable called `name` with an initial value of "Jack" inside a method like this:
```csharp
var name = "Jack";
```
In the stack, a variable is created holding the address of the actual data stored in the heap. When the method completes, the stack variable is removed, freeing up its space. However, the heap space remains occupied because the reference in the stack is now null. It's the GC's job to identify and release such orphaned heap spaces.
## Dispose vs. Close
Before diving deeper into GC, let's clarify the difference between `Dispose` and `Close`. Consider an instance of `DBContext` running a query. Before the query executes, a connection to the database opens. Once the query completes, the connection closes. This process opens and closes a connection, not an object.
On the other hand, disposing of an object nullifies the object and all its dependent connections. This means `Dispose` removes both the object and its dependencies, while `Close` only terminates a single connection.
## The Role of Garbage Collection
So, how does the GC decide which objects to remove? The key condition is the object root. There should be no reference between objects in the heap and stack. When there's no reference, the object is orphaned and becomes a candidate for GC.
One way to call the `Dispose` method is using the `try-finally` block in C#:
```csharp
try
{
// Use object
}
finally
{
// Call dispose method
}
```
However, this approach can be cumbersome. To streamline this, .NET introduced the `using` statement, which automatically disposes of the used object, making it a candidate for GC.
```csharp
using (var resource = new Resource())
{
// Use resource
}
// Automatically calls Dispose when exiting the using block
```
## When Does GC Start Working?
GC isn't a scheduled process; it works based on the operating system and its internal thresholds. It constantly checks the heap allocation and, when necessary, starts releasing heap spaces.
### GC Operations: Mark, Compact, Sweep
1. **Mark**: The GC traverses the object graph, marking accessible objects.
2. **Compact**: It compacts the heap by shifting live objects together, optimizing memory usage.
3. **Sweep**: Finally, it cleans up the memory occupied by dead objects.
## Generations in GC
GC divides memory into three generations:
- **Generation 0**: For short-lived objects, collected frequently.
- **Generation 1**: Acts as a buffer between Gen 0 and Gen 2, collected less frequently.
- **Generation 2**: For long-lived objects, collected infrequently to avoid performance overhead. Large objects are placed in the Large Object Heap (LOH) directly into Gen 2.
You can force GC to collect using `GC.Collect();` with an overload method to specify the generation.
## Object Resurrection
Object resurrection refers to reviving an object about to be collected. This is done in the finalizer method, making the object accessible again by assigning its reference to a global variable or another live object. However, this practice is discouraged due to its complexity and unpredictability. It's better to use patterns like `IDisposable` for managing resources.
## Summary
In summary, understanding `Dispose`, `Close` and GC in .NET is essential for efficient memory management. By using these tools wisely, you can ensure your applications run smoothly without unnecessary memory bloat.
What are your thoughts on .NET's memory management? Have you encountered any challenges or have tips to share? Drop your comments below! 👇 Let's get the conversation going! 💬 | ipazooki |
1,897,978 | Dynamic Programming, Design and Analysis of Algorithms | Dynamic Programming Dynamic programming (DP) is a method used in computer science and... | 0 | 2024-06-23T19:05:48 | https://dev.to/harshm03/dynamic-programming-design-and-analysis-of-algorithms-2pf2 | coding, interview, algorithms, dsa | ## Dynamic Programming
Dynamic programming (DP) is a method used in computer science and mathematics to solve complex problems by breaking them down into simpler subproblems. It involves solving each subproblem just once and storing their solutions – typically using a data structure like an array or a table – to avoid redundant work. The essence of DP is to use previously computed results to build up the solution to the overall problem efficiently. This approach is particularly effective for optimization problems and problems with overlapping subproblems and optimal substructure properties.
### Key Concepts
1. **Optimal Substructure**: This property means that the solution to a problem can be composed of optimal solutions to its subproblems. If a problem exhibits optimal substructure, it can be solved recursively by combining the solutions to its smaller instances.
2. **Overlapping Subproblems**: This property indicates that a problem can be broken down into subproblems that are reused multiple times. Instead of solving the same subproblem repeatedly, dynamic programming stores the results of these subproblems, often in a table, to avoid redundant computations and improve efficiency.
3. **Sequence of Decisions**: Many dynamic programming problems involve making a sequence of decisions to achieve an optimal solution. Each decision depends on previous decisions and affects future ones. Dynamic programming helps in systematically exploring all possible sequences of decisions and choosing the one that leads to the optimal outcome. Examples include determining the optimal way to cut a rod to maximize profit or deciding the optimal order to multiply matrices to minimize the number of operations.
4. **Optimization Problems**: These are problems that seek the best solution among many possible options. Dynamic programming is particularly useful for solving optimization problems, such as finding the shortest path in a graph, the maximum profit in a knapsack problem, or the minimum cost path in a grid. By breaking down the problem into smaller subproblems and solving each one optimally, dynamic programming ensures that the overall solution is optimal.
### Comparison with Other Problem-Solving Techniques
1. **Brute Force**:
- **Approach**: Brute force involves trying all possible solutions to find the best one. It is straightforward but often inefficient, especially for large problem instances.
- **Efficiency**: Typically has exponential time complexity due to the exhaustive search of all possibilities.
- **Use Case**: Suitable for small problems where the number of possible solutions is manageable.
2. **Divide and Conquer**:
- **Approach**: This technique involves dividing a problem into smaller, independent subproblems, solving each subproblem recursively, and then combining their solutions to solve the original problem.
- **Efficiency**: More efficient than brute force for many problems but can still be suboptimal if subproblems overlap and redundant calculations are performed.
- **Use Case**: Effective for problems like merge sort and quick sort, where subproblems do not overlap.
3. **Greedy Algorithms**:
- **Approach**: Greedy algorithms make a series of choices, each of which looks the best at the moment, with the hope of finding a global optimum.
- **Efficiency**: Often very efficient with linear or logarithmic time complexity but does not always produce the optimal solution for all problems.
- **Use Case**: Works well for problems like the fractional knapsack problem, Huffman coding, and certain graph problems like Dijkstra's shortest path algorithm.
### Techniques in Dynamic Programming
Dynamic Programming (DP) employs two primary techniques to efficiently solve complex problems by breaking them down into smaller subproblems and storing their solutions:
1. **Memoization (Top-Down Approach)**: Memoization is a technique that optimizes recursive algorithms by storing the results of expensive function calls and reusing them when the same inputs occur again. This approach is particularly effective in problems where the same subproblems are solved multiple times. By caching the results, memoization reduces redundant computations, significantly improving the overall efficiency of the algorithm. For example, in the computation of Fibonacci numbers using memoization, each Fibonacci number is computed only once and stored, ensuring subsequent calls for the same number retrieve the result from memory rather than recalculating it.
2. **Tabulation (Bottom-Up Approach)**: Tabulation involves solving the problem iteratively by building up solutions to subproblems in a table, such as an array or matrix. Unlike memoization, which uses recursion, tabulation starts with the smallest subproblems and systematically computes solutions for larger subproblems based on previously computed values. This method ensures that all subproblems are solved in a predefined order, typically from the base cases up to the final solution. A classic example of tabulation is computing Fibonacci numbers iteratively using an array to store intermediate results, which allows each Fibonacci number to be calculated based on its preceding values stored in the array.
These techniques form the backbone of dynamic programming and are selected based on the problem's characteristics, constraints, and optimization requirements. Memoization is particularly useful for problems with overlapping subproblems where recursion can be utilized effectively, while tabulation excels when all subproblems need to be solved and stored iteratively. Understanding these techniques enables developers to apply dynamic programming effectively to a wide range of computational problems, from optimizing recursive algorithms to solving complex optimization problems efficiently.
### Steps to Solve a Dynamic Programming Problem
Dynamic Programming (DP) is a methodical approach to solving complex problems by breaking them down into smaller, manageable subproblems. Here are the steps typically followed to solve a DP problem:
1. **Identify if it's a DP problem**:
Determining if a problem can benefit from DP involves recognizing patterns of overlapping subproblems and optimal substructure. Overlapping subproblems mean that the solution to a problem relies on solutions to the same subproblem multiple times, while optimal substructure ensures that an optimal solution to the problem can be constructed from optimal solutions to its subproblems.
2. **Define the state**:
Defining the state involves identifying the variables that represent the current state of the problem. These variables encapsulate the information necessary to solve a subproblem and move towards the solution of the larger problem. For example, in problems involving sequences, the state might include the current index or position in the sequence.
3. **Formulate a recurrence relation**:
The recurrence relation defines the relationship between the solution to the larger problem and its subproblems. It expresses how the optimal solution of the current state depends on the solutions of smaller related subproblems. This recursive formulation is crucial in dynamic programming as it provides a roadmap for solving larger instances of the problem by solving smaller instances.
4. **Identify the base cases**:
Base cases are the simplest subproblems that can be solved directly without further decomposition. They provide the starting points for the recurrence relation and serve as termination conditions for recursive processes. Base cases are essential for initiating the solution process and building up towards solving larger instances of the problem.
5. **Choose the approach (Memoization or Tabulation)**:
Depending on the problem characteristics and constraints, decide between memoization (top-down) or tabulation (bottom-up):
- **Memoization**: Involves recursive calls where results of subproblems are stored (memoized) to avoid redundant computations.
- **Tabulation**: Involves iterative computation where solutions to subproblems are stored in a table (usually an array or matrix) and built up from smaller to larger subproblems.
6. **Implement the solution**:
Implement the chosen approach based on the recurrence relation and base cases identified. Ensure the implementation correctly computes solutions using the selected method (memoization or tabulation) and handles edge cases effectively.
7. **Optimize the solution (if necessary)**:
Analyze the time and space complexity of the implemented solution. Consider optimizations such as reducing redundant computations, optimizing space usage (especially in tabulation), or improving the recurrence relation for faster computation. Optimization ensures that the DP solution is efficient and scalable for larger inputs or real-time applications.
By following these structured steps, dynamic programming allows developers to systematically break down and solve complex problems, leveraging optimal substructure and overlapping subproblems to achieve efficient and effective solutions. Each step contributes to building a comprehensive understanding of the problem and crafting a solution that balances clarity, efficiency, and scalability.
### Common Dynamic Programming Problems
Dynamic Programming (DP) is applied to a wide range of computational problems, offering efficient solutions by breaking them down into smaller subproblems and storing their solutions. These problems typically exhibit overlapping subproblems and optimal substructure, making DP an effective approach for optimization, sequence alignment, shortest path finding, and more. By identifying patterns in problem-solving techniques, DP enables systematic computation of solutions that are both optimal and efficient, leveraging recursive relationships and iterative computation methods.
### 0/1 Knapsack Problem
The 0/1 Knapsack Problem involves selecting items from a given set, each with a weight and a corresponding value, such that the total weight does not exceed a specified capacity of a knapsack while maximizing the total value of the selected items.
For instance, given `n` items where each item `i` has a weight `weights[i]` and a value `values[i]`, and a knapsack with a capacity `capacity`, the objective is to determine the maximum value that can be achieved by selecting a subset of items.
### Brute-Force (Recursive) Approach for 0/1 Knapsack Problem
In the brute-force recursive approach to solving the 0/1 Knapsack problem, we systematically explore all possible combinations of items to determine the optimal subset that fits within the capacity of the knapsack.
```cpp
#include <iostream>
#include <vector>
#include <algorithm> // For max function
using namespace std;
// Brute-force recursive function to solve the 0/1 Knapsack problem
double knapsackRecursive(vector<double>& weights, vector<double>& values, double capacity, int n) {
// Base case: if no items are left or the capacity is 0
if (n == 0 || capacity == 0) {
return 0;
}
// If weight of the nth item is more than the capacity, it cannot be included
if (weights[n - 1] > capacity) {
return knapsackRecursive(weights, values, capacity, n - 1);
}
// Return the maximum value obtained by either including or excluding the nth item
return max(
values[n - 1] + knapsackRecursive(weights, values, capacity - weights[n - 1], n - 1),
knapsackRecursive(weights, values, capacity, n - 1)
);
}
int main() {
vector<double> weights = {1, 5, 20, 35, 90}; // Weights of items
vector<double> values = {15, 14.5, 19.2, 19.8, 195.2}; // Values of items
double capacity = 20; // Capacity of the knapsack
int n = weights.size(); // Number of items
double maxValue = knapsackRecursive(weights, values, capacity, n);
cout << "Maximum value in Knapsack: " << maxValue << endl;
return 0;
}
```
In this implementation, the `knapsackRecursive` function recursively evaluates two choices for each item: either including the item in the knapsack (if its weight allows) or excluding it. It computes the maximum value obtainable by either of these choices until all items are considered or the knapsack's capacity is exceeded. While this approach guarantees finding the optimal solution by evaluating all subsets of items, its time complexity grows exponentially with the number of items, making it inefficient for large inputs.
### Memoization Approach for 0/1 Knapsack Problem
In the memoization approach to solving the 0/1 Knapsack problem, we optimize the recursive solution by storing computed results of subproblems in a memoization table. This technique helps avoid redundant calculations and improves the efficiency of the solution.
```cpp
#include <iostream>
#include <vector>
#include <algorithm> // For max function
using namespace std;
// Helper function to solve the 0/1 Knapsack problem using memoization
double knapsackMemo(vector<int>& weights, vector<double>& values, int capacity, int n, vector<vector<double>>& memo) {
// Base case: if no items are left or the capacity is 0
if (n == 0 || capacity == 0) {
return 0;
}
// Return the stored value if the subproblem has already been solved
if (memo[n][capacity] != -1) {
return memo[n][capacity];
}
// If weight of the nth item is more than the capacity, it cannot be included
if (weights[n - 1] > capacity) {
return memo[n][capacity] = knapsackMemo(weights, values, capacity, n - 1, memo);
}
// Return the maximum value obtained by either including or excluding the nth item
return memo[n][capacity] = max(
values[n - 1] + knapsackMemo(weights, values, capacity - weights[n - 1], n - 1, memo),
knapsackMemo(weights, values, capacity, n - 1, memo)
);
}
// Function to initialize memoization and call the helper function
double knapsack(vector<int>& weights, vector<double>& values, int capacity) {
int n = weights.size(); // Number of items
vector<vector<double>> memo(n + 1, vector<double>(capacity + 1, -1)); // Memoization table
return knapsackMemo(weights, values, capacity, n, memo);
}
int main() {
vector<int> weights = {1, 5, 20, 35, 90}; // Weights of items
vector<double> values = {15, 14.5, 19.2, 19.8, 195.2}; // Values of items
int capacity = 200; // Capacity of the knapsack
double maxValue = knapsack(weights, values, capacity);
cout << "Maximum value in Knapsack: " << maxValue << endl;
return 0;
}
```
In this implementation, `knapsackMemo` uses a memoization table (`memo`) to store results of subproblems. If a subproblem's solution has already been computed, it is retrieved from the memoization table, avoiding redundant calculations. This approach optimizes the recursive solution to the 0/1 Knapsack problem by reducing its time complexity from exponential to polynomial, making it more suitable for larger inputs.
### Tabulation Approach for 0/1 Knapsack Problem
The tabulation approach to solving the 0/1 Knapsack problem uses dynamic programming to build a DP table iteratively from smaller subproblems to larger ones. This method avoids recursion and uses space efficiently to compute the maximum value that can be placed in a knapsack of given capacity.
```cpp
#include <iostream>
#include <vector>
#include <algorithm> // For max function
using namespace std;
// Function to solve the 0/1 Knapsack problem using dynamic programming (tabulation)
double knapsack(vector<int>& weights, vector<double>& values, int capacity) {
int n = weights.size(); // Number of items
vector<vector<double>> dp(n + 1, vector<double>(capacity + 1, 0)); // DP table
// Build the DP table in a bottom-up manner
for (int i = 1; i <= n; ++i) {
for (int w = 1; w <= capacity; ++w) {
if (weights[i - 1] <= w) {
dp[i][w] = max(values[i - 1] + dp[i - 1][w - weights[i - 1]], dp[i - 1][w]);
} else {
dp[i][w] = dp[i - 1][w];
}
}
}
return dp[n][capacity]; // Maximum value that can be put in a knapsack of capacity
}
int main() {
vector<int> weights = {1, 5, 20, 35, 90}; // Weights of items
vector<double> values = {15, 14.5, 19.2, 19.8, 195.2}; // Values of items
int capacity = 200; // Capacity of the knapsack
double maxValue = knapsack(weights, values, capacity);
cout << "Maximum value in Knapsack: " << maxValue << endl;
return 0;
}
```
In this implementation, the `knapsack` function initializes a DP table (`dp`) where `dp[i][w]` represents the maximum value that can be achieved with the first `i` items and a knapsack capacity of `w`. The table is filled in a bottom-up manner, iterating through each item and capacity combination. If the current item can fit into the knapsack (i.e., its weight is less than or equal to the current capacity), the function computes the maximum value by either including or excluding the item.
### Coin Changing Problem
The Coin Changing Problem involves determining the minimum number of coins required to make up a specified amount using a given set of coin denominations. Each coin denomination has a specific value, and the goal is to find the optimal combination of coins that sums up to the desired amount.
For instance, given `n` coin denominations where each coin `i` has a value `coins[i]`, and a target amount `amount`, the objective is to compute the minimum number of coins needed to make up `amount`. If it is impossible to form the amount using the given denominations, the solution should indicate that it's not feasible.
### Brute-Force (Recursive) Approach for Coin Changing Problem
In the brute-force recursive approach to solving the Coin Changing problem, we systematically explore all possible combinations of coins to determine the minimum number of coins needed to achieve the specified amount.
```cpp
#include <iostream>
#include <vector>
#include <climits> // For INT_MAX
using namespace std;
// Brute-force recursive function to calculate the minimum coins
int minCoinsRecursive(vector<int>& coins, int amount) {
// Base case: if amount is 0, no coins are needed
if (amount == 0) {
return 0;
}
// If amount is negative, return INT_MAX (impossible situation)
if (amount < 0) {
return INT_MAX;
}
int minCoins = INT_MAX;
// Try every coin and find the minimum number of coins needed
for (int coin : coins) {
int res = minCoinsRecursive(coins, amount - coin);
if (res != INT_MAX) {
minCoins = min(minCoins, res + 1);
}
}
return minCoins;
}
int coinChange(vector<int>& coins, int amount) {
int result = minCoinsRecursive(coins, amount);
return result == INT_MAX ? -1 : result;
}
int main() {
vector<int> coins = {1, 4, 7, 9, 16, 43};
int amount = 17;
int result = coinChange(coins, amount);
if (result != -1) {
cout << "Minimum coins needed: " << result << endl;
} else {
cout << "Amount cannot be made up by any combination of the given coins." << endl;
}
return 0;
}
```
In this implementation, the `minCoinsRecursive` function recursively evaluates every possible combination of coins to find the minimum number required to make up the amount `amount`. It checks each coin denomination and recursively computes the minimum coins needed by subtracting the coin's value from the amount. This approach ensures that all possible combinations are explored, but it may be inefficient for larger amounts due to its exponential time complexity.
### Memoization Approach for Coin Changing Problem
In the memoization approach to solving the Coin Changing problem, we optimize the recursive solution by storing computed results of subproblems in a memoization table (`memo`). This helps avoid redundant calculations and improves efficiency.
```cpp
#include <iostream>
#include <vector>
#include <climits> // For INT_MAX
using namespace std;
// Helper function to calculate the minimum coins using memoization
int minCoinsMemo(vector<int>& coins, int amount, vector<int>& memo) {
if (amount == 0) {
return 0;
}
if (amount < 0) {
return INT_MAX;
}
if (memo[amount] != -1) {
return memo[amount];
}
int minCoins = INT_MAX;
for (int coin : coins) {
int res = minCoinsMemo(coins, amount - coin, memo);
if (res != INT_MAX) {
minCoins = min(minCoins, res + 1);
}
}
memo[amount] = minCoins;
return minCoins;
}
int coinChange(vector<int>& coins, int amount) {
vector<int> memo(amount + 1, -1);
int result = minCoinsMemo(coins, amount, memo);
return result == INT_MAX ? -1 : result;
}
int main() {
vector<int> coins = {1, 4, 7, 9, 16, 43};
int amount = 85;
int result = coinChange(coins, amount);
if (result != -1) {
cout << "Minimum coins needed: " << result << endl;
} else {
cout << "Amount cannot be made up by any combination of the given coins." << endl;
}
return 0;
}
```
In this implementation, the `minCoinsMemo` function recursively computes the minimum number of coins needed for each amount using memoization. The `memo` vector stores results of subproblems to avoid redundant calculations. By recursively exploring each coin denomination and memoizing results, the algorithm efficiently determines the minimum coins required to make up the amount `amount`. This approach significantly improves performance compared to the brute-force recursive approach, especially for larger amounts and coin sets.
### Tabulation Approach for Coin Changing Problem
In the tabulation approach to solving the Coin Changing problem, we use dynamic programming to build up solutions to subproblems in a bottom-up manner, filling out a table (`dp`) where `dp[i]` represents the minimum number of coins needed to make up the amount `i`.
```cpp
#include <iostream>
#include <vector>
#include <climits> // For INT_MAX
using namespace std;
// Function to calculate the minimum coins using dynamic programming (tabulation)
int coinChange(vector<int>& coins, int amount) {
vector<int> dp(amount + 1, INT_MAX);
dp[0] = 0; // Base case: 0 coins needed to make up amount 0
for (int i = 1; i <= amount; ++i) {
for (int coin : coins) {
if (i >= coin && dp[i - coin] != INT_MAX) {
dp[i] = min(dp[i], dp[i - coin] + 1);
}
}
}
return dp[amount] == INT_MAX ? -1 : dp[amount];
}
int main() {
vector<int> coins = {1, 4, 7, 9, 16, 43};
int amount = 82;
int result = coinChange(coins, amount);
if (result != -1) {
cout << "Minimum coins needed: " << result << endl;
} else {
cout << "Amount cannot be made up by any combination of the given coins." << endl;
}
return 0;
}
```
In this implementation, the `coinChange` function uses a `dp` array where `dp[i]` is initialized to `INT_MAX` (indicating that the amount `i` cannot be made up with the current coins). We iteratively compute the minimum coins required for each amount up to `amount` by iterating through each coin denomination and updating `dp[i]` based on the minimum of its current value or `dp[i - coin] + 1` (where `coin` is the current coin denomination).
By the end of the iteration, `dp[amount]` will contain the minimum number of coins needed to make up `amount`, or `-1` if it's not possible with the given denominations. This tabulation approach ensures an efficient solution to the Coin Changing problem with a time complexity of `O(amount * n)`, where `n` is the number of coin denominations.
### Rod Cutting Problem
The Rod Cutting Problem involves determining the maximum profit obtainable by cutting a rod of a given length into smaller pieces and selling them based on specified prices for each piece length. Each allowed piece length has a corresponding price, and the goal is to find the optimal way to cut the rod to maximize profit.
For instance, given `n` allowed piece lengths where each piece `i` has a length `lengths[i]` and a corresponding price `prices[i]`, and a target rod length `rodLength`, the objective is to compute the maximum profit that can be obtained by cutting the rod into pieces of the allowed lengths. If it is not possible to achieve a profit using the given lengths, the solution should indicate that the rod cannot be cut profitably.
### Brute-Force (Recursive) Approach for Rod Cutting Problem
In the brute-force recursive approach to solving the Rod Cutting problem, we systematically explore all possible ways to cut the rod to determine the optimal subset of lengths that maximizes profit.
```cpp
#include <iostream>
#include <vector>
#include <climits> // For INT_MIN
using namespace std;
// Function to calculate the maximum profit recursively
double rodCutting(vector<int>& lengths, vector<double>& prices, int n) {
// Base case: if rod length is 0, profit is 0
if (n == 0) {
return 0;
}
if (n < 0) {
return INT_MIN; // If n is negative, return INT_MIN
}
double maxProfit = INT_MIN; // Initialize max profit with minimum possible value
// Recursively calculate the maximum profit by considering all possible cuts
for (int i = 0; i < lengths.size(); ++i) {
maxProfit = max(maxProfit, prices[i] + rodCutting(lengths, prices, n - lengths[i]));
}
maxProfit = max(maxProfit, 0.0); // Consider the case where we waste the entire rod
return maxProfit;
}
int main() {
vector<int> lengths = {1, 3, 5, 10, 30, 50, 75}; // Allowed lengths for cutting
vector<double> prices = {0.1, 0.2, 0.4, 0.9, 3.1, 5.1, 8.2}; // Prices corresponding to each length
int rodLength = 30; // Example rod length to cut
double maxProfit = rodCutting(lengths, prices, rodLength);
cout << "Maximum profit for rod of length " << rodLength << " is: " << maxProfit << endl;
return 0;
}
```
In this implementation, the `rodCutting` function recursively evaluates different ways to cut the rod by considering each allowed length and its corresponding price. For each possible cut, it computes the maximum profit obtainable by including that piece length and recursively solving for the remaining rod length. This approach guarantees finding the optimal solution by evaluating all possible ways to cut the rod, but its time complexity grows exponentially with the rod length and the number of allowed lengths, making it inefficient for large inputs.
### Memoization Approach for Rod Cutting Problem
In the memoization approach to solving the Rod Cutting problem, we optimize the recursive solution by storing computed results of subproblems in a memoization table. This technique helps avoid redundant calculations and improves the efficiency of the solution.
```cpp
#include <iostream>
#include <vector>
#include <climits> // For INT_MIN
using namespace std;
// Helper function to calculate the maximum profit using memoization
double rodCuttingMemo(vector<int>& lengths, vector<double>& prices, int n, vector<double>& memo) {
// Base case: if rod length is 0, profit is 0
if (n == 0) {
return 0;
}
if (n < 0) {
return INT_MIN; // If n is negative, return INT_MIN
}
if (memo[n] != -1) {
return memo[n]; // Return cached result if available
}
double maxProfit = INT_MIN; // Initialize max profit with minimum possible value
// Recursively calculate the maximum profit by considering all possible cuts
for (int i = 0; i < lengths.size(); ++i) {
maxProfit = max(maxProfit, prices[i] + rodCuttingMemo(lengths, prices, n - lengths[i], memo));
}
maxProfit = max(maxProfit, 0.0); // Consider the case where we waste the entire rod
memo[n] = maxProfit; // Store the result in the cache
return maxProfit;
}
double rodCutting(vector<int>& lengths, vector<double>& prices, int n) {
vector<double> memo(n + 1, -1); // Initialize memoization array with -1
return rodCuttingMemo(lengths, prices, n, memo);
}
int main() {
vector<int> lengths = {1, 3, 5, 10, 30, 50, 75}; // Allowed lengths for cutting
vector<double> prices = {0.1, 0.2, 0.4, 0.9, 3.1, 5.1, 8.2}; // Prices corresponding to each length
int rodLength = 300; // Example rod length to cut
double maxProfit = rodCutting(lengths, prices, rodLength);
cout << "Maximum profit for rod of length " << rodLength << " is: " << maxProfit << endl;
return 0;
}
```
In this implementation, `rodCuttingMemo` uses a memoization table (`memo`) to store results of subproblems. If a subproblem's solution has already been computed, it is retrieved from the memoization table, avoiding redundant calculations. This approach optimizes the recursive solution to the Rod Cutting problem by reducing its time complexity, making it more suitable for larger inputs.
### Tabulation Approach for Rod Cutting Problem
The tabulation approach to solving the Rod Cutting problem uses dynamic programming to build a DP table iteratively from smaller subproblems to larger ones. This method avoids recursion and uses space efficiently to compute the maximum profit that can be obtained by cutting a rod of a given length.
```cpp
#include <iostream>
#include <vector>
#include <climits> // For INT_MIN
using namespace std;
// Function to calculate the maximum profit using dynamic programming (tabulation)
double rodCutting(vector<int>& lengths, vector<double>& prices, int n) {
// Create a DP array to store maximum profit for each rod length from 0 to n
vector<double> dp(n + 1, 0); // Bottom-up approach: initialize dp array with size n+1 filled with 0
// Calculate maximum profit for each rod length up to n
for (int i = 1; i <= n; ++i) {
double maxProfit = INT_MIN;
for (int j = 0; j < lengths.size(); ++j) {
if (i >= lengths[j]) { // Only consider valid cuts where i >= lengths[j]
maxProfit = max(maxProfit, prices[j] + dp[i - lengths[j]]);
}
}
dp[i] = maxProfit; // Store the maximum profit for rod length i
}
return dp[n]; // Return maximum profit for rod of length n
}
int main() {
vector<int> lengths = {1, 3, 5, 10, 30, 50, 75}; // Allowed lengths for cutting
vector<double> prices = {0.1, 0.2, 0.4, 0.9, 3.1, 5.1, 8.2}; // Prices corresponding to each length
int rodLength = 300; // Example rod length to cut
double maxProfit = rodCutting(lengths, prices, rodLength);
cout << "Maximum profit for rod of length " << rodLength << " is: " << maxProfit << endl;
return 0;
}
```
In this implementation, the `rodCutting` function initializes a DP array (`dp`) where `dp[i]` represents the maximum profit that can be obtained with a rod of length `i`. The array is filled in a bottom-up manner, iterating through each possible rod length from `1` to `n`. For each rod length, the function calculates the maximum profit by considering all possible cuts and updating the DP array accordingly. This approach optimizes the solution to the Rod Cutting problem by ensuring each subproblem is solved only once, thus improving efficiency. | harshm03 |
1,898,049 | First Post | Hello everyone at the Dev community! I have just joined and wanted to say hello. I am fairly new to... | 0 | 2024-06-23T19:04:45 | https://dev.to/paul_8bd971efd08cf4d64c7d/first-post-5c6h | webdev, beginners | Hello everyone at the Dev community!
I have just joined and wanted to say hello. I am fairly new to web development. Currently, I am learning the ropes with Scrimba, ZTM, and Frontend Mentor. I have been learning for almost 2 months currently, and at the moment, I try to spend most of my time just coding and getting my reps in.
I have also recently switched over to Linux as a side project. I want to learn more about DevOps, servers, and bash scripting. I am learning to become a Front End Developer primarily at the moment.
Regards,
Paul
| paul_8bd971efd08cf4d64c7d |
1,898,047 | I a Avid Vim User, Finally Migrated to Neovim! How does it work, what do I gain from it? | Having migrated to Neovim, I will give you some feedback and give you the keys to understanding and... | 0 | 2024-06-23T19:00:05 | https://dev.to/umairk/i-a-avid-vim-user-finally-migrated-to-neovim-how-does-it-work-what-do-i-gain-from-it-3a24 | productivity, devops, opensource, linux | Having migrated to Neovim, I will give you some feedback and give you the keys to understanding and succeeding in your migration to Neovim! First of all, a little context and history.
## Vim, the essential
Who doesn't know Vim, if only through jokes about how to quit it? And for good reason Vim - or often its predecessor Vi, which has been around for over 44 years - often remains our most faithful ally, especially in minimal environments.
Developed in 1976 by Bill Joy, Vi was a real revolution, notably by introducing the use of the entire screen, which for the time was a real revolution. It must be said that at that time the majority of text editing was done using ed, from which Vi and Vim will inherit the modal aspect.
A decade later, a certain Vi user Bram Moolenaar sought to port it to the Amiga. He ended up developing version 1.0 of Vim in C in 1988. This version was intended to be an improved version of Vi, hence the acronym (VI iMproved).
Vim continues to evolve from year to year still in the hands of Bram Moolenaar, even if the pace of this evolution has decreased over the last 2 decades. This slowdown can be accentuated by the very closed contribution management policy. The proposals are complex to have the creator accept.
## NeoVim, reinventing yourself without forgetting yourself
We might as well admit it, Vim development issues ended up impacting the user community. So much so that in 2014 part of the Vim community embarked on the NeoVim project, notably launching a fundraiser which was a success. The goal of NeoVim is simple: to create a more modern, extensible Vim, with better integrations, and of course a more efficient development community.
From December 2015, version 0.1 of NeoVim was released, offering support for a very large part of Vim's functionalities. The community will quickly expand and the number of users will increase. The versions will also follow one another, regularly offering new features. To give you an idea, today we are at version 0.10.
In short, a project which has already proven itself and which continues to evolve by gaining more and more features.
## Why am I using Vim / NeoVim in 2024?
Except for specific development needs, I assume that the majority of you use VSCode, although I have no doubt that the more hipsters use an IDE from the IntelliJ suite. First of all, yes, Vim/Neovim are not IDEs, but code editors. That's not to say they can't fulfill the same needs as an IDE once custom configuration and plugins are added. In short, yes, in my opinion and for my use, it is comparable in this case.
I haven't always used Vim, I started on Eclipse in my early years when I was into Java EE development. Like many people, I suppose, I had the impression of using heavy software, full of overpowering features. But ultimately having the use of too little. Especially since at that time, I was still a student, I might as well tell you that when you move from Java to system-oriented C, your IDE is no longer suitable.
After that, I went to the dark side. Needless to say, I spent more time in front of a terminal than anything else, even if automation made it possible to limit manual tasks. Unfortunately, sometimes we have no choice and have to make changes often via SSH. Suffice it to say that your IDE will be of little help to you. So I did like everyone else, use Vim. At the beginning, it's complicated: modal functioning seems from another time and it feels like we are doing violence to ourselves. Over time, I found myself working largely over an SSH connection and had to start getting into Vim and Tmux, which remains, in my opinion, an excellent combo.

After this, I went back to working on my job. I could have started with an IDE, but in the end, I was starting to adapt to Vim. I wasn't an expert, but it was enough for my job. Over time, I started to get comfortable, added plugins that really changed my perception of Vim. From a very basic text editor, where the most advanced feature was syntax highlighting, I moved to an editor on steroids, with a real compression engine, Git integration, and lots of other tools. In short, everything I needed.
But around this time Atom and later VSCode gained popularity, particularly in the DevOps environment. I'm not going to lie to you: I tried it. It was at this moment that I understood two things: Vim, with my configuration and my plugins, was just as powerful in terms of functionality for my use and above all, Vim had changed the way I worked. When I say “changed my way of working”, it is, in particular, for a detail that will seem stupid to you: Vim is in a terminal. Coupled with Tmux it has become, for me, an ideal combo. The organization of my Gnome workspaces, my screens, in fact my entire work workflow ended up being focused on this. So I decided to return to Vim, while continuing to develop my configuration.
### What changes daily

When I saw Neovim pass by, it seemed to me that it was in 0.4. At the time, I said to myself: “ah, another fork that doesn’t add much”. So, I stayed on Vim. Especially since version 8, released recently at that time, brought cool features, notably asynchronous support. I still followed the evolution of NeoVim from afar.
One day, a new version 0.5 of NeoVim was released, and this was, I think, a real argument to start considering migration. This update brought a lot of very interesting features. Of course a fully integrated client for the LSP (we will talk about it later), better support for Lua configuration,and a new parsing system for syntax coloring in particular (Tree-sitter).
In the end I ended up giving in and testing it in front of the list of its contributions:
- A client for directly integrated LSP
- Being able to configure in Lua: Vimscript quickly has its limits
- More open: many interesting plugins are available only on NeoVim
- Better documentation
Casually, a more dynamic community, often with a modern approach
I will spare you the bug fixes and little everyday joys, especially on the default settings, which are much less austere than Vi or even Vim. For many, this list must be vague and that is normal! Beyond the list, I suggest you take a quick tour of these new features, what they bring and especially how they work.
### Lua
If you are a minimum advanced Vim user you have already encountered the famous VimScript, the language which allows you to configure your installation, but also to create plugins. It used to be an unpleasant point for a lot of people, including me. With Lua we have gained in comfort and possibilities.
I was a little afraid of having trouble at first having never used Lua, but it's quite clear and simple. To be more comfortable, I still took a few minutes to know the basics and for that I recommend this site.
One thing is still worth noting: you can have a lot of problems if, like me, you cut your code too much. Either way we can correct it, but personally it tired me out a lot for a minimal contribution so I put all my code in a file.
Small example of code in Lua:
```
function map(mode, shortcut, command)
vim.api.nvim_set_keymap(mode, shortcut, command, { noremap = true, silent = true })
end
map('', '<C-D>', ':Telescope find_files<CR>')
map('', '<C-F>', ':Telescope grep_string<CR>')
map('', '<C-X>', ':NvimTreeToggle<CR>')
```
### Packer
I have been talking about plugins since the beginning of the article, but using a simple editor doesn't involve doing everything by hand. So I have been using a plugin manager for a long time and if you don't, I strongly advise you to get started: it's very practical. I used [Vim plug](https://github.com/junegunn/vim-plug) which was everything I like: simple and effective.
But with NeoVim, a whole new world of plugins is available to us. And among these plugins, one of them, very popular, allows you to manage plugins: _it is called [Packer](https://github.com/wbthomason/packer.nvim) and is not developed by Hashicorp._
This offers, in my opinion, everything I could expect from a plugin manager under Neovim:
- Developed and configured in Lua
- Manage dependencies
- Lots of installation and management options
- Manages installations through asynchronous tasks
- Supports Git especially in terms of tags
The manager works well, allows you to install, update and delete different plugins. Beyond that, it installs easily and automatically with just a few lines of Lua code:
```
local fn = vim.fn
-- Automatically install packer
local install_path = fn.stdpath "data" .. "/site/pack/packer/start/packer.nvim"
if fn.empty(fn.glob(install_path)) > 0 then
PACKER_BOOTSTRAP = fn.system {
"git",
"clone",
"--depth",
"1",
"https://github.com/wbthomason/packer.nvim",
install_path,
}
print "Installing packer close and reopen Neovim..."
vim.cmd [[packadd packer.nvim]]
end
```
Installing plugins is just as simple and efficient:
```
require('packer').startup(function(use)
use 'wbthomason/packer.nvim' -- Package manager
# Put all your plugins
use { 'nvim-telescope/telescope.nvim', tag="nvim-0.6", requires = { 'nvim-lua/plenary.nvim' } } #Plugin with a dependency
tag
end)
```
To interact with Packer: you just need to use the NeoVim console to execute commands, such as _PackerInstall_ and_ PackerSync_.
Please note: Neovim having a more regular development cycle, the plugins are also often updated. If you are not using the latest version of Neovim, do not hesitate to tag the plugin installations. I had some unfortunate surprises: plugins, which, once updated, were no longer compatible with older versions of NeoVim. This is done simply as in the example given above.
### LSP
One of the weaknesses of Vim, for a long time, was the quality of comprehension, more especially of the engine which allowed the editor to understand the structure of the code in order to do autocompletion or even to notice errors. Syntactic problems and so on.
This situation has already changed thanks to Language Server Protocol and the numerous implementations based on Json-RPC.
Neovim provides a perfectly functional native client, which simplifes installation, and allow us to benefit from features worthy of the most popular editors.

It should be noted that, although Neovim integrates the client, certain plugins are necessary for LSP to give you a better integration experience. I advise you:
- nvim-lsp-installer which allows you to install language servers directly from the Neovim console
- nvim-lspconfig which brings together a set of basic configurations for LSP
- cmp-nvim-lua which displays small windows for autocompletion.
If you want to verify that your editor has detected the language and is using the correct server, you can use the command: _LspInfo_, which will show you all the LSP client information.
It may happen that your editor does not associate a file type with the language server. In this case, you can specify it, as below:
```
require'lspconfig'.terraformls.setup{
capabilities = capabilities,
filetypes = { "tf", "tfvar", "terraform" }
}
```
Bonus tip, you can run a diagnostic of your Neovim, in order to identify certain problems. For this just run:_ checkhealth_.
### Telescope, boosted fzf

Very often when you start customizing your Vim or Neovim, you install a plugin allowing you to display the tree structure in your editor. It's nice, it allows you to have a view of the structure, but moving from one file to another is slow. So, very quickly, we turn to [Fuzzy-finder](https://github.com/junegunn/fzf). And there, generally, we come back to life and we no longer want to leave our publisher.
Fzf is good, but as I said above, Neovim offers a lot of new plugins with new implementations. And among them, a supercharged fzf: [Telescope](https://github.com/nvim-telescope/telescope.nvim)! It allows you to search for files, and even text patterns, while offering an interface with file previews! A must have , quite simply.
## Finally the version of Vim we have been waiting for?

When I decided to migrate to Neovim, I preferred to do away with Vimscripts, in favor of Lua. And looking back, I think it was the right approach. However, I must admit that I wasted a lot of time, particularly in wanting, at all costs, to divide my configuration files too much, which Lua obviously doesn't like. I ended up using a simple base I found online and adapting it to my taste. I still kept the theme which is a nice homage to Atom.io which I had tried.
Vim was clearly my main IDE / code editor, I hesitated to change for a long time, not being sure of the contribution. In the end, the transition is going well: Neovim is a great development, but it does not destroy Vim's heritage. We therefore find the modal system, the lightness, the native terminal operation and everything that makes Vim so remarkable. | umairk |
1,898,046 | What is Recursion? | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-23T18:58:58 | https://dev.to/00gizem00/what-is-recursion-145e | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
When defining an object or a function, using itself within its own definition is called recursion.
## Additional Context
It is a very powerful problem-solving strategy. | 00gizem00 |
1,898,045 | Working of Web and DNS - Day 1/? | The way I finished my last blog was sad, I was so damn tired but I wanted to finish the publishing... | 27,813 | 2024-06-23T18:55:17 | https://dev.to/theshakeabhi/re-learning-the-basics-of-web-day-1-34gh | The way I finished my last blog was sad, I was so damn tired but I wanted to finish the publishing website part, but couldn't yet still manage to skim through it and complete it. But today will be going through that again and reading more carefully
## Day 1: Publishing of website and How does web works and Beyond
- Web hosting is rented file space on a hosting company's web server. The web server provides website content to website visitors. _You can rent your domain name for as many years as you want from a **domain registrar**._
- For beginners I would say, GitHub pages does the magic for basic ones and even React projects if I am not wrong. Still need to investigate backend capabilities on GitHub pages
### How the web works
- Basic working of client(you or any user on browser) and server

- Basic steps needed:
- Your internet connection
- TCP/IP
- DNS
- HTTP
- Component files
- Code files
- Assets
- [Working of the web](https://developer.mozilla.org/en-US/docs/Learn/Getting_started_with_the_web/How_the_Web_works#so_what_happens_exactly)
- HTML is parsed from the response from the server and it tries to fetch any `<link>` for CSS and `<script>` for JS files and paints the screen. For clearer understanding read from this [link](https://developer.mozilla.org/en-US/docs/Learn/Getting_started_with_the_web/How_the_Web_works#order_in_which_component_files_are_parsed)
- Real web addresses aren't the nice, memorable strings you type into your address bar to find your favourite websites. They are special numbers that look like this: 192.0.2.172. This is called an IP address.
- DNS: [Domain Name System] (https://developer.mozilla.org/en-US/docs/Learn/Common_questions/Web_mechanics/What_is_a_domain_name). A fun website for [DNS](https://howdns.works/ep1/)
NOTE: No coding was done, today was more like a reading day.
Have a good day, or night! :)
| theshakeabhi | |
1,898,044 | Why Downtime and Reliability Top the List of Backend Performance Concerns for Engineers | Last week we asked our community what aspect of backend performance concerns them the most. Almost... | 0 | 2024-06-23T18:53:06 | https://dev.to/apitoolkiti/why-downtime-and-reliability-top-the-list-of-backend-performance-concerns-for-engineers-30mh | backendreliability, applicationdowntime, sitereliabilityengineering, apitoolkit | Last week we asked our community what aspect of backend performance concerns them the most. Almost 80% of engineers say downtime and reliability are their most pressing concerns.

The results were telling: a whopping 78.6% of respondents cited downtime and reliability as their top concerns, while 21.4% were worried about slow API responses. In this article, we'll delve into why nearly 80% of engineers prioritize downtime and reliability, supported by data, insights, and community feedback.
## The High Cost of Downtime
The financial implications of downtime are staggering. According to Gartner, the average cost of IT downtime is approximately $5,600 per minute ([CBC Orlando](https://computerbusiness.com/news/the-true-cost-of-it-downtime-and-how-to-avoid-it/)) ([Atlassian](https://www.atlassian.com/incident-management/kpis/cost-of-downtime)). This figure can vary widely depending on the industry and the size of the business. For instance, large enterprises can incur costs upwards of $9,000 per minute ([Atlassian](https://www.atlassian.com/incident-management/kpis/cost-of-downtime)). This includes not only direct revenue loss but also the costs associated with lost productivity, recovery efforts, and potential damage to the company's reputation.
For startups and smaller businesses, it's even worse as a few minutes of downtime can damage the reputation, and decrease customer trust.
> "We had an hour of downtime last month, and it cost us a major client. Reliability isn't just a technical concern; it's a business imperative." - Reddit user.
## Customer Trust and User Experience
In today's digital world, users expect services to be available 24/7. Any downtime can lead to frustration and erode trust. A survey by Uptime Institute revealed that 31% of respondents experienced a downtime event that significantly impacted their business in the past year.
A tweet from @cra highlights the issue:
> "Users don't care why you're down, they care that you're down. Downtime kills user trust. #DevOps #SRE"
## Competitive Pressure
In competitive markets, reliability can be a differentiator. Companies like Amazon and Google have set high standards with their near-zero downtime. This sets a benchmark that other companies strive to meet.
> "Our uptime is our USP. If we can't keep our services running, our competitors will." from LinkedIn
## Complexity of Modern Systems
Modern applications are increasingly complex, often relying on multiple microservices, third-party APIs, and cloud infrastructure. This complexity increases the risk of downtime and makes troubleshooting more challenging.
A Hacker News discussion highlighted this issue:
> "With so many moving parts, one small failure can cascade into a major outage. Ensuring reliability across the board is a constant challenge."
## Strategies to Mitigate Downtime - Monitoring and Observability

To address these concerns, companies often invest in proactive monitoring, APM Management, and observability strategies. Partnering with IT-managed service providers can offer real-time monitoring and regular maintenance to prevent issues before they escalate ([CBC Orlando](https://computerbusiness.com/news/the-true-cost-of-it-downtime-and-how-to-avoid-it/)).
> The real turning point for me was understanding that you don’t really “prevent” downtime. You mitigate it, you design around it, and you set proper expectations. [A Reddit user](https://www.reddit.com/r/devops/comments/qqvelr/what_do_you_do_to_prevent_software_downtime/)
Effective monitoring and observability tools are crucial for maintaining uptime and reliability. They allow engineers to detect and resolve issues before they escalate. [APItoolkit](https://apitoolkit.io/), for example, provides end-to-end observability, helping engineers catch errors from any source, whether it's the API itself or a dependent service.
## Join Our Webinars to Learn More
Downtime and reliability are top concerns for engineers, as highlighted by our Twitter poll. To address these challenges, we’re hosting a webinar titled **"Backend Performance and Error Monitoring with APItoolkit"** on **June 28th at 7:00 PM CET.**
In this session, industry experts will share strategies for maintaining uptime, ensuring reliability, and optimizing backend performance. Learn practical solutions to common challenges and enhance your backend systems.
Don't miss out— [register now](https://apitoolkit.io/events/webinar-ii/) to secure your spot!
Follow us on [X](https://twitter.com/APItoolkitHQ) to stay updated to our webinars
Join our [Discord Server](https://discord.gg/dEB6EjQnKB) and drop us a question.
| apitoolkit |
1,898,042 | geojson-faker: fake geodata in GeoJSON format | geojson-faker is a tool for generating fake geodata in GeoJSON format. What problem does... | 0 | 2024-06-23T18:51:27 | https://dev.to/impocode/geojson-faker-fake-geodata-in-geojson-format-3oho | python, geojson, faker, pydantic | [geojson-faker](https://github.com/impocode/geojson-faker) is a tool for generating fake geodata in GeoJSON format.
## What problem does the library solve
If your product is related to geodata, then there is often a need to generate a large amount of this data. For example, in order to run tests or prepare a demonstration of the project. This is the problem that `geojson-faker` solves. With it, you can easily generate data of any size.
## GeoJSON
[GeoJSON](https://geojson.org/) is a format for encoding data about geographic features using JavaScript Object Notation (JSON). It's in this format that `geojson-faker` generates data, making it easy to embed the library into your services.
## Examples of use
Here's a simple example of how to use the library:
```python
>>> from geojson_faker import GeoJsonFaker
>>> geojson_faker = GeoJsonFaker()
>>> # Point2D or Point3D
>>> geojson_faker.point
Point(bbox=None, type='Point', coordinates=Position2D(longitude=-50.56703965217093, latitude=19.72513434718111))
>>> geojson_faker.point
Point(bbox=None, type='Point', coordinates=Position3D(longitude=111.84911865610678, latitude=-19.488979926988165, altitude=7921.968274391678))
>>> # Point2D
>>> geojson_faker.point2d
Point(bbox=None, type='Point', coordinates=Position2D(longitude=29.98434638920918, latitude=36.476444735501616))
>>> # Point3D
>>> geojson_faker.point3d
Point(bbox=None, type='Point', coordinates=Position3D(longitude=-76.36126084558762, latitude=30.682266859380533, altitude=15816.987234147065))
```
For more information, see the [geojson-faker](https://github.com/impocode/geojson-faker) project repository.
## Project plans
Support for basic geodata, namely `Position`, `Point`, `MultiPoint`, `LineString`, `MultiLineString`, `Polygon`, `MultiPolygon`, `GeometryCollection` has already been implemented. `Feature` and `FeatureCollection` will soon be added, as well as the generation of realistic data such as countries, cities, famous places, etc.
## Thank you
I'd really appreciate it if you could leave a comment, like or star on GitHub! Thank you! | impocode |
1,898,214 | Setting Up TanStack File-Based Router with a Vite React App | Integrating a file-based router in your Vite React application can streamline your development... | 0 | 2024-06-30T05:44:48 | https://iamdipankarpaul.hashnode.dev/setting-up-tanstack-file-based-router-with-a-vite-react-app | react, tanstack, vite, projects | ---
title: Setting Up TanStack File-Based Router with a Vite React App
published: true
date: 2024-06-23 18:46:19 UTC
tags: React,tanstack,vite,projects
canonical_url: https://iamdipankarpaul.hashnode.dev/setting-up-tanstack-file-based-router-with-a-vite-react-app
---
Integrating a file-based router in your Vite React application can streamline your development process by allowing you to organize your routes in a simple, intuitive manner. TanStack's file-based router is an excellent choice for this task. In this blog post, I'll guide you through the process of setting up TanStack File Router in a Vite React app.
## Step 1: Set Up a Vite React Project
First, we need to create a new Vite React project. If you already have a Vite React project, you can skip this step.
### Create a new Vite React project
```sh
npm create vite@latest my-react-app -- --template react
cd my-react-app
```
### Install dependencies
```sh
npm install
```
## Step 2: Install TanStack Router
Next, we need to install the TanStack Router.
### Install TanStack Router
```sh
npm install @tanstack/react-router
```
### Install the Vite Plugin and the Router Devtools
```sh
npm install --save-dev @tanstack/router-plugin @tanstack/router-devtools
```
### Configure the Vite Plugin
```js
import {
defineConfig
} from "vite";
import react from "@vitejs/plugin-react";
import {
TanStackRouterVite
} from "@tanstack/router-plugin/vite";
// https://vitejs.dev/config/
export default defineConfig({
plugins: [TanStackRouterVite(), react()],
});
```
## Step 3: Set Up the File-Based Router
Now, let's set up the file-based router by creating the necessary directory structure and defining our routes.
### Create the directory structure
- Create a `routes` folder insie `src` folder of your project.
- Inside the `routes` folder, create your route components and structure them according to your needs.
- Follow Tanstack Router [guide-lines](https://tanstack.com/router/latest/docs/framework/react/guide/file-based-routing#file-naming-conventions).
### Example directory structure
```
src
┣ routes
┃ ┣ about.lazy.jsx
┃ ┣ index.lazy.jsx
┃ ┣ posts.jsx
┃ ┗ __root.jsx
┣ App.jsx
┣ index.css
┣ main.jsx
┗ routeTree.gen.ts
```
> Route files with the `.lazy.tsx` extension are lazy loaded via separate bundles to keep the main bundle size as lean as possible.
### Define the routes
#### src/routes/__root.jsx
```jsx
import { createRootRoute, Link, Outlet } from "@tanstack/react-router";
import { TanStackRouterDevtools } from "@tanstack/router-devtools";
// It's the layout component
export const Route = createRootRoute({
component: () => (
<>
<div className="p-2 flex gap-2">
<Link to="/" className="[&.active]:font-bold">
Home
</Link>{" "}
<Link to="/about" className="[&.active]:font-bold">
About
</Link>
<Link to="/posts" className="[&.active]:font-bold">
Posts
</Link>
</div>
<hr />
<Outlet />
<TanStackRouterDevtools />
</>
),
});
```
#### src/routes/index.lazy.jsx
```jsx
import { createLazyFileRoute } from "@tanstack/react-router";
export const Route = createLazyFileRoute("/")({
component: Index,
});
function Index() {
return (
<div className="p-2">
<h3>Welcome Home!</h3>
</div>
);
}
```
#### src/routes/about.lazy.jsx
```jsx
import { createLazyFileRoute } from "@tanstack/react-router";
export const Route = createLazyFileRoute("/about")({
component: About,
});
function About() {
return <div className="p-2">Hello from About!</div>;
}
```
#### src/routes/posts.jsx
```jsx
import { createFileRoute } from "@tanstack/react-router";
export const Route = createFileRoute("/posts")({
component: Posts,
});
function Posts() {
return (
<div className="p-2">
<h3>Hello from Post!</h3>
</div>
);
}
```
### Configure the router in App.jsx
```jsx
import { RouterProvider, createRouter } from "@tanstack/react-router";
// Import the auto generated route tree
import { routeTree } from "./routeTree.gen";
// Create a new router instance
const router = createRouter({ routeTree });
export default function App() {
return (
<>
<RouterProvider router={router} />
</>
);
}
```
> `src/routeTree.gen.ts` file will be autometically generated.
## Step 4: Run Your Application
With everything set up, it's time to run your application.
### Start the development server
```sh
npm run dev
```
Open your browser and navigate to `http://localhost:5173/` . You should see your Vite React application running with TanStack Router handling the routes.
To add more routes, simply create new components in the `routes` directory and configure them in the `__root.jsx` file as needed.
By following these steps, you can efficiently set up a file-based router in your Vite React application using TanStack Router. This setup not only simplifies route management but also enhances the organization and scalability of your project.
| dipankarpaul |
1,898,040 | Twilio challenge - Environmental Bot | This is a submission for Twilio Challenge v24.06.12 What I Built This project provides an... | 0 | 2024-06-23T18:44:55 | https://dev.to/imkarthikeyan/twilio-challenge-environmental-bot-jm1 | devchallenge, twiliochallenge, ai, twilio | *This is a submission for [Twilio Challenge v24.06.12](https://dev.to/challenges/twilio)*
## What I Built
<!-- Share an overview about your project. -->
This project provides an AI-driven air quality alert system using Twilio and OpenAI. Users can send their location via WhatsApp and receive alerts based on the current air quality index (AQI) along with AI-based safety advice. This system helps individuals stay informed about air quality and take necessary precautions.
## Demo
<!-- Share a link to your app and include some screenshots here. -->
Github link: https://github.com/skarthikeyan96/envrionmental-bot
**Sandbox link:**

**Screenshots:**


## Steps to try
1. Join the WhatsApp sandbox (details in the screenshot).
2. Send your location via WhatsApp.
3. Receive AI-generated analysis and safety measures.
## Twilio and AI
We used Twilio's WhatsApp API to receive user location data and send back real-time air quality alerts. OpenAI analyzes the air quality index from OpenWeatherMap and generates personalized safety advice. This advice is then sent to the user via Twilio, providing immediate and actionable information.
## Additional Prize Categories
Impactful Innovator: Our AI-driven system leverages Twilio and OpenAI to provide real-time air quality alerts and safety advice via WhatsApp. This tool addresses environmental and health concerns by keeping individuals and communities informed about air quality, promoting healthier and safer living conditions.
Thank you
| imkarthikeyan |
1,898,038 | Supervised Learning: Algorithms and Applications | Imagine you're training a puppy. You show it a ball and say "fetch," then throw it. Each time the... | 0 | 2024-06-23T18:41:10 | https://dev.to/abhinav_yadav_554cab962bb/supervised-learning-algorithms-and-applications-348h | supervised, machinelearning, ai, beginners | Imagine you're training a puppy. You show it a ball and say "fetch," then throw it. Each time the puppy retrieves the ball, it learns to associate the word "fetch" with the action. That's supervised learning in a nutshell!
## Table Of Content
- Introduction to Supervised Learning
- Types of Supervised Learning
- Popular Algorithms for Supervised Learning
- Practical Example: Implementing a Classification Model
- Applications of Supervised Learning
- Challenges and Best Practices
## Introduction to Supervised Learning
Supervised learning is a type of machine learning where computer taught by using examples with answers. It is very similar like to teach a child with the use of flashcards. We show computer a lots of example and tell what's the right answer so that it can learn to predict the answers for new examples on its own.
## Types Of Supervised Learning
1.**Classification**
Here, the machine sorts things into categories. Is this email spam or not? Is this picture a cat or a dog? Think of sorting laundry – whites go in one pile, colours in another.
2.**Regression**
This is all about predicting continuous values. How much will this house cost? What will the weather be like tomorrow? Imagine estimating how much your laundry pile will grow each week – a never-ending prediction game!
## Popular Algorithms for Supervised Learning
Now let's meet some superheroes of supervised learning:
1.**Linear Regression**
Also known as the "regression starter pack". It finds a straight line that best fits the data to make predictions. It is most suitable for stuff that changes gradually like house prices over time.
2.**Logistic Regression**
Also known as binary classifier. It predicts the probability of something belonging to one of two classes, like spam or not spam. Perfect for sorting purposes.
3.**Decision Trees**
Imagine a choose-your-own-adventure book for the machine. It asks a series of questions based on the data to arrive at a decision. Useful for tasks like medical diagnosis where different symptoms can lead to different outcomes.
4.**Support Vector Machines (SVMs)**
These guys find a special dividing line (or hyperplane in high dimensions) that best separates the data into categories. Imagine a bouncer separating people based on height at a club – SVM creates an optimal separation line.
5.**K-Nearest Neighbours (k- NN)**
This algorithm is all about finding similar neighbours. For a new data point, it checks the k closest data points it's seen before and predicts the same class for the new one. Like asking your friends for movie recommendations based on their favourites.
> These are just few examples there are a lot more supervised learning algorithms out there.
## Practical Example: Implementing a Classification Model
**Using a Real World Dataset:**
Imagine we have a dataset of emails labeled as spam or not spam. Our aim is to create a classification model which can separate new emails accordingly.
**Walkthrough:**
1.**Importing Libraries:**
```
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
import pandas as pd
```
2.**Loading the Dataset:**
Let's assume you have a CSV file 'emails.csv' with features like 'contains_urgent_offer', 'sender_suspicious', and 'num_links'.
```
# Hypothetical dataset for illustration purposes
data = {
'contains_urgent_offer': [1, 0, 1, 0, 1],
'sender_suspicious': [1, 0, 1, 0, 0],
'num_links': [5, 1, 3, 0, 2],
'is_spam': [1, 0, 1, 0, 1]
}
df = pd.DataFrame(data)
X = df[['contains_urgent_offer', 'sender_suspicious', 'num_links']]
y = df['is_spam']
```
3.**Splitting the Dataset:**
We split the data into training and testing sets to train the model and then evaluate its performance on unseen data.
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
```
4.**Training the Model:**
We create a decision tree classifier and train it on the training data.
```
model = DecisionTreeClassifier()
model.fit(X_train, y_train)
```
5.**Making Predictions:**
We use the trained model to predict labels for the test data.
```
y_pred = model.predict(X_test)
```
6.**Evaluating the Model:**
Finally, we evaluate the model's accuracy by comparing its predictions to the actual labels.
```
accuracy = accuracy_score(y_test, y_pred)
print(f'Accuracy: {accuracy}')
```
By following this process, you can implement a decision tree classifier to sort emails into spam or not spam, demonstrating a practical example of supervised learning.
## Applications of Supervised Learning
Here are few areas where supervised learning shines:
1. Healthcare: Predicting disease risk or treatment outcomes.
2. Finance: Detecting fraudulent transactions or approving loans.
3. Marketing: Recommending products to customers or personalizing their experience.
## Challenges and Best Practices
Of course, with great power comes great responsibility! There are challenges to consider, like:
1. Overfitting: When the model memorizes the training data too well and can't handle new situations. Imagine training your puppy only on balls that bounce a certain way – it might get confused by a frisbee!
2. Underfitting: When the model is too simple and can't learn the patterns in the data. Our puppy wouldn't learn "fetch" at all if we just showed it a picture of a ball.
3. Data Quality: Just like a messy room makes learning difficult, dirty data can hinder a machine's learning process.
> But with careful planning and the right tools, supervised learning can be a powerful asset in our technological toolbox.
> Now you're equipped to tackle the world of supervised learning! It's like having a superpower that lets you train machines to solve real-world problems. So, put on your data scientist cape and get ready to make a difference, one machine learning model at a time!
Happy Learning !
Please do comment below if you like the content or not
Have any questions or ideas or want to collaborate on a project, here is my [linkedin](https://www.linkedin.com/in/abhinav-yadav-482a4a26b/)
| abhinav_yadav_554cab962bb |
1,898,035 | Structured Approach to Designing an Airline Management System | First, I want to emphasize that you should Use these characteristics as a guideline, not a rigid set... | 0 | 2024-06-23T18:38:22 | https://dev.to/muhammad_salem/structured-approach-to-designing-an-airline-management-system-47if | First, I want to emphasize that you should Use these characteristics as a guideline, not a rigid set of rules. Your judgment and experience will play a crucial role in making informed decisions during OOA.
Here's how I can help you identify valid objects during Object-Oriented Analysis (OOA) and refine the selection characteristics we discussed earlier:
**Beyond Nouns:**
* While nouns are a good starting point, don't limit yourself strictly to them. Verbs can also indicate potential objects representing actions or processes within the system. Look for concepts with well-defined responsibilities.
**Focus on Responsibilities:**
* A key aspect of object identification is identifying the responsibilities (behaviors) of a potential object. These responsibilities become the object's methods. If a noun doesn't have clear and distinct responsibilities, it might be better suited as an attribute of another object.
**Refined Selection Characteristics:**
1. **Retained Information and Responsibilities:** Combine these characteristics. The object should encapsulate data (attributes) that needs to be remembered and operations (methods) that manipulate that data or interact with other objects.
2. **Granularity and Cohesion:** Look for a balance between having enough attributes and functionalities to be useful and avoiding becoming a "god object" that does too much. Strive for focused and cohesive objects with clear responsibilities.
3. **Completeness:** An object should have a complete set of data and functionalities to fulfill its intended purpose. Avoid situations where critical functionalities reside outside the object, leading to scattered logic.
4. **Independence:** Objects should strive to be as independent as possible from other objects. This promotes loose coupling and easier maintenance. However, some collaboration is often necessary.
5. **Reusability:** Consider the potential reusability of the object. Can it be used in other parts of the system or even future projects? Designing reusable objects promotes code efficiency.
**Additional Tips:**
* **Domain Expertise:** Leverage your understanding of the problem domain to identify objects that reflect real-world concepts relevant to the system.
* **Scenarios and Use Cases:** Analyze user stories, use cases, and system behavior to identify objects that participate in these scenarios and fulfill specific functionalities.
* **Maintainability:** Always consider the long-term maintainability of your object model. Strive for a clear and well-organized structure that can be easily understood and modified in the future.
**Remember:** Object identification is an iterative process. As you gather more information and refine your understanding of the system, you might revisit your initial decisions and adjust your object model accordingly. The goal is to create a model that accurately reflects the real-world domain and facilitates the design of a well-structured and maintainable system.
# Structured Approach to Designing an Airline Management System
## Table of Contents
1. **Introduction**
2. **Identifying Candidate Objects**
3. **Applying Selection Characteristics**
4. **Refining Objects**
5. **Defining Attributes and Operations**
6. **Designing Class Diagrams**
7. **Handling Complexities and Ensuring Scalability**
8. **Conclusion**
---
## 1. Introduction
Designing a robust, scalable, and maintainable Airline Management System (AMS) involves a structured approach to identifying and refining objects. This example will illustrate how to methodically derive the necessary objects and their interactions based on the provided requirements.
## 2. Identifying Candidate Objects
### Problem Statement
We will focus on the following set of requirements while designing the Airline Management System:
1. **Flight Search**: Customers should be able to search for flights for a given date and source/destination airport.
2. **Ticket Reservation**: Customers should be able to reserve a ticket for any scheduled flight. Customers can also build a multi-flight itinerary.
3. **Flight Information**: Users of the system can check flight schedules, their departure time, available seats, arrival time, and other flight details.
4. **Multi-passenger Reservation**: Customers can make reservations for multiple passengers under one itinerary.
5. **Admin Functions**: Only the admin of the system can add new aircraft, flights, and flight schedules. Admin can cancel any pre-scheduled flight (all stakeholders will be notified).
6. **Reservation Cancellation**: Customers can cancel their reservation and itinerary.
7. **Crew Assignment**: The system should be able to handle the assignment of pilots and crew members to flights.
8. **Payments**: The system should be able to handle payments for reservations.
9. **Notifications**: The system should be able to send notifications to customers whenever a reservation is made/modified or there is an update for their flights.
### Candidate Objects
From the problem statement, we identify the following candidate objects (nouns):
- Customer
- Flight
- Date
- Airport
- Ticket
- Itinerary
- User
- Seat
- Admin
- Aircraft
- Schedule
- Pilot
- CrewMember
- Reservation
- Payment
- Notification
## 3. Applying Selection Characteristics
### A. Retained Information
- **Flight**: Information about flights must be retained.
- **Customer**: Customer details must be retained for reservations and notifications.
- **Ticket**: Details of tickets must be retained.
- **Itinerary**: Itineraries must be stored.
- **Admin**: Admin credentials and permissions must be retained.
- **Aircraft**: Aircraft details must be stored.
- **Schedule**: Flight schedules must be stored.
- **Reservation**: Reservation details must be retained.
- **Payment**: Payment records must be stored.
- **Notification**: Notification details must be stored.
### B. Needed Services
- **Flight**: Search, schedule management, seat availability check.
- **Customer**: Search flights, make reservations, cancel reservations.
- **Ticket**: Generate ticket, modify ticket.
- **Itinerary**: Build, modify, cancel.
- **Admin**: Add/modify/cancel flights and schedules.
- **Aircraft**: Add, modify details.
- **Schedule**: Add/modify schedules.
- **Reservation**: Make, modify, cancel reservations.
- **Payment**: Process payments, refund.
- **Notification**: Send notifications, manage subscriptions.
### C. Multiple Attributes
- **Flight**: Flight number, departure time, arrival time, source, destination, aircraft, schedule.
- **Customer**: Name, contact information, user ID, email.
- **Ticket**: Ticket number, flight details, customer details, seat number.
- **Itinerary**: List of flights, customer details.
- **Admin**: Admin ID, permissions.
- **Aircraft**: Model, capacity, manufacturer, tail number.
- **Schedule**: Flight, date, time.
- **Reservation**: Reservation ID, customer, itinerary, payment status.
- **Payment**: Amount, status, method, transaction ID.
- **Notification**: Type, recipient, message.
### D. Common Attributes
- **Flight**: All flights have flight number, departure/arrival times, source/destination, etc.
- **Customer**: All customers have name, contact info, etc.
- **Ticket**: All tickets have ticket number, flight details, etc.
- **Itinerary**: All itineraries have a list of flights, customer details.
- **Admin**: All admins have admin ID, permissions.
- **Aircraft**: All aircraft have model, capacity, etc.
- **Schedule**: All schedules have flight, date, time.
- **Reservation**: All reservations have reservation ID, customer, itinerary, etc.
- **Payment**: All payments have amount, status, etc.
- **Notification**: All notifications have type, recipient, message.
### E. Common Operations
- **Flight**: Search, check availability.
- **Customer**: Search flights, make reservations, cancel reservations.
- **Ticket**: Generate, modify.
- **Itinerary**: Build, modify, cancel.
- **Admin**: Add/modify/cancel flights, schedules.
- **Aircraft**: Add, modify.
- **Schedule**: Add, modify.
- **Reservation**: Make, modify, cancel.
- **Payment**: Process, refund.
- **Notification**: Send, manage.
### F. Essential Requirements
- **Flight**: Essential for airline operation.
- **Customer**: Essential for making reservations.
- **Ticket**: Essential for boarding and flight tracking.
- **Itinerary**: Essential for managing multi-flight trips.
- **Admin**: Essential for managing the system.
- **Aircraft**: Essential for flight operations.
- **Schedule**: Essential for flight timings.
- **Reservation**: Essential for booking flights.
- **Payment**: Essential for processing reservations.
- **Notification**: Essential for informing customers.
## 4. Refining Objects
Based on the selection characteristics, we finalize the following objects:
- **Flight**
- **Customer**
- **Ticket**
- **Itinerary**
- **Admin**
- **Aircraft**
- **Schedule**
- **Reservation**
- **Payment**
- **Notification**
## 5. Defining Attributes and Operations
### A. Flight
#### Attributes:
- FlightNumber
- DepartureTime
- ArrivalTime
- Source
- Destination
- Aircraft
- Schedule
- AvailableSeats
#### Operations:
- searchFlight(Date, Source, Destination)
- checkAvailability()
- getFlightDetails()
### B. Customer
#### Attributes:
- Name
- ContactInformation
- UserID
- Email
#### Operations:
- searchFlights(Date, Source, Destination)
- makeReservation(Itinerary)
- cancelReservation(ReservationID)
- buildItinerary(List<Flight>)
- modifyItinerary(ItineraryID, List<Flight>)
### C. Ticket
#### Attributes:
- TicketNumber
- FlightDetails
- CustomerDetails
- SeatNumber
#### Operations:
- generateTicket(Reservation)
- modifyTicket(TicketID, FlightDetails)
### D. Itinerary
#### Attributes:
- ItineraryID
- List<Flight>
- CustomerDetails
#### Operations:
- buildItinerary(List<Flight>)
- modifyItinerary(ItineraryID, List<Flight>)
- cancelItinerary(ItineraryID)
### E. Admin
#### Attributes:
- AdminID
- Permissions
#### Operations:
- addFlight(Flight)
- modifyFlight(FlightID, FlightDetails)
- cancelFlight(FlightID)
- addAircraft(Aircraft)
- modifyAircraft(AircraftID, AircraftDetails)
- addSchedule(Schedule)
- modifySchedule(ScheduleID, ScheduleDetails)
### F. Aircraft
#### Attributes:
- Model
- Capacity
- Manufacturer
- TailNumber
#### Operations:
- addAircraft(AircraftDetails)
- modifyAircraft(AircraftID, AircraftDetails)
### G. Schedule
#### Attributes:
- Flight
- Date
- Time
#### Operations:
- addSchedule(ScheduleDetails)
- modifySchedule(ScheduleID, ScheduleDetails)
### H. Reservation
#### Attributes:
- ReservationID
- Customer
- Itinerary
- PaymentStatus
#### Operations:
- makeReservation(Itinerary)
- modifyReservation(ReservationID, Itinerary)
- cancelReservation(ReservationID)
### I. Payment
#### Attributes:
- Amount
- Status
- Method
- TransactionID
#### Operations:
- processPayment(ReservationID, Amount)
- refundPayment(TransactionID)
### J. Notification
#### Attributes:
- Type
- Recipient
- Message
#### Operations:
- sendNotification(Recipient, Message)
- manageSubscriptions(CustomerID, Preferences)
## The key is to focus on identifying objects that represent complete and meaningful concepts within the system domain. The number of attributes is a factor, but not the sole deciding criterion. Consider the nature of the attributes, their cohesion, and the overall complexity of the model.
| muhammad_salem | |
1,898,033 | Deploying an Application Using Apache as a Web Server | Deploying an application using Apache as a web server is a fundamental skill for web developers and... | 0 | 2024-06-23T18:33:59 | https://dev.to/iaadidev/deploying-an-application-using-apache-as-a-web-server-1f9j | apache, deployment, webdev, beginners |
Deploying an application using Apache as a web server is a fundamental skill for web developers and system administrators. Apache, an open-source HTTP server, is renowned for its robustness, flexibility, and widespread use. This blog will guide you through the steps to deploy a web application using Apache, with relevant code snippets to ensure a smooth deployment process.
## Table of Contents
1. Introduction
2. Prerequisites
3. Installing Apache
4. Configuring Apache
5. Deploying a Static Website
6. Deploying a Dynamic Website (PHP Application)
7. Security Considerations
8. Conclusion
## 1. Introduction
Apache HTTP Server, commonly referred to as Apache, is a free and open-source web server that delivers web content through the internet. By following this guide, you will learn how to install and configure Apache on a Linux server and deploy both static and dynamic web applications.
## 2. Prerequisites
Before we begin, ensure you have the following:
- A server running a Linux distribution (e.g., Ubuntu, CentOS).
- Root or sudo access to the server.
- Basic knowledge of the Linux command line.
- Your application ready for deployment.
## 3. Installing Apache
To start, we need to install Apache on our server. The installation process varies slightly depending on the Linux distribution you're using. Here, we'll cover the installation for both Ubuntu and CentOS.
### On Ubuntu
1. Update the package index:
```bash
sudo apt update
```
2. Install Apache:
```bash
sudo apt install apache2
```
3. Start and enable Apache to run on boot:
```bash
sudo systemctl start apache2
sudo systemctl enable apache2
```
### On CentOS
1. Update the package index:
```bash
sudo yum update
```
2. Install Apache:
```bash
sudo yum install httpd
```
3. Start and enable Apache to run on boot:
```bash
sudo systemctl start httpd
sudo systemctl enable httpd
```
After installation, you can verify Apache is running by visiting your server's IP address in a web browser. You should see the Apache default welcome page.
## 4. Configuring Apache
Apache configuration files are typically located in the `/etc/apache2` directory on Ubuntu and `/etc/httpd` on CentOS. The main configuration file is `httpd.conf` or `apache2.conf`.
### Configuring a Virtual Host
Virtual Hosts allow you to host multiple websites on a single server. Here’s how to set up a virtual host:
1. Create a directory for your website:
```bash
sudo mkdir -p /var/www/yourdomain.com/public_html
```
2. Set permissions for the directory:
```bash
sudo chown -R $USER:$USER /var/www/yourdomain.com/public_html
sudo chmod -R 755 /var/www/yourdomain.com
```
3. Create a sample index.html file:
```bash
echo "<html><body><h1>Welcome to YourDomain.com!</h1></body></html>" > /var/www/yourdomain.com/public_html/index.html
```
4. Create a virtual host configuration file:
```bash
sudo nano /etc/apache2/sites-available/yourdomain.com.conf
```
Add the following content:
```apache
<VirtualHost *:80>
ServerAdmin webmaster@yourdomain.com
ServerName yourdomain.com
ServerAlias www.yourdomain.com
DocumentRoot /var/www/yourdomain.com/public_html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
```
5. Enable the new virtual host:
```bash
sudo a2ensite yourdomain.com.conf
sudo systemctl restart apache2
```
On CentOS, the steps are similar, but the directory paths differ slightly. For example, the sites-available directory does not exist by default, so you need to create it.
## 5. Deploying a Static Website
Deploying a static website is straightforward. Simply place your HTML, CSS, and JavaScript files in the `DocumentRoot` directory defined in your virtual host configuration.
For example, if your virtual host configuration points to `/var/www/yourdomain.com/public_html`, ensure all your static files are in this directory. Apache will serve these files directly to clients.
## 6. Deploying a Dynamic Website (PHP Application)
To deploy a PHP application, you need to install PHP and configure Apache to handle PHP files.
### Installing PHP
On Ubuntu:
```bash
sudo apt install php libapache2-mod-php
```
On CentOS:
```bash
sudo yum install php php-mysql
```
Restart Apache to apply the changes:
```bash
sudo systemctl restart apache2 # On Ubuntu
sudo systemctl restart httpd # On CentOS
```
### Configuring Apache to Handle PHP
Apache is already configured to handle PHP files after installing `libapache2-mod-php` (on Ubuntu) or `php` (on CentOS). Place your PHP application files in the `DocumentRoot` directory.
For example, if your PHP application is named `index.php`, place it in `/var/www/yourdomain.com/public_html/index.php`.
### Testing Your PHP Application
Create a simple PHP file to test the configuration:
```bash
echo "<?php phpinfo(); ?>" > /var/www/yourdomain.com/public_html/info.php
```
Visit `http://yourdomain.com/info.php` in your web browser. If PHP is configured correctly, you will see the PHP information page.
### Deploying Your PHP Application
Upload your entire PHP application to the `DocumentRoot` directory. Ensure your application files and directories have the correct permissions:
```bash
sudo chown -R www-data:www-data /var/www/yourdomain.com/public_html
sudo chmod -R 755 /var/www/yourdomain.com/public_html
```
### Configuring MySQL (Optional)
If your PHP application requires a MySQL database, you need to install and configure MySQL.
On Ubuntu:
```bash
sudo apt install mysql-server
```
On CentOS:
```bash
sudo yum install mariadb-server
sudo systemctl start mariadb
sudo systemctl enable mariadb
```
Secure the MySQL installation:
```bash
sudo mysql_secure_installation
```
Create a database and user for your application:
```bash
sudo mysql -u root -p
```
Inside the MySQL shell:
```sql
CREATE DATABASE yourdatabase;
CREATE USER 'youruser'@'localhost' IDENTIFIED BY 'yourpassword';
GRANT ALL PRIVILEGES ON yourdatabase.* TO 'youruser'@'localhost';
FLUSH PRIVILEGES;
EXIT;
```
Configure your PHP application to connect to the MySQL database using the database name, user, and password created above.
## 7. Security Considerations
### Configuring Firewalls
Ensure your firewall allows HTTP and HTTPS traffic. On Ubuntu, you can use `ufw`:
```bash
sudo ufw allow 'Apache Full'
```
On CentOS, use `firewalld`:
```bash
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
```
### Enabling SSL
For secure connections, enable SSL on your Apache server. Use Certbot to obtain and install a free SSL certificate from Let's Encrypt.
Install Certbot:
On Ubuntu:
```bash
sudo apt install certbot python3-certbot-apache
```
On CentOS:
```bash
sudo yum install certbot python3-certbot-apache
```
Obtain and install the SSL certificate:
```bash
sudo certbot --apache
```
Follow the prompts to configure SSL for your domain. Certbot will automatically configure Apache to use the new SSL certificate.
### Securing Apache
Edit the Apache configuration file to improve security. Open `/etc/apache2/apache2.conf` (Ubuntu) or `/etc/httpd/conf/httpd.conf` (CentOS) and make the following changes:
- Disable directory listing:
```apache
<Directory /var/www/>
Options -Indexes
</Directory>
```
- Hide Apache version and OS details:
```apache
ServerTokens Prod
ServerSignature Off
```
- Limit request size to prevent DoS attacks:
```apache
LimitRequestBody 10485760
```
Restart Apache to apply the changes:
```bash
sudo systemctl restart apache2 # On Ubuntu
sudo systemctl restart httpd # On CentOS
```
## 8. Conclusion
Deploying an application using Apache as a web server involves installing and configuring Apache, setting up virtual hosts, and securing your server. Whether you're deploying a static website or a dynamic PHP application, Apache provides a robust and flexible platform for web hosting.
By following the steps outlined in this guide, you should be able to deploy your web application with confidence. Remember to keep your server and applications up to date with the latest security patches to protect against vulnerabilities.
Happy deploying! | iaadidev |
1,898,032 | I utilized Twilio’s powerful SMS and Voice APIs to build the reminder system: | What I Built I developed an automated reminder system that leverages Twilio's SMS and Voice APIs to... | 0 | 2024-06-23T18:32:41 | https://dev.to/aditya_kushwaha_0a7aa61d6/i-utilized-twilios-powerful-sms-and-voice-apis-to-build-the-reminder-system-2bcc | twiliochallenge | What I Built
I developed an automated reminder system that leverages Twilio's SMS and Voice APIs to send appointment reminders via text messages and voice calls. This system includes basic AI functionality to handle user interactions, such as confirming or rescheduling appointments through SMS and IVR (Interactive Voice Response) calls.
Demo
You can access the app here. Below are some screenshots of the application:
Twilio and AI
I utilized Twilio’s powerful SMS and Voice APIs to build the reminder system:
SMS Reminders: The system uses Twilio’s SMS API to schedule and send automated text message reminders for upcoming appointments.
Voice Call Reminders: Using Twilio’s Voice API, the system schedules and makes automated voice calls to remind users of their appointments. I used TwiML (Twilio Markup Language) to create the call flow, which allows users to confirm or reschedule their appointments during the call.
AI Integration: I integrated the spaCy NLP library to add basic AI capabilities. This allows the system to understand and process user responses such as "yes," "no," "reschedule," and "cancel." The AI component helps in providing a more interactive and user-friendly experience.
Here is a code snippet showcasing the integration:
python
Copy code
from flask import Flask, request
import spacy
import json
from twilio.rest import Client
import requests
app = Flask(__name__)
# Load spaCy model
nlp = spacy.load("en_core_web_sm")
# Load config and user data
with open('config.json', 'r') as f:
config = json.load(f)
# Twilio configuration
twilio_config = {
"account_sid": config["account_sid"],
"auth_token": config["auth_token"],
"twilio_number": config["twilio_number"]
}
client = Client(twilio_config['account_sid'], twilio_config['auth_token'])
def translate_text(text, target_language):
api_url = "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=" + target_language
headers = {
'Ocp-Apim-Subscription-Key': '<your_subscription_key>',
'Content-Type': 'application/json'
}
body = [{'text': text}]
response = requests.post(api_url, headers=headers, json=body)
return response.json()[0]['translations'][0]['text']
def personalize_message(user, message_template):
return message_template.format(name=user['name'], time=user['appointment_time'])
@app.route('/handle_voice_response', methods=['POST'])
def handle_voice_response():
# Handle user response from voice call
pass
if __name__ == '__main__':
app.run(debug=True)
Additional Prize Categories
This submission qualifies for the following additional prize categories:
Twilio Times Two: This project makes use of both Twilio’s SMS and Voice APIs, providing a robust and flexible reminder system.
Impactful Innovators: The system is designed to reduce the number of missed appointments by providing timely and interactive reminders, potentially benefiting healthcare providers, businesses, and their clients.
Thank you for considering my submission! | aditya_kushwaha_0a7aa61d6 |
1,898,031 | Event Booking System with Twilio and OpenAi | This is a submission for the Twilio Challenge What I Built I built an event booking... | 0 | 2024-06-23T18:32:20 | https://dev.to/toubielawbar/event-booking-system-with-twilio-and-openai-15m3 | devchallenge, twiliochallenge, ai, twilio | *This is a submission for the [Twilio Challenge ](https://dev.to/challenges/twilio)*
## What I Built
<!-- Share an overview about your project. -->
I built an event booking system built using Twilio and AI allows users to book events through various communication channels like SMS and voice calls.
## Demo
<!-- Share a link to your app and include some screenshots here. -->
https://caperstack.com
## Twilio and AI
<!-- Tell us how you leveraged Twilio’s capabilities with AI -->
## Additional Prize Categories
<!-- Does your submission qualify for any additional prize categories (Twilio Times Two, Impactful Innovators, Entertaining Endeavors)? Please list all that apply. -->
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image (if you want). -->
<!-- Thanks for participating! → | toubielawbar |
1,897,903 | AWS Summit TokyoのAWS Jamに参加しました | AWS Jamの紹介 AWS... | 0 | 2024-06-23T18:30:39 | https://dev.to/regent0ro/aws-summit-tokyonoaws-jamnican-jia-simasita-50n7 | aws, awssummit, awsjam | ## AWS Jamの紹介
AWS JamとはAWSのユースケースに基づいた問題をチームで解決に取り組むイベントです。問題を解決するとチームに得点が入り、チーム対抗で得点を競うようなゲーム形式で、楽しくAWSを学ぶことができます。
AWS Jamは AWS クラスルームトレーニングまたはAWS re:inventやAWS re:inforceなどのグローバルイベントでも開催されいます。
AWS Summit Tokyoでも毎年AWS Jamが開催されていて、セッションの申し込み開始からすぐ満席になるほど大人気なセッションの一つとなっています。私の場合も最初の申し込みでは申し込めませんでしたが、追加枠の申し込みで幸い参加することができました。
## 当日の様子
### スケジュールとチーム構成
当日のスケジュールは以下のような感じです。
12:20~12:50: 受付開始、準備
13:00~13:50: 進め方の説明、チーム内で自己紹介
13:50~16:20: AWS Jamを楽しむ!
16:20~17:00: 結果発表、振り返り
会場に入ってくじ引き、そこに書かれていた数字でチームが構成されます。
今回は全部で28チーム、1チーム4人の構成でした。
### AWS Jamの説明
AWS Jamの説明ではゲームの進め方について簡単に説明があったあと、問題に取り組むときはソロー型ではなく、ペア型(またはモブ型)を推奨するとの案内がありました。
<figure>

<figcaption>AWS Jamの取り組み方について<br><a href="https://aws.amazon.com/jp/blogs/news/aws-jam-report-aws-summit-tokyo-2023" target="_blank">AWS Jam 実施レポート @ AWS Summit Tokyo 2023</a>より</figcaption>
</figure>
ソロー型が一番効率はよいですが、AWS Jamは「楽しく学ぶ」ことが一番の目的なので、ペアまたはモブ型でチームで相談しながら進めることをおすすめしています。
### ゲーム開始
いよいよゲーム開始です。
問題はEasy,Medium,Hardに分かれていて、問題ごとに用意されたAWS環境で問題を買い消すして得点するような方式となっています。分野はネットワークからDevOpsまで幅広かったです。また、よく利用されているサービスを利用する問題もある反面、全く触ったことないサービスを使う問題もいくつかありました。
また、問題ごとにヒントがあるので、どうしてもわからない場合はヒントを見ることもできます。ペナルティーで得点できる点数が減ってしまいますが、悩みすぎて時間切れになるよりは、適切なタイミングでヒントを見るのが得点につながったりもします。
ヒントでは参考になるドキュメントや確認ポイントを教えてくれるので、あんなに悩んでいた課題がヒントを見た瞬間、一瞬でクリアできることもあります。そして自分のトラブルシューティングのスキルが不足していることに痛感します。笑
### クロージング
ゲームが終了したら最終結果が発表され、一番多くの得点をした1位にはトロフィー、2,3位にはメダルが授与されました。
その後、チームごとに振り返りの時間をもち、集合写真を撮ることでAWS SummitでのAWS Jamは終了しました。
## 全体的な感想
私のチームは最終的に上位3位には入れませんでしたが、10位の中には入ることができました。イベントの途中は一時期1,2位になったときもあったので少し悔しいですね。
いやいや.....もっと解けたのに悔しい!それでも楽しかった!またやりたい!
その分AWSの勉強にはすごくなったかと思います。
自分で解決策を必死で悩み、最後に答えがわかってもわからなくても記憶に長く残ることになると思います。今回新しく知ったサービスは一生忘れない気がします(笑)
また、Summitでの参加は初めてでしたが、数年前に個別のイベントとして開かれたAWS Jamには参加したことはあります。Jamのテーマにもよりますが、新しく発表された機能やサービスが入る問題があったりどんどんアップデートされていくので、何回参加しても新しく楽しく学べることができそうです。
来年もまた機会があったら参加したいと思います。今年の順位より高い順位を目標に頑張ります!
<figure>

<figcaption>AWS JAMでもらったステッカー</figcaption>
</figure>
## おまけ
AWS Jamは AWS クラスルームトレーニング または AWSのイベント時に開催されますが、AWS Skill Builderの有料のサブスクリプションに加入すると、AWS Jam Journeyという形で個人的にJamを楽しむことができます。普通のAWS学習としてもよし、パブリックなイベント前にどんな感じか確認でもよしと思います。
https://skillbuilder.aws/jp/subscriptions | regent0ro |
1,898,029 | Get Personalized tasks extracted, and sent to you, from your meeting room! | This is a submission for the Twilio Challenge What I Built Get Personalized tasks... | 0 | 2024-06-23T18:29:37 | https://dev.to/santhosh_0484000/get-personalized-tasks-extracted-and-sent-to-you-from-your-meeting-room-2gj7 | devchallenge, twiliochallenge, ai, twilio | *This is a submission for the [Twilio Challenge ](https://dev.to/challenges/twilio)*
## What I Built
Get Personalized tasks extracted, and sent to you, from your meeting room! Using meeting transcripts
## Demo
https://github.com/santosh8309/twilio_challenge.git



## Twilio and AI
Daily scrum calls often involve lengthy discussions (over 30 minutes) about task allocation. Assigning priorities after each meeting can be time-consuming, especially with tight deadlines. This use case tackles this inefficiency. By leveraging Gemini and meeting transcripts, it automatically generates detailed tasks and sends them to team members via email or SMS (using Twilio). Functioning as a productivity tool, this approach can save Team Leads and Business Analysts at least 30 minutes per meeting. I(@santhosh_0484000) built the use case and posted it to the code exchange as well
## Additional Prize Categories
Twilio Times Two, Impactful Innovators, Entertaining Endeavors
| santhosh_0484000 |
1,898,030 | This is a submission for the Twilio Challenge | What I Built I developed an automated reminder system that leverages Twilio's SMS and Voice APIs to... | 0 | 2024-06-23T18:28:56 | https://dev.to/aditya_kushwaha_0a7aa61d6/this-is-a-submission-for-the-twilio-challenge-4hif | devchallenge, twiliochallenge, ai, twilio |
What I Built
I developed an automated reminder system that leverages Twilio's SMS and Voice APIs to send appointment reminders via text messages and voice calls. This system includes basic AI functionality to handle user interactions, such as confirming or rescheduling appointments through SMS and IVR (Interactive Voice Response) calls.
Demo
You can access the app here. Below are some screenshots of the application:
Twilio and AI
I utilized Twilio’s powerful SMS and Voice APIs to build the reminder system:
SMS Reminders: The system uses Twilio’s SMS API to schedule and send automated text message reminders for upcoming appointments.
Voice Call Reminders: Using Twilio’s Voice API, the system schedules and makes automated voice calls to remind users of their appointments. I used TwiML (Twilio Markup Language) to create the call flow, which allows users to confirm or reschedule their appointments during the call.
AI Integration: I integrated the spaCy NLP library to add basic AI capabilities. This allows the system to understand and process user responses such as "yes," "no," "reschedule," and "cancel." The AI component helps in providing a more interactive and user-friendly experience.
Here is a code snippet showcasing the integration:
python
Copy code
from flask import Flask, request
import spacy
import json
from twilio.rest import Client
import requests
app = Flask(__name__)
# Load spaCy model
nlp = spacy.load("en_core_web_sm")
# Load config and user data
with open('config.json', 'r') as f:
config = json.load(f)
# Twilio configuration
twilio_config = {
"account_sid": config["account_sid"],
"auth_token": config["auth_token"],
"twilio_number": config["twilio_number"]
}
client = Client(twilio_config['account_sid'], twilio_config['auth_token'])
def translate_text(text, target_language):
api_url = "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=" + target_language
headers = {
'Ocp-Apim-Subscription-Key': '<your_subscription_key>',
'Content-Type': 'application/json'
}
body = [{'text': text}]
response = requests.post(api_url, headers=headers, json=body)
return response.json()[0]['translations'][0]['text']
def personalize_message(user, message_template):
return message_template.format(name=user['name'], time=user['appointment_time'])
@app.route('/handle_voice_response', methods=['POST'])
def handle_voice_response():
# Handle user response from voice call
pass
if __name__ == '__main__':
app.run(debug=True)
Additional Prize Categories
This submission qualifies for the following additional prize categories:
Twilio Times Two: This project makes use of both Twilio’s SMS and Voice APIs, providing a robust and flexible reminder system.
Impactful Innovators: The system is designed to reduce the number of missed appointments by providing timely and interactive reminders, potentially benefiting healthcare providers, businesses, and their clients.
Thank you for considering my submission! | aditya_kushwaha_0a7aa61d6 |
1,898,027 | Machine Learning and ML Algorithms | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-23T18:26:13 | https://dev.to/selvadharshini/machine-learning-and-ml-algorithms-5ff | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
Machine Learning (ML) is a subset of AI that enables systems to learn and improve from experience without explicit programming. ML algorithms use data to identify patterns, make decisions, and predict outcomes. Key types include supervised learning, unsupervised learning, and reinforcement learning. Applications range from recommendation systems to medical diagnosis.
<!-- Explain a computer science concept in 256 characters or less. -->
## Additional Context
Machine learning algorithms enable computers to learn from data and make predictions or decisions without explicit programming. In 2024, they power real-world applications like personalized recommendations on streaming services, fraud detection in banking, autonomous driving, and healthcare diagnostics, improving accuracy and efficiency.
<!-- Please share any additional context you think the judges should take into consideration as it relates to your One Byte Explainer. -->
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image to your post (if you want). -->
<!-- Thanks for participating! --> | selvadharshini |
1,898,026 | Unlock Your Web Development Potential: Essential Skills for Success | Unlock Your Web Development Potential: Essential Skills for Success HTML & CSS - The foundation... | 0 | 2024-06-23T18:24:09 | https://dev.to/ridoy_hasan/unlock-your-web-development-potential-essential-skills-for-success-2in6 | webdev, programming, productivity, learning | Unlock Your Web Development Potential: Essential Skills for Success
HTML & CSS - The foundation of any website. Learn to structure and style effectively.
JavaScript - Bring your site to life with dynamic and interactive features.
Responsive Design - Ensure your site looks great on all devices.
Version Control (Git) - Track changes and collaborate seamlessly with others.
Frameworks & Libraries - Enhance productivity with tools like React, Angular, and Vue.js.
Continuous Learning - Stay updated with the latest trends and technologies.
P.S. Which of these skills have you found most challenging to master, and how did you overcome it?
get connected with me for more web related tips and tricks -https://www.linkedin.com/in/ridoy-hasan7
#WebDevelopment #Programming #Coding #TechSkills #ContinuousLearning #FrontendDevelopment #BackendDevelopment #ResponsiveDesign #JavaScript #HTML #CSS | ridoy_hasan |
1,897,977 | Migrando Microservicios de NestJS con TypeScript a Go: Una Semana de Descubrimientos | En la última semana, me he sumergido en el mundo de Go con el objetivo de migrar nuestros microservicios desarrollados en NestJS con TypeScript. Esta travesía ha sido un ejercicio intenso de desaprender ciertos paradigmas y adoptar otros, comprendiendo las diferencias fundamentales entre estos dos ecosistemas de desarrollo. | 0 | 2024-06-23T18:18:19 | https://dev.to/devjaime/migrando-microservicios-de-nestjs-con-typescript-a-go-una-semana-de-descubrimientos-4585 | nestjs, go, spanish | ---
title: Migrando Microservicios de NestJS con TypeScript a Go: Una Semana de Descubrimientos
published: true
description: En la última semana, me he sumergido en el mundo de Go con el objetivo de migrar nuestros microservicios desarrollados en NestJS con TypeScript. Esta travesía ha sido un ejercicio intenso de desaprender ciertos paradigmas y adoptar otros, comprendiendo las diferencias fundamentales entre estos dos ecosistemas de desarrollo.
tags: #nestjs #golang
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-23 18:15 +0000
---
## Migrando Microservicios de NestJS con TypeScript a Go: Una Semana de Descubrimientos
En la última semana, me he sumergido en el mundo de Go con el objetivo de migrar nuestros microservicios desarrollados en NestJS con TypeScript. Esta travesía ha sido un ejercicio intenso de desaprender ciertos paradigmas y adoptar otros, comprendiendo las diferencias fundamentales entre estos dos ecosistemas de desarrollo.
## Nuestra Arquitectura en NestJS
En nuestro stack con NestJS, manejamos microservicios conectados a bases de datos PostgreSQL y Redis. Implementamos diversas estrategias de comunicación entre microservicios:
1. **Comunicación por Eventos**: Utilizamos Pub/Sub para suscripciones y tópicos que permiten la comunicación asíncrona entre microservicios.
2. **Backend for Frontend (BFF)**: Implementamos APIs REST protegidas con JWT, que sirven de intermediarios entre el frontend y la base de datos.
### Validaciones y Migraciones
La validación de DTOs y la migración de datos son cruciales en nuestro sistema. TypeScript nos ha permitido definir tipos estrictos y estructuras con Knex y TypeORM para manejar migraciones. Aunque eficaz, este enfoque requiere un conocimiento profundo del lenguaje y de cómo manipular flujos de datos a través de diferentes microservicios.
### Retos con NestJS
Detectamos problemas de **event loop** que afectaban la performance, los cuales abordamos usando la librería Clinic.js. Identificamos los cuellos de botella y optimizamos el uso de patrones de diseño junto con `async` y `await`. Sin embargo, manejar concurrencia en Node.js puede ser complejo y costoso en términos de recursos.
## Adentrándonos en Go
Al explorar Go, nos encontramos con una transición de paradigmas y una serie de diferencias significativas:
1. **Compilación y Tipado Estático**: A diferencia de TypeScript, Go es un lenguaje compilado con tipado estático fuerte, lo que nos fuerza a detectar errores en tiempo de compilación.
2. **Control de Flujo y Manejo de Errores**: Go simplifica el manejo de errores a través de su enfoque explícito de retorno de errores, en lugar de excepciones.
3. **Estructura de Datos y Memoria**: La asignación de memoria y la gestión de estructuras de datos en Go requiere una comprensión más profunda del hardware, lo cual es diferente al enfoque más abstracto de JavaScript.
### POO e Interfaces
En Go, aunque la orientación a objetos es soportada, se manifiesta de manera diferente. La ausencia de herencia tradicional y la utilización de interfaces proporciona una flexibilidad distinta que debe ser entendida a fondo para aprovechar al máximo.
### Ejemplos Comparativos
#### Validación de Datos
- **NestJS**: Usamos Decoradores en DTOs para la validación.
```typescript
import { IsString, IsInt } from 'class-validator';
class CreateUserDto {
@IsString()
name: string;
@IsInt()
age: number;
}
```
- **Go**: Usamos librerías como `go-playground/validator` para la validación.
```go
import (
"gopkg.in/go-playground/validator.v9"
)
type User struct {
Name string `validate:"required"`
Age int `validate:"gte=0"`
}
validate := validator.New()
user := &User{Name: "Alice", Age: 25}
err := validate.Struct(user)
```
#### Comunicación Asíncrona
- **NestJS**: Uso de `async/await` para manejar promesas.
```typescript
async function fetchData(): Promise<void> {
const data = await apiCall();
console.log(data);
}
```
- **Go**: Uso de gorutinas y canales para concurrencia.
```go
func fetchData() {
dataChan := make(chan string)
go func() {
dataChan <- apiCall()
}()
data := <-dataChan
fmt.Println(data)
}
```
### Herramientas y Configuración
En Go, hemos adoptado herramientas como **Gin** para APIs REST y **Gorm** como ORM. La configuración de nuestro entorno en VSCode con `make` para automatizar tareas ha sido crucial para mantener la productividad y adaptarnos a este nuevo flujo de trabajo.
## Reflexiones Finales
Migrar de NestJS con TypeScript a Go ha sido desafiante pero también gratificante. Mientras que NestJS ofrece una experiencia enriquecida en el desarrollo rápido de APIs con un enfoque en la reutilización y la abstracción, Go nos ha brindado un control más granular sobre la concurrencia y el rendimiento, esencial para aplicaciones altamente escalables.
Seguimos experimentando y ajustando nuestros flujos de trabajo, y a pesar de los desafíos, estamos entusiasmados con las posibilidades que ofrece Go para el futuro de nuestros microservicios.
---
Espero que este blog sirva como una guía y una inspiración para aquellos que estén considerando una transición similar. ¿Qué experiencias han tenido ustedes con la migración de tecnologías? ¿Qué retos y soluciones han encontrado en el camino?
¡Compartan sus historias y sigamos aprendiendo juntos!
| devjaime |
1,897,975 | Getting Started with Colud Acquisition | A post by Bhogadi Vidhey | 0 | 2024-06-23T18:09:52 | https://dev.to/vidheyb/getting-started-with-colud-acquisition-4m1g | vidheyb | ||
1,897,974 | AWS Billing and Cost Management | A post by Bhogadi Vidhey | 0 | 2024-06-23T18:09:18 | https://dev.to/vidheyb/aws-billing-and-cost-management-10da | vidheyb |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.