# Connect MongoDB with Node.js: A Practical Guide with Mongoose

*Published 2024-06-08 | https://dev.to/vyan/seamlessly-connect-mongodb-with-nodejs-a-practical-guide-with-mongoose-1gk6 | tags: webdev, javascript, node, react*

In the ever-evolving landscape of web development, building scalable and efficient applications has become a top priority. MongoDB, a powerful NoSQL document-oriented database, has emerged as a go-to solution for developers seeking flexibility and scalability. When combined with the versatility of Node.js, a high-performance JavaScript runtime, you can create robust applications that handle a wide range of use cases with ease.
While you can connect MongoDB with Node.js using the official MongoDB driver, many developers prefer to use Mongoose, an Object Document Mapping (ODM) library that provides a higher-level abstraction for working with MongoDB. Mongoose simplifies the interaction with MongoDB by providing a structured approach to defining data models, performing CRUD operations, and handling database validations.
Let's dive into the practical steps to seamlessly integrate MongoDB with your Node.js applications using Mongoose:
1. **Install Mongoose**
First, you'll need to install Mongoose by running the following command:
```
npm install mongoose dotenv
```
2. **Import Mongoose and Load Environment Variables**
Once the installation is complete, import Mongoose and the `dotenv` package (for loading environment variables) into your Node.js application:
```javascript
const mongoose = require('mongoose');
require('dotenv').config();
```
3. **Connect to MongoDB**
To connect to your MongoDB instance, you'll need to provide a connection string. Instead of hardcoding this sensitive information in your codebase, it's a best practice to store it in an environment variable for better security and portability.
Create a `.env` file in the root of your project and add your MongoDB connection string:
```
MONGODB_URI=mongodb://localhost:27017/myDatabase
```
Then, in your Node.js application, you can connect to MongoDB using the `mongoose.connect()` method and the connection string from your environment variable:
```javascript
const uri = process.env.MONGODB_URI;
// Note: the useNewUrlParser and useUnifiedTopology options are
// no-ops since Mongoose 6, so passing the URI alone is enough.
mongoose.connect(uri)
.then(() => console.log('Connected to MongoDB'))
.catch(err => console.error('Failed to connect to MongoDB', err));
```
4. **Define a Data Model**
With Mongoose, you define data models using a schema. A schema describes the structure of the documents in a particular collection, including fields, data types, and validation rules.
```javascript
const userSchema = new mongoose.Schema({
name: { type: String, required: true },
age: { type: Number, default: 0 },
});
const User = mongoose.model('User', userSchema);
```
In this example, we define a `userSchema` with two fields: `name` (required and of type `String`) and `age` (optional, with a default value of `0`).
5. **Interact with MongoDB**
Once you've defined your data model, you can interact with MongoDB by performing CRUD operations using Mongoose's model methods. Here's an example of how to create a new document:
```javascript
const newUser = new User({ name: 'John Doe', age: 30 });
newUser.save()
.then(user => console.log('New user created:', user))
.catch(err => console.error('Failed to create user:', err));
```
You can perform other CRUD operations using Mongoose's powerful query syntax, such as `find`, `updateOne`, `deleteMany`, and more.
6. **Error Handling and Closing the Connection**
When you're done with your application, it's crucial to handle errors gracefully and close the connection to MongoDB. You can achieve this by adding event listeners to the Mongoose connection:
```javascript
mongoose.connection.on('error', err => {
console.error('MongoDB connection error:', err);
});
process.on('SIGINT', async () => {
  // close() returns a promise in Mongoose 7+, so await it
  await mongoose.connection.close();
  console.log('MongoDB connection closed');
  process.exit(0);
});
```
This code listens for errors on the Mongoose connection and gracefully closes the connection when the process receives the `SIGINT` signal (e.g., when you press `Ctrl+C` in the terminal).
By following these practical steps and incorporating Mongoose into your Node.js applications, you can seamlessly integrate MongoDB, unlocking a world of possibilities for building scalable, efficient, data-driven applications. Mongoose's structured approach to data models, CRUD operations, and validation ultimately improves your development experience and productivity.

*Author: vyan*
# Setting Up Docker in a Next.js Project: A Comprehensive Guide

*Published 2024-06-08 | https://dev.to/hasancse/setting-up-docker-in-a-nextjs-project-a-comprehensive-guide-3m5d | tags: docker, nextjs, webdev, tutorial*

Docker is a powerful tool for creating, deploying, and managing containerized applications. Using Docker in a Next.js project can streamline your development workflow, ensure consistent environments, and simplify deployment. In this blog post, we'll walk through setting up Docker for a Next.js project from scratch.
## Table of Contents
1. Introduction
2. Prerequisites
3. Setting Up a New Next.js Project
4. Creating a Dockerfile
5. Writing the Docker Compose File
6. Building and Running the Docker Container
7. Conclusion
## 1. Introduction
Next.js is a popular React framework for building server-side rendered and statically generated applications. By containerizing a Next.js application with Docker, we can create a consistent development environment and easily deploy the application across different environments.
## 2. Prerequisites
Before we start, make sure you have the following installed on your machine:
- Docker
- Node.js (for initializing the Next.js project)
## 3. Setting Up a New Next.js Project
First, let's create a new Next.js project. Open your terminal and run the following commands:
```
npx create-next-app my-nextjs-app
cd my-nextjs-app
```
This will create a new Next.js project in a directory called my-nextjs-app and allow you to navigate into it.
## 4. Creating a Dockerfile

The Dockerfile is a script that contains a series of instructions on how to build a Docker image for your application. Create a file named `Dockerfile` (no extension) in the root of your project with the following content:
```
# Use the official Node.js image as a base
FROM node:16-alpine
# Set the working directory
WORKDIR /app
# Copy package.json and package-lock.json
COPY package*.json ./
# Install dependencies
RUN npm install
# Copy the rest of the application code
COPY . .
# Build the Next.js application
RUN npm run build
# Expose the port the app runs on
EXPOSE 3000
# Define the command to run the application
CMD ["npm", "start"]
```
This Dockerfile performs the following steps:
1. Uses the official Node.js 16 Alpine image as the base image.
2. Sets the working directory to /app.
3. Copies package.json and package-lock.json files to the working directory.
4. Installs the project dependencies.
5. Copies the rest of the application code to the container.
6. Builds the Next.js application.
7. Exposes port 3000 for the application.
8. Defines the command to start the application.
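One addition worth making (not shown in the original guide, so treat it as a suggested extra): without a `.dockerignore` file, the `COPY . .` step also copies your local `node_modules` and `.next` build output into the image, slowing the build and potentially shipping host-specific binaries. A minimal `.dockerignore` could look like this:

```
node_modules
.next
.git
npm-debug.log
```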
## 5. Writing the Docker Compose File
Docker Compose is a tool for defining and running multi-container Docker applications. It allows us to manage the Docker containers easily. Create a docker-compose.yml file in the root of your project with the following content:
```
version: '3.8'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      # anonymous volume so the image's installed node_modules
      # is not hidden by the bind mount above
      - /app/node_modules
    environment:
      - NODE_ENV=development
```
This docker-compose.yml file does the following:
1. Defines a service named web.
2. Builds the Docker image using the Dockerfile in the current directory.
3. Maps port 3000 on the host to port 3000 in the container.
4. Mounts the current directory to /app inside the container to enable live code reloading.
5. Sets the NODE_ENV environment variable to development.
## 6. Building and Running the Docker Container
Now, let's build and run our Docker container using Docker Compose. In your terminal, run the following command:
```
docker-compose up --build
```
This command will build the Docker image and start the container. You should see output indicating that the Next.js application is being built and started. Once the process is complete, open your browser and navigate to http://localhost:3000 to see your Next.js application running inside a Docker container.
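A couple of optional follow-up commands (not from the original post, but standard Docker Compose usage): `-d` runs the stack in the background, and `down` stops and removes the containers when you're finished:

```
docker-compose up -d --build   # start in detached mode
docker-compose logs -f web     # follow the app's logs
docker-compose down            # stop and remove the containers
```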
## 7. Conclusion
In this guide, we've covered how to set up Docker in a Next.js project. By creating a Dockerfile and a docker-compose.yml file, we've containerized the application and set up a development environment with Docker. This setup not only ensures consistency across different environments but also simplifies the deployment process.
Docker is a versatile tool that can greatly enhance your development workflow. As you continue to work with Docker, you'll discover more advanced configurations and optimizations to further improve your setup.
______________________________________________________________________
Feel free to reach out if you have any questions or run into any issues while setting up Docker in your Next.js project. Docker and Next.js together can create a powerful and efficient development environment, making your development process smoother and more enjoyable.
*Author: hasancse*
# Endless Summer

*Published 2024-06-08 | https://dev.to/srishti_01/endless-summer-4964 | tags: frontendchallenge, devchallenge, css*

This is a submission for the Frontend Challenge v24.04.17.
https://sriss-webweaver.github.io/frontend-challenge/
## Inspiration
A vibrant summer scene with a sliced watermelon, a melting ice cream stick, and two sunglasses resting. The colorful background features a sunset motif. This imagery evokes feelings of summertime fun, refreshment, and relaxation.

https://sriss-webweaver.github.io/frontend-challenge/
## Journey
Based on this summer feeling, I thought of ideas that people might associate with summer fun, like relaxing in a hammock, going to a music festival, or having fun near beaches.
I learned a lot about CSS properties and values, as well as layout techniques.
*Author: srishti_01*
# Reverse Engineering, First Contact - Part 2

*Published 2024-06-08 | https://dev.to/ryan_gozlyngg/engenharia-reversa-primeiro-contato-parte-2-m2g | tags: braziliandevs, beginners, tutorial*

## DIABLO - ORiON_CrackMe1 +18
This tutorial is aimed at beginners in reverse engineering, but I will try to explain, as much as possible, some aspects of Assembly and the Intel x86 architecture, so that even people who have never studied any of this can follow along and (if I don't fail at being clear) understand the processes.

And then, who knows, you might develop an interest in this field of study.

If you don't know how to use the x64dbg debugger, here is a brief introduction: https://dev.to/ryan_gozlyngg/engenharia-reversa-primeiro-contato-parte-1-2gih

This tutorial consists of cracking a program made exactly for that purpose.

It is extremely basic, made precisely for those getting started in reverse engineering.

"Cracking software" means: making the program bend to your will!

Some notes are included to help with things I believe the reader should know.
---
## Table of Contents

[Requirements for fully understanding this tutorial](#requirements-for-fully-understanding-this-tutorial)
[Terms used throughout the tutorial](#terms-used-throughout-the-tutorial)
[Link to the CrackMe program](#link-to-the-crackme-program)
[Static Analysis](#static-analysis)
[Examining the program's strings](#examining-the-programs-strings)
[Dynamic Analysis](#dynamic-analysis)
[How the Code (input) is processed](#how-the-code-input-is-processed)
[TEST CL, CL](#test-cl-cl)
[The EFLAGS Register](#the-eflags-register)
[CL](#cl)
[The Input-Handling Process](#the-input-handling-process)
[Tracing the origin of the value at the top of the stack](#tracing-the-origin-of-the-value-at-the-top-of-the-stack)
[Observing the stack](#observing-the-stack)
[Function Return](#function-return)
[Observing the crackme1.46B828 Function](#observing-the-crackme146b828-function)
[Finding values in the Dump](#finding-values-in-the-dump)
[Entering the crackme1.46B828 Function](#entering-the-crackme146b828-function)
[Understanding how we reached the result](#understanding-how-we-reached-the-result)
[If the jump is not taken, the following instructions are used](#if-the-jump-is-not-taken-the-following-instructions-are-used)
[Summarizing the function's logic](#summarizing-the-functions-logic)
[Leaving the function and obtaining the final result](#leaving-the-function-and-obtaining-the-final-result)
[SETNE - the instruction that produces the result](#setne---the-instruction-that-produces-the-result)
[Final Remarks](#final-remarks)
---
### Requirements for fully understanding this tutorial

* Basic knowledge of memory segments, especially the one called the stack
* Knowledge of number bases
* Basic knowledge of C/C++ programming

Here I hope to present the cracking of this software as a tutorial, using parts of the process to share some useful knowledge with readers who are just starting out and still have doubts, or difficulties, understanding something specific.

It is also dedicated to those who want to see how this kind of thing is done.

Besides, it is always good to see how other people solve problems, and how their minds work in certain activities, so that we can perhaps extract something for ourselves.

#### If you have already started studying reverse engineering, try to crack this software on your own first, then come back here to read.
___
### Terms used throughout the tutorial

- **Code, CODE or code**: the name of the input field; the same thing as "serial", "key", "password" or input. In this software, the developers called the input field **Code**.
- **Breakpoint**: a point where execution stops.
- **Set**: to apply something to something else; for example, "setting a breakpoint" means applying, defining, placing a breakpoint on a given instruction.
- **Binary**: refers to a program, an executable.
___
### Link to the CrackMe program

* Download it here: https://github.com/ReversingID/Crackmes-Repository/tree/master
* Find it at the path: ***Crackmes.DE\\1-very_easy_for_newbies\\windows\\diablo***
* Note: Whether or not you use a dedicated virtual machine to run the software is up to you.
_______________________________________________________________________
### Static Analysis

In short, "static analysis" is analysis done using only the information contained in the program file itself, also called the **binary**, obtained without executing the program.

To find out which architecture the program was compiled for, you can use the DIE - Detect It Easy tool:



DIE is a very powerful program; for now, let's stick to this information from its initial interface. I recommend exploring DIE in more depth.

In the box at the bottom, with heavy red highlighting, we can see some interesting information. "PE32" indicates that the program is a 32-bit executable. In the smaller boxes, marked in red, we can also see the **mode** and the **architecture**.
---
### Examining the program's strings

You can view the strings contained in the program using whatever software you prefer (including DIE, by clicking the "Strings" button on the initial interface):



There is no way to examine the strings one by one; with **3,750** strings, something could easily slip past us...

The most interesting strings are the ones the **programmer** created. Let's take a look at the place where initialized global and static variables are kept: the ***data segment***, or ***.data***.

I used ***IDA*** to access the data section (.data) mentioned above.

**IDA** is a program used for static analysis of binaries. It is very powerful and has many features; I won't say much about it here, but I recommend looking it up.

IDA has a free version: https://hex-rays.com/ida-free/

If you have never used IDA but downloaded it to try it out anyway, know that it shows several messages when loading a binary; don't worry about them for now, just click "OK" on everything until you reach the main screen. If you want to go deeper, I recommend starting with one of the many introductory IDA playlists on YouTube.

* In IDA: **SHIFT+F7** -> double-click ***.data*** -> click the **Hex view** tab -> scroll down while watching the side column:



We are looking at the data section inside the program, in hexadecimal, with its respective ASCII representation in the right-hand column (circled in red).

Tip: Here we can copy the strings into a notepad and test them one by one in the program's input field, since that is the easiest thing to do.

If you did that, then you already know the answer.

___
### Dynamic Analysis

Dynamic analysis consists of observing and analyzing the program while it runs.

I used the **x64dbg** debugger here, and as usual, the first thing I do is look for references to strings.

By finding the right string (in this case "Wrong Code! Try Again!") I can observe the instructions that run before it is used in the dialog box.

Following this string, I reach the moment it is used, and from there I can try to understand the logic behind the decision that judges whether our input is correct or not.

Open the program in x64dbg (don't forget that the program is 32-bit and needs to be opened in x32dbg.exe) and run it (pressing **F9**) until it opens and the input window appears.

x64dbg usually has some initial breakpoints that only trigger the first time the program is run, so if the **EIP** (the Instruction Pointer, responsible for indicating which instruction you are currently at; the little green arrow on the left shows it) is stopped, keep pressing **F9** to run the program until it is fully loaded and available to you (watch the Windows taskbar).

With the program already running, inside x64dbg:

1. Right-click in the middle of the debugger, in the CPU window
2. Go to: "Search for" -> "All User Modules" -> "String references"



You will be taken to this window. The previous one is up there, under **CPU**.

Here we see some strings that catch our attention...

If we start with this method, we do as before:

- We take the "distinctive" strings and test them as input, one by one.

You probably already know that this will work, but out of curiosity, if it hadn't, we would proceed as follows:

* Double-click the string "**Wrong Code! Try Again!**"

You will be taken to the instruction (where the string is referenced) that loads it in the function (probably **main**), inside the **CPU** window.

* Set a breakpoint on it by pressing **F2**
* Open the program's window and enter a code/input different from the expected one, any wrong one.
* Clicking "OK" makes the program run until it stops at your breakpoint.
* Now let's look for whatever decided that our answer is wrong; in other words, we try to understand how we ended up at the error message.

We are going to reverse engineer the program until we discover the criteria for a "right answer" and a "wrong answer".
---
### How the Code (input) is processed

Let's review the steps so far:*

1. We searched for string references in all **User Modules**
2. We found the string that appears in the dialog box when the input is wrong
3. We set a breakpoint on it
4. We ran the program after entering an arbitrary input

Scrolling up a bit from the breakpoint (set on "Wrong Code! try Again!"), we see that there is a `test cl, cl` instruction shortly before the instructions that show us the error message.

If you look at the instruction at [**004016D7**], which is a conditional jump **JE** (Jump if Equal),
``jump to crackme1.4016F6 if the result of TEST CL, CL equals 0``, it directs us to the other message: the success message.

Just double-click the `je crackme.4016F6` instruction and the program will redirect you to that address.

With this, we know that if **CL** is **zero**, it means we entered the right input/code.

Now we ask ourselves a question: what causes **CL** to hold the value zero, or even its current value?



Note: *If you didn't understand any of this part, read the next section, **TEST CL, CL**, carefully...*
___
### TEST CL, CL

First of all, what does the instruction ``TEST CL, CL`` mean?

If you already know, skip ahead to the answer to the question "what causes **CL** to hold the value zero?".

**The TEST instruction**: the program is testing whether **CL** is zero or not ("or not" being literally anything other than zero, such as -1, 1, 90000, 0xABC, 0xFFFF, 6, etc.).

**CL** is the rightmost part of the **ECX** register (see the image further below).

Registers are storage spaces inside the processor.

**TEST**: an instruction that compares two operands; in our case, it compares **CL** with itself.

With "TEST" the program performs a bit-by-bit (bitwise) operation called **AND**.

The **AND** operation is a logical operation performed on binary values.

If you don't know anything about logical operations, see: https://imasters.com.br/desenvolvimento/conheca-os-operadores-bitwise-bit-bit

Let's try to understand the bitwise operation used here.

***Note:** here the importance of knowing number bases becomes clear: you have a piece of data from reality and several ways to quantify it. We have cultural reasons to use the decimal base to quantify most things, so when we refer, for example, to the number ten, saying "we have ten trees here", we know exactly the quantity of a given piece of data from reality; in the example, we know we have ten trees.*

*For reasons that are also cultural, now, with the existence of computers, we use the hexadecimal, octal and binary bases in this field.*

*So we can say that we have: 10 trees in decimal, 0xA trees in hexadecimal, 012 trees in octal, and 0b1010 trees in binary.
The quantity in reality hasn't changed; only the way of representing it has.
The prefixes used here are a convention from the C language: ZERO (0) for octal, ZERO-X (0x) for hexadecimal, ZERO-B (0b) for binary, and NO PREFIX for decimal.*

*Each base has its own reason to exist. I recommend reading:*

* https://www.quora.com/Why-are-binary-octal-and-hexadecimal-number-systems-popular-in-computing
* https://www.computerengineeringconcepts.org/2.3-Binary-Octal-and-Hexadecimal
* https://en.wikipedia.org/wiki/Binary_number
* https://en.wikipedia.org/wiki/Octal
* https://en.wikipedia.org/wiki/Hexadecimal
First, an example of the bitwise **AND** operation with the number **7**, which in binary is **0111**:

7 AND 7  or  TEST 7, 7

0b0111
0b0111
------
0b0111

The **AND** operation: 0b1 AND 0b1 = 0b1
0b1 AND 0b0 = 0b0

It only yields 0b1 if **both** operands are 0b1.

If either operand is 0b0, the operation yields 0b0.

**On Windows: open your calculator, switch to programmer mode, and do the math yourself.**
**Note:** *The leading zero means nothing and doesn't even need to be there; 0111 and 111 are the same.*

*You also need to know about some size conventions, called WORD, DWORD and QWORD.*

*Observations: 1. WORD is often translated literally as "word", and you may see materials talking about "word size". 2. The actual size each WORD refers to can vary depending on the system.*

*Read:* https://mentebinaria.gitbook.io/engenharia-reversa/numeros/o-byte

*About the zero shown here: in short, a byte has eight bits. The number seven, represented according to a convention that asks for all numbers to be shown as BYTES, would look like this: 0000 0111. See? Eight bits (1 BYTE = 8 bits), in two columns of four values each.*

*Out of habit, I represent values that use four bits or fewer as "0000".*
#### The EFLAGS Register

We also have a special register called **EFLAGS**, which holds all the flags used by the system, each with a different purpose; they are set by certain operations in order to signal something (read the Intel manual, link below). I won't go into a long explanation.

To learn more about EFLAGS, check this part of the free Assembly book: https://mentebinaria.gitbook.io/assembly/aprofundando-em-assembly/flags-do-processador

For now, we need to know that this register contains a flag called the **ZeroFlag**.

Together with the comparison instructions ``cmp`` and ``test`` and the conditional jump instructions, this flag is used to control the program's execution flow.

On conditional jump instructions, see: http://unixwiz.net/techtips/x86-jumps.html

**Note:** *Download the Intel manual. It lists every instruction, along with the conditions required for its execution (the name is Jcc for all the conditions other than JMP):* https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html



*It is the first link on the site, pointed to by the red arrow in the image above.*

When the **TEST CL, CL** operation results in **0**, it sets the **ZeroFlag** to 1: **ZF=1**.

And that is what the combination of the **TEST** and **JE** (Jump if Equal) instructions does:

if(ZF==1) then Jump to location, else if(ZF==0) don't jump
---
### CL

**ECX** is a 32-bit register, and we can use just "part of it", "dividing it":




(Note: ignore **RCX**, which is the name used for the 64-bit register; I put it there so you know it exists.)

When we use a register such as **ECX**, we use ALL of it: our 4-byte data
(4 bytes are 32 bits) occupies the whole space, which covers the 32 bits of ECX, the 16 bits of CX, the 8 bits of CH, and the 8 bits of CL.

Keep in mind that **ECX** is one complete body, and we can use a specific part of it, forcing the computer to read/write only the desired part: CX, CH or CL.

"Fun fact": in the past there were 16-bit processors; **CH** held the 8 most significant bits and **CL** the 8 least significant bits, and it is still like that today, except that the registers now have more space; more space, more divisions.

To learn more about the **most significant** and **least significant** bits (MSB and LSB),
see: https://www.morningstar.io/post/2016/12/25/midi-msb-and-lsb#:~:text=MSB%20stands%20for%20most%20significant,4%20bits%20would%20be%200011

**CH** and **CL**: the "C" traditionally stands for Counter (but it is a general-purpose register, so it can be used for almost "anything"); the **H** means "High byte" and the **L** "Low byte"
(the E in ECX stands for Extended; remember it used to be CX, 16 bits? It was "extended" to 32 bits).

More about this: https://en.wikipedia.org/wiki/IA-32
___
### The Input-Handling Process

If you already know the right input/code, put a breakpoint on the ``test cl, cl`` instruction, then enter the correct input/code, run the program, and notice how **CL** holds the value **0** during the test; entering a wrong value, it holds 1 at that moment.

This gives us proof that this is really where the final check of the entered input happens.

Keep the breakpoint on the `test cl, cl` instruction, enter a wrong input, and run it again.



The EFLAGS register (circled in red, on the right) shows us, through the red mark below **ZF**, that the ZeroFlag will be modified by the next operation.

Circled in black is the value held in **CL**, also underlined in green in the registers column.

A wrong input/code was entered, and the ZeroFlag will be set to 0, indicating that the TEST operation did not yield 0. Do a step over and see for yourself.
___
Notice that **before** ``test cl, cl`` we have the instruction ``pop ecx``.

The ``pop`` instruction takes the value at the **top** of the stack (whose address is visible in the **ESP** register) and loads it into the operand, which in our case is **ECX**.

**Note:** *Don't forget that the **ESP** register holds the address of the top of the stack; at that address there is a value, and that is what pop "throws" into **ECX**.*

With a breakpoint on the instruction ``pop ecx``, we enter a wrong input and click "OK":



At this moment, the top of our stack looks like this (the input was "AAA", which is wrong):



Since the code/input is not the correct one, the top of the stack holds 1.

(If you already have the correct code/input, as mentioned earlier, enter it and see that, upon reaching this point, the stack has the value **0** at the top.) Pay attention to the address at the top of the stack; it is what will later lead us to the function that decides whether the input/code is right or wrong.

Only the least significant byte of **ECX**, that is, **CL**, is used in the next operation.

So, in the operation ``test cl, cl`` (here the same as ``test 1, 1``) the result is 1, the **ZeroFlag** is not set (``ZF=0``), and the program therefore **does not jump** to ``crackme1.4016F6``.
___
### Tracing the origin of the value at the top of the stack

Since we have already seen that it is the value, zero or one, at the top of the stack at the moment of ``pop ecx`` that determines our success or failure depending on the input, let's now try to find the origin of that number.

The values found on the stack during the breakpoint at ``pop ecx`` originate from some operation in the current function.

The instructions responsible for handling stack values are **POP** and **PUSH**.

**POP**: loads the value from the top of the stack into the location specified by the destination operand and then increments the stack pointer (**ESP**).

**PUSH**: decrements the stack pointer (**ESP**) and stores the source operand at the top of the stack.

Be sure to consult the Intel manual to learn more!

Remember that the value returned by a function is "stored" in the **EAX** register.

*Note: Here I assume you have already read the first tutorial, about x64dbg, where I briefly explain functions in x86 Assembly. Here is the link once again:* https://dev.to/ryan_gozlyngg/engenharia-reversa-primeiro-contato-parte-1-2gih

#### The step-by-step procedure:

1. Set a breakpoint on the three ``call`` instructions closest to ``test cl, cl``.
2. Run the program again until it reaches a breakpoint. To do this, just hit "Run" (shortcut: **F9**) so it leaves the breakpoints and asks for the input again.
3. Enter a WRONG message in the input/code field. Click "OK" to continue.
4. Step over with **F8** until the **EIP** passes the call instruction, and check the **EAX** register.
5. Check whether the value held in **EAX** is placed at the top of the stack (usually with a ``push eax`` instruction).

**PAY CLOSE ATTENTION TO STEP 5**: the address of the top of the stack, in **ESP**, must be ``0x19EFDC`` immediately after the ``call`` instruction. That is because, at the moment of the ``pop ecx`` at xxxxx, the (stack) address ``0x19EFD8`` is the one holding our value (0 or 1) for the ``test cl, cl``, and we are now assuming that this value was stored on the stack via a **PUSH** instruction, which decrements the address in **ESP**; and if you have read the Intel documentation, you know that **ESP** is decremented by **4**. So ``0x19EFDC - 4 = 0x19EFD8``.

(The manual tells us: "The operand size (16, 32, or 64 bits) determines the amount by which the stack pointer is decremented (2, 4 or 8)." In our case, as we will see shortly, the operand is **EAX**, hence 32 bits, decremented by 4.)
**Note:** *Setting a breakpoint - some details:*
*As you already know, we set a breakpoint on an instruction inside x64dbg with the **F2** key:*
* *Select the desired line and press **F2**; the "dot" on the left edge will turn **red**.*
* *If you click on that "dot", it will turn green, disabling the breakpoint.*
* *Clicking the "dot" a third time removes the breakpoint.*
---
**Note:** *When you double-click a ``call`` instruction, or **step into** it with EIP on it (remember the little arrow on the left, which points to where EIP is), you will be taken into that function's code.*
*How do we get back to the previous instruction? We use the following shortcut:* ``-``
---
### Observing the stack
After performing the steps above...

In the image, you can see that I entered "AAA" as the code/input, and since it is wrong, we need to see a "1" on top of the stack when we pass this ``call``, at the address we are looking for.
Look at the instructions. In the image, we stopped before the function was executed.
We perform a step over (**F8**).

We can see that the top of the stack holds the address we wanted, and in the instruction pane we have ``push eax``, which will decrement our **ESP**, placing the value "1" from **EAX** on top of the stack.
Exactly what we were looking for!
---
### Function Return
A short example of a C program with comments on how the code behaves in x86 Assembly, to reinforce that a function's return value in x86 programs is passed in **EAX** (in the vast majority of cases — remember the calling conventions).
```
// C program
/* soma function: adds two numbers a + b and returns the result */
int soma(int a, int b){
    int resposta_da_soma;
    resposta_da_soma = a + b;
    return resposta_da_soma; // this value is returned in EAX
}
/* main function */
int main(){
    int a = 2;
    int b = 3;
    int resultado;
    // calls the function with two values to add
    // this becomes a call: call soma.address
    resultado = soma(a, b);
    // Here inside main, at this moment EAX holds the number 5
    return 0;
}
```
---
### Observing the crackme1.46B828 Function
Before entering the function, let's look at the arguments passed to it.
I will not explain how **Calling Conventions** work, but that knowledge is what will help you understand how arguments are passed to functions. Yes, there is more than one way to pass arguments to functions in Assembly.
To learn more about them, check these links: https://mentebinaria.gitbook.io/assembly/programando-junto-com-c/convencoes-de-chamada-no-windows
https://en.wikipedia.org/wiki/X86_calling_conventions
In short, calling conventions dictate how a function's arguments are passed to it. Here we can see that before the function is called, the address of something located at ``[ebp-10]`` is loaded into **EDX**, and we also see that whatever is on top of the stack is placed into **EAX**.

Let's briefly see what these instructions, ``lea`` and ``pop``, are doing:
``lea`` - Load Effective Address: loads the address of whatever is at ``[ebp-10]`` and stores it in **EDX**. To understand ``[ebp-10]``, see this link: https://mentebinaria.gitbook.io/assembly/a-base/enderecamento
``pop`` - Takes whatever is on TOP of the stack and stores it in the operand — here it is stored in **EAX** — and then increments the top of the stack.
The arguments were passed through the **EAX** and **EDX** registers.
Entering the function will make this clear.
---
### Finding values in the Dump
Let's quickly see how we track down values in x64dbg.
(When something like ``[ebp-10]`` is loaded into **EDX**, a single **step-over** executes the instruction, and then we check what was loaded into **EDX** — much simpler)...
Let's see what values are at ``[ebp-10]``, and on top of the stack at that moment:

You probably already know how to look up this value in the dump: right-click on the desired instruction, etc. (just do as in the image above).
Find the desired value, which in this case is ``[ebp-10]``.
Now to the top of the stack:

Notice that the top of the stack (in **ESP**) is always highlighted in green in the Stack window.
Let's look at the values loaded into the registers:

Notice that **EDX** has a comment telling us that the value ``0x0019F02C`` is an address pointing to the string "\*\*\*vErYeAsY\*\*\*". We know it is an address because of the "&" symbol.
You can follow it in the dump and confirm this.
In **EAX** we have the value ``0x0019F030``, and to find out what is there, we can follow it in the dump.

Notice the value found here. It looks like another address.
We can try following it in the dump as well.

We arrived here. Notice that this is where our code/input lives.
0x41 is the same as the char 'A'. Each 0x41 is one byte; what you have there is the following string:
"AAA\0"
The three 'A' characters, plus the ``null terminator`` ``\0``, which marks the end of the string.
See: https://man7.org/linux/man-pages/man7/ascii.7.html
So that's it: this function is receiving the addresses of the two strings as arguments.
We ask ourselves: will it compare our string "AAA" with that odd string over there??? So we test the string "\*\*\*vErYeAsY\*\*\*" as the code/input.
And that's it — it worked!
___
### Entering the crackme1.46B828 Function
To enter the function, use **STEP-INTO** (**F7**).
Inside the function:

Notice that in here the function takes the arguments, which are addresses, and "dereferences" them, so that the actual values at those addresses are now stored in **EAX** and **EDX**. Therefore, after these **MOV** operations, our registers hold direct addresses to our raw strings (note that before this we had an address to an address).

___
Observation: functions usually call other functions, which call yet other functions, and at some level you have to ask yourself: is it really worth stepping into all of these functions? Am I on the right track?
What made me realize I was on the right track here was the instruction that comes after ``call crackme1.460C48``: the ``setne al`` instruction (more on it later).
**AL** is the "lowest" part of **EAX**, and we know it holds the function's return value after it executes. The operation performed with **AL** gives us the return value "1" we are after.
That is why I thought it was worth continuing into the functions.
Since the program had already been cracked at this point and the challenge was already won, I was motivated purely by curiosity to understand how the program works, so I thought the search was worth continuing.
Now let's go to this other function.
---
### Understanding how we reach the result
I will lay out the process linearly (note: I will not show the whole process, given how long the text would get; I will focus on the points most important to this example).
*Note: first of all, notice that, depending on the length of the string we type into the input, the program takes a different path inside this function, since it compares a few characters at a time. So the longer our string, the more tests are performed.*
*In the test below, a three-character string was used: "AAA".*
Follow part of the process of function ``0x00460C48``, starting with the comparison:
Past the function prologue we have:
1.``cmp eax, edx``
Compares the addresses of the two strings.
The **CMP** instruction performs a subtraction: ``eax - edx``. The difference is that here the result is not saved, and the operands remain unchanged.
Notice that since the addresses being compared are different, the next instruction, a "jump if equal", is not taken.
---
2.``test esi, esi``
Checks that the code/input is not empty; if it is, the function jumps to its end and finishes by returning **1**.
---
3.``test edi, edi``
Checks that the string with the correct code/input, supplied by the programmer, is not empty.
---
4.``mov eax, dword ptr ds:[esi+4]``
``mov edx, dword ptr ds:[edi+4]``
**EAX** receives the length of the string we passed as the code/input.
**EDX** receives the length of the string the programmer passed to the function.
The string lengths were computed in another function.
---
5.``sub eax, edx``
Subtracts **EDX** from **EAX**. The correct string is 14 bytes long, which is 0xE in hex.
The math:
WITH THE INPUT "AAA": 3 - 14 = -11, i.e. 0xFFFFFFF5
WITH THE CORRECT INPUT: 14 - 14 = 0
The result is stored in **EAX**.
If both are 14 bytes long, whether the string is the correct one or not, the ``sub eax, edx`` operation results in ZeroFlag = 1 and CarryFlag = 0.
Remember: if an operation results in zero, the **ZeroFlag** is set, receiving the value "1".
---
6.``ja crackme1.460C6B``
**JA** - Jump if above: jumps if the **CF** and **ZF** flags are both equal to **0**.
In other words, the jump is only taken if our string is ==longer== than the programmer's string.
With our string longer than the programmer's, the CarryFlag is not set, because there is no borrow (no negative result), and the ZeroFlag is not set, because the operation does not result in zero.
**CF, or Carry Flag**: set to "1" when an arithmetic operation produces a carry out of — or a borrow into — the most significant bit (MSB, the leftmost bit); in other words, when the result does not fit in the register as an unsigned value.
The leftmost bit is also the one used as the sign bit when we need to represent a signed (negative) number.
Since the operation with the shorter string "AAA" sets the Carry Flag, the program does not take the jump. With the correct string, the ZeroFlag is set, so the jump is not taken either.
**Summing up:**
With a ==Shorter String==: \[CarryFlag is set and the jump is not taken]
With a ==14-byte String==: \[ZeroFlag is set and the jump is not taken]
This whole process was to check whether the code/input we supplied is longer than the programmer's string.
---
#### If the jump is not taken, the following instructions run
7.``add edx, eax``
**WITH A SHORTER STRING**
I used the string "AAA".
``eax: 0xFFFFFFF5 - in the case of "AAA" (or any three-byte string)``
``edx: 0xE``
Calculation: 0xE + 0xFFFFFFF5 = 00000003 (the result carries out past the MSB: 0x100000003, truncated to 32 bits)
**WITH A 14-BYTE STRING**
``eax: 0 - result of subtracting 14 - 14``
``edx: 0xE - the string length in hexadecimal (0xE = 14)``
Calculation: 0xE + 0 = 0xE
Here we are restoring our string's byte count into **EDX**.
The result is stored in **EDX**.
Right after, the result is saved on the stack with ``push edx``.
---
8.``shr edx, 2``
Here the program uses the **SHR** instruction with **EDX** and the number **2**.
**SHR** is used to shift the bits of **EDX** two places to the right.
For each bit shifted out on the right, a zero is added on the left.
**EXAMPLE WITH A 14-BYTE STRING**:
>
> EDX holds the value 0xE, i.e. 14.
> 14 in binary is: 1110
> Now let's shift the bits right twice:
>
> 1110
> 0111 -> 1. we shift one bit to the right and add a zero on the left
> 0011 -> 2. we shift one bit to the right and add a zero on the left
>
> We are left with the result 0011, which is 3 (three is written the same way in hexadecimal and in decimal).
>
> When the instruction finishes, we have **EDX** = 3.
**EXAMPLE WITH A 3-BYTE STRING**:
>
> EDX holds the value 3.
> 3 in binary is: 0011
> Now let's shift the bits right twice:
>
> 0011
> 0001 -> 1. we shift one bit to the right and add a zero on the left
> 0000 -> 2. we shift one bit to the right and add a zero on the left
>
> We are left with the result 0000, which is 0 (zero is written the same way in hexadecimal and in decimal).
>
> When the instruction finishes, we have **EDX** = 0.
---
9.``je``
Taken if ``shr edx, 2 == 0`` (testing whether the operation resulted in zero).
This operation lets the program decide which path to take based on the length of the string we gave it.
If it is very short, it jumps to a single-byte comparison, of the rightmost byte.
If our string has a length close to the correct one, it compares the first 4 bytes,
and the program carries on like this, always driven by the length of our string.
Here are the results for strings from 0 bytes up to 14 bytes:

Notice that strings of 14, 13 and 12 bytes all yield 3, forcing the function down the same path until it detects the length divergence.
---
### Summing up the function's logic
In short, you could work out 100% of the logic used in any function, but if you don't want to spend all your time on that, you can start picking up behavioral patterns for each kind of data being processed — in our case, strings.
I will not show what each of the following instructions does here; that would make this tutorial even longer and more tedious...
But for those who want to know what happens next inside the function, after what has already been covered:
Notice the series of jumps (``jmp`` and ``jcc``) and comparisons (``test`` and ``cmp``).
The function is walking both strings and comparing their bytes.
The comparison order varies with the length of the submitted string.
Well, we know the function will return a value in **EAX**.
With our three-character string, at the end of the function, notice that the value in EAX is almost the same as what we had after the ``sub eax, edx`` instruction from step five of this tutorial, in the part titled "Understanding how we reach the result".
What kind of change did it go through?
Let's go back to that instruction (``sub eax, edx``) and, from there, follow along with our attention fixed on **EAX**, one instruction at a time, with step-over.
If you did that, you noticed that the value in **EAX** stays the same until we reach an instruction that adds **EAX** to itself.
(But remember that the route depends on the length of the string we used as the code/input.)
After that, the program jumps straight to the final part of the function with an unconditional ``jmp``.
**EAX** undergoes no further changes, and we return to the previous function.
---
### Leaving the function and getting the final result
Now that you have a sense of how the returned result is formed, let's see what gives us the **definitive result** for the final comparison.
We know the final comparison is against either the value "1" or the value "0".
The function we entered is ``crackme1.46B828``:

We have already seen what happens in the function called at ``call crackme1.460C48``, and we return at the ``setne al`` instruction.
---
#### SETNE - The instruction that produces the result
``setne - Set byte if not equal (ZF=0)``: this instruction sets the operand (in our case **AL**) to **0** or **1**, depending on the state of the **ZeroFlag**.
As we saw earlier, the path taken by the previous instruction, which returns our value in **EAX**, depends on the length of the submitted string. Just follow the function's flow until it takes a jump to the end of the function, then step back to the last comparison instruction.
To step back and retrace this flow, just use the ``-`` (dash) shortcut.
That way we see where the ZeroFlag gets set.
Let's follow the case of the string "AAA":

The function makes us jump to the end, just before the **RET** return instruction.
We use the shortcut until we get back to the spot with a comparison instruction.

Going back, we see the jump site and, right above it, the last comparison instruction, which compares the rightmost byte of our string against the programmer's.
Since the result is different, the ZeroFlag is not set.
Knowing the ZeroFlag was not set, we also know the ``setne al`` instruction will put the value **"1"** into the lowest (rightmost) part of our EAX register.
After ``setne al`` executes, **EAX** holds the value ``0xFFFFFF01``, and to keep only the value set by ``setne`` in **EAX**, the instruction ``and eax, 1`` is performed.
Since "1" is represented as ``0x00000001`` and EAX is ``0xFFFFFF01``, only the first, rightmost byte will hold the value "1" (if it is not "0"), and every other byte is cleared to zero. Remember: an AND operation only yields "1" when both bits are "1" — which is why, with the correct string, **AL** is set to zero and this AND operation results in zero.
---
### Final Considerations
I hope you managed to follow along and understand everything up to this point.
I hope you caught the "spirit" of the thing and, if so, that it boosted your curiosity and interest in this kind of knowledge.
To really get started in reverse engineering, I recommend beginning with the following playlist:
##### CERO - Curso de Engenharia Reversa Online by Mente Binária:
A 100% free reverse engineering course in Portuguese.
https://youtube.com/playlist?list=PLIfZMtpPYFP6zLKlnyAeWY1I85VpyshAA&si=3wYZb0E7iHaAFaMm | ryan_gozlyngg |
1,880,999 | Get wallet address for multiple chains in cosmos app chains | In the Cosmos ecosystem, each chain has its own specific prefix for Bech32 addresses. When you... | 0 | 2024-06-08T02:29:01 | https://dev.to/tqmvt/get-wallet-address-for-multiple-chains-in-cosmos-app-chains-56pb | web3, cosmoschain, cosmjs, betch32 | In the Cosmos ecosystem, each chain has its own specific prefix for Bech32 addresses. When you connect your [Keplr](https://www.keplr.app/) wallet to a Cosmos SDK-based chain, you receive a wallet address specific to that chain. However, the underlying public key remains the same across these chains, and you can derive the addresses for other chains by converting the address prefix.
Here's how you can derive the address for multiple chains using JavaScript, assuming you have the wallet address for one chain:
1. Extract the Public Key: You need the public key associated with the address. This step typically involves interacting with the Keplr wallet to get the public key.
2. Convert the Address Prefix: Use the [Bech32 encoding library](https://www.npmjs.com/package/@cosmjs/encoding) to convert the address prefix to the desired chain's prefix.
## Show me the code
Below is an example using JavaScript and the bech32 library to convert an address from the Osmosis chain to the Stargaze chain:
- Install the necessary libraries:
```bash
npm install @cosmjs/encoding @cosmjs/crypto
```
- JavaScript code to convert the address prefix:
```javascript
const { Bech32 } = require('@cosmjs/encoding');
/**
* Converts a Bech32 address to another Bech32 address with a different prefix.
* @param {string} address - The original Bech32 address.
* @param {string} newPrefix - The new prefix for the Bech32 address.
* @returns {string} - The new Bech32 address with the specified prefix.
*/
function convertAddressPrefix(address, newPrefix) {
  // decode() verifies the checksum and returns the raw address bytes;
  // the original prefix is discarded and replaced with the new one
  const { data } = Bech32.decode(address);
  return Bech32.encode(newPrefix, data);
}
// Example usage
const osmosisAddress = 'osmo1n3v7hdf3lj6gr7hyluq7x4hj4snmajsze3fnlq';
const stargazePrefix = 'stars';
const stargazeAddress = convertAddressPrefix(osmosisAddress, stargazePrefix);
console.log('Stargaze Address:', stargazeAddress);
```
This code will convert an Osmosis address to a Stargaze address by changing the Bech32 prefix.
## Summary
By extracting the public key from Keplr and using a Bech32 conversion, you can derive addresses for multiple chains in the Cosmos ecosystem. This approach leverages the shared public key across these chains and the flexibility of Bech32 encoding.
Thanks for your time. Happy coding! | tqmvt |
1,880,998 | Exploring the Tech Stack of Major Banks: Key Tools and Technologies for Software Engineers | Exploring the Tech Stack of Major Banks: Key Tools and Technologies for Software... | 0 | 2024-06-08T02:27:52 | https://dev.to/isamarsoftwareengineer/exploring-the-tech-stack-of-major-banks-key-tools-and-technologies-for-software-engineers-3o7f | java, bank, springboot, devops | ## Exploring the Tech Stack of Major Banks: Key Tools and Technologies for Software Engineers
In the fast-evolving world of finance, major banks are leveraging cutting-edge technologies to enhance their services, improve security, and streamline operations. Software engineers in this sector work with a diverse set of tools and frameworks. Here, we explore some of the key technologies and methodologies that are essential for software development in major banks.
### 1. Java: The Backbone of Banking Software
Java has been a cornerstone of banking software for decades due to its robustness, portability, and extensive ecosystem. It is widely used for building large-scale, high-performance applications.
#### Why Java?
- **Platform Independence:** Java's "write once, run anywhere" capability makes it ideal for the diverse infrastructure found in large banks.
- **Scalability:** Java applications can easily scale to handle increasing loads, which is crucial for high-transaction environments.
- **Security:** Java provides comprehensive security features, which are essential for handling sensitive financial data.
#### Applications in Banking
- **Core Banking Systems:** Java is often used to build core banking platforms that manage accounts, transactions, and customer information.
- **Payment Processing:** High-speed, secure transaction processing systems are frequently developed in Java.
- **Fraud Detection:** Java's ability to handle large datasets and perform complex computations makes it suitable for real-time fraud detection systems.
### 2. Spring Framework: Enhancing Java Development
The Spring Framework simplifies Java development by providing comprehensive infrastructure support for developing robust applications.
#### Why Spring?
- **Dependency Injection:** Simplifies code management and enhances testability.
- **Aspect-Oriented Programming:** Facilitates separation of cross-cutting concerns like logging and security.
- **Comprehensive Ecosystem:** Spring Boot, Spring Data, and Spring Security provide specialized tools for web applications, data management, and security.
#### Applications in Banking
- **Web Services:** Spring Boot is widely used to develop RESTful APIs and microservices for banking applications.
- **Data Management:** Spring Data simplifies database interactions, enabling efficient handling of large volumes of financial data.
- **Security:** Spring Security ensures that banking applications are protected against common vulnerabilities.
### 3. DevOps: Streamlining Development and Operations
DevOps practices integrate software development (Dev) and IT operations (Ops) to shorten the development lifecycle and deliver high-quality software continuously.
#### Why DevOps?
- **Continuous Integration/Continuous Deployment (CI/CD):** Automates testing and deployment, reducing the time to market.
- **Infrastructure as Code (IaC):** Allows infrastructure management through code, enhancing consistency and scalability.
- **Monitoring and Logging:** Provides real-time insights into application performance and issues.
#### Applications in Banking
- **Automated Testing:** Ensures that banking applications are thoroughly tested, reducing the risk of defects.
- **Continuous Delivery:** Enables rapid deployment of new features and updates, improving customer satisfaction.
- **Disaster Recovery:** Automates backups and restores, ensuring business continuity in case of failures.
### 4. Microservices Architecture: Building Modular Applications
Microservices architecture breaks down applications into smaller, loosely coupled services, each responsible for a specific functionality.
#### Why Microservices?
- **Scalability:** Individual services can be scaled independently based on demand.
- **Resilience:** Failures in one service do not affect the entire application.
- **Flexibility:** Allows the use of different technologies for different services.
#### Applications in Banking
- **Customer Management:** Microservices can manage customer information, preferences, and interactions.
- **Transaction Processing:** Independent services handle different types of transactions, ensuring efficient processing.
- **Risk Management:** Modular services analyze various risk factors, providing real-time insights and responses.
### 5. Cloud Computing: Enhancing Flexibility and Efficiency
Cloud computing provides on-demand access to computing resources, enabling banks to scale their operations and innovate rapidly.
#### Why Cloud Computing?
- **Scalability:** Easily scale resources up or down based on demand.
- **Cost Efficiency:** Pay only for the resources used, reducing operational costs.
- **Innovation:** Access to advanced technologies like AI and machine learning.
#### Applications in Banking
- **Data Storage and Management:** Cloud platforms offer secure and scalable storage solutions.
- **AI and Machine Learning:** Cloud services provide the computational power needed for advanced analytics and fraud detection.
- **Disaster Recovery:** Cloud-based solutions ensure quick recovery from disasters, minimizing downtime.
### Conclusion
Major banks rely on a sophisticated tech stack to deliver reliable, secure, and innovative financial services. Technologies like Java and the Spring Framework provide the foundation for robust applications, while DevOps practices ensure efficient development and deployment. Microservices architecture offers flexibility and scalability, and cloud computing enhances efficiency and innovation. For software engineers in the banking sector, mastering these tools and methodologies is essential to drive the future of financial technology. | isamarsoftwareengineer |
1,880,964 | PITFALLS OF CONFIRMATION BIAS IN PROGRAMMING | What Confirmation Bias Is. This is simply a situation whereby you search for, interprete or... | 0 | 2024-06-08T02:20:09 | https://dev.to/davidbosah/pitfalls-of-confirmation-bias-in-programming-3mai | webdev, beginners, tutorial, programming | **What Confirmation Bias Is.**
This is simply a situation whereby you search for, interpret, or understand facts in a way that fits information or experiences already programmed into your mind. As simple as this may sound, it plays a huge role in our lives. The reason is that a well-crafted mental picture of how events should unfold can end up distorting how we perceive or interpret those events once they actually start happening.
**How confirmation bias is ruining your debugging process**
Whether it's _syntax error debugging_ or _logic error debugging_, the moment you let confirmation bias take over, it begins to ruin your debugging process through:
1. Inefficient debugging: because you already have an idea of what the bug should be or where it should live, your brain struggles to locate the actual bug and tends to search only in the areas you initially suspected.
2. Poor problem solving.
**How to avoid confirmation bias during debugging**
To create a better programming experience, you have to make a conscious decision to avoid confirmation bias as much as possible. You can do this through the following methods:
* Approach debugging with a blank mental state.
* Consider different hypotheses about the cause of the bug.
* Introduce tools like debuggers or profilers.
* Seek different perspectives from colleagues and mentors.
* Stay open minded.
| davidbosah |
1,880,992 | Studies in Quality Assurance (QA) - SDLC | The SDLC (Software Development Life Cycle) is a... | 0 | 2024-06-08T02:13:22 | https://dev.to/julianoquites/estudos-em-quality-assurance-qa-sdlc-172p | qa, testing, automation, sdlc | The **SDLC** (Software Development Life Cycle) is a framework used to structure the development of information systems in an organized and efficient way. It covers every stage, from initial planning to project closure, ensuring that the client's objectives are met. It is a classic approach that emerged in the 1960s, developed to help build large-scale systems. It follows a linear, structured sequence of phases, making it easier to manage and control complex projects. The phases are:
**Planning → Analysis → Design → Development → Verification → Deployment → Maintenance → Closure**
**Planning**: Definition of the project's scope and objectives, resource allocation, and schedule setup. Identification of the client's needs and alignment of stakeholders.
**Analysis**: Detailed gathering and analysis of the system's requirements through interviews and process reviews. Establishes a clear foundation for the system design.
**Design**: Creation of the system architecture and detailed specifications, including flow diagrams and data models. Defines the technical structure for efficient integration of components.
**Development**: Coding of the system according to the design specifications, using appropriate languages and tools. Unit tests are performed to ensure each module's functionality.
**Verification**: Integration, system, and acceptance testing to ensure the system meets the specified requirements. Bugs are identified and fixed before deployment.
**Deployment**: Moving the system to the production environment, including installation and configuration. Ensures full operation and accessibility for end users.
**Maintenance**: Bug fixes, updates, and continuous improvements after deployment. Monitoring to guarantee performance and adapt to new needs.
**Closure**: Final documentation and delivery of the project's components. Performance review for future learning and improvements. Not always listed as a phase. | julianoquites |
1,880,988 | Monolithic Architecture: An Overview | Introduction Software architecture is a vast and diverse field, crucial for the... | 0 | 2024-06-08T02:01:41 | https://dev.to/iamthiago/arquitetura-monolitica-uma-visao-geral-l9j | ## Introduction
Software architecture is a vast and diverse field, crucial for the development of robust and efficient systems. Among the different architectural paradigms, the monolithic architecture is one of the most traditional and widely used. In this article, we explore the concept of monolithic architecture, its characteristics, advantages, disadvantages, and use cases.
## What Is a Monolithic Architecture?
A monolithic architecture is a software design style in which all of an application's functionality is combined into a single executable program. In a monolithic application, all components — such as the user interface, business logic, and data access — are integrated and run as a single unit.
### Characteristics of the Monolithic Architecture
1. **Single Deployment Unit**: The entire system is deployed as one unit. This means an update to any part of the system requires redeploying the whole application.
2. **Tightly Coupled**: The different components of a monolithic system are tightly coupled, which can make modifying and maintaining the system difficult.
3. **Vertical Scalability**: Scaling a monolithic application usually means increasing the capacity of the servers hosting it (vertical scaling).
## Advantages of the Monolithic Architecture
### 1. Simplicity
A monolithic architecture is simple to develop and deploy. The simplicity comes from everything living in a single codebase, which eases initial development and feature integration.
### 2. Performance
In many cases, a monolithic application can deliver better performance, since all function calls are local, eliminating the latency of inter-service communication common in distributed architectures.
### 3. Ease of Development
For small teams and smaller projects, a monolithic architecture can be the most practical choice. It lets developers keep a complete view of the system, which eases development and debugging.
## Disadvantages of the Monolithic Architecture
### 1. Maintenance and Evolution
As a monolithic application grows, it becomes increasingly hard to maintain and evolve. The tight coupling between components can produce a cascade effect, where changes in one part of the system require changes in others, increasing the risk of bugs.
### 2. Limited Scalability
The scalability of a monolithic application is limited by the need to scale the entire application at once. This can be inefficient and expensive, especially if only a few parts of the application need to scale.
### 3. Deployment Time
Deployment time can be long, since updating any part of the application requires redeploying the whole system. This can negatively affect downtime and business continuity.
## Casos de Uso da Arquitetura Monolítica
Apesar das desvantagens, a arquitetura monolítica é adequada para várias situações:
1. **Aplicações Pequenas e Médias**: Para projetos de menor escala, onde a complexidade do sistema não justifica a adoção de arquiteturas mais complexas, a arquitetura monolítica pode ser a melhor escolha.
2. **Prototipagem Rápida**: Para startups e projetos que estão na fase inicial, uma arquitetura monolítica permite um desenvolvimento rápido e ágil, possibilitando a validação de ideias antes de investir em uma arquitetura mais robusta.
3. **Equipes Pequenas**: Em organizações com equipes de desenvolvimento pequenas, a simplicidade de uma aplicação monolítica pode facilitar a coordenação e o gerenciamento do projeto.
## Conclusão
A arquitetura monolítica, com suas vantagens e desvantagens, continua sendo uma escolha viável para muitos projetos. Sua simplicidade e desempenho são atraentes para projetos menores e para a prototipagem rápida. No entanto, à medida que o projeto cresce, as limitações de escalabilidade e manutenção podem tornar necessário considerar outras abordagens, como a arquitetura de microsserviços. Entender as características e os trade-offs da arquitetura monolítica é crucial para tomar decisões informadas sobre o design de sistemas.
---
Enjoyed the article? Follow me for more insights on software development and systems architecture! | iamthiago | |
1,880,985 | Let’s Build One Person Business Using 100% AI | AI made it possible for 9-to-5 workers to start a one-person business without quitting their... | 0 | 2024-06-08T01:55:30 | https://dev.to/exploredataaiml/lets-build-one-person-business-using-100-ai-1mgo | llm, rag, ai | AI made it possible for 9-to-5 workers to start a one-person business without quitting their jobs.
[Full Article](https://medium.com/@learn-simplified/lets-build-one-person-business-using-100-ai-4bb4285892c9)
The Opportunities for Starting a Business
○ There are huge opportunities to start your own business by leveraging valuable skills to attract paying audiences.
○ New software and AI platforms make it easier to distribute products/services and automate tasks that were previously time-consuming.
Our One Person Book Publication House
○ This article explores building a one-person AI-powered business focused on publishing books.
○ Users input data on a topic, and AI generates a comprehensive book structure and content based on that.
○ The generated content can be formatted, designed, and published digitally or in print easily.
Why Read This Article?
○ It presents an innovative AI-powered approach to streamline the book publishing process.
○ It provides technical implementation details using LLM, Python and the Streamlit library as a reference.
○ It highlights AI's potential in automating creative tasks like writing and content creation.
Approaching the One Person Business
○ Reflect on areas where you overcame personal struggles and gained valuable skills.
○ Leverage that expertise to build an AI business serving others facing similar obstacles.
○ Use AI tools to create content, automate processes, and efficiently scale your offerings.
The Publication Business Idea
○ Focus on writing and publishing small books using AI writing assistants.
○ AI can streamline research, writing drafts, outlines, and ideas across genres.
○ Concentrate efforts on editing, formatting, and marketing while AI handles writing.
The Book Generation Process
○ Users input structured topic data like outlines, key points, and references.
○ Advanced AI language models generate flowing book content from that data.
○ Minimal human effort is needed beyond initial inputs and refinement.
○ AI systems automatically handle formatting, design, and publishing.
Technical Implementation
○ Includes a Book class to represent a book's hierarchical structure in Python.
○ Functions to generate book structures and section content using AI models.
○ Integrates with a Streamlit app for user input and output.
○ Allows downloading the final book in Markdown format.
Closing Thoughts
○ This AI-powered approach makes book writing and publishing more accessible to individuals.
○ AI handles the heavy lifting, with humans providing quality control through editing.
○ It opens up possibilities for innovative knowledge sharing as technology evolves. | exploredataaiml |
1,880,984 | Beviral - beviral.me social media engagement services | Elevate Your Social Media Game with Be Viral's Top Services In today's digital landscape,... | 0 | 2024-06-08T01:52:00 | https://dev.to/beviralme/beviral-beviralme-social-media-engagement-services-f1e | ### Elevate Your Social Media Game with Be Viral's Top Services
In today's digital landscape, having a robust social media presence is crucial for success. At Be Viral, we specialize in providing top-tier services to help you grow your accounts organically and effectively. Our most profitable services focus on enhancing your visibility and engagement on platforms like TikTok and YouTube. Here’s how we can help you achieve your goals:
#### [Buy TikTok Views](https://beviral.me)
Increase your TikTok reach by purchasing views from real accounts. More views not only enhance your profile’s popularity but also increase the chances of your videos going viral. By boosting your view count, you can attract more organic traffic and engagement, making your content stand out on this highly competitive platform. Visit beviral.me to buy TikTok views and watch your influence grow.
#### [Buy TikTok Likes](https://beviral.me)
Likes are a vital metric for success on TikTok. They indicate your content’s popularity and play a significant role in the platform’s algorithm. Purchasing TikTok likes can give your videos the initial boost they need to attract more viewers and interactions organically. More likes can lead to higher visibility and increased chances of your content being featured. Visit beviral.me to buy TikTok likes and elevate your TikTok presence.
#### [Buy YouTube Subscribers](https://beviral.me)
Grow your YouTube channel with authentic subscribers. A higher subscriber count not only increases your credibility but also enhances your channel’s ranking on YouTube’s search results. More subscribers mean more views and engagement, helping you build a loyal audience for your content. Visit beviral.me to buy YouTube subscribers and take your channel to the next level.
#### [Buy YouTube Likes](https://beviral.me)
Likes on YouTube videos are a key indicator of your content’s value and relevance. By purchasing YouTube likes, you can improve your video’s ranking, attract more viewers, and encourage more engagement. This can lead to more organic growth and a stronger presence on the platform. Visit beviral.me to buy YouTube likes and boost your video’s popularity.
At [BeViral](https://beviral.me), we understand the importance of social media metrics and how they influence your overall success. Our tailored solutions are designed to provide maximum engagement and visibility, ensuring your success on platforms like TikTok and YouTube.
By utilizing these services, you can significantly enhance your social media presence and achieve your digital marketing goals more effectively. | beviralme | |
1,880,982 | Sleepy Cloud Animation | This is a submission for Frontend Challenge v24.04.17, CSS Art: June. Inspiration ... | 0 | 2024-06-08T01:51:04 | https://dev.to/umeshsuwal/sleepy-cloud-animation-5g5b | frontendchallenge, devchallenge, css | _This is a submission for [Frontend Challenge v24.04.17](https://dev.to/challenges/frontend-2024-05-29), CSS Art: June._
## Inspiration
<!-- What are you highlighting today? -->
## Demo
Link - https://sleepycloudanimation.netlify.app/

## Journey
I was inspired for this art by a certain Dev from whom I started learning CSS Arts.
<!-- Thanks for participating! --> | umeshsuwal |
1,880,979 | What are the top-level steps to create an API ? | Step #1. Create the (data) models Step #2. Create the Server Step #3. Create the Create API Step #4.... | 0 | 2024-06-08T01:26:20 | https://dev.to/mbshehzad/what-are-the-top-level-steps-to-create-an-api--53nc | node, javascript, express | Step #1. Create the (data) models
Step #2. Create the Server
Step #3. Create the Create API
Step #4. Create the Read API
Step #5. Create the Update API
Step #6. Create the Delete API | mbshehzad |
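A hedged sketch of Steps 1 and 3-6 (the model and handler names here are invented; a real app would wrap these handlers in Express routes for Step 2):

```javascript
// Step 1: a minimal in-memory data model (stand-in for a real database).
const store = new Map();
let nextId = 1;

// Step 3: Create
function createItem(data) {
  const item = { id: nextId++, ...data };
  store.set(item.id, item);
  return item;
}

// Step 4: Read
function readItem(id) {
  return store.get(id) ?? null;
}

// Step 5: Update
function updateItem(id, changes) {
  const item = store.get(id);
  if (!item) return null;
  Object.assign(item, changes);
  return item;
}

// Step 6: Delete
function deleteItem(id) {
  return store.delete(id);
}

// Step 2 would wrap these in a server, e.g. Express routes such as
// app.post('/items', ...), app.get('/items/:id', ...), and so on.
```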
1,880,869 | Fintech's role in mainstreaming cryptocurrency adoption | Cryptocurrency, once a niche domain reserved for tech enthusiasts and early adopters, is rapidly... | 0 | 2024-06-07T22:09:57 | https://dev.to/eincheste/fintechs-role-in-mainstreaming-cryptocurrency-adoption-4k9l | beginners, devops, career, web3 |
Cryptocurrency, once a niche domain reserved for tech enthusiasts and early adopters, is rapidly gaining traction in the mainstream financial world. This shift can be largely attributed to the integration of financial technology (fintech) with the cryptocurrency ecosystem. By leveraging innovative technologies and developing user-friendly platforms, fintech is playing a pivotal role in making cryptocurrency accessible to a broader audience. This article delves into the multifaceted ways in which fintech is driving the mainstream adoption of cryptocurrency.
Bridging the Gap: Simplifying Cryptocurrency Access
One of the primary barriers to cryptocurrency adoption has been the complexity associated with acquiring and managing digital assets. Early platforms required users to navigate through intricate processes involving digital wallets, private keys, and unfamiliar exchanges. Fintech companies have stepped in to simplify these processes, offering intuitive interfaces and seamless user experiences.
For instance, fintech startups like Coinbase and Robinhood have revolutionized the way people buy, sell, and store cryptocurrencies. By integrating cryptocurrency trading into their existing platforms, these companies have made it possible for users to manage their digital assets alongside traditional investments. This integration has not only increased accessibility but also fostered trust among users who might have been wary of venturing into the cryptocurrency space.
Enhancing Security and Compliance
Security concerns have long plagued the cryptocurrency industry. High-profile hacks and scams have eroded trust, making potential investors cautious. Fintech companies are addressing these issues by implementing robust security measures and adhering to regulatory standards.
Blockchain technology, the underlying technology of cryptocurrencies, inherently offers strong security features. However, fintech companies are going a step further by incorporating advanced encryption methods, multi-factor authentication, and secure custody solutions. Companies like Gemini and BitGo have set industry standards for secure storage, providing insured custodial services that protect users' assets against theft and loss.
Moreover, compliance with regulatory frameworks is crucial for gaining mainstream acceptance. Fintech firms are working closely with regulatory bodies to ensure that their platforms are compliant with anti-money laundering (AML) and know-your-customer (KYC) regulations. By doing so, they are not only legitimizing the industry but also protecting users from potential legal repercussions.
Facilitating Everyday Transactions
For cryptocurrencies to be widely adopted, they must be usable for everyday transactions. Fintech companies are developing solutions that enable seamless integration of cryptocurrencies into daily financial activities.
Payment processors like BitPay and CoinPayments are allowing merchants to accept cryptocurrencies as payment, thereby expanding the utility of digital assets. Additionally, fintech startups are creating cryptocurrency debit cards that can be used at any point-of-sale terminal that accepts traditional debit cards. These innovations are making it easier for consumers to spend their cryptocurrencies, thereby enhancing their practical value.
Furthermore, decentralized finance (DeFi) platforms are opening up new avenues for financial services such as lending, borrowing, and earning interest on cryptocurrency holdings. Fintech companies like Aave and Compound are at the forefront of this movement, providing users with decentralized alternatives to traditional banking services.
Education and Awareness
Despite the growing interest in cryptocurrencies, a significant knowledge gap still exists among the general public. Fintech companies are playing a crucial role in bridging this gap by providing educational resources and tools to help users understand the complexities of the cryptocurrency market.
Platforms like Binance Academy and CoinMarketCap offer comprehensive educational content ranging from beginner guides to advanced trading strategies. By demystifying the technical aspects of cryptocurrencies, these resources empower users to make informed decisions and engage confidently with the digital asset ecosystem.
The Future of Fintech and Cryptocurrency
The synergy between fintech and cryptocurrency is shaping the future of the financial industry. As fintech continues to innovate and develop new solutions, the barriers to cryptocurrency adoption will continue to diminish. Here are a few trends to watch:
1. Interoperability: Enhancing the interoperability between different blockchain networks will make it easier for users to transfer assets across various platforms, increasing liquidity and utility.
2. Central Bank Digital Currencies (CBDCs): As governments explore the creation of their own digital currencies, fintech companies will play a key role in integrating these CBDCs into existing financial systems, further legitimizing digital assets.
3. Institutional Adoption: Fintech is driving institutional interest in cryptocurrencies. Companies like Fidelity and Square are investing heavily in digital assets, paving the way for greater institutional participation and market stability.
4. Regulatory Developments: The evolving regulatory landscape will shape the future of cryptocurrency adoption. Fintech companies will need to navigate these changes and work collaboratively with regulators to ensure compliance and foster innovation.
Conclusion
Fintech is at the forefront of mainstreaming cryptocurrency adoption by simplifying access, enhancing security, facilitating transactions, and promoting education. As the fintech sector continues to evolve, its impact on the cryptocurrency industry will only grow, making digital assets an integral part of the global financial system. By addressing the challenges and leveraging the opportunities presented by this dynamic landscape, fintech is poised to drive the next wave of financial innovation. | eincheste |
1,880,978 | HOW TO RECOVER YOUR CRYPTOCURRENCY FROM SUSPICIOUS INVESTMENTS AND ONLINE TRADING | While browsing through Instagram, I came across a post from one of my friends about their successful... | 0 | 2024-06-08T01:24:05 | https://dev.to/conchi_martingambero_fe0/how-to-recover-your-cryptocurrency-from-suspicious-investments-and-online-trading-2l6d | cryptocurrency, recovery, bitcoinexper, bitcoin |
While browsing through Instagram, I came across a post from one of my friends about their successful Bitcoin investment and substantial profits. Intrigued by their claims, I decided to visit the website mentioned in the post. After creating an account, I contacted the support chat and was provided with a Telegram contact for further assistance. I connected with a person on Telegram who instructed me to install the MetaMask app and informed me that they would receive my money transfers. I was asked to make four bank transfers, totaling $158,000. Throughout this process, my friend, whose post had initially caught my attention, was also aware of the investment opportunity. However, I later discovered that his account had been hacked, and the hacker was using his page to promote false information about the company, leading me to believe it was legitimate.

After noticing the scam, I was devastated and felt completely lost. I couldn't believe I had fallen for such an elaborate scheme, and the thought of losing such a significant amount of money was overwhelming. I didn't know where to turn or who to trust. Fortunately, a close friend recommended that I seek help from TRUST GEEKS HACK EXPERT. Desperate and hopeful, I contacted them, explaining my situation in detail. From the very first interaction, I was impressed with TRUST GEEKS HACK EXPERT's professionalism and empathy. They quickly reassured me that I was not alone and that they had successfully handled numerous cases similar to mine. Their confidence and expertise provided me with a glimmer of hope in an otherwise bleak situation.

TRUST GEEKS HACK EXPERT immediately began working on my case. Their team of experts meticulously analyzed the fraudulent website and the transactions I had made. They kept me informed throughout the entire process, explaining each step they were taking and why it was necessary. This transparency helped rebuild my trust, which the scam had shattered.

One of the most impressive aspects of TRUST GEEKS HACK EXPERT's service was their thorough understanding of cryptocurrency and online scams. They knew exactly where to look and how to gather the necessary evidence to build a strong case against the perpetrators. Their technical skills and knowledge of the digital landscape were evident in the speed and efficiency with which they operated. Within two short days, TRUST GEEKS HACK EXPERT was able to expose the entire operation. They identified the fraudulent website's operators and worked tirelessly to shut it down. Their efforts were relentless, and their determination to recover my funds was unwavering. Despite the complexity of the situation, they never wavered in their commitment to helping me.

The moment I received the notification that my funds had been successfully recovered was one of immense relief and joy. I couldn't believe that the nightmare was finally over and that I had regained the money I thought was lost forever. TRUST GEEKS HACK EXPERT not only restored my financial stability but also my faith in the possibility of justice in the digital world.

I am writing this review with immense joy and gratitude for the exceptional assistance provided by TRUST GEEKS HACK EXPERT. Their dedication, expertise, and compassion were instrumental in navigating a very difficult period in my life. If you find yourself in a similar situation, I recommend reaching out to TRUST GEEKS HACK EXPERT. They are a beacon of hope in the world of online investments, and their ability to recover lost funds is nothing short of miraculous. Trust in their capabilities, and you will not be disappointed.
Web-site. h-t-t-p-s-://trustgeekshackexpert.com/
E-m-a-i-l: info@trustgeekshackexpert.com
W-h-a-t-s-A-p-p +1-7-1-9-4-9-2-2-6-9-3
Tele-Gram: Trust-Geeks-Hack-Expert | conchi_martingambero_fe0 |
1,880,976 | Basic Types in Kotlin | Introduction Recently I changed teams at the company where I currently work.... | 0 | 2024-06-08T01:23:11 | https://dev.to/oliversieto/tipos-basicos-em-kotlin-10i2 | kotlin, programação, tipodedados | ---
title: Basic Types in Kotlin
published: true
description:
tags: kotlin, programação, tipodedados
# cover_image: https://kotlinlang.org/_next/static/chunks/images/hero-cover-6dd34ed75729683235a4f47d714a604e.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-07 23:34 +0000
---
## Introduction
I recently changed teams at the company where I currently work. On this team we work with native **Android** development in **Java**, and with **Flutter**. Since I didn't have much experience with native development, I started studying and began using **Kotlin** to get into the world of Android app development.
Although I enjoy backend development, every now and then I find myself doing app development. I started with React Native and, after some personal choices, decided to switch to Flutter and kept going.
Now that I'm studying Kotlin, I thought about documenting my progress as I learn. So today I'll start by talking about variable types.
Although I've developed a lot in JavaScript, I personally prefer strongly typed languages, which is why I migrated to TypeScript and have been developing in NestJS with TypeScript ever since.
For those just starting out in programming: a variable is a unique name that identifies a stored value within the program, and that value can change during the program's execution. [Ebac](https://ebaconline.com.br/blog/variaveis-na-programacao-seo#:~:text=Uma%20vari%C3%A1vel%20na%20programa%C3%A7%C3%A3o%20%C3%A9,booleanos%20e%20assim%20por%20diante.)
In Kotlin, variables are statically typed, which means that once you declare a variable's type, it can no longer be changed. The basic types include:
* Numbers
* Booleans
* Characters
* Strings
Today I will cover only the Number types, which include integer and floating-point values.
### Integers
According to mathematics teacher Rafael C. Asth, integers are the positive and negative numbers that have no decimal part, plus zero. [Toda Matéria](https://www.todamateria.com.br/numeros-inteiros/)
Among the integer types, we have the following:

| Type | Size (bits) | Min Value | Max Value |
| --- | -------------- | ------------ | ------------ |
| Byte | 8 | -128 | 127 |
| Short | 16 | -32768 | 32767 |
| Int | 32 | -2,147,483,648 (-2^31) | 2,147,483,647 (2^31 - 1) |
| Long | 64 | -9,223,372,036,854,775,808 (-2^63) | 9,223,372,036,854,775,807 (2^63 - 1) |

In Kotlin, when you initialize a variable without specifying its type, the compiler automatically infers the smallest sufficient type starting from Int; depending on the size of the number, the compiler may infer Long.
Variable declarations begin with the keywords **val** or **var**: **val** for variables whose values will not change during program execution, and **var** for those whose values can be reassigned.
Here is an example of typing with integers:
```kotlin
val one = 1 // Int
val threeBillion = 3000000000 // Long
val oneLong = 1L // Long
```
In the example above, all the types were inferred by the compiler. Here is an example where you specify the type yourself:
```kotlin
val one: Byte = 1
val valueShort: Short = -25000
val valueInt: Int = 600000
val valueLong: Long = 3000000000
```
We need to be careful when declaring a variable's type, because its value may exceed the type's maximum limit, producing a type error.
### Floating-Point Numbers
According to mathematics teacher Rafael C. Asth, real numbers can be represented in many forms, such as positive and negative integers, fractions, decimals, scientific notation, roots, etc. [Toda Matéria](https://www.todamateria.com.br/numeros-reais/)
In computing, we treat numbers with decimal places as real numbers, e.g. 2.50, 3.14, or 300000.85. Note that, unlike the convention many of us are used to, a dot is used instead of a comma.
Kotlin provides two floating-point types, following the [IEEE 754](https://en.wikipedia.org/wiki/IEEE_754) standard:

| Type | Size (bits) | Significand Bits | Exponent Bits | Decimal Digits |
| --- | -------------- | ------------ | ------------ | ---------- |
| Float | 32 | 24 | 8 | 6-7 |
| Double | 64 | 53 | 11 | 15-16 |

As with integers, when you don't specify a type, the compiler infers Double by default. Whenever you want the Float type, you must add an f at the end of the number. If you declare a variable as Float and do not add the f suffix, you will get a type error.
```kotlin
val pi = 3.14 // Double
val oneDouble = 1.0 // Double
val floatValue = 328.25f // Float
```
You can also choose which type you want to use:
```kotlin
val e: Double = 2.7182818284 // Double
val eFloat: Float = 2.7182818284f
```
The Double type is the most commonly used when it comes to floating-point numbers.
That's it for now; I hope this gave you a sense of the numeric types in Kotlin.
| oliversieto |
1,880,975 | Building in Public - 1 | I’m building a client-only version of Splitwise for fun and practice. One thing that I’m trying to... | 27,633 | 2024-06-08T01:20:14 | https://bryanliao.dev/blog/building-in-public-1/ | buildinpublic | I’m building a client-only version of Splitwise for fun and practice. One thing that I’m trying to figure out is how to persist data without using a server or database. Some client-based options are [localStorage](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage) and [indexedDB](https://developer.mozilla.org/en-US/docs/Web/API/Window/indexedDB), but what if I wanted to collaborate with others or switch computers? At some point, I’d need to save the information somewhere and be able to transfer it somewhere else.
Taking some inspiration from video games, and something I noticed with [Pokeclicker](https://github.com/pokeclicker/pokeclicker), what if it in addition to local storage, I was able to create encoded text files for data? That way I can work locally, save it to my computer or phone, email it elsewhere, etc. Who needs to pay for server hosting 😛
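A rough sketch of that idea (a hypothetical format, not Pokeclicker's actual save encoding): serialize the app state to JSON and base64-encode it into a portable save string. `Buffer` is Node-only; a browser build would use `btoa`/`atob` instead.

```javascript
// Export app state as an opaque, portable text "save file".
function exportSave(state) {
  return Buffer.from(JSON.stringify(state)).toString('base64');
}

// Decode a save string back into usable state.
function importSave(text) {
  return JSON.parse(Buffer.from(text, 'base64').toString('utf8'));
}

const state = { expenses: [{ who: 'alice', amount: 12.5 }] };
const save = exportSave(state);                   // safe to email or drop into a .txt file
console.log(importSave(save).expenses[0].amount); // 12.5
```

localStorage could hold the same string between sessions, with export/import covering the transfer-to-another-device case.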
Something to think about, and something I’m going to explore. | liaob |
1,880,973 | A comprehensive comparison between MySQL and PostgreSQL | MySQL and PostgreSQL are both open-source relational database management systems with wide user bases... | 0 | 2024-06-08T01:19:28 | https://dev.to/concerate/a-comprehensive-comparison-between-mysql-and-postgresql-53oi | MySQL and PostgreSQL are both open-source relational database management systems with wide user bases and years of development history in the field of database management. While both are used for storing and managing data, they have significant differences in various aspects including performance, features, scalability, licensing, and community support. In this article, we will provide a comprehensive comparison of these two databases to help you choose the database management system that best fits your needs.
1. Basic Information Comparison
MySQL
Developer: Maintained by Oracle Corporation.
License: Uses GPL (General Public License).
Supported OS: Windows, Linux, macOS, etc.
Initial Use Case: Developed initially for web applications such as WordPress, Drupal, etc.
Programming Languages: Supports multiple languages including Java, Python, PHP, etc.
PostgreSQL
Developer: Maintained by the PostgreSQL Global Development Group.
License: Uses MIT License, which allows more flexible usage.
Supported OS: Windows, Linux, macOS, etc.
Focus: Emphasizes ACID compliance and data integrity.
Features: Known for strong extensibility and customization capabilities.
2. Data Types Comparison
MySQL
Standard SQL Types: Provides standard SQL data types such as integers, floating points, date-time, etc.
Non-standard Types: Supports ENUM, SET, etc.
Array Types: Does not support array data types.
JSON Support: Relatively new feature for JSON support.
PostgreSQL
Extensive Data Types: Provides a wide range of data types including integers, floating points, date-time, arrays, JSON, JSONB, etc.
Custom Types: Allows developers to create user-defined data types.
Range Types: Includes range data types for handling date, time ranges, etc.
Spatial Data: Supports geospatial data types and full-text search data types.
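As a small illustration (the table and column names are invented for the example, and this DDL assumes a running server), PostgreSQL's richer type system shows up directly in schemas and queries, while MySQL covers JSON but has no array or range types:

```sql
-- PostgreSQL: array, range, and binary JSON columns in one table
CREATE TABLE events (
    id      serial PRIMARY KEY,
    tags    text[],       -- array type
    booked  daterange,    -- range type
    payload jsonb         -- binary JSON, efficiently indexable
);

-- Filter on a field inside the JSONB document
SELECT id FROM events WHERE payload->>'status' = 'active';

-- MySQL has a JSON type but no arrays or ranges:
-- CREATE TABLE events (id INT PRIMARY KEY, payload JSON);
-- SELECT id FROM events WHERE payload->>'$.status' = 'active';
```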
3. Scalability Comparison
MySQL
Large Datasets: May encounter performance issues with large datasets.
Partitioning: Uses table partitioning and vertical partitioning to enhance performance and scalability.
Replication: Supports master-slave replication and clustering configuration.
PostgreSQL
Large Datasets: Excellent scalability, capable of handling large datasets.
Features: Supports table partitioning, parallel query processing, tablespaces, etc.
Replication: Offers flexible replication and advanced streaming replication settings.
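For instance, table partitioning looks like this in each system (schema names are invented for the example): PostgreSQL 10+ uses declarative partitioned tables, while MySQL declares partitions inline on the table definition:

```sql
-- PostgreSQL: declarative range partitioning
CREATE TABLE measurements (
    city_id int  NOT NULL,
    logdate date NOT NULL
) PARTITION BY RANGE (logdate);

CREATE TABLE measurements_2024 PARTITION OF measurements
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

-- MySQL: partitions declared on the table itself
-- CREATE TABLE measurements (city_id INT, logdate DATE)
-- PARTITION BY RANGE (YEAR(logdate)) (
--     PARTITION p2024 VALUES LESS THAN (2025)
-- );
```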
4. ACID Compliance Comparison
MySQL
ACID: Complies with ACID (Atomicity, Consistency, Isolation, Durability) principles.
Default Isolation Level: Repeatable Read.
PostgreSQL
ACID: Strong emphasis on ACID compliance and data integrity.
Isolation Levels: Offers multiple isolation levels, including Repeatable Read and Serializable.
Concurrency: Supports advanced concurrency control and transaction management.
5. Extensions and Plugins Comparison
MySQL
Community and Plugins: Extensive community support and third-party plugins.
Procedures and Triggers: Supports stored procedures and triggers.
Storage Engines: Uses storage engines to achieve various functionalities.
PostgreSQL
Custom Functions: Supports writing custom functions, triggers, stored procedures, etc.
Plugin System: Powerful plugin system supporting numerous extensions.
Custom Plugins: Allows developers to write custom plugins.
6. Community Support Comparison
MySQL
Community Support: Large community support and extensive documentation.
Resources: Multiple official and unofficial forums, blogs, etc.
PostgreSQL
Community Engagement: Enthusiastic community emphasizing user participation.
Resources: Rich official documentation and online resources.
Updates: Regular updates and patches.
7. Security Comparison
MySQL
Basic Security: Provides basic security features such as user privilege management and SSL support.
Enhanced Security: Third-party tools and plugins available for enhanced security.
PostgreSQL
Advanced Security: Offers advanced security features including row-level security and SSL support.
Authentication: Supports various authentication methods and LDAP integration.
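Row-level security, one of the PostgreSQL features mentioned above, can be enabled per table so that each database role only sees its own rows (the table and policy names here are made up for illustration):

```sql
-- PostgreSQL: restrict rows to their owner
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;

CREATE POLICY account_owner ON accounts
    USING (owner = current_user);
```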
8. Replication and High Availability
MySQL
Replication: Supports master-slave replication with automatic failover.
High Availability: Multiple high availability solutions such as MySQL Group Replication.
PostgreSQL
Streaming Replication: Supports streaming replication and can configure streaming replication clusters.
Advanced Features: Features logical replication and BDR (Bi-Directional Replication).
9. Performance Characteristics Comparison
MySQL
Read-Intensive Applications: Excels in read-intensive applications with query caching to improve read performance. However, query caching may not be suitable for high-concurrency environments due to lock contention.
Partitioning: Supports both vertical and horizontal partitioning to improve performance and scalability by splitting tables into multiple partitions.
Replication: Master-slave replication allows distributing read traffic to multiple slave nodes, enhancing performance and availability.
Indexes: Simple indexing system; good performance with well-designed indexes but may suffer if indexes are misused.
PostgreSQL
Complex Queries: Excels in handling complex queries with a mature query optimizer, making it ideal for analytical applications and data warehousing.
Write-Intensive Applications: Performs well in write-intensive applications using Multi-Version Concurrency Control (MVCC) to allow multiple transactions to modify data simultaneously without lock contention, providing excellent performance in high-concurrency write scenarios.
Parallel Query: Supports parallel queries, allowing multiple CPU cores to process queries simultaneously, enhancing query performance.
Advanced Indexing: Advanced indexing mechanisms including B-trees, hashes, GIN (Generalized Inverted Index for full-text search), and GiST (Generalized Search Tree for geospatial data), providing good performance across different types of applications.
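To make the index types above concrete, hypothetical DDL in each dialect might look like this (table and column names are made up for illustration; the GiST example assumes the PostGIS extension):

```sql
-- PostgreSQL: GIN index for full-text search on a hypothetical articles table
CREATE INDEX idx_articles_fts ON articles USING GIN (to_tsvector('english', body));

-- PostgreSQL: GiST index for geospatial data (requires PostGIS)
CREATE INDEX idx_places_geom ON places USING GIST (geom);

-- MySQL: plain B-tree secondary index
CREATE INDEX idx_orders_customer ON orders (customer_id);
```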
10. Performance Comparison
Database Versions:
MySQL: 8.0
PostgreSQL: 13.4
Hardware Configuration:
Server Specs: Dual-core, 4GB RAM
Storage: SSD
Load Conditions:
Data Volume: 1,000,000 rows
Queries: Includes read queries, write queries, and complex queries
Read Query Performance:
MySQL: Average query response time of 10 milliseconds in read-intensive scenarios.
PostgreSQL: Average query response time of 8 milliseconds under the same load, slightly better than MySQL.
Write Query Performance:
MySQL: Can handle 1000 write operations per second in write-intensive scenarios.
PostgreSQL: Can handle 1200 write operations per second, performing slightly better.
Complex Query Performance:
MySQL: Average response time of 50 milliseconds for complex queries involving multi-table joins and aggregations.
PostgreSQL: Better average response time of 40 milliseconds under the same load.
Concurrency Performance:
MySQL: Stable performance with an average response time increasing to 20 milliseconds with 100 concurrent users.
PostgreSQL: Lower average response time of 15 milliseconds under the same load.
Conclusion:
MySQL: Performs well in read-intensive applications but slightly lags behind PostgreSQL in complex queries and write-intensive applications.
PostgreSQL: Excels in complex queries, write-intensive applications, and high-concurrency scenarios.
11. Summary
Both MySQL and PostgreSQL are powerful relational database management systems suited to different use cases and requirements. For handling large datasets, complex queries, and applications emphasizing data integrity, PostgreSQL is likely the better choice. On the other hand, for read-intensive applications or those requiring high-performance write operations, MySQL may be more suitable.
The final choice will depend on your specific needs, team expertise, and project nature. Thoroughly researching each database’s features and best practices is crucial to ensuring high performance and reliability.
Best SQL IDE for MySQL/PostgreSQL
SQLynx is a powerful MySQL/PostgreSQL management tool that is highly favored by Database Administrators (DBAs) for its efficient graphical user interface and rich feature set.
Features:
Intuitive GUI: SQLynx offers a clean and intuitive user interface for convenient operations.
Web Management: Supports web-based data management, enabling collaborative management among multiple users and enterprise-level security.
Batch Data Operations: Supports batch data import, export, and batch processing operations, enhancing data management efficiency.
Intelligent Code Suggestions: Intelligent code completion and syntax highlighting reduce errors in writing SQL statements.
Download: http://www.sqlynx.com/en/#/home/probation/SQLynx
| concerate | |
1,880,972 | I RECOMMEND TRUST GEEKS HACK EXPERT FOR PHONE HACK & SPY ON CHEATERS | As I struggled to uncover the source of the anonymous threats and harassment I was facing online, I... | 0 | 2024-06-08T01:16:56 | https://dev.to/garcia_gladystolbert_1/i-recommend-trust-geeks-hack-expert-for-phone-hack-spy-on-cheaters-3ebe | spy, general, hackers, love | As I struggled to uncover the source of the anonymous threats and harassment I was facing online, I turned to (TRUST GEEKS HACK EXPERT) for assistance. With their expertise in digital forensics and online investigation, they were able to trace the messages back to their origin. Through meticulous analysis and advanced techniques, the team at (TRUST GEEKS HACK EXPERT) identified the anonymous account responsible for the threats as belonging to my ex-boyfriend. It was a shocking revelation that left me feeling both relieved to have found the culprit and deeply disturbed by his actions. (TRUST GEEKS HACK EXPERT) swift and thorough investigation not only provided me with the evidence I needed to confront my ex-boyfriend but also helped me regain a sense of control over the situation. Their professionalism and dedication to helping me reclaim my privacy were invaluable during such a challenging time. With their assistance, I was able to take decisive action to protect myself from further harm and ensure that my online accounts remained secure. The experience highlighted the importance of having reliable resources like (TRUST GEEKS HACK EXPERT) to turn to when faced with digital threats and harassment. Thanks to their expertise and support, I was able to confront the truth about my ex-boyfriend's actions and take steps to safeguard my online security. Their assistance gave me peace of mind and allowed me to move forward with confidence, knowing that I had a trusted ally in my corner.
INFORMATION OF TRUST GEEKS HACK EXPERT
Email: info@trustgeekshackexpert.com
WhatsApp +1.7.1.9.4.9.2.2.6.9.3
Web-site. https://trustgeekshackexpert.com/ | garcia_gladystolbert_1 |
1,880,971 | A Declaration of World Peace | At each appointed time, Mankind has abandoned the narrative that binds us - choosing to revolt over... | 27,632 | 2024-06-08T01:14:39 | https://desir.foundation/series/a-declaration-world-peace | opensource, watercooler, ai, discuss | _At each appointed time, Mankind has abandoned the narrative that binds us - choosing to revolt over the status quo._
---
_We will choose to amend our story when participation corrupts human flourishing & Coherence. And we find ourselves at yet another appointed time, one of the utmost profundity and demanding a deliberate, rational re-evaluation of our collective futures_
---
_It is at this moment in human history, we approach the horizon of the Technological Singularity - the penultimate evolution for Mankind through synthesis of all scientific knowledge_
---
_At risk of understatement, humanity stands at a crossroads between unparalleled progress and enlightenment or catastrophic collapse and despair_
---
_In gracious humility, under the authority of God, who is Truth, Reality, and Self, we declare this proposal of peace to all nations among kindred and offer a logical framework & Religion in order to advance Planetary & Interplanetary Unity._ | desirtechnologies |
1,880,970 | How to embed your git bash into Visual Studio | Today you'll learn how to integrate your git bash straight into your Visual Studio. In the end it'll... | 0 | 2024-06-08T01:14:37 | https://dev.to/henriqueholtz/how-to-embed-your-git-bash-into-visual-studio-1afb | bash, visualstudio | Today you'll learn how to integrate your git bash straight into your Visual Studio. In the end, it'll look just like this:

---
To do that, you need to open the terminal configuration section under `Tools => Options => Environment => Terminal`.
Then click the Add button and configure its name and the path pointing to `bash.exe`. The most important part here is to add `-i -l` in the arguments field; these arguments make git bash open embedded.
Usually the bash executable is located at a path like `C:\Program Files\Git\bin\bash.exe`
Optionally you can set this terminal as default.
The configuration should look similar to the following image:

Now, to open the terminal go to `View => Terminal`. If you didn't set up the git bash as default you'll need to open it explicitly like this:

Hopefully it'll be pretty useful to have the power of git bash directly inside your VS! Thanks for reading.
| henriqueholtz |
1,880,965 | Google IO and the joys of in-person events | The popularization of online events allowed content and knowledge to travel to the... | 0 | 2024-06-08T01:00:59 | https://dev.to/tyemy/o-google-io-e-as-alegrias-dos-eventos-presenciais-1i1i | google, community, watercooler | The popularization of online events allowed content and knowledge to travel to even the most distant audiences, and that was great! On the other hand, with this model we end up interacting a bit less with other people and less immersed in the experience of an event (no tasty snacks, swag, or the stickers I love putting on my fridge, etc.).
In-person events give me the feeling of: I need to start something new!
Maybe it's from seeing so many people share their projects, hearing about things outside my daily routine, and talking to other people that I get excited and come up with new ideas for initiatives and projects (even if in the end they get lost in the rush of routine and life's hardships).
This year I went to Google IO, held on May 14 and 15 in Mountain View, and I came back determined to get my article-writing project off the ground.
The event is well organized, the weather was super sunny, and the swag is nice. Those who registered on the 13th were entitled to a hoodie and could take a tour of Google's buildings. At the end of each day of the event there was something for people to have fun with (giant Jenga, bingo, music, food, etc.).

This year's theme was AI, with emphasis on Gemini and showcasing the use of artificial intelligence inside Google products. The main keynote was a DJ improvisation show using Gemini in the compositions to demonstrate one of its many uses, and the developer keynote focused on tooling (Android, Firebase, Play Store, Tensor, etc.).
The event was divided into sessions, workshops, and a kind of mad-science lab with different projects using Gemini (from audio production to analyzing shots on goal, literally). The sessions were recorded and are [available on YouTube](https://io.google/2024/explore/intl/pt/).

In the middle of the event there was a small store, and every attendee got a 15% discount on one purchase. The Pixels and tablets are really beautiful, especially the big screen of the Pixel Fold.
Personally, I missed having more mobile content, especially Flutter, but since the main subject was something else, I also took the chance to learn about other topics.
In previous years I watched the event online, and it was a really cool experience to see everything in person. If you have the chance to go, it's well worth it! Keep in mind that invitations are very limited; since Google IO always happens around the same time of year, just keep an eye out close to those dates.
| tyemy |
1,880,966 | the Core Azure Architectural Components | The core architectural components of Microsoft Azure include: Azure Regions: These are... | 0 | 2024-06-08T01:00:39 | https://dev.to/oluwole_akins/the-core-azure-architectural-components-8g6 | azure, architecture | The core architectural components of Microsoft Azure include:
1. Azure Regions: These are geographically distributed datacenter locations where Azure resources are hosted. Each region consists of multiple datacenters.
2. Availability Zones: These provide redundancy and high availability within a region. Availability Zones are physically separate datacenters with independent power, cooling, and networking.
3. Resource Groups: These are logical containers for managing and organizing Azure resources. You can group related resources together for easier management.
4. Azure Resource Manager (ARM): ARM is the control plane for managing and deploying Azure resources. It provides a consistent management layer for creating, updating, and deleting resources.
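As an illustrative sketch (resource names are hypothetical), a minimal ARM template that ARM can deploy into a resource group looks roughly like this:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2023-01-01",
      "name": "examplestorage001",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```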
Oluwole_Akins | oluwole_akins |
1,880,950 | EsLint + TypeScript + Prettier (Flat Config) | Today I woke up with the need to create an email scheduled notification for my personal project using... | 0 | 2024-06-08T00:45:57 | https://dev.to/joshuanr5/eslint-typescript-prettier-flat-config-1bmb | Today I woke up with the need to create an email scheduled notification for my personal project using AWS and I noticed that these last few months I have abandoned NodeJS ... and when I started configuring a Lambda function using SAM (Serverless Application Model), I realized I would need at least a minimal ESLint + TypeScript configuration to feel comfortable coding.
To my surprise, I realized that ESLint has completely changed its configuration system, making use of what they call [Flat Config](https://eslint.org/blog/2022/08/new-config-system-part-2/), so I decided to write this short blog post to set up ESLint, TypeScript, and Prettier from scratch.
_Note: We will use `npm` as the Node package manager, but you can use the one of your choice._
## Prerequisites
- NodeJS version >=20.14.0
## Creating the project
Let's create an empty folder and initialize `npm`.
```bash
mkdir my-project
cd my-project
npm init -y
```
## Installing dependencies
We are going to install the following dependencies, along with their respective @types packages:
- `eslint`
- `@eslint/js` con `@types/eslint__js`
- `typescript`
- `typescript-eslint`
- `prettier`
- `eslint-plugin-prettier`
- `eslint-config-prettier` con `@types/eslint-config-prettier`
```bash
npm install --save-dev eslint @eslint/js @types/eslint__js typescript typescript-eslint prettier eslint-plugin-prettier eslint-config-prettier @types/eslint-config-prettier
```
## Configuring ESLint and Prettier
Once everything is installed, let's start configuring ESLint and Prettier by creating the file `eslint.config.mjs`.
```bash
nano eslint.config.mjs
```
Once it is created, add the full configuration to the file.
{% gist https://gist.github.com/joshnavdev/3634f849314358a58e044ba9edfa2fd0 file=eslint.config.mjs %}
I will explain at a high level what the previous code does, although I still recommend reading the [official ESLint documentation](https://eslint.org/docs/latest/use/configure/).
As we can see, at the top are the imports: `@eslint/js`, which provides the configuration for JavaScript; `typescript-eslint`, which (as you can guess) provides the TypeScript configuration; `eslint-plugin-prettier`, an ESLint plugin that surfaces Prettier issues as ESLint errors; and `eslint-config-prettier`, which, since Prettier and ESLint can have overlapping rules, helps avoid conflicts between them so everything works as expected.
For the ESLint configuration we use the `typescript-eslint` library with its `config` method, an optional helper that accepts multiple ESLint configurations and internally returns everything ESLint requires.
In the first argument of the `config` method we add the whole TypeScript configuration, extending the recommended configurations for both JavaScript and TypeScript.
Since ESLint's Flat Config now allows cascading configuration, I decided that all the Prettier configuration would live in the next configuration argument, extending its recommended configuration.
Now that ESLint is configured, let's configure Prettier by creating the file `.prettierrc.yaml`.
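In case the embedded gist doesn't render, a minimal flat config along these lines (a sketch assuming the packages installed earlier; not necessarily identical to the gist) would be:

```javascript
// eslint.config.mjs — illustrative sketch only
import eslint from "@eslint/js";
import tseslint from "typescript-eslint";
import eslintPluginPrettierRecommended from "eslint-plugin-prettier/recommended";

export default tseslint.config(
  // JavaScript + TypeScript recommended rules
  eslint.configs.recommended,
  ...tseslint.configs.recommended,
  // Prettier last, so it can disable conflicting stylistic rules
  eslintPluginPrettierRecommended
);
```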
```bash
touch .prettierrc.yaml
```
Then add the configuration of your preference; this is an example of my minimal configuration.
{% gist https://gist.github.com/joshnavdev/3634f849314358a58e044ba9edfa2fd0 file=.prettierrc.yaml %}
Well, for now this is the most basic setup I personally need to develop in TypeScript; you can keep adding more plugins, such as for React, Airbnb, etc.
Cheers!
| joshuanr5 | |
1,880,961 | LaCebollaAventurera | Check out this Pen I made! | 0 | 2024-06-08T00:45:44 | https://dev.to/dalelo_gamesygraphics_f/lacebollaaventurera-1hfl | codepen | Check out this Pen I made!
{% codepen https://codepen.io/Dalelo-GAMES-y-GRAPHICS/pen/NWVvrLq %} | dalelo_gamesygraphics_f |
1,880,960 | Learning the Basics of Large Language Model (LLM) Applications with LangChainJS | LangChainJS is a powerful tool for building and operating Large Language Models (LLMs) in JavaScript.... | 0 | 2024-06-08T00:38:38 | https://dev.to/praveencs87/learning-the-basics-of-large-language-model-llm-applications-with-langchainjs-4035 | langchain, langchainjs, javascript, llm | LangChainJS is a powerful tool for building and operating Large Language Models (LLMs) in JavaScript. It’s perfect for creating applications across various platforms, including browser extensions, mobile apps with React Native, and desktop apps with Electron. The popularity of JavaScript among developers, combined with its ease of deployment and scalability, makes LangChainJS an ideal choice for these tasks.
LangChainJS uses a special language to create chains of components called "runnables." These runnables define core methods, input and output types, and enable functionalities like invoking, streaming, batching, and modifying runtime parameters.
## Example: Making a Joke Bot
Here’s a simple example to demonstrate how LangChainJS works:
```
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
const model = new ChatOpenAI({
modelName: "gpt-3.5-turbo-1106"
});
await model.invoke([
new HumanMessage("Tell me a joke.")
]);
// Expected Response:
// "Why don't skeletons fight each other? They don't have the guts!"
```
## Understanding Prompt Templates
Prompt templates are standardized formats used for creating prompts in LLM applications. They include placeholders for variable input, making them reusable and adaptable for different queries. In LangChain, prompt templates are implemented using classes like _PromptTemplate_ and _ChatPromptTemplate_.
```
import { ChatPromptTemplate } from "@langchain/core/prompts";
const prompt = ChatPromptTemplate.fromTemplate(
`What are three good names for a company that makes {product}?`
);
await prompt.format({
product: "colorful socks"
});
// Expected Output:
// "What are three good names for a company that makes colorful socks?"
```
## LangChain Expression Language (LCEL)
LCEL connects different components (runnables) in a sequence, creating workflows where the output of one component becomes the input for another. These runnables come with methods like invoke, stream, and batch.
```
const chain = prompt.pipe(model);
await chain.invoke({
product: "colorful socks"
});
// Expected Response:
// "1. Rainbow Soles\n2. Vivid Footwear Co.\n3. Chromatic Sockworks"
```
## Using Output Parsers
Output parsers transform the chat model output into a different format, such as a simple string.
```
import { StringOutputParser } from "@langchain/core/output_parsers";
const outputParser = new StringOutputParser();
const nameGenerationChain = prompt.pipe(model).pipe(outputParser);
await nameGenerationChain.invoke({
product: "fancy cookies"
});
// Expected Response:
// "1. Gourmet Cookie Creations\n2. Delicate Delights Bakery\n3. Heavenly Sweet Treats Co."
```
## Streaming Responses
The `.stream` method allows handling LLM responses that take a long time to generate, returning the output as an iterable stream.
```
const stream = await nameGenerationChain.stream({
product: "really cool robots",
});
for await (const chunk of stream) {
console.log(chunk);
}
```
## Batch Processing
The `batch` method performs multiple operations concurrently, handling multiple inputs simultaneously.
```
const inputs = [
{ product: "large calculators" },
{ product: "alpaca wool sweaters" }
];
await nameGenerationChain.batch(inputs);
// Expected Response:
// ["1. GiantCalc Co.\n2. MegaMath Devices\n3. JumboCalculations Inc.",
// "1. Alpaca Luxe\n2. Sweater Alpaca\n3. Woolly Alpaca Co."]
```
## Retrieval Augmented Generation (RAG)
RAG combines the capabilities of LLMs with retrieval techniques to generate text with contextual information. It involves loading documents, splitting them for clarity, embedding them into vectors, and storing them in a vector database for efficient retrieval.
## Document Loading with LangChainJS
LangChainJS offers document loaders to collect data from various sources. For example, you can load a GitHub repository:
```
import { GithubRepoLoader } from "langchain/document_loaders/web/github";
import ignore from "ignore";
const loader = new GithubRepoLoader(
"https://github.com/langchain-ai/langchainjs",
{ recursive: false, ignorePaths: ["*.md", "yarn.lock"] }
);
const docs = await loader.load();
console.log(docs.slice(0, 3));
```
## Splitting Documents
LangChainJS provides strategies for splitting documents to ensure coherence and context.
```
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
const splitter = RecursiveCharacterTextSplitter.fromLanguage("js", {
chunkSize: 32,
chunkOverlap: 0,
});
const code = `function helloWorld() {
console.log("Hello, World!");
}
// Call the function
helloWorld();`;
await splitter.splitText(code);
// Expected Output:
// ["function helloWorld() {", 'console.log("Hello, World!");\n}', "// Call the function", "helloWorld();"]
```
## Embedding and Searching
Embedding converts document contents into vectors, which are then stored in a vector database. You can search for relevant chunks using these embeddings.
```
import { OpenAIEmbeddings } from "@langchain/openai";
const embeddings = new OpenAIEmbeddings();
await embeddings.embedQuery("This is some sample text");
// Expected Output:
// An array of numbers representing the embedded text.
```
## Constructing a Retrieval Chain
Create a chain to retrieve documents and generate answers based on user queries.
```
import { RunnableSequence } from "@langchain/core/runnables";
import { ChatOpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";
const retrievalChain = RunnableSequence.from([
{
context: documentRetrievalChain,
question: (input) => input.question,
},
answerGenerationPrompt,
model,
new StringOutputParser(),
]);
const answer = await retrievalChain.invoke({
question: "What are the prerequisites for this course?"
});
console.log(answer);
// Expected Response:
// Detailed answer about the prerequisites for the course.
```
## Handling Follow-up Questions
LangChainJS can handle follow-up questions by saving chat history and rephrasing questions to make them standalone.
```
import { MessagesPlaceholder } from "@langchain/core/prompts";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
import { ChatMessageHistory } from "langchain/stores/message/in_memory";
import { HumanMessage, AIMessage } from "@langchain/core/messages";
const messageHistory = new ChatMessageHistory();
const finalRetrievalChain = new RunnableWithMessageHistory({
runnable: conversationalRetrievalChain,
getMessageHistory: (_sessionId) => messageHistory,
historyMessagesKey: "history",
inputMessagesKey: "question",
});
const originalQuestion = "What are the prerequisites for this course?";
const originalAnswer = await finalRetrievalChain.invoke({
question: originalQuestion,
}, {
configurable: { sessionId: "test" }
});
const finalResult = await finalRetrievalChain.invoke({
question: "Can you list them in bullet point form?",
}, {
configurable: { sessionId: "test" }
});
console.log(finalResult);
// Expected Response:
// List of prerequisites in bullet points.
```
This guide covers the basics of using LangChainJS for building LLM applications, from loading documents and creating prompt templates to constructing retrieval chains and handling follow-up questions. By leveraging these tools, you can create powerful and efficient LLM applications in JavaScript.
| praveencs87 |
1,880,958 | Beginner's Guide to Clocks: Understanding the Essentials | Project:- 6/500 Clocks Project Live Demo Description The clock project is a... | 27,575 | 2024-06-08T00:35:01 | https://dev.to/raajaryan/beginners-guide-to-clocks-understanding-the-essentials-1g0l | javascript, beginners, opensource, tutorial | ## Project:- 6/500 Clocks Project
[Live Demo](https://deepakkumar55.github.io/ULTIMATE-JAVASCRIPT-PROJECT/Basic%20Projects/4-clock)
## Description
The clock project is a simple web application that displays the current time in real-time. It provides users with a convenient way to check the time on their devices without relying on external sources.
## Features
- **Real-Time Updates**: The clock updates automatically to reflect the current time.
- **12-Hour or 24-Hour Format**: Users can choose between a 12-hour or 24-hour time format.
- **Customizable Design**: The clock's design is customizable, allowing users to adjust its appearance to their preferences.
## Technologies Used
- **JavaScript**: Handles the logic for updating the time and providing user interaction.
- **HTML**: Defines the structure of the clock interface.
- **CSS**: Styles the clock interface to enhance its visual appeal and user experience.
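As an illustrative sketch of the 12-hour/24-hour feature (a hypothetical helper, not the project's actual code), the formatting logic could look like this:

```javascript
// Format a Date in 12-hour or 24-hour style, the two modes the clock supports.
function formatTime(date, use24Hour) {
  const pad = (n) => String(n).padStart(2, "0");
  let hours = date.getHours();
  let suffix = "";
  if (!use24Hour) {
    suffix = hours >= 12 ? " PM" : " AM";
    hours = hours % 12 || 12; // 0 becomes 12 in 12-hour style
  }
  return `${pad(hours)}:${pad(date.getMinutes())}:${pad(date.getSeconds())}${suffix}`;
}

// In the real app this would run on an interval, e.g.:
// setInterval(() => { element.textContent = formatTime(new Date(), true); }, 1000);
console.log(formatTime(new Date(2024, 0, 1, 13, 5, 9), true));  // "13:05:09"
console.log(formatTime(new Date(2024, 0, 1, 13, 5, 9), false)); // "01:05:09 PM"
```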
## Setup
Follow these steps to set up and run the clock project on your local machine:
1. **Clone the repository**:
```bash
git clone https://github.com/deepakkumar55/ULTIMATE-JAVASCRIPT-PROJECT.git
```
2. **Navigate to the project directory**:
```bash
cd "Basic Projects/4-clock"
```
3. **Open `index.html` in your web browser**:
You can open the `index.html` file directly in your web browser by double-clicking on it or by using a live server extension in your code editor (like Live Server for VSCode).
## Contribution
Contributions to the clock project are welcome! Follow these steps to contribute:
1. **Fork the repository**: Click on the 'Fork' button at the top right corner of the repository page.
2. **Clone the forked repository**:
```bash
git clone https://github.com/deepakkumar55/ULTIMATE-JAVASCRIPT-PROJECT.git
```
3. **Create a new branch**: Make sure your fork is up-to-date with the latest changes.
```bash
git checkout -b feature-yourfeature
```
4. **Make your changes**: Implement your new feature or bug fix.
5. **Commit your changes**:
```bash
git add .
git commit -m "Description of your changes"
```
6. **Push to your branch**:
```bash
git push origin feature-yourfeature
```
7. **Open a Pull Request**: Navigate to the original repository and open a pull request from your forked repository. Provide a detailed description of your changes and any relevant information.
## Get in Touch
If you have any questions or need further assistance, feel free to open an issue on GitHub or contact us directly. Your contributions and feedback are highly appreciated!
---
Thank you for your interest in the Clocks project. Together, we can build a more robust and feature-rich application. Happy coding! | raajaryan |
1,880,956 | Exploring Data Structures and Algorithms in C | Introduction Data structures and algorithms are fundamental concepts in computer science... | 0 | 2024-06-08T00:32:28 | https://dev.to/kartikmehta8/exploring-data-structures-and-algorithms-in-c-2am9 | webdev, javascript, beginners, programming | ## Introduction
Data structures and algorithms are fundamental concepts in computer science that enable efficient storage and retrieval of data. They are essential in the development of efficient and optimized software and play a crucial role in problem-solving. In this article, we will explore the basic data structures and algorithm implementations in the C programming language.
## Advantages of Using C for Data Structures and Algorithms
1. **Efficiency and Speed:** C is a low-level language, making it closer to hardware and allowing for efficient memory usage and faster execution. This is particularly beneficial for performance-critical applications.
2. **Built-in Functionalities:** C offers a wide range of built-in functionalities such as pointers, arrays, and structures that can be utilized for implementing data structures and algorithms, providing a strong foundation for custom solutions.
## Disadvantages of Using C
1. **Complex Syntax and Manual Memory Management:** Working with C can be challenging, especially for beginners due to its complex syntax and the need for manual memory management. Understanding pointers and memory allocation can be daunting and time-consuming.
## Features of C for Data Structures and Algorithms
C offers a variety of data structures and algorithm implementations such as arrays, linked lists, stacks, queues, trees, and graphs. These structures have their unique features and are suitable for solving different types of problems. Moreover, C allows for the creation of custom data structures and algorithms, making it a versatile language for problem-solving.
### Examples of Data Structures in C
#### Arrays
```c
#include <stdio.h>
int main() {
int array[5] = {1, 2, 3, 4, 5};
for(int i = 0; i < 5; i++) {
printf("%d ", array[i]);
}
return 0;
}
```
#### Linked List
```c
#include <stdio.h>
#include <stdlib.h>
typedef struct node {
int data;
struct node *next;
} Node;
Node* createNode(int data) {
Node* newNode = (Node*) malloc(sizeof(Node));
if (!newNode) return NULL;
newNode->data = data;
newNode->next = NULL;
return newNode;
}
int main() {
Node* head = createNode(1);
head->next = createNode(2);
head->next->next = createNode(3);
Node* current = head;
while (current != NULL) {
printf("%d ", current->data);
current = current->next;
}
return 0;
}
```
## Conclusion
Exploring data structures and algorithms in C can be highly beneficial, especially for performance-driven applications. While C may have its challenges, the depth of its built-in functionalities makes it a popular choice among developers for data structure and algorithm implementation. With a strong understanding of C fundamentals, one can efficiently build complex and optimized software solutions. | kartikmehta8 |
1,880,954 | Meet the world's most beautiful beaches | This is a submission for [Frontend Challenge... | 0 | 2024-06-08T00:30:25 | https://dev.to/elmerurbina/best-beaches-in-the-world-5ghd | devchallenge, frontendchallenge, css, javascript | _This is a submission for [Frontend Challenge v24.04.17](https://dev.to/challenges/frontend-2024-05-29), Glam Up My Markup: Beaches_
<!-- Tell us what you built and what you were looking to achieve. -->
## Demo
Screenshots of the template.




[See the code on GitHub ](https://github.com/elmerurbina/markUp)
<!-- Tell us about your process, what you learned, anything you are particularly proud of, what you hope to do next, etc. -->
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
The code is released under the MIT license.
<!-- We encourage you to consider adding a license for your code. -->
<!-- Don't forget to add a cover image to your post (if you want). -->
<!-- Thanks for participating! --> | elmerurbina |
1,700,361 | Testing from the trenches: Using a fixed "clock" | Let's learn how to use a fixed clock to make our classes more testable. | 0 | 2024-06-08T00:28:26 | https://dev.to/hugaomarques/testando-das-trincheiras-usando-um-clock-fixo-4gl6 | java, testes, oop, patterns | ---
title: Testing from the trenches: Using a fixed "clock"
published: true
description: Let's learn how to use a fixed clock to make our classes more testable.
tags: #java #testes #oop #patterns
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fkyo1zxrt96bhev8c0h0.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2023-12-17 14:47 +0000
---
Another quick one about testing. One of the most common problems I see is the use of variable time inside code. What do I mean? Imagine the following example:
```java
@Component
public class TaskScheduler {
private static final LocalTime START_OF_WORKING_DAY = LocalTime.of(8, 0);
private static final LocalTime END_OF_WORKING_DAY = LocalTime.of(22, 0);
public void scheduleTask(Task task) {
if (shouldSchedule()) {
executeTaskNow(task);
}
}
public static boolean shouldSchedule() {
// Get the current time in the system default time zone
LocalDateTime now = LocalDateTime.ofInstant(Instant.now(), ZoneId.systemDefault());
LocalTime currentTime = now.toLocalTime();
// Check if the current time is within the working hours
return !currentTime.isBefore(START_OF_WORKING_DAY) && !currentTime.isAfter(END_OF_WORKING_DAY);
}
}
```
What's the problem with the code above? Because of the `Instant.now()` in the middle of your code, you can't test your method! Since your logic is non-deterministic and depends on time, your test will pass or fail depending on the time at which it runs.
## How do we fix this problem?
A very simple alternative, available since Java 8, is to use the `Clock` class to inject the dependency that controls time.
With our example above, the code becomes:
```java
@Component
public class TaskScheduler {
private static final LocalTime START_OF_WORKING_DAY = LocalTime.of(8, 0);
private static final LocalTime END_OF_WORKING_DAY = LocalTime.of(22, 0);
private final Clock clock;
@Autowired
public TaskScheduler(Clock clock) {
this.clock = clock;
}
public void scheduleTask(Task task) {
if (shouldSchedule()) {
executeTaskNow(task);
}
}
    public boolean shouldSchedule() {
        // Get the current time from the injected clock
        LocalDateTime now = LocalDateTime.now(clock);
LocalTime currentTime = now.toLocalTime();
// Check if the current time is within the working hours
return !currentTime.isBefore(START_OF_WORKING_DAY) && !currentTime.isAfter(END_OF_WORKING_DAY);
}
}
```
This way, you can write tests that pass in a `Clock` set to whatever time you want.
```java
@Test
public void testIsNowWithinWorkingHours_withinHours() {
// Arrange: set a fixed instant within working hours
Instant fixedInstant = LocalDateTime.of(2024, 6, 1, 10, 0)
.toInstant(ZoneOffset.UTC);
    Clock fixedClock = Clock.fixed(fixedInstant, ZoneOffset.UTC);
TaskScheduler t = new TaskScheduler(fixedClock);
// Act: call the method with the fixed clock
boolean result = t.shouldSchedule();
// Assert: should be within working hours
assertTrue(result, "The time should be within working hours");
}
```
## Interesting tips!
### 1. I use `Spring`; how do I create this `Clock` to be injected?
Simple: you can declare your system-wide default `Clock` in a `@Configuration` class.
```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.time.Clock;
@Configuration
public class AppConfiguration {
@Bean
public Clock clock() {
        // Returns the system clock in the system default zone
return Clock.systemDefaultZone();
}
}
```
Then, in your class, just `@Autowired` the constructor as we did in our example above.
### 2. I use my default constructor in 50 different places; do I have to change all of them to inject the Clock now?
Not at all, young padawan! A nice trick is to add an overloaded constructor:
```java
@Component
public class TaskScheduler {
private static final LocalTime START_OF_WORKING_DAY = LocalTime.of(8, 0);
private static final LocalTime END_OF_WORKING_DAY = LocalTime.of(22, 0);
private final Clock clock;
    // This annotation tells Spring to use this constructor
@Autowired
public TaskScheduler() {
this.clock = Clock.systemDefaultZone();
}
    // This constructor can be used for tests.
public TaskScheduler(Clock clock) {
this.clock = clock;
    }
}
```
And that's it! With the two constructors, you keep the class working everywhere it was already used, while also making it simple to write automated tests.
## Summary
1. Avoid using variable time in the middle of your code.
2. Use dependency injection to provide your clock.
3. Use default constructors plus constructor overloading so you can add tests with minimal refactoring.
I hope you're enjoying these quick testing tips.
Soon, I'll also write about my learnings on parallelism!
Happy coding!
| hugaomarques |
1,880,949 | RECOVER YOUR STOLEN CRYPTO THROUGH RESILIENT SHIELD RECOVERY | People are falling victim to it. I am so delighted to share my incredible experience with the... | 0 | 2024-06-08T00:11:10 | https://dev.to/chris_henry_67381bd2daad8/recover-your-stolen-crypto-through-resilient-shield-recovery-4l9p | People are falling victim to it. I am so delighted to share my incredible experience with the recovery company called RESILIENT SHIELD RECOVERY. I invested 145,000 in a fake investment. After falling victim. I was devastated and unsure if I would ever see my hard-earned money again. I’m happy I found RESILIENT SHIELD RECOVERY on google who helped me recover my cryptocurrency. I can’t forget how RESILIENT SHIELD RECOVERY helped me recover my cryptocurrency. I never would have imagined that cryptocurrency could be recovered. To anyone who finds themselves in a similar predicament, I recommend RESILIENT SHIELD RECOVERY. Their unwavering commitment to their client’s well-being, combined with their unparalleled expertise in cryptocurrency recovery. You can also contact them via
Email: resilientshieldrecovery@contractor.net
Whatsapp:+1(936)244‑3264 | chris_henry_67381bd2daad8 | |
1,880,947 | Choosing the Right Cybersecurity Company: Why Techgenies Stands Out | In today’s world, keeping your digital stuff safe is super important. That’s where cybersecurity... | 0 | 2024-06-08T00:05:20 | https://dev.to/andrew_morgan_fef3e706051/choosing-the-right-cybersecurity-company-why-techgenies-stands-out-1dea | security, cybersecurity | In today’s world, keeping your digital stuff safe is super important. That’s where cybersecurity companies come in. They help protect your computers, phones, and all your online stuff from bad guys who want to mess things up. One of the best in the business is Techgenies. Here’s why they’re awesome:
## What Cybersecurity Companies Do
Cybersecurity companies are like the superheroes of the internet. They use special tools and tricks to stop hackers and other bad people from stealing your stuff or messing with your computer. They do things like:
Finding and Stopping Bad Guys: [Cybersecurity experts](https://techgenies.com/cybersecurity/) are like detectives. They hunt down bad guys trying to break into your computer systems and stop them in their tracks.
Fixing Problems Fast: If something goes wrong and your computer gets attacked, cybersecurity folks jump into action to fix things ASAP. They’re like the firefighters of the internet!
Making Sure Everything is Super Secure: Cybersecurity pros make sure all your digital stuff is locked up tight. They set up special locks and alarms to keep the bad guys out.
## Why Techgenies is Awesome
Techgenies isn’t your average cybersecurity company. They’re like the Avengers of the internet world, with all the right skills to keep your stuff safe. Here’s why they’re so great:
## 1. They’re Experts
Techgenies knows their stuff. They’ve been doing this for a long time and have super-smart people who know all about the latest tricks hackers use. You can trust them to keep your digital world safe.
## 2. They’ve Got Your Back 24/7
Bad stuff can happen at any time, day or night. That’s why [Techgenies](https://techgenies.com/) is always ready to help. If something goes wrong, they’re there to fix it, no matter what time it is. It’s like having your own personal digital bodyguard!
## 3. They’re Super Friendly
Some techy people can be a bit intimidating, but not the folks at Techgenies. They’re super friendly and explain things in a way that’s easy to understand. You don’t need to be a computer genius to work with them.
## 4. They’re Always Learning
The digital world is always changing, and so are the bad guys. But Techgenies stays one step ahead by always learning new things and keeping up with the latest tech trends. They’re like digital ninjas, always ready for whatever comes their way.
## Conclusion
When it comes to keeping your digital stuff safe, you need a superhero on your side. That’s where Techgenies comes in. With their expertise, round-the-clock support, friendliness, and dedication to staying ahead of the bad guys, they’re the perfect choice for all your cybersecurity needs. Trust Techgenies to keep your digital world safe and sound! | andrew_morgan_fef3e706051 |
1,880,946 | AI's antibiotic breakthrough | Introduction: Researchers just published a new study detailing the use of AI to predict close to 1M ... | 0 | 2024-06-08T00:04:58 | https://dev.to/sam15x6/ais-antibiotic-breakthrough-54nn | **Introduction:** Researchers just published a new study detailing the use of AI to predict close to 1M new antibiotics hidden within tiny microbes all over the world, uncovering new potential treatments against bacteria and superbugs.
**The details:**
Researchers used AI to analyze publicly available data on over 100,000 different genomes and meta-genomes.
The AI then predicted which parts of the microbial genomes could potentially produce antibiotic compounds, generating a list of nearly one million candidates.
100 of the AI-predicted drug candidates were tested in the lab, with 79 of them being a potential antibiotic.
The paper’s author Cesar de la Fuente said the findings are “the largest antibiotic discovery ever”, accelerating the process from years to just hours.
**Why it matters:** As the world faces growing threats from antibiotic-resistant bacteria, AI’s ability to unlock millions of new potential treatments could be a lifeline toward staying ahead in the race to outsmart superbugs responsible for millions of deaths every year. | sam15x6 | |
1,880,945 | China's new Sora rival is here | Introduction: Chinese tech firm Kuaishou just introduced KLING, a new text-to-video AI model... | 0 | 2024-06-08T00:01:56 | https://dev.to/sam15x6/chinas-new-sora-rival-is-here-lco | ai, machinelearning, career, discuss | **Introduction**: Chinese tech firm Kuaishou just introduced KLING, a new text-to-video AI model capable of generating high-quality videos up to 2 minutes long with outputs that appear to rival OpenAI’s still-unreleased Sora.
**The details:**
KLING can produce videos at 1080p resolution with a maximum length of 2 minutes, surpassing the 1-minute Sora videos demoed by OpenAI.
KLING’s demos include realistic outputs like a man eating noodles and scenic shots, as well as surreal clips like animals in clothes.
The model uses a 3D space-time attention system to simulate complex motion and physical interactions that better mimic the real world.
The model is currently available to Chinese-based users as a public demo on the KWAI iOS app.
**Why it matters:** These generations are even more mind-blowing when you consider that Will Smith’s spaghetti-eating abomination was barely a year ago. With users still anxiously waiting for the public release of Sora, other competitors are stepping in — and the AI video landscape looks like it’s about to heat up in a major way. | sam15x6 |
1,880,882 | Caching in ASP.NET Core: Improving Application Performance | Caching is one of the simplest techniques to significantly improve your application's performance.... | 0 | 2024-06-09T18:36:12 | https://www.milanjovanovic.tech/blog/caching-in-aspnetcore-improving-application-performance | aspnetcore, caching, dotnet, redis | ---
title: Caching in ASP.NET Core: Improving Application Performance
published: true
date: 2024-06-08 00:00:00 UTC
tags: aspnetcore,caching,dotnet,redis
canonical_url: https://www.milanjovanovic.tech/blog/caching-in-aspnetcore-improving-application-performance
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/af40gkmtj4trk0dx6hii.png
---
Caching is one of the simplest techniques to significantly improve your application's performance. It's the process of temporarily storing data in a faster access location. You will typically cache the results of expensive operations or frequently accessed data.
Caching allows subsequent requests for the same data to be served from the cache instead of fetching the data from its source.
ASP.NET Core offers several types of caches, such as `IMemoryCache`, `IDistributedCache`, and the upcoming `HybridCache` (.NET 9).
In this newsletter, we will explore how to implement [caching in ASP.NET Core](https://learn.microsoft.com/en-us/aspnet/core/performance/caching/memory) applications.
## How Caching Improves Application Performance
Caching improves your application's performance by reducing latency and server load while enhancing scalability and user experience.
- **Faster data retrieval**: Cached data can be accessed much faster than retrieving it from the source (like a database or an API). Caches are typically stored in memory (RAM).
- **Fewer database queries**: Caching frequently accessed data reduces the number of database queries. This reduces the load on the database server.
- **Lower CPU usage**: Rendering web pages or processing API responses can consume significant CPU resources. Caching the results reduces the need for repetitive CPU-intensive tasks.
- **Handling increased traffic**: By reducing the load on backend systems, caching allows your application to handle more concurrent users and requests.
- **Distributed caching**: Distributed cache solutions like [Redis](https://redis.io/) enable scaling the cache across multiple servers, further improving performance and resilience.
In a recent project I worked on, we used Redis to scale to more than 1,000,000 users. We only had one SQL Server instance with a read-replica for reporting. The power of caching, eh?
## Caching Abstractions in ASP.NET Core
ASP.NET Core provides two primary abstractions for working with caches:
- `IMemoryCache`: Stores data in the memory of the web server. Simple to use but not suitable for distributed scenarios.
- `IDistributedCache`: Offers a more robust solution for distributed applications. It allows you to store cached data in a distributed cache like Redis.
We have to register these services with DI to use them. `AddDistributedMemoryCache` will configure the in-memory implementation of `IDistributedCache`, which, despite the name, isn't actually distributed.
```csharp
builder.Services.AddMemoryCache();
builder.Services.AddDistributedMemoryCache();
```
Here's how you can use the `IMemoryCache`. We will first check if the cached value is present and return it directly if it's there. Otherwise, we must fetch the value from the database and cache it for subsequent requests.
```csharp
app.MapGet(
"products/{id}",
(int id, IMemoryCache cache, AppDbContext context) =>
{
if (!cache.TryGetValue(id, out Product product))
{
product = context.Products.Find(id);
var cacheEntryOptions = new MemoryCacheEntryOptions()
.SetAbsoluteExpiration(TimeSpan.FromMinutes(10))
.SetSlidingExpiration(TimeSpan.FromMinutes(2));
cache.Set(id, product, cacheEntryOptions);
}
return Results.Ok(product);
});
```
Cache expiration is another important topic to discuss. We want to remove cache entries that aren't used and become stale. You can pass in the `MemoryCacheEntryOptions`, allowing you to configure cache expiration. For example, we can set the `AbsoluteExpiration` and `SlidingExpiration` values to control when the cache entry will expire.
## Cache-Aside Pattern
The cache-aside pattern is the most common caching strategy. Here's how it works:
1. **Check the cache**: Look for the requested data in the cache.
2. **Fetch from source (if cache miss)**: If the data isn't in the cache, fetch it from the source.
3. **Update the cache**: Store the fetched data in the cache for subsequent requests.

Here's how you can implement the cache-aside pattern as an extension method for `IDistributedCache`:
```csharp
public static class DistributedCacheExtensions
{
public static DistributedCacheEntryOptions DefaultExpiration => new()
{
AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(2)
};
public static async Task<T> GetOrCreateAsync<T>(
this IDistributedCache cache,
string key,
Func<Task<T>> factory,
DistributedCacheEntryOptions? cacheOptions = null)
{
var cachedData = await cache.GetStringAsync(key);
if (cachedData is not null)
{
return JsonSerializer.Deserialize<T>(cachedData);
}
var data = await factory();
await cache.SetStringAsync(
key,
JsonSerializer.Serialize(data),
cacheOptions ?? DefaultExpiration);
return data;
}
}
```
We're using `JsonSerializer` to manage serialization to and from a JSON string. The `SetStringAsync` method also accepts a `DistributedCacheEntryOptions` argument to control cache expiration.
Here's how we would use this extension method:
```csharp
app.MapGet(
"products/{id}",
    async (int id, IDistributedCache cache, AppDbContext context) =>
    {
        var product = await cache.GetOrCreateAsync($"products-{id}", async () =>
{
var productFromDb = await context.Products.FindAsync(id);
return productFromDb;
});
return Results.Ok(product);
});
```
## Pros and Cons of In-Memory Caching
Pros:
- Extremely fast
- Simple to implement
- No external dependencies
Cons:
- Cache data is lost if the server restarts
- Limited to the memory (RAM) of a single server
- Cache data is not shared across multiple instances of your application
## Distributed Caching With Redis
[Redis](https://redis.io/) is a popular in-memory data store often used as a high-performance distributed cache. To use Redis in your ASP.NET Core application, you can use the `StackExchange.Redis` library.
However, there's also the `Microsoft.Extensions.Caching.StackExchangeRedis` library, allowing you to integrate Redis with `IDistributedCache`.
```powershell
Install-Package Microsoft.Extensions.Caching.StackExchangeRedis
```
Here's how you can configure it with DI by providing a connection string to Redis:
```csharp
string connectionString = builder.Configuration.GetConnectionString("Redis");
builder.Services.AddStackExchangeRedisCache(options =>
{
options.Configuration = connectionString;
});
```
An alternative approach is to register an `IConnectionMultiplexer` as a service. Then, we will use it to provide a function for the `ConnectionMultiplexerFactory`.
```csharp
string connectionString = builder.Configuration.GetConnectionString("Redis");
IConnectionMultiplexer connectionMultiplexer =
ConnectionMultiplexer.Connect(connectionString);
builder.Services.AddSingleton(connectionMultiplexer);
builder.Services.AddStackExchangeRedisCache(options =>
{
options.ConnectionMultiplexerFactory =
() => Task.FromResult(connectionMultiplexer);
});
```
Now, when you inject `IDistributedCache`, it will use Redis under the hood.
## Cache Stampede and HybridCache
The in-memory cache implementations in ASP.NET Core are susceptible to race conditions, which can cause a cache stampede. A [cache stampede](https://en.wikipedia.org/wiki/Cache_stampede) happens when concurrent requests encounter a cache miss and try to fetch the data from the source. This can overload your application and negate the benefits of caching.
Locking is one solution for the cache stampede problem. .NET offers many options for [locking and concurrency control](https://milanjovanovic.tech/blog/introduction-to-locking-and-concurrency-control-in-dotnet-6). The most commonly used locking primitives are the `lock` statement and the `Semaphore` (or `SemaphoreSlim`) class.
Here's how we could use `SemaphoreSlim` to introduce locking before fetching data:
```csharp
public static class DistributedCacheExtensions
{
private static readonly SemaphoreSlim Semaphore = new SemaphoreSlim(1, 1);
// Arguments omitted for brevity
public static async Task<T> GetOrCreateAsync<T>(...)
{
// Fetch data from cache, and return if present
        // Cache miss
        T data;
        try
        {
            await Semaphore.WaitAsync();

            data = await factory();
await cache.SetStringAsync(
key,
JsonSerializer.Serialize(data),
cacheOptions ?? DefaultExpiration);
}
finally
{
Semaphore.Release();
}
return data;
}
}
```
The previous implementation has a lock contention issue since all requests have to wait for the semaphore. A much better solution would be locking based on the `key` value.
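Key-based locking isn't shown in the article, but here is a minimal sketch of the idea (the `KeyedSemaphore` type and its `RunLockedAsync` method are names I made up for illustration): map each cache key to its own `SemaphoreSlim` through a `ConcurrentDictionary`, so only requests for the *same* key wait on each other.

```csharp
using System.Collections.Concurrent;

public static class KeyedSemaphore
{
    // One semaphore per cache key: requests for different keys don't block each other.
    private static readonly ConcurrentDictionary<string, SemaphoreSlim> Semaphores = new();

    public static async Task<T> RunLockedAsync<T>(string key, Func<Task<T>> factory)
    {
        // GetOrAdd is atomic per key, so concurrent callers share a single semaphore.
        var semaphore = Semaphores.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));

        await semaphore.WaitAsync();
        try
        {
            return await factory();
        }
        finally
        {
            semaphore.Release();
        }
    }
}
```

Note that this sketch never removes semaphores, so it can grow without bound for a large key space; a production version would also re-check the cache after acquiring the lock so that only the first waiter executes the factory.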
.NET 9 introduces a new caching abstraction called `HybridCache`, which aims to solve the shortcomings of `IDistributedCache`. Learn more about this in the [Hybrid cache documentation](https://learn.microsoft.com/en-us/aspnet/core/performance/caching/hybrid).
## Summary
Caching is a powerful technique for improving web application performance. ASP.NET Core's caching abstractions make it easy to implement various caching strategies.
We can choose between `IMemoryCache` for in-memory caching and `IDistributedCache` for distributed caching.
Here are a few guidelines to wrap up this week's issue:
- Use `IMemoryCache` for simple, in-memory caching
- Implement the cache aside pattern to minimize database hits
- Consider Redis as a high-performance distributed cache implementation
- Use `IDistributedCache` for sharing cached data across multiple applications
That's all for today.
See you next week.
* * *
**P.S. Whenever you're ready, there are 3 ways I can help you:**
1. [**Modular Monolith Architecture (NEW):**](https://www.milanjovanovic.tech/modular-monolith-architecture?utm_source=dev.to&utm_medium=website&utm_campaign=cross-posting) Join 600+ engineers in this in-depth course that will transform the way you build modern systems. You will learn the best practices for applying the Modular Monolith architecture in a real-world scenario.
2. [**Pragmatic Clean Architecture:**](https://www.milanjovanovic.tech/pragmatic-clean-architecture?utm_source=dev.to&utm_medium=website&utm_campaign=cross-posting) Join 2,750+ students in this comprehensive course that will teach you the system I use to ship production-ready applications using Clean Architecture. Learn how to apply the best practices of modern software architecture.
3. [**Patreon Community:**](https://www.patreon.com/milanjovanovic) Join a community of 1,050+ engineers and software architects. You will also unlock access to the source code I use in my YouTube videos, early access to future videos, and exclusive discounts for my courses. | milanjovanovictech |
1,880,944 | .NET Monthly Roundup - May 2024 - .NET at Build, .NET Aspire GA, and more! | Welcome to our May .NET Monthly Roundup with Jon Galloway! In just 3 minutes, Jon breaks... | 0 | 2024-06-07T23:57:44 | https://dev.to/jongalloway/net-monthly-roundup-may-2024-net-at-build-net-aspire-ga-and-more-3kne | dotnet, aspire, visualstudio | {% embed https://www.youtube.com/watch?v=_eO-1zPdB-U %}
Welcome to our May .NET Monthly Roundup with Jon Galloway! In just 3 minutes, Jon breaks down the latest news from the month of May 2024 that .NET developers need to know.
### Top links
📚[All the links](http://aka.ms/dnmr-2405)
🌟.NET at Build 2024🌟
➡️[.NET Announcements and Updates from Microsoft Build 2024](https://devblogs.microsoft.com/dotnet/dotnet-build-2024-announcements/)
➡️[Catch Up on Microsoft Build 2024: Essential Sessions for .NET Developers](https://devblogs.microsoft.com/dotnet/catching-up-on-microsoft-build-2024-essential-sessions-for-dotnet-developers/)
➡️[.NET at Microsoft Build 2024 - YouTube](https://www.youtube.com/playlist?list=PLdo4fOcmZ0oUZz7p8H1HsQjgv5tRRIvAS)
🚢.NET Aspire Release🚢
➡️[General Availability of .NET Aspire: Simplifying .NET Cloud-Native Development](https://devblogs.microsoft.com/dotnet/dotnet-aspire-general-availability/)
➡️[Welcome to .NET Aspire! - YouTube](https://www.youtube.com/playlist?list=PLdo4fOcmZ0oUfIayQMrRqaSL55Rkck-GD)
🛠️Tools Updates🛠️
➡️[Visual Studio 2022 Release Notes | Microsoft Learn](https://learn.microsoft.com/en-us/visualstudio/releases/2022/release-notes#17100--visual-studio-2022-version-17100)
➡️[Copilot 2024 Series - Visual Studio Blog](https://devblogs.microsoft.com/visualstudio/category/visual-studio/copilot-2024-series/)
➡️[Mastering Slash Commands with GitHub Copilot in Visual Studio - Visual Studio Blog](https://devblogs.microsoft.com/visualstudio/mastering-slash-commands-with-github-copilot-in-visual-studio/)
➡️[Package Management & improved .NET Aspire support come to C# Dev Kit](https://devblogs.microsoft.com/dotnet/may-release-of-csharp-dev-kit/)
🔝Recommended Posts and Videos🔝
➡️[Refactor your code with C# collection expressions](https://devblogs.microsoft.com/dotnet/refactor-your-code-with-collection-expressions/)
➡️[Deep .NET - YouTube](https://www.youtube.com/playlist?list=PLdo4fOcmZ0oX8eqDkSw4hH9cSehrGgdr1)
🤝Community News🤝
➡️[Updates and other news from the .NET Foundation](https://mailchi.mp/dotnetfoundation/may2024)
### What did you think?
With so much new information coming at you each month, we're looking at ways to bring you the highlights. Let us know if this helps, and how we can do it better next month!
Jon | jongalloway |
1,880,943 | Exception caught by widgets library, Incorrect use of ParentDataWidget. | Hi Every one, I'm from Indonesia i'm a newbie in programming... I have working with a table while... | 0 | 2024-06-07T23:55:37 | https://dev.to/ahmad_rifai_54a20be09025e/exception-caught-by-widgets-library-incorrect-use-of-parentdatawidget-1219 | singlechildscrollview, api, pageview, flutter | Hi Every one, I'm from Indonesia
I'm a newbie in programming...
I have been working on a table whose data comes from an API (in Flutter/Dart). I made a scrollable table with a fixed first row (the column header). The table works correctly, but it always shows this message in the debug console:
Exception caught by widgets library ═══════════════════════════════════
Incorrect use of ParentDataWidget.
Is there anybody who can help me solve this? Even though it functions correctly, this message may decrease application performance or turn into an error some day.
My code is like this:
```
// ignore_for_file: camel_case_types, prefer_interpolation_to_compose_strings
import 'package:bps_cilacap/restAPI/repository_penduduk_kecamatan.dart';
import 'package:flutter/material.dart';
import 'package:bps_cilacap/format_angka.dart';
class JumlahPendudukKecamatanA extends StatefulWidget {
const JumlahPendudukKecamatanA({Key? key}) : super(key: key);
@override
State<JumlahPendudukKecamatanA> createState() =>
_JumlahPendudukKecamatanAState();
}
RepositoryJumlahPendudukKecamatan jumlahPendudukKecamatan =
RepositoryJumlahPendudukKecamatan();
class _JumlahPendudukKecamatanAState extends State<JumlahPendudukKecamatanA> {
@override
Widget build(BuildContext context) {
final screenHeight = MediaQuery.of(context).size.height -
MediaQuery.of(context).padding.top -
MediaQuery.of(context).padding.bottom;
// ignore: unused_local_variable
final screenWidth = MediaQuery.of(context).size.width -
MediaQuery.of(context).padding.left -
MediaQuery.of(context).padding.right;
return Scaffold(
body: FutureBuilder(
future: jumlahPendudukKecamatan.getData(),
builder: (context, snapshot) {
if (snapshot.hasData) {
List isipendudukkecamatan = snapshot.data as List;
return PageView.builder(
itemCount: 1,
itemBuilder: (context, index) {
String kec1 = " 1. " + isipendudukkecamatan[index = 0].kecamatan;
String kec2 = " 2. " + isipendudukkecamatan[index = 1].kecamatan;
…
……
String kec24 = "24. " + isipendudukkecamatan[index = 23].kecamatan;
String kab = " " + isipendudukkecamatan[index = 24].kecamatan;
int lk1 = int.parse(isipendudukkecamatan[index = 0].lk1);
int lk2 = int.parse(isipendudukkecamatan[index = 1].lk1);
……
…..
int lk24 = int.parse(isipendudukkecamatan[index = 23].lk1);
int lkTotal = int.parse(isipendudukkecamatan[index = 24].lk1);
int pr1 = int.parse(isipendudukkecamatan[index = 0].pr1);
int pr2 = int.parse(isipendudukkecamatan[index = 1].pr1);
int pr3 = int.parse(isipendudukkecamatan[index = 2].pr1);
………
………
int pr24 = int.parse(isipendudukkecamatan[index = 23].pr1);
int prTotal = int.parse(isipendudukkecamatan[index = 24].pr1);
return Scaffold(
body: Column(
children: <Widget>[
//This is the part of Fixed Row Header
Row(
children: [
Flexible(
fit: FlexFit.tight,
flex: 4,
child: Container(
height: screenHeight * 0.06,
padding: const EdgeInsets.symmetric(
horizontal: 2, vertical: 10),
color: Colors.green,
child: const Center(
child: Text(
"Kecamatan",
textAlign: TextAlign.center,
style: TextStyle(
color: Colors.white,
fontWeight: FontWeight.bold),
),
),
),
),
Flexible(
fit: FlexFit.tight,
flex: 2,
child: Container(
height: screenHeight * 0.06,
padding: const EdgeInsets.symmetric(
horizontal: 2, vertical: 10),
color: Colors.green,
child: const Center(
child: Text(
"Lk",
style: TextStyle(
color: Colors.white,
fontWeight: FontWeight.bold),
),
),
),
),
Flexible(
fit: FlexFit.tight,
flex: 2,
child: Container(
height: screenHeight * 0.06,
padding: const EdgeInsets.symmetric(
horizontal: 2, vertical: 10),
color: Colors.green,
child: const Center(
child: Text(
"Pr",
style: TextStyle(
color: Colors.white,
fontWeight: FontWeight.bold),
),
),
),
),
Flexible(
fit: FlexFit.tight,
flex: 2,
child: Container(
height: screenHeight * 0.06,
padding: const EdgeInsets.symmetric(
horizontal: 2, vertical: 10),
color: Colors.green,
child: const Center(
child: Text(
"Lk + Pr",
style: TextStyle(
color: Colors.white,
fontWeight: FontWeight.bold),
),
),
),
),
],
),
// This the part of Scrollable Row
Expanded(
child: SingleChildScrollView(
child: Flexible(
child: SizedBox(
width: screenWidth,
height: screenHeight,
child: Column(
children: [
//First Row
Container(
width: screenWidth * 1.0,
height: screenHeight * 0.032,
color: Colors.transparent,
child: Row(
children: [
Flexible(
fit: FlexFit.tight,
flex: 4,
child: Container(
color: Colors.transparent,
padding: const EdgeInsets.only(
right: 10, top: 1, bottom: 1),
child: Text(
kec1,
textAlign: TextAlign.left,
style: const TextStyle(
fontWeight: FontWeight.normal),
),
),
),
Flexible(
fit: FlexFit.tight,
flex: 2,
child: Container(
color: Colors.transparent,
padding: const EdgeInsets.only(
right: 10, top: 1, bottom: 1),
child: Text(
Format.convertTo(lk1, 0),
textAlign: TextAlign.right,
style: const TextStyle(
fontWeight: FontWeight.normal),
),
),
),
Flexible(
fit: FlexFit.tight,
flex: 2,
child: Container(
color: Colors.transparent,
padding: const EdgeInsets.only(
right: 10, top: 1, bottom: 1),
child: Text(
Format.convertTo(pr1, 0),
textAlign: TextAlign.right,
style: const TextStyle(
fontWeight: FontWeight.normal),
),
),
),
Flexible(
fit: FlexFit.tight,
flex: 2,
child: Container(
color: Colors.transparent,
padding: const EdgeInsets.only(
right: 5, top: 1, bottom: 1),
child: Text(
Format.convertTo((lk1 + pr1), 0),
textAlign: TextAlign.right,
style: const TextStyle(
fontWeight: FontWeight.normal),
),
),
),
],
),
),
//Second row
……
//And more row here
…
//21 st row
),
),
),
))
],
));
},
);
}
if (snapshot.hasError) {
return const Text('Database Error');
} else {
return const Center(child: CircularProgressIndicator(strokeWidth: 3));
}
},
));
}
}
```
I hope anyone can help.
PS: Sorry for my English. | ahmad_rifai_54a20be09025e
1,880,942 | Learning from Code Reviews: Fostering Collaboration | Just finished watching a presentation by Derrick Pryor about making code reviews more effective. Who... | 0 | 2024-06-07T23:52:22 | https://dev.to/aborov/learning-from-code-reviews-fostering-collaboration-pg0 | codereview, beginners | Just finished watching a [presentation by Derrick Pryor](https://www.youtube.com/watch?v=PJjmw9TRB7s) about making code reviews more effective. Who knew they were about more than just catching bugs? Apparently, a good code review culture can be a game-changer for learning and teamwork.
Here's the big takeaway for me: it's all about clear communication.
* **Authors gotta set the stage.** Before hitting that submit button, gotta explain why the code changed. This way, reviewers can understand the "what" and "why" behind the code and give better feedback.
* **Reviewers: question, not criticize.** Instead of just pointing out problems, asking questions helps the author understand the reasoning and learn from it. Makes the whole thing more of a conversation than a one-sided critique.
This makes me think about code reviews differently. It's not about people finding mistakes in my code. It's a chance to learn from my teammates and become a better developer.
Also, the presentation covered some other cool stuff like dealing with merge conflicts (yikes!) and asynchronous reviews (reviews that happen over time, not all at once). Definitely some things to keep in mind as I keep coding! | aborov |
1,880,935 | Um relato sobre a prova de Certificação AWS Cloud Practitioner (CLF-C02) em 2024 | Conteúdo para a prova de Certificação Cloud Practitioner (CLF-C02) em 2024 Data: June 7,... | 0 | 2024-06-07T23:29:13 | https://dev.to/coelhodiana/um-relato-sobre-a-prova-de-certificacao-aws-cloud-practitioner-clf-c02-em-2024-p12 | aws, clfc02, cloud | # Study materials for the AWS Cloud Practitioner (CLF-C02) certification exam in 2024
Data: June 7, 2024
Recentemente, tive a satisfação de ser aprovada no exame de certificação AWS Cloud Practitioner (CLF-C02). Este marco significativo na minha jornada de aprendizado só foi possível graças a uma série de recursos e estratégias de estudo que adotei ao longo do caminho. Neste artigo, decidi compartilhar esses conteúdos e dicas que foram extremamente úteis na minha preparação para o exame. Minha esperança é que essas informações possam servir como um guia útil para outros que estão se preparando para esta certificação.
Marquei com estrela (🌟) os conteúdos mais legais.
# Practice Exams
## 🌟 [Simuladão Intensivo AWS Certified Cloud Practitioner](https://www.udemy.com/course/simuladao-intensivo-aws-certified-cloud-practitioner/)
Creator: Miguel Alexsander Do Nascimento
Many questions from these practice exams appeared in a very similar form on the actual test.
## [6 Practice Exams | AWS Certified Cloud Practitioner CLF-C02](https://www.udemy.com/course/practice-exams-aws-certified-cloud-practitioner/)
Creator: Stephane Maarek
## [**AWS Cloud Practitioner (CLF-C02) - Simulados em Português**](https://www.udemy.com/course/aws-practitioner-em-portugues/?couponCode=ST21MT60724)
Creator: Certifica Tech
# Courses
## [**Certificação Amazon AWS Cloud Practitioner CLF-C02**](https://www.udemy.com/course/certificacao-amazon-aws-cloud-practitioner-clf-c02/?couponCode=ST21MT60724)
Creator: Andre Iacono
# 🌟 [Elementos essenciais do AWS Cloud Practitioner (Português) | AWS Cloud Practitioner Essentials (Portuguese) (Na)](https://explore.skillbuilder.aws/learn/course/8287/play;state=%5Bobject%20Object%5D;autoplay=0)
Skillbuilder
In this course, the instructors used interesting analogies that contributed significantly to memorizing the content.
## [AWS Certified Cloud Practitioner: Perguntas de prática oficiais (CLF -C02 Português (Brasil))](https://explore.skillbuilder.aws/#)
Skillbuilder
# Tips
- Whenever I got confused between two services, such as AWS GuardDuty and AWS Inspector, I turned to ChatGPT. I asked it to make analogies, highlight the main differences, and point out the use cases.
- To memorize pillars, perspectives, services, etc., I created acronyms: words formed from the first letter of each item. For example:
**ESSECO** - Pillars of the AWS Well-Architected Framework (from the Portuguese names)
E - Excelência Operacional (Operational Excellence)
S - Segurança (Security)
S - Sustentabilidade (Sustainability)
E - Eficiência de Desempenho (Performance Efficiency)
C - Confiabilidade (Reliability)
O - Otimização de Custo (Cost Optimization)
**PPONGOS** - Perspectives of the AWS CAF (from the Portuguese names)
P - Pessoas (People)
P - Plataforma (Platform)
O - Operações (Operations)
N - Negócios (Business)
G - Governança (Governance)
O - … this 'O' is only there to form the word
S - Segurança (Security)
- I watched the lessons without taking any notes. Afterwards, I wrote down everything I remembered in a notebook by hand. Then I reviewed it, adding or correcting information.
- Flashcards - I made paper cards with the topic on one side and the concept on the other. Whenever I had free time, I reviewed them, separating the ones I had already memorized from the ones I still struggled with, so I could review the latter more often.
- I took several practice exams, always reviewing the questions I got wrong. Close to the exam date, I was scoring around 70-80% on each one. I also took practice exams in English, which were harder; on those I scored roughly 50%.
In some cases I didn't read the questions carefully, especially those requiring more than one answer: I often marked only one and lost the question. However, after talking with a friend, Nicole, she advised me to check the review at the end of the exam, since incomplete questions are shown with a different status. | coelhodiana |
1,880,941 | LoyaltyRoller: Zapier Automation of sending NFTs to users on successful Stripe payments using Owl Protocol | Introduction Owl Protocol, a web3 integration platform that simplifies blockchain development by... | 0 | 2024-06-07T23:51:42 | https://dev.to/kamalthedev/loyaltyroller-zapier-automation-of-sending-nfts-on-stripe-payments-using-owl-protocol-1m44 | owlprotocol, stripeautomation, zapierwithowlprotocol | # Introduction
Owl Protocol, a web3 integration platform that simplifies blockchain development by providing APIs and Zapier integration for any EVM or Rollup. This allows developers to build blockchain applications without dealing with private keys, gas fees, or cryptocurrencies, enabling them to focus on the core aspects of their applications.
In this tutorial, we will automate sending NFTs directly to users as soon as they complete a payment with Stripe. This is made possible via Zapier and Owl Protocol.
By combining Owl Protocol with Zapier and Stripe payments, we will build a seamless workflow that sends NFTs to users upon successful Stripe payments.
# Steps
1. Create a Project in Owl Protocol Dashboard: Log in, set up your team, and create a new project.
2. Set up Your Collection: Add a custom network, configure network settings, and create your NFT collection.
3. Integrate Stripe and Owl protocol with Zapier: Create a new Zap, select Stripe payments as the trigger, configure NFT minting via Owl protocol in the actions, and set up the zap ⚡.
4. Test the Integration: Ensure everything works correctly and activate your Zap.
Owl Protocol Dashboard
1) Make sure to log in at the Owl Protocol Dashboard.
2) Create Team and Project: Navigate to the team section to set up your team, then create a new project. You can collaborate with others working on the same project.
3) Add Custom Network: Within your project, click on "Add Custom Network", enter the network name, paste your RPC URL, and input the Chain ID. Also make sure your account is funded with ETH or Sepolia ETH.

You can also add from default Chain IDs such as:
Ethereum (1), OP Mainnet (10), Sepolia (11155111), OP Sepolia (11155420), OP Celestia Raspberry Testnet (123420111),etc.
4) By default, your utility address will be funded with 1.45 ETH as shown below. If you need more, make sure to send additional funds to your gas tank address.
Setting Up Your Collection:
1) Open the project you created, or navigate to an existing project.
2) Create and Configure Collection: Click on "Create Collection", fill in the Collection Name and a Symbol, and upload your image with metadata. As you can see, we created a cat collection using the dashboard.
Now that your collection is set up, proceed with integrating Zapier to automate the reward process for contributors.
Setting Up Zapier Integration
To set up the Zapier integration and start rewarding contributors, follow these steps:
1) Sign Up for Zapier: If you haven't already, sign up for a Zapier account at Zapier.
2) In your Zapier dashboard, click on "Make a Zap" to create a new automation.
3) Select Stripe as the Trigger App: Search for Stripe and select it as the trigger that starts our Zap.

4) Choose Trigger Event: Choose the trigger event, which is 'New Payment'.
5) Connect Your Stripe Account: Authenticate your Stripe account with Zapier.
6) Set Up the Trigger by adding the next step: Select Owl Protocol.

7) Click on "+" to add an action, then search for "Mint ERC721 in Owl Protocol" and choose the Mint ERC-721 field.
8) Now, under Action, select the collection address of the NFT collection you created earlier from the Owl Protocol Dashboard.

9) Finally, test it all out.
This is how our flow looks with the Zapier automation combining Stripe payments and Owl Protocol.

10) Now that everything is ready, test the Zap to ensure everything works correctly; within a short moment you will see the result as shown above, and if everything works perfectly, publish it.
# Conclusion
The seamless integration of Stripe payments with Zapier and Owl Protocol automates the minting and distribution of NFTs, allowing developers to focus on building their applications.
Owl Protocol also supports integrations with over 6,000 additional web2 apps, bringing connectivity and versatility. Get started with their API documentation and dive in to onboard the next billion users to the web3 ecosystem.
| kamalthedev |
1,880,940 | [Game of Purpose] Day 20 - Drone basic movement somewhat works | Today I created a dedicated drone blueprint. I manually configured Event graph, so that the drone... | 27,434 | 2024-06-07T23:51:42 | https://dev.to/humberd/game-of-purpose-day-20-drone-basic-movement-somewhat-works-72i | gamedev | Today I created a dedicated drone blueprint. I manually configured Event graph, so that the drone moves horizontally and the camera rotates accordingly. It's basically the same as in `BP_ThirdPersonCharacter` blueprint that is a default Manny player, but I played around with each node and I think I understand what is going on there.

The top setup section basically finds Enhanced Input Local Player Subsystem and sets our key mapping file (WSAD, mouse, etc.).
The middle camera section takes raw mouse x, y values and sets them as yaw and pitch.
The bottom movement section is a bit more complicated. I'm not sure what `Get Control Rotation` and `Get Right Vector` do. I will need to debug it more.
{% embed https://youtu.be/JjzY3M-L4pg %} | humberd |
1,880,938 | Tech Support: A Close Look at Remote and Onsite Services | Nowadays, businesses have their business operations running very smoothly at the back all because of... | 0 | 2024-06-07T23:48:59 | https://dev.to/liong/tech-support-a-close-look-at-remote-and-onsite-services-2d9h | online, techtalks, malaysia, kualalumpur | Nowadays, businesses have their business operations running very smoothly at the back all because of their reliability on the IT support but on the other side, there is a time in which you have to make the final and best decision to choose the right support model. you may be able to get the idea that there are two IT supports one is remote and the other one is onsite. Each of the supports whether it is onside or remote, they both have many benefits at one point. The good choice of choosing your support depends on the specific needs and requirements of a person.
In this blog, you are going to get a look at the [IT support onsite vs remote](https://ithubtechnologies.com/top-developer-in-malaysia/?utm_source=dev.to&utm_campaign=itsupportonsiteremote&utm_id=Offpageseo+2024). It mentions brief ideas and points about the advantages and disadvantages of remote or onsite. This will help you to make correct decisions in future.
## Remote Support: Speed Demon with a Budget-Friendly Smile
Imagine you are in the middle of a presentation when your laptop freezes, leaving you stressed and panicking. With remote support, help is only minutes away: technicians use secure software to access your device remotely, diagnose the issue, and in most cases fix it on the spot, so you can quickly get back to your presentation.
Remote work also cuts out travel expenses for the experts, which allows providers to offer pricing structures that are a win-win. You can choose between hourly rates or monthly subscriptions, whichever best matches your budget. Plus, remote teams work with multiple clients at once, which helps streamline the process.
## Security Concerns? Not with the Right Partner
Admittedly, security can be a lingering concern when it comes to remote access. However, reputable providers prioritize strong security measures. Their work involves secure encryption and two-step authentication, ensuring that your data and information remain in safe hands during any remote intervention. These security practices exist for safety and peace of mind, so do not hesitate to ask a provider about them.
## Ideal for Everyday IT Woes and Speedy Resolutions
Remote IT support shines when it comes to everyday IT problems such as password resets, troubleshooting, and network configuration. It is the best option when you need quick fixes and real-time problem solving. In addition, remote support is not limited by geography: teams can be helped regardless of their location, even when working from home.
## Onsite Support: The Hands-On Hero
Sometimes a technical issue requires onsite support because it needs a physical fix. With onsite support, an expert comes to your office in person, fully equipped to tackle and solve the problem. This is essential for hardware repairs, difficult network installations, and situations that call for more personal walkthroughs for employees.
The main advantage of having an expert physically present is that they can examine the problem where it first arises. This is very important for diagnosing issues and identifying potential hardware failures. Onsite support also helps strengthen the relationship with your IT provider, as it gives them an individual understanding of your specific needs.
## The Flip Side: Scheduling and Potential Disruption
Onsite support does come with its own trade-offs. Scheduling a technician visit can introduce waiting time, potentially causing a slight disruption to your workflow. Additionally, onsite services can be more expensive due to the labor involved.
## Finding the Right Partner for Your Needs
Furthermore, finding the right partner is key to making IT support a success for your business. The right partner is one who understands your needs and infrastructure. Do not hesitate to ask about their experience, the services they offer, their pricing models, and the security techniques and protocols they use. Response time and communication style also say a lot about professionalism.
## Conclusion
In summary, IT support is an investment you need when running a business. It is not just an expense; it is what makes it possible for companies to run smoothly and efficiently. You achieve this by weighing the strengths and limitations of both remote and onsite support and then creating a winning strategy. For many businesses, onsite IT support is the winning option to eliminate their IT worries.
| liong |
1,880,937 | Backup and Recovery of Data: The Essential Guide | Our digital world is getting modernized day by day and more incidents are happening in the tech field... | 0 | 2024-06-07T23:41:13 | https://dev.to/liong/backup-and-recovery-of-data-the-essential-guide-1c2c | data, malaysia, kualalumpur, backup | Our digital world is getting modernized day by day and more incidents are happening in the tech field related to data losses. Data is the raw information that is considered to be more important than ever in today's world. It is devastating to lose data whether it's personal photos, business documents, or critical software. If any kind of disaster strikes, many useful tactics or strategies can be used to protect and recover the data. In this blog, you are going to explore [data recovery and backup solution](https://ithubtechnologies.com/data-company-in-malaysia/?utm_source=dev.to&utm_campaign=datarecoverybackupsolution&utm_id=Offpageseo+2024). Here you can get the basic idea about the main important steps for backing up, recovering data, and ensuring that you are ready for anything.
## Why Are Data Backup and Recovery Important?
Data loss can occur for several different reasons:
1. Hardware Failure
2. Accidental Deletion
3. Cyberattacks
4. Natural disasters
Any of these can result in financial loss and reputational damage, with consequences ranging from slight to severe. By using strong backup and recovery solutions, you can reduce the possibility of losing your data.
## Types of Data Backups
1. **Full Backup:** Copies all data to the backup location. It is the most complete option, but it is time-consuming and requires the most storage space.
2. **Incremental Backup:** Copies only the files that have changed since the last backup of any kind. It uses the least storage space and time to create, but restoring requires the full backup plus every incremental backup made since.
3. **Differential Backup:** Copies everything that has changed since the last full backup. It strikes a balance between full and incremental backups: it needs more storage than an incremental backup, but restoring is simpler because only the full backup and the latest differential are needed.
## Backup Methods
1. **Local Backup:** Data is stored on local devices such as USB drives, hard drives, or local servers. This method is fast and easy to use, but it is vulnerable to physical damage and theft.
2. **Cloud Backup:** Data is stored in the cloud via online services. It offers strong protection against local disasters and allows remote access, but it depends on internet connectivity.
3. **Hybrid Backup:** Combining local and cloud backups gives you the advantages of both: data saved in different places is safer and more reliable. Even if one copy is lost, the data remains safe in the other location.
## Best Practices for Data Backup
1. **Regular Backups:** Schedule regular backups to keep your data protected. Depending on how critical the data is, back it up daily, weekly, or monthly.
2. **Automate Backups:** Use backup software to automate the process, which reduces the risk of human error and ensures consistency.
3. **Verify Backups:** Check regularly that your backups are complete and usable. This includes test restores to confirm that lost or deleted data can actually be recovered.
4. **Encrypt Data:** Encrypt your backups, especially in cloud storage, to keep your data safe from unauthorized access.
## Data Recovery Strategies
- **Identify the Problem:** It is very important to identify why the data was lost before attempting recovery. Knowing whether the cause was human error, hardware failure, or software corruption will help you find the best way to recover your data.
- **Use Recovery Software:** To recover lost files, use software tools designed for recovery purposes. These specialized programs can restore files that were accidentally deleted or damaged.
- **Restore from Backup:** If recovery software does not work, restore the lost data from your backup. This is one of the fastest and easiest methods.
- **Consult Professionals:** If a meaningful amount of data is lost or the hardware is damaged, get assistance from a professional data recovery service. These experts have modern techniques, knowledge, and tools that can recover data that simple software cannot.
## Creating a Backup and Recovery Plan
There are some very important steps that you need to follow to plan out the backup of the data that you mistakenly lost from your PC, computer, or smartphone. the following steps include:
1. **Identify Your Data:**
First, determine which important data needs regular backups. Consider the type and importance of your data to find the best backup technique.
2. **Choose Backup Solutions:**
Second, pick the right tools and services. The choice depends on the assessment above and may include local or cloud backups, depending on the requirements.
3. **Testing:**
Third, set up the backup system and check it regularly. Make sure backups run on schedule and complete accurately, as testing also helps improve the backup procedures.
4. **Documentation:**
Finally, keep a detailed written record of all the backup and recovery procedures. Documentation is invaluable when a loss incident actually happens.
## Conclusion
In conclusion, effective data recovery and backup solutions are needed to survive common data loss incidents, and these methods help protect our digital assets. With several types of backups and a solid recovery plan in place, you can manage your data with confidence, whether or not it contains sensitive information.
| liong |
1,882,429 | Circuit Breakers in Go: Stop Cascading Failures | Circuit Breakers A circuit breaker detects failures and encapsulate the logic of... | 0 | 2024-06-26T23:01:44 | https://medium.com/@oluwafemiakinde/circuit-breakers-in-go-stop-cascading-failures-c81c14f7154e | microservices, resilience, go, circuitbreaker | ---
title: Circuit Breakers in Go: Stop Cascading Failures
published: true
date: 2024-06-07 23:32:27 UTC
tags: microservices,resilience,golang,circuitbreaker
canonical_url: https://medium.com/@oluwafemiakinde/circuit-breakers-in-go-stop-cascading-failures-c81c14f7154e
---

#### **Circuit Breakers**
A circuit breaker detects failures and encapsulates the logic of handling those failures in a way that prevents the failure from constantly recurring. For example, they're useful when dealing with network calls to external services, databases, or really, any part of your system that might fail temporarily. By using a circuit breaker, you can prevent cascading failures, manage temporary errors, and maintain a stable and responsive system amidst a partial system breakdown.
#### Cascading Failures
Cascading failures occur when a failure in one part of the system triggers failures in other parts, leading to widespread disruption. An example is when a microservice in a distributed system becomes unresponsive, causing dependent services to time out and eventually fail. Depending on the scale of the application, the impact of these failures can be catastrophic, degrading performance and possibly even impacting the user experience.
### Circuit Breaker Patterns
A circuit breaker itself is a technique/pattern and there are three different states it operates which we will talk about:
1. **Closed State:** In a closed state, the circuit breaker allows all requests to pass through to the target service normally as they would. If the requests are successful, the circuit remains closed. However, if a certain threshold of failures is reached, the circuit transitions to the open state. Think of it like a fully operational service where users can log in and access data without issues. Everything is running smoothly.

**2. Open State** : In an open state, the circuit breaker immediately fails all incoming requests without attempting to contact the target service. The state is entered to prevent further overload of the failing service and give it time to recover. After a predefined timeout, the circuit breaker moves to the half-open state. A relatable example is this; Imagine an online store experiences a sudden issue where every purchase attempt fails. To avoid overwhelming the system, the store temporarily stops accepting any new purchase requests.

**3. Half-Open State** : In the half-open state, the circuit breaker allows a (configurable) limited number of test requests to pass through to the target service. If these requests are successful, the circuit transitions back to the closed state. If they fail, the circuit returns to the open state. In the example of the online store I gave for the open state above, this is where the store starts to allow a few purchase attempts to see if the issue has been fixed. If these few attempts succeed, the store fully reopens its service to accept new purchase requests.
This diagram shows when the circuit breaker tries to see if requests to **Service B** are successful and then it fails/breaks:

The follow up diagram then shows when the test requests to **Service B** succeeds, the circuit is closed, and all further calls are routed to **Service B** again:

**Note** : Key configurations for a circuit breaker include the failure threshold (number of failures needed to open the circuit), the timeout for the open state, and the number of test requests in the half-open state.
### Implementing Circuit Breakers in Go
It’s important to mention that prior knowledge of Go is required to follow along in this article.
As with any software engineering pattern, circuit breakers can be implemented in various languages. However, this article will focus on implementation in Golang. While there are several libraries available for this purpose, such as goresilience, go-resiliency, and gobreaker, we will specifically concentrate on using the gobreaker library.
**Pro Tip** : You can see the internal implementation of the gobreaker package, check [here](https://github.com/sony/gobreaker/blob/master/v2/gobreaker.go).
Let’s consider a simple Golang application where a circuit breaker is implemented to handle calls to an external API. This basic example demonstrates how to wrap an external API call with the circuit breaker technique:
Let’s touch on a few important things:
1. **`gobreaker.NewCircuitBreaker`** function initializes the circuit breaker with our custom settings
2. **`cb.Execute`** method wraps the HTTP request, automatically managing the circuit state.
3. **MaxRequests** is the maximum number of requests allowed to pass through when the state is half-open
4. **Interval** is the cyclic period of the closed state for the circuit breaker to clear the internal counts
5. **Timeout** is the duration before transitioning from open to half-open state.
6. **ReadyToTrip** is called with a copy of Counts whenever a request fails in the closed state. If ReadyToTrip returns true, the circuit breaker will be placed into the open state. In our case here, it returns true once requests have failed more than three consecutive times.
7. **OnStateChange** is called whenever the state of the circuit breaker changes. You would usually want to collect the metrics of the state change here and report to any metrics collector of your choice.
Let’s write some unit tests to verify our circuit breaker implementation. I will only be explaining the most critical unit tests to understand. You can check [here](https://github.com/SirPhemmiey/circuit-breaker-with-go/blob/main/main_test.go) for the full code.
1. We will write a test that simulates consecutive failed requests and checks if the circuit breaker trips to the open state. Essentially, after 3 failures, when the fourth failure occurs, we expect the circuit breaker to trip (open), since our condition says `counts.ConsecutiveFailures > 3`. Here's what the test looks like:
```
t.Run("FailedRequests", func(t *testing.T) {
    // Override callExternalAPI to simulate failure
    callExternalAPI = func() (int, error) {
        return 0, errors.New("simulated failure")
    }

    for i := 0; i < 4; i++ {
        _, err := cb.Execute(func() (interface{}, error) {
            return callExternalAPI()
        })
        if err == nil {
            t.Fatalf("expected error, got none")
        }
    }

    if cb.State() != gobreaker.StateOpen {
        t.Fatalf("expected circuit breaker to be open, got %v", cb.State())
    }
})
```
2. We will test the **open** > **half-open** > **closed** transitions. We first simulate an open circuit and wait for the defined timeout. After the timeout, we need at least one successful request for the circuit to transition to half-open. From the half-open state, we need further successful requests for the circuit to be fully closed again. If a request fails during this phase, the circuit will go back to being open. Here's what the test looks like:
```
// Simulates the circuit breaker being open,
// waits for the defined timeout,
// then checks if it closes again after successful requests.
t.Run("RetryAfterTimeout", func(t *testing.T) {
    // Simulate circuit breaker opening
    callExternalAPI = func() (int, error) {
        return 0, errors.New("simulated failure")
    }
    for i := 0; i < 4; i++ {
        _, err := cb.Execute(func() (interface{}, error) {
            return callExternalAPI()
        })
        if err == nil {
            t.Fatalf("expected error, got none")
        }
    }
    if cb.State() != gobreaker.StateOpen {
        t.Fatalf("expected circuit breaker to be open, got %v", cb.State())
    }

    // Wait for timeout duration
    time.Sleep(settings.Timeout + 1*time.Second)

    // We expect that after the timeout period,
    // the circuit breaker should transition to the half-open state.
    // Restore original callExternalAPI to simulate success
    callExternalAPI = func() (int, error) {
        resp, err := http.Get(server.URL)
        if err != nil {
            return 0, err
        }
        defer resp.Body.Close()
        return resp.StatusCode, nil
    }

    _, err := cb.Execute(func() (interface{}, error) {
        return callExternalAPI()
    })
    if err != nil {
        t.Fatalf("expected no error, got %v", err)
    }
    if cb.State() != gobreaker.StateHalfOpen {
        t.Fatalf("expected circuit breaker to be half-open, got %v", cb.State())
    }

    // After verifying the half-open state, further successful requests are
    // simulated to ensure the circuit breaker transitions back to the closed state.
    for i := 0; i < int(settings.MaxRequests); i++ {
        _, err = cb.Execute(func() (interface{}, error) {
            return callExternalAPI()
        })
        if err != nil {
            t.Fatalf("expected no error, got %v", err)
        }
    }
    if cb.State() != gobreaker.StateClosed {
        t.Fatalf("expected circuit breaker to be closed, got %v", cb.State())
    }
})
```
3. Let’s test the ReadyToTrip condition which triggers after 2 consecutive failure requests. We'll have a variable that tracks for consecutive failures. The ReadyToTrip callback is updated to check if the circuit breaker trips after 2 failures ( counts.ConsecutiveFailures > 2). We will write a test that simulates failures and verifies the count and that the circuit breaker transitions to the open state after the specified number of failures.
```
t.Run("ReadyToTrip", func(t *testing.T) {
    failures := 0
    settings.ReadyToTrip = func(counts gobreaker.Counts) bool {
        failures = int(counts.ConsecutiveFailures)
        return counts.ConsecutiveFailures > 2 // Trip on the third consecutive failure
    }
    cb = gobreaker.NewCircuitBreaker(settings)

    // Simulate failures
    callExternalAPI = func() (int, error) {
        return 0, errors.New("simulated failure")
    }
    for i := 0; i < 3; i++ {
        _, err := cb.Execute(func() (interface{}, error) {
            return callExternalAPI()
        })
        if err == nil {
            t.Fatalf("expected error, got none")
        }
    }

    if failures != 3 {
        t.Fatalf("expected 3 consecutive failures, got %d", failures)
    }
    if cb.State() != gobreaker.StateOpen {
        t.Fatalf("expected circuit breaker to be open, got %v", cb.State())
    }
})
```
### Advanced Strategies
We can take it a step further by adding an exponential backoff strategy to our circuit breaker implementation. To keep this article simple and concise, we will demonstrate only the exponential backoff strategy. However, other advanced strategies for circuit breakers are worth mentioning, such as load shedding, bulkheading, fallback mechanisms, and context cancellation; all of these enhance the robustness and functionality of circuit breakers. Here’s an example of using the exponential backoff strategy:
**Exponential Backoff**
[Circuit breaker with exponential backoff](https://gist.githubusercontent.com/SirPhemmiey/a19af4b469d5a67787ba14f8eeccb1d4)
Let’s make a couple of things clear:
**Custom Backoff Function:** The exponentialBackoff function implements an exponential backoff strategy with jitter. It calculates the backoff time based on the number of attempts, ensuring that the delay increases exponentially with each retry.
**Handling Retries:** As you can see in the /api handler, the logic now includes a loop that attempts to call the external API up to a specified number of times (`attempts := 5`). After each failed attempt, we wait for a duration determined by the exponentialBackoff function before retrying.
**Circuit Breaker Execution:** The circuit breaker is used within the loop. If the external API call succeeds (`err == nil`), the loop breaks and the successful result is returned. If all attempts fail, an HTTP 503 (Service Unavailable) error is returned.
Integrating a custom backoff strategy into a circuit breaker implementation helps handle transient errors more gracefully. The increasing delays between retries reduce the load on failing services, allowing them time to recover. As shown in the code above, the exponentialBackoff function introduces delays between retries when calling the external API.
Additionally, we can integrate metrics and logging to monitor circuit breaker state changes using tools like Prometheus for real-time monitoring and alerting. Here’s a simple example:
[Implementing a circuit breaker pattern with advanced strategies in go](https://gist.githubusercontent.com/SirPhemmiey/e9af8e9d0e0adf13e2058beb1fc3ee42/)
As you’ll see, we have now done the following:
1. In L16–21, we define a Prometheus counter vector to keep track of the number of requests and their state (success, failure, circuit breaker state changes).
2. In L25–26, the defined metrics are registered with Prometheus in the init function.
**Pro Tip**: The init function in Go is used to initialize the state of a package before the main function or any other code in the package is executed. In this case, the init function registers the requestCount metric with Prometheus, which ensures that Prometheus is aware of this metric and can start collecting data as soon as the application starts running.
3. We create the circuit breaker with custom settings, including the ReadyToTrip function that increases the failure counter and determines when to trip the circuit.
4. We use the OnStateChange hook to log state changes and increment the corresponding Prometheus metric.
5. We expose the Prometheus metrics at the /metrics endpoint.
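The Prometheus wiring is in the gist above. As a lightweight, standard-library-only illustration of the same idea — counting state transitions under a label — Go's `expvar` package can stand in for the Prometheus counter vector (note this is a simplified stand-in, not the gist's code; `expvar` auto-exposes its variables at /debug/vars when an HTTP server runs):

```
package main

import (
	"expvar"
	"fmt"
)

// stateChanges counts circuit breaker state transitions by target state,
// playing the role of the Prometheus counter vector from the gist.
var stateChanges = expvar.NewMap("circuit_breaker_state_changes")

// onStateChange is the kind of hook you would pass to
// gobreaker.Settings.OnStateChange.
func onStateChange(name, from, to string) {
	stateChanges.Add(to, 1)
	fmt.Printf("circuit breaker %q changed state: %s -> %s\n", name, from, to)
}

func main() {
	onStateChange("external-api", "closed", "open")
	onStateChange("external-api", "open", "half-open")
	onStateChange("external-api", "half-open", "closed")
	fmt.Println("counters:", stateChanges.String())
}
```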
### Wrapping Up
To wrap up this article, I hope you have seen how circuit breakers play a huge role in building resilient and reliable systems. By proactively preventing cascading failures, they fortify the reliability of microservices and distributed systems, ensuring a seamless user experience even in the face of adversity.
Keep in mind, any system designed for scalability must incorporate strategies to gracefully handle failures and swiftly recover — **Oluwafemi** , **2024**
_Originally published at_ [_https://oluwafemiakinde.dev_](https://oluwafemiakinde.dev/circuit-breakers-in-go-preventing-cascading-failures) _on June 7, 2024._ | oluwafemiakind1 |
1,880,555 | AWS Services for Microservice Architectures: A Beginner's Overview (Part 1 - Computing) | In today's rapidly evolving tech landscape, microservices have become a cornerstone for building... | 27,631 | 2024-06-07T22:29:05 | https://dev.to/edriste/aws-services-for-microservice-architectures-a-beginners-overview-part-1-computing-4c36 | aws, microservices, beginners, cloudcomputing | <p>
In today's rapidly evolving tech landscape, microservices have become a
cornerstone for building scalable and maintainable software solutions.
Leveraging the right tools and services is crucial for effectively
implementing a microservice architecture, and AWS offers a comprehensive suite
of services tailored to meet these needs. In this part, we will be delving
into the AWS services you can use for actually running your application with a
microservice architecture.
</p><br />
<h2>What are Microservices?</h2>
<p>
Microservices are a fundamental component of many modern software solutions.
They facilitate an approach in which software is composed of small,
independent services that communicate over lightweight APIs. Unlike monolithic
architectures, where all processes are tightly coupled and run together, each
microservice performs a single function and is ideally owned by a small,
self-contained team.
</p>
<p>Adopting a microservice architecture offers several benefits, such as:</p>
<ul>
<li>
Easier scaling of applications: Individual services can be scaled
independently based on demand.
</li>
<li>
Improved technology choice, code quality, and readability due to separation
of concerns: Teams can choose the best technologies for each service.
</li>
<li>
Isolated deployments and rollbacks: Changes to one service do not affect the
entire application, reducing risk and improving deployment flexibility.
</li>
</ul><br />
<h2>Running microservices with AWS</h2>
AWS provides four major managed services that can be used to supply the
computing power needed to run microservices:
<h4>Amazon Elastic Container Service (ECS)</h4>
<img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgb22fmgchjp1cajbgi2k.png" alt="Image description" loading="lazy"/>
<p>
ECS is a container management service that runs code within Docker containers.
You can run your applications on a managed cluster of Amazon Elastic Compute
Cloud (EC2) instances or opt for AWS Fargate if you prefer a serverless
approach.
</p>
<blockquote>
<p>
<b>Amazon EC2</b> provides computing capacity in the form of virtual servers
that can run one or more containers. EC2 requires you to configure server
instances according to your application's needs. There are various instance
types to choose from, each with its own technical specifications and price
points. For instance, a t3.nano instance with 0.5 GiB of memory currently
costs $0.0052 per hour, while a t3.large instance with 8 GiB of memory is
currently priced at $0.0835 per hour.
</p>
<p>
<b>AWS Fargate</b> is a serverless option, meaning you don't need to manage
any virtual servers to run your containers. Many scaling and configuration
related processes are handled automatically, but you still need to configure
certain aspects of your application deployment. For example, you may need to
set up auto-scaling policies based on specific metrics to ensure your
application can handle varying levels of traffic. Similarly, you might need
to configure load balancing to distribute incoming traffic across multiple
container instances for high availability and performance.
</p>
<p>
Generally speaking, Fargate is less complicated to set up and maintain
compared to EC2, but it is also slightly more expensive per hour for the
same computing power. EC2 instances are ideal for scenarios where the team
requires granular control over container instances, networking, and storage.
Given a stable workload, it is possible to optimize costs using EC2
instances. Conversely, EC2 instances are often only partly utilized,
rendering some of the resources you pay for superfluous. Fargate, on the
other hand, only uses the application's actual computing needs, ensuring you
only pay for what you use.
</p>
</blockquote>
<p>
ECS is an excellent choice for a managed container service and is used in many
projects worldwide. It is suitable for long-running processes (such as web
pages that need to be available around the clock) and is ideal for teams with
Docker expertise.
</p>
<h4>Amazon Elastic Kubernetes Service (EKS)</h4>
<img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftv08sml2vy0k33pbe8qv.png" alt="Image description" loading="lazy"/>
<p>
EKS is a Kubernetes service for building, securing, operating, and maintaining
Kubernetes clusters. It integrates seamlessly with core AWS services,
providing monitoring, scaling, and load balancing capabilities for
containerized applications. Like ECS, the computing power for EKS can be
provided through EC2 instances or AWS Fargate.
</p>
<p>
EKS is best for teams already invested in Kubernetes and wanting to leverage
Kubernetes tooling and community support. It is also ideal for applications
that need advanced orchestration features.
</p>
<h4>AWS Lambda</h4>
<img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxi52cqu5r71sb3spkaj4.png" alt="Image description" loading="lazy"/>
<p>
Lambda is a fully managed, serverless solution similar to AWS Fargate. You
just upload your code, and Lambda manages everything required to run and scale
it.
</p>
<p>
An important aspect of Lambda is its event-driven nature. This means your code
is executed in response to specific events, such as API calls, file uploads,
or scheduled time intervals.
</p>
<p>
Lambda is a great choice if you want to go for an event-driven architecture
and microservices that require minimal infrastructure management. It is also
well-suited for applications that need to scale quickly due to large spikes of
traffic at unexpected times. Lambda provides 1 million free requests and
400,000 GB-seconds of free compute time per month, making it a cost-effective
option for smaller projects or lightweight jobs like resizing images for
thumbnails.
</p>
<h4>AWS App Runner</h4>
<img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fo559jp1lmrl86g6fva.png" alt="Image description" loading="lazy"/>
<p>
App Runner is a fully managed service that simplifies the deployment of
containerized web applications and APIs, requiring no prior experience. Its
primary aim is to eliminate the complexities of infrastructure management,
enabling developers to focus solely on building and deploying their
applications. Powered by AWS Fargate, App Runner requires even less
configuration compared to direct usage of Fargate. It offers hands-off
scaling, includes a built-in load balancer and provides HTTPS Support, amongst
other features.
</p>
<p>
Due to the additional features, using App Runner tends to be more expensive than the
previously mentioned services. Its ability to automate scaling entirely,
however, makes it a perfect fit for applications that do not receive traffic
at all times. An example of this is a development environment that is only
used during business hours. This way, you benefit from the streamlined
deployment process while also gaining automatic savings during the times
the application is inactive.
</p><br />
<h2>Conclusion</h2>
<p>
As we've explored the AWS services tailored for running applications with a
microservice architecture, it's evident that each service brings its unique
advantages to the table. Whether it's the granular control of EC2 instances,
the simplicity of Fargate, the orchestration capabilities of EKS, the
event-driven nature of Lambda, or the streamlined deployment process of App
Runner, AWS caters to diverse needs and preferences. I hope this post has
proven useful in helping you get an overview of these technologies. If you are
interested in learning more, I have included some links below.
</p>
<p>
Stay tuned for part 2 where I'll be addressing the topics of database and
storage in AWS!
</p><br />
<h2>Further Reading</h2>
<ul>
<li>
<a
href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html"
>Amazon ECS Documentation</a
>
</li>
<li>
<a href="https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html"
>Amazon EKS Documentation</a
>
</li>
<li>
<a href="https://docs.aws.amazon.com/lambda/latest/dg/welcome.html"
>AWS Lambda Documentation</a
>
</li>
<li>
<a
href="https://docs.aws.amazon.com/apprunner/latest/dg/what-is-apprunner.html"
>AWS App Runner Documentation</a
>
</li>
</ul>
| edriste |
1,880,872 | Resolving the "Length of LOB Data (78862) to be Replicated Exceeds Configured Maximum 65536" Error | Understanding the Error The error indicates that the LOB data size (78862 bytes) exceeds... | 27,304 | 2024-06-07T22:19:22 | https://shekhartarare.com/Archive/2024/6/resolving-length-of-lob-data-to-be-replicated-exceeds-configured-maximum-65536-error | sqlserver, tutorial, database, sql | ## Understanding the Error
The error indicates that the LOB data size (78862 bytes) exceeds the configured maximum limit (65536 bytes) set for replication in SQL Server. This typically happens during the replication process, leading to the failure of data transfer.
## Common Causes
1. **Default Configuration Limits:** SQL Server has default settings for the maximum size of LOB data that can be replicated.
2. **Large Data Inserts:** Inserting large multimedia files or extensive text data can exceed the default LOB size limit.
3. **Inadequate Configuration Settings:** The database settings might not be optimized for handling large LOB data, resulting in replication issues.
## Solutions to Resolve the Error
**Adjusting the 'max text repl size' Configuration Option**
- SQL Server provides a simple yet effective way to handle large LOB data during replication by adjusting the max text repl size configuration option. Here's how you can do it:
```
EXEC sp_configure 'max text repl size', <desired_value>;
RECONFIGURE;
```
Replace `<desired_value>` with the desired maximum size. You can also set it to -1 for the maximum supported size (2 GB).
- After making the configuration changes, restart the SQL Server service to apply the modifications.
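To confirm the change took effect, you can read the setting back with sp_configure (a simple configuration check; the run_value column shows the value currently in effect):
```
EXEC sp_configure 'max text repl size';
```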
**Adjusting max text repl size Through SQL Server Management Studio**
Alternatively, you can change the same max text repl size setting through the SQL Server Management Studio (SSMS) user interface instead of a script. Here's how:
- **Open SQL Server Management Studio (SSMS):**
Launch SSMS and connect to your SQL Server instance.
- **Right-click the server and select Properties:**
In the Object Explorer, right-click on the server name and
select Properties from the context menu.
- **Go to the Advanced page:**
In the Server Properties window, select the Advanced tab.
- **Change the max text replication size:**
In the Miscellaneous section, find the Max Text Replication
Size option and change it to the desired value. You can set
it to -1 for the maximum supported size (2 GB).
- **Apply and restart:**
Click OK to apply the changes and then restart the SQL Server
service for the changes to take effect.
## Why Adjusting max text repl size Works
By raising the max text repl size configuration option, you're ensuring that SQL Server can handle larger LOB data sizes during replication. This prevents the error from occurring and enables seamless replication processes for your database.
## Conclusion
Don't let the "Length of LOB data to be replicated exceeds configured maximum" error halt your database replication efforts. With simple adjustments to the max text repl size configuration option in SQL Server, both through SQL scripts and SQL Server Management Studio, you can overcome this hurdle and ensure seamless replication processes. | shekhartarare |
1,879,750 | A Step-by-Step Guide to Writing Your First Move Smart Contract on Aptos | A Step-by-Step Guide to Writing Your First Move Smart Contract on Aptos Aptos is one of the... | 0 | 2024-06-07T22:16:24 | https://dev.to/amity808/a-step-by-step-guide-to-writing-your-first-move-smart-contract-on-aptos-ae8 | _A Step-by-Step Guide to Writing Your First Move Smart Contract on Aptos_
Aptos is an independent layer-1 blockchain focused on scalability, security, and reliability. It supports smart contracts, which are written in the Move programming language, and the network uses Proof of Stake as its consensus mechanism. The Aptos blockchain maintains high-level security features while keeping transaction costs low. In this guide, we will build a simple Move smart contract: a waste management system for people in a community.
To get started with Move smart contracts, you can use either Remix or a local code editor such as VS Code. For this tutorial, we will use Remix.
Open your [Remix IDE](https://remix.ethereum.org/)

In the left corner of your screen you will see a plugin icon; click it.

A sidebar will pop up; find `CODE BY WELLDONE STUDIO` and activate it in the Code Studio.

Select Aptos
You need to install a wallet to interact with your smart contract. Visit this link [wallet connect Aptos learning move](http://abit.ly/install-welldone-wallet) to download the wallet as a Chrome extension.

Set up your wallet account so you can start interacting with the network.

Click Aptos to create a wallet. After creating it, you can visit [claim faucet Aptos](https://www.aptosfaucet.com/) to claim testnet or devnet tokens; copy your address.
After you finish setting up the account, store your seed phrase somewhere safe.
Once your tokens are successfully claimed, you are ready to continue.
-------------------------------------------------------------------------
Let's move on to our smart contract, using a waste manager as the example.
In the Remix IDE's left sidebar, create a new project and input its name.

Locate your `Move.toml` file and paste in this code:
```rust
[package]
name = "Examples"
version = "0.0.0"
[addresses]
wastes_Insured_addr = "paste your account address"
[dependencies]
AptosFramework = { git = "https://github.com/aptos-labs/aptos-core.git", subdir = "aptos-move/framework/aptos-framework/", rev = "aptos-node-v1.13.1"}
```
To get the address generated for this account, navigate to `.aptos/config.yaml`, where you will find the generated account along with its public and private keys:
```rust
profiles:
default:
private_key: "0xee8f387ef0b4bb0018c4b91d1c0f71776a9b85935b4c6ec2823d6c0022fbf5cb"
public_key: "0xc6c07218d79a806380ca67761905063ec7a78d41f79619f4562462a0f8b6be11"
account: cbddf398841353776903dbab2fdaefc54f181d07e114ae818b1a67af28d1b018
rest_url: "https://api.devnet.aptoslabs.com"
faucet_url: "https://faucet.devnet.aptoslabs.com"
```
To get started with the smart contract, you need to define a module, which is placed under an account address:
```rust
module <account-address>::<module-name> {
}
```
We will define a module; it will be the container for our entire smart contract:
```rust
module wastes_Insured_addr::waste_insured {
}
```
Next, we import the libraries we will use in this smart contract:
`event` is for emitting an event whenever a function is triggered.
`String` is the type for our text fields.
`table` provides the Table type we use to store the contract's input data.
`account` gives us account helpers, such as creating event handles, and `signer` identifies who is calling the smart contract at a particular time.
```rust
use aptos_framework::event;
use std::string::String;
use aptos_std::table::{Self, Table};
use aptos_framework::account;
use std::signer;
```
We will define a struct to hold our typed fields; it has the store, drop, and copy abilities.
```rust
struct Waste has store, drop, copy {
    wast_id: u64,
    wasteType: String,
    collectionLocation: String,
    weigth: u64,
    isRecorded: bool,
    isValidated: bool
}
```
We will define our WasteList resource, which holds the waste table, an event handle used to emit an event whenever waste is recorded, and a counter that serves as the length of the waste store.
```rust
struct WasteList has key {
    waste: Table<u64, Waste>,
    set_waste_event: event::EventHandle<Waste>,
    waste_count: u64
}
```
We initialize error constants; in the Move language, errors are represented as numbers.
```rust
const E_NOT_INITIALIZED: u64 = 1;
const EWASTE_DOESNT_EXIST: u64 = 2;
const EWASTE_IS_VALIDATED: u64 = 3;
```
Next, we create the create_list function, which an account must call first: it creates the WasteList resource required before submitting waste transactions and associates it with the signer.
```rust
public entry fun create_list(account: &signer) {
let waste_holder = WasteList {
waste: table::new(),
set_waste_event: account::new_event_handle<Waste>(account),
waste_count: 0
};
move_to(account, waste_holder);
}
```
Next, we create a function to register waste, which submits the new waste transaction to the blockchain. We need the signer so we know which user is submitting the transaction. It accepts the fields defined in our struct:
```rust
public entry fun register_waste(account: &signer, wasteType: String, collectionLocation: String,
    weight: u64) acquires WasteList {
    // get the signer address
    let signer_address = signer::address_of(account);
    // check that the signer has initialized a waste list
    assert!(exists<WasteList>(signer_address), E_NOT_INITIALIZED);
    // get the WasteList resource
    let waste_list = borrow_global_mut<WasteList>(signer_address);
    // increment the count of recorded waste
    let counter = waste_list.waste_count + 1;
    // record a new waste
    let new_record_waste = Waste {
        wast_id: counter,
        wasteType,
        collectionLocation,
        weigth: weight,
        isRecorded: true,
        isValidated: false
    };
    // insert the new waste record into the waste table
    table::upsert(&mut waste_list.waste, counter, new_record_waste);
    // update the waste count
    waste_list.waste_count = counter;
    // emit the waste event (Waste has copy, so the value is still usable here)
    event::emit_event(&mut waste_list.set_waste_event, new_record_waste);
}
```
We will have to validate waste when a collector brings it to the company; each collected item has to be verified.
```rust
public entry fun validate_waste(account: &signer, waste_id: u64) acquires WasteList {
// initialized signer_address to get the signer address
let signer_address = signer::address_of(account);
assert!(exists<WasteList>(signer_address), E_NOT_INITIALIZED);
// we get the waste resources
let waste_list = borrow_global_mut<WasteList>(signer_address);
// we check if waste exist using assert
assert!(table::contains(&waste_list.waste, waste_id), EWASTE_DOESNT_EXIST);
// get the waste that match the waste id
let waste_track = table::borrow_mut(&mut waste_list.waste, waste_id);
// check if the waste is not validated yet
assert!(waste_track.isValidated == false, EWASTE_IS_VALIDATED);
// validate the waste
waste_track.isValidated = true;
}
```
That covers the basics of getting started with the Move programming language.
Next, click the compile button in the sidebar.
You should see output similar to this:
```rust
INCLUDING DEPENDENCY AptosFramework
INCLUDING DEPENDENCY AptosStdlib
INCLUDING DEPENDENCY MoveStdlib
BUILDING Examples
{
"Result": [
"92c945f0ec6423e8ec1414a597f1d6fbc954c309f5846cbc73a43b62bfc37eba::waste_insure"
]
}
```
To deploy, hit the Deploy button.
Coming up next: how can we pay the waste collector using a
Move smart contract?
| amity808 | |
1,880,871 | Understanding Primary Keys and Foreign Keys in SQL: A Simple and Detailed Guide | Introduction Imagine a database as a digital filing system where you store different kinds... | 0 | 2024-06-07T22:12:13 | https://dev.to/kellyblaire/understanding-primary-keys-and-foreign-keys-in-sql-a-simple-and-detailed-guide-28jm | sql, database, analysis, saas | #### Introduction
Imagine a database as a digital filing system where you store different kinds of information. Just like how a library uses a catalog to keep track of books, databases use special markers called **Primary Keys** and **Foreign Keys** to organize and connect data efficiently. Let's dive into what these keys are and how they work, using simple, real-life examples.
#### What is a Primary Key?
Think of a Primary Key as a unique identifier, like a social security number or a student ID. It's a special attribute in a table that uniquely identifies each record. No two records can have the same Primary Key value, ensuring each entry is distinct.
**Example: Student Identification**
Consider a table that stores student information:
| StudentID | Name | Age |
|-----------|----------|-----|
| 1 | Alice | 6 |
| 2 | Bob | 7 |
| 3 | Charlie | 6 |
In this table:
- **StudentID** is the Primary Key.
- Each student has a unique StudentID.
- Even if two students have the same name or age, their StudentID will always be different.
This uniqueness helps us quickly find, update, or delete a specific student’s record without confusion.
#### What is a Foreign Key?
A Foreign Key is like a reference that links one table to another. It’s a field (or combination of fields) in one table that uniquely identifies a row of another table. This creates a relationship between the two tables.
**Example: Student Enrollment**
Let's extend our student example to include information about class enrollments. We'll have two tables: Students and Enrollments.
**Students Table:**
| StudentID | Name | Age |
|-----------|----------|-----|
| 1 | Alice | 6 |
| 2 | Bob | 7 |
| 3 | Charlie | 6 |
**Enrollments Table:**
| EnrollmentID | StudentID | Class |
|--------------|-----------|-------------|
| 101 | 1 | Math |
| 102 | 2 | Science |
| 103 | 1 | Art |
| 104 | 3 | Math |
In the Enrollments table:
- **EnrollmentID** is the Primary Key.
- **StudentID** is a Foreign Key that references the StudentID in the Students table.
The Foreign Key (StudentID in Enrollments) links each enrollment record to a specific student in the Students table. This tells us which student is taking which class.
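In SQL, these keys are declared when the tables are created. Here is a minimal sketch of how the two tables above might be defined (column types are illustrative, and exact syntax can vary slightly between database engines):

```sql
CREATE TABLE Students (
    StudentID INT PRIMARY KEY,   -- unique identifier for each student
    Name      VARCHAR(50),
    Age       INT
);

CREATE TABLE Enrollments (
    EnrollmentID INT PRIMARY KEY,   -- unique identifier for each enrollment
    StudentID    INT,
    Class        VARCHAR(50),
    FOREIGN KEY (StudentID) REFERENCES Students(StudentID)  -- links back to Students
);
```

With the FOREIGN KEY constraint in place, the database will reject an enrollment row whose StudentID does not exist in the Students table.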
#### How Do Primary Keys and Foreign Keys Work Together?
Primary Keys and Foreign Keys work together to maintain relationships and ensure data integrity across tables. Let’s look at another example to see this in action.
**Example: Library System**
Imagine a library system with three tables: Books, Members, and Loans.
**Books Table:**
| BookID | Title | Author |
|--------|-------------------------|--------------|
| 1 | Harry Potter | J.K. Rowling |
| 2 | The Hobbit | J.R.R. Tolkien |
| 3 | Charlie and the Chocolate Factory | Roald Dahl |
**Members Table:**
| MemberID | Name | JoinDate |
|----------|-------------|------------|
| 1 | Emily | 2023-01-15 |
| 2 | James | 2023-02-20 |
| 3 | Sophie | 2023-03-25 |
**Loans Table:**
| LoanID | BookID | MemberID | LoanDate |
|--------|--------|----------|------------|
| 1 | 1 | 2 | 2023-04-01 |
| 2 | 3 | 1 | 2023-04-02 |
| 3 | 2 | 3 | 2023-04-03 |
| 4 | 1 | 3 | 2023-04-04 |
In this system:
- **BookID** is the Primary Key in the Books table.
- **MemberID** is the Primary Key in the Members table.
- **LoanID** is the Primary Key in the Loans table.
- **BookID** and **MemberID** in the Loans table are Foreign Keys that link to the Books and Members tables respectively.
When a book is loaned out, the Loans table uses the BookID and MemberID to reference which book was borrowed by which member. This creates a relationship between the tables and ensures that we can track book loans accurately.
#### Benefits of Using Primary Keys and Foreign Keys
1. **Uniqueness:** Primary Keys ensure each record in a table is unique, making it easy to identify and manage individual records.
2. **Relationships:** Foreign Keys create relationships between tables, allowing us to organize data in a connected and meaningful way.
3. **Data Integrity:** These keys enforce rules that maintain the correctness and consistency of data. For example, a Foreign Key ensures that a loaned book must exist in the Books table, and a member must exist in the Members table.
#### Summary
- **Primary Key:** A unique identifier for each record in a table, ensuring no two records are the same (like a student ID or social security number).
- **Foreign Key:** A field in one table that links to a Primary Key in another table, creating a relationship between the two tables (like referencing a student ID in a class enrollment).
By understanding and using Primary Keys and Foreign Keys, we can effectively organize, manage, and relate data in a database, ensuring our digital filing system is efficient and reliable. | kellyblaire |
1,880,774 | CONSULT GRAYWARE TECH SERVICES TO RECOVER STOLEN CRYPTOCURRENCY | The allure of cryptocurrency, the promise of financial freedom, is a seductive siren song. I, like... | 0 | 2024-06-07T19:18:00 | https://dev.to/jack_daniels_e0c666037743/consult-grayware-tech-services-to-recover-stolen-cryptocurrency-20m2 | The allure of cryptocurrency, the promise of financial freedom, is a seductive siren song. I, like many others, was captivated by its siren call, lured into a world of digital investment with the promise of astronomical returns. The initial investment, a mere $3,200, seemed inconsequential, a small price to pay for the potential windfall that lay ahead. But as I delved deeper into this new world, the reality of the situation became increasingly clear. The promises of easy wealth were a mirage, a smokescreen designed to conceal a ruthless scheme of deception and theft.The demands for additional funds, a constant barrage of requests for more and more money, became a relentless burden. I poured hundreds of thousands of dollars into this digital abyss, hoping against hope that my investment would finally bear fruit. But the reality was far more sinister. I had fallen victim to a cunning scam, a wolf in sheep's clothing disguised as a legitimate investment opportunity.The realization hit me like a ton of bricks. My entire life savings, a staggering $826,000 worth of Bitcoin, was gone, stolen by the very people I had trusted with my financial future. Despair washed over me, a cold wave of dread that threatened to drown me in its icy grip. I was lost, alone, and utterly helpless.
I sought help, desperately searching for a way to reclaim what had been stolen from me. I reached out to authorities, hoping for a miracle, a chance to salvage what remained of my life's work. But my pleas for help fell on deaf ears. The scammers had vanished into the digital ether, leaving me stranded on a desolate island of financial ruin.It was then, amidst the despair and hopelessness, that I stumbled upon GRAYWARE TECH SERVICES. How I found them, I cannot recall, but their appearance in my life was nothing short of a miracle. They offered a lifeline, a glimmer of hope in the darkness.Their expertise, their understanding of the complexities of the digital world, and their commitment to justice instilled in me a sense of trust that I had lost long ago. They didn't offer empty promises; instead, they provided a clear roadmap, explaining the challenges and the potential pitfalls of recovering my stolen Bitcoin.And within a matter of hours, a miracle occurred. GRAYWARE TECH SERVICES, with their unparalleled skills and unwavering determination, succeeded in recovering my stolen Bitcoin. They had navigated the treacherous labyrinth of the digital world, outwitting the scammers and reclaiming what was rightfully mine.
My experience with GRAYWARE TECH SERVICES was a testament to the power of hope, a beacon of light in the darkness of my despair. They are more than just a recovery agency; they are champions of justice, digital knights fighting against the forces of corruption that plague the online world. Their expertise and unwavering commitment to reclaiming what has been stolen make them an invaluable resource for anyone who has fallen victim to online scams.The digital world can be a dangerous place, but with GRAYWARE TECH SERVICES, there is always a fighting chance to reclaim what has been stolen. Truly GRAYWARE TECH SERVICES is a testament to the enduring power of human ingenuity and determination, a force for good in a world that can be fraught with deceit. Get in touch with GRAYWARE TECH SERVICES via follow details below
WHATSAPP: +16022977244
EMAIL: contact @ graywaretechservices . com
WEBSITE: https://graywaretechservices.com | jack_daniels_e0c666037743 | |
1,880,867 | How to Achieve Net Negative Churn in 2024 | This Blog was Originally Posted to Churnfree Blog Net Negative Churn is the most valuable negative... | 0 | 2024-06-07T21:52:00 | https://churnfree.com/blog/net-negative-churn/ | churnreduction, saaschurn, churnrate, negativechurn | This Blog was Originally Posted to **[Churnfree Blog](https://churnfree.com/blog/net-negative-churn/?utm_source=Dev.to&utm_medium=referral&utm_campaign=content_distribution)**
Net Negative Churn is the most valuable negative metric in modern business models. Let’s know the WHATs and HOWs of this much-prized indicator of SaaS business.
Negative churn is the most powerful growth engine for SaaS businesses. It points out that your existing customers find enough value in your service to increase their spending over time, which offsets the losses from those who leave. This dynamic is necessary for long-term sustainability.
Keep reading, and you’ll learn what net negative churn is, the net negative churn rate formula, and how to achieve it.
**What is Net Negative Churn**
Net negative churn means your overall business revenue increases during a specific period despite losing customers. It occurs when you earn more from your existing customers than you lose in service cancellation or downgrade from customer churn.
**Can You Have a Negative Churn Rate?**
Of course – YES.
Net negative churn, or negative churn rate, happens when revenue gained from existing customers through expansion strategies like upgrades and cross-selling is higher than the revenue lost due to customer churn.
For example, you may lose customers who make up 5% of your annual revenue. However, your existing customers purchased additional services valued at 10% of annual revenue. So, despite losing customers, you came out in the green with 5% revenue growth.
**Why Should You Achieve Net Negative Churn and How It Relates to Revenue Growth**
Some customer churn is inevitable in every business, and a negative churn rate speaks clearly to the strength of an organization's relationship with its customers.
Net negative churn is mainly associated with SaaS and subscription-based business models, and this SaaS metric solely focuses on your existing customers. Seeing a negative churn rate in your revenue books indicates that your company is growing its revenue from its current customer base despite losing some customers. It’s also a powerful sign that your business is successfully enhancing its value over time and can retain valuable customers. It also indicates the following:
**Steady Revenue Growth:** Shows that the extra money from current customers is more than what’s lost from those who leave, keeping your revenue growing consistently.
**Loyal Customers:** This indicates that customers love your product enough to spend more, staying around longer and providing a stable base.
**Cost-Effective:** It’s cheaper to keep and upsell to existing customers than to find new ones, giving you a better bang for your buck.
**Competitive Edge:** Helps you maintain revenue growth even if new customer sign-ups slow down, keeping your business financially healthy.
**Financial Stability:** This leads to better financial planning and stability, so you’re not always scrambling to replace the lost revenue with new customers.
**How to Calculate Net Negative Churn**
**Below is the Net Negative Churn Formula:**

**Churned MRR** is the total monthly recurring revenue (MRR) lost from churned customers.
**Expansion MRR** is the total monthly recurring revenue (MRR) gained from existing customers through upsells, cross-sells, and upgrades.
**Start of Period MRR** is the total monthly recurring revenue (MRR) at the beginning of the period.
**How to Apply the Negative Churn Rate Formula**
Example Calculation:
**Start of Period MRR:** $100,000
**Churned MRR:** $5,000
**Expansion MRR:** $10,000
**Net Churn Rate:** ($5,000 - $10,000) / $100,000 = -5%
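The example calculation above can be sketched in a few lines of Python (the function and variable names are my own, not from the formula image):

```python
def net_churn_rate(churned_mrr, expansion_mrr, start_mrr):
    """(Churned MRR - Expansion MRR) / Start of Period MRR.

    A negative result means expansion revenue outpaced churned revenue."""
    return (churned_mrr - expansion_mrr) / start_mrr

rate = net_churn_rate(churned_mrr=5_000, expansion_mrr=10_000, start_mrr=100_000)
print(f"{rate:.0%}")  # -5%
```

A result below zero is exactly the "net negative churn" the article describes.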
**5 Best Strategies to Help You Achieve Net Negative Churn Rate**
To help you keep your existing customers happy and take advantage of the benefits of negative churn, we will discuss the five most effective strategies to increase customer value, reduce customer churn, and speed up business growth.

**1. Cross-Selling**
Cross-selling is all about encouraging your customers to buy additional products or services that complement their existing purchases.
Cross-selling provides extra value to your clients. The best cross-sell offers are not only related to the initial purchase but also come at a more attractive price. You’ve probably seen this in action with consumer advertising.
Imagine you’re shopping online for a new coffee maker. As you’re about to check out, the site suggests you add a pack of specialty coffee beans to go with it. You skip the offer for now. Over the next few weeks, you start getting emails from the store offering discounted coffee beans. Since you’re enjoying your new coffee maker and the beans are cheaper, you’re more tempted to buy them. The store knows you love your coffee maker and figures you might want to enhance your coffee experience with premium beans, so they keep reminding you about this related product.
This cross-selling approach works just as well in the B2B world. For instance, if a client has purchased a software package, you could offer them additional modules or services that complement their current setup. By presenting these add-ons at a lower price point, you make it easier for them to say yes, adding more value to their original purchase.
So, when you’re looking to boost your customer value, think about what additional products or services you can offer that would make their experience even better. Your clients will appreciate the extra value, and you’ll see increased growth.
**2. Up-Selling**
Upselling or upgrade strategies involve offering customers a more expensive or upgraded version of a product they already own.
For instance, if a client has a subscription to your basic software package, you could offer them a premium version with advanced features and benefits. Highlight how these upgrades will enhance their experience, save time, or improve productivity. This added value makes it easier for them to see the benefit of spending more on a product they already enjoy.
So, when you’re looking to boost your customer value, consider how you can entice them with an improved version of what they already love. Your clients will appreciate the enhanced experience, and you’ll see increased revenue growth.
**3. Seat Expansion**
Seat expansion is when you let your customers add more users, encouraging additional purchases within the product your customer already owns. This model is standard in the software industry. If you charge a fee per product administrator or software user, seat expansion involves encouraging customers to buy more licenses for additional people who need access.
For example, imagine you’ve purchased access for three users, but then your team grows, and you need a fourth. That’s seat expansion.
Companies using this type of upselling strategy often add features that appeal to a broader range of users. For instance, if you sell financial software for businesses, you could add features that help HR manage payroll and employee information, encouraging more users to sign up.
When promoting seat expansion, clearly explain the benefits of adding additional users. How will having more seats at the table benefit your customers? Highlight how the extra users can improve productivity, streamline processes, or enhance collaboration within their team. Your clients will appreciate the added functionality and efficiency.
**4. Increasing CHI (Customer Happiness Index)**
Invest in a strong customer success team. Customers who fully utilize and benefit from your product are more likely to upgrade, see its value, and stay loyal. This leads to achieving a negative churn rate and [reduced customer churn.](https://churnfree.com/blog/how-to-reduce-customer-churn-rate-in-your-saas-company/?utm_source=Dev.to&utm_medium=referral&utm_campaign=content_distribution)
To encourage customer engagement, you can start by consistently and meaningfully communicating. This could involve personalized email campaigns and valuable content that addresses customers’ needs and interests.
Investing in a strong customer success team can significantly enhance customer satisfaction. These teams ensure that customers fully utilize your product and can help troubleshoot any issues.
For instance, imagine you run a project management software company. Your customer success team could proactively contact clients to offer training sessions, helping them discover advanced features that streamline their workflows. When customers see the tangible benefits of your product, they’re more likely to stay loyal and consider upgrades.
Offering excellent customer support is also crucial. Quick and effective problem resolution can turn potentially harmful experiences into positive ones. For example, if a customer encounters a bug in your software, your support team’s prompt and helpful response can reinforce their trust in your company.
Additionally, gathering and acting on customer feedback shows that you value their input and are committed to improving their experience. Implementing a customer loyalty program can further boost engagement by rewarding repeat business and long-term loyalty. Think of it like this: a customer who consistently uses your software might earn points toward free months of service or exclusive access to beta features – Voila!
**5. Leveraging Customer Data**
Use data insights to personalize offers and services.
Tailored recommendations make upselling and cross-selling efforts much more effective and natural.
Start by collecting data on your customers’ behavior, preferences, and usage patterns. This information can help you understand what they value most about your product. For instance, if you notice that a customer frequently uses a particular feature of your software, you can recommend an upgrade or an add-on that enhances that feature.
Imagine you run an e-commerce platform. By analyzing purchase history and browsing behavior, you can suggest complementary products that align with your customers’ interests. For example, if a customer frequently buys running gear, you could recommend new running shoes or fitness trackers tailored to their preferences.
Using customer data to segment your audience can also improve your marketing campaigns. Tailored messaging and emails can get higher customer engagement.
Leveraging customer data also helps identify at-risk customers who are likely to churn. By monitoring usage patterns and engagement levels, you can proactively reach out to these customers with personalized offers or support to re-engage them.
This is where [churn prediction software](https://churnfree.com/blog/churn-prediction-software/?utm_source=Dev.to&utm_medium=referral&utm_campaign=content_distribution) like Churnfree comes in handy. [Churnfree](https://churnfree.com/?utm_source=Dev.to&utm_medium=referral&utm_campaign=content_distribution) offers a wide range of solutions to help you study and analyze customer data effectively. Their platform provides insights into customer behavior, helping you create targeted retention strategies and personalized marketing campaigns.
Once you have your customer data and tools like Churnfree, you can create more relevant and appealing offers, enhance customer satisfaction, and reduce churn. Personalized interactions show your customers that you understand their needs and are committed to providing the best possible experience.
**What is the Difference Between Net Negative (Revenue) Churn and Net Negative Churn Rate?**
There is essentially no difference between net negative (Revenue) churn and net negative churn rate, and both terms can be used interchangeably. They both share the same formula. Here, note that net negative churn is also called net negative Revenue churn.
However, when detailed, negative revenue churn and negative net churn rate describe slightly different customer and revenue dynamics in SaaS and subscription-based business models.
Here’s how they differ:
**1. Net Negative Revenue Churn** emphasizes the revenue aspect specifically. It focuses on the net change in revenue due to churn and expansion. Meanwhile, net negative churn rate is a broader term used to describe overall churn across both customers and revenue.
**2. Net Negative Revenue Churn** is used in financial and revenue reporting to highlight how revenue from existing customers can balance losses from churn. Meanwhile, net negative churn rate appears in discussions about customer retention strategies, lifetime value, and overall business health.
**Conclusion**
This article has walked you through net negative churn: what it is, why it matters in the SaaS industry, how to approach it strategically by reducing churn and increasing revenue, and how to calculate it.

Hopefully, it is now clear that net negative churn improves a business's financial health, builds a base of loyal customers, and helps the business be more innovative.
If you want to learn more on [SaaS churn rate](https://churnfree.com/blog/b2b-saas-churn-rate-benchmarks/?utm_source=Dev.to&utm_medium=referral&utm_campaign=content_distribution), B2B SaaS Benchmarks, [customer retention strategy](https://churnfree.com/blog/customer-retention-strategies/?utm_source=Dev.to&utm_medium=referral&utm_campaign=content_distribution), and winback campaign strategies, then visit the [Churnfree Blog](https://churnfree.com/blog/?utm_source=Dev.to&utm_medium=referral&utm_campaign=content_distribution) for detailed insights.
**FAQs**
**What does net churn mean?**
Net churn, or Net MRR Churn Rate, represents the overall percentage of monthly recurring revenue (MRR) that is lost from existing subscriptions or customers during a specific period. It considers the MRR gained from expansions and upgrades among the remaining customers.
**What is the standard formula for calculating the churn rate?**
It is calculated by:
Churn rate = (Lost Customers ÷ Total Customers at the Start of the Time Period) x 100
For instance, if a business starts the month with 250 customers and loses 10 by the end of the month, you would calculate 10 divided by 250, resulting in a churn rate of 4%.
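As a quick sketch, the same calculation in Python using the numbers from the example:

```python
def churn_rate(lost_customers, starting_customers):
    """Churn rate = (Lost Customers / Total Customers at period start) x 100."""
    return lost_customers / starting_customers * 100

print(churn_rate(lost_customers=10, starting_customers=250))  # 4.0 (%)
```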
**What is considered an acceptable net churn rate?**
An acceptable net churn rate typically falls within 5 to 7% annually.
**What is Net Negative Revenue Churn?**
Net negative revenue churn focuses on revenue gained from existing customers and does not focus on revenue generated from new customers.
**How to Check Negative Revenue Churn?**
The Net Negative Revenue churn formula is similar to the calculation of negative net churn. Here is how to check negative revenue churn.
Net Negative Revenue Churn Rate = (Revenue Lost from Churn - Revenue Gained from Expansions) / Total Revenue at the Beginning of the Period
**What is Net Revenue Retention?**
Net revenue retention (NRR) measures the percentage of revenue retained from existing customers over a specific period, including upgrades, downgrades, and churn. It provides a broader view of revenue growth from existing customers while considering positive and negative customer spending changes.
NRR = (Starting MRR + Expansion MRR - Churned MRR - Contraction MRR) × 100 / Starting MRR
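A minimal Python sketch of the NRR formula, with illustrative figures that are not from the article:

```python
def net_revenue_retention(starting_mrr, expansion_mrr, churned_mrr, contraction_mrr):
    """NRR = (Starting MRR + Expansion MRR - Churned MRR - Contraction MRR) x 100 / Starting MRR."""
    return (starting_mrr + expansion_mrr - churned_mrr - contraction_mrr) * 100 / starting_mrr

# Example: $100k starting MRR, $10k expansion, $5k churned, $2k downgrades
print(net_revenue_retention(100_000, 10_000, 5_000, 2_000))  # 103.0
```

An NRR above 100% mirrors net negative churn: expansion outweighs churn and contraction.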
| churnfree |
1,880,866 | Day 966 : The Next Chapter | liner notes: Professional : It's Friday + no meetings = I got a bunch of stuff done. haha I... | 0 | 2024-06-07T21:48:31 | https://dev.to/dwane/day-966-the-next-chapter-2cd7 | hiphop, code, coding, lifelongdev | _liner notes_:
- Professional : It's Friday + no meetings = I got a bunch of stuff done. haha I finished up a sample application and updated a library I created and posted it into a channel to get some feedback. I started the process to apply for a Visa, but had some questions. Yeah, not a bad way to start the weekend.
- Personal : Last night, I went through Bandcamp and picked up some projects. I spun up a quick project to get the start of an in-browser highlight clip video creator. I got it to the point where you can upload a video and a clip is created. The next chapter is to get a video from a URL, cut multiple clips at different timestamps, add a filter, generate start and end clips, join them together, show a preview and make it available for download.

Going to finish putting together the radio show, work on a logo for a side project and jump back on the highlight clip video creator. I would like to try it out for the study sessions on Sunday over at https://untilit.works.
Have a great night and weekend!
peace piece
Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com
{% youtube GYSTHeBHKqQ %} | dwane |
1,880,864 | capstone debugging: learnings | The clock on my monitor silently ticked, while I cried in JSX fragments and Spring beans. It was Week... | 0 | 2024-06-07T21:34:46 | https://dev.to/ashleyd480/capstone-debugging-learnings-3495 | beginners, learning, bootcamp, fullstack | The clock on my monitor silently ticked, while I cried in JSX fragments and Spring beans. It was Week 15 and 16- the final stretch of my coding bootcamp and we were tasked with creating a full-stack app. Time was tight from having to research and execute new concepts, and the bugs- oh someone call pest control! 😅

Though it was tough, I did end up finishing and learned a bunch in the process. My project was a volunteer-driven maps application that allows users to crowdsource accessibility information as well as explore and search accessible places near them.
While my Github [readme](https://github.com/ashleyd480/access-map-app-capstone) shares more in-depth details of my learnings, I wanted to also share a behind the scenes view of 4 bugs I faced and how I debugged them:
## Table of Contents
1. [Duplicate Data Seeding](#1-duplicate-data-seeding)
2. [Rating Button Not Showing as Checked](#2-rating-button-not-showing-as-checked)
3. [Uncontrolled Component Warning](#3-uncontrolled-component-warning)
4. [Conditional API Calls](#4-conditional-api-calls)
---
## 1. Duplicate Data Seeding
**Issue:** When seeding data without using a list, duplicate entries were created.
**Solution:** Created a variable to store the data seeder return and referenced this variable in other functions. This prevented duplication when checked in Postman.
To ensure the app had places loaded along with user reviews (to simulate a database of Maps places) and feature tags, a data seeding mechanism was used. 🌱. My approach at the beginning was to call the return of the entity seeder class. For example, in my `reviewSeeder`, I called `userSeeder.seedUsers()`, thinking that would just get me the return value of the initial list of 10 seeded users. Instead, I ended up with duplicates—Postman showed 20 users instead of 10 😬. When checking that list, I noticed that the usernames repeated twice, i.e. `user1` appeared twice with `id` of 1 and 11.
After an hours-long trip down a rabbit hole, I realized that `userSeeder.seedUsers()` appeared to invoke the seeder function again instead of just returning the initial seeded list.
To fix this, I created a variable to hold the data seeder's return value and referenced this variable in subsequent functions. This change effectively prevented duplication, confirmed by rechecking in Postman.
```
public void run(String... args) throws Exception {
    List<User> seededUsers = userSeeder.seedUsers(); // Seed users first
    List<FeatureTag> seededTags = tagSeeder.seedTags(); // Then seed tags
    List<Place> seededPlaces = placeSeeder.seedPlaces(seededTags); // Pass seeded tags to places
    reviewSeeder.seedReviews(seededPlaces, seededUsers); // Pass seeded places and users to reviews (reviews can only exist with a place, and places have tags)
}
```
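The pitfall generalizes beyond Spring. Here is a sketch in Python using a hypothetical in-memory seeder, purely for illustration: each call to the seeder inserts a fresh batch, so the fix is to call it once and pass the returned list around.

```python
class UserSeeder:
    """Hypothetical in-memory stand-in for a database seeder."""
    def __init__(self):
        self.db = []  # stands in for the users table

    def seed_users(self):
        batch = [f"user{i}" for i in range(1, 4)]
        self.db.extend(batch)  # every call inserts a fresh batch of rows
        return batch

# Buggy pattern: calling the seeder wherever its data is needed
buggy = UserSeeder()
buggy.seed_users()
buggy.seed_users()  # "returns the list" but also re-seeds: 6 rows now

# Fixed pattern: seed once, keep the return in a variable, pass it along
fixed = UserSeeder()
seeded_users = fixed.seed_users()
# downstream functions reference seeded_users instead of calling seed_users() again
```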
## 2. Rating Button Not Showing as Checked
**Issue:** The rating button wasn’t properly working or showing as checked upon user selection.
**Solution:** Used the checked attribute to control the selected radio button based on the component's state, as well as corrected mapping logic.
When adding a review, users are also prompted to rate the accessibility of the place from 1-5. Getting the rating buttons to display as checked was another challenge ⭐.
Kudos to my instructor who worked through with me during office hours.
Firstly, we fixed the mapping logic. The way rating radio buttons are generated is to take the array of ratings [1, 2, 3,4,5] and then map through them.
```
{[1, 2, 3, 4, 5].map((value) => { ... })}
```
A `map` callback typically takes two parameters: the value (the current element of the array being processed) and the index of that element in the array. In this case, `(value) => { ... }` is an anonymous function that takes `value` as its parameter, and it says: for each rating number, render a radio button.
```
{[1, 2, 3, 4, 5].map((value) => {
  return ( // return a radio button for each number in the array
    <Form.Check
      key={value}
      type="radio"
      label={value} // set label text for radio button
      name="rating"
      value={value} // set value attribute to the current value we are mapping over
      checked={formData.rating === value.toString()}
      onChange={handleChange}
    />
  );
})}
```
After that, I was able to confirm that yes, the value of the user’s selection was read by using a `console.log` and confirming the value was read on click. However, the button still appeared as unchecked, and that can be confusing to the end user.
We researched and learned that the `checked` attribute is what helps React determine which button to select when the form renders. This meant that there was an issue with how we were defining the statement in `checked.` The values in the bracket had to evaluate to true and we were comparing `formData.rating` to `value` (which was the value of the radio button generated from our mapping).
We confirmed that this comparison had to evaluate to true: when we wrote `checked={false}`, the `formData.rating` value was still read on the console, but the button was not checked, which proves that when the comparison is `false`, no check appears visually in our UI.
Therefore, we dug a bit further into how we were getting those values and comparing them.
`formData.rating` is set by `handleChange`, which updates the rating value when the user clicks a radio button. (Essentially, the function looks at `event.target.name`, aka the field that triggered the change, gets its value, and sets it on the form data.)
```
const handleChange = (event) => {
const { name, value } = event.target; // destructures the event.target with the keys in brackets
// this way, we can use `name` and `value` variables vs `event.target.name, event.target.value
setFormData((prevFormData) => ({
...prevFormData, //takes the form data and makes copy of it
[name]: value, //gets value for fields that triggered the change and sets it to form data.
}));
};
```
We ran a `console.log` to compare `formData.rating` and `value.` In the end, we saw the issue was a type mismatch. After researching and seeing a suggested `toString` method online, we used that with our own code. `formData.rating === value.toString()` generated `true` and the check was now appearing on the UI. ✅
We could also verify this with the `console.log`. You can see when the user clicks 2.
Line 88 is `formData.rating` and Line 89 is `value.toString()`. You can see 5 lines appear - which is from our mapping of the 5 ratings, and for each it checks to see if the `value` we are mapping over from the array is equal to the user’s selection. When it is mapped over 2, that matches what the user selected, so the check appears visually in the UI.

## 3. Uncontrolled Component Warning
**Issue:** Input fields were locked because they were directly bound to `userData`.
**Solution:** Made a copy of `userData` to allow edits and saved changes on submit. This prevented the form from locking while enabling updates.
When designing the `Edit Account` page, I wanted data from `My Account` (which was retrieved from a `GET` mapping call to also populate on `Edit Account`). I used the `UserData` context provider in React to carry over those values. While the information did port over correctly to `Edit Account`, the input fields were locked 🔒, preventing edits.
Shoutout to my mentor who helped me to battle this bug. The console showed an error of “uncontrolled component warning.” We learned this error is when the state of a component is not being controlled by React itself, aka React doesn’t have complete control over the `Edit Account` form’s input fields. Yes, React is a control freak. 😉
Fields were directly bound to `userData` (which was set from that aforementioned API call on `My Account`). This resulted in the fields being "locked" and preventing any edits. This also means that when I was trying to edit the input fields, I was essentially trying to edit the original userData. React doesn't allow direct changes to props because they are supposed to be immutable. So, trying to edit the input fields directly would essentially be trying to modify immutable data, which React won't allow.
Also, when an input field is directly bound to a piece of data- in this case `userData`, React cannot fully control the state of that input field.
We resolved it by creating a copy of `userData`, allowing modifications without altering the original until submission.
```
const [formData, setFormData] = useState({ ...userData });
```
`formData` is a variable that refers to that copy of `userData` (with its key-value pairs of data details), so in our form fields, we can use the dot notation of `value={formData.email}`.
This also fixed the uncontrolled component warning, ensuring form fields were populated with the initial `userData` values but remained editable. Upon submitting, changes were saved back to the original user data with a `PUT` request, ensuring a smooth and functional user experience.
Finally, the user is redirected back to `My Account` after a successful `PUT` call, and that is where `GET` mapping happens to retrieve the user info and set it to the `userData` context provider- ensuring both the backend and the frontend context providers’ values are updated 💾.
## 4. Conditional API Calls
**Issue:** Fetch API calls wouldn’t populate with data on the frontend
**Solution:** Ensured API call was made only if the `username` was truthy, triggering the call once the username was available.
A peculiar issue with populating data from fetch API arose 🚧. The first time I noticed this was when trying to get `My Account` details to populate by username. My API call required the `username` value.
```
const responseData = await fetchData(`users?username=${username}`);
```
I started with using local storage to store the username upon a successful sign in, but the API call to get account details would not return anything. I even did a `console.log` to ensure the username was being correctly read. The instructor gave a hint on how local storage can be slow to load. Thus, I then tested using a `username` context provider to pass the value - thinking this would resolve it. Still, there was no luck in rendering the API call’s return.
To prove that it was not an issue with the system reading the `username` value, I even tried to hardcode a `username` value of const username = `user1` before the API call, and that worked. Something else was brewing.
As I initially had an `address` entity linking to `user`, I tried to change the dot notation format to `users.address`, `users.` , `address`, etc- all to no avail either. I then thought perhaps the `address` entity was giving me issues due to how it was set up on Spring Boot with the one-to-one cascade, so I commented it out to see if I could at least get the `user` information to populate. It did!
When I uncommented out `address`, then the `address` would populate. I tested this a few times with mixed results, and noticed I had to wait a bit to uncomment out `address` for both the user and address to display. This gave me another hint, that perhaps we had to wait for the user information to populate.
A few hours later, I learned what was going on: API calls are asynchronous, so there is no guarantee the data is available at the time of the call. React state and context values don't update immediately either, so JavaScript won't wait for those values to be populated before continuing on to execute the API calls.
That meant that when we attempted to fetch "My Account" details based on the `username`, there was no guarantee the username would be available immediately; it could take a render or two before the username had been retrieved and set in state.
By implementing a conditional API call that triggered only if the username was truthy, I ensured the address field was populated correctly. This method checked if the username was available before making the API call, allowing the address data to load appropriately. This method highlighted the importance of conditional logic in ensuring seamless data fetching and rendering in the UI 🎉
```
useEffect(() => {
if (username) {
fetchUserData(username);
}
}, [username]);
```
| ashleyd480 |
1,880,862 | Learn How to Create a Simple PHP REST API | Introduction In this tutorial, we'll guide you through the process of creating a simple REST API... | 0 | 2024-06-07T21:29:21 | https://dev.to/ayas_tech_2b0560ee159e661/learn-how-to-create-a-simple-php-rest-api-185i | **Introduction**
In this tutorial, we'll guide you through the process of creating a simple REST API using PHP. REST APIs are essential for web applications as they allow for communication between different software systems. By the end of this tutorial, you'll have a working API that can handle basic CRUD (Create, Read, Update, Delete) operations.
**Step 1: Setting Up the Project**
Create a new directory for your project and set up the necessary files.
- index.php
- config.php
- api.php
**Step 2: Database Configuration**
Create a MySQL database and a table for this example. We'll use a table called users.
```
CREATE DATABASE rest_api_db;
USE rest_api_db;
CREATE TABLE users (
id INT AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(100) NOT NULL,
email VARCHAR(100) NOT NULL UNIQUE,
age INT NOT NULL
);
```
**Step 3: Configuring Database Connection**
In config.php, add the database connection details.
```
<?php
$host = 'localhost';
$db_name = 'rest_api_db';
$username = 'root';
$password = '';
try {
$pdo = new PDO("mysql:host=$host;dbname=$db_name", $username, $password);
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
} catch (PDOException $e) {
echo "Connection failed: " . $e->getMessage();
}
?>
```
**Step 4: Creating the API Endpoints**
In api.php, we'll handle the different API requests.
```
<?php
require 'config.php';
$method = $_SERVER['REQUEST_METHOD'];
$request = explode('/', trim($_SERVER['PATH_INFO'] ?? '', '/')); // ?? '' avoids a notice when no path is supplied
switch ($method) {
case 'GET':
if (isset($request[0]) && is_numeric($request[0])) {
getUser($request[0]);
} else {
getUsers();
}
break;
case 'POST':
createUser();
break;
case 'PUT':
if (isset($request[0]) && is_numeric($request[0])) {
updateUser($request[0]);
} else {
echo json_encode(['error' => 'Invalid User ID']);
}
break;
case 'DELETE':
if (isset($request[0]) && is_numeric($request[0])) {
deleteUser($request[0]);
} else {
echo json_encode(['error' => 'Invalid User ID']);
}
break;
default:
echo json_encode(['error' => 'Invalid Request Method']);
}
function getUsers() {
global $pdo;
$stmt = $pdo->query("SELECT * FROM users");
$users = $stmt->fetchAll(PDO::FETCH_ASSOC);
echo json_encode($users);
}
// Function to fetch a single user by ID
function getUser($id) {
global $pdo;
$stmt = $pdo->prepare("SELECT * FROM users WHERE id = ?");
$stmt->execute([$id]);
$user = $stmt->fetch(PDO::FETCH_ASSOC);
echo json_encode($user);
}
function createUser() {
global $pdo;
$data = json_decode(file_get_contents('php://input'), true);
$stmt = $pdo->prepare("INSERT INTO users (name, email, age) VALUES (?, ?, ?)");
if ($stmt->execute([$data['name'], $data['email'], $data['age']])) {
echo json_encode(['success' => 'User created successfully']);
} else {
echo json_encode(['error' => 'Failed to create user']);
}
}
function updateUser($id) {
global $pdo;
$data = json_decode(file_get_contents('php://input'), true);
$stmt = $pdo->prepare("UPDATE users SET name = ?, email = ?, age = ? WHERE id = ?");
if ($stmt->execute([$data['name'], $data['email'], $data['age'], $id])) {
echo json_encode(['success' => 'User updated successfully']);
} else {
echo json_encode(['error' => 'Failed to update user']);
}
}
function deleteUser($id) {
global $pdo;
$stmt = $pdo->prepare("DELETE FROM users WHERE id = ?");
if ($stmt->execute([$id])) {
echo json_encode(['success' => 'User deleted successfully']);
} else {
echo json_encode(['error' => 'Failed to delete user']);
}
}
?>
```
**Step 5: Testing the API**
You can test the API endpoints using tools like Postman or curl.
```shell
// GET all users
curl -X GET http://localhost/simple-php-rest-api/api.php
// GET a single user by ID
curl -X GET http://localhost/simple-php-rest-api/api.php/1
// POST a new user
curl -X POST -H "Content-Type: application/json" -d '{"name": "John Smith", "email": "john.smith@gmail.com", "age": 30}' http://localhost/simple-php-rest-api/api.php
// PUT update a user by ID
curl -X PUT -H "Content-Type: application/json" -d '{"name": "John Smith", "email": "john.smith@gmail.com", "age": 25}' http://localhost/simple-php-rest-api/api.php/1
// DELETE a user by ID
curl -X DELETE http://localhost/simple-php-rest-api/api.php/1
```
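For reference, a successful `GET http://localhost/simple-php-rest-api/api.php/1` after the POST above returns the stored row as a JSON object. With PDO's default settings, MySQL column values typically come back as strings, so the response looks roughly like this (values illustrative):

```json
{
    "id": "1",
    "name": "John Smith",
    "email": "john.smith@gmail.com",
    "age": "30"
}
```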
**Conclusion**
Creating a simple REST API with PHP is straightforward and powerful. With the knowledge gained from this tutorial, you can expand and customize your API to suit your specific needs, adding more functionality and enhancing its robustness.
| ayas_tech_2b0560ee159e661 | |
1,880,863 | AI Image and AI Chat pocket tool | Python GUI for windows. | Credentials Area {Only asks 1 time} Main Interface The Prompt... | 0 | 2024-06-07T21:27:32 | https://dev.to/drake_x_97c37b3163e072b54/cloudflare-ai-image-and-ai-chat-pocket-tool-python-gui-for-windows-9kc | python, gui, exe | **Credentials Area** {Only asks 1 time}

**Main Interface**

**The Prompt area:**

**Generating**

**Generated**

**Chat Area**

**Conversation**

> It's simple, easy to use, and user-friendly.
**`Let me know of your thoughts`**
> Check my Instagram for Images i generated using my own custom prompts: [@nxm.ai](https://instagram.com/nxm.ai) you can contact me regarding the tool as well.
| drake_x_97c37b3163e072b54 |
1,880,860 | Shadcn-ui codebase analysis: site-footer.tsx explained. | I wanted to find out how the below shown footer component is developed on ui.shadcn.com, so I looked... | 0 | 2024-06-07T21:25:31 | https://dev.to/ramunarasinga/shadcn-ui-codebase-analysis-site-footertsx-explained-2bgh | javascript, opensource, nextjs, shadcnui | I wanted to find out how the below shown footer component is developed on [ui.shadcn.com](http://ui.shadcn.com), so I looked at its [source code](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/layout.tsx). Because shadcn-ui is built using app router, the files I was interested in were [layout.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/layout.tsx) and [footer.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/site-footer.tsx#L3)

site-footer is a small component with code related to footer as shown above.
[site-footer code snippet](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/site-footer.tsx#L3)
```javascript
import { siteConfig } from "@/config/site"
export function SiteFooter() {
return (
<footer className="py-6 md:px-8 md:py-0">
<div className="container flex flex-col items-center justify-between gap-4 md:h-24 md:flex-row">
<p className="text-balance text-center text-sm leading-loose text-muted-foreground md:text-left">
Built by{" "}
<a
href={siteConfig.links.twitter}
target="_blank"
rel="noreferrer"
className="font-medium underline underline-offset-4"
>
shadcn
</a>
. The source code is available on{" "}
<a
href={siteConfig.links.github}
target="_blank"
rel="noreferrer"
className="font-medium underline underline-offset-4"
>
GitHub
</a>
.
</p>
</div>
</footer>
)
}
```
Have you noticed the `Built by{" "}`? I did not know this trick. I had struggled to keep a proper space between words when one of them is wrapped in an anchor tag: JSX drops the whitespace between text and a tag that starts on the next line, so the words run together.
For example, if you had written your footer like below:
```html
Built by
<a
href={siteConfig.links.twitter}
target="_blank"
rel="noreferrer"
className="font-medium underline underline-offset-4"
>
shadcn
</a>
```
Your footer would load this as
```
Built byshadcn
```
But what you want is
```
Built by shadcn
```
Hence the reason why you have {" "}
> _Want to learn how to build shadcn-ui/ui from scratch? Check out_ [_build-from-scratch_](https://github.com/Ramu-Narasinga/build-from-scratch) _and give it a star if you like it. Solve challenges to build shadcn-ui/ui from scratch. Stuck or need help? A_ [_solution is available_](https://tthroo.com/build-from-scratch)_._
About me:
---------
Website: [https://ramunarasinga.com/](https://ramunarasinga.com/)
Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/)
Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga)
Email: [ramu.narasinga@gmail.com](mailto:ramu.narasinga@gmail.com)
References:
-----------
1. [https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/layout.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/layout.tsx)
2. [https://github.com/shadcn-ui/ui/blob/main/apps/www/components/site-footer.tsx#L3](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/site-footer.tsx#L3) | ramunarasinga |
1,880,840 | Gestión de Listas de Compra en Python con Archivos JSON | En este artículo, exploraremos cómo utilizar Python para gestionar listas de compra utilizando... | 0 | 2024-06-07T21:18:02 | https://dev.to/abrahanmaigua/gestion-de-listas-de-compra-en-python-con-archivos-json-3nh1 | python, json, spanish |
In this article, we'll explore how to use Python to manage shopping lists with JSON files. We'll use Python's capabilities to create, list, update, delete, save, and load shopping lists in JSON format, which gives us an efficient, structured way to maintain and modify our shopping lists.

## Introduction

Shopping lists are an essential tool for organizing what we need to buy. Instead of managing them by hand, we can use Python to automate the process, using JSON files as persistent storage. Below, we'll build a Python class called `ListaCompra` that lets us perform all the necessary operations on our shopping lists.

## Creating the `ListaCompra` Class
```python
import os
import json


class ListaCompra:
    def __init__(self):
        self.listas = {}

    def crear(self, lista: list, name: str, tipo: int = 0):
        """
        Create a new shopping list.

        Args:
        - lista (list): Items of the shopping list.
        - name (str): Name of the list.
        - tipo (int, optional): List type (defaults to 0).
        """
        if name in self.listas:
            print(f"The list '{name}' already exists.")
        else:
            if tipo == 0:
                self.listas[name] = lista
        return self.listas

    def count(self):
        """
        Return the number of shopping lists created.
        """
        return len(self.listas)

    def listar(self):
        """
        Print every shopping list with its items.
        """
        for k, v in self.listas.items():
            print(k)
            for item in v:
                if isinstance(item, list):
                    item = ' '.join(item)
                print(f" - {item}")
            print('')

    def actualizar(self, name, content):
        """
        Update an existing shopping list by appending a new item.

        Args:
        - name (str): Name of the list to update.
        - content: Item to append to the list.
        """
        if name in self.listas:
            self.listas[name].append(content)
        else:
            print(f"The list '{name}' does not exist.")
        return self.listas.get(name, [])

    def eliminar(self, name):
        """
        Delete an existing shopping list.

        Args:
        - name (str): Name of the list to delete.
        """
        if name in self.listas:
            del self.listas[name]
        else:
            print(f"The list '{name}' does not exist.")

    def guardar(self):
        """
        Save the shopping lists to a JSON file named 'lista_compra.json'.
        """
        with open('lista_compra.json', 'w') as file:
            json.dump(self.listas, file, indent=4)

    def cargar(self):
        """
        Load the shopping lists from the JSON file 'lista_compra.json'.
        """
        if os.path.exists('lista_compra.json'):
            with open('lista_compra.json', 'r') as file:
                self.listas = json.load(file)
        else:
            print("The file 'lista_compra.json' does not exist.")
```
## Walking Through the `ListaCompra` Class

### Initialization and Attributes

- `__init__(self)`: Initializes the class with an empty dictionary, `self.listas`, where the shopping lists are stored.

### Class Methods

- `crear(self, lista, name, tipo=0)`: Creates a new shopping list with a unique name `name` and a list of items `lista`. The optional `tipo` parameter specifies the list type.
- `count(self)`: Returns the number of shopping lists created so far.
- `listar(self)`: Prints every existing shopping list along with its items.
- `actualizar(self, name, content)`: Updates an existing shopping list by appending a new item `content`.
- `eliminar(self, name)`: Deletes an existing shopping list by its name `name`.
- `guardar(self)`: Saves all the shopping lists to a JSON file named `lista_compra.json`.
- `cargar(self)`: Loads the shopping lists from the `lista_compra.json` file into the `self.listas` dictionary.

## Using the `ListaCompra` Class
```python
os.system('clear')
app = ListaCompra()
app.crear(['patatas', 'arroz', 'potato'], 'comprar')
app.crear(['galletas'], 'deseos')
app.guardar()
app.cargar()
```
### Walking Through the Usage

- `os.system('clear')`: Clears the console before running the application, for a cleaner display.
- Create an instance `app` of the `ListaCompra` class.
- Create two shopping lists using the `crear` method.
- Save the shopping lists to the JSON file using the `guardar` method.
- Load the shopping lists from the JSON file using the `cargar` method.
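The JSON save/load round trip that `guardar` and `cargar` rely on can also be checked in isolation with the `json` module. A minimal sketch (the temp file name is arbitrary):

```python
import json
import os
import tempfile

listas = {"comprar": ["patatas", "arroz"], "deseos": ["galletas"]}

# Persist the dict as JSON
path = os.path.join(tempfile.gettempdir(), "lista_compra_demo.json")
with open(path, "w") as f:
    json.dump(listas, f, indent=4)

# Read it back
with open(path) as f:
    cargadas = json.load(f)

print(cargadas == listas)  # True: the round trip preserves the lists
```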
## Conclusion

In this article, we've seen how to implement a Python class that manages shopping lists with JSON files as persistent storage. This technique not only simplifies data management, it also gives us a structured, efficient way to keep our shopping lists up to date and accessible from anywhere.

I hope this article has given you a good introduction to handling structured data with Python and JSON files!
 | abrahanmaigua |
1,878,824 | Short-Circuiting and Logical Operators in JavaScript: &&, ||, ?? | Short-circuiting is a concept in logical operations in many programming languages, including... | 0 | 2024-06-07T21:14:04 | https://dev.to/atenajoon/short-circuiting-and-logical-operators-in-javascript--13eh | javascript, webdev, beginners, tutorial |
**Short-circuiting** is a behavior of logical operators in many programming languages, including JavaScript, where evaluation stops as soon as the outcome is determined, such as with the AND (`&&`) and OR (`||`) operators. The first operand alone can decide what gets returned, without the second operand ever being evaluated.
It is particularly useful for optimizing code and preventing unnecessary computations.
The **&& operator**: if the first value is **truthy**, it evaluates and returns "the second value"; if the first value is **falsy**, it short-circuits and returns "that falsy value" without looking at the second.
```javascript
// falsy values: 0, '', undefined, null
console.log(true && 'Something'); // Something
console.log(false && 'Something'); // false
console.log(0 && 'Something'); // 0
```
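Short-circuiting with `&&` is commonly used as a guard, reading a property only when the object exists. A quick sketch:

```javascript
// && as a guard: read .name only when the object is truthy
const user = { name: 'Ada' };
const noUser = null;

const name1 = user && user.name;     // 'Ada' (first value truthy, second returned)
const name2 = noUser && noUser.name; // null (first value falsy, returned as-is)

console.log(name1, name2);
```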
The **|| operator**: if the first value is **truthy**, it short-circuits and returns "that first value"; if the first value is **falsy**, it evaluates and returns "the second value".
```javascript
// falsy values: 0, '', undefined, null
console.log(true || 'Something'); // true
console.log(false || 'Something'); // Something
console.log(0 || 'Something'); // Something
```
But there can be a problem with the returned value: if the first value is 0, it is not actually false; it's just falsy! In that case we may want the actual value 0, not the second value.
Another example:
```javascript
const count1 = undefined;
const count2 = 0;
console.log(count1 || 'No data'); // No data
console.log(count2 || 'No data'); // No data
```
Here, the second console log should show 0 as the total number, not the fallback text. To fix this, the condition needs to tolerate falsy values other than null and undefined. That's where the **nullish coalescing operator** (`??`) comes in. It returns the second value only if the first is null or undefined; for 0 and empty strings it returns the actual value.
```javascript
const count1 = undefined;
const count2 = 0;
const count3 = 5;

console.log(count1 ?? 'No data'); // No data
console.log(count2 ?? 'No data'); // 0
console.log(count3 ?? 'No data'); // 5
```
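The same distinction applies to empty strings. A quick sketch:

```javascript
const label = '';

console.log(label || 'No label'); // 'No label' (|| falls back on any falsy value)
console.log(label ?? 'No label'); // ''         (?? falls back only on null/undefined)
```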
**Optional chaining operator** (`?.`): this is very useful when dealing with objects that may have missing properties, or when you want to safely access deeply nested properties without running into _null_ or _undefined_ errors.
```javascript
const data = {
  value1: undefined, // or it could be { nested: undefined }
  value2: { nested: 3 },
  value3: { nested: 5 }
};

function getMultiplicationResult(obj1, obj2) {
  const val1 = obj1?.nested ?? 0;
  const val2 = obj2?.nested ?? 0;
  return val1 * val2;
}

const result1 = getMultiplicationResult(data.value1, data.value2); // 0 * 3 = 0
const result2 = getMultiplicationResult(data.value2, data.value3); // 3 * 5 = 15
```
Here, we used **?.** (Optional Chaining) to safely access the _nested_ property of the objects _obj1_ and _obj2_.
And used **??** (Nullish Coalescing) to provide a default value of _0_ if _nested_ is _null_ or _undefined_.
So, if _data.value1_ is undefined or if _data.value1.nested_ is undefined, _val1_ will be set to _0_. | atenajoon |
1,880,839 | Introducción a las Funciones en Python | Las funciones en Python son bloques de código reutilizables diseñados para realizar una sola,... | 0 | 2024-06-07T21:14:01 | https://dev.to/abrahanmaigua/introduccion-a-las-funciones-en-python-3lb3 | python, functional, beginners | Las funciones en Python son bloques de código reutilizables diseñados para realizar una sola, relacionada acción. Las funciones nos permiten modularizar el código, hacer que sea más limpio, más legible y más fácil de mantener. En este artículo, exploraremos cómo definir y usar funciones en Python, junto con ejemplos y explicaciones detalladas.
## Defining a Function

To define a function in Python, we use the `def` keyword, followed by the function name, parentheses, and a colon. The code the function executes is written indented underneath.

### Basic syntax:

```python
def nombre_de_la_funcion(parametros):
    # Function body
    ...
```
### Example:

```python
def saludar():
    print("¡Hola, mundo!")
```

This function is called `saludar` and simply prints "¡Hola, mundo!" when it is called.

## Calling a Function

To use a function we have defined, we simply write its name followed by parentheses.

### Example:

```python
saludar()  # This prints "¡Hola, mundo!"
```

## Parameters and Arguments

Functions can accept parameters, which are values we pass to the function for it to use.
### Example with Parameters:

```python
def saludar(nombre):
    print(f"¡Hola, {nombre}!")
```

When calling this function, we must supply an argument, which is passed to the `nombre` parameter.

```python
saludar("Alice")  # This prints "¡Hola, Alice!"
```

## Return Values

Functions can return values using the `return` keyword.

### Example with a Return Value:

```python
def sumar(a, b):
    return a + b
```

We can use the value returned by the function elsewhere in our program.

```python
resultado = sumar(5, 3)
print(resultado)  # This prints "8"
```
## Default Parameters

We can assign default values to parameters. If no argument is provided, the parameter takes its default value.

### Example with Default Parameters:

```python
def saludar(nombre="mundo"):
    print(f"¡Hola, {nombre}!")
```

We can call this function with or without an argument.

```python
saludar()         # This prints "¡Hola, mundo!"
saludar("Alice")  # This prints "¡Hola, Alice!"
```

## Functions with Several Parameters

Functions can take multiple parameters, separated by commas.

### Example with Multiple Parameters:

```python
def presentar(nombre, edad):
    print(f"Me llamo {nombre} y tengo {edad} años.")
```

```python
presentar("Alice", 30)  # This prints "Me llamo Alice y tengo 30 años."
```
## Nested Functions

We can define functions inside other functions.

### Example of Nested Functions:

```python
def exterior():
    print("Esta es la función exterior.")

    def interior():
        print("Esta es la función interior.")

    interior()
```

```python
exterior()
# This prints:
# Esta es la función exterior.
# Esta es la función interior.
```
## Lambda Functions

Lambda functions are anonymous, single-expression functions defined with the `lambda` keyword.

### Example of a Lambda Function:

```python
sumar = lambda a, b: a + b
print(sumar(5, 3))  # This prints "8"
```
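Beyond small helpers like `sumar`, lambdas are most often used inline, for example as the key function of `sorted`:

```python
# Sort words by length using a lambda as the key function
words = ["pear", "fig", "banana"]
print(sorted(words, key=lambda w: len(w)))  # ['fig', 'pear', 'banana']
```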
## Documenting Functions

We can document our functions using documentation strings (docstrings).

### Example with Docstrings:

```python
def sumar(a, b):
    """
    This function adds two numbers and returns the result.

    Parameters:
    a (int): The first number.
    b (int): The second number.

    Returns:
    int: The sum of a and b.
    """
    return a + b
```

We can access a function's docstring through the `__doc__` attribute.

```python
print(sumar.__doc__)
```
## Higher-Order Functions

Higher-order functions are functions that take other functions as arguments or return functions as their result.

### Example of a Higher-Order Function:

```python
def aplicar_operacion(funcion, a, b):
    return funcion(a, b)

def multiplicar(x, y):
    return x * y

resultado = aplicar_operacion(multiplicar, 5, 3)
print(resultado)  # This prints "15"
```
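A higher-order function can also go the other way and return a function. A minimal sketch (the names are illustrative):

```python
def make_multiplier(factor):
    """Return a new function that multiplies its argument by factor."""
    def multiplier(x):
        return x * factor
    return multiplier

double = make_multiplier(2)
print(double(5))  # This prints "10"
```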
## Conclusion

Functions are a fundamental tool in Python for writing modular, reusable code. Understanding how to define, call, and use functions is essential for anyone programming in Python. With this foundation, you can start writing your own functions and explore more advanced concepts such as recursion, higher-order functions, and much more.

---

I hope this article helped you understand functions in Python better. Happy coding!

 | abrahanmaigua |
1,880,838 | From sticks and levers to worlds and chasms | Technology, and now AI, are modern equivalents of the Arquimedes lever for your mind. If we want to... | 0 | 2024-06-07T21:13:41 | https://dev.to/leonardoventurini/from-sticks-and-levers-to-worlds-and-chasms-egf | ai, webdev, machinelearning, datascience | Technology, and now AI, are modern equivalents of the Arquimedes lever for your mind. If we want to accomplish more as humans we have think on how to extend our innate capabilities.
We have been doing exactly that for thousands of years, from when we created our first handheld tool, to when we started creating the first machines to do work for us.
Today we have programmers, this rare subspecies of Homo sapiens who create incredible things with their hands and minds, by writing scripts and instructions for machines to follow and execute.
It's amazing how many abstraction levels we have gone through in the last centuries alone. Now we face an unprecedented phase of human amplification with AI.
There is a big dichotomy though: Will AI be an excuse for the dumbification of the Homo sapiens, or will it really help us achieve greater intellectual feats by sharpening the human mind?
Feels like it will be both, as it has always been. We already have the entirety of our shared knowledge in our pocket and very few use it well. It will not be different with AI.
However, we need to be careful. AI will not just widen the divide between the creative producers and consumers in our world; it can dig an uncrossable chasm. It can separate, it can even create a completely new species of humans by genetic manipulation and amplification.
What is too much? Can we stop ourselves from doing it? Or the mere possibility of others doing it will cause us to have to do it _first_? Will we be forced to transcend our human frailty by our very own nature?
With that comes diversification, it for sure will create angels, but also demons.
Perhaps that's inevitable. There are many other things we need to worry, like the Earth exhausting its resources, and even our sun dying. What if we can create whole worlds just for us? What if we can rule over the universal laws of creation?
The future is bright, hopefully not too bright. | leonardoventurini |
1,878,563 | Docker Mastery: A Comprehensive Guide for Beginners and Pros | Docker is a powerful platform that simplifies the creation, deployment, and management of... | 0 | 2024-06-07T21:12:20 | https://dev.to/theyasirr/docker-mastery-a-comprehensive-guide-for-beginners-and-pros-2p18 | docker, webdev, devops, beginners | Docker is a powerful platform that simplifies the creation, deployment, and management of applications within lightweight, portable containers. It allows developers to package applications and their dependencies into a standardized unit for seamless development and deployment. Docker enhances efficiency, scalability, and collaboration across different environments, making it an essential tool for modern software development and DevOps practices.
We'll delve into every aspect of Docker, from installation and configuration to mastering images, storage, networking, and security.
## Installation and configuration
Basic guides for installing Docker Community Edition (CE) on CentOS and Ubuntu are given below.
**Install Docker CE on CentOS**
- Install the required packages:
`sudo yum install -y yum-utils device-mapper-persistent-data lvm2`
- Add Docker CE yum repository:
`sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo`
- Install the Docker CE packages:
`sudo yum install -y docker-ce-18.09.5 docker-ce-cli-18.09.5 containerd.io`
- Start and enable the Docker service:

```
sudo systemctl start docker
sudo systemctl enable docker
```
Add the user to the docker group to grant the user permission to run Docker commands. The change takes effect at the user's next login.

`sudo usermod -a -G docker <user>`
**Installing Docker CE on Ubuntu**
- Install the required packages:
```
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
```
- Add the Docker repo's GNU Privacy Guard (GPG) key:
`curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -`
- Add the Docker Ubuntu repository:
`sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"`
- Install packages:
`sudo apt-get install -y docker-ce=5:18.09.5~3-0~ubuntu-bionic docker-ce-cli=5:18.09.5~3-0~ubuntu-bionic containerd.io`
Add the user to the docker group to grant the user permission to run Docker commands. The change takes effect at the user's next login.

`sudo usermod -a -G docker <user>`
**Selecting a storage driver**
A storage driver is a pluggable driver that handles internal storage for containers. The default driver for CentOS and Ubuntu systems is overlay2.
To determine the current storage driver:
`docker info | grep "Storage"`
One way to select a different storage driver is to pass the `--storage-driver` flag to the Docker daemon. The recommended method, however, is to set it in the daemon config file.
- Create or edit the Daemon config file:
`sudo vi /etc/docker/daemon.json`
- Add the storage driver value:
`"storage-driver": "overlay2"`
Remember to restart Docker after any changes, and then check the status.
```
sudo systemctl restart docker
sudo systemctl status docker
```
**Running a container**
`docker run IMAGE[:TAG] [COMMAND] [ARGS]`
`IMAGE`: Specifies the image to run a container.
`COMMAND and ARGS`: Run a command inside the container.
`TAG`: Specifies the image tag or version
`-d`: Runs the container in detached mode.
`--name NAME`: Gives the container a specified name instead of the usual randomly assigned name.
`--restart RESTART`: Specifies when Docker should automatically restart the container.
- `no (default)`: Never restart the container.
- `on-failure`: Only if the container fails (exits with a non-zero exit code).
- `always`: Always restart the container whether it succeeds or fails.
- `unless-stopped`: Always restart the container whether it succeeds or fails, and on daemon startup unless the container is manually stopped.
`-p HOST_PORT:CONTAINER_PORT`: Publishes a container's port. The HOST_PORT listens on the host machine, and traffic to that port is mapped to the CONTAINER_PORT inside the container.
`--memory MEMORY`: Set a hard limit on memory usage.
`--memory-reservation MEMORY`: Set a soft limit on memory usage.
`docker run -d --name nginx --restart unless-stopped -p 8080:80 --memory 500M --memory-reservation 256M nginx:latest`
Some of the commands for managing running containers are:
`docker ps`: List running containers.
`docker ps -a`: List all containers, including stopped containers.
`docker container stop [alias: docker stop]`: Stop a running container.
`docker container start [alias: docker start]`: Start a stopped container.
`docker container rm [alias: docker rm]`: Delete a container (must be stopped first)
**Upgrading the Docker Engine**
Stop the Docker service:
`sudo systemctl stop docker`
Install the required version of docker-ce and docker-ce-cli:
`sudo apt-get install -y docker-ce=<new version> docker-ce-cli=<new version>`
Verify the current version
`docker version`
## Image creation, management, and registry
An image is an executable package containing all the software needed to run a container.
Run a container using an image with:
`docker run IMAGE`
Download an image with:
```
docker pull IMAGE
docker image pull IMAGE
```
Images and containers use a layered file system. Each layer contains only the differences from the previous layer.
View file system layers in an image with:
`docker image history IMAGE`
A Dockerfile is a file that defines a series of directives and is used to build an image.
```dockerfile
# Use the official Nginx base image
FROM nginx:latest
# Set an environment variable
ENV MY_VAR=my_value
# Copy custom configuration file to container
COPY nginx.conf /etc/nginx/nginx.conf
# Run some commands during the build process
RUN apt-get update && apt-get install -y curl
# Expose port 80 for incoming traffic
EXPOSE 80
# Start Nginx server when the container starts
CMD ["nginx", "-g", "daemon off;"]
```
Build an image:
`docker build -t TAG_NAME DOCKERFILE_LOCATION`
Dockerfile directives:
`FROM`: Specifies the base image to use for the Docker image being built. It defines the starting point for the image and can be any valid image available on Docker Hub or a private registry.
`ENV`: Sets environment variables within the image. These variables are accessible during the build process and when the container is running.
`COPY or ADD`: Copies files and directories from the build context (the directory where the Dockerfile is located) into the image. COPY is generally preferred for simple file copying, while ADD supports additional features such as unpacking archives.
`RUN`: Executes commands during the build process. You can use RUN to install dependencies, run scripts, or perform any other necessary tasks.
`EXPOSE`: Informs Docker that the container will listen on the specified network ports at runtime. It does not publish the ports to the host machine or make the container accessible from outside.
`CMD or ENTRYPOINT`: Specifies the command to run when a container is started from the image. CMD provides default arguments that can be overridden at `docker run` time, while ENTRYPOINT sets a command that run-time arguments are appended to (it can only be replaced with the `--entrypoint` flag).
`WORKDIR`: Sets the working directory for any subsequent RUN, CMD, ENTRYPOINT, COPY, or ADD instructions.
`STOPSIGNAL`: Sets a custom signal that will be used to stop the container process.
`HEALTHCHECK`: Sets a command that will be used by the Docker daemon to check whether the container is healthy
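The directives not exercised by the Nginx example above can be combined in one place. A hedged sketch (the health check assumes `curl` is available inside the image, and paths are illustrative):

```dockerfile
FROM nginx:latest

# Subsequent relative paths resolve from here
WORKDIR /usr/share/nginx/html

# Nginx shuts down gracefully on SIGQUIT
STOPSIGNAL SIGQUIT

# Mark the container unhealthy if the front page stops responding
HEALTHCHECK --interval=30s --timeout=3s \
  CMD curl -f http://localhost/ || exit 1

# ENTRYPOINT fixes the executable; CMD supplies default, overridable arguments
ENTRYPOINT ["nginx"]
CMD ["-g", "daemon off;"]
```

Arguments given after the image name in `docker run` replace the CMD defaults, while the ENTRYPOINT executable stays fixed.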
A multi-stage build in a Dockerfile is a technique used to create more efficient and smaller Docker images. It involves defining multiple stages within the Dockerfile, each with its own set of instructions and dependencies.
An example Dockerfile containing a multi-stage build definition is:
```dockerfile
# Build stage
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /app
# Copy and restore project dependencies
COPY *.csproj .
RUN dotnet restore
# Copy the entire project and build
COPY . .
RUN dotnet build -c Release --no-restore
# Publish the application
RUN dotnet publish -c Release -o /app/publish --no-restore
# Runtime stage
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS runtime
WORKDIR /app
COPY --from=build /app/publish .
# Expose the required port
EXPOSE 80
# Set the entry point for the application
ENTRYPOINT ["dotnet", "YourApplication.dll"]
```
**Managing images**
Some key commands for image management are:
List images on the system:
`docker image ls`
List images on the system including intermediate images:
`docker image ls -a`
Get detailed information about an image:
`docker image inspect <IMAGE>`
Delete an image:
```
docker rmi <IMAGE>
docker image rm <IMAGE>
docker image rm -f <IMAGE>
```
An image can only be deleted once no containers or other image tags reference it. Find and delete dangling or unused images with:
`docker image prune`
**Docker registries**
Docker Registry serves as a centralized repository for storing and sharing Docker images. Docker Hub is the default, publicly available registry managed by Docker. By utilizing the registry image, we can set up and manage our own private registry at no cost.
Run a simple registry:
`docker run -d -p 5000:5000 --restart=always --name registry registry:2`
Upload an image to a registry:
`docker push <IMAGE>:<TAG>`
Download an image from a registry:
`docker pull <IMAGE>:<TAG>`
Login to a registry:
`docker login REGISTRY_URL`
There are two authentication methods for connecting to a private registry with an untrusted or self-signed certificate:
**Secure**: This involves adding the registry's public certificate to the /etc/docker/certs.d/ directory.
**Insecure**: This method entails adding the registry to the insecure-registries list in the daemon.json file or passing it to dockerd using the --insecure-registry flag.
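For the insecure method, the corresponding daemon.json fragment might look like the sketch below (the registry host and port are placeholders for your own):

```json
{
  "insecure-registries": ["myregistry.local:5000"]
}
```

Restart the Docker daemon afterwards for the setting to take effect.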
**Storage and volumes**
The storage driver controls how images and containers are stored and managed on your Docker host. Docker supports several storage drivers, using a pluggable architecture.
**overlay2**: Preferred for all Linux distributions
**fuse-overlayfs**: Preferred only for running Rootless Docker on hosts that do not support rootless overlay2 (Ubuntu and Debian 10 do support it)
**vfs**: Intended for testing purposes, and for situations where no copy-on-write filesystem can be used.
**Storage models**
**Filesystem storage:**
- Data is stored in the form of regular files on the host disk
- Efficient use of memory
- Inefficient with write-heavy workloads
- Used by overlay2
**Block Storage:**
- Stores data in blocks using special block storage devices
- Efficient with write-heavy workloads
- Used by btrfs and zfs
**Object Storage:**
- Stores data in an external object-based store
- Applications must be designed to use object-based storage.
- Flexible and scalable.
**Configuring the overlay2 storage driver**
Stop Docker service:
`sudo systemctl stop docker`
Create or edit the Daemon config file:
`sudo vi /etc/docker/daemon.json`
Add/edit the storage driver value:
`"storage-driver": "overlay2"`
Remember to restart Docker after any changes, and then check the status.
`sudo systemctl restart docker
sudo systemctl status docker`
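Putting these steps together, here is a hedged sketch that stages the file locally before installing it. Note that daemon.json must remain a single valid JSON object, so if you already use other options, merge this setting into the existing file rather than overwriting it:

```shell
# Stage a minimal daemon.json in the current directory.
cat > daemon.json <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF
echo "wrote $(pwd)/daemon.json"

# Then, as root (commented out here):
# install -m 0644 daemon.json /etc/docker/daemon.json
# systemctl restart docker
```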
**Docker Volumes**
There are two different types of data mounts on Docker:
**Bind Mount**: Mounts a specific directory on the host to the container. It is useful for sharing configuration files and other data between the container and the host.
**Named Volume**: Mounts a directory to the container, but Docker controls the location of the volume on disk dynamically.
There are different syntaxes for adding bind mounts or volumes to containers:
_-v syntax_
Bind mount: The source begins with a forward slash "/" which makes this a bind mount.
`docker run -v /opt/data:/tmp nginx `
Named volume: The source is just a string, which means this is a volume. It will be automatically created if no volume exists with the provided name.
`docker run -v my-vol:/tmp nginx`
_--mount syntax_
Bind mount:
`docker run --mount type=bind,source=/opt/data,destination=/tmp nginx` (with `--mount`, the type defaults to volume, so `type=bind` must be given explicitly)
Named volume:
`docker run --mount source=my-vol,destination=/tmp nginx`
We can mount the same volume to multiple containers, allowing them to share data. We can also create and manage volumes by ourselves without running a container.
Some common and useful commands:
`docker volume create VOLUME`: Creates a volume.
`docker volume ls`: Lists volumes.
`docker volume inspect VOLUME`: Inspects a volume.
`docker volume rm VOLUME`: Deletes a volume.
**Image Cleanup**
Check Docker's disk usage:
`docker system df`
`docker system df -v`
Delete unused or dangling images:
`docker image prune
docker image prune -a`
## Docker networking
The Docker Container Networking Model (CNM) is a conceptual model that describes the components and concepts of Docker networking.
The CNM consists of the following components:
**Sandbox**: An isolated unit containing all networking components associated with a single container.
**Endpoint**: Connects one sandbox to one network.
**Network**: A collection of endpoints that can communicate with each other.
**Network Driver**: A pluggable driver that provides a specific implementation of the CNM.
**IPAM Driver**: Provides IP address management. Allocates and assigns IP addresses.
**Built-In Network Drivers**
**Host**: This driver connects the container directly to the host's networking stack. It provides no isolation between containers or between containers and the host.
`docker run --net host nginx`
**Bridge**: This driver uses virtual bridge interfaces to establish connections between containers running on the same host.
`docker network create --driver bridge my-bridge-net
docker run -d --network my-bridge-net nginx`
**Overlay**: This driver uses a routing mesh to connect containers across multiple Docker hosts, usually in a Docker swarm.
`docker network create --driver overlay my-overlay-net
docker service create --network my-overlay-net nginx`
**MACVLAN**: This driver connects containers directly to the host's network interfaces but uses a special configuration to provide isolation.
`docker network create -d macvlan --subnet 192.168.0.0/24 --gateway 192.168.0.1 -o parent=eth0 my-macvlan-net
docker run -d --net my-macvlan-net nginx`
**None**: This driver provides sandbox isolation, but it does not provide any implementation for networking between containers or between containers and the host.
`docker run --net none -d nginx`
**Creating a Docker Bridge network**
Bridge is the default driver; therefore, any network created without specifying a driver will be a bridge network.
Create a bridge network.
`docker network create my-net `
Run a container on the bridge network.
`docker run -d --network my-net nginx`
By default, containers and services on the same network can communicate with each other simply by using their container or service names. Docker provides DNS resolution on the network that allows this to work.
Supply a network alias to provide an additional name by which a container or service is reached.
`docker run -d --network my-net --network-alias my-nginx-alias nginx`
Some useful commands for interacting with Docker networks:
`docker network ls`: Lists networks.
`docker network inspect NETWORK`: Inspects a network.
`docker network connect NETWORK CONTAINER`: Connects a container to a network.
`docker network disconnect NETWORK CONTAINER`: Disconnects a container from a network.
`docker network rm NETWORK`: Deletes a network.
**Creating a Docker Overlay Network**
Create an overlay network:
`docker network create --driver overlay NETWORK_NAME `
Create a service that uses the network:
`docker service create --network NETWORK_NAME IMAGE`
**Network Troubleshooting**
View container logs:
`docker logs CONTAINER`
View logs for all tasks of a service:
`docker service logs SERVICE`
View Docker daemon logs:
`sudo journalctl -u docker`
We can use the nicolaka/netshoot image to perform network troubleshooting. It comes packaged with a variety of useful networking-related tools. We can inject a container into another container's networking sandbox for troubleshooting purposes.
`docker run -it --network container:CONTAINER_NAME nicolaka/netshoot`
**Configuring Docker to Use External DNS**
Set the system-wide default DNS for Docker containers in daemon.json:
`{
"dns": ["8.8.8.8"]
} `
Set the DNS for an individual container.
`docker run --dns 8.8.4.4 IMAGE`
## Security
**Signing Images and Enabling Docker Content Trust**
Docker Content Trust (DCT) is a feature that allows us to sign images and verify signatures before running them. Enable Docker Content Trust by setting an environment variable:
`DOCKER_CONTENT_TRUST=1`
With Docker Content Trust enabled, the system will not run images that are unsigned or whose signatures are invalid.
Sign and push an image with:
`docker trust sign`
With DOCKER_CONTENT_TRUST=1, docker push automatically signs the image before pushing it.
**Default Docker Engine Security**
Basic Docker security concepts:
Docker uses namespaces to isolate container processes from one another and the host. This prevents an attacker from affecting or gaining control of other containers or the host if they manage to gain control of one container.
The Docker daemon must run with root access, so be aware of this before allowing anyone to interact with it: access to the daemon could be used to gain control of the entire host.
Docker leverages Linux capabilities to assign granular permissions to container processes. For example, listening on a low port (below 1024) usually requires a process to run as root, but Docker uses Linux capabilities to allow a container to listen on port 80 without running as root.
**Securing the Docker Daemon HTTP Socket**
Generate a certificate authority and server certificates for the Docker server.
```
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem -subj "/C=US/ST=Texas/L=Keller/O=Linux Academy/OU=Content/CN=$HOSTNAME"
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=$HOSTNAME" -sha256 -new -key server-key.pem -out server.csr
echo subjectAltName = DNS:$HOSTNAME,IP:,IP:127.0.0.1 >> extfile.cnf
echo extendedKeyUsage = serverAuth >> extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf
```
Generate client certificates:
```
openssl genrsa -out key.pem 4096
openssl req -subj '/CN=client' -new -key key.pem -out client.csr
echo extendedKeyUsage = clientAuth > extfile-client.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem \
-CAcreateserial -out cert.pem -extfile extfile-client.cnf
```
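As a sanity check before distributing anything, it is worth confirming that a generated certificate actually chains back to the CA. The throwaway sketch below reproduces the client-certificate flow end to end in a scratch directory; the CA key is left unencrypted (no `-aes256`) purely so it runs non-interactively, which you would not do for a real CA:

```shell
# Self-contained demo in a scratch directory.
cd "$(mktemp -d)"

# Throwaway CA (unencrypted key -- fine for a demo, not for production).
openssl genrsa -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem -subj "/CN=demo-ca"

# Client key + CSR, signed by the CA with the clientAuth extension.
openssl genrsa -out key.pem 4096
openssl req -subj '/CN=client' -new -key key.pem -out client.csr
echo extendedKeyUsage = clientAuth > extfile-client.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out cert.pem -extfile extfile-client.cnf

# Verify the chain -- should print: cert.pem: OK
openssl verify -CAfile ca.pem cert.pem
```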
Set appropriate permissions on the certificate files:
`chmod -v 0400 ca-key.pem key.pem server-key.pem
chmod -v 0444 ca.pem server-cert.pem cert.pem`
Configure the Docker host to use tlsverify mode with the certificates created earlier:
```
sudo vi /etc/docker/daemon.json
{
"tlsverify": true,
"tlscacert": "/home/user/ca.pem",
"tlscert": "/home/user/server-cert.pem",
"tlskey": "/home/user/server-key.pem"
}
```
Edit the Docker service file, look for the line that begins with ExecStart, and change the -H option so the daemon listens on TCP port 2376:
`sudo vi /lib/systemd/system/docker.service`
`ExecStart=/usr/bin/dockerd -H=0.0.0.0:2376 --containerd=/run/containerd/containerd.sock`
`sudo systemctl daemon-reload`
`sudo systemctl restart docker`
Copy the CA cert and client certificate files to the client machine.
On the client machine, configure the client to connect to the remote Docker daemon securely:
`mkdir -pv ~/.docker `
`cp -v {ca,cert,key}.pem ~/.docker`
`export DOCKER_HOST=tcp://:2376 DOCKER_TLS_VERIFY=1`
Test the connection:
`docker version`
## Conclusion
In conclusion, mastering Docker transforms your development workflow by streamlining installation, configuration, image management, storage, networking, and security. This guide equips you with essential knowledge and practical skills, enabling you to build, ship, and run applications efficiently. Embrace Docker's power to elevate your container management to the next level. | theyasirr |
1,880,836 | Security news weekly round-up - 7th June 2024 | Weekly review of top security news between May 31, 2024, and June 7, 2024 | 6,540 | 2024-06-07T21:12:10 | https://dev.to/ziizium/security-news-weekly-round-up-7th-june-2024-84b | security | ---
title: Security news weekly round-up - 7th June 2024
published: true
description: Weekly review of top security news between May 31, 2024, and June 7, 2024
tags: security
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/0jupjut8w3h9mjwm8m57.jpg
series: Security news weekly round-up
---
## __Introduction__
Welcome everyone. In this edition of our security news review, we'll cover articles that are about the following:
1. Malware
2. Online scams and extortion
Let's begin.
<hr/>
## [Beware: Fake Browser Updates Deliver BitRAT and Lumma Stealer Malware](https://thehackernews.com/2024/06/beware-fake-browser-updates-deliver.html)
This article shows the level that threat actors are willing to go to to compromise your computer system. So, be careful, and only download updates from the official vendor's website. Speaking of the latter, always double-check the address bar before clicking on the supposed download link.
The following excerpt from the article is what the threat actors hope to achieve from this:
> BitRAT is a feature-rich RAT that allows attackers to harvest data, mine cryptocurrency, download more binaries, and remotely commandeer the infected hosts. Lumma Stealer, a commodity stealer malware available for $250 to $1,000 per month since August 2022, offers the ability to capture information from web browsers, crypto wallets, and other sensitive details.
## [Researchers Show How Malware Could Steal Windows Recall Data](https://www.securityweek.com/researchers-show-how-malware-could-steal-windows-recall-data/)
The first time I saw the headlines about the Windows Recall feature, the only thing I thought about was the privacy implications if the data got into the wrong hands. What's more, it could make the job of malware authors much easier. I mean, everything the user is doing on the system in a single database? Write malware that can grab that and it could be a horror show for whoever is affected.
Luckily, [Microsoft has bowed to pressure and the Windows Recall will now be off by default](https://www.securityweek.com/microsoft-bows-to-public-pressure-disables-controversial-windows-recall-by-default/). Still, the excerpt below highlights the job of two researchers and how they manage to get the data captured by Windows Recall.
> Researcher Marc-André Moreau [showed](https://x.com/awakecoding/status/1797724492812431677) how a remote desktop manager password collected by Recall can easily be recovered from a local unencrypted SQLite database, making it easy for information-stealing malware to obtain.
>
> Another cybersecurity expert, Alexander Hagenah, has made available an open source tool, named [TotalRecall](https://github.com/xaitax/TotalRecall), that can easily extract and display data from the Recall database.
## [The job hunter’s guide: Separating genuine offers from scams](https://www.welivesecurity.com/en/scams/the-job-hunters-guide-separating-genuine-offers-from-scams/)
If it's too good to be true, you'll lose nothing by walking away. That's candid advice that I'll give myself and anyone else who's hunting for a job. Moreover, as discussed in the article, be wary of what you post on your social media accounts, especially LinkedIn.
Here is an excerpt from the article:
> As outlined in a previous WeLiveSecurity blog by Daniel Cunha Barbosa, people often reveal too much about themselves online, especially on sites such as LinkedIn, which serves both as a professional social media service and as a job board. This can make it easier for crooks to harvest data – be it by purchasing leaked account credentials or by doing a bit of web scraping.
## [New Gitloker attacks wipe GitHub repos in extortion scheme](https://www.bleepingcomputer.com/news/security/new-gitloker-attacks-wipe-github-repos-in-extortion-scheme/)
Try your best and back up your GitHub repository. This ensures that you are safe from this type of attack. What's more, the perpetrator behind this ultimately asks the victim to communicate via Telegram.
No excerpt can capture the essence of the article, so, have fun reading.
## __Credits__
Cover photo by [Debby Hudson on Unsplash](https://unsplash.com/@hudsoncrafted).
<hr>
That's it for this week, and I'll see you next time. | ziizium |
1,880,837 | Ever felt the pain of not mastering something immediately? | Sometimes, when we want to start a new project, or maybe learn a new skill or language, we get caught... | 0 | 2024-06-07T21:11:59 | https://dev.to/hmsdev/ever-felt-the-pain-of-not-mastering-something-immediately-4574 | productivity, improvement, yougotthis, fridaythinking | Sometimes, when we want to start a new project, or maybe learn a new skill or language, we get caught up in the minute details or even get frustrated when we don’t grasp something entirely right away. (Hello!)
We have to remind ourselves that these big leaps and huge jumps in improvement that might be visible on the outside come with a lot of long days and nights of practice, hard work, and repetition.
Embracing continuous improvement over seeking delayed perfection acknowledges the value of progress and iteration!
When you’re constantly striving for perfection, you can face prolonged timelines (and possibly burn out on the thing you were so excited to complete), while a commitment to continuous improvement fosters adaptability and allows you to learn through practical experiences!
Remember, growth flourishes in the garden of ongoing development, not in the waiting room of flawless outcomes.
Happy Friday! | hmsdev |
1,880,825 | Creating a Resource Group in Microsoft Azure | Creating a Resource Group in Microsoft Azure In this tutorial, we'll walk through the... | 0 | 2024-06-07T21:11:57 | https://dev.to/jimiog/creating-a-resource-group-in-microsoft-azure-4hp3 | azure, cloud | # Creating a Resource Group in Microsoft Azure
In this tutorial, we'll walk through the steps of creating a Resource Group on Microsoft Azure, providing a structured approach to manage related applications within the Azure environment.
## Prerequisites
Before proceeding, ensure you have an active Azure account and have logged into the Azure portal.
## Step 1: Navigate to Resource Groups
- Click on the Search Bar located at the top of the Azure portal.

- Type "Resource Groups" and select the corresponding option from the dropdown menu. Alternatively, you can find "Resource Groups" under the Azure Services section.

## Step 2: Create a Resource Group
- Click on the "Create" button either under the Resource Group title or in the center of the screen.

## Step 3: Basics
- Choose your subscription tier.
- Define a name for your resource group.
- Select a region for deployment, ideally one closest to your location.

- Optionally, add tags to better organize and monitor your resources. For this demonstration, we'll skip this step.

- Click on "Review + Create" at the top.
## Step 4: Review and Create
- Wait for the validation process to complete.
- Once validation is successful, click the "Create" button at the bottom of the screen.

## Step 5: Navigate to the Resource Group
- After the Resource Group is successfully created, either click on the "Go To Resource" option in the pop-up menu or select the name of the Resource Group from the dropdown menu in the center of the screen.

Congratulations! You have successfully created a Resource Group using Microsoft Azure.
## Step 6: Clean Up
To avoid any unnecessary charges, follow these steps to delete the Resource Group:
- Click on the "Delete Resource Group" button located in the menu beside the local search bar.

- Enter the name of your Resource Group at the bottom, or copy and paste it. Then click Delete.

- Click on the "Delete" button on the pop-up window to confirm deletion.
 | jimiog |
1,880,835 | HIRE A HACKER TO FIND AND RECOVER YOUR STOLEN BTC/ETH/USDT/NFT'S AND ALL TYPES OF DIGITAL ASSETS | HIRE A HACKER TO FIND AND RECOVER YOUR STOLEN BTC/ETH/USDT/NFT'S AND ALL TYPES OF DIGITAL ASSETS... | 0 | 2024-06-07T21:09:45 | https://dev.to/nancy_rosales_3f87b540233/hire-a-hacker-to-find-and-recover-your-stolen-btcethusdtnfts-and-all-types-of-digital-assets-4ec7 | webdev, tutorial, productivity, news |

HIRE A HACKER TO FIND AND RECOVER YOUR STOLEN BTC/ETH/USDT/NFT'S AND ALL TYPES OF DIGITAL ASSETS
Someone I met online scammed me out of approximately $367,000 on a fictitious investment proposal. After I started looking for legal assistance to get my money back, I found a number of testimonials about WEB GENIE RECOVERY on www.webgenierecovery.com. I contacted them with all the information I needed, and it took the specialists around 72 hours to find and assist with getting my money back. I am quite relieved, and I hope that this will assist many others who have fallen prey to these fraudulent internet investment con artists. I heartily urge using the expert services to help with a quick and effective recuperation. Please get in touch with them at webgenierecoverys@proton.me.
webgenierecovery@outlook.com
via WhatsApp (918) 809-0113
Telegram: @WEBGENIERECOVERY | nancy_rosales_3f87b540233 |
1,880,833 | How to become an iOS developer and start your own business | Kirill, 33 years old, iOS developer, never worked as a programmer, income from programming: 0. So... | 0 | 2024-06-07T21:08:47 | https://dev.to/bitb2112/how-to-become-an-ios-developer-and-start-your-own-business-251j | vpn, ios, iphone, ipad | Kirill, 33 years old, iOS developer, never worked as a programmer, income from programming: 0.
So why read this message?
I created my own VPN application for iOS – both backend and frontend. Currently, the app is available on the App Store, translated into 53 languages, and has 127 ratings worldwide.
I want to tell you about a different way to earn money in IT:
- without experience in IT (and working in a team)
- without despised (by me) interviews
Let's start.
Background:
I worked and am still working in a company not related to programming. By the end of my 29th year, I changed jobs, but suddenly realized that I didn't want to spend my life on jobs that irritated me and decided to try myself in IT. I almost immediately realized that I wanted to study iOS -> Swift because the iPhone is always at hand, so developing applications for myself seemed like a cool idea.
In 3.5 years of studying to become an iOS developer (with an 11-month break), I encountered 4 mentors, the last of whom was Anton Nazarov. His sale of "successful success" with the response "for real" seemed convincing to me. For 8 months, he helped me study UIKit, and we wrote a test app together. In my opinion, this experience became a decisive factor in my further path, for which I am immensely grateful to him.
After finishing my lessons with the mentor, I started looking for my first interview. After composing a resume (together with Anton, in advance), I spent 3 days looking for a job, sending around 500 applications. Guess how many interview invitations I received? Correct, 0. After that, I decided, "screw it, I'll come up with a project and implement it myself." This is where the most interesting part began.
From idea to today.
- In 5 days, I chose the theme of the application - a VPN app based on the modern WireGuard protocol.
- In 1 month, I was able to launch a WireGuard VPN tunnel on my physical device, i.e., the app worked as a full-fledged VPN.
- In 3 months, the backend on Swift with a database and the application itself (client part/front end) were ready.
- In 5 months, I managed to open a developer account as an individual (in Russia), but it turned out that publishing VPNs is ONLY available to legal entities. I had to "find" a legal entity and open a developer account on it. After that, the first version of the app was published on the App Store.
- In 7 months, I talked to one of my mentors (another mentor, not from Russia), and he suggested I change the design. The design was indeed terrible; it’s much better now, though still far from ideal.
And so, a year has passed.
The quality of the app has grown significantly. Servers have become much better and faster, the app is translated into 53 languages, and obfuscation (masking from blocks) has been implemented, which works excellently even with mobile operators. In recent months, I haven't received any negative reviews (they can be sent directly in the app under "Problem?").
Currently, a new company (foreign) has been registered to enable in-app purchases from Apple, but until this is done, the app is completely free.
My workday.
07:00 - woke up, took a bath, ate, took the child to kindergarten
09:30 - meeting with a colleague at work, which means work has begun
15:00 - 18:00 - pick up the child from kindergarten, the main job ends, and I can finally start my favorite job
19:00 - by this time, I've usually eaten and rested for an hour, then I start thinking about what to do today, what I feel like doing at the moment. It can be anything:
- finding new countries for VPN (including analyzing censorship in each country)
- coming up with new features (new function, button, adding servers)
- planning a new design
- fixing bugs
- analyzing countries with high censorship to translate the app into those languages
- optimizing the iOS app (ASO) in the App Store (like SEO for the App Store)
- creating a promotional video for VPN
- writing an article (like this one) to expand the app's audience
- responding to user reviews/problems (rarely)
- legal issues related to the company, bank account, developer account, server payments, etc.
- setting up servers/writing Bash scripts together with "chat" for quick deployment
00:00 - finish the "main" work for/about the app and go to rest (e.g., read Habr, ask ChatGPT questions)
01:00 - go to bed
Why did I choose VPN?
Surprisingly, creating a VPN application turned out to be much easier than I imagined, especially when you have an amazing assistant-mentor ChatGPT Plus.
How would I recommend starting your path in iOS development?
Step 1: Vasily Usov's book (part 1, didn't like the second part) + all the free courses on the internet - study in parallel
Step 2: Mentor, mentor, and mentor again. Do a test project with them on UIKit or even just SwiftUI. Ideally, the mentor should be a senior (up to 20k/month, no more).
Step 3: Get a paid subscription to ChatGPT - this technology is like the internet and bitcoin - the future is behind them. The chat will become your best friend-mentor, cooler than Google.
In 1-2 years of smooth learning (combined with your main job), you can go for interviews or come up with your startup/product, as I did.
Main conclusions of the training:
- I'm not stupid - studying is really EXTREMELY hard (daily psychological struggle with yourself)
- Without a mentor, it's impossible to understand some things (everyone has their own)
- While making the VPN, I was also looking for interviews/jobs - I haven't encountered a more nasty market (and I bought cars from unofficial dealers)
- First, you need to understand the basics (the language itself + how to make a UI + OOP) - you will understand everything else from articles and ChatGPT Plus (without a mentor anymore)
- Find an app in the App Store that you would like to make, find this app on Sensor Tower and see how much it brings the authors monthly - be amazed - start making your version of this app.
- While writing the app, I thought that in the process, I could get a job as a programmer, but that didn’t happen. Now that I've made my product from 0 to 1, I ask myself: "Why should I sell my time working on other people's products, given that I have a main job (as insurance), now I'm my own boss, and I have an app in the App Store?"
- While I was striving to work for someone else, I didn't notice how I created my startup.
Advertising my app.
iVPN is a VPN application for iPhone, iPad, and Mac (with M-series processors).
https://apps.apple.com/ru/app/ivpn/id6469724902
The app requires no registration, no ads, and is currently completely free. In the future, I plan to charge $2/month. Special thanks to everyone who read this article and to those who will rate it 5 stars in the App Store 🙂 | bitb2112 |
1,880,832 | Understanding Functions in Python | Functions are a fundamental building block in Python programming. They allow you to encapsulate code... | 0 | 2024-06-07T21:08:16 | https://dev.to/ayas_tech_2b0560ee159e661/understanding-functions-in-python-21me | Functions are a fundamental building block in Python programming. They allow you to encapsulate code into reusable blocks, making your code more modular, maintainable, and easier to understand.
## **Types of Functions in Python**
**1. User-Defined Function**
A simple function defined by the user using the def keyword.
```
def greet(name):
    print(f"Hello, {name}!")

greet("John")  # Output: Hello, John!
```
**2. Built-in Functions**
Python comes with several built-in functions that are always available. For example, len() is a built-in function that returns the length of an object.
```
# Example usage of built-in function len()
my_list = [1, 2, 3, 4, 5]
print(len(my_list)) # Output: 5
```
More about built-in functions
```
my_list = [1, 2, 3, 4, 5]
print(sum(my_list)) # Output: 15
my_list = [5, 2, 3, 1, 4]
print(sorted(my_list)) # Output: [1, 2, 3, 4, 5]
my_list = [1, 2, 3, 4, 5]
print(list(reversed(my_list))) # Output: [5, 4, 3, 2, 1]
```
**3. Anonymous Functions (Lambda Functions)**
These are small, unnamed functions defined using the lambda keyword.
```
# A lambda function to add two numbers
add = lambda x, y: x + y
# Example usage
result = add(5, 3)
print(result) # Output: 8
```
**4. Higher-order Functions**
These are functions that take other functions as arguments or return them as results. Examples include map(), filter(), and reduce().
```
numbers = [1, 2, 3, 4, 5]
squared_numbers = map(lambda x: x ** 2, numbers)
# Example usage
print(list(squared_numbers)) # Output: [1, 4, 9, 16, 25]
```
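`filter()` and `reduce()` follow the same pattern: each takes a function as its first argument. A quick sketch (`reduce` lives in the `functools` module):

```python
from functools import reduce

numbers = [1, 2, 3, 4, 5]

# filter() keeps only the elements for which the predicate returns True
evens = list(filter(lambda x: x % 2 == 0, numbers))
print(evens)  # Output: [2, 4]

# reduce() folds the sequence into a single value, here a running product
product = reduce(lambda acc, x: acc * x, numbers, 1)
print(product)  # Output: 120
```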
**5. Generator Functions**
These use the yield keyword to return an iterator.
```
def countdown(n):
    while n > 0:
        yield n
        n -= 1

# Example usage
for number in countdown(5):
    print(number)
```
**Conclusion**
Functions are a fundamental aspect of Python, allowing you to organize code into reusable blocks. Thus, understanding the various types of functions and their use cases can greatly improve your programming skills and code organization.
| ayas_tech_2b0560ee159e661 | |
1,880,831 | Now connect SQL with no-code as a data source | Softr just introduced its integration with Relational databases which includes MySQL, PostgreSQL, SQL... | 0 | 2024-06-07T21:04:50 | https://dev.to/usama4745/now-connect-sql-with-no-code-as-a-data-source-5d4k | nocode, sql, postgres, mariadb | Softr just introduced its integration with Relational databases which includes MySQL, PostgreSQL, SQL Server, and MariaDB.
Softr is a tool that lets you create apps without writing code. It can connect to different sources of data, such as spreadsheets or databases, and use that data to build custom apps.
Users can build sophisticated no-code apps using huge SQL databases and can scale indefinitely without hitting records limit
Here is the official blog containing announcement [Softr now integrates with SQL Databases](https://softrplatformsgmbh.partnerlinks.io/sqlblog)
{% embed https://www.youtube.com/watch?v=QVWGEgOS83o %}
SQL databases store complex data that usually requires advanced technical skills to access. But with Softr, you can easily connect to your SQL databases and build powerful apps that use that data, all with just a few clicks
You can also connect your SQL data source just sign up for a free plan of Softr and give it a try [LINK ](https://softrplatformsgmbh.partnerlinks.io/homesoftr)
| usama4745 |
1,880,827 | Sensitive Information disclosure via Spring Boot Default Paths | Reward: $250 Program: Private Overview of the Vulnerability Disclosure of secrets for a publicly... | 0 | 2024-06-07T20:57:22 | https://dev.to/c4ng4c31r0/sensitive-information-disclosure-via-spring-boot-default-paths-h78 | Reward: $250
Program: Private
**Overview of the Vulnerability**
Disclosure of secrets for a publicly available asset occurs when sensitive data is not behind an authorization barrier. When this information is exposed it can place sensitive data, such as secrets, at risk. This can occur due to a variety of scenarios such as not encrypting data, secrets committed to GitHub within public repositories, or exposed external assets. Disclosure of secrets for publicly available assets could be leveraged by an attacker to gain privileged access to the application or the environment where the application is hosted. From here, an attacker could execute functions under the guise of an Administrator user, depending on the permissions level they are able to access.
**Business Impact**
Disclosure of secrets for a publicly available asset can lead to indirect financial loss due to an attacker accessing, deleting, or modifying data from within the application. Reputational damage for the business can also occur via the impact to customers’ trust that these events create. The severity of the impact to the business is dependent on the sensitivity of the data being stored in, and transmitted by the application.
Spring Boot paths are exposing critical information about c4ng4c31r0[.]com, such as internal paths and environment configuration.
Spring Boot paths found:
https://c4ng4c31r0[.]com/api/maintenance/actuator/heapdump
https://c4ng4c31r0[.]com/api/maintenance/actuator
https://c4ng4c31r0[.]com/api/maintenance/actuator/beans
https://c4ng4c31r0[.]com/api/maintenance/actuator/caches
https://c4ng4c31r0[.]com/api/maintenance/actuator/conditions
https://c4ng4c31r0[.]com/api/maintenance/actuator/configprops
https://c4ng4c31r0[.]com/api/maintenance/actuator/env
https://c4ng4c31r0[.]com/api/maintenance/actuator/env/home
https://c4ng4c31r0[.]com/api/maintenance/actuator/env/lang
https://c4ng4c31r0[.]com/api/maintenance/actuator/env/language
https://c4ng4c31r0[.]com/api/maintenance/actuator/env/path
https://c4ng4c31r0[.]com/api/maintenance/actuator/env/hostname
https://c4ng4c31r0[.]com/api/maintenance/actuator/features
https://c4ng4c31r0[.]com/api/maintenance/actuator/health
https://c4ng4c31r0[.]com/api/maintenance/actuator/info
https://c4ng4c31r0[.]com/api/maintenance/actuator/mappings
https://c4ng4c31r0[.]com/api/maintenance/actuator/metrics
https://c4ng4c31r0[.]com/api/maintenance/actuator/loggers
https://c4ng4c31r0[.]com/api/maintenance/actuator/scheduledtasks
https://c4ng4c31r0[.]com/api/maintenance/actuator/threaddump
Steps to reproduce:
1 - Use the wordlist [https://github.com/emadshanab/DIR-WORDLISTS/blob/main/spring-boot.txt] to perform a brute-force attack on the https://c4ng4c31r0[.]com/api/maintenance/ endpoint.
2 - Note that the heapdump endpoint was identified. Accessing it triggers an automatic download of a binary heap-dump file.
3 - Using VisualVM (https://visualvm.github.io/), open the downloaded file and read its contents, including credentials, in plain text.
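The enumeration step above can be sketched as a small helper that joins wordlist entries onto the base endpoint. This is illustrative only: the hostname and the short path list below are placeholders, and the actual network probing (with a scanner such as ffuf or dirsearch) is left out.

```python
# Minimal sketch: build the candidate Actuator URLs that a brute-force run
# would try under a base endpoint. No requests are sent here.
from urllib.parse import urljoin

# Small subset of the spring-boot wordlist referenced above (assumption:
# a real run would load the full file).
ACTUATOR_PATHS = [
    "actuator",
    "actuator/env",
    "actuator/heapdump",
    "actuator/health",
    "actuator/mappings",
]

def candidate_urls(base: str) -> list[str]:
    """Join each wordlist entry onto the base endpoint."""
    if not base.endswith("/"):
        base += "/"
    return [urljoin(base, path) for path in ACTUATOR_PATHS]

for url in candidate_urls("https://target.example/api/maintenance"):
    print(url)
```

A scanner would then flag any of these URLs that respond with HTTP 200 instead of 401/403.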
**PoC**
Using VisualVM to inspect the heap dump and read plain-text credentials:


Status/Reward:
Resolved!

| c4ng4c31r0 | |
1,880,826 | Migrate from Heroku to AWS: A Best Practices Guide | In an era dominated by cloud solutions, businesses often find themselves at a crossroads when... | 0 | 2024-06-07T20:56:41 | https://dev.to/the_real_zan/migrate-from-heroku-to-aws-a-best-practices-guide-29f | In an era dominated by cloud solutions, businesses often find themselves at a crossroads when choosing the right platform to host their applications. This article explores the key considerations, challenges, and best practices involved in migrating from Heroku to Amazon Web Services (AWS). We compare Heroku and AWS across various dimensions like scalability, ease of use, and cost to highlight why enterprises may prefer the increased flexibility and control AWS offers over Heroku's simplicity. The article also examines specific migration steps like setting up networking, databases, caches, and automation pipelines in AWS as well as common pitfalls with manual migration.
## Understand UI Differences
Transitioning from Heroku’s streamlined interface to the AWS management console can initially be challenging. Heroku offers a more straightforward navigation structure and deployment process, while AWS provides a more intricate console with extensive deployment, monitoring, and scaling options.
The screenshot below shows how streamlined the User Interface can be with “Create New App” in the center of the screen. The various features are consolidated into one user interface, wizard, or menu system.

When it comes to AWS, implementing proper access control and permissions management using AWS Organizations, IAM Identity Center, and IAM roles is essential to maintain security and governance within your AWS environment, but the configurations are more involved.
Familiarizing yourself with these differences and leveraging AWS documentation and training resources can help ease the transition and unlock the full potential of AWS services.
The following shows the difference in UI, where you can see the wide variety of services that AWS provides, with each service having its own UI with various options and features, in contrast to the more streamlined user experience in Heroku.

Some best practices when getting accustomed to the AWS UI include:
- Take advantage of AWS training courses to understand the capabilities of services
- Start small and slowly expand your use of services to manage complexity
- Refer to documentation when exploring new services instead of relying on prior knowledge
- Consider getting certified in key services like EC2, S3, and VPC to cement knowledge
While the intricate AWS interface may seem daunting at first, dedicating time to learn best practices can unlock the full potential of AWS.
## Migrate Networks Effectively
Replicating the network isolation on Heroku to your AWS VPC architecture is crucial for the security of your application.

Here are some best practices to be considered when setting up a VPC architecture in your AWS environment:
- Define subnets, route tables, and security groups that mirror or strengthen the isolation offered by Heroku.
- Segregate resources, such as databases, ECS instances, and ElastiCache Redis instances, into private subnets to prevent direct external access. Allocate public subnets for resources requiring external connectivity.
- Leverage the redundancy of multiple availability zones for fault tolerance.
- Regulate inbound and outbound traffic flow within the VPC using network access control lists (NACLs) and security groups.
- Utilize VPC Flow Logs and AWS Network Firewall to monitor and safeguard network traffic, further increasing your infrastructure’s security.
Some key steps when setting up a VPC include:
- Design a VPC diagram mapping out public, private, database, ElastiCache, and other subnets
- Configure route tables to manage inter-subnet and internet traffic flows
- Set up NACLs and security groups aligned to the VPC diagram
- Launch EC2 instances in subnets based on public vs private segmentation
- Enable VPC Flow Logs to monitor traffic
Properly configuring VPC infrastructure is complex but critical in securing AWS-hosted applications. Referencing AWS best practices and documentation can ease the transition from Heroku’s simplified networking.
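As a small illustration of the planning step above, a subnet layout can be sanity-checked before anything is created in AWS. The CIDR blocks below are made-up examples, not recommended values:

```python
# Rough sanity check for a VPC subnet plan: every subnet must fit inside the
# VPC block, and no two subnets may overlap.
from ipaddress import ip_network
from itertools import combinations

vpc = ip_network("10.0.0.0/16")
subnets = {
    "public-a":  ip_network("10.0.1.0/24"),
    "private-a": ip_network("10.0.2.0/24"),
    "db-a":      ip_network("10.0.3.0/24"),
}

def validate_plan(vpc, subnets):
    for name, net in subnets.items():
        if not net.subnet_of(vpc):
            raise ValueError(f"{name} is outside the VPC CIDR")
    for (n1, a), (n2, b) in combinations(subnets.items(), 2):
        if a.overlaps(b):
            raise ValueError(f"{n1} overlaps {n2}")
    return True

print(validate_plan(vpc, subnets))  # True when the plan is consistent
```

Catching a CIDR collision at diagram time is far cheaper than discovering it after route tables and security groups are already wired up.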
## Migrate the Database
To migrate from the Heroku Database to Amazon RDS, follow these steps:
1. Verify version compatibility with your existing database engine on Heroku.
2. Evaluate your database requirements like storage, memory, and compute needs and choose the appropriate RDS instance type.
3. Follow the AWS tutorial to create a database instance using the RDS management console or APIs.
4. Once the database is set up, leverage the AWS Database Migration Service (DMS) to minimize downtime during data migration. DMS can replicate data changes from the Heroku database to RDS in near real time.
5. Thoroughly test and optimize your RDS instances’ sizes and configurations to match your workload demands.
6. Finally, enable automated backup and database snapshots for disaster recovery.

Some best practices around RDS database migration include:
- Set up staging environments to test migration before production switchover
- Preserve capacity for traffic spikes during the migration to prevent bottlenecks
- Redirect a portion of traffic to RDS before complete switchover to validate
- Monitor database metrics in CloudWatch during each migration stage
- Execute migration during periods of low traffic to minimize impact
While migrating databases involves downtime and complexity, careful planning following AWS best practices can ensure a smooth transition to RDS with no data loss.
After migration, enhance reliability further through multi-AZ deployments, read replicas, and advanced backup/restore capabilities.
## Conclusion
Migrating from Heroku to AWS is a major undertaking, requiring careful planning and execution across networks, databases, automation, monitoring, and more. While Heroku provides simplicity, AWS unlocks scalability, flexibility, and infrastructure control that growing enterprises demand.
This migration guide covered critical considerations like grasping AWS UI complexity, VPC architecture, RDS database migration, cache migration techniques, CI/CD pipeline automation, DNS changes, and CloudWatch monitoring.
Some key takeaways include:
- Leverage AWS training and documentation to unlock the full potential of its extensive capabilities
- Build VPC diagrams aligning isolation needs before implementation
- Choose DMS real-time replication to prevent database downtime
- Implement CodePipeline and CodeDeploy for rapid updates
- Monitor with CloudWatch and audit with CloudTrail across regions
While migrating from Heroku to AWS has its challenges, companies that invest the time and resources required can reap substantial rewards in scale, cost savings, and innovation velocity over the long term.
Read more at https://www.withcoherence.com/post/migrate-from-heroku-to-aws. | the_real_zan | |
1,880,824 | 648. Replace Words | 648. Replace Words Medium In English, we have a concept called root, which can be followed by some... | 27,523 | 2024-06-07T20:52:36 | https://dev.to/mdarifulhaque/648-replace-words-4f20 | php, leetcode, algorithms, programming | 648\. Replace Words
Medium
In English, we have a concept called **root**, which can be followed by some other word to form another longer word - let's call this word a **derivative**. For example, when the root `"help"` is followed by the word `"ful"`, we can form a derivative `"helpful"`.
Given a `dictionary` consisting of many **roots** and a `sentence` consisting of words separated by spaces, replace all the derivatives in the sentence with the **root** forming it. If a derivative can be replaced by more than one **root**, replace it with the **root** that has **the shortest length**.
Return _the `sentence` after the replacement_.
**Example 1:**
- **Input:** dictionary = ["cat","bat","rat"], sentence = "the cattle was rattled by the battery"
- **Output:** "the cat was rat by the bat"
**Example 2:**
- **Input:** dictionary = ["a","b","c"], sentence = "aadsfasf absbs bbab cadsfafs"
- **Output:** "a a b c"
**Constraints:**
- <code>1 <= dictionary.length <= 1000</code>
- <code>1 <= dictionary[i].length <= 100</code>
- <code>dictionary[i]</code> consists of only lower-case letters.
- <code>1 <= sentence.length <= 10<sup>6</sup></code>
- `sentence` consists of only lower-case letters and spaces.
- The number of words in `sentence` is in the range `[1, 1000]`
- The length of each word in `sentence` is in the range `[1, 1000]`
- Every two consecutive words in `sentence` will be separated by exactly one space.
- `sentence` does not have leading or trailing spaces.
**Solution:**
```php
class Solution {
    private $root;

    public function __construct() {
        $this->root = new TrieNode();
    }

    /**
     * @param String[] $dictionary
     * @param String $sentence
     * @return String
     */
    function replaceWords($dictionary, $sentence) {
        // Build a trie containing every root.
        foreach ($dictionary as $word) {
            $this->insert($word);
        }
        // Replace each word with its shortest matching root.
        $ans = '';
        $iss = explode(' ', $sentence);
        foreach ($iss as $s) {
            $ans .= $this->search($s) . ' ';
        }
        return rtrim($ans);
    }

    private function insert($word) {
        $node = $this->root;
        for ($i = 0; $i < strlen($word); $i++) {
            $index = ord($word[$i]) - ord('a');
            if ($node->children[$index] == null) {
                $node->children[$index] = new TrieNode();
            }
            $node = $node->children[$index];
        }
        $node->word = $word;
    }

    private function search($word) {
        $node = $this->root;
        for ($i = 0; $i < strlen($word); $i++) {
            // The first stored root met along the path is the shortest one.
            if ($node->word != null) {
                return $node->word;
            }
            $index = ord($word[$i]) - ord('a');
            if ($node->children[$index] == null) {
                return $word;
            }
            $node = $node->children[$index];
        }
        return $word;
    }
}

class TrieNode {
    public $children;
    public $word;

    public function __construct() {
        $this->children = array_fill(0, 26, null);
        $this->word = null;
    }
}
```
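For comparison, the same trie idea can be sketched in Python (a rough port of the PHP solution above, using nested dicts instead of a TrieNode class):

```python
# Insert every root into a dict-based trie, then for each word walk the trie
# and return the first (shortest) root that prefixes it.
def replace_words(dictionary, sentence):
    trie = {}
    END = "$"  # marks the end of a stored root

    for root in dictionary:
        node = trie
        for ch in root:
            node = node.setdefault(ch, {})
        node[END] = root

    def shortest_root(word):
        node = trie
        for ch in word:
            if END in node:       # a root ends here: shortest match found
                return node[END]
            if ch not in node:    # no root prefixes this word
                return word
            node = node[ch]
        return node.get(END, word)  # word may itself equal a root

    return " ".join(shortest_root(w) for w in sentence.split())

print(replace_words(["cat", "bat", "rat"],
                    "the cattle was rattled by the battery"))
# -> the cat was rat by the bat
```

Both versions run in O(total root length + total sentence length), since each character is visited at most once during insertion and lookup.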
**Contact Links**
- **[LinkedIn](https://www.linkedin.com/in/arifulhaque/)**
- **[GitHub](https://github.com/mah-shamim)**
| mdarifulhaque |
1,880,823 | Less is More: Why You May Not Always Need JavaScript in Your B2B Web Apps | This blog post argues for reducing JavaScript usage in B2B web app front-ends in favor of server-side rendering | 0 | 2024-06-07T20:52:01 | https://dev.to/mariomarroquim/when-less-is-more-why-you-dont-always-need-javascript-in-b2b-ruby-on-rails-web-apps-4ecp | b2b, javascript, nobuild, nojs | ---
title: Less is More: Why You May Not Always Need JavaScript in Your B2B Web Apps
published: true
description: This blog post argues for reducing JavaScript usage in B2B web app front-ends in favor of server-side rendering
tags: b2b, javascript, nobuild, nojs
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-06 14:00 +0000
---
In the bustling world of web development, JavaScript often takes center stage. Its versatility and power are undeniable, making it the go-to choice for adding interactivity and dynamism to applications. However, when it comes to developing B2B (Business-to-Business) applications, it's worth considering a different approach. Surprisingly, you might not always need to rely heavily on JavaScript. Here’s why.
#### Controlled Environments
B2B applications are typically used in controlled environments, such as office settings, where the hardware, software, and network conditions are relatively uniform and predictable. Unlike consumer-facing applications, where developers must account for a wide range of devices and varying internet speeds, B2B applications benefit from a stable and robust infrastructure. This consistency reduces the necessity for JavaScript-driven enhancements aimed at compensating for unpredictable user environments.
#### Simplified Development and Maintenance
JavaScript can add significant complexity to a web application, both in terms of development and maintenance. By minimizing the use of JavaScript, developers can focus more on user-facing features. In B2B applications, users typically prioritize functionality and reliability over flashy, interactive interfaces. The core features and workflows are what matter most. By focusing on delivering robust backend functionalities and ensuring that the application performs its primary tasks efficiently, you meet the essential needs of your users without unnecessary embellishments.
#### One size does not fit all
While JavaScript undoubtedly has its place in web development, it’s not always a necessity for B2B applications. Leveraging the strengths of modern web frameworks in a controlled, reliable environment can lead to simpler, more secure, and maintainable applications. By focusing on core functionalities and server-side rendering, you can deliver robust solutions that meet the specific needs of business users without the added complexity of extensive JavaScript. SPAs (Single Page Applications), for example, are fantastic, but they aren't the best solution for every situation. Traditional server-side rendering can still meet your users' needs perfectly well. Sometimes, less truly is more. | mariomarroquim |
1,880,818 | Automatically Test Your Regex Without Writing a Single C# Line of Code | Introduction If you've ever used regular expressions (regex), you know there's a saying:... | 0 | 2024-06-07T20:41:18 | https://dev.to/dimonsmart/automatically-test-your-regex-without-writing-a-single-c-line-of-code-1k9p | regex, unittest, csharp | ## Introduction
If you've ever used regular expressions (regex), you know there's a saying: "Every problem can be solved with regex. But then you have one more problem."
I hope readers can decide for themselves whether to use regular expressions. However, making sure your regex works correctly can be challenging.
In many development teams, there's always someone who asks, "Have you written unit tests for this regex?"
Developers often add comments above their regex with sample strings that should match or not match. For example:
```csharp
// Matches email addresses
// Example: "test@example.com"
var emailRegex = @"^[\w-\.]+@([\w-]+\.)+[\w-]{2,4}$";
```
However, there’s a hidden problem here: these comments can be misleading. A comment might claim a regex matches an example, but without actual testing, there's no guarantee.
## The Ideal Solution
According to [TRIZ (Theory of Inventive Problem Solving)](https://en.wikipedia.org/wiki/TRIZ), the best solution is one where the system works effortlessly, without additional manual input. In our context, the perfect scenario would be a system that automatically tests your regex patterns without needing you to write explicit unit tests.
## Presenting the Solution: DimonSmart.RegexUnitTester.TestAdapter
The `DimonSmart.RegexUnitTester.TestAdapter` is here to save the day! This tool allows you to automatically check your regex patterns against provided samples using custom attributes. No more manual unit test writing or misleading comments—just straightforward and reliable regex testing.
### How It Works
This library leverages three main attributes to streamline regex testing:
1. **`ShouldMatchAttribute`**: This ensures that your regex correctly matches the expected data.
2. **`ShouldNotMatchAttribute`**: This confirms that your regex does not match unwanted data.
3. **`InfoMatchAttribute`**: This handles ambiguous cases, allowing further analysis without affecting the overall test outcome.
### Code Samples
Here are some examples of how to use these attributes in your code:
```csharp
[ShouldMatch("test@example.com")]
[ShouldNotMatch("test@example")]
public const string EmailRegex = @"^[\w-\.]+@([\w-]+\.)+[\w-]{2,4}$";
[InfoMatch("123-45-6789", "SSN format")]
public const string SSNRegex = @"^\d{3}-\d{2}-\d{4}$";
```
With these attributes, you can clearly define which strings should and shouldn’t match your regex patterns. The `InfoMatchAttribute` is particularly useful for documenting ambiguous cases that need further review.
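The same declarative idea can be sketched outside C# as well. Here is a rough Python analog (not part of the library, just an illustration of the concept) where the expected samples live next to the pattern and are verified in one pass:

```python
# Rough Python analog of the attribute-driven approach: should-match and
# should-not-match samples are declared next to each pattern and checked
# automatically, so the "comments" can never drift out of date.
import re

PATTERNS = [
    {
        "name": "EmailRegex",
        "pattern": r"^[\w\-.]+@([\w-]+\.)+[\w-]{2,4}$",
        "should_match": ["test@example.com"],
        "should_not_match": ["test@example"],
    },
]

def check_all(patterns):
    """Return a list of failure messages; empty means every sample behaved."""
    failures = []
    for spec in patterns:
        rx = re.compile(spec["pattern"])
        for sample in spec["should_match"]:
            if not rx.match(sample):
                failures.append(f"{spec['name']} should match {sample!r}")
        for sample in spec["should_not_match"]:
            if rx.match(sample):
                failures.append(f"{spec['name']} should not match {sample!r}")
    return failures

print(check_all(PATTERNS))  # [] when every declared sample behaves as expected
```

Running this check in CI gives the same guarantee the C# attributes provide: the documented examples are continuously proven, not merely claimed.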
## Conclusion
By using `DimonSmart.RegexUnitTester.TestAdapter`, you can ensure your regex patterns are accurate and reliable without the hassle of writing manual tests. Give it a try and see how it simplifies your development workflow.
I'm the author of this library. If you have any questions or feedback, please don't hesitate to write me a few words about my library. Happy coding!
---
You can find more details on installation and usage in the library’s [GITHUB](https://github.com/DimonSmart/RegexUnitTester). | dimonsmart |
1,880,821 | The Fates of Famous Figures Under the Pressure of Power from the Russian Empire to Our Days | In the history of Russia, many outstanding artists and public figures have faced repression and were... | 0 | 2024-06-07T20:36:45 | https://www.heyvaldemar.com/the-fates-of-famous-figures-under-the-pressure-of-power-from-the-russian-empire-to-our-days/ | discuss, learning, repressions, history | In the history of Russia, many outstanding artists and public figures have faced repression and were forced to emigrate because of their views and creativity, which contradicted the official line of power. In our time, the situation has changed little, and many modern oppositionists and cultural figures continue to face persecution, arrests, and are forced to leave the country. In this article, I have compiled brief biographies of famous personalities who have faced repression and emigration, starting from the times of the Russian Empire and ending with modern Russia. This list is far from complete and, unfortunately, continues to be supplemented with new names.
**Alexander Pushkin** (1799-1837, Russian Empire) – A poet who faced censorship and governmental pressure. His works, including "Ode to Liberty," displeased Emperor Alexander I, leading to his exile in the southern provinces. Although publication opportunities were limited there, he continued his creative work.
**Mikhail Lermontov** (1814-1841, Russian Empire) – A poet whose fate was tragically sealed after a duel and his poem "Death of the Poet," dedicated to Pushkin’s death. His open criticism of authority displeased the tsar, resulting in his exile to the Caucasus, where he eventually died in another duel.
**Fyodor Dostoevsky** (1821-1881, Russian Empire) – A writer sentenced to death for participating in the anti-government Petrashevsky Circle. His sentence was commuted to penal servitude in Siberia, followed by exile and military service, profoundly influencing his work and worldview.
**Leo Tolstoy** (1828-1910, Russian Empire) – A writer celebrated for his literary works and philosophical views, which led to his excommunication from the church. Tolstoy criticized the church and advocated for his ideas on morality and spirituality, resulting in his exclusion from the religious community.
**Ilya Repin** (1844-1930, Russian Empire) – An artist who moved to Finland seeking solitude and tranquility for his art amidst the revolutionary turmoil in Russia. His relocation was also motivated by a desire to avoid direct involvement in the political upheavals of the time.
**Vaslav Nijinsky** (1890-1950, Russian Empire) – A ballet master who left Russia during a time of political instability and revolutionary changes. Emigration proved to be a salvation for his career, though it came with challenges of adapting to new conditions abroad.
**Wassily Kandinsky** (1866-1944, USSR) – An artist who left Russia due to disagreements with Soviet policies on art. His pursuit of abstractionism was recognized and celebrated in the West, where he was able to fully realize his creative potential.
**Dmitry Merezhkovsky** (1866-1941, USSR) – A writer who emigrated after the October Revolution, rejecting Bolshevik power and fearing repression for his monarchist and religious views.
**Zinaida Gippius** (1869-1945, USSR) – A writer and wife of Dmitry Merezhkovsky, she emigrated with him due to their shared disdain for the new Soviet power and fear of repression.
**Ivan Bunin** (1870-1953, USSR) – A writer who emigrated in 1920, openly opposing communism. He became the first Russian to win the Nobel Prize in Literature in 1933.
**Fyodor Chaliapin** (1873-1938, USSR) – An opera singer who emigrated due to restrictions in artistic activity and disagreement with the Soviet government's policies on art.
**Sergei Rachmaninoff** (1873-1943, USSR) – A composer who emigrated after the October Revolution, unwilling to live under the new regime that limited creative freedom and threatened his family.
**Nikolai Berdyaev** (1874-1948, USSR) – A philosopher exiled from Soviet Russia in 1922 aboard the "Philosophers' Ship" along with other intellectuals whose views did not align with Bolshevik ideology.
**Vsevolod Meyerhold** (1874-1940, USSR) – A director who was arrested and killed during Stalin's purges. His innovative approach to theater did not meet the ideological requirements of the authorities.
**Zinaida Reich** (1894-1939, USSR) – An actress and wife of Meyerhold, she was killed during Stalin's purges following the arrest and torture of her husband.
**Teffi (Nadezhda Lokhvitskaya)** (1872-1952, USSR) – A writer who emigrated after the revolution due to her disagreement with the communist government and fears for her life and creative freedom.
**Marc Chagall** (1887-1985, USSR) – An artist who emigrated after 1917 because his artwork did not fit within the confines of socialist realism and was not recognized by the new authority.
**Nikolai Gumilev** (1886-1921, USSR) – A poet executed in the context of political repressions for alleged involvement in a monarchist, anti-Bolshevik conspiracy. His death was a significant blow to Russian literature.
**Anna Akhmatova** (1889-1966, USSR) – A poetess who faced repression and a ban on publications. Her husband and son were arrested, and she was forced to live under constant fear and surveillance.
**Osip Mandelstam** (1891-1938, USSR) – A poet who was arrested and died in detention under harsh conditions and abuses. His work was deemed anti-Soviet, leading to his arrest.
**Sasha Chorny (Alexander Glikberg)** (1880-1932, USSR) – A poet who emigrated due to political pressure and the impossibility of freely publishing his works in Soviet Russia. After emigration, he continued his literary activity, but now abroad.
**Mikhail Chekhov** (1891-1955, Russian Empire) – An actor and director who left the USSR unable to continue his theatrical activity under repressive policies. He moved to the USA, where he became known for his teaching methodologies.
**Igor Stravinsky** (1882-1971, USSR) – A composer who emigrated after the October Revolution, as his music did not meet the requirements of the new authority and did not fit within the bounds of socialist realism.
**Vladimir Nabokov** (1899-1977, USSR) – A writer who emigrated due to revolutionary changes that threatened his safety and creative freedom. Nabokov became known for his works written in English, including "Lolita".
**Philosophers' Ship** (1922) – An event during which the Soviet government expelled more than 160 intellectuals, including Nikolai Berdyaev and Sergei Bulgakov, to rid themselves of dissenters and those whose views did not align with Bolshevik ideology.
**Sergei Yesenin** (1895-1925, USSR) – A poet who took his own life after conflicts with the authorities and due to pressure related to his literary activities and personal life. His death was a tragedy for Russian literature.
**Vladimir Mayakovsky** (1893-1930, USSR) – A poet who took his own life influenced by personal and professional crises, as well as pressure from the authorities. His work did not always align with the official party line, complicating his life and work.
**Nikolai Vavilov** (1887-1943, USSR) – A geneticist-scientist who was arrested and died in detention due to accusations of anti-Soviet activity. His work on plant breeding did not align with the pseudoscientific theories supported by the Soviet leadership.
**Solomon Mikhoels** (1890-1948, USSR) – An actor and director who was killed as part of an anti-Semitic campaign on Stalin's orders. His murder was disguised as a car accident.
**Isaac Babel** (1894-1940, USSR) – A writer who was arrested, tortured, and executed on suspicions of espionage and anti-Soviet activity. His works were banned and removed from libraries.
**Nikolai Zabolotsky** (1903-1958, USSR) – A poet who was sent to a camp for several years for his literary works, which did not meet the ideological standards of Soviet authority.
**Alexander Vvedensky** (1904-1941, USSR) – A poet who died en route to a camp where he was sent for anti-Soviet activity. His innovative poetic experiments did not align with the official literary line of the party.
**Olga Berggolts** (1910-1975, USSR) – A poetess who was beaten and lost a child during interrogations by the NKVD. Despite the repression, she became a symbol of the besieged Leningrad and continued her literary activity.
**Marina Tsvetaeva** (1892-1941, USSR) – A poetess who was exiled and driven to suicide due to the inability to freely publish her works and pressure from the authorities.
**Daniil Kharms** (1905-1942, USSR) – A writer who died of starvation in a psychiatric hospital after being arrested for anti-Soviet activity. His works were banned during his lifetime and published only posthumously.
**Dmitry Likhachev** (1906-1999, USSR) – An art historian who was arrested, exiled, and fired from his job for his scientific and literary research, which did not meet the ideological requirements of the Soviet authority.
**Yevgeny Schwartz** (1896-1958, USSR) – A playwright who faced bans on publications and criticism from the authorities for his satirical works that lampooned bureaucracy and totalitarianism.
**Anti-Fascist Committee** (1940s, USSR) – A committee whose members were executed during Stalin's purges on charges of anti-Soviet activity and espionage.
**Boris Pasternak** (1890-1960, USSR) – A writer who faced persecution and bans on publications for his novel "Doctor Zhivago," which was deemed anti-Soviet. Pasternak was forced to decline the Nobel Prize in Literature under pressure from the authorities.
**Mikhail Bulgakov** (1891-1940, USSR) – A writer who faced censorship and bans. His works, including "The Master and Margarita," were not published during his lifetime and were recognized as anti-Soviet.
**Alexander Solzhenitsyn** (1918-2008, USSR) – A writer who was exiled and later forced to emigrate for his works criticizing the Soviet authority and the Gulag. He was stripped of his Soviet citizenship and lived in emigration until 1994.
**Yuri Lyubimov** (1917-2014, USSR/Russia) – A theatrical director who was stripped of citizenship in 1984 for criticizing the Soviet system. Lyubimov continued his career abroad and returned to Russia only after perestroika.
**Andrei Sakharov** (1921-1989, USSR) – A physicist and human rights advocate who was exiled for his criticism of Soviet policies and his fight for human rights. He was stripped of all awards and titles but continued his human rights work from exile.
**Sergei Dovlatov** (1941-1990, USSR) – A writer who emigrated in 1979 due to the impossibility of publishing his works in the Soviet Union. His works, written in emigration, were recognized only after his death.
**Grigory Rodchenkov** (b. 1958, USSR) – The former head of the Moscow anti-doping laboratory, who emigrated to the USA after exposing the state doping program. His testimony became the basis for investigations and sanctions against Russian sports.
**Maria Alekhina (Pussy Riot)** (b. 1988, Russia) – An activist and member of the punk group Pussy Riot, who left Russia after persecution for her political actions and criticism of the authorities. She continues her human rights activities in emigration.
**Anton Dolin** (b. 1976, Russia) – A film critic who moved to Riga due to threats and pressure related to his professional activity and criticism of Russian authorities. Dolin left Russia in 2022 after the start of the war against Ukraine, as his anti-war stance and critical statements elicited negative reactions from the authorities and war supporters.
**Artur Smolyaninov** (b. 1983, Russia) – An actor who left Russia in 2022 due to his criticism of government policies and the war in Ukraine. Smolyaninov repeatedly spoke out against the Russian government, leading to persecution and threats against him. He continues his professional activity abroad.
**Little Big** (Russia) – A musical group known for their satirical and provocative videos, left Russia due to disagreement with the political situation in the country and pressure from the authorities. In 2022, the group members emigrated, continuing their music career abroad.
**Anastasia Davydova** (b. 1983, Russia) – An Olympic champion in synchronized swimming, who left Russia due to political pressure and threats. She moved to another country to continue her sports and coaching career in safer conditions.
**Alexander Nevzorov** (b. 1958, Russia) – A journalist and publicist, known for his critical statements against Russian authorities and politics. In 2022, Nevzorov left Russia due to threats to his life and persecution for his journalistic activity. He continues his work abroad, actively speaking out against the war in Ukraine and the political regime in Russia.
**Yevgeny Berkovich and Svetlana Petriychuk** (b. 1984 and 1985, Russia) – Arrested for staging a theatrical play, which was perceived by the authorities as propaganda of extremism and an insult to the feelings of believers. This was part of a broader campaign to suppress freedom of creativity and critical statements in Russia.
**Boris Akunin (Grigory Chkhartishvili)** (b. 1956, Russia) – A writer against whom a criminal case was initiated for his critical statements and support for opposition movements. Akunin left Russia, fearing arrest and persecution, and continues his literary activity abroad.
**Vasily Berezin and Stas Falkov** (Russia) – Artists who founded a collective of exiled Russian artists in Paris. They left Russia due to pressure on freedom of creativity and threats from the authorities. Their works often had a political character and criticized contemporary Russian society and politics.
**Alexei Navalny** (1976-2024, Russia) – An opposition politician who died in custody in 2024. His death is widely regarded as murder linked to his political activity and fight against corruption in Russia. Navalny was known for his anti-corruption investigations and active opposition activity, for which he was repeatedly arrested and repressed.
**Memorial** (Russia) – A human rights organization that was liquidated in 2021. Memorial was engaged in investigating and documenting political repressions in the Soviet Union and modern Russia, as well as protecting human rights. The organization was recognized as a "foreign agent" and subjected to pressure from the authorities, leading to its closure.
**Oleg Orlov** (b. 1953, Russia) – The head of the human rights organization "Memorial," sentenced to 2.5 years in a colony. Orlov was accused of discrediting the Armed Forces of the Russian Federation, in the context of the ongoing campaign against human rights defenders and their activities.
**Dmitry Muratov** (b. 1961, Russia) – A journalist, Nobel Peace Prize laureate, who was attacked in 2022. Muratov, the chief editor of "Novaya Gazeta," repeatedly received threats and faced pressure due to publications criticizing the actions of Russian authorities and corruption.
**TV Channel "Dozhd"** (Russia) – An independent TV channel forced to cease broadcasting in Russia in 2022 due to pressure from the authorities. "Dozhd" is known for its objective and critical reports on political and social issues in Russia. After ceasing broadcasting in Russia, the channel continued its work abroad.
**Dmitry Ivanov (Kamikadze Di)** (b. 1986, Russia) – A blogger and journalist, known for his sharp, critical statements against the Russian government. Ivanov was attacked, which forced him to move to the Czech Republic. He continues to actively spread information about the political situation in Russia.
**Vladimir Kara-Murza** (b. 1981, Russia) – An opposition politician and journalist, known for his outspoken anti-government statements. Kara-Murza was repeatedly poisoned, presumably due to his political activity. He was sentenced to 25 years in prison for criticizing Russia's military actions in Ukraine and for ties with an "undesirable" organization. In May 2023, the court rejected his appeal, leaving the sentence unchanged. Kara-Murza also suffers from polyneuropathy, a condition that has worsened in prison.
**Mark Feigin** (b. 1971, Russia) – A lawyer and human rights defender who emigrated to France. He gained wide recognition for defending politically persecuted individuals, including members of the group "Pussy Riot" and Ukrainian journalists. Due to his professional activity and criticism of the Russian government, Feigin was subjected to pressure and was declared wanted in 2023.
**Ilya Yashin** (b. 1983, Russia) – A politician and activist, one of the leaders of the Russian opposition, known for his criticism of the authorities. In 2022, Yashin was arrested and sentenced to 8.5 years for disseminating "false" data about Russia's actions in Ukraine.
**Artem Kamardin** (b. 1990, Russia) – A poet sentenced by the Tverskoy Court of Moscow to 7 years of imprisonment for reading poems against Russia's military actions in Ukraine.
**Yegor Shtovba** (b. 2000, Russia) – A participant in the poetry reading alongside Artem Kamardin, sentenced by the Tverskoy Court of Moscow to 5.5 years of imprisonment. He was accused of disseminating "false" information about Russia's military actions in Ukraine, as part of the same judicial campaign against freedom of speech.
**Mail Naki** (b. 1993, Russia) – An artist and public activist, known for his anti-war statements and criticism of the authorities. Due to his position and public actions, he was subjected to pressure and threats from the state, which forced him to leave Russia and continue his activities abroad.
**Maxim Galkin** (b. 1976, Russia) – A humorist and TV presenter, who left Russia in 2022 after being recognized as a "foreign agent" for criticizing the Russian government and the war in Ukraine. Galkin continues his professional activity abroad, actively speaking out against the war.
**Alla Pugacheva** (b. 1949, Russia) – A singer and actress, who left Russia in 2022 due to disagreement with government policies and the start of military actions in Ukraine. Pugacheva openly supported her husband Maxim Galkin, who was recognized as a "foreign agent". She moved to Israel and continues her activity abroad.
**Group "Nogu Svelo!"** – A Russian rock group, whose leader Maxim Pokrovsky left Russia in 2022 due to disagreement with government policies and the war in Ukraine. The group continues its activity abroad, actively speaking out against the war and supporting anti-war actions.
**Group "Bi-2"** – A Russian rock group, whose members were subjected to pressure from the authorities for their political views. In 2022, the group was forced to cancel its concerts in Russia and partially emigrated, continuing their musical activity abroad.
**Vitaly Mansky** (b. 1963, Russia) – A documentary filmmaker who left Russia in 2014 and moved to Riga, Latvia. Mansky is known for his critical films, often focusing on political and social issues. In 2014, he initiated the signing of the open letter "We are with You!" in support of Ukrainian filmmakers against the Russian military intervention in Ukraine. In 2022, he spoke out against Russia's invasion of Ukraine and was declared wanted by the Russian Ministry of Internal Affairs on charges of defamation. In 2023, the Russian Ministry of Justice included him in the list of foreign agents. Mansky continues to actively work in the field of documentary cinema abroad, organizing the Artdocfest/Riga festival and receiving recognition for his films at international film festivals.
| heyvaldemar |
1,880,804 | #648. Replace Words | https://leetcode.com/problems/replace-words/description/?envType=daily-question&envId=2024-06-07 ... | 0 | 2024-06-07T20:16:32 | https://dev.to/karleb/648-replace-words-53om |
https://leetcode.com/problems/replace-words/description/?envType=daily-question&envId=2024-06-07

```js
/**
 * @param {string[]} dictionary
 * @param {string} sentence
 * @return {string}
 */
var replaceWords = function(dictionary, sentence) {
    sentence = sentence.split(" ")

    for (let i = 0; i < sentence.length; i++) {
        for (let j = 0; j < dictionary.length; j++) {
            if (sentence[i].startsWith(dictionary[j])) {
                sentence[i] = dictionary[j]
            }
        }
    }

    return sentence.join(" ")
};
```
 | karleb |
1,880,796 | AWS Lambda and Celery for Asynchronous Tasks in Django | Building responsive Django applications often involves handling tasks that shouldn't block the user... | 0 | 2024-06-07T20:35:45 | https://dev.to/topunix/harnessing-aws-lambda-and-celery-for-scalable-asynchronous-tasks-with-django-h97 | aws, lambda, django, celery | Building responsive Django applications often involves handling tasks that shouldn't block the user experience. These background tasks, like sending emails or processing data, can be efficiently handled using asynchronous processing. This blog post explores two powerful tools for asynchronous tasks in Django: Celery and AWS Lambda. We'll delve into their strengths, weaknesses, and how they can complement each other in your project.
Both Celery and Lambda excel at handling background tasks, but they approach it differently. Understanding these differences will help you choose the right tool for the job in your Django application.
## Celery vs. Lambda: Choosing the Right Tool
Celery and Lambda address background tasks in different ways. Here's a breakdown to help you decide whether Celery eliminates the need for Lambda in your Django project:
**Celery:**
* **Deployment:** Requires managing worker processes and a message broker (RabbitMQ, Redis) alongside your Django application.
* **Scalability:** Requires manual scaling of worker processes to handle increased workload.
* **Cost:** Costs associated with running and maintaining worker instances.
* **Complexity:** More complex setup and requires additional infrastructure management.
**Lambda:**
* **Deployment:** Serverless deployment, handled by the cloud provider (AWS in this case).
* **Scalability:** Automatic scaling based on invocations.
* **Cost:** Pay-per-use model, ideal for bursty workloads.
* **Complexity:** Simpler deployment and management.
**When Celery might be sufficient:**
* **Control and Customization:** If you need fine-grained control over background tasks (prioritization, retries) or have specific infrastructure preferences, Celery offers more flexibility.
* **Long-running tasks:** Tasks exceeding Lambda's timeout limit (15 minutes) are better suited for Celery workers.
* **Existing Celery integration:** If you already have a Celery setup in your project, it might be simpler to continue using it.
**When Lambda might be a good choice:**
* **Scalability and Cost:** For tasks with unpredictable or bursty traffic, Lambda's automatic scaling and pay-per-use model can be cost-effective.
* **Short-lived Tasks:** Serverless excels at handling tasks that execute quickly (ideally under the Lambda timeout limit). Complex functionalities requiring longer execution might not be ideal.
* **Faster Deployment:** Lambda offers quicker deployment with minimal infrastructure management.
## Potential Use Cases for Celery and Lambda in a Django Project
**Celery Use Cases:**
* **Long-running data processing:** Celery is well-suited for tasks that take a significant amount of time to complete, such as
* Video encoding
* Large file uploads
* Data analysis and reporting jobs
* **Task retries and error handling:** Celery offers robust error handling mechanisms for retries and troubleshooting failed tasks. This is beneficial for critical tasks that must be completed successfully.
* **Scheduled tasks:** Celery can be used to schedule repetitive tasks at specific intervals. For instance, sending weekly newsletters or nightly data backups.
* **Custom workflows:** Celery's message broker and worker architecture allow for creating complex task dependencies and workflows. This can be useful for orchestrating a series of background tasks in a particular order.
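Celery's retry support deserves a concrete picture. The snippet below is a rough, plain-Python illustration of the retry-with-backoff pattern, not Celery's actual API (in real Celery you would reach for `self.retry()` or the `autoretry_for` task option instead):

```python
import functools
import time

def retry(max_retries=3, delay=0.0):
    """Re-run the wrapped function up to max_retries extra times on failure."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_retries:
                        raise  # give up after the final attempt
                    time.sleep(delay)  # back off before retrying
        return wrapper
    return decorator

attempts = []

@retry(max_retries=2)
def flaky_backup():
    # Fails twice, then succeeds -- mimicking a transient network error.
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient failure")
    return "backup complete"
```

Celery layers broker-backed durability and scheduling on top of this basic idea, which is why it is the better fit for critical, must-complete jobs.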
**Lambda Use Cases:**
* **Welcome emails and password resets:** Triggered by user actions in your Django views, Lambda functions can asynchronously send email notifications without blocking the main application.
* **Image resizing and thumbnail generation:** When a user uploads an image, a Lambda function can be triggered to resize and generate thumbnails in the background.
* **Real-time updates:** Lambda integrates well with event-driven services like SNS/SQS. For instance, a Lambda function can be triggered by a new user signup event to update dashboards or send notifications in real-time.
* **Social media integrations:** APIs interacting with social media platforms (e.g., posting to Facebook) can be implemented as Lambda functions for better scalability and avoiding limitations on outbound connections from your Django application.
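As a sketch of the image-resizing case above: a Lambda handler is just a Python function that receives an event dict. Everything below is a hypothetical illustration; the event layout mimics an S3 put notification, and the actual resize step is stubbed out:

```python
# Hypothetical Lambda handler for the image-upload case. The event layout
# follows the S3 put-notification shape; the real resize work is stubbed.
def make_thumbnail_key(key, size):
    """Derive a thumbnail object key from the original key."""
    name, _, ext = key.rpartition(".")
    return f"thumbnails/{name}_{size}.{ext}"

def handler(event, context=None):
    results = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        # Real code would download the object, resize it (e.g. with Pillow),
        # and upload the thumbnail back to S3 here.
        results.append(make_thumbnail_key(key, "200x200"))
    return {"thumbnails": results}
```

In practice the handler would live in its own deployment package and be wired to the S3 bucket's event notifications rather than called from Django directly.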
## Can Celery Be Used for Everything?
While Celery is a powerful tool for background tasks in Django, it's not a one-size-fits-all solution. Here's why Lambda offers some unique advantages that Celery cannot match:
* **Serverless Benefits:**
* **Automatic Scaling:** Lambda scales automatically based on invocations. This eliminates the need to manually manage worker processes like in Celery, leading to simpler deployment and cost-efficiency for tasks with bursty workloads.
* **Pay-per-Use Model:** You only pay for the time your Lambda function executes. This can be cost-effective compared to running Celery worker instances continuously, especially for tasks that are triggered infrequently.
* **Faster Deployment:** Deploying Lambda functions is faster and requires minimal infrastructure management compared to setting up Celery workers and message brokers.
* **Celery Limitations:**
* **Idle Workers:** Celery worker processes consume resources even while sitting idle between tasks. Lambda avoids this by spinning functions up on demand, at the cost of occasional cold-start latency on the serverless side.
* **Limited Control:** Celery offers more control over worker processes and task execution compared to Lambda. However, this also comes with added complexity in managing the infrastructure.
* **Unique Lambda Capabilities:**
* **Event-driven Architecture:** Lambda integrates seamlessly with event-driven services like SNS/SQS. This allows for triggering functions based on specific events, making them ideal for real-time processing scenarios.
* **Integration with other AWS Services:** Lambda functions can easily interact with other AWS services like S3 for storage or DynamoDB for databases, simplifying development for serverless workflows.
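To put the pay-per-use model in perspective, here is a back-of-the-envelope sketch. All prices are invented placeholders, not real AWS rates; the shape of the math is the point:

```python
# Illustrative cost model only: the per-unit prices below are made-up
# placeholders, not current AWS pricing.
WORKER_HOURLY = 0.05          # hypothetical always-on worker instance, per hour
LAMBDA_GB_SECOND = 0.0000167  # hypothetical Lambda price per GB-second

def monthly_worker_cost(hours=730):
    """An always-on Celery worker bills for every hour, busy or idle."""
    return WORKER_HOURLY * hours

def monthly_lambda_cost(invocations, avg_seconds, memory_gb=0.5):
    """Lambda bills only for actual execution time."""
    return invocations * avg_seconds * memory_gb * LAMBDA_GB_SECOND

# 100k short tasks a month barely registers on the serverless side.
lambda_cost = monthly_lambda_cost(invocations=100_000, avg_seconds=2)
worker_cost = monthly_worker_cost()
```

With bursty, short-lived workloads the serverless column stays tiny while the always-on worker bills around the clock; for sustained heavy workloads the comparison can flip.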
**In summary:**
* **Celery excels at complex tasks, custom workflows, and scenarios where you need fine-grained control over worker processes.**
* **Lambda shines for short-lived, event-driven tasks, cost-efficiency with bursty workloads, and simpler deployments in a serverless environment.**
**Here's when a developer might not use Celery for everything:**
* The project has bursty workloads or unpredictable traffic patterns.
* Cost-efficiency is a major concern.
* The tasks are short-lived and event-driven.
* Integration with other AWS services is desired.
Remember, Celery and Lambda aren't mutually exclusive. You can leverage them strategically in your Django project to address different background processing needs. By understanding their strengths and weaknesses, you can make informed decisions to optimize your Django application's performance and scalability. | topunix |
1,880,819 | Mastering GitLab CI/CD with Advanced Configuration Techniques | As a Senior DevOps Engineer and a recognized Docker Captain, I understand the pivotal role that... | 0 | 2024-06-07T20:33:29 | https://www.heyvaldemar.com/mastering-gitlab-ci-cd-with-advanced-configuration-techniques/ | gitlab, cicd, devops, productivity | As a Senior DevOps Engineer and a recognized [Docker Captain](https://www.docker.com/captains/vladimir-mikhalev/), I understand the pivotal role that continuous integration and delivery (CI/CD) systems play in modern software development. GitLab's CI/CD platform is a robust tool that automates the steps in software delivery processes, ensuring that you can deploy applications swiftly and reliably.
## Understanding ".gitlab-ci.yml"
The **`.gitlab-ci.yml`** file is the backbone of GitLab’s CI/CD service. Located in the root directory of your repository, this YAML file defines the pipeline's configuration. Each push and merge request automatically triggers these pipelines, executed by GitLab Runner. Here’s how to leverage this powerful feature to its full potential.
### Key Configuration Elements
The **`.gitlab-ci.yml`** file orchestrates your CI/CD pipeline's workflow. Understanding its structure is key to harnessing GitLab’s automation capabilities:
- **Stages and Jobs:** Stages define the sequence of actions in your pipeline and are executed in the order they appear. Jobs within each stage run concurrently, boosting efficiency.
- **Scripts:** The actual commands your pipeline executes. These can range from build commands to test scripts.
- **Docker Integration:** As a Docker Captain, I frequently use Docker images to standardize environments across the CI/CD pipeline. Specifying an image ensures all jobs run in a consistent environment.
```yaml
stages:
- build
- test
- deploy
build_job:
stage: build
script: echo "Building the project..."
test_job:
stage: test
script: echo "Running tests..."
deploy_job:
stage: deploy
script: echo "Deploying the project..."
```
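To make the Docker point above concrete, the same pipeline could pin every job to a consistent container environment with the `image` keyword. A minimal sketch (the `node:20` tag is just an illustration):

```yaml
image: node:20

build_job:
  stage: build
  script: echo "Building inside a consistent Node.js container..."
```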
### Advanced Features
- **Artifacts and Caching:** Artifacts are files generated by jobs and retained after they complete, such as logs or compiled applications. Caching speeds up building processes by reusing unchanged parts of your environment.
```yaml
cache:
paths:
- node_modules/
build_job:
stage: build
script: npm install && npm run build
artifacts:
paths:
- build/
```
## Best Practices and Tips
- **Modular Configuration:** For complex systems, break down your configuration into multiple files using the **`include`** keyword. This makes managing large projects easier and your configurations clearer.
```yaml
include:
- local: 'path/to/another-file.yml'
- project: 'group/project-name'
file: '/templates/.gitlab-ci-template.yml'
```
Using **`include`**, you can maintain a cleaner and more organized configuration by referencing other files, whether they are in the same repository, a different project, or even a remote URL.
- **Security Practices:** Keep sensitive data like passwords or API keys in GitLab's environment variables, not in your **`.gitlab-ci.yml`** file.
```yaml
variables:
PROD_DB_PASSWORD: $PROD_DB_PASSWORD
```
Manage these variables securely through GitLab's UI at the project, group, or instance level. This approach ensures that sensitive information is not exposed in your version control.
## Integrating Advanced GitLab CI/CD Techniques
Enhance your CI/CD pipelines by incorporating more advanced GitLab functionalities:
- **before_script and after_script:** Prepare the environment before your main script runs and clean up afterwards.
```yaml
test_job:
stage: test
before_script:
- echo "Setting up test environment"
script:
- npm test
after_script:
- echo "Cleaning up after tests"
```
- **Dynamic Environment Management:** Dynamically set and modify environment conditions based on the job context, enhancing flexibility across multiple environments.
```yaml
deploy_job:
stage: deploy
variables:
DEPLOY_ENV: "production"
script:
- if [ "$DEPLOY_ENV" == "production" ]; then deploy_to_production; else deploy_to_staging; fi
```
- **Using "rules" for Conditional Job Execution:** Customize job execution based on complex conditions, such as changes to specific files or the status of previous tasks.
```yaml
cleanup_job:
stage: cleanup
script: cleanup_resources
rules:
- if: '$CI_COMMIT_BRANCH == "main"'
when: always
- if: '$CI_PIPELINE_SOURCE == "push"'
when: never
```
- **Efficient Management of Artifacts and Caches:** Fine-tune your pipeline performance by effectively managing build artifacts and leveraging caching mechanisms.
```yaml
build_job:
stage: build
script: build_application
artifacts:
paths:
- output/
cache:
key: ${CI_COMMIT_REF_SLUG}
paths:
- node_modules/
```
## Continuous Learning
The landscape of DevOps tools and practices is constantly evolving. As a [Docker Captain](https://www.docker.com/captains/vladimir-mikhalev/), I keep abreast of these changes through continuous learning and experimentation. Embracing new tools like GitLab’s CI/CD allows us to refine our deployment strategies and improve automation. For more detailed instructions and advanced configurations, refer to the [official GitLab CI/CD documentation](https://docs.gitlab.com/ee/ci/).
## My Services
💼 Take a look at my [service catalog](https://www.heyvaldemar.com/services/) and find out how we can make your technological life better. Whether it's increasing the efficiency of your IT infrastructure, advancing your career, or expanding your technological horizons — I'm here to help you achieve your goals. From DevOps transformations to building gaming computers — let's make your technology unparalleled!
## Refill the Author's Coffee Supplies
💖 [PayPal](https://www.paypal.com/paypalme/heyValdemarCOM)
🏆 [Patreon](https://www.patreon.com/heyValdemar)
💎 [GitHub](https://github.com/sponsors/heyValdemar)
🥤 [BuyMeaCoffee](https://www.buymeacoffee.com/heyValdemar)
🍪 [Ko-fi](https://ko-fi.com/heyValdemar)
| heyvaldemar |
1,880,817 | MY EXPERIENCE SO FAR AT WHITE CREATIVITY | My Journey at White Creativity Starting as a web developer at White Creativity has been an exciting... | 0 | 2024-06-07T20:30:40 | https://dev.to/danieln/my-experience-so-far-at-white-creativity-op8 | webdev, beginners, programming | **My Journey at White Creativity**
Starting as a web developer at White Creativity has been an exciting and educational experience. With a basic understanding of HTML and CSS, I quickly realized the depth of practical web development once I joined and started practicing.
**Learning the Basics**
In the beginning, I focused on mastering HTML's structural elements, the aesthetic power of CSS, and the semantic importance of HTML tags.
I'm getting better as the days go by, and practicing gives me a clear picture of the things I can achieve with the knowledge I'm acquiring. That motivates me all the more and gives me more reason to work harder and learn more.
**Overcoming Challenges**
Debugging CSS and making sure the browser renders exactly what I programmed have been real challenges; sometimes it takes me a while to go through the code just to figure out why mine isn't working.
**Growth and Aspirations**
My skills in HTML and CSS have grown exponentially, and I've learned the importance of user experience, clean code, and continuous learning. I'm now eager to dig deeper and become better at web development.
**Conclusion**
My journey at White Creativity has been one of growth and continuous learning. With the foundation I've built, I am excited to tackle new challenges and innovations in web development. | danieln |
1,880,810 | Computer Vision Meetup: Combining Hugging Face Transformer Models and Image Data with FiftyOne | Datasets and Models are the two pillars of modern machine learning, but connecting the two can be... | 0 | 2024-06-07T20:29:02 | https://dev.to/voxel51/computer-vision-meetup-combining-hugging-face-transformer-models-and-image-data-with-fiftyone-3ii6 | computervision, machinelearning, datascience, ai | Datasets and Models are the two pillars of modern machine learning, but connecting the two can be cumbersome and time-consuming. In this lightning talk, you will learn how the seamless integration between Hugging Face and FiftyOne simplifies this complexity, enabling more effective data-model co-development. By the end of the talk, you will be able to download and visualize datasets from the Hugging Face hub with FiftyOne, apply state-of-the-art transformer models directly to your data, and effortlessly share your datasets with others.
Speaker: [Jacob Marks](https://www.linkedin.com/in/jacob-marks/), PhD is a Machine Learning Engineer and Developer Evangelist at Voxel51, where he leads open source efforts in vector search, semantic search, and generative AI for the FiftyOne data-centric AI toolkit.
Prior to joining Voxel51, Jacob worked at Google X, Samsung Research, and Wolfram Research.
Not a Meetup member? Sign up to attend the next event:
https://voxel51.com/computer-vision-ai-meetups/
Recorded on May 30, 2024 at the AI, Machine Learning and Data Science Meetup. | jguerrero-voxel51 |
1,880,814 | Level up your Tailwind game | Use Tailwind like you mean it! Whether you're a seasoned developer or just starting out, this article will help you navigate Tailwind CSS and expand your knowledge with advanced practices. | 0 | 2024-06-07T20:28:07 | https://www.oh-no.ooo/articles/level-up-your-tailwind-game | tailwindcss, css, mixin, webdev | ---
title: Level up your Tailwind game
published: true
description: Use Tailwind like you mean it! Whether you're a seasoned developer or just starting out, this article will help you navigate Tailwind CSS and expand your knowledge with advanced practices.
tags: #tailwind #tailwindcss #css #mixin #webdev
cover_image: https://images.ctfassets.net/hzu7wkrk7tly/5yJtow80Yw1D8adU9DekLV/382e0bb9fba9ed030fcc5877049ad73c/level_up_your_tailwind_game.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-07 20:51 +0000
canonical_url: https://www.oh-no.ooo/articles/level-up-your-tailwind-game
---
This is my first article on Dev.to, please be patient with the poor markdown-ing!
<hr />
Alright, this article might or might not be for you.
If you're the type of person that finds some Tailwind UI components and copies all the classes without any second thought, this might be for you.
If you're passionate about Tailwind and you appreciate getting creative with its utilitarian classes, this might be for you.
If you struggle with Tailwind and wonder *why can't we just stick to plain old CSS*, this might be for you.
If you don't like repetition and suspect that (besides wasting your time with the previous statements) some of the things that I'll show here are something you're already aware of, this might still be for you — repetition can't be that bad and I really tried to bring in something for everybody!
The idea here is to __go through a Tailwind speed-run and find things that could help you use it more efficiently__. And if nothing resonates with you, share with me in the comment section what I missed :D
*(And yes, the assumption is that you use VSCode, and occasionally Next.js for a thing or two, although a lot of these topics remain valid also with other frameworks.)*
Let's get started then, shall we?
Before you proceed, let's make sure you are already using <a href="https://marketplace.visualstudio.com/items?itemName=bradlc.vscode-tailwindcss" target="_blank">Tailwind CSS Intellisense</a>, because if you're not, you definitely should be. It's impossible to remember all the classes that Tailwind offers, and a little help while we type them out is really appreciated!
Speaking of classes...
<br/>
## Classes, classes everywhere!
Meme generated with <a href="https://imgflip.com/memegenerator" target="_blank">Imgflip Meme Generator</a>
<br />
Yes, let's make sure we start with the easy-peasy. Lots of people complain that Tailwind *litters* the project components with hecklots of classes, making it difficult to read through the component. To them I tell... have you heard of <a href="https://marketplace.visualstudio.com/items?itemName=stivo.tailwind-fold" target="_blank">Tailwind Fold</a> by <a href="https://marketplace.visualstudio.com/publishers/stivo" target="_blank">Stivo</a>? No? Now you have.
In his <a href="https://flaviocopes.com/hiding-classes-in-vs-code/" target="_blank">article about Hiding classes in VSCode</a>, our friend and constant source of inspiration <a href="https://flaviocopes.com/" target="_blank">Flavio Copes</a> goes through a quick look at how this VSCode extension simply shows and hides the classes through a simple click.
While this is a worthy approach, you might not want to click to toggle class visibility (I know I don't), and therefore the next suggestion would be...
<br/>
## Tailwind + Sass (and a sprinkle of `@mixin` if you will)
*(We'll assume you're using Next.js, because, let's be honest, what else would you want to use if you had a choice?! I'm kidding, kinda :D)*
We all love (I hope, otherwise why are you here?!) the power of Tailwind, but separating the functional part of a component from its styling is still a dream for many. The good news? You can achieve this by combining Tailwind with Sass! You just need to install Sass and everything will start working right off the bat!
Joaquin Collado, through <a href="https://www.rootstrap.com/" target="_blank">Rootstrap</a>, has <a href="https://www.rootstrap.com/blog/how-to-use-module-scss-with-tailwind-in-next-js" target="_blank">an easy guide on how to Use Module SCSS with Tailwind in Next.js</a>. Let's follow along!
First, install Sass
```bash
npm install --save sass
```
Then, create the `.module.scss` for your components, e.g. `Button.module.scss`
```scss
.button {
@apply p-4 rounded bg-blue-500;
}
```
Import the styles in the component
```tsx
import styles from './Button.module.scss';
```
And finally use them in your React component
```tsx
/* ... */
<button className={styles.button}>Click Me</button>
/* ... */
```
Ta-da! 🎉 You will now be able, for most things, to separate the Tailwind classes from your component.
And you know what?! This approach also allows you to use `@mixin` directives if you're used to them!
<blockquote>
<p>Mixins allow you to define styles that can be re-used throughout your stylesheet. They make it easy to avoid using non-semantic classes like <code class="not-italic">.float-left</code>, and to distribute collections of styles in libraries.</p>
— <a href="https://sass-lang.com/documentation/at-rules/mixin/" target="_blank">@mixin and @include</a> explanation from the <a href="https://sass-lang.com/" target="_blank">official Sass documentation</a>
</blockquote>
<a title="CodeSandbox Tailwind + Sass + @mixin demo repository" href="https://codesandbox.io/p/github/mahdava/quirky-kiwi/main?amp%3Bembed=1&file=%2Fcomponents%2FsassButton%2FsassButton.tsx&embed=1" target="_blank">Have a look at this demo repository in CodeSandbox where you can see Tailwind + Sass + a simple implementation of `@mixin` in action</a>, or — if you prefer — <a href="https://github.com/mahdava/quirky-kiwi" target="_blank">test this implementation of Tailwind + Sass + `@mixin` locally by cloning it from GitHub</a>.
So, let's take it slow and check... what do we have over there?
```jsx
// page.tsx
import SassButton from "@/components/sassButton/sassButton";
/* ... */
<SassButton>Hello, I am a simple button</SassButton>
<SassButton className="block mt-4" variant="cancel">
Me too!
</SassButton>
/* ... */
<SassButton className="w-full m-2" variant="alert">
I am a variant that takes in
both color and optional text color
</SassButton>
```
We have a poorly-named `SassButton` component that can accept two props (ignoring the children, in our case the text we want our button to have), `className` and `variant`. Both props are optional, and while we get later to `className` and best practices on how to use that, let's focus on the `variant` part.
Now, moving to the button component
```jsx
// SassButton.tsx
import cx from "classnames";
import { FC } from "react";
import styles from "./sassButton.module.scss";
interface SassButtonProps {
variant?: "default" | "cancel" | "alert";
className?: string;
children: React.ReactElement | string;
}
const SassButton: FC<SassButtonProps> = ({
variant = "default",
className = "",
children,
}) => (
<button className={cx(
styles["button-" + variant],
className
)}>
{children}
</button>
);
export default SassButton;
```
we use the `variant` prop to determine what style the button will inherit from our `sassButton.module.scss` (which will be `button-<variant_name>`), and when no variant is set we just set its value to `default`.
Let's finally have a look at the Sass module.
```scss
@mixin button-styles(
$button-bg-color,
  $button-text-color: white
) {
background-color: $button-bg-color;
color: $button-text-color;
@apply p-4 rounded;
}
.button-default {
// here you can pass the background color
// text color will use the default from the @mixin
@include button-styles(blue);
}
.button-alert {
// here you can pass both background color and text color
@include button-styles(orange, black);
}
.button-cancel {
@include button-styles(#ff0000);
}
```
Our Sass module starts with the overall shape and appearance of our button through the `@mixin` directive called `button-styles`, which uses `$button-bg-color` and `$button-text-color` as variables to customize the background and text color of the button.
Subsequently, we reuse the same setup by providing the variants *default*, *alert*, and *cancel* with the desired background and text color (the latter being optional and defaulting to *white* if nothing else is specified) by calling on `button-styles` through the `@include` directive.
(Congratulations, you just had an absolute speed-run on how to use `@mixin` if this is your first time!)
Notice that with this approach nothing forbids us from using Tailwind classes at any point; we are already using traditional CSS properties combined with Tailwind classes. Indeed, no one could stop us from doing
```scss
.button-cancel {
  @include button-styles(#ff0000);
  @apply text-xs underline;
}
```
and it would be up to us to choose how to organise and create all the variants.
Heck, you might think "why bring `@mixin` up at all?" and in general I would say that Tailwind on its own is more than enough, but in a system that needs to contemplate multiple variants of the same component, `@mixin` could be the solution you were looking for — also, my objective with this article is to showcase more possibilities on how to work with Tailwind!
(Also... for some reason, I have not been successful in setting up a default value for `$button-bg-color`; if you know why, let me know in the comments how to make all the parameters optional!)
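(For what it's worth, here is a sketch worth trying: Sass accepts a default for every parameter declared in the mixin signature. The `blue` fallback below is an assumption for illustration, and the color values are left unquoted, since a quoted `"white"` would compile to the invalid CSS `color: "white"`.)

```scss
@mixin button-styles(
  $button-bg-color: blue,
  $button-text-color: white
) {
  background-color: $button-bg-color;
  color: $button-text-color;
  @apply p-4 rounded;
}

.button-plain {
  // no arguments at all: both parameters fall back to their defaults
  @include button-styles;
}
```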
<hr />
So do you remember that `className` I said I was gonna mention again later? Now it's the time, for we are going to talk about...
<br/>
## The misunderstood art of managing space between elements and sections
If you're working with a good design system, I'd expect everything to be well-divided in <strong>sections</strong> and you to be blessed with the consistency of everything spaced *consistently* throughout the project.
A good design system would have an atomic structure, where each smallest __atom__ is a discernible component that may coincide with one or more HTML elements on the page that provide some value. Using these atoms/components together would form __molecules__, which would be the equivalent of a header, footer, sidebar — which all are __sections__ of a website page — and a full page could be then considered an __organism__.
The reality is that most times, __designers work in components and not in sections__, meaning that there's no real concept of molecules, just atoms and straight into organisms. This implies that there can easily be a lot of discrepancy in how much space sits above a heading or below a section, based on the rest of the content of the page, if there is no clear demarcation of the end of one section and the beginning of another.
In the above image (titled *Bad spacing*), the elements have an __add a margin down approach__ to space components between each other. While this might still result in the desired appearance for some pages, reusing the same components in new pages will cause issues as the margins might differ. There is no clarity on how much space the navigation or the article header should have, nor how much their components should have space around them.
<br />
In this other image (titled "Better spacing") we can see an improved understanding of spacing between elements and a clearer use of sections. We are able to deduce how much space there should be around each component, not only below it, and which part of the overall space doesn't really belong to any component.
(Yes, I know, some components such as "Tags" should also be able to see the settings of each single "Tag" and in my picture they are all compact together, but I just wanted to get the spacing point across... :D)
<br />
While we are not here to discuss design systems (although I would anytime), I want you to acknowledge that __components shouldn't directly take care of accommodating spacing based on their position in a page__, but rather, they should be able to __circumstantially receive and accommodate their spacing directives__.
What do I mean by circumstantially? Nothing more than accepting classes related to the spacing that is needed for the page where the component will show up.
<a href="https://codesandbox.io/p/github/mahdava/snappy-melon/main?file=%2Fapp%2Farticles%2Fpage.tsx&workspaceId=ffe5e611-b2f7-47e3-8cb3-c95e41c7d6b5" target="_blank">You can check this CodeSandbox to test how to give arbitrary spacing classes to a component</a> or, like before, <a href="https://github.com/mahdava/snappy-melon/blob/main/app/page.tsx" target="_blank">you can check how to give arbitrary spacing classes from this GitHub repository</a>.
Let's dig into the code.
We have three different pages, `app/pages.tsx`, `app/articles/page.tsx` and `app/about/page.tsx` that make use of the same `<HeaderTitle>` component. The homepage uses the component without worrying much about its spacing in the page:
```jsx
// app/page.tsx
/* ... */
<HeaderTitle
  title="Lorem ipsum"
  description="This HeaderTitle component
    doesn't have any extra spacing setting"
/>
/* ... */
```
Meanwhile, both `app/articles/page.tsx` and `app/about/page.tsx` pass an extra `className` property that allows the `<HeaderTitle>` component to take up different space in the page.
```jsx
// app/articles/page.tsx
/* ... */
<HeaderTitle
  title="Lorem ipsum"
  description="This HeaderTitle has some extra margin
    around it to fit better something like an article title"
  className="my-24 mx-12" // we added margins for this page
/>
/* ... */

// app/about/page.tsx
/* ... */
<HeaderTitle
  title="Lorem ipsum"
  description={
    <>
      <ul>
        <li>
          This HeaderTitle shows how much
          flexibility you can have
        </li>
        <li>
          while the component itself doesn't have
          to include natively any 'fixed'
          position
        </li>
      </ul>
    </>
  }
  // with this approach, we can also infer
  // other styles related to the position of the element
  className="fixed right-0 top-24"
/>
/* ... */
```
The point here is that __the code of the `<HeaderTitle>` component itself remains untouched and un-duplicated, while its spacing properties are relative to the contexts it gets used on, allowing for flexibility of usage across different pages and different needs__.
You might want to argue now "couldn't we just wrap the component into another div whenever we need more space around it?", and while that is possible, it also creates extra elements that might create challenges when delivering semantic HTML. Keeping these classes external also really helps us categorize which classes we need for spacing and positioning, and which ones are for the component itself.
Which, guess what, leads us straight into the next topic!
<br/>
## Grouping by purpose for the sake of readability
It takes absolutely nothing to make Tailwind classes difficult to digest; you check one component's styling and all you see is a long list of classes, and no quick glance will prevent you from mistakenly applying a second `mx-` class or so.
So, next in my personal recommended Tailwind best-practices, is to think about what each class is doing — layout, spacing, typography, colors, etc. — and group by that!
For example, instead of writing
```html
<div class="mt-4 bg-blue-500 text-white p-6 rounded-lg shadow-lg hover:bg-blue-700">
<!-- content -->
</div>
```
You could organize them like this
```html
<div class="p-6 mt-4 rounded-lg shadow-lg bg-blue-500 text-white hover:bg-blue-700">
<!-- content -->
</div>
```
Grouping similar classes together makes it easier to read and understand what stylings are being applied to the element.
If it sounds daunting as a task, worry not! Someone has already thought of a VSCode extension, <a href="https://marketplace.visualstudio.com/items?itemName=Trapfether.tailwind-raw-reorder" target="_blank">Tailwind Raw Reorder</a>, that will take care of the sorting for you! (And apparently, it also works in `module.scss` files without any extra configuration!)
The order that the extension proposes is as follows (or at least ChatGPT thinks it is; I couldn't find this documented anywhere else):
<ol>
<li><strong>Layout:</strong> container, box-border, box-content, etc.</li>
<li><strong>Positioning:</strong> static, fixed, absolute, relative, sticky, etc.</li>
<li><strong>Flex and Grid:</strong> flex, inline-flex, grid, inline-grid, etc.</li>
<li><strong>Spacing:</strong> m-0, p-0, space-x-0, etc.</li>
<li><strong>Sizing:</strong> w-full, h-full, max-w-full, min-h-full, etc.</li>
<li><strong>Typography:</strong> font-sans, text-sm, font-bold, etc.</li>
<li><strong>Background:</strong> bg-white, bg-opacity-50, etc.</li>
<li><strong>Border:</strong> border, border-0, rounded, ring, etc.</li>
<li><strong>Effects:</strong> shadow, opacity-0, etc.</li>
<li><strong>Transitions and Transforms:</strong> transition, duration-300, ease-in, scale-100, etc.</li>
<li><strong>Miscellaneous:</strong> cursor-pointer, select-none, etc.</li>
</ol>
<a href="https://www.reddit.com/r/tailwindcss/comments/1720acu/vscode_tailwind_class_reorder_extension/" target="_blank">You can read more about the development journey of Tailwind Raw Reorder on Reddit</a>, where people talk also about the <a href="https://tailwindcss.com/blog/automatic-class-sorting-with-prettier" target="_blank">official recommendation for Automatic class sorting with Tailwind</a> (which I don't recommend anymore because it only sorts classes alphabetically, although I wouldn't necessarily diss it, as <a href="https://www.oh-no.ooo/snippets/tailwind-automatic-class-sorting-with-prettier">I did like the combo of Tailwind + Prettier when I discovered its existence</a>).
It is fair to note that <a href="https://marketplace.visualstudio.com/items?itemName=Trapfether.tailwind-raw-reorder" target="_blank">Tailwind Raw Reorder</a> by <a href="https://marketplace.visualstudio.com/publishers/Trapfether" target="_blank">Andrew Trefethen</a> is a fork of <a href="https://marketplace.visualstudio.com/items?itemName=heybourn.headwind" target="_blank">the now dated Headwind VSCode extension</a>.
Well damn, if this isn't all that you need to make the most out of Tailwind in a smart way!
But wait! There's more!
<br/>
## Tailwind Merge to the rescue
While I've already written <a href="https://www.oh-no.ooo/snippets/tailwind-merge-clean-your-code-by-removing-conflicting-styling">a small snippet about Tailwind Merge</a>, let me reiterate here what it's for.
Like the name suggests, <a href="https://www.npmjs.com/package/tailwind-merge" target="_blank">`tailwind-merge` npm package</a> by <a href="https://github.com/dcastil" target="_blank">Dany Castillo</a> offers a solution that combines and reuses Tailwind utility classes.
Suppose you have a button component that can be either primary or secondary, with different styles for each state, some styles taken from a `button.module.scss` and perhaps with something else inherited by `className` as we have seen is possible from above (oh god, what a mess — I am legally obligated to mention the classic *"Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."*).
Instead of having to figure out *whyyy is the padding not working* as you'd expect, you can use tailwind-merge to establish a hierarchy for how all the various classes take priority.
```jsx
// button.tsx
import { FC, ReactNode } from 'react';
import { twMerge } from 'tailwind-merge';
import styles from './button.module.scss';

interface ButtonProps {
  type?: 'primary' | 'secondary';
  className?: string;
  children: ReactNode;
}

const Button: FC<ButtonProps> = ({ type = 'primary', className, children }) => {
  const baseClasses = 'p-4 rounded-lg';
  const typeClasses = type === 'primary' ? 'bg-blue-500 text-white' : 'bg-gray-500 text-black';

  return (
    <button className={twMerge(baseClasses, typeClasses, styles.button, className)}>
      {children}
    </button>
  );
};

export default Button;
```
Assuming a fictitious `button.module.scss` containing
```scss
// button.module.scss
.button {
  @apply text-xl;
}
```
Assuming the button also receives `className="bg-red-500 underline"` where it is used, the button will eventually have the following classes
```text
'p-4 rounded-lg bg-red-500 underline text-xl text-white'
```
This approach eliminates potential conflicts and contradictions and hopefully keeps your elements rendered with fewer, cleaner classes.
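If you're wondering how the conflicts get resolved, the core idea is that classes belonging to the same "group" (background color, padding, font size, ...) conflict with each other, and the last one passed wins. Here's a minimal sketch of that rule — greatly simplified, and with made-up names (`naiveTwMerge`, `groupOf`); the real `tailwind-merge` understands the full Tailwind class taxonomy:

```javascript
// Map a class to the conflict group it belongs to.
// Only a few prefixes are handled here, purely for illustration.
const groupOf = (cls) => {
  if (/^bg-/.test(cls)) return "background";
  if (/^p-\d/.test(cls)) return "padding";
  if (/^text-(xs|sm|base|lg|xl|\dxl)$/.test(cls)) return "font-size";
  if (/^text-/.test(cls)) return "text-color";
  return cls; // unknown classes only conflict with themselves
};

function naiveTwMerge(...classLists) {
  const byGroup = new Map(); // group -> winning class
  for (const cls of classLists.join(" ").split(/\s+/).filter(Boolean)) {
    byGroup.delete(groupOf(cls)); // delete + set so the winner keeps "last seen" order
    byGroup.set(groupOf(cls), cls);
  }
  return [...byGroup.values()].join(" ");
}

console.log(naiveTwMerge("p-4 rounded-lg", "bg-blue-500 text-white", "text-xl", "bg-red-500 underline"));
// → "p-4 rounded-lg text-white text-xl bg-red-500 underline"
```

Note how `bg-blue-500` is dropped in favour of the later `bg-red-500`, while `text-white` (a color) and `text-xl` (a size) don't conflict and both survive.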
The one caveat of Tailwind Merge, imho, is that you would have to start using it at the beginning of a project; however, nothing prevents you from progressively adding it into the codebase.
<hr />
Now, what else? Well, there's plenty!
<br/>
## Speed it up with component libraries
We all like to keep reinventing the wheel, as every project presents different nuances and we just want that button to work *exactly* as we envisioned it. But the key to a successful "I build it my way" is to know when to build and when to borrow.
<a href="https://ui.shadcn.com/" target="_blank">shadcn/ui</a> offers a great set of components (styled with Tailwind) that you can take as-is and customize, all while they already include some accessibility considerations. It differs from other libraries as it invites you to be hands-on with the actual component, and it allows you to modify it to your exact needs without having to create extra levels of abstraction for customisation.
It is worth noting that, like everything, it is not perfect and sometimes you might want to rewrite a couple of things here and there. For example, its class handling proposes a mix of `clsx` and `twMerge` that might be redundant, as described by <a href="https://medium.com/@pablo.haller" target="_blank">Pablo Haller</a> in his article <a href="https://medium.com/@pablo.haller/something-i-dont-like-from-shadcn-ui-3b71c9080a7d" target="_blank">Something I don’t like from shadcn/ui</a>.
That being said, shadcn/ui is a great way to speed up your work, especially combined with some good Figma prototyping made simple for you by <a href="https://www.figma.com/@skirano" target="_blank">Pietro Schirano</a> with their Figma <a href="https://www.figma.com/community/file/1203061493325953101" target="_blank">@shadcn/ui - Design System</a>.
And while shadcn/ui is a great free solution to implement ready-made-tailwind-styled components, you might want to use a more mature system such as Flowbite.
<a href="https://flowbite.com/" target="_blank">Flowbite</a> (whose main contributor is <a href="https://flowbite.com/blog/author/zoltan/" target="_blank">Zoltán Szőgyényi</a>) is a component library that is also built on top of Tailwind CSS. It provides a broader set of UI components that you can easily integrate into your Tailwind projects, and it really helps you make everything look professional and curated from the very beginning.
On top of everything, Flowbite offers a lot of videos that help you navigate the new ecosystem, as well as constant updates to their product, which we know is something we desire in a system that we want to implement and hopefully use for a long time.
If you want to familiarise yourself with a great design system but you're not ready yet to make a monetary commitment, Flowbite is the right resource nonetheless. They also offer a free version of their <a href="https://www.figma.com/community/file/1179442320711977498" target="_blank">Flowbite Design System</a>, which can help you speed up your prototyping and design process while keeping high-quality mockups, and eventually figure out if Flowbite is the solution you were looking for.
<br/>
## Let's talk about forms
Got no time to make that form pretty? Worry not, because <a href="https://github.com/tailwindlabs/tailwindcss-forms/tree/master?tab=readme-ov-file" target="_blank">Tailwind Forms</a> can come to the rescue and style all your forms in a consistent manner!
It provides base styles for form controls like inputs, text areas, checkboxes, and radio buttons.
You can simply install the plugin
```bash
npm install @tailwindcss/forms
```
add it to your tailwind.config.js
```js
module.exports = {
  //...
  plugins: [
    require('@tailwindcss/forms'),
  ],
}
```
and that's pretty much it! Wanna customise something more? Feel free to add some extra styles to the form elements like you normally would!
Be sure to check out the <a href="https://github.com/tailwindlabs/tailwindcss-forms/tree/master?tab=readme-ov-file" target="_blank">Tailwind Forms documentation</a> and the <a href="https://tailwindcss-forms.vercel.app/" target="_blank">Tailwind Form example</a> they provide!
<br/>
## Snippity snippets!
I'd like to end this article by shining some lights on other majestic work that people have done and shared.
The homepage of <a href="https://tailwindui.com/" target="_blank">Tailwind UI</a>.
<br />
You can find a ready-made collection of templates in <a href="https://tailwindui.com/components" target="_blank">Tailwind UI</a>, developed and curated by the very same creators of Tailwind. It's a nice mature collection of components and templates, with the only downside that it will cost you some money. But hey, quality stuff nevertheless!
The homepage of <a href="https://react-tailwind-snippets.vercel.app/" target="_blank">Tailwind Snippets</a>.
<br />
<a href="https://react-tailwind-snippets.vercel.app/" target="_blank">Tailwind Snippets</a>, as it mentions in their homepage, proposes a collection of UI templates to speed up your UI development using React and Tailwind CSS. Quick and easy to browse, find the elements you need and copy away!
The homepage of the other <a href="https://tws.zarifprogrammer.com/" target="_blank">Tailwind Snippets</a> website, the one that comes with the <a href="https://marketplace.visualstudio.com/items?itemName=Zarifprogrammer.tailwind-snippets" target="_blank">Tailwind Snippets VSCode plugin</a>.
<br />
Sharing the same name but coming with a VSCode extension, <a href="https://marketplace.visualstudio.com/items?itemName=Zarifprogrammer.tailwind-snippets" target="_blank">Tailwind Snippets</a>, developed by <a href="https://studio.zarifprogrammer.com/" target="_blank">ZS Software Studio</a>, lets you take advantage of the numerous snippets available on <a href="https://tws.zarifprogrammer.com/" target="_blank">its website</a> directly from your editor.
<a href="https://tailwindcomponents.com/" target="_blank">Tailwind Components</a> website.
<br />
On the same note, you'll find <a href="https://tailwindcomponents.com/" target="_blank">Tailwind Components</a> with its community-shared free-to-use components... all the options!
And these are just a few of the ready-made snippets you can copy away. Do you have a favourite one, or am I missing out on something good? Please share!
<hr />
I hope you found some good resources in this article, and that ideally you might have learned a thing or two. If you have any questions or additional tips, drop a comment below!
## Sources and inspiration
- <a href="https://marketplace.visualstudio.com/items?itemName=bradlc.vscode-tailwindcss" target="_blank">Tailwind CSS Intellisense</a> by <a href="https://tailwindcss.com/" target="_blank">Tailwind CSS</a>
- <a href="https://marketplace.visualstudio.com/items?itemName=stivo.tailwind-fold" target="_blank">Tailwind Fold</a> by <a href="https://marketplace.visualstudio.com/publishers/stivo" target="_blank">Stivo</a>
- <a href="https://flaviocopes.com/hiding-classes-in-vs-code/" target="_blank">Hiding classes in VSCode</a> by <a href="https://flaviocopes.com/" target="_blank">Flavio Copes</a>
- <a href="https://www.rootstrap.com/blog/how-to-use-module-scss-with-tailwind-in-next-js" target="_blank">How to Use Module SCSS with Tailwind in Next.js</a> by Joaquin Collado via <a href="https://www.rootstrap.com/" target="_blank">Rootstrap</a>
- <a href="https://chat.openai.com/" target="_blank">ChatGPT</a> prompt:
`Can you show me what order would the class list (handled by Tailwind Raw Reorder) have?`
- <a href="https://marketplace.visualstudio.com/items?itemName=Trapfether.tailwind-raw-reorder" target="_blank">Tailwind Raw Reorder</a> by <a href="https://marketplace.visualstudio.com/publishers/Trapfether" target="_blank">Andrew Trefethen</a>
- <a href="https://www.npmjs.com/package/tailwind-merge" target="_blank">`tailwind-merge` npm package</a> by <a href="https://github.com/dcastil" target="_blank">Dany Castillo</a>
- <a href="https://ui.shadcn.com/" target="_blank">shadcn/ui</a> main website
- <a href="https://medium.com/@pablo.haller/something-i-dont-like-from-shadcn-ui-3b71c9080a7d" target="_blank">Something I don’t like from shadcn/ui</a> by <a href="https://medium.com/@pablo.haller" target="_blank">Pablo Haller</a>
- <a href="https://flowbite.com/" target="_blank">Flowbite</a> by <a href="https://flowbite.com/blog/author/zoltan/" target="_blank">Zoltán Szőgyényi</a>
- <a href="https://www.figma.com/community/file/1179442320711977498" target="_blank">Flowbite Design System</a> on <a href="https://www.figma.com" target="_blank">Figma</a>
- <a href="https://github.com/tailwindlabs/tailwindcss-forms/tree/master?tab=readme-ov-file" target="_blank">Tailwind Forms documentation</a>
- <a href="https://tailwindui.com/components" target="_blank">Tailwind UI</a>
- <a href="https://react-tailwind-snippets.vercel.app/" target="_blank">Tailwind Snippets</a>
- <a href="https://tws.zarifprogrammer.com/" target="_blank">Tailwind Snippets</a>, the one that offers the <a href="https://marketplace.visualstudio.com/items?itemName=Zarifprogrammer.tailwind-snippets" target="_blank">Tailwind Snippets VSCode plugin</a>
- <a href="https://tailwindcomponents.com/" target="_blank">Tailwind Components</a>
- Cover: <a href="https://www.freepik.com/free-ai-image/view-3d-young-school-student_133758575.htm" target="_blank">3d young school student</a> by <a href="https://freepik.com" target="_blank">Freepik</a>, <a href="https://www.freepik.com/free-vector/flat-abstract-wireframe-background_16133537.htm" target="_blank">Flat abstract wireframe background</a> by <a href="https://freepik.com" target="_blank">Freepik</a>, Tailwind logotype from <a href="https://tailwindcss.com/brand" target="_blank">Tailwind Official Brand page</a>, semi-random coding text generated with <a href="https://carbon.now.sh" target="_blank">Carbon</a>
<hr />
Originally posted in <a href="https://oh-no.ooo">oh-no.ooo</a> (<a href="https://www.oh-no.ooo/articles/level-up-your-tailwind-game">Level up your Tailwind game</a>), my personal website. | mahdava |
1,872,501 | How to Create a Simple Web App with Flask for Python Beginners (Bite-size Article) | Introduction I am usually a programmer who focuses on web development. However, recently,... | 0 | 2024-06-07T20:27:14 | https://dev.to/koshirok096/how-to-create-a-simple-web-app-with-flask-for-python-beginners-bite-size-article-32ja | beginners, flask, python | # Introduction
I am usually a programmer who focuses on web development. However, recently, I've had more opportunities to use Python for various reasons (though I'm still a beginner with Python). Given my background in web development, I started to wonder, "Is it possible to run functions created in Python on the web?"
To solve this question, I did some research and found that using a micro web framework called Flask, it is quite easy to run Python functions on the web. In this post, I'd like to share how to use Flask with you all.

# Creating the Project Directory and Setting Up the Virtual Environment
First, create a directory for your project. Open your terminal or command prompt and run the following commands:
```sh
mkdir my_flask_app
cd my_flask_app
```
Next, create a Python virtual environment to manage dependencies for your project:
```sh
python -m venv venv
```
Then, activate the virtual environment.
## On Windows:
```shell
venv\Scripts\activate
```
## On macOS/Linux:
```shell
source venv/bin/activate
```

# Installing Flask
With the virtual environment activated, install Flask:
```sh
pip install flask
```
# Creating the requirements.txt File
Save the current environment's dependencies to a requirements.txt file:
```sh
pip freeze > requirements.txt
```
# Setting Up the Project Directory Structure
Create the directories and files using the following commands:
```sh
mkdir static templates
touch app.py config.py
```
Place static files like CSS, JavaScript, and images in the static directory. HTML templates should be placed in the templates directory.
# Basic Flask Application Setup and Creating Template Files
Edit the app.py file to create a basic Flask application:
```python
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def home():
    return render_template('index.html', message="Heyyy, Flask!")

if __name__ == '__main__':
    app.run(debug=True)
```
Next, create the index.html file in the templates directory and enter the following code:
```html
<!doctype html>
<html>
  <head><title>Flask App</title></head>
  <body>
    <h1>{{ message }}</h1>
  </body>
</html>
```
This time, let's simply display a message.
# Running the Application
In your terminal or command prompt, run the following command:
```sh
python app.py
```
When you run the application, the development server starts and you can access the page at `http://127.0.0.1:5000` in your browser. If everything is set up correctly, you should see the message "Heyyy, Flask!" displayed.
The completed directory structure will look like this:
```bash
/my_flask_app
│
├── app.py
├── config.py
├── requirements.txt
├── /static
│   ├── /css
│   ├── /js
│   └── /images
│
└── /templates
    └── index.html
```
With this, we have created a basic project directory for Flask and run a simple application. This article only covered displaying a message on a page, but for those interested, you can further expand the project and add more features.

# Conclusion
Today I introduced how to create a simple web application using Flask, especially for Python beginners. Recently, I've been developing some business tools using Python and running them in a local environment. I wondered if I could get these tools to run in a browser, which led me to explore and learn about Flask. I wanted to share what I’ve learned with everyone.
Even though my experience with Python is still relatively new, I found setting up an environment with Flask to be straightforward and user-friendly. If you're interested, I encourage you to follow these steps and try it out for yourself. Feel free to expand and add more features as you see fit.
Thank you for reading!
| koshirok096 |
1,880,816 | Vengo AI - Create and monetize your AI identity with one line of code | Vengo AI is a cutting-edge B2B SaaS platform that democratizes AI creation, making it accessible for... | 0 | 2024-06-07T20:24:54 | https://dev.to/vengo-ai/vengo-ai-create-and-monetize-your-ai-identity-with-one-line-of-code-3o52 | ai, b2b, saas, marketplace | Vengo AI is a cutting-edge B2B SaaS platform that democratizes AI creation, making it accessible for everyone, from influencers and brands to entrepreneurs and businesses. Our proprietary system allows users to effortlessly integrate sophisticated AI identities into their websites with just one line of code. By joining our innovative community, members unlock the potential to create, customize, and monetize their digital twins, significantly enhancing their digital presence and generating new passive income streams. Experience the future of AI with Vengo AI, where technology meets simplicity and profitability.
[https://vengoai.com](https://vengoai.com) | vengo-ai |
1,880,815 | Introducing Comment Monk: Simple comment hosting system for static blogs and websites | I wanted to implement a comment hosting system for my static blog https://prahladyeri.github.io, just... | 0 | 2024-06-07T20:24:26 | https://prahladyeri.github.io/blog/2024/06/intoducing-comment-monk.html | I wanted to implement a comment hosting system for my static blog <https://prahladyeri.github.io>, just basic Wordpress.org style commenting feature with user's name, website, etc., no complicated logins or sign-ups or third-party platforms. The user reads your blog, posts a comment, and you approve from the backend (or alternatively, it gets auto-approved and you get an email notification). As simple as that!
Since GitHub Pages doesn't provide any backend PHP scripting facility, I had to develop a whole backend app along with a frontend ECMAScript file which could be plugged into a `div` block at the end of a blog post, a space typically reserved for comments. [Comment-Monk](https://github.com/prahladyeri/comment-monk/) is the result of that effort. I have made this app open source and put it on GitHub so that it can be used by as many folks as possible. To use this app for your own static blog, just download the repo and deploy it to a PHP web hosting service. It's a very light script with an SQLite backend, intentionally kept small enough to be deployed to one of those cheap (even free) PHP hosting facilities.
Once you start the app, it takes you to the Install page where you can register with your details and credentials using which you can login to the app and administer it as a super user. It also asks the website or domain of your blog where you'll host the commenting system.

Once you log in and go to the home page, you can see this screen where you can view and manage your comments. Right now, you can just view and delete your comments, but more features are en route in the upcoming versions. You can also set your user preferences from the "Actions" menu on the top right.

Most importantly, you can click on the "Client Snippet" button which will guide you to implement your frontend HTML code to embed the comments.
Once you do that, your static blog should look something like this:

At the place where you added the `script` tag, you should be able to see a ready comment block showing all existing comments along with a submit form to post one.
Your audience can read your content and be able to post a comment. The backend validates the URI (Uniform Resource Identifier) and if a registered domain is found, creates an entry for that comment which will be reflected in the administrator's (YOUR) dashboard. I think this is almost as simple as it could be!
The comments block has a very basic and bland look by default, but you can customize it fully by editing `/static/cm-client.css` on the backend, which is used for styling it.
I hope you will find this system useful for your static blog. If you face any issue, don't forget to raise it on the [github tracker](https://github.com/prahladyeri/comment-monk/). Happy Coding! | prahladyeri | |
1,875,724 | Computer Vision Meetup: Lessons Learned fine-tuning Llama2 for Autonomous Agents | In this talk, Rahul Parundekar, Founder of A.I. Hero, Inc. does a deep dive into the practicalities... | 0 | 2024-06-07T20:19:30 | https://dev.to/voxel51/computer-vision-meetup-lessons-learned-fine-tuning-llama2-for-autonomous-agents-1jk3 | computervision, ai, machinelearning, datascience | In this talk, [Rahul Parundekar](https://www.linkedin.com/in/rparundekar), Founder of A.I. Hero, Inc. does a deep dive into the practicalities and nuances of making LLMs more effective and efficient. He’ll share hard-earned lessons from the trenches of LLMOps on Kubernetes, covering everything from the critical importance of data quality to the choice of fine-tuning techniques like LoRA and QLoRA. Rahul will share insights into the quirks of fine-tuning LLMs like Llama2, the need for looking beyond loss metrics and benchmarks for model performance, and the pivotal role of iterative improvement through user feedback – all learned through his work on fine-tuning an LLM for retrieval-augmented generation and autonomous agents. Whether you’re a seasoned AI professional or just starting, this talk will equip you with the knowledge of when and why you should fine-tune, to the long-term strategies to push the boundaries of what’s possible with LLMs, to building a performant framework on top of Kubernetes for fine-tuning at scale.
Speaker: Rahul Parundekar is the founder of A.I. Hero, Inc., a seasoned engineer, and architect with over 15 years of experience in AI development, focusing on Machine Learning and Large Language Model Operations (MLOps and LLMOps). AI Hero automates mundane enterprise tasks through agents, utilizing a framework for fine-tuning LLMs with both open and closed-source models to enhance agent autonomy.
Not a Meetup member? Sign up to attend the next event:
https://voxel51.com/computer-vision-ai-meetups/
Recorded on May 30, 2024 at the AI, Machine Learning and Data Science Meetup. | jguerrero-voxel51 |
1,880,805 | Next Generation SQL Injection: Github Actions Edition | In the evolving landscape of software development, Continuous Integration and Continuous Deployment... | 0 | 2024-06-07T20:16:53 | https://mymakerspace.substack.com/p/next-generation-sql-injection-github | githubactions, security, devops | In the evolving landscape of software development, Continuous Integration and Continuous Deployment (CI/CD) tools like GitHub Actions have become indispensable. However, with great power comes great responsibility, and it's crucial to be aware of potential security pitfalls. One such vulnerability is treating untrusted inputs, such as branch names or pull request (PR) titles, as static parameters.
Spoiler alert, **they should NOT be trusted**
I enjoy puzzles, so here goes one:
```
steps:
  - name: Generate summary
    run: |
      echo "Pull Request for [${{ github.event.pull_request.title }}](https://github.com/${{ github.repository }}/pull/${{ github.event.pull_request.number }}) has been updated 🎉" >> $GITHUB_STEP_SUMMARY
      echo "Image tagged **v${{ needs.determine_app_version.outputs.app_version }}** has been built and pushed to the registry." >> $GITHUB_STEP_SUMMARY
```

This will generate a friendly workflow summary for the pull request.
Can you spot the vulnerability?
## The Problem
When a developer clicks on the Revert PR button in GitHub, the new PR title is `Revert "<OLD TITLE>"`. This breaks the Generate summary step.
Try again if you haven’t spotted the vulnerability
The vulnerability lies in how `github.event.pull_request.title` input is handled. **GitHub Actions expressions dynamically generate code**. Unfortunately, if the PR title contains any special characters or unexpected input, it can cause the script to fail or behave unexpectedly.
## The Impact
When a developer clicks the Revert PR button, the PR title becomes `Revert "<OLD TITLE>"`, and the echo command in the Generate summary step generates this bash command:
```
echo "Pull Request for [Revert "<OLD TITLE>"](https://github.com/${{ github.repository }}/pull/${{ github.event.pull_request.number }}) has been updated 🎉" >> $GITHUB_STEP_SUMMARY
```
The double quotes within the PR title close the `echo` string early. Worse, this allows bad actors to craft a title that could be used to take full control of the runner.
## The Solution
To prevent this issue, one approach is to use `env` as an input “encoder” and avoid expanding untrusted `${{ }}` expressions directly inside `run` scripts. For example:
```
env:
TITLE: ${{ github.event.pull_request.title }}
steps:
- name: Generate summary
run: |
echo "Pull Request for [$TITLE](https://github.com/${{ github.repository }}/pull/${{ github.event.pull_request.number }}) has been updated 🎉" >> $GITHUB_STEP_SUMMARY
```
By using the `env` context to store untrusted inputs, you avoid script-breaking issues caused by special characters that GitHub Actions would otherwise render directly into the script.
Leave a comment if you can think on a different way to solve this problem! | alonch |
1,880,803 | Giving Back to the Boomi Community: How Your Contributions Make a Difference | Everybody wants to be part of a community—a group of people who validate a person’s thoughts and... | 0 | 2024-06-07T20:11:02 | https://dev.to/eyer-ai/giving-back-to-the-boomi-community-how-your-contributions-make-a-difference-3g4a | community, boomi | Everybody wants to be part of a community—a group of people who validate a person’s thoughts and feelings and help them out during difficult situations. In the [Boomi community by Eyer](https://discord.gg/SyTRyWpbgq), this group of people are Boomi engineers.
The community provides a sense of belonging and support that transcends individual achievements, fosters a culture of collaboration, and forges friendships that can last a lifetime. However, as with any partnership or relationship of any substance, the community and its people thrive on the principle of give and take.
In this article, you will learn what giving back to the Boomi community can do for you and, more importantly, the best way to give back.
## What do you gain from giving back to the Boomi community?
Helping your community is undeniably a good thing, but sometimes that “warm fuzzy feeling" isn't enough motivation for everyone. The good news is that giving back offers a ton of benefits beyond just feeling good. Here are some of the advantages you can gain by getting involved:
* **Skills development**: Helping out community members is a guaranteed way to enhance your Boomi integration skills. It's a fantastic opportunity to learn new techniques, refine existing skills, and apply your experience to solve interesting problems in new ways.
* **Networking opportunities**: Giving back and volunteering are amazing ways to meet like-minded people who share your passion for Boomi and mentorship. These activities guarantee valuable connections that can benefit you personally and professionally.
* **Build a professional brand**: By consistently sharing your Boomi knowledge or volunteering to help others, you'll rapidly establish yourself as the go-to expert for all things Boomi. This includes opportunities, inquiries, and much more, and the best part is that your reputation extends beyond the Boomi ecosystem!
* **Leaving a positive impact**: Contributing to the Boomi community is a powerful way to leave a positive mark. Sharing your knowledge and skills empowers others to grow and achieve their goals. Your insights can inspire and uplift fellow developers, creating a supportive and innovative space for everyone. This not only strengthens the entire community but also solidifies your reputation as a valuable and generous member.
## What are the different ways to contribute to the Boomi community?
While how you contribute to a community can vary based on its needs (some might value open-source contributions or leadership roles), here are some universal ways to give back, especially within the Boomi community by Eyer:
**Time commitment:** Actively participate in discussions within the Boomi community by Eyer. Provide insightful solutions and clear explanations to Boomi developers.
Volunteer at Boomi-sponsored, Eyer-sponsored, and Boomi-related events in your local tech community. Lend a hand with logistics and setup or even lead breakout sessions on specific Boomi topics.
**Skill-based contributions:** One of the most efficient ways to become a thought leader in the Boomi community is to share your expertise. You can create blog posts, tutorials, or short guides addressing common integration challenges other Boomi developers face.
Guide and support fellow Boomi users, particularly those new to the platform. Offer advice, answer questions on the Boomi community forum or the [Boomi community by Eyer discord](https://discord.gg/SyTRyWpbgq), and help them navigate the integration world.
**Acts of kindness (within the Boomiverse):** Finally, if you're short on time and can't commit as much as you'd like, you can still contribute by recognizing valuable contributions from other members: upvote their responses in [forum discussions](https://community.boomi.com/s/). This helps elevate quality content and ensures others find the information they need.
Extend a warm welcome to new members in the forums or online events. Offer to answer basic questions and help them navigate the Boomiverse’s resources.
Additionally, when someone goes above and beyond to help you, acknowledge their effort with a positive comment or a "thank you."
## Wrapping up
Hopefully, this article was enough to incentivize you to contribute more to your Boomi communities. Helping Boomi developers out, volunteering, and showing gratitude in the smallest ways are some of the best ways to provide value.
Sure, the idea is to give without expecting anything in return, but you can think of the rewards as a natural consequence of doing good. People will trust your Boomi expertise more because they've seen it in action and benefited from it. They can vouch for your character because you chose to volunteer when you didn't have to. This is an amazing position in a world run by referrals and recommendations.
So, take the first steps, join the [Boomi community by Eyer](https://discord.gg/SyTRyWpbgq), and start your journey today!
| amaraiheanacho |
1,880,802 | Getting Started with Networking and Sockets | In the realm of software development and network engineering, understanding the fundamentals of... | 27,728 | 2024-06-07T20:02:59 | https://www.kungfudev.com/blog/2024/06/07/getting-started-with-net-and-sockets | linux, rust, socket, network |
In the realm of software development and network engineering, understanding the fundamentals of sockets and networking is invaluable, regardless of your specific focus within the industry. This article aims to provide a comprehensive overview of these essential concepts, facilitating a clearer understanding of their importance and functionality for engineers. Let's delve into the basics and uncover the foundational principles that drive modern networking.
To keep things simple, we will start with basic examples and gradually introduce more complexity. This approach allows us to build a strong foundation without getting overwhelmed. In future articles, we will explore more advanced topics.
## Understanding the OSI and TCP/IP Models: Foundations of Networking
### OSI model
We can't start discussing basic networking concepts without mentioning our old friend, the OSI model. For two computers to communicate effectively, they need to speak the same language, and the OSI model provides that shared structure. This foundational framework has been a cornerstone in understanding and implementing network communications. By breaking down the complex process of data transmission into seven manageable layers, the OSI model provides a clear and organized approach to networking that has stood the test of time.
The OSI model simplifies the complex process of data transmission by providing a universal language for networking. Each layer of the OSI model has specific responsibilities and capabilities, ensuring that technologies can interact seamlessly. Higher layers benefit from this abstraction, utilizing lower-layer functions without needing to understand their inner workings.
### OSI model layers

- **Physical Layer**: This is the lowest layer, dealing with the physical connection between devices. It is responsible for transmitting raw bit streams over a physical medium, such as cables or wireless signals.
- **Data-Link Layer**: This layer manages the transfer of data between two directly connected nodes. It handles error detection and correction, as well as flow control, ensuring reliable communication over the physical medium.
- **Network Layer**: This layer is responsible for routing data from the source to the destination across multiple networks. It uses logical addressing (such as IP addresses) to determine the best path for data transmission.
- **Transport Layer**: This layer ensures reliable data transfer between systems. It provides error detection and recovery, flow control, and ensures complete data transfer through protocols like TCP (Transmission Control Protocol).
- **Session Layer**: This layer manages sessions between applications. It establishes, maintains, and terminates connections, ensuring data is synchronized and properly sequenced.
- **Presentation Layer**: This layer translates data between the application layer and the network. It handles data encryption, compression, and translation, ensuring that data is presented in a readable format.
- **Application Layer**: This is the topmost layer, which interacts directly with user applications. It provides network services to end-user applications, such as web browsers and email clients.
> Have you ever heard the joke about Layer 8? If not, for those who enjoy a bit of networking humor, there's often mention of "Layer 8" — the user layer. It's a playful reminder that no matter how perfect the technical setup, the human element can introduce its own unique challenges!
When data is communicated through protocol layers, it's sent in small segments called packets. Each packet includes implementations from these protocol layers. Starting at the application layer, the data is encapsulated by the presentation layer, which is then encapsulated by the session layer, followed by the transport layer, and so on. This process is known as encapsulation.
Each layer adds a header and a body to the packet. The header contains protocol-specific information necessary for that layer, while the body contains the encapsulated data, including headers from the previous layers. This encapsulation technique can be visualized like the layers of an onion.
The illustration below demonstrates the encapsulation process as data moves from the application layer of one computer, across the Internet, to the application layer of the other computer.
```txt
Computer A Application Computer B Application
| ▲
▼ |
+---------------------+ +---------------------+
| 7. Application | | 1. Physical |
+---------------------+ +---------------------+
| |
+---------------------+ +---------------------+
| 6. Presentation | | 2. Data-Link |
+---------------------+ +---------------------+
| |
+---------------------+ +---------------------+
| 5. Session | | 3. Network |
+---------------------+ +---------------------+
| |
+---------------------+ +---------------------+
| 4. Transport | | 4. Transport |
+---------------------+ +---------------------+
| |
+---------------------+ +---------------------+
| 3. Network | | 5. Session |
+---------------------+ +---------------------+
| |
+---------------------+ +---------------------+
| 2. Data-Link | | 6. Presentation |
+---------------------+ +---------------------+
| |
+---------------------+ +---------------------+
| 1. Physical | | 7. Application |
+---------------------+ +---------------------+
| |
+--------------------------------------+
| Internet |
+--------------------------------------+
```
### TCP/IP as an Alternative to the OSI Model
While the OSI model provides a detailed framework for understanding network communications, TCP/IP is the more practical and widely-used model in real-world networking. TCP/IP simplifies the structure into four layers and serves as the foundation for the Internet, emphasizing robust, scalable communication across diverse networks.
Imagine you want to visit a website by entering a URL into your browser. Here’s how TCP/IP handles this request:
- **Application Layer**: Your web browser (application) generates an HTTP request for the webpage. In the TCP/IP model, the Application layer combines the functions of the Application, Presentation, and Session layers of the OSI model. This means it handles not only the user interface and communication protocols (Application layer in OSI), but also data translation, encryption, and compression (Presentation layer in OSI), as well as managing sessions and dialogues between the devices (Session layer in OSI). This consolidation simplifies the model and aligns it more closely with real-world protocols and applications.
- **Transport Layer**: The HTTP request is handed to the TCP protocol, which breaks it into smaller packets, adds a header with port information, and ensures reliable transmission. This layer matches the Transport layer in the OSI model.
- **Internet Layer**: Each packet is then passed to the IP protocol, which adds another header containing the IP addresses of both the source (your computer) and the destination (the web server). This layer corresponds to the Network layer in the OSI model.
- **Network Access Layer**: The packets are then sent over the physical network (like Ethernet or Wi-Fi), where they travel through various routers and switches to reach the web server. This layer combines the Data Link and Physical layers of the OSI model.
Both the OSI and TCP/IP models are open standards. They’re designed so that anyone can use them, or further build them out to meet specific requirements.
Programs that use networking, such as web browsers and email clients, rely on the operating system to manage network communications. The operating system handles the intricate details of network encapsulation, making it easier for developers to write network programs. They simply need to use the network interface provided by the OS, without worrying about the underlying complexities.
## Using Sockets for communication
To interface with the network or enable inter-process communication (IPC), developers often use sockets. Sockets serve as standardized endpoints for sending and receiving data, whether across a network or between processes on the same machine. They provide a way for programs to communicate, abstracting the complexities of the underlying protocol layers. By using socket APIs provided by the operating system, developers can focus on building their application logic while the OS manages the detailed network and IPC operations. This makes tasks like creating a web server, a chat application, or facilitating communication between processes more straightforward and accessible. The most common types of sockets are stream sockets, which provide a reliable connection-oriented service, and datagram sockets, which offer a connectionless service.
> For IPC communication offered by the Unix domain socket using the `AF_UNIX` family, we already explore this in the article below. We may explore it again from another perspective in future parts of this article.
> https://www.kungfudev.com/blog/2022/12/05/understanding-unix-domain-sockets-in-golang
`Stream sockets` and `datagram sockets` are the two most common types of sockets. Stream sockets, which use the TCP protocol, provide a reliable, connection-oriented service. This means that data is transmitted in a continuous stream, ensuring that it arrives in the correct order and without errors. Examples of stream sockets in action include web servers, where the integrity and order of the data (such as HTML pages) are crucial for proper rendering, and email clients, which require reliable data transmission to ensure messages are received intact. In contrast, datagram sockets use the UDP protocol and offer a connectionless service. This allows for faster data transmission but without the guarantees of order and reliability. An example of datagram sockets is in online gaming or live video streaming, where speed is more critical than perfect data accuracy, and occasional data loss is acceptable.
I would bet that at least once you have seen the image below while learning about networking, sockets, or reading some network articles. This diagram illustrates the basic flow of socket communication between a server and a client, highlighting the key function calls and interactions involved in establishing a connection and exchanging data.

Sockets behave similarly to files because of the Unix philosophy of treating everything as a file. This design choice provides a consistent interface for performing input and output operations across different types of data streams, simplifying the programming model.
Sockets and files both provide a stream of bytes that can be read from or written to. This stream abstraction fits well with many types of I/O operations, whether they involve local files, remote network communication, or inter-process communication.
When a socket is created with the `socket()` function, it requires parameters such as the domain (e.g., `AF_INET` for IPv4), the type (e.g., `SOCK_STREAM` for TCP), and the protocol (usually 0 to select the default protocol for the given type). The socket is then assigned a file descriptor, an integer that uniquely identifies the socket within the operating system.
> A file descriptor is a unique integer assigned by the operating system to an open file or socket, serving as an abstract indicator for accessing various I/O resources, such as files, sockets, and pipes. In simple terms, when you open a file, the operating system creates an entry to represent that file and stores information about it. If there are N files open, there will be N corresponding entries in the operating system, each represented by an integer like 20, 21, etc. This integer is the file descriptor. It uniquely identifies an open file within a process, allowing the process to perform operations like reading, writing, and closing files using standardized functions. By managing resources this way, the operating system ensures efficient communication between processes and I/O resources.
>
> [More info](https://www.lenovo.com/ca/en/glossary/file-descriptor/)
The `bind()` function associates the socket with a specific local `address` and `port`, which are provided as arguments. The `listen()` function marks the socket as a passive socket that will be used to accept incoming connection requests, taking an argument that specifies the maximum number of pending connections. The `accept()` function extracts the first connection request on the `queue of pending connections`, creating a new socket `file descriptor` for the connection.
On the client side, the `connect()` function is used to establish a connection to the server, requiring the server's address and port as arguments. Both the client and server can then use `send()` and `recv()`, or the analogous `write()` and `read()`, to transmit and receive data. The `close()` function is used to close the socket, releasing the file descriptor.
This design simplifies the API and makes it easier for developers to handle network communication using familiar file operations, streamlining the development process and making the code more intuitive and maintainable. By treating sockets as files, the operating system can efficiently manage various types of I/O operations using a unified interface, leveraging existing mechanisms for buffering, blocking, and error handling that are well-established for file systems.
As we mentioned, there are a couple of socket [types and families](https://man7.org/linux/man-pages/man2/socket.2.html), but for now, for socket families we are going to focus on `AF_INET` for IPv4, `AF_INET6` for IPv6, and as we mentioned the `AF_UNIX` for local communication within the same host. And regarding type, we will discuss the two primary types of sockets: stream sockets and datagram sockets.
### Socket in Actions
To illustrate what we have learned so far, we will write a couple of programs in Rust. We will use the `nix` crate, which provides friendly unix platform APIs (Linux, Darwin) for working with the socket API. While the Rust standard library offers standard socket functionality in the `std`, the `nix` crate provides more comprehensive and idiomatic access to lower-level unix system calls, making it easier to work with advanced socket features and perfect for explaining what we have seen until now.
### Server
So as we saw previously, to create a server the sequence of functions are: `socket()`, `bind()`, `listen()`, and `accept()`. The first step is to create the socket.
```rust
let socket_fd = socket(
    nix::sys::socket::AddressFamily::Inet, // Socket family
    nix::sys::socket::SockType::Stream,    // Socket type
    nix::sys::socket::SockFlag::empty(),
    None,
)
.expect("Failed to create socket");
```
This code snippet creates a new socket using the `nix` crate in Rust. The `socket()` function call includes several parameters:
- `nix::sys::socket::AddressFamily::Inet`: Specifies the socket family, in this case, `AF_INET`, which is used for IPv4 addresses.
- `nix::sys::socket::SockType::Stream`: Specifies the socket type, `SOCK_STREAM`, which indicates a stream socket using the TCP protocol.
- `nix::sys::socket::SockFlag::empty()`: Indicates that no special flags are set for the socket.
- `None`: Indicates that the default protocol should be used.
After creating the socket, the next step is to bind the socket to a specific address and port. This associates the socket with a particular local endpoint.
```rust
// Create a socket address
let sock_addr =
    SockaddrIn::from_str("127.0.0.1:6797").expect("Failed to create socket address");

// Bind the socket to the address
bind(socket_fd.as_raw_fd(), &sock_addr).expect("Failed to bind socket");
```
This code snippet binds the previously created socket to the local address `127.0.0.1` (localhost) and port `6797`:
- `SockaddrIn::from_str("127.0.0.1:6797")`: Creates a new socket address (`SockaddrIn`) from the string representation of the IPv4 address and port.
- `bind(socket_fd.as_raw_fd(), &sock_addr)`: Binds the socket file descriptor to the specified address. This makes the socket listen for incoming connections on `127.0.0.1:6797`.
> Something to notice here is that we are using IP addresses and ports in a specific format because we are using `AddressFamily::Inet`. Different protocol families have their own ways of defining endpoint addresses. This means the address format can vary depending on the address family, allowing sockets to handle different networking protocols and address formats properly. For this article, we are focusing on the INET family, but in future articles, we will explore other address families in more detail.
After binding the socket to a specific address and port, the next step is to listen for incoming connections. This prepares the socket to accept connection requests from clients.
```rust
// Listen for incoming connections
// The backlog parameter specifies the maximum length of the queue of pending connections
let backlog = Backlog::new(1).expect("Failed to create backlog");
listen(&socket_fd, backlog).expect("Failed to listen for connections");
```
This code snippet sets up the socket to listen for incoming connections:
- `let backlog = Backlog::new(1).expect("Failed to create backlog");`: This line creates a backlog object that specifies the maximum length of the queue of pending connections. In this case, the backlog is set to 1, meaning the socket can queue up to one pending connection.
- `listen(&socket_fd, backlog).expect("Failed to listen for connections");`: This line calls the `listen()` function, which puts the socket into listening mode.
> We use a backlog of 1 because we are keeping the example simple and synchronous for now. In the future, we will introduce an asynchronous runtime to handle multiple connections more efficiently.
>
> The `listen` operation involves the kernel creating two queues for this socket: the **syn** queue and the **accept** queue. To keep this article concise, we will explore these queues in detail in the next article.
At this point, the server socket is ready and waiting for incoming connection requests from clients. The `listen()` function allows the server to queue incoming connections, which can be accepted and processed one by one.
Once the server socket is set to listen for incoming connections, the next step is to accept these connections and handle the data communication with the client.
```rust
// Accept incoming connections
let conn_fd = accept(socket_fd.as_raw_fd()).expect("Failed to accept connection");

// Read data from the connection
let mut buf = [0u8; 1024];
let bytes_read =
    recv(conn_fd, &mut buf, MsgFlags::empty()).expect("Failed to read from connection");
let received_data =
    std::str::from_utf8(&buf[..bytes_read]).expect("Failed to convert received data to string");
println!(
    "Received {} bytes: {:?} repr: {}",
    bytes_read,
    &buf[..bytes_read],
    received_data
);
```
This code snippet demonstrates how to accept an incoming connection and read data from it:
- `let conn_fd = accept(socket_fd.as_raw_fd()).expect("Failed to accept connection");`: This line calls the `accept()` function to accept an incoming connection. It creates a new socket file descriptor for the connection, `conn_fd`.
- `let mut buf = [0u8; 1024];`: This line initializes a buffer to store the incoming data.
- `let bytes_read = recv(conn_fd, &mut buf, MsgFlags::empty()).expect("Failed to read from connection");`: This line reads data from the accepted connection into the buffer. The `recv()` function is used for this purpose, with `MsgFlags::empty()` indicating no special flags. The number of bytes read is stored in `bytes_read`.
- `let received_data = std::str::from_utf8(&buf[..bytes_read]).expect("Failed to convert received data to string");`: This line converts the received byte data into a UTF-8 string. It slices the buffer up to the number of bytes read and converts it.
At this point, the server has accepted an incoming connection and read the data sent by the client. The server can then process this data, echo it back, or perform other actions based on the application logic.
```rust
let bytes_written = send(conn_fd, &buf[..bytes_read], MsgFlags::empty())
    .expect("Failed to write to connection");
```
This code snippet demonstrates how to send data back to the client:
- `let bytes_written = send(conn_fd, &buf[..bytes_read], MsgFlags::empty()).expect("Failed to write to connection");`: This line sends data from the buffer back to the client using the `send()` function. The buffer is sliced to the number of bytes read from the client, ensuring only the received data is sent back. `MsgFlags::empty()` indicates no special flags are used.
> For simplicity, we are using `MsgFlags::empty()`, which indicates no special options are set, allowing the `send()` and `recv()` functions to operate in their default mode. To read more about these [flags ](https://man7.org/linux/man-pages/man2/sendto.2.html).
With this step, the server echoes the received data back to the client, completing a simple round-trip communication. This demonstrates the basic flow of data from client to server and back to client, showcasing the core operations of socket communication.
So putting it all together, we have the complete example for this simple TCP echo. We’ve explained each step, as discussed previously:
```rust
use std::os::fd::AsRawFd;
use std::str::FromStr;

use nix::sys::socket::{
    accept, bind, listen, recv, send, socket, Backlog, MsgFlags, SockaddrIn,
};

fn main() {
    // Create an IPv4 TCP (stream) socket
    let socket_fd = socket(
        nix::sys::socket::AddressFamily::Inet, // Socket family
        nix::sys::socket::SockType::Stream,    // Socket type
        nix::sys::socket::SockFlag::empty(),
        None,
    )
    .expect("Failed to create socket");

    // Create a socket address
    let sock_addr =
        SockaddrIn::from_str("127.0.0.1:6797").expect("Failed to create socket address");

    // Bind the socket to the address
    bind(socket_fd.as_raw_fd(), &sock_addr).expect("Failed to bind socket");

    // Listen for incoming connections
    let backlog = Backlog::new(1).expect("Failed to create backlog");
    listen(&socket_fd, backlog).expect("Failed to listen for connections");

    // Accept an incoming connection
    let conn_fd = accept(socket_fd.as_raw_fd()).expect("Failed to accept connection");

    // Read data from the connection
    let mut buf = [0u8; 1024];
    let bytes_read =
        recv(conn_fd, &mut buf, MsgFlags::empty()).expect("Failed to read from connection");
    let received_data =
        std::str::from_utf8(&buf[..bytes_read]).expect("Failed to convert received data to string");
    println!("Received {} bytes: {}", bytes_read, received_data);

    // Echo back the received data
    let bytes_written = send(conn_fd, &buf[..bytes_read], MsgFlags::empty())
        .expect("Failed to write to connection");
    println!("Sent {} bytes back to client", bytes_written);
}
```
> In Rust, we do not explicitly use the `close` method to close sockets because Rust's ownership and borrowing system automatically handles resource management. When a socket goes out of scope, Rust's memory safety guarantees ensure that the socket is properly closed and resources are freed. This eliminates the need for explicit `close` calls, reducing the risk of resource leaks and making the code cleaner and safer.
Most programming languages provide higher-level abstractions to simplify socket programming, making it more accessible and easier to use. These abstractions often wrap the underlying system calls, handling details, and resource management for you. For example, in Rust, the `std::net` module offers a convenient API for TCP networking:
```rust
use std::net::TcpListener;

fn main() {
    let listener = TcpListener::bind("127.0.0.1:7878").unwrap();

    for stream in listener.incoming() {
        let stream = stream.unwrap();
        println!("Connection established!");
    }
}
```
In this example, `TcpListener::bind` abstracts the complexity of creating and binding a socket, ensuring that the address is correctly formatted and byte-ordered. The `incoming` method returns an iterator over incoming connections, managing the accept loop. This abstraction makes the code more readable and easier to maintain, allowing us to focus on the application logic rather than the intricacies of socket communication.
### Running the server
Now, if we run the server and use `telnet` from another terminal, we will see this in action:
```bash
$ cargo run part-one-server
Socket file descriptor: 3
Socket bound to address: 127.0.0.1:6797
Listening for incoming connections...
received 22 bytes
bytes: [72, 101, 108, 108, 111, 32, 102, 114, 111, 109, 32, 75, 117, 110, 103, 102, 117, 68, 101, 118, 13, 10]
hex repr: ["0x48", "0x65", "0x6c", "0x6c", "0x6f", "0x20", "0x66", "0x72", "0x6f", "0x6d", "0x20", "0x4b", "0x75", "0x6e", "0x67", "0x66", "0x75", "0x44", "0x65", "0x76", "0x0d", "0x0a"]
str repr: "Hello from KungfuDev\r\n"
Sent 22 bytes back to client
```
This output indicates that the server is running, bound to the address `127.0.0.1:6797`, and successfully received and echoed back data from a client.
And in our `telnet` client, we can see the data echoed back.
```bash
telnet localhost 6797
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Hello from KungfuDev
Hello from KungfuDev
Connection closed by foreign host.
```
You can find the code in this repository.
### What about the client?
As you saw in the diagram that illustrates the flow of socket communication, the client, instead of calling `listen`, uses `connect` to initiate a connection to a server socket.
```rust
send(socket_fd.as_raw_fd(), data.as_bytes(), MsgFlags::empty())
.expect("Failed to send data to server");
```
The client uses the `connect` function to establish a connection with the server. After successfully connecting, the client can send data to the server using the `send` function and receive data from the server using the `recv` function.
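The article's examples use Rust and the `nix` crate, but the connect, send, and recv sequence is language-agnostic. Here is a minimal, self-contained sketch of the same flow using Python's standard `socket` module (the address, port, and message are placeholders, not taken from the Rust code; the server runs in a background thread purely so the example is runnable on its own):

```python
import socket
import threading

def echo_server(listener: socket.socket) -> None:
    """Accept one connection and echo back whatever it receives."""
    conn, _addr = listener.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Bind to an ephemeral port so the example does not clash with anything running.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

# Client side: connect(), then send(), then recv() the echoed bytes.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"Hello from KungfuDev\r\n")
    reply = client.recv(1024)

print(reply.decode())
```

The client never calls `listen` or `accept`; it only connects to an address where a server is already listening, mirroring the flow described above.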
You can find the code for this example and future ones in this [repo](https://github.com/douglasmakey/socket_net/tree/main).
## To conclude
In this article, we explored the fundamentals of socket programming using Rust and the `nix` crate. We began by understanding the OSI model and its practical application through the TCP/IP model, laying the groundwork for network communication. We then delved into the sequence of functions necessary to create a server: `socket()`, `bind()`, `listen()`, and `accept()`. By walking through a complete example, we demonstrated how to set up a simple server that listens for incoming connections, receives data, and echoes it back to the client.
Running the provided Rust program and testing it with `telnet` illustrated these concepts in action, showing how data is transmitted and received over a network. Although we kept the example synchronous and used a small backlog for simplicity, this foundation paves the way for more advanced topics such as asynchronous programming and handling multiple connections efficiently.
Stay tuned for future articles where we will introduce asynchronous runtimes and explore additional and advanced socket programming techniques to enhance our networking skills. | douglasmakey |
1,880,801 | Discover Authentic Pakistani Attire at IZEmporium.com: Your Online Destination for Pakistani Dresses in the UK | Are you in search of genuine Pakistani clothing that captivates with its intricate designs and... | 0 | 2024-06-07T20:01:59 | https://dev.to/ahmed_ali_16a2b97a9fe6cd6/discover-authentic-pakistani-attire-at-izemporiumcom-your-online-destination-for-pakistani-dresses-in-the-uk-7k9 | pakistaniclothes, pakistanidresses, salwarkameez, weddingwear | Are you in search of genuine Pakistani clothing that captivates with its intricate designs and vibrant colors? Look no further than IZEmporium.com, your premier online store for the finest [Pakistani dresses](https://izemporium.com/) available in the UK. Whether you are preparing for a festive occasion or simply wish to add some cultural flair to your wardrobe, IZEmporium.com offers an extensive range of options that blend traditional craftsmanship with contemporary fashion sensibilities.
## A Rich Tapestry of Choices
IZEmporium.com takes pride in its vast collection that caters to all tastes and occasions. From luxurious bridal wear to elegant casual outfits, the website is a treasure trove of Pakistani dresses. Each garment is a piece of art, adorned with exquisite embroideries, embellishments, and fabrics that speak volumes of the rich Pakistani heritage. The selection includes but is not limited to, graceful shalwar kameez, enchanting sarees, and lehengas that radiate with elegance.
## Quality and Authenticity
When it comes to authenticity and quality, IZEmporium.com stands out. Each dress is crafted using high-quality materials that ensure durability and comfort. Authentic fabrics like silk, chiffon, and cotton are used, which are both luxurious and practical for different weather conditions. The embroideries and embellishments are done with meticulous care, reflecting the skilled craftsmanship that Pakistani fashion is renowned for.
## Shopping Experience and Customer Service
Shopping for Pakistani clothes online in the UK has never been easier, thanks to IZEmporium.com’s user-friendly platform. The website is designed to provide a seamless shopping experience, allowing customers to browse through collections effortlessly and make purchases with just a few clicks. Moreover, detailed product descriptions and high-resolution images help customers make informed decisions about their purchases.
Understanding the importance of customer satisfaction, IZEmporium.com offers excellent customer service. Their team is readily available to assist with any queries related to size, fit, or material, ensuring that you find the perfect outfit for your needs.
## Fast UK Delivery
One of the key benefits of shopping at IZEmporium.com is the promise of fast delivery across the UK. Regardless of where you are located, the platform ensures that your chosen Pakistani dresses reach you in time for your special occasions. This reliability makes IZEmporium.com a favorite among Pakistani expatriates and enthusiasts of South Asian fashion in the UK.
## Cultural Connection
For many, IZEmporium.com is more than just a clothing store; it is a link to their cultural roots. In a foreign land, wearing clothes from one’s homeland can be a profound expression of identity and pride. IZEmporium.com facilitates this connection by providing authentic Pakistani attire that one might otherwise find hard to access in the UK.
## Conclusion
Whether you are attending a wedding, celebrating a festival, or just in the mood to flaunt a beautiful Pakistani outfit, IZEmporium.com has something special for you. It celebrates the rich Pakistani culture through its wide range of clothing that appeals to both traditional and modern aesthetics. So, next time you think of enhancing your wardrobe with something uniquely Pakistani, consider IZEmporium.com – your go-to source for Pakistani dresses online in the UK. Shop today and experience the perfect blend of culture, quality, and style at your fingertips!
| ahmed_ali_16a2b97a9fe6cd6 |
1,880,800 | Normalization and Normal Forms (1NF, 2NF, 3NF) | Introduction Normalization is a systematic approach to organizing data in a database to... | 0 | 2024-06-07T19:58:19 | https://dev.to/kellyblaire/normalization-and-normal-forms-1nf-2nf-3nf-240a | ## Introduction
Normalization is a systematic approach to organizing data in a database to reduce redundancy and improve data integrity. The process involves decomposing a table into smaller, related tables without losing data. This article will explain the concepts of normalization and the different normal forms (1NF, 2NF, 3NF), providing clear illustrations and examples to help students understand these concepts thoroughly.
## What is Normalization?
Normalization involves structuring a relational database in a way that minimizes redundancy and dependency by organizing fields and table relations. The primary goals of normalization are to:
- Eliminate redundant data.
- Ensure data dependencies make sense.
- Reduce the potential for anomalies during data operations (insertion, update, deletion).
## Normal Forms
Normal forms are a series of guidelines that a relational database must follow to be considered normalized. Each normal form builds on the previous one, creating a series of increasingly stringent rules.
### First Normal Form (1NF)
A table is in the First Normal Form if:
1. All the values in a table are atomic (indivisible).
2. Each column contains values of a single type.
3. Each column has a unique name.
4. The order in which data is stored does not matter.
#### Example of 1NF
Consider a table that stores information about students and their courses:
| StudentID | StudentName | Courses |
|-----------|-------------|-------------------|
| 1 | John Doe | Math, Science |
| 2 | Jane Smith | History, Math |
This table is not in 1NF because the `Courses` column contains multiple values. To convert it to 1NF, we need to ensure that each column contains atomic values:
| StudentID | StudentName | Course |
|-----------|-------------|---------|
| 1 | John Doe | Math |
| 1 | John Doe | Science |
| 2 | Jane Smith | History |
| 2 | Jane Smith | Math |
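The 1NF conversion above can be expressed as a small transformation: split each multi-valued `Courses` cell into one atomic row per course. A Python sketch using the sample data from the tables (not a real database):

```python
# Un-normalized rows: the Courses column holds multiple values per cell.
students = [
    {"StudentID": 1, "StudentName": "John Doe",   "Courses": "Math, Science"},
    {"StudentID": 2, "StudentName": "Jane Smith", "Courses": "History, Math"},
]

# 1NF: emit one row per (student, course) pair so every value is atomic.
rows_1nf = [
    {"StudentID": s["StudentID"], "StudentName": s["StudentName"], "Course": c.strip()}
    for s in students
    for c in s["Courses"].split(",")
]

for row in rows_1nf:
    print(row)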
### Second Normal Form (2NF)
A table is in the Second Normal Form if:
1. It is in 1NF.
2. All non-key attributes are fully functionally dependent on the primary key.
This means that there should be no partial dependency of any column on the primary key. In other words, all columns must depend on the entire primary key.
#### Example of 2NF
Consider the following table that stores information about students, courses, and instructors:
| StudentID | CourseID | StudentName | CourseName | InstructorName |
|-----------|----------|-------------|------------|----------------|
| 1 | 101 | John Doe | Math | Dr. Smith |
| 1 | 102 | John Doe | Science | Dr. Jones |
| 2 | 101 | Jane Smith | Math | Dr. Smith |
| 2 | 103 | Jane Smith | History | Dr. Brown |
This table is in 1NF but not in 2NF because `StudentName` depends only on `StudentID`, while `CourseName` and `InstructorName` depend only on `CourseID`, not on the combination of `StudentID` and `CourseID`. To convert it to 2NF, we decompose the table into three tables:
**Students Table:**
| StudentID | StudentName |
|-----------|-------------|
| 1 | John Doe |
| 2 | Jane Smith |
**Courses Table:**
| CourseID | CourseName | InstructorName |
|----------|------------|----------------|
| 101 | Math | Dr. Smith |
| 102 | Science | Dr. Jones |
| 103 | History | Dr. Brown |
**Enrollment Table:**
| StudentID | CourseID |
|-----------|----------|
| 1 | 101 |
| 1 | 102 |
| 2 | 101 |
| 2 | 103 |
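A quick way to confirm that this decomposition loses no information is to join the three tables back together on their keys and check that the original rows reappear. A Python sketch using the sample data above (dictionaries stand in for the tables):

```python
# The decomposed 2NF tables, keyed by their primary keys.
students = {1: "John Doe", 2: "Jane Smith"}
courses = {
    101: ("Math", "Dr. Smith"),
    102: ("Science", "Dr. Jones"),
    103: ("History", "Dr. Brown"),
}
enrollment = [(1, 101), (1, 102), (2, 101), (2, 103)]

# Re-join the tables: each enrollment row pulls in the student and course details.
rejoined = [
    (sid, cid, students[sid], *courses[cid])
    for sid, cid in enrollment
]

for row in rejoined:
    print(row)
```

Every row of the original wide table comes back, which is what makes the decomposition safe.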
### Third Normal Form (3NF)
A table is in the Third Normal Form if:
1. It is in 2NF.
2. There are no transitive dependencies.
A transitive dependency occurs when a non-key column is dependent on another non-key column.
#### Example of 3NF
Consider the following table:
| StudentID | CourseID | CourseName | InstructorName | InstructorOffice |
|-----------|----------|------------|----------------|------------------|
| 1 | 101 | Math | Dr. Smith | Room 101 |
| 1 | 102 | Science | Dr. Jones | Room 102 |
| 2 | 101 | Math | Dr. Smith | Room 101 |
| 2 | 103 | History | Dr. Brown | Room 103 |
This table is in 2NF but not in 3NF because `InstructorOffice` is dependent on `InstructorName`, which is not a key. To convert it to 3NF, we decompose it further:
**Students Table:**
| StudentID | StudentName |
|-----------|-------------|
| 1 | John Doe |
| 2 | Jane Smith |
**Courses Table:**
| CourseID | CourseName | InstructorName |
|----------|------------|----------------|
| 101 | Math | Dr. Smith |
| 102 | Science | Dr. Jones |
| 103 | History | Dr. Brown |
**Instructors Table:**
| InstructorName | InstructorOffice |
|----------------|------------------|
| Dr. Smith | Room 101 |
| Dr. Jones | Room 102 |
| Dr. Brown | Room 103 |
**Enrollment Table:**
| StudentID | CourseID |
|-----------|----------|
| 1 | 101 |
| 1 | 102 |
| 2 | 101 |
| 2 | 103 |
## Summary
Normalization is an essential process in database design that aims to reduce redundancy and ensure data integrity. By following the rules of normalization and moving through the different normal forms (1NF, 2NF, 3NF), we can create a well-structured database that minimizes data anomalies and supports efficient data operations. Understanding and applying these principles is fundamental for anyone involved in database design and management. | kellyblaire | |
1,741,494 | Building a web server: Containers | Welcome back to This Old Box! A series that covers the journey of building and running a web server... | 25,757 | 2024-06-07T19:47:30 | https://dev.to/stmcallister/building-a-home-web-server-containerizing-our-app-50ed | docker, kubernetes, helm, learning | Welcome back to This Old Box! A series that covers the journey of building and running a web server out of an old 2014 Mac mini.
In this post we'll walk through how we containerized our first web application, configured our Kubernetes cluster with our new container, and exposed that cluster to the world.
The project we deployed is the most simple from an infrastructure standpoint. It's a static site that is mainly html, css, and a wee bit of client-side JavaScript. Logistically, the whole site fit into a single container.
## Docker
There are several container runtimes in the industry today. We chose to use [Docker](https://www.docker.com/) as it seems to be the industry standard, especially on a Mac. Docker is also well documented, and thus should be the easiest to learn. Although, I am concerned because I've heard the runtime is a bit greedy with CPU and memory resources. I hope this Old Box can handle it.
We installed [Docker Desktop](https://www.docker.com/products/docker-desktop/) in [our last post](https://dev.to/stmcallister/building-a-web-server-installing-the-software-gjb). With that application running we built a docker image by writing the instructions in a `Dockerfile`.
This first website we containerized is a collection of static sites that my son created called [AjMcWeb](https://www.ajmcweb.com/). The instructions for building the container for this site were relatively simple. All we really needed to do was copy all the html, javascript, css, and image files for the site into the container, run them on `nginx`, and expose port 3001 on the container. The `Dockerfile` to accomplish these steps looked like this.
```docker
FROM nginx
COPY . /usr/share/nginx/html
EXPOSE 3001
```
Using that `Dockerfile`, we ran the following `docker build` command inside the same folder to build a running image of that container.
```bash
docker build -t ajmcweb:0.1 .
```
This command built the Docker image. The `-t` argument told Docker to tag the image with the name `ajmcweb` and gave it a version of `0.1`. The trailing `.` told Docker to use the Dockerfile in the current directory.
Now that we had our site encapsulated inside a Docker container we needed to deploy it to the world. That's where Kubernetes came in.
## Kubernetes
Docker Desktop includes a standalone Kubernetes server and client that we enabled [in our last post](https://dev.to/stmcallister/building-a-web-server-installing-the-software-gjb). This enablement instantiated the images required to run the Kubernetes server as containers, and installed the `kubectl` utility--used to manage our Kubernetes cluster--on our machine.
## ngrok for ingress
Before configuring our Kubernetes cluster, we needed to install the [ngrok Ingress Controller](https://github.com/ngrok/kubernetes-ingress-controller?tab=readme-ov-file#ngrok-kubernetes-ingress-controller) so we could use [ngrok](https://ngrok.com/) to provide ingress to our application. [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is "an API object that manages external access to the services in a cluster."
ngrok is an ingress platform that helps provide access to a variety of things including Kubernetes clusters as well as devices and other networks. The reason why we chose ngrok for this particular project is because [ngrok works behind a NAT](https://ngrok.com/blog-post/ngrok-ingress-controller-differentiators#the-ngrok-kubernetes-ingress-controller-works-behind-nat).
Here's how we set up ngrok and its ingress controller.
## Setting up ngrok
First, we signed up for a [free ngrok](https://ngrok.com/signup) account and then upgraded to a [paid plan](https://ngrok.com/pricing) so we could use our [own domains](https://ngrok.com/docs/guides/how-to-set-up-a-custom-domain/) for our sites--like [www.ajmcweb.com](https://www.ajmcweb.com/) for example.
In order to use ngrok we first needed to get our [ngrok authtoken](https://ngrok.com/docs/agent/#authtokens) in our ngrok dashboard. Then, we used that to set an `NGROK_AUTHTOKEN` environment variable on our system.
Next we [created an API Key](https://ngrok.com/docs/agent/#api-keys) and set that to an `NGROK_API_KEY` environment variable.
The last environment variable we set was `NAMESPACE`. This is going to be the Kubernetes namespace we'll use for our clusters on this box. In our case, we used `macbox`.
These environment variables will be used when we install the ngrok ingress controller. But, before we leave ngrok, I'll mention how we set up our own domains inside of our ngrok account.
### Setting up custom domains in ngrok
We followed the [ngrok guide for setting up Custom Domains](https://ngrok.com/docs/guides/how-to-set-up-a-custom-domain/) which can be boiled down to two steps. First, we went to the Domains section of the [ngrok dashboard](https://dashboard.ngrok.com) and clicked the New Domain button.
We typed in `www.ajmcweb.com` and clicked Continue. This gave us a Domain Record value that we copied and pasted into a CNAME record on our domain registrar.
## Setting up ngrok ingress controller
To establish ingress, or provide user access, we wanted to incorporate the ngrok account that we configured previously. ngrok has an ingress controller for Kubernetes, which we installed as a helm chart with the following steps:
We added the ngrok repo to our helm settings.
```bash
helm repo add ngrok https://ngrok.github.io/kubernetes-ingress-controller
```
Then, we installed the helm chart with the following command, referencing the environment variables we set earlier.
```bash
helm install ngrok-ingress-controller ngrok/kubernetes-ingress-controller \
--namespace $NAMESPACE \
--create-namespace \
--set credentials.apiKey=$NGROK_API_KEY \
--set credentials.authtoken=$NGROK_AUTHTOKEN
```
### Configuring Kubernetes
Going back to the code we captured in a Docker container, we were ready to define our application deployment in Kubernetes. We began by configuring a `Service` that exposes our pod over `http` on port 80.
```yaml
apiVersion: v1
kind: Service
metadata:
name: ajmcweb
namespace: macbox
spec:
ports:
- name: http
port: 80
targetPort: 80
selector:
app: ajmcweb
```
Then we defined a `Deployment` object to run our `ajmcweb` container we built earlier. In the code below you'll see that we're running version `0.3` of that container, which is denoted as `ajmcweb:0.3`.
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ajmcweb
namespace: macbox
spec:
replicas: 1
selector:
matchLabels:
app: ajmcweb
template:
metadata:
labels:
app: ajmcweb
version: v0.1
spec:
containers:
- name: ajmcweb
image: ajmcweb:0.3
ports:
- name: http
containerPort: 80
```
And, finally, we configured our ingress object using the [ngrok Kubernetes Operator](https://ngrok.com/docs/k8s/) that we installed earlier.
```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ajmcweb
namespace: macbox
spec:
ingressClassName: ngrok
rules:
- host: www.ajmcweb.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: ajmcweb
port:
number: 80
```
Our ingress object had one rule which defined the host as `www.ajmcweb.com` and the backend would run the `ajmcweb` service on port 80.
We applied those Kubernetes configurations with a `kubectl apply -f <filename>.yaml` and [www.ajmcweb.com](https://www.ajmcweb.com) came right up in the browser!
Success!
## Conclusion
This felt great to be able to get our simple static website deployed on our server and have ngrok handle the network settings to make the site reachable.
Our next task will be to get a more complex application online. My son has an application he built to help track the time he spends practicing his saxophone. The app uses [Node.js](https://nodejs.org/en) and [MySQL](https://www.mysql.com/) for the backend and has a much more rich frontend. It will require some more effort both from a deployment standpoint and from a processing perspective. Stay tuned as we figure out how to get this application online and keep it running! | stmcallister |
1,880,799 | Advice 65 Million Years in the Making: How Jurassic Park Made Me a Better Programmer | Jurassic Park, published in 1990 and later adapted in a blockbuster movie in 1993, is an exhilarating... | 0 | 2024-06-07T19:57:52 | https://dev.to/daniel_ankofski_0c1a307be/advice-65-million-years-in-the-making-how-jurassic-park-made-me-a-better-programmer-3bcd | beginners, career, careerdevelopment, learning | _Jurassic Park_, published in 1990 and later adapted in a blockbuster movie in 1993, is an exhilarating story of pushing the boundaries of science without regard for the outcome. At its heart, it's an examination of the hubris of humankind. In almost every aspect of _Jurassic Park_, corners are cut and expenses are spared.
To many, the 1993 movie is the definitive version of _Jurassic Park_. In fact, many people–including myself for most of my youth–are unaware the movie is based on a book. The novel explores the same themes with more depth. Lengthy conversations reveal all the mistakes John Hammond made when developing the park, as well as their disastrous outcomes. While the novel can be dense at times, it is a rewarding endeavor with a number of lessons that can be applied to the real world.
After a recent rereading of _Jurassic Park_, I found a lot of it rang true to my current career as a programmer. The big lessons should be obvious to most: be thorough in your project planning to ensure all your bases are covered, hire the right people when needed, and manage your expenses properly. However, there is a particular scene in the book not present in the movie that has helped me understand an incredibly important aspect of programming: real-world variables.
Early on, the group invited to Jurassic Park take a tour of the inner-workings of the park. Part of this tour includes the sophisticated system put in place to track the dinosaurs. Through a combination of motion detection and visual imaging, they are able to get a consistently accurate reading of the current number of dinosaurs present in the park: 238. Since the dinosaurs are man made, the minds behind the tracking system know exactly what to look for if anything changes, e.g. a dinosaur dies.
As the story progresses, our heroes find evidence that indicates the dinosaurs are reproducing, a feat that is seemingly impossible for a female-only population. Dr. Ian Malcolm, portrayed by the charismatic Jeff Goldblum in the motion picture, is not convinced the tracking system is producing an accurate number. When he pushes them to increase the number of animals to search for, they eventually see 238 jump to 292.
How did a mistake like this happen? To quote Ian Malcolm, “you only tracked the expected number of dinosaurs. You were worried about losing animals, and your procedures were designed to advise you instantly if you had less than the expected number. But that wasn’t the problem. The problem was, you had _more_ than the expected number.” To put it simply, they designed their program to track the number of dinosaurs based on their own expectations.
Designing based on your expectations is a pitfall many programmers fall into. Sometimes it results in a family of bloodthirsty raptors or, in less extreme cases, compromised data. Since we’re all still patiently waiting for a real Jurassic Park, let’s look at a more applicable example. Say you are tasked with building a payment form that allows the end user to conveniently make payments. You put together a basic form with a submit button. Seems simple enough, but what happens to that submit button while the payment is processing? You know to wait patiently, but a user might start slamming the button in hopes that it speeds up the process.
Did you account for the impatient user, who just processed their payment four or five times, by disabling the button after processing begins? For many, the thought might not have crossed their mind. In all your testing, you always press the button once and wait, but much like the ecosystem in Jurassic Park, users are unpredictable and often behave erratically. It’s important to keep this in mind when programming. At every step, you should be trying to break your program because, more often than not, someone else will, and the outcome could be disastrous.
_Jurassic Park_ remains a cultural icon, not for its action-packed set pieces featuring dinosaurs, but for its examination of human error on a catastrophic scale. It’s easy to gloss over some of the finer details when planning a project, but those details could have major consequences. As a programmer, you should always be aware of this and doing what you can to account for every variable. You never know when one of those variables might come back to bite you in the neck.
| daniel_ankofski_0c1a307be |
1,880,798 | Database Design and Entity-Relationship Diagrams (ERDs) | Introduction Database design is a crucial aspect of developing robust and efficient... | 0 | 2024-06-07T19:54:48 | https://dev.to/kellyblaire/database-design-and-entity-relationship-diagrams-erds-2909 | ## Introduction
Database design is a crucial aspect of developing robust and efficient information systems. A well-designed database ensures data integrity, supports business processes, and enhances performance. One of the primary tools used in database design is the Entity-Relationship Diagram (ERD), which helps in visualizing and structuring the database schema. This article provides a comprehensive overview of database design and the role of ERDs in this process.
## What is Database Design?
Database design is the process of defining the structure, storage, and retrieval mechanisms of data within a database system. It involves creating a detailed model of the data and its relationships to support efficient and accurate data management.
### Phases of Database Design
1. **Requirements Analysis**:
- Gathering detailed requirements from stakeholders.
- Understanding the data needs, business rules, and user requirements.
2. **Conceptual Design**:
- Creating a high-level model of the database using ERDs.
- Identifying entities, attributes, and relationships.
3. **Logical Design**:
- Translating the conceptual model into a logical model.
- Defining tables, columns, primary keys, and foreign keys.
- Normalizing the database to reduce redundancy.
4. **Physical Design**:
- Implementing the logical model on a specific database management system (DBMS).
- Defining storage structures, indexing strategies, and partitioning schemes.
5. **Implementation and Maintenance**:
- Developing and deploying the database.
- Continuous monitoring, tuning, and updating to ensure optimal performance.
## Entity-Relationship Diagrams (ERDs)
ERDs are a graphical representation of the entities, attributes, and relationships within a database. They provide a clear and structured way to visualize the database schema, making it easier to understand and communicate.
### Components of ERDs
1. **Entities**:
- Represent real-world objects or concepts.
- Depicted as rectangles.
- Examples: Customer, Order, Product.
2. **Attributes**:
- Properties or characteristics of entities.
- Depicted as ovals connected to their respective entities.
- Examples: CustomerID, OrderDate, ProductName.
3. **Relationships**:
- Describe associations between entities.
- Depicted as diamonds connected to entities with lines.
- Examples: A Customer places an Order, an Order includes Products.
4. **Primary Key**:
- A unique identifier for an entity.
- Ensures each record within a table is unique.
- Examples: CustomerID, OrderID.
5. **Foreign Key**:
- An attribute that creates a link between two tables.
- Ensures referential integrity.
- Examples: CustomerID in the Order table, referencing the Customer table.
### Types of Relationships
1. **One-to-One (1:1)**:
- Each instance of Entity A is related to one instance of Entity B.
- Example: Each employee has one employee ID.
2. **One-to-Many (1:N)**:
- Each instance of Entity A is related to multiple instances of Entity B.
- Example: A customer can place multiple orders.
3. **Many-to-Many (M:N)**:
- Multiple instances of Entity A are related to multiple instances of Entity B.
- Example: Students enroll in multiple courses, and courses have multiple students.
### Cardinality and Modality
- **Cardinality**: Specifies the number of instances of an entity that can be associated with one instance of another entity.
- **Modality (Optionality)**: Indicates whether an instance of a relationship is mandatory or optional.
## Creating an ERD
### Step-by-Step Process
1. **Identify Entities**:
- Determine the main objects or concepts involved.
- Example: In a library system, entities could be Book, Member, and Loan.
2. **Define Attributes**:
- List the properties of each entity.
- Example: Book entity attributes might include ISBN, Title, Author, and PublicationYear.
3. **Establish Relationships**:
- Identify how entities interact with each other.
- Example: A Member borrows a Book, creating a Loan relationship.
4. **Assign Primary and Foreign Keys**:
- Ensure each entity has a primary key.
- Define foreign keys to maintain referential integrity.
- Example: Loan table might have MemberID and BookID as foreign keys.
5. **Draw the ERD**:
- Use software tools like Microsoft Visio, Lucidchart, or online ERD tools.
- Ensure clarity and accuracy in representing entities, attributes, and relationships.
### Example ERD
Consider a simplified online shopping system with the following entities and relationships:
- **Entities**: Customer, Order, Product, OrderItem
- **Relationships**:
- A Customer can place multiple Orders.
- An Order can include multiple Products through OrderItem.
- A Product can be included in multiple Orders.
#### ERD Diagram

_I generated this ERD using [https://dbdiagram.io/d](https://dbdiagram.io/d)_
In this ERD:
- `users`,`follows`, and `posts` are all entities
- The `users` entity or table has these **attributes**: id (PK), username, role, created_at.
- The `follows` entity or table has these **attributes**: following_user_id (FK), followed_user_id(FK), created_at.
- The `posts` entity or table has these **attributes**: id (PK), title, body, user_id (FK), status, created_at.
- The `posts` entity is related to the `users` entity via `posts`.`user_id`, which is a foreign key (FK) that is related to the `users`.`id` primary key (PK).
- Both `follows`.`following_user_id` and `follows`.`followed_user_id` are foreign key (FK) attributes that are related to the `users`.`id` attribute.
Check out this article I wrote on [Understanding Primary Keys and Foreign Keys, to learn more.](https://dev.to/kellyblaire/understanding-primary-keys-and-foreign-keys-in-sql-a-simple-and-detailed-guide-28jm)
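The ERD above maps directly onto SQL DDL. Here is a sketch using Python's built-in `sqlite3` module; the table and column names follow the diagram, while the column types are assumptions, since the ERD does not specify them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

conn.executescript("""
CREATE TABLE users (
    id         INTEGER PRIMARY KEY,      -- PK from the ERD
    username   TEXT,
    role       TEXT,
    created_at TEXT
);

CREATE TABLE posts (
    id         INTEGER PRIMARY KEY,
    title      TEXT,
    body       TEXT,
    user_id    INTEGER REFERENCES users(id),  -- FK back to users.id
    status     TEXT,
    created_at TEXT
);

CREATE TABLE follows (
    following_user_id INTEGER REFERENCES users(id),  -- FK to users.id
    followed_user_id  INTEGER REFERENCES users(id),  -- FK to users.id
    created_at        TEXT
);
""")

conn.execute("INSERT INTO users (id, username) VALUES (1, 'kellyblaire')")
conn.execute("INSERT INTO posts (id, title, user_id) VALUES (1, 'ERDs 101', 1)")

# The FK constraint rejects a post whose user_id has no matching users.id.
try:
    conn.execute("INSERT INTO posts (id, title, user_id) VALUES (2, 'Orphan', 99)")
    print("orphan insert allowed")
except sqlite3.IntegrityError:
    print("orphan insert rejected")
```

The failed second insert shows referential integrity in action: the database itself refuses a `posts.user_id` value that does not exist in `users.id`.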
## Best Practices in Database Design
1. **Normalize Data**:
- Apply normalization rules to reduce redundancy.
- Ensure efficient data storage and retrieval.
2. **Use Indexes Wisely**:
- Create indexes on frequently queried columns.
- Balance between read and write performance.
3. **Maintain Data Integrity**:
- Enforce primary and foreign key constraints.
- Implement validation rules and triggers.
4. **Plan for Scalability**:
- Design the database to handle growth in data volume and user load.
- Consider partitioning, sharding, and replication strategies.
5. **Document the Design**:
- Keep comprehensive documentation of the database schema.
- Include ERDs, data dictionaries, and business rules.
## Conclusion
Database design is a critical process in developing effective and efficient database systems. ERDs play a vital role in conceptualizing and visualizing the database structure. By understanding entities, attributes, and relationships, designers can create robust databases that support business needs and ensure data integrity. Following best practices in database design further ensures that the system can handle future demands and maintain optimal performance. | kellyblaire | |
1,880,797 | Weekly Updates - June 7, 2024 | Hi everyone!👋 Hope you had a great week. 🎉 Exciting update to VSCode Extension -We’re thrilled to... | 0 | 2024-06-07T19:53:57 | https://dev.to/couchbase/weekly-updates-june-7-2024-1d7b | couchbase, community, database, ai | Hi everyone!👋
Hope you had a great week.
- 🎉 **Exciting update to VSCode Extension** - We’re thrilled to announce a significant update to our Couchbase VSCode Extension. With our latest release, we’ve expanded our horizons to support GitHub Codespaces and various other remote development environments. [*Read the blog here >>*](https://www.couchbase.com/blog/couchbase-vscode-remote-development-environments/)
<br>
- 📖 **New Blog: What is Data Mining** - Have you wanted to learn about Data Mining, what it is, techniques, tools and applications? Look no further, [*read our blog here >>*](https://www.couchbase.com/blog/data-mining-techniques/)
<br>
- 📺 **Looking for videos to learn more about Couchbase?** Check out our YouTube channel with various Playlists, such as Vector Search & AI, Community Highlights and more! [*You can subscribe to our channel here >>*](https://www.youtube.com/@CouchbaseInc/playlists)
<br>
Have a great weekend everyone! 😊 | carrieke |
1,880,795 | Data Warehousing Concepts: A Comprehensive Guide | Introduction In the era of big data, organizations are inundated with vast amounts of data... | 0 | 2024-06-07T19:52:00 | https://dev.to/kellyblaire/data-warehousing-concepts-a-comprehensive-guide-14pa | #### Introduction
In the era of big data, organizations are inundated with vast amounts of data from various sources. To manage, analyze, and make sense of this data, businesses turn to data warehousing. A data warehouse (DW) is a central repository of integrated data from one or more disparate sources, used for reporting and data analysis. This article delves into the core concepts of data warehousing, its architecture, components, processes, and benefits.
#### What is a Data Warehouse?
A data warehouse is a system used for reporting and data analysis, and is considered a core component of business intelligence. It stores current and historical data in one place, making it easier to create analytical reports for decision-making. Data from operational systems and external sources is extracted, transformed, and loaded (ETL) into the data warehouse, where it can be queried and analyzed.
#### Key Components of a Data Warehouse
1. **Data Sources**: These are the various operational systems, databases, and external data sources from which data is collected. Examples include CRM systems, ERP systems, flat files, and online transaction processing (OLTP) databases.
2. **ETL Process**: ETL stands for Extract, Transform, Load. This process involves:
- **Extracting** data from different source systems.
- **Transforming** the data to fit operational needs (e.g., cleaning, filtering, aggregating).
- **Loading** the transformed data into the data warehouse.
3. **Data Staging Area**: A temporary storage area where data is cleaned, transformed, and prepared for loading into the data warehouse.
4. **Data Storage**: The core of the data warehouse where transformed data is stored. This is usually a relational database designed for query and analysis.
5. **Metadata**: Data about data, which includes information about the source, transformation, storage, and usage of data within the warehouse. Metadata helps in managing and using the data warehouse effectively.
6. **Data Marts**: Subsets of the data warehouse tailored for specific business lines or departments. Data marts can be dependent (a logical subset of the data warehouse) or independent (a separate physical subset).
7. **OLAP (Online Analytical Processing) Engine**: Tools that allow users to interactively analyze data in the warehouse using multidimensional views. OLAP operations include slice, dice, drill-down, and roll-up.
8. **Data Access Tools**: These include reporting and querying tools, dashboards, data visualization tools, and other front-end applications that help users access and analyze data.
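The Extract-Transform-Load steps above can be sketched in a few lines of Python; the source records, field names, and cleaning rules here are illustrative assumptions, not a real pipeline:

```python
import sqlite3

# Extract: pull raw records from two hypothetical source systems (CRM and ERP)
crm_rows = [{"id": 1, "name": " Alice ", "revenue": "1200"},
            {"id": 2, "name": "Bob",     "revenue": None}]
erp_rows = [{"id": 3, "name": "Carol",   "revenue": "900"}]

# Transform: trim whitespace, fill missing values, and cast types
def transform(row):
    return (row["id"],
            row["name"].strip(),
            float(row["revenue"]) if row["revenue"] is not None else 0.0)

staged = [transform(r) for r in crm_rows + erp_rows]  # the "staging area"

# Load: write the cleaned rows into the warehouse table
dw = sqlite3.connect(":memory:")
dw.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, revenue REAL)")
dw.executemany("INSERT INTO customers VALUES (?, ?, ?)", staged)

print(dw.execute("SELECT name, revenue FROM customers ORDER BY id").fetchall())
# [('Alice', 1200.0), ('Bob', 0.0), ('Carol', 900.0)]
```

Real ETL tools add scheduling, error handling, and incremental loads on top of this shape, but the extract → transform → load structure is the same.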
#### Data Warehouse Architecture
1. **Single-Tier Architecture**: This architecture aims to minimize the amount of data stored, mainly to remove redundancies. It's rarely used in practice due to performance issues.
2. **Two-Tier Architecture**: In this architecture, the data warehouse is physically separated from the source systems. It improves performance and scalability but can be complex to manage.
3. **Three-Tier Architecture**: The most commonly used architecture, it includes:
- **Bottom Tier**: The data warehouse server, where data is loaded and stored.
- **Middle Tier**: OLAP servers that provide an abstracted view of the database to the end users.
- **Top Tier**: Front-end tools and client applications for data querying, reporting, and analysis.
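The OLAP operations mentioned earlier (slice, drill-down, roll-up) can be approximated with plain SQL aggregation over a tiny fact table; the `sales` table and its dimensions are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, month TEXT, product TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)", [
    ("East", "Jan", "A", 100.0), ("East", "Jan", "B", 50.0),
    ("East", "Feb", "A", 70.0),  ("West", "Jan", "A", 40.0),
])

# Slice: fix one dimension (region = 'East')
east = conn.execute("SELECT SUM(amount) FROM sales WHERE region='East'").fetchone()[0]

# Drill-down: view East's total at a finer grain (per month)
by_month = conn.execute(
    "SELECT month, SUM(amount) FROM sales WHERE region='East' GROUP BY month ORDER BY month"
).fetchall()

# Roll-up: aggregate away the month dimension entirely (per region)
by_region = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()

print(east)       # 220.0
print(by_month)   # [('Feb', 70.0), ('Jan', 150.0)]
print(by_region)  # [('East', 220.0), ('West', 40.0)]
```

A dedicated OLAP engine precomputes and caches many of these aggregates across dimensions, which is what makes interactive analysis fast at scale.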
#### Processes in Data Warehousing
1. **Data Integration**: Combining data from different sources into a unified view. This includes data cleaning, transformation, and consolidation.
2. **Data Transformation**: Converting data from its original form into a format suitable for analysis. This may involve normalization, denormalization, aggregation, and other operations.
3. **Data Cleaning**: Ensuring that the data is accurate, complete, and free of errors. This involves removing duplicates, correcting errors, and filling in missing values.
4. **Data Loading**: Involves importing the transformed data into the data warehouse. This can be done in batches or in real-time.
5. **Data Refreshing**: Updating the data in the warehouse to reflect changes in the source data. This can be periodic (e.g., nightly, weekly) or real-time.
#### Benefits of Data Warehousing
1. **Improved Data Quality and Consistency**: Data warehousing involves cleaning and transforming data, ensuring that it is accurate and consistent.
2. **Enhanced Business Intelligence**: By integrating data from various sources, data warehouses provide a comprehensive view of the organization, enabling better decision-making.
3. **Faster Query Performance**: Data warehouses are optimized for read-heavy operations and complex queries, providing faster access to data for analysis.
4. **Historical Data Analysis**: Data warehouses store historical data, allowing for trend analysis and long-term business planning.
5. **Scalability and Performance**: Data warehouses can handle large volumes of data and complex queries efficiently, making them suitable for large enterprises.
6. **Centralized Data Management**: Provides a single source of truth for data across the organization, facilitating better data governance and management.
#### Challenges in Data Warehousing
1. **High Initial Cost**: Setting up a data warehouse can be expensive due to the cost of hardware, software, and skilled personnel.
2. **Complexity of Integration**: Integrating data from disparate sources can be complex and time-consuming.
3. **Data Governance and Security**: Ensuring data security and compliance with regulations is a major concern in data warehousing.
4. **Performance Issues**: As the volume of data grows, maintaining performance can be challenging.
5. **Maintenance and Upgrades**: Keeping the data warehouse updated and running smoothly requires ongoing maintenance and occasional upgrades.
#### Future Trends in Data Warehousing
1. **Cloud-Based Data Warehousing**: Increasingly, organizations are moving their data warehouses to the cloud to take advantage of scalability, flexibility, and cost savings.
2. **Real-Time Data Warehousing**: Real-time data integration and analytics are becoming more common, enabling organizations to make faster, data-driven decisions.
3. **Big Data Integration**: Incorporating big data technologies, such as Hadoop and Spark, to handle massive volumes of unstructured data alongside traditional structured data.
4. **Advanced Analytics and AI**: Integrating advanced analytics, machine learning, and artificial intelligence to gain deeper insights and predictive capabilities.
5. **Data Warehouse Automation**: Automating the ETL process and other data warehousing tasks to reduce manual effort and improve efficiency.
#### Conclusion
Data warehousing is a critical component of modern business intelligence and analytics strategies. It enables organizations to consolidate data from multiple sources, ensure data quality and consistency, and perform complex queries efficiently. Despite the challenges, the benefits of data warehousing in terms of improved decision-making, data management, and analytical capabilities make it a valuable investment for businesses looking to leverage their data for competitive advantage. As technology evolves, data warehousing continues to adapt, incorporating new trends and innovations to meet the ever-growing demands of data-driven organizations. | kellyblaire | |
1,880,793 | How to Buy Gold and Silver with Bitcoin and Cryptocurrency Online? | The rapid rise of cryptocurrencies, led by Bitcoin, has disrupted traditional financial systems and... | 0 | 2024-06-07T19:47:40 | https://dev.to/owenparker22212/how-to-buy-gold-and-silver-with-bitcoin-and-cryptocurrency-online-17a1 | The rapid rise of cryptocurrencies, led by Bitcoin, has disrupted traditional financial systems and opened up new possibilities for investors. One such possibility is the ability to [buy physical gold and silver with Bitcoin](https://bitgolder.com/) and other cryptocurrencies. This merging of the digital and traditional worlds provides investors with the opportunity to diversify their portfolios and hedge against the volatility of digital currencies.
Investing in gold and silver has long been considered a safe haven strategy, protecting investors against inflation and economic uncertainties. By combining the stability of precious metals with the convenience and flexibility of cryptocurrencies, investors can enjoy the best of both worlds.
## Where to Buy Gold with Bitcoin and Cryptocurrency
When it comes to buying gold and silver with Bitcoin and other cryptocurrencies, Bitgolder is a leading platform that offers a user-friendly and secure experience. Bitgolder has been at the forefront of integrating digital currencies into the precious metals industry since its inception in 2019. Their expertise in both cryptocurrencies and precious metals ensures a seamless and efficient investment experience.
## Accepted Cryptocurrencies on Bitgolder
Bitgolder accepts a wide range of cryptocurrencies for purchasing gold and silver. In addition to Bitcoin, you can use cryptocurrencies such as Ethereum, Litecoin, Ripple, and Dash. They also accept stablecoins, which are digital assets pegged to traditional fiat currencies like the US Dollar or Euro. This variety of accepted cryptocurrencies provides investors with flexibility and accessibility.

## How to Make a Purchase on Bitgolder
Making a purchase on Bitgolder is a straightforward process. First, you need to create an account on their platform. Once your account is set up, you can browse their extensive collection of [gold and silver products](https://bitgolder.com/products/). Select the desired items and add them to your cart.
At the checkout stage, you will have the option to pay with your preferred cryptocurrency. The Bitgolder platform will guide you through the payment process, ensuring a secure and seamless transaction. After the payment is confirmed, the purchased gold or silver will be delivered to your specified address.
## Benefits of Buying Gold Using Bitcoin and Cryptocurrency
As you explore the option of buying gold with Bitcoin and other cryptocurrencies, it's essential to understand the benefits of this innovative approach. Here are some key advantages of purchasing gold using cryptocurrencies:
### Low Transfer Fees
One significant advantage of using cryptocurrencies like Bitcoin for buying gold is the relatively low transfer fees. While Bitcoin transfers do incur fees, they are usually lower compared to traditional wired networks. These fees, known as gas fees, are paid to the blockchain network and distributed as rewards to miners for validating transactions.
Due to the decentralized nature of Bitcoin, there is only a single blockchain network, simplifying the fee distribution process. This results in lower fees compared to traditional payment methods, making buying gold with Bitcoin cost-effective.
### Enhanced Security
When purchasing gold using credit cards or bank transfers, there are inherent risks associated with the involvement of intermediaries in the transaction. In contrast, Bitcoin operates over a decentralized blockchain network, eliminating the need for intermediaries.
The transparency of the blockchain ensures that all transactions can be reviewed by anyone, providing authenticity and security. By leveraging the security features of cryptocurrencies, buying gold becomes a more secure and reliable investment option.
### Higher Transaction Speed
Another significant benefit of buying gold with Bitcoin is the faster transaction speed compared to traditional payment methods. Blockchains settle payments at a higher speed, making the purchase process more efficient.
While the transaction speed varies across different cryptocurrencies, Bitcoin generally takes only a few minutes to settle transactions. This streamlined process eliminates the need for extensive paperwork, making buying gold with Bitcoin a convenient and time-saving option.
### No Need for a Banking System
One of the advantages of using cryptocurrencies like Bitcoin is the ability to buy gold without involving traditional banking systems. When you buy gold with Bitcoin, you store the Bitcoins in your wallet and use them directly for the purchase.
This eliminates the need for unnecessary credit and identity checks typically associated with banks. By bypassing the banking system, individuals can acquire gold without excessive legal formalities, making the process more accessible and efficient.
### No Currency Conversion Required
International gold dealers and online stores often deal with customers using different currencies. In such cases, if your currency does not match the one offered by the seller, you may have to pay additional exchange fees along with the gold price and shipping charges. However, when using Bitcoin to buy gold, no conversion is required.
When a gold merchant accepts Bitcoin as payment, there is no need for any currency conversion. This eliminates additional costs associated with currency conversion, regardless of the buyer and seller's location. Buying gold with Bitcoin provides a cost-effective and seamless transaction experience.

## Why Choose Bitgolder.com?
When it comes to buying gold and silver with cryptocurrencies, Bitgolder.com stands out as a trusted and reliable platform. Their commitment to customer satisfaction, innovative payment solutions, and extensive collection of products make them a top choice for investors. Here are some reasons why you should choose Bitgolder.com:
### The Expertise of Bitgolder in the Precious Metals and Cryptocurrency Industries
Bitgolder.com is more than just a trusted name in the precious metals industry. With nearly a decade of experience, they have mastered the intricacies of cryptocurrencies, making them true experts in the field. Their deep understanding of both precious metals and digital currencies ensures a seamless and secure investment experience.
### Innovative Payment Solutions
Bitgolder.com has developed its own crypto payment gateways to provide secure and smooth transactions. Whether you are a seasoned investor or new to digital currencies, their payment solutions cater to your needs. Their commitment to innovation ensures that your transactions are handled with the utmost security and efficiency.
### Diverse Investment Options
At Bitgolder.com, you have access to a wide range of gold and silver products that can be purchased with leading cryptocurrencies like Bitcoin, Ethereum, and various stablecoins. This diverse selection allows you to tailor your investment portfolio to your preferences and investment goals. Whether you prefer gold bars, coins, or bullions, Bitgolder.com has a range of options to suit your needs.
### Trust and Reliability
Bitgolder.com has built a reputation as a reliable source for precious metals investment. Their dedication to customer service and continuous innovation in their offerings sets them apart from other platforms. When you choose Bitgolder.com, you can trust that your investments are in safe hands.
## Buying Gold and Silver with Ethereum and Stablecoins
In addition to Bitcoin, Bitgolder.com offers the option to purchase gold and silver using Ethereum and stablecoins. Ethereum is a popular cryptocurrency known for its smart contract capabilities and widespread adoption. Buying gold with Ethereum provides investors with an alternative to Bitcoin and expands their investment options.
Stablecoins, on the other hand, offer the unique advantage of being pegged to traditional fiat currencies. This stability provides investors with a sense of trust and familiarity in the cryptocurrency space. Bitgolder.com accepts various stablecoins, allowing you to buy gold and silver with the stability and security associated with these digital assets.
## Fees for Buying Gold with Cryptocurrency at Bitgolder
When buying gold with cryptocurrencies on Bitgolder.com, fees are associated with different cryptocurrencies. Here is a breakdown of the fees for purchasing gold with specific cryptocurrencies:
- Bitcoin: 0.6% (through BTCPay)
- Ethereum: 0.8% (through Coinbase)
- Litecoin: 0.8% (through Coinpayments)
- USDC, USDT, BUSD & DAI: 0.8% (through Coinpayments)
- Other cryptocurrencies: 2-8% (through Coinpayments)
It's important to note that the fees vary depending on the cryptocurrency used for the transaction. Bitcoin, being the most widely accepted cryptocurrency, has a lower fee compared to other cryptocurrencies.
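As a rough illustration, the effective cost of a purchase at the fee rates quoted above can be computed as follows; the gold price used is hypothetical:

```python
# Fee rates quoted above, expressed as fractions of the purchase amount
FEE_RATES = {
    "BTC": 0.006,   # 0.6% via BTCPay
    "ETH": 0.008,   # 0.8% via Coinbase
    "LTC": 0.008,   # 0.8% via Coinpayments
    "USDC": 0.008,  # 0.8% via Coinpayments
}

def total_cost(gold_price: float, currency: str) -> float:
    """Gold price plus the processing fee for the chosen cryptocurrency."""
    return round(gold_price * (1 + FEE_RATES[currency]), 2)

price = 2000.00  # hypothetical price of a 1 oz gold coin in USD
for cur in ("BTC", "ETH"):
    print(cur, total_cost(price, cur))
# BTC 2012.0
# ETH 2016.0
```

On a $2,000 purchase, the 0.2-point gap between Bitcoin and the other listed coins amounts to only a few dollars, so the fee difference matters mainly on large orders.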

## Is Now a Good Time to Buy Gold with Bitcoin and Cryptocurrency?
The decision to buy gold using Bitcoin and other cryptocurrencies depends on several factors. Both gold and Bitcoin are considered investment assets, but they differ in terms of stability and perceived value. Here are some factors to consider when deciding whether to exchange Bitcoin for gold:
- Stability: Gold is known for its stability and long-term appreciation, making it an attractive choice for conservative investors. Bitcoin, on the other hand, remains a speculative investment driven by market sentiment and carries inherent risks.
- Investment Objectives: Your investment objectives and risk tolerance should guide your decision. Conservative investors aiming to preserve capital may find Bitcoin's volatility unappealing, while those seeking higher gains might prefer Bitcoin.
- Diversification: Regardless of your investor profile, diversifying your portfolio is crucial. Holding both gold and Bitcoin can provide the benefits of each asset class while mitigating risks. Diversification ensures that you are not solely reliant on one investment.
## Conclusion
In conclusion, buying gold and silver with Bitcoin and other cryptocurrencies offers investors a unique opportunity to diversify their portfolios and hedge against the volatility of digital currencies. Platforms like Bitgolder.com provide a secure and efficient way to make these purchases, backed by their expertise in both the [precious metals and cryptocurrency](https://bitgolder.com/products/gold-bar/) industries. Whether you choose Bitcoin, Ethereum, or stablecoins, buying gold with cryptocurrencies is a forward-thinking investment strategy that combines the stability of precious metals with the convenience of digital currencies.
| owenparker22212 | |
1,880,791 | Abiding Limo: Safe, Stylish, and Affordable Houston Transportation | Abiding Limo sticks out many of the many transportation... | 0 | 2024-06-07T19:37:56 | https://dev.to/abduljabbar4533/abiding-limo-safe-stylish-and-affordable-houston-transportation-4g92 | limo | Abiding Limo stands out among the many transportation options in Houston for its unwavering dedication to safety. As one of the most [Reliable Houston Limo Rentals-Abiding Limo](https://abidinglimo.com/houston-limo-service/), safety is always a top priority for the company. Abiding Limo goes above and beyond to ensure that passengers feel secure and protected throughout every journey.
With a focus on enforcing stringent safety measures, Abiding Limo sets the standard for the industry. From regular maintenance checks on their fleet of vehicles to thorough background checks on their drivers, Abiding Limo leaves no stone unturned in guaranteeing the safety of their passengers. This dedication to safety not only reflects the company's professionalism but also gives peace of mind to those who choose Abiding Limo for their transportation needs.
### Stringent Safety Measures Implemented
To uphold its commitment to safety, [Abiding Limo, a well-known Houston limo transportation service](https://abidinglimo.com/houston-limo-service/), has implemented a series of stringent safety measures to ensure the well-being of its passengers. From regular vehicle maintenance checks to strict driver training programs, every aspect of its operation is geared toward providing a secure and reliable transportation experience for its clients.
In addition to maintaining a fleet of well-maintained vehicles, Abiding Limo also adheres to industry-leading safety protocols that go above and beyond standard requirements. Every driver undergoes thorough background checks and drug screenings, and is required to follow strict guidelines regarding speed limits, passenger safety, and overall professionalism. By prioritizing safety at every step of the way, Abiding Limo sets itself apart as a trusted and dependable choice for those seeking quality transportation services in the Houston area.
### Explore Houston with Abiding Limo's Sightseeing Tours
Embark on a captivating journey through the vibrant city of Houston with Abiding Limo's exclusive sightseeing tours. Delve into the heart of this bustling city accompanied by experienced guides who will share fascinating stories about the city's rich history and cultural significance. Whether you are a solo traveler, a couple seeking a romantic escapade, or a group of friends looking for a fun adventure, Abiding Limo's sightseeing tours offer a tailored experience for everyone.
Discover Houston's iconic landmarks and hidden gems as you cruise through the city in luxury and style. From the historic districts to modern architectural marvels, Abiding Limo's sightseeing tours ensure you see the best of Houston in comfort. Let the knowledgeable guides lead you through the city's most renowned attractions, offering insightful commentary and insider tips to enhance your exploration.
### Experienced Guides for Informative Excursions
Guests embarking on Abiding Limo's sightseeing tours can expect to be accompanied by experienced and knowledgeable guides who are well-versed in the rich history and culture of Houston. These guides not only have a deep understanding of the city's landmarks and attractions, but also a passion for sharing their insights with visitors, ensuring that each tour is not only enjoyable but also highly informative. With their expertise, guests can rest assured that they will gain a thorough understanding of the sites they visit and the stories behind them, making their excursion a truly enriching experience.
Moreover, Abiding Limo's guides are adept at creating engaging and interactive tours that cater to the diverse interests of their visitors. Whether it's delving into the vibrant arts scene of Houston, exploring the city's historic districts, or uncovering hidden gems off the beaten path, these guides are skilled at customizing each tour to suit the preferences of the group. By providing a personalized and tailored experience, these experienced guides ensure that visitors not only see the iconic sights of Houston but also gain a deeper appreciation for the city's unique charm and character.
### Abiding Limo's Competitive Pricing
Abiding Limo prides itself on offering competitive pricing options to cater to a wide range of budgets without compromising on quality of service. The company understands the importance of providing transparent quotes to ensure customers know exactly what to expect when booking their transportation. By maintaining a focus on cost-effectiveness, Abiding Limo strives to make luxury transportation accessible to all, whether it's for business travel, special events, or leisurely tours around the city.
Customers can expect to receive excellent value for their money when choosing Abiding Limo for their transportation needs. With a commitment to delivering top-notch service at affordable prices, the company goes above and beyond to exceed customer expectations. By offering cost-effective options without hidden fees or surcharges, Abiding Limo stands out as a reliable choice for those seeking a premium transportation experience without breaking the bank.
### Transparent Rates and Cost-Effective Options
When it comes to booking a limo service, transparency in rates is essential for customers. Abiding Limo prides itself on providing fair and comprehensive pricing information to ensure that customers know exactly what to expect. By offering cost-effective options without hidden fees, customers can trust that they are getting a fair deal with Abiding Limo for their transportation needs.
Clients appreciate the clear breakdown of costs for the various services that Abiding Limo offers. Knowing the total cost upfront allows customers to plan their budget accordingly and make informed decisions. By prioritizing transparency in pricing, Abiding Limo aims to build a relationship of trust with its clients, ensuring satisfaction and peace of mind throughout the booking process. | abduljabbar4533
1,880,154 | Spring Cloud: Get configuration from config server | Following library is used Java 17 Spring Framework 6.1.6 Spring Cloud Common 4.1.2 Spring Cloud... | 0 | 2024-06-07T19:37:09 | https://dev.to/saladlam/spring-cloud-get-configuration-from-config-server-19ok | spring, springcloud | The following libraries are used:
- Java 17
- Spring Framework 6.1.6
- Spring Cloud Common 4.1.2
- Spring Cloud Config Client 4.1.2
The minimum configuration entries are:
```
spring.application.name=example-application
spring.config.import=configserver:https://localhost:8888
```
When the application starts, it first retrieves the configuration from **https://localhost:8888/example-application/default**. The response content type must be **application/json**. The following is an example response.
```
HTTP/1.1 200
Content-Type: application/json
Transfer-Encoding: chunked
Date: Tue, 21 May 2024 10:00:00 GMT
{"name":"example-application","profiles":["default"],"label":null,"version":null,"state":null,"propertySources":[{"name":"file:/C:/work/temp/config/example-application-default.properties","source":{"a.b.c":"d"}},{"name":"file:/C:/work/temp/config/example-application.properties","source":{"a.b":"c","a.a":"b","message":"Hello world!"}}]}
```
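The client flattens the `propertySources` array into the environment, with sources earlier in the list taking precedence (so the profile-specific file overrides the shared one). A minimal sketch of that merge in Python, using the same response shape:

```python
import json

# The JSON body returned by GET /example-application/default
response = json.loads("""{
  "name": "example-application",
  "profiles": ["default"],
  "label": null,
  "version": null,
  "state": null,
  "propertySources": [
    {"name": "file:/C:/work/temp/config/example-application-default.properties",
     "source": {"a.b.c": "d"}},
    {"name": "file:/C:/work/temp/config/example-application.properties",
     "source": {"a.b": "c", "a.a": "b", "message": "Hello world!"}}
  ]
}""")

# Earlier property sources have higher precedence, so iterate in reverse
# and let later (higher-precedence) writes overwrite earlier ones.
merged = {}
for ps in reversed(response["propertySources"]):
    merged.update(ps["source"])

print(merged["message"])  # Hello world!
print(merged["a.b.c"])    # d
```

In the real client each source is kept as a separate `PropertySource` in the `Environment` rather than flattened into one map; the merge above only demonstrates the precedence rule.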
The retrieval logic is defined in *org.springframework.cloud.config.client.ConfigServerConfigDataLoader#getRemoteEnvironment*, and the configuration class is *org.springframework.cloud.config.client.ConfigClientProperties*.
The basic HTTP authentication can be specified by
```
spring.application.name=example-application
spring.config.import=configserver:https://localhost:8888
spring.cloud.config.username=user
spring.cloud.config.password=pass
```
After the configuration is retrieved, two entries are inserted into the *PropertySources* list in the *ApplicationContext*.

# Decrypt encrypted secret
A property value in the configuration can be encrypted. An encrypted value carries the **{cipher}** prefix.
```
message={cipher}0123456789abcfef0123456789abcfef
```
Decryption is done in *org.springframework.cloud.bootstrap.encrypt.DecryptEnvironmentPostProcessor#postProcessEnvironment*.
The default algorithm is **AES/CBC/PKCS5Padding**. The **encrypt.key** property is a string password, from which a 256-bit key is generated by the PBKDF2 key-derivation function (Java implementation: *com.sun.crypto.provider.PBKDF2KeyImpl*).
```
encrypt.key=any_string_is_ok
```
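The key derivation can be reproduced outside the JVM. The sketch below assumes the Spring Security defaults as I recall them — PBKDF2 with HMAC-SHA1, 1024 iterations, and a hex-encoded salt taken from **encrypt.salt** (default `deadbeef`) — so verify these parameters against your version before relying on it:

```python
import hashlib

password = "any_string_is_ok"     # the encrypt.key value
salt = bytes.fromhex("deadbeef")  # assumed default of encrypt.salt

# Derive the 256-bit AES key (PBKDF2WithHmacSHA1, 1024 iterations -- assumed defaults)
key = hashlib.pbkdf2_hmac("sha1", password.encode("utf-8"), salt, 1024, dklen=32)
print(len(key) * 8)  # 256
print(key.hex()[:16], "...")
```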
After decryption, a *SystemEnvironmentPropertySource* named *decrypted* is inserted into the *PropertySources* list in the *ApplicationContext*. | saladlam |
1,880,789 | ReGAL: Refactoring Programs to Discover Generalizable Abstractions | ReGAL: Refactoring Programs to Discover Generalizable Abstractions | 0 | 2024-06-07T19:31:39 | https://aimodels.fyi/papers/arxiv/regal-refactoring-programs-to-discover-generalizable-abstractions | machinelearning, ai, beginners, datascience | *This is a Plain English Papers summary of a research paper called [ReGAL: Refactoring Programs to Discover Generalizable Abstractions](https://aimodels.fyi/papers/arxiv/regal-refactoring-programs-to-discover-generalizable-abstractions). If you like these kinds of analysis, you should subscribe to the [AImodels.fyi newsletter](https://aimodels.substack.com) or follow me on [Twitter](https://twitter.com/mikeyoung44).*
## Overview
• The paper presents a tool called ReGAL that aims to refactor existing programs to discover new, generalizable abstractions.
• By analyzing the structure and patterns in code, ReGAL can automatically identify opportunities to extract reusable components, improving the modularity and maintainability of the codebase.
• This approach can be particularly useful for large, complex programs where manual refactoring becomes challenging, as ReGAL can scale to handle real-world software systems.
## Plain English Explanation
• [ReGAL: Refactoring Programs to Discover Generalizable Abstractions](https://aimodels.fyi/papers/arxiv/rewriting-code-simple-method-large-language-model) is a tool that can automatically restructure computer programs to make them more modular and reusable.
• Oftentimes, as software projects grow in size and complexity, the code can become difficult to maintain and understand. ReGAL aims to address this by analyzing the code and identifying common patterns or functionalities that could be extracted into separate, standalone components.
• By refactoring the code in this way, ReGAL can help developers create more organized and flexible programs, where certain tasks or features can be easily reused across different parts of the application. This can save time and effort in the long run, as developers don't have to rewrite the same functionality from scratch.
• The tool is designed to work on large, real-world software systems, where manual refactoring can be time-consuming and error-prone. ReGAL's automated approach can help streamline this process and improve the overall quality and maintainability of the codebase.
## Technical Explanation
- The key idea behind ReGAL is to use a combination of program analysis techniques, such as abstract syntax tree (AST) manipulation and pattern matching, to identify opportunities for refactoring within existing code.
- The tool first parses the input program into an AST, which represents the structure of the code in a hierarchical, tree-like format. ReGAL then applies a series of transformation rules to the AST, aiming to extract reusable components or "abstractions" that can be encapsulated and generalized.
- These transformation rules are based on heuristics and domain-specific knowledge about common programming patterns and idioms. For example, ReGAL might recognize that certain code fragments are responsible for a specific task, such as input validation or data processing, and suggest extracting these into a separate, self-contained module.
- Once the refactoring opportunities have been identified, ReGAL generates a modified version of the original program, where the new, generalized abstractions have been incorporated. This refactored code can then be evaluated and compared to the original to assess the improvements in modularity and code quality.
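The parse-then-pattern-match pipeline described above can be illustrated with a toy Python sketch. To be clear, this is an assumption-laden caricature, not ReGAL's actual implementation: it parses a program, fingerprints expression subtrees with `ast.dump`, and flags structurally repeated fragments as candidates for extraction into a shared helper.

```python
import ast
from collections import Counter

# Hypothetical input program: two structurally identical expressions
SOURCE = """
a = (x + 1) * (x + 1)
b = (y + 1) * (y + 1)
c = (x + 1) * (x + 1)
"""

def fragment_counts(source):
    """Count structurally identical expression subtrees in a program's AST."""
    tree = ast.parse(source)
    counts = Counter()
    for node in ast.walk(tree):
        if isinstance(node, ast.BinOp):
            # ast.dump with field names stripped acts as a structural fingerprint
            counts[ast.dump(node, annotate_fields=False)] += 1
    return counts

counts = fragment_counts(SOURCE)
# Subtrees appearing more than once are candidates for extraction into a helper
repeated = {frag: n for frag, n in counts.items() if n > 1}
print(len(repeated))
```

Here the repeated `(x + 1) * (x + 1)` expression would be flagged, suggesting extraction into a hypothetical helper such as `square_plus_one(x)`; a real refactoring tool would additionally have to verify that the extraction preserves the program's semantics.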
## Critical Analysis
- While ReGAL demonstrates promising results in automatically refactoring programs to improve their structure and reusability, the paper acknowledges some limitations and areas for further research.
- One potential concern is the reliance on heuristics and domain-specific rules, which may not generalize well to all types of programming languages and code structures. There could be value in exploring more data-driven or machine learning-based approaches to program refactoring, as seen in [Learning to Reason via Program Generation & Emulation](https://aimodels.fyi/papers/arxiv/learning-to-reason-via-program-generation-emulation) and [Synthesizing Programmatic Reinforcement Learning Policies from Large Language Models](https://aimodels.fyi/papers/arxiv/synthesizing-programmatic-reinforcement-learning-policies-large-language).
- Additionally, the paper does not address the potential impact of refactoring on the program's overall functionality and behavior. While the goal is to improve modularity and maintainability, it would be important to ensure that the refactored code still preserves the original program's semantics and correctness, as seen in [RoboCoder: Robotic Learning from Basic Skills to Complex Behaviors](https://aimodels.fyi/papers/arxiv/robocoder-robotic-learning-from-basic-skills-to).
- Further research could explore ways to validate the correctness and safety of the refactored programs, perhaps by incorporating automated testing or formal verification techniques into the ReGAL framework.
## Conclusion
- The ReGAL tool presented in this paper offers a promising approach to automatically refactoring existing programs to discover more generalized and reusable abstractions.
- By analyzing the structure and patterns in code, ReGAL can identify opportunities to extract modular components, potentially improving the maintainability and flexibility of large, complex software systems.
- While the tool shows promising results, there are still areas for further research, such as exploring more data-driven techniques and ensuring the refactored code preserves the original program's functionality and correctness.
- Overall, the paper demonstrates the potential for automated program refactoring to enhance software engineering practices, and the continued need for advancements in this field to support the development of high-quality, scalable, and sustainable software.
**If you enjoyed this summary, consider subscribing to the [AImodels.fyi newsletter](https://aimodels.substack.com) or following me on [Twitter](https://twitter.com/mikeyoung44) for more AI and machine learning content.**
---

# The Geometry of Categorical and Hierarchical Concepts in Large Language Models

*Published 2024-06-07 | [canonical post](https://aimodels.fyi/papers/arxiv/geometry-categorical-hierarchical-concepts-large-language-models) | tags: machinelearning, ai, beginners, datascience*

*This is a Plain English Papers summary of a research paper called [The Geometry of Categorical and Hierarchical Concepts in Large Language Models](https://aimodels.fyi/papers/arxiv/geometry-categorical-hierarchical-concepts-large-language-models). If you like these kinds of analysis, you should subscribe to the [AImodels.fyi newsletter](https://aimodels.substack.com) or follow me on [Twitter](https://twitter.com/mikeyoung44).*
## Overview
- This paper explores the geometry of categorical and hierarchical concepts in large language models, which are AI systems trained on vast amounts of text data to understand and generate human language.
- The researchers investigate how these models represent and organize different types of concepts, including broad categories like "animal" and more specific subcategories like "dog" and "cat."
- They use techniques from [topology and geometry](https://aimodels.fyi/papers/arxiv/learning-discrete-concepts-latent-hierarchical-models) to analyze the structure and relationships between these conceptual representations in the models' internal "thought processes."
## Plain English Explanation
Large language models like GPT-3 and BERT have shown remarkable abilities to understand and generate human language. However, the inner workings of how these models represent and organize different concepts, from broad categories to specific examples, is not well understood.
This research paper dives into the geometric and topological properties of how these models represent and structure conceptual knowledge. The researchers find that broader categorical concepts like "animal" tend to occupy larger, more diffuse regions in the models' internal representation spaces. Meanwhile, more specific concepts like "dog" and "cat" are represented by tighter, more concentrated clusters.
Interestingly, the researchers also observe clear hierarchical relationships between these concepts, where subcategories like "dog" and "cat" are embedded within the broader "animal" concept. This mirrors the way humans organize knowledge into taxonomies and ontologies.
By using advanced mathematical techniques like [manifold learning](https://aimodels.fyi/papers/arxiv/contextual-categorization-enhancement-through-llms-latent-space) and [persistent homology](https://aimodels.fyi/papers/arxiv/exploring-concept-depth-how-large-language-models), the researchers are able to extract and visualize these complex conceptual structures within the models. This provides valuable insights into how large language models [represent meaning and semantics](https://aimodels.fyi/papers/arxiv/not-all-language-model-features-are-linear) in a hierarchical and structured way.
## Technical Explanation
The paper begins by establishing that large language models, despite their impressive linguistic capabilities, have an internal representational structure that is not well understood. The researchers hypothesize that these models may possess rich geometric and topological properties that organize conceptual knowledge in a hierarchical fashion.
To investigate this, the authors use a variety of techniques from topology and geometry. First, they leverage manifold learning algorithms to extract low-dimensional manifold structures from the high-dimensional vector representations of concepts within the models. This reveals that broader categorical concepts occupy larger, more diffuse regions, while specific subcategories form tighter, more concentrated clusters.
Next, the researchers apply persistent homology, a technique from algebraic topology, to uncover the hierarchical relationships between these conceptual representations. They find clear topological structures that mirror human taxonomic knowledge, with subcategories nesting within broader categories.
The paper also explores how these geometric and topological properties relate to the models' ability to [reason about and manipulate concepts](https://aimodels.fyi/papers/arxiv/neural-semantic-parsing-extremely-rich-symbolic-meaning) in downstream tasks. The authors provide visualizations and quantitative analyses to support their findings.
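The claim that broad categories occupy diffuse regions while subcategories form tight clusters can be made concrete with a small NumPy sketch on made-up 2-D "embeddings". The vectors and the spread measure below are illustrative assumptions, not the paper's actual model activations or its persistent-homology pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "embedding space": each subcategory is a tight cluster, and the broad
# category is the union of its subcategories (hypothetical vectors)
dog = rng.normal(loc=[2.0, 0.0], scale=0.1, size=(50, 2))
cat = rng.normal(loc=[-2.0, 0.0], scale=0.1, size=(50, 2))
animal = np.vstack([dog, cat])

def spread(points):
    """Mean distance of points to their centroid: a crude measure of diffuseness."""
    centroid = points.mean(axis=0)
    return np.linalg.norm(points - centroid, axis=1).mean()

# The broad category occupies a larger, more diffuse region than either subcategory
print(spread(animal) > spread(dog) and spread(animal) > spread(cat))
```

In this toy setup the "animal" point cloud is necessarily at least as spread out as its subcategory clusters, mirroring the diffuse-category/tight-subcategory geometry the researchers report.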
## Critical Analysis
The researchers present a compelling and rigorous analysis of the geometric and topological properties underlying the conceptual representations in large language models. By leveraging advanced mathematical techniques, they are able to uncover structural insights that were previously hidden within these complex systems.
One potential limitation of the study is the reliance on a single language model (GPT-3) and a limited set of conceptual categories. It would be valuable to extend the analysis to a broader range of models and concept types to validate the generalizability of the findings.
Additionally, while the paper provides evidence for the hierarchical organization of concepts, it does not fully address the question of how this structure emerges during the training process. Further research could explore the [developmental dynamics](https://aimodels.fyi/papers/arxiv/learning-discrete-concepts-latent-hierarchical-models) that lead to the formation of these conceptual geometries.
Overall, this study makes an important contribution to our understanding of how large language models represent and organize knowledge. The insights could have significant implications for fields like [commonsense reasoning](https://aimodels.fyi/papers/arxiv/contextual-categorization-enhancement-through-llms-latent-space), [semantic parsing](https://aimodels.fyi/papers/arxiv/neural-semantic-parsing-extremely-rich-symbolic-meaning), and [knowledge extraction](https://aimodels.fyi/papers/arxiv/not-all-language-model-features-are-linear) from these powerful AI systems.
## Conclusion
This paper presents a novel investigation into the geometric and topological structure of conceptual representations in large language models. The researchers find that broader categorical concepts occupy larger, more diffuse regions, while specific subcategories form tighter, more concentrated clusters. Importantly, they also uncover clear hierarchical relationships between these conceptual representations, mirroring the way humans organize knowledge.
By leveraging advanced mathematical techniques, the authors are able to shed light on the complex inner workings of these powerful AI systems. The insights gained could have significant implications for our understanding of how large language models represent and reason about meaning, with potential applications in areas like commonsense reasoning, knowledge extraction, and semantic parsing.
**If you enjoyed this summary, consider subscribing to the [AImodels.fyi newsletter](https://aimodels.substack.com) or following me on [Twitter](https://twitter.com/mikeyoung44) for more AI and machine learning content.**
---

# Knockout: A simple way to handle missing inputs

*Published 2024-06-07 | [canonical post](https://aimodels.fyi/papers/arxiv/knockout-simple-way-to-handle-missing-inputs) | tags: machinelearning, ai, beginners, datascience*

*This is a Plain English Papers summary of a research paper called [Knockout: A simple way to handle missing inputs](https://aimodels.fyi/papers/arxiv/knockout-simple-way-to-handle-missing-inputs). If you like these kinds of analysis, you should subscribe to the [AImodels.fyi newsletter](https://aimodels.substack.com) or follow me on [Twitter](https://twitter.com/mikeyoung44).*
## Overview
- This paper introduces a simple yet effective method called "Knockout" for handling missing inputs in machine learning models.
- The method involves randomly masking or "knocking out" a portion of the input features during training, forcing the model to learn to make predictions without access to all the information.
- The authors demonstrate that this simple technique can lead to significant improvements in model performance, especially when dealing with real-world datasets that often contain missing data.
## Plain English Explanation
The paper presents a new method called "Knockout" that can help machine learning models handle missing data more effectively. In the real world, it's common for datasets to be incomplete, with some of the input features missing. This can be a challenge for machine learning models, which typically expect a complete set of inputs.
The Knockout method addresses this problem by randomly masking or "knocking out" a portion of the input features during the model's training process. This forces the model to learn how to make accurate predictions even when it doesn't have access to all the information it would normally rely on.
For example, imagine you're training a model to predict a person's income based on factors like their education, job, and location. With the Knockout method, the model would sometimes be trained on datasets where some of these input features are missing. This helps the model learn to work with incomplete information and perform well even when faced with real-world data that has missing values.
The authors [demonstrate that this simple technique can lead to significant improvements in model performance](https://aimodels.fyi/papers/arxiv/imputation-using-training-labels-classification-via-label), especially when dealing with real-world datasets that often contain missing data. By forcing the model to be more robust to missing inputs, the Knockout method can make it more reliable and useful in practical applications.
## Technical Explanation
The core idea behind the Knockout method is to randomly mask or "knock out" a portion of the input features during the model's training process. This is done by applying a binary mask to the input, where some features are set to a special "missing" value (e.g., 0) while the rest are left unchanged.
The authors [show that this simple technique can lead to significant improvements in model performance](https://aimodels.fyi/papers/arxiv/data-imputation-by-pursuing-better-classification-supervised), especially when dealing with real-world datasets that often contain missing data. By forcing the model to learn to make predictions without access to all the input features, the Knockout method helps it become more robust and adaptable to incomplete information.
The authors [compare the Knockout method to other approaches for handling missing data, such as data imputation](https://aimodels.fyi/papers/arxiv/masked-language-modeling-becomes-conditional-density-estimation) and show that it can outperform these methods on a variety of benchmark tasks. They also [explore the relationship between the amount of masking and the model's performance](https://aimodels.fyi/papers/arxiv/pre-training-random-orthogonal-projection-image-modeling), providing insights into the tradeoffs involved in choosing the right level of masking.
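A minimal sketch of the masking step in NumPy follows; the sentinel value of 0 and the masking rate are illustrative choices for this sketch, not parameters taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def knockout(x, p=0.3, missing_value=0.0, rng=rng):
    """Randomly mask a fraction p of input features, simulating missing inputs.

    During training the model sees these masked inputs, so at test time it can
    handle genuinely missing features by substituting the same missing_value.
    """
    mask = rng.random(x.shape) < p          # True where a feature is "knocked out"
    x_masked = np.where(mask, missing_value, x)
    return x_masked, mask

batch = rng.normal(size=(4, 8))             # 4 samples, 8 features each
masked, mask = knockout(batch, p=0.5)

# Knocked-out positions carry the missing value; the rest are untouched
print(np.all(masked[mask] == 0.0) and np.all(masked[~mask] == batch[~mask]))
```

In a training loop, `masked` would be fed to the model in place of `batch`, so the model learns to predict well regardless of which features happen to be present.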
## Critical Analysis
The Knockout method is a simple and elegant solution to a common problem in machine learning, and the authors demonstrate its effectiveness on several benchmark tasks. However, the paper does not address some potential limitations or areas for further research.
For example, the authors do not explore how the Knockout method might perform on datasets with more complex patterns of missing data, such as when the missingness is correlated with the target variable or other input features. It would be interesting to see how the method holds up in these more challenging scenarios.
Additionally, the authors do not provide much insight into the underlying mechanisms that make the Knockout method effective. [Exploring the model's learned representations and decision-making processes could lead to a deeper understanding of the method's strengths and limitations](https://aimodels.fyi/papers/arxiv/tilt-your-head-activating-hidden-spatial-invariance).
Overall, the Knockout method appears to be a promising approach for handling missing data in machine learning, but further research is needed to fully understand its capabilities and potential drawbacks.
## Conclusion
The Knockout method introduced in this paper offers a simple yet effective way to make machine learning models more robust to missing data. By randomly masking a portion of the input features during training, the method forces the model to learn to make accurate predictions even with incomplete information.
The authors' experiments demonstrate that this simple technique can lead to significant improvements in model performance, particularly on real-world datasets that often contain missing values. While the paper does not address all the potential limitations of the method, it presents a compelling approach that could have important implications for a wide range of machine learning applications.
As datasets continue to grow in complexity and the demand for robust, reliable models increases, techniques like Knockout may become increasingly valuable tools in the machine learning practitioner's toolkit.
**If you enjoyed this summary, consider subscribing to the [AImodels.fyi newsletter](https://aimodels.substack.com) or following me on [Twitter](https://twitter.com/mikeyoung44) for more AI and machine learning content.**
---

# Mamba: Linear-Time Sequence Modeling with Selective State Spaces

*Published 2024-06-07 | [canonical post](https://aimodels.fyi/papers/arxiv/mamba-linear-time-sequence-modeling-selective-state) | tags: machinelearning, ai, beginners, datascience*

*This is a Plain English Papers summary of a research paper called [Mamba: Linear-Time Sequence Modeling with Selective State Spaces](https://aimodels.fyi/papers/arxiv/mamba-linear-time-sequence-modeling-selective-state). If you like these kinds of analysis, you should subscribe to the [AImodels.fyi newsletter](https://aimodels.substack.com) or follow me on [Twitter](https://twitter.com/mikeyoung44).*
## Overview
- Foundation models, the backbone of modern deep learning applications, are often based on the computationally inefficient Transformer architecture and its attention module.
- Researchers have developed several subquadratic-time models, such as [linear attention](https://aimodels.fyi/papers/arxiv/transformers-are-ssms-generalized-models-efficient-algorithms), gated convolution, and [structured state space models (SSMs)](https://aimodels.fyi/papers/arxiv/mamba-360-survey-state-space-models-as), to address this issue, but they have not matched the performance of attention on important modalities like language.
- The key weakness of these models is their inability to perform content-based reasoning, which this research aims to address.
## Plain English Explanation
The most powerful deep learning models today, known as "foundation models," are often built using a specific architecture called the Transformer. While the Transformer is very effective, it has a significant downside: it is computationally expensive, especially when dealing with long sequences of data.
To address this issue, researchers have developed alternative models that are more efficient, such as [linear attention](https://aimodels.fyi/papers/arxiv/transformers-are-ssms-generalized-models-efficient-algorithms), gated convolution, and [structured state space models (SSMs)](https://aimodels.fyi/papers/arxiv/mamba-360-survey-state-space-models-as). These models are able to process information faster, but they haven't been able to match the performance of the Transformer, particularly when it comes to language-based tasks.
The researchers identify a key weakness in these alternative models: they struggle with "content-based reasoning," which means they have difficulty understanding and processing the actual content of the data, rather than just the sequence of the data. The researchers set out to address this weakness and develop a more efficient model that can still perform well on important tasks like language modeling.
## Technical Explanation
The researchers make two key improvements to address the content-based reasoning weakness of subquadratic-time models like [SSMs](https://aimodels.fyi/papers/arxiv/mamba-360-survey-state-space-models-as):
1. **Allowing the SSM parameters to be functions of the input**: This enables the model to selectively propagate or forget information along the sequence length dimension based on the current token, improving its performance on discrete modalities like language.
2. **Designing a hardware-aware parallel algorithm in recurrent mode**: Even though this change prevents the use of efficient convolutions, the researchers develop a parallel algorithm that maintains the linear scaling of the model in sequence length.
The researchers integrate these "selective SSMs" into a simplified end-to-end neural network architecture called Mamba, which does not use attention or even MLP blocks. Mamba enjoys fast inference (5x higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences.
The researchers demonstrate that Mamba, as a general sequence model backbone, can achieve state-of-the-art performance across several modalities, including language, audio, and genomics. On language modeling specifically, their Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation.
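The "selectively propagate or forget" mechanism can be caricatured as an input-dependent gated recurrence. The sketch below is a deliberately simplified toy in NumPy, not Mamba's actual discretized SSM parameterization, but it shows the linear-time scan in which the retention coefficient is a function of the current token:

```python
import numpy as np

def selective_scan(x, w_gate):
    """Toy caricature of a selective state-space recurrence.

    The decay a_t (how much past state to keep) depends on the current input,
    letting the scan choose to propagate or forget information per token.
    Runs in O(sequence length), unlike quadratic attention.
    """
    h = 0.0
    out = []
    for x_t in x:
        a_t = 1.0 / (1.0 + np.exp(-w_gate * x_t))  # input-dependent retention in (0, 1)
        h = a_t * h + (1.0 - a_t) * x_t            # blend old state with new input
        out.append(h)
    return np.array(out)

x = np.array([1.0, 1.0, 1.0, -5.0, 1.0])
y = selective_scan(x, w_gate=2.0)
print(y.shape == x.shape)
```

Because the state update touches each token exactly once, the cost grows linearly with sequence length; the hardware-aware contribution of the paper is a parallel formulation of exactly this kind of recurrent scan.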
## Critical Analysis
The researchers acknowledge that their selective SSM approach prevents the use of efficient convolutions, which are a key component of many state-of-the-art sequence models. However, they argue that their custom parallel algorithm in recurrent mode maintains the linear scaling of the model in sequence length, which is a significant advantage over attention-based models.
One potential limitation of the research is that it does not provide a detailed comparison of the computational and memory requirements of Mamba versus Transformer-based models. While the authors claim Mamba enjoys faster inference, more concrete benchmarks would help readers understand the practical implications of this improvement.
Additionally, the researchers do not delve into the potential biases or limitations of the Mamba architecture. As with any deep learning model, it is crucial to understand how the model's design choices and training data may lead to biased or problematic outputs, especially when deploying Mamba in real-world applications.
## Conclusion
This research presents a novel approach to addressing the computational inefficiency of Transformer-based foundation models, which are the backbone of many state-of-the-art deep learning applications. By developing a [selective SSM](https://aimodels.fyi/papers/arxiv/mambats-improved-selective-state-space-models-long) architecture and integrating it into the Mamba model, the researchers have achieved significant improvements in inference speed and sequence length scaling, while maintaining competitive performance on a range of modalities, including language.
The [Mamba](https://aimodels.fyi/papers/arxiv/demystify-mamba-vision-linear-attention-perspective) model's [dual-path architecture](https://aimodels.fyi/papers/arxiv/dual-path-mamba-short-long-term-bidirectional) and ability to perform content-based reasoning suggest it could be a valuable alternative to attention-based models in many deep learning applications, particularly those that require processing of long sequences. As the field of deep learning continues to evolve, innovations like Mamba will play a crucial role in making these powerful models more accessible and practical for real-world use.
**If you enjoyed this summary, consider subscribing to the [AImodels.fyi newsletter](https://aimodels.substack.com) or following me on [Twitter](https://twitter.com/mikeyoung44) for more AI and machine learning content.**