id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,916,127 | The for Loop | The general form of the for loop for repeating a single statement. Initialization: Sets the value... | 0 | 2024-07-09T22:03:07 | https://dev.to/devsjavagirls/o-laco-for-3ii8 | java | - The general form of the for loop repeats a single statement.

- Initialization: Sets the initial value of the loop's control variable.
- Condition: A boolean expression that decides whether the loop continues.
- Iteration: Defines how the control variable is updated on each iteration.
- The for loop continues as long as the specified condition is true.
- When the condition becomes false, program execution continues after the loop.
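As a minimal sketch, the three parts map onto Java syntax like this (class and variable names are illustrative):

```java
public class ForDemo {
    public static void main(String[] args) {
        // initialization: i = 1; condition: i <= 5; iteration: i++
        for (int i = 1; i <= 5; i++) {
            System.out.println("i = " + i);
        }
        // when i <= 5 becomes false, execution continues here
    }
}
```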
- This Java program calculates and displays the square roots of the numbers from 1 to 99, along with the rounding error of each calculation.

The rounding error is computed by squaring the square root of each number and subtracting that result from the original number.
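A sketch of such a program, assuming the rounding error is measured by squaring the computed root and comparing it to the original number:

```java
public class SqrRoot {
    public static void main(String[] args) {
        for (int num = 1; num < 100; num++) {
            double sroot = Math.sqrt(num);
            // squaring the root and subtracting from num exposes the rounding error
            double rerr = num - sroot * sroot;
            System.out.println("Square root of " + num + " is " + sroot
                               + ", rounding error: " + rerr);
        }
    }
}
```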
- The program uses a for loop that iterates from 100 down to -95 in decrements of 5, displaying each value of x on every iteration and demonstrating the flexibility of the for loop in moving its control variable in either direction.
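The downward-counting loop described above can be sketched as follows (class name is illustrative):

```java
public class DecrLoop {
    public static void main(String[] args) {
        // the control variable moves downward, from 100 to -95, in steps of 5
        for (int x = 100; x > -100; x -= 5) {
            System.out.println("x = " + x);
        }
    }
}
```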

- The conditional expression is always tested at the start of the loop. This means the code inside the loop may never execute if the condition is false on entry.

- The loop will never execute, because its control variable, count, is already greater than 5 when we enter the loop for the first time. That makes the conditional expression, count < 5, false from the start, so not even one iteration of the loop occurs.
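A minimal sketch of such a never-executing loop (class name is illustrative):

```java
public class NoLoop {
    public static void main(String[] args) {
        int count = 10;
        // count < 5 is false on entry, so the body never runs
        for ( ; count < 5; count++) {
            System.out.println("This line is never printed");
        }
        System.out.println("Loop skipped; count is still " + count);
    }
}
```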
**Some variations of the for loop**
The for loop is one of the most versatile statements in the Java language because it permits many variations.

The output of the program is shown here:
i and j: 0 10
i and j: 1 9
i and j: 2 8
i and j: 3 7
i and j: 4 6
- The program uses commas to separate the initialization statements (i = 0, j = 10) and the iteration expressions (i++, j--).
- At the start of the loop, i is initialized to 0 and j to 10.
- On each iteration of the loop, i is incremented and j is decremented simultaneously.
- Using multiple control variables in a for loop can simplify certain algorithms.
- The condition that controls the loop can be any valid boolean expression; it need not directly involve the loop's control variables.
- A practical example would be a loop that keeps running until the user types the letter 'S' on the keyboard.
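A sketch of the comma-separated, two-variable loop that produces the "i and j" output shown above (class name is illustrative):

```java
public class Comma {
    public static void main(String[] args) {
        // two control variables, separated by commas in the
        // initialization and iteration portions of the for
        for (int i = 0, j = 10; i < j; i++, j--) {
            System.out.println("i and j: " + i + " " + j);
        }
    }
}
```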

**Missing parts**
- We can leave some, or all, of the parts of the loop definition empty.

- The for's iteration expression is empty. Instead, the control variable i is incremented inside the body of the loop.
Output of the code:
Pass #0
Pass #1
Pass #2
Pass #3
Pass #4
Pass #5
Pass #6
Pass #7
Pass #8
Pass #9
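A sketch of a loop that produces the "Pass" output above, with the iteration expression left empty (class name is illustrative):

```java
public class PassDemo {
    public static void main(String[] args) {
        // iteration expression left empty; i is updated in the body instead
        for (int i = 0; i < 10; ) {
            System.out.println("Pass #" + i);
            i++;
        }
    }
}
```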
- Example with the initialization removed:

- i is initialized before the loop begins rather than as part of the for. Normally, we prefer to initialize the control variable inside the for.
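The variant with the initialization moved out of the for can be sketched as (class name is illustrative):

```java
public class InitOutside {
    public static void main(String[] args) {
        int i = 0; // control variable initialized before the loop
        for ( ; i < 10; i++) {
            System.out.println("Pass #" + i);
        }
    }
}
```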
**The infinite loop**
An infinite loop can be created with for by leaving the conditional expression empty.
This type of loop runs indefinitely.
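A minimal sketch, with the loop terminated after a few passes so the example actually finishes (class name and cutoff are illustrative):

```java
public class Forever {
    public static void main(String[] args) {
        int n = 0;
        for ( ; ; ) {          // all three parts empty: loops "forever"
            System.out.println("n = " + n);
            n++;
            if (n == 3) break; // a special criterion ends the loop
        }
    }
}
```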
Although some programs, such as those that process operating-system commands, require infinite loops, in most cases such loops need special criteria for termination, usually implemented with the break statement. | devsjavagirls |
1,916,034 | 🐦JSON vs. BSON🐦 | What's the Difference? Hey, Devs! 👋 Let's talk about two popular data formats: JSON and... | 0 | 2024-07-08T15:46:19 | https://dev.to/gadekar_sachin/json-vs-bson-4o5c | development, javascript, beginners, programming |
## What's the Difference?
Hey, Devs! 👋 Let's talk about two popular data formats: JSON and BSON. Understanding their differences can help you choose the right one for your projects. Ready? Let's dive in!
### 1. JSON (JavaScript Object Notation)
📝 **What is JSON?**
- Lightweight data interchange format.
- Easy to read and write.
- Text-based, uses human-readable text to store and transmit data objects.
💡 **Key Features:**
- **Simplicity:** Easy to parse and generate.
- **Readability:** Human-readable and easy to debug.
- **Language Agnostic:** Supported by many programming languages.
### 2. BSON (Binary JSON)
📝 **What is BSON?**
- Binary-encoded serialization of JSON-like documents.
- Designed to be efficient both in storage and scan-speed.
💡 **Key Features:**
- **Binary Format:** Faster read/write operations.
- **Type Information:** Supports more data types like Date and BinData.
- **Efficient Storage:** More compact than JSON for some types of data.
### 3. JSON vs. BSON: A Quick Comparison
| Feature | JSON | BSON |
|-------------------|------------------------------------------|-------------------------------|
| Format | Text | Binary |
| Readability | Human-readable | Not human-readable |
| Data Types | Limited (strings, numbers, arrays, etc.) | Extensive (Date, BinData, etc.) |
| Use Case | Data interchange | Efficient storage and retrieval|
### 4. When to Use Which?
🔍 **Use JSON when:**
- You need human-readable data.
- You're working with web APIs.
- Simplicity and ease of use are priorities.
🚀 **Use BSON when:**
- You need efficient data storage and retrieval.
- Working with MongoDB (it's the default format).
- Performance is critical, and binary format is acceptable.
### 5. Summary
JSON is great for human readability and simplicity, while BSON excels in performance and efficiency for certain applications. Choose wisely based on your project's needs! 🛠️
| gadekar_sachin |
1,916,035 | Automating EC2 Instance Management with AWS Lambda and EventBridge Using Terraform | Introduction: EC2 instances are virtual servers for running applications on the AWS infrastructure.... | 0 | 2024-07-08T15:47:16 | https://dev.to/mohanapriya_s_1808/automating-ec2-instance-management-with-aws-lambda-and-eventbridge-using-terraform-38jm | **Introduction:**
_EC2 instances are virtual servers for running applications on the AWS infrastructure. They are crucial for providing scalable computing capacity, allowing users to deploy and manage applications efficiently in the cloud. EC2 instances are widely used for hosting websites, running databases, and handling various computing workloads._
_Managing EC2 instances manually can be a daunting task, especially when dealing with multiple instances and varying usage patterns. Automating this process not only saves time but also ensures that your resources are used efficiently, leading to significant cost savings. By leveraging AWS Lambda, EventBridge, and Terraform, you can create an automated solution that starts and stops your EC2 instances based on a schedule, ensuring optimal resource utilization and cost efficiency._
_In this guide, we'll take you through the entire process of setting up this automation, from creating the EC2 instances to configuring the Lambda functions and EventBridge rules using Terraform. Let's dive in and unlock the potential of automated cloud resource management!_
**Architecture:**

**EC2:** _An EC2 instance is a virtual server which is used for running applications on the AWS infrastructure._
**Lambda:** _AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. It automatically scales applications by running code in response to events. Lambda is widely used for event-driven applications, real-time file processing, and backend services._
**EventBridge:** _Amazon EventBridge is a serverless event bus service that makes it easy to connect applications using data from your own apps, SaaS apps, and AWS services. It simplifies event-driven architecture by routing events between services and allowing you to build scalable, event-driven workflows for various use cases such as application integration, automation, and observability._
**IAM Role:** _An IAM (Identity and Access Management) role in AWS defines permissions for entities like AWS services or users, ensuring secure access to AWS resources without needing long-term credentials. Roles are used to delegate permissions across AWS services and are integral for managing security and access control within cloud environments._
**Pre-requisites:**
Before we dive into the steps, let's ensure you have the following prerequisites in place:
1. **AWS Account:** _If you don't have one, sign up for an AWS account._
2. **Terraform Installed:** _Download and install Terraform from the official website._
3. **AWS CLI Installed:** _Install the AWS CLI by following the instructions here._
4. **AWS Credentials Configured:** _Configure your AWS CLI with your credentials by running aws configure._
**Step-By-Step Procedure:**
_We'll walk you through the entire process of setting up this automation using Terraform. The steps include configuring the AWS provider, creating the EC2 instances, setting up IAM roles and policies, defining the Lambda functions, and creating the EventBridge rules._
**Step-1:** _Create a main.tf file. This file contains the configuration for creating three instances, an IAM role that lets the Lambda functions access the EC2 instances, Lambda functions for starting and stopping the EC2 instances, and EventBridge rules that trigger the startec2instance and stopec2instance Lambda functions._
```hcl
provider "aws" {
  region = "ap-south-1"
}

resource "aws_instance" "ec2" {
  count                  = var.instance_count
  ami                    = "ami-02a2af70a66af6dfb"
  instance_type          = "t2.micro" # Update with your desired instance type
  vpc_security_group_ids = [var.security_group_id]
  subnet_id              = var.subnet_id
  key_name               = var.key

  tags = merge(var.default_ec2_tags,
    {
      Name = "${var.name}-${count.index + 1}"
    }
  )
}

resource "aws_iam_role" "lambda_role" {
  name = "lambda_role"

  # Terraform's "jsonencode" function converts a
  # Terraform expression result to valid JSON syntax.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      },
    ]
  })

  tags = {
    tag-key = "tag-value"
  }
}

resource "aws_iam_policy" "lambda_policy_start_stop_instance" {
  name        = "lambda_policy_start_stop_instance"
  path        = "/"
  description = "My test policy"

  # Terraform expression result to valid JSON syntax.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ]
        Resource = "arn:aws:logs:*:*:*"
      },
      {
        Effect = "Allow"
        Action = [
          "ec2:Start*",
          "ec2:Stop*",
          "ec2:Describe*"
        ]
        Resource = "*"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "test-attach" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = aws_iam_policy.lambda_policy_start_stop_instance.arn
}

resource "aws_lambda_function" "stop_ec2_instance" {
  # If the file is not in the current working directory you will need to include a
  # path.module in the filename.
  filename         = "stopec2instance.zip"
  function_name    = "stop_ec2_instance"
  role             = aws_iam_role.lambda_role.arn
  handler          = "stopec2instance.lambda_handler"
  source_code_hash = filebase64sha256("stopec2instance.zip")
  runtime          = "python3.11"
}

resource "aws_lambda_function" "start_ec2_instance" {
  # If the file is not in the current working directory you will need to include a
  # path.module in the filename.
  filename         = "startec2instance.zip"
  function_name    = "startec2instance"
  role             = aws_iam_role.lambda_role.arn
  handler          = "startec2instance.lambda_handler"
  source_code_hash = filebase64sha256("startec2instance.zip")
  runtime          = "python3.11"
}

resource "aws_cloudwatch_event_rule" "stop_ec2_schedule" {
  name                = "stop_ec2_schedule"
  description         = "Schedule to trigger Lambda to stop EC2 instances every 2 minutes"
  schedule_expression = "rate(2 minutes)"
}

resource "aws_cloudwatch_event_target" "stop_ec2_target" {
  rule      = aws_cloudwatch_event_rule.stop_ec2_schedule.name
  target_id = "lambda"
  arn       = aws_lambda_function.stop_ec2_instance.arn
}

resource "aws_lambda_permission" "allow_cloudwatch_stop" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.stop_ec2_instance.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.stop_ec2_schedule.arn
}

resource "aws_cloudwatch_event_rule" "start_ec2_schedule" {
  name                = "start_ec2_schedule"
  description         = "Schedule to trigger Lambda to start EC2 instances every 1 minute"
  schedule_expression = "rate(1 minute)"
}

resource "aws_cloudwatch_event_target" "start_ec2_target" {
  rule      = aws_cloudwatch_event_rule.start_ec2_schedule.name
  target_id = "lambda"
  arn       = aws_lambda_function.start_ec2_instance.arn
}

resource "aws_lambda_permission" "allow_cloudwatch_start" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.start_ec2_instance.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.start_ec2_schedule.arn
}
```
**Step-2:** _Create a variables.tf file._
```hcl
variable "instance_count" {
  description = "Number of EC2 instances to create"
  default     = 3
}

variable "security_group_id" {
  description = "ID of the security group for EC2 instances"
}

variable "subnet_id" {
  description = "ID of the subnet for EC2 instances"
}

variable "key" {
  description = "Name of the SSH key pair for EC2 instances"
}

variable "name" {
  description = "Name prefix for EC2 instances"
}

variable "default_ec2_tags" {
  type        = map(string)
  description = "(optional) default tags for EC2 instances"
  default = {
    managed_by  = "terraform"
    Environment = "Dev"
  }
}
```
**Step-3:** _Create a terraform.tfvars file, which contains configuration values such as the number of instances, the security group ID, the subnet ID, the key pair name, and the name prefix for the instances._
```hcl
instance_count    = 3
security_group_id = "sg-0944b5d5471b421fb"
subnet_id         = "subnet-0582feff6651618d4"
key               = "mynewkeypair"
name              = "EC2-Test-Instance"
```
**Step-4:** _Create two Python files, stopec2instance.py and startec2instance.py; these files contain the code for the Lambda functions. Make sure the Python files are zipped and that the zip files lie in the same directory._
```python
# stopec2instance.py
import boto3


def is_dev(instance):
    """Return True if the instance carries the tag Environment=Dev."""
    is_dev = False
    if 'Tags' in instance:
        for tag in instance['Tags']:
            if tag['Key'] == 'Environment' and tag['Value'] == 'Dev':
                is_dev = True
                break
    return is_dev


def is_running(instance):
    return instance['State']['Name'] == 'running'


def lambda_handler(event, context):
    ec2 = boto3.client('ec2', region_name='ap-south-1')
    try:
        response = ec2.describe_instances()
        reservations = response['Reservations']
        for reservation in reservations:
            for instance in reservation['Instances']:
                if is_dev(instance) and is_running(instance):
                    instance_id = instance['InstanceId']
                    ec2.stop_instances(InstanceIds=[instance_id])
                    print(f'Stopping instance: {instance_id}')
    except Exception as e:
        print(f'Error stopping instances: {str(e)}')
    return {
        'statusCode': 200,
        'body': 'Function executed successfully'
    }
```
```python
# startec2instance.py
import boto3


def is_dev(instance):
    """Return True if the instance carries the tag Environment=Dev."""
    is_dev = False
    if 'Tags' in instance:
        for tag in instance['Tags']:
            if tag['Key'] == 'Environment' and tag['Value'] == 'Dev':
                is_dev = True
                break
    return is_dev


def is_stopped(instance):
    return instance['State']['Name'] == 'stopped'


def lambda_handler(event, context):
    ec2 = boto3.client('ec2', region_name='ap-south-1')
    try:
        response = ec2.describe_instances()
        reservations = response['Reservations']
        for reservation in reservations:
            for instance in reservation['Instances']:
                if is_dev(instance) and is_stopped(instance):
                    instance_id = instance['InstanceId']
                    ec2.start_instances(InstanceIds=[instance_id])
                    print(f'Starting instance: {instance_id}')
    except Exception as e:
        print(f'Error starting instances: {str(e)}')
    return {
        'statusCode': 200,
        'body': 'Function executed successfully'
    }
```
**terraform init:** _Initializes the backend. In this step Terraform checks which provider is used and downloads all of that provider's dependencies (AWS in our case). If everything is fine, the output will look something like this:_

**terraform plan:** _In this step Terraform shows you how many resources it will create, like this:_

**terraform apply:** _In this step Terraform actually creates the resources planned in the previous step._

_Once all the resources are created the output will be like this:_
**EC2 instance**

**Lambda Function**

**EventBridge Rules**

_Whenever the lambda function is triggered by EventBridge rules the output will be like this_



_If you want to delete the resources, run terraform destroy._
**Conclusion:**
_By automating the start and stop of EC2 instances using Lambda, EventBridge, and Terraform, we've created an efficient and cost-effective solution for managing our cloud resources. This setup can be easily adapted to suit different schedules and requirements._
_Happy automating!_
| mohanapriya_s_1808 | |
1,916,037 | Unlocking the Potential of React AI Assistants with Sista AI | Integrating AI with React: A Comprehensive Guide Integrating AI into a React app can seem daunting,... | 27,994 | 2024-07-08T15:51:40 | https://dev.to/sista-ai/unlocking-the-potential-of-react-ai-assistants-with-sista-ai-3npo | ai, react, javascript, typescript | <h2>Integrating AI with React: A Comprehensive Guide</h2><p>Integrating AI into a React app can seem daunting, especially for those without extensive machine learning experience. However, with the right tools and pre-trained models, it can be achieved with relative ease.</p><p>This article will explore how to combine AI with React, highlighting the use of pre-trained models and various AI assistant apps that can enhance user experience.</p><h2>Using Pre-Trained Models with React</h2><p>TensorFlow.js provides a range of pre-trained models that can be easily imported and utilized in a React app. For example, the mobilenet model can be used for image recognition tasks.</p><p>To set up a React project with the mobilenet model, you need to install the necessary dependencies using npm or yarn.</p><h2>AI Assistant Apps for Everyday Tasks</h2><p>There are various AI assistant apps available that can assist with everyday tasks. For instance, Superhuman is an AI-enhanced email tool that can help professionals manage their emails more efficiently.</p><p>It integrates with both Gmail and Outlook, providing features such as automatic email categorization and rapid response capabilities.</p><h2>Implementing Chatbots in React Apps</h2><p>Implementing a chatbot in a React app can be achieved using ChatGPT. 
To instruct the assistant to understand the app's functionality, you can generate a sitemap and documentation automatically.</p><p>This approach allows the assistant to learn how the app works and provide more accurate responses to user queries.</p><h2>GitHub Projects for AI Assistants</h2><p>There are numerous AI assistant projects available on GitHub, showcasing a range of capabilities.</p><p>For example, some projects use Meta's LLaMA3 and OpenAI to power their assistants, which can perform tasks such as voice recognition, text-to-speech, and automated chat functionality.</p><h2>Choosing the Best AI Assistant</h2><p>When selecting an AI assistant, it is essential to consider the specific needs of your app or business.</p><p>Different assistants excel in various areas, such as content creation, scheduling, or customer support.</p><p>By understanding how AI assistants work using machine learning and natural language processing, you can make an informed decision about which assistant best suits your requirements.</p><p>In conclusion, integrating AI with React can be achieved through the use of pre-trained models and AI assistant apps.</p><p>By leveraging these tools, developers can create more intuitive and user-friendly applications that enhance productivity and accessibility. Visit <a href='https://smart.sista.ai/?utm_source=sista_blog&utm_medium=blog_post&utm_campaign=Unlocking_the_Potential_of_React_AI_Assistants_with_Sista_AI'>Sista AI</a> and start your free trial today.</p><br/><br/><a href="https://smart.sista.ai?utm_source=sista_blog&utm_medium=blog_post&utm_campaign=big_logo" target="_blank"><img src="https://vuic-assets.s3.us-west-1.amazonaws.com/sista-make-auto-gen-blog-assets/sista_ai.png" alt="Sista AI Logo"></a><br/><br/><p>For more information, visit <a href="https://smart.sista.ai?utm_source=sista_blog&utm_medium=blog_post&utm_campaign=For_More_Info_Banner" target="_blank">sista.ai</a>.</p> | sista-ai |
1,916,038 | Buy verified cash app account | https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash... | 0 | 2024-07-08T15:51:03 | https://dev.to/favoy64573/buy-verified-cash-app-account-3476 | webdev, javascript, beginners, programming | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\n\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. 
If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. 
How To Buy Verified Cash App Accounts.\n\nFor entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.\n\nWhen it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts. With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.\n\nThis article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.\n\n \n\nIs it safe to buy Cash App Verified Accounts?\nCash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.\n\nUnfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. 
How To Buy Verified Cash App Accounts.\n\nCash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.\n\nLeveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.\n\n \n\nWhy you need to buy verified Cash App accounts personal or business?\nThe Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.\n\nTo address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.\n\nIf you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.\n\nImproper payment practices can lead to potential issues with your employees, as they could report you to the government. 
However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.\n\nA Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.\n\nThis accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.\n\n \n\nHow to verify Cash App accounts\nTo ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.\n\nAs part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. 
Buy verified cash app account.\n\n \n\nHow cash used for international transaction?\nExperience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.\n\nNo matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.\n\nUnderstanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.\n\nAs we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.\n\nOffers and advantage to buy cash app accounts cheap?\nWith Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.\n\nWe deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.\n\nEnhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. 
Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.\n\nTrustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.\n\nHow Customizable are the Payment Options on Cash App for Businesses?\nDiscover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.\n\nExplore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.\n\nDiscover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.\n\nWhere To Buy Verified Cash App Accounts\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. 
It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nThe Importance Of Verified Cash App Accounts\nIn today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.\n\nBy acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.\n\nConclusion\nEnhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.\n\nChoose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. 
Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com" | favoy64573 |
1,916,039 | Next.js with Shadcn UI Progress Bar Example | In this tutorial, we will learn how to use a progress bar in Next.js with Shadcn UI. Before using... | 0 | 2024-07-08T15:54:54 | https://frontendshape.com/post/next-13-with-shadcn-ui-progress-bar-example | nextjs, webdev, shadcnui | In this tutorial, we will learn how to use a progress bar in Next.js with Shadcn UI.
Before using the progress bar in Next.js 13 with Shadcn UI, you need to add the component by running `npx shadcn-ui@latest add progress`.
```
npx shadcn-ui@latest add progress
# or
npx shadcn-ui@latest add
```
1. Create a progress bar in Next.js 13 using the Shadcn UI `Progress` component.
```jsx
import { Progress } from "@/components/ui/progress"
export default function ProgressDemo() {
return (
<div className="space-y-2">
<Progress value={10} />
<Progress value={25} />
<Progress value={50} />
<Progress value={75} />
<Progress value={100} />
</div>
)
}
```

2. Implementing a progress bar in Next.js 13 with Shadcn UI using `useEffect`, `useState`, and `setTimeout`.
```jsx
"use client"
import * as React from "react"
import { Progress } from "@/components/ui/progress"
export default function ProgressDemo() {
const [progress, setProgress] = React.useState(13)
React.useEffect(() => {
const timer = setTimeout(() => setProgress(66), 500)
return () => clearTimeout(timer)
}, [])
return <Progress value={progress} className="w-[60%]" />
}
```

3. Next.js with Shadcn UI progress bar with percentage labels.
```jsx
import { Progress } from "@/components/ui/progress"
export default function ProgressDemo() {
return (
<div className="space-y-4">
<div>
<h2 className="text-xl font-semibold mb-2 text-center">Progress Bars</h2>
<div className="space-y-2">
{/* 10% Progress */}
<div className="flex items-center">
<span className="w-1/6 text-right mr-2">10%</span>
<div className="w-5/6">
<Progress value={10} />
</div>
</div>
{/* 25% Progress */}
<div className="flex items-center">
<span className="w-1/6 text-right mr-2">25%</span>
<div className="w-5/6">
<Progress value={25} />
</div>
</div>
{/* 50% Progress */}
<div className="flex items-center">
<span className="w-1/6 text-right mr-2">50%</span>
<div className="w-5/6">
<Progress value={50} />
</div>
</div>
{/* 75% Progress */}
<div className="flex items-center">
<span className="w-1/6 text-right mr-2">75%</span>
<div className="w-5/6">
<Progress value={75} />
</div>
</div>
{/* 100% Progress */}
<div className="flex items-center">
<span className="w-1/6 text-right mr-2">100%</span>
<div className="w-5/6">
<Progress value={100} />
</div>
</div>
</div>
</div>
</div>
)
}
```

4. Next.js with Shadcn UI progress bar with a pulse animation.
```jsx
import { Progress } from "@/components/ui/progress"
export default function ProgressDemo() {
return (
<div className="space-y-4">
<div>
<h2 className="text-xl font-semibold mb-2 text-center">Progress Bars</h2>
<div className="space-y-2">
{/* 10% Progress */}
<div className="flex items-center">
<span className="w-1/6 text-right mr-2">10%</span>
<div className="w-5/6 animate-pulse">
<Progress value={10} />
</div>
</div>
{/* 25% Progress */}
<div className="flex items-center">
<span className="w-1/6 text-right mr-2">25%</span>
<div className="w-5/6 animate-pulse">
<Progress value={25} />
</div>
</div>
{/* 50% Progress */}
<div className="flex items-center">
<span className="w-1/6 text-right mr-2">50%</span>
<div className="w-5/6 animate-pulse">
<Progress value={50} />
</div>
</div>
</div>
</div>
</div>
)
}
``` | aaronnfs |
1,916,040 | Building a Basic Auth System With Go and MySQL | Go is one of the few languages that have caught my attention in the past. So in our quest to do more,... | 0 | 2024-07-09T06:18:31 | https://dev.to/kalashin1/building-an-auth-system-with-go-and-mysql-1i5h | go, mysql, backend | [Go](https://go.dev/) is one of the few languages that have caught my attention in the past. So in our quest to do more, we will build a simple auth system using Go. I am starting to like writing Go because the syntax is quite elegant and helps to reduce the overall amount of code I have to write.
Go is also very fast and as such it would make so much sense for us to build a backend service with Go, in today’s post we will explore how to build a simple authentication system using Go and as such we will cover the following talking points;
- Project Setup
- Integration with MySQL and GORM
- Create Account
- Login
## Project Setup
The first thing we need to do is set up a new project, and for that we'll need to create a new directory, “go_auth_app”. We'll then navigate into it and run the `go mod init` command.
```bash
mkdir go_auth_app
```
Now we need to navigate into `go_auth_app` and run the `go mod init` command with our module path:
```bash
cd go_auth_app && go mod init go_auth_app
```
Let's install the dependencies we will need for this project, starting with [Gorilla Mux](https://github.com/gorilla/mux), an HTTP router.
```bash
go get -u github.com/gorilla/mux
```
Now let's create a `server.go` file in our project root. This file will contain our server so let's create a basic server.
```go
package main
import (
"net/http"
"github.com/gorilla/mux"
)
func main() {
	r := mux.NewRouter()

	http.Handle("/", r)
	http.ListenAndServe(":8080", nil)
}
```
## Integration with MySQL and GORM
Now we need to install an ORM that will simplify the process of interacting with our database and for that, we will use [GORM](https://gorm.io/docs/). Let's go ahead and add that dependency to our project.
```bash
go get -u gorm.io/gorm
```
Now we have `GORM` installed we need to install a compatible SQL driver.
```bash
go get -u gorm.io/driver/mysql
```
Now let's create a model for a user. To do that, we need to create a new folder, `models`. Inside this folder, we will create a new file, `user.go`, and add the following code:
```go
// go_auth_app/models/user.go
package models
import (
"gorm.io/gorm"
)
type User struct {
gorm.Model
Name string
Age uint
Email string
Phone string
Password string
}
```
## Create User
Now that we have our model set up, we need to create a controller package to export some functions for interacting with our data. Let's make a folder called `controller` (matching the import path we'll use later), and inside it we'll create a new Go file, `user.go`.
```go
// go_auth_app/controller/user.go
package controller

import (
	"encoding/json"
	"net/http"

	"go_auth_app/helper"
	"go_auth_app/models"

	"golang.org/x/crypto/bcrypt"
	"gorm.io/gorm"
)
func ReturnCreateUser(db *gorm.DB) func(http.ResponseWriter, *http.Request) {
return func(w http.ResponseWriter, r *http.Request) {
user, err := helper.ParseRequestBody(r)
		if len(err) > 0 {
http.Error(w, err, http.StatusBadRequest)
return
}
toHash := []byte(user.Password)
hashedPassword, hashErr := bcrypt.GenerateFromPassword(toHash, 14)
		if hashErr != nil {
			http.Error(w, "Failed to hash password", http.StatusInternalServerError)
			return
		}

		createdUser := models.User{Name: user.Name, Age: uint(user.Age), Email: user.Email, Password: string(hashedPassword)}
result := db.Create(&createdUser)
		if result.Error != nil {
http.Error(w, "Failed to create user", http.StatusInternalServerError)
return
}
json.NewEncoder(w).Encode(createdUser)
}
}
```
The code snippet above implements a function called `ReturnCreateUser` in the controller package. Here's a breakdown of what the code does: this function is designed to handle creating a new user in the system. It takes the database connection (`db` of type `*gorm.DB`) as input and returns another function that acts as the actual route handler.
Inside the route handler function, we call `ParseRequestBody` from the helper package. The `ParseRequestBody` function extracts the user data (name, age, email, password) from the request body, which is sent as JSON. If there's an error parsing the request body (the returned error string is non-empty), we send a bad request (400) error response.
Next, we hash the user's password by converting it (a string) to a byte slice (`[]byte`). We use the `bcrypt.GenerateFromPassword` function to securely hash the password. [Bcrypt](https://pkg.go.dev/golang.org/x/crypto/bcrypt) is a popular hashing algorithm for passwords. The function takes the password bytes and a cost factor (14 in this case) to control the hashing intensity. If there's an error during hashing (`hashErr` is not nil), we send an internal server error (500) response with a message and exit. Run the following command to install the bcrypt library.
```bash
go get golang.org/x/crypto/bcrypt
```
Then we create a new `models.User` object, populating it with the parsed data (name, age, email) and the hashed password converted back to a string. Then we save the user to the database using the database connection (`db`): we call the `Create` method on the connection, passing the `createdUser` object as an argument. If the `Create` operation fails (`result.Error` is not nil), we send an internal server error (500) response with a message and exit. If it succeeds, we use `json.NewEncoder` to encode the newly created user object (including the ID generated by the database) back into JSON and send it as the response. Now we need to implement the `helper` package and the `ParseRequestBody` function.
```go
// go_auth_app/helper/helper.go
package helper
import (
"encoding/json"
"io"
"net/http"
)
type Payload struct {
Name string `json:"name"`
Age int `json:"age"`
Email string `json:"email"`
Password string `json:"password"`
}
func ParseRequestBody(req *http.Request) (Payload, string) {
	body, err := io.ReadAll(req.Body)
	if err != nil {
		return Payload{}, "Error reading request body"
	}

	var payload Payload
	err = json.Unmarshal(body, &payload)
	if err != nil {
		return payload, "Invalid JSON format in the request body"
	}

	return payload, ""
}
```
The code snippet above defines a package named helper containing functionalities for parsing request bodies in a Go web application. The package imports the necessary libraries:
- `encoding/json` for working with JSON data.
- `io` for reading data from the request body.
- `net/http` for accessing request information.
Then we define a struct named `Payload`. This struct represents the expected format of the data sent in the request body. It has fields for `Name`, `Age` (an `int`), `Email`, and `Password`. The `json` tags specify the corresponding JSON field names for each struct field during marshaling and unmarshaling. The `ParseRequestBody` function parses the request body and extracts the user data. It takes an `http.Request` object as input and returns two values: a `Payload` populated with the parsed user data (name, age, email, password), and a string holding any error message encountered during parsing (empty when parsing succeeds).
Now let's put all of this together in our `server.go` file
```go
// go_auth_app/server.go
package main
import (
"log"
"net/http"
"github.com/gorilla/mux"
"gorm.io/driver/mysql"
"gorm.io/gorm"
"go_auth_app/controller"
"go_auth_app/models"
)
func main() {
r := mux.NewRouter()
db, err := gorm.Open(mysql.New(mysql.Config{
DSN: "root@tcp(localhost:3306)/test?charset=utf8mb4&parseTime=true",
}))
if err != nil {
panic("failed to connect database")
}
// Migrate the schema
db.AutoMigrate(&models.User{})
r.HandleFunc("/user", controller.ReturnCreateUser(db)).Methods("POST")
r.HandleFunc("/login", controller.ReturnLoginUser(db)).Methods("POST")
http.Handle("/", r)
log.Println("server started on port 8080")
http.ListenAndServe(":8080", nil) // Start server on port 8080
}
```
The code snippet in the `main` package ties together the previously explained controller and helper packages into a functional web server application.

## Login

Now let's add a new handler function so users can log in to their accounts. We'll edit the `controller/user.go` file to add it.
```go
// go_auth_app/controller/user.go
package controller
// cont'd
func ReturnLoginUser(db *gorm.DB) func(http.ResponseWriter, *http.Request) {
return func(w http.ResponseWriter, r *http.Request) {
payload, err := helper.ParseRequestBody(r)
		if len(err) > 0 {
http.Error(w, err, http.StatusBadRequest)
return
}
var user models.User
		db.Where("email = ?", payload.Email).First(&user) // Find user by email
// if the user with that email does not exist throw an error
if user.Email == "" {
http.Error(w, "User not found", http.StatusNotFound)
return
}
// compare their passwords
compareErr := bcrypt.CompareHashAndPassword([]byte(user.Password), []byte(payload.Password))
if compareErr != nil {
http.Error(w, "Invalid credentials", http.StatusUnauthorized)
return
}
// if the passwords match, return the user
json.NewEncoder(w).Encode(user)
}
}
```
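With both routes registered, you can exercise the API from a terminal. A quick sketch of what the requests look like; this assumes the server is running locally on port 8080 and that the `test` database from the DSN exists:

```bash
# create a user (triggers the bcrypt hash + insert)
curl -X POST http://localhost:8080/user \
  -H "Content-Type: application/json" \
  -d '{"name":"Ada","age":30,"email":"ada@example.com","password":"s3cret"}'

# log in with the same credentials
curl -X POST http://localhost:8080/login \
  -H "Content-Type: application/json" \
  -d '{"email":"ada@example.com","password":"s3cret"}'
```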
That's going to be all for now. I hope you found this useful; leave your thoughts on the article and on building a web server with Go in the comment section below. | kalashin1 |
1,916,041 | How to Use Icons in Shadcn UI with Next.js | In this section, we'll see how to use icons in Next.js with ShadCN UI. ShadCN UI includes... | 0 | 2024-07-10T15:33:00 | https://frontendshape.com/post/how-to-use-icon-in-shadcn-ui-with-next-js-13 | nextjs, webdev, shadcnui | In this section, we'll see how to use icons in Next.js with ShadCN UI. ShadCN UI includes Lucide-React icons by default, and you also have the option to incorporate SVG icons from [Heroicons](https://heroicons.com/).
```
npm install lucide-react
# yarn
yarn add lucide-react
```
Next.js with Shadcn UI Icons: Small, Medium, and Large Sizes.
```jsx
import { Mail } from "lucide-react"
export default function DemoIcon() {
return (
<div className="flex gap-4">
<Mail className="h-4 w-4" />
<Mail className="h-6 w-6" />
<Mail className="h-8 w-8" />
<Mail className="h-12 w-12" />
<Mail className="h-16 w-16" />
</div>
)
}
```

Next.js with Shadcn UI button with Icon using lucide-react icon.
```jsx
import { Mail } from "lucide-react"
import { Button } from "@/components/ui/button"
export default function DemoIcon() {
return (
<Button>
<Mail className="mr-2 h-4 w-4" />
Send Mail
</Button>
)
}
```

If you prefer using SVG icons in Next.js with Shadcn UI, you can copy them straight from heroicons.com, or you can install **@heroicons/react**.
```
npm install @heroicons/react
```

Next.js with Shadcn UI SVG Icons: Small, Medium, and Large Sizes.
```jsx
export default function DemoIcon() {
return (
<div className="flex gap-4">
<svg
xmlns="http://www.w3.org/2000/svg"
fill="none"
viewBox="0 0 24 24"
strokeWidth={1.5}
stroke="currentColor"
className="w-4 h-4"
>
<path
strokeLinecap="round"
strokeLinejoin="round"
d="M21.75 6.75v10.5a2.25 2.25 0 01-2.25 2.25h-15a2.25 2.25 0 01-2.25-2.25V6.75m19.5 0A2.25 2.25 0 0019.5 4.5h-15a2.25 2.25 0 00-2.25 2.25m19.5 0v.243a2.25 2.25 0 01-1.07 1.916l-7.5 4.615a2.25 2.25 0 01-2.36 0L3.32 8.91a2.25 2.25 0 01-1.07-1.916V6.75"
/>
</svg>
<svg
xmlns="http://www.w3.org/2000/svg"
fill="none"
viewBox="0 0 24 24"
strokeWidth={1.5}
stroke="currentColor"
className="w-6 h-6"
>
<path
strokeLinecap="round"
strokeLinejoin="round"
d="M21.75 6.75v10.5a2.25 2.25 0 01-2.25 2.25h-15a2.25 2.25 0 01-2.25-2.25V6.75m19.5 0A2.25 2.25 0 0019.5 4.5h-15a2.25 2.25 0 00-2.25 2.25m19.5 0v.243a2.25 2.25 0 01-1.07 1.916l-7.5 4.615a2.25 2.25 0 01-2.36 0L3.32 8.91a2.25 2.25 0 01-1.07-1.916V6.75"
/>
</svg>
<svg
xmlns="http://www.w3.org/2000/svg"
fill="none"
viewBox="0 0 24 24"
strokeWidth={1.5}
stroke="currentColor"
className="w-8 h-8"
>
<path
strokeLinecap="round"
strokeLinejoin="round"
d="M21.75 6.75v10.5a2.25 2.25 0 01-2.25 2.25h-15a2.25 2.25 0 01-2.25-2.25V6.75m19.5 0A2.25 2.25 0 0019.5 4.5h-15a2.25 2.25 0 00-2.25 2.25m19.5 0v.243a2.25 2.25 0 01-1.07 1.916l-7.5 4.615a2.25 2.25 0 01-2.36 0L3.32 8.91a2.25 2.25 0 01-1.07-1.916V6.75"
/>
</svg>
<svg
xmlns="http://www.w3.org/2000/svg"
fill="none"
viewBox="0 0 24 24"
strokeWidth={1.5}
stroke="currentColor"
className="w-12 h-12"
>
<path
strokeLinecap="round"
strokeLinejoin="round"
d="M21.75 6.75v10.5a2.25 2.25 0 01-2.25 2.25h-15a2.25 2.25 0 01-2.25-2.25V6.75m19.5 0A2.25 2.25 0 0019.5 4.5h-15a2.25 2.25 0 00-2.25 2.25m19.5 0v.243a2.25 2.25 0 01-1.07 1.916l-7.5 4.615a2.25 2.25 0 01-2.36 0L3.32 8.91a2.25 2.25 0 01-1.07-1.916V6.75"
/>
</svg>
<svg
xmlns="http://www.w3.org/2000/svg"
fill="none"
viewBox="0 0 24 24"
strokeWidth={1.5}
stroke="currentColor"
className="w-16 h-16"
>
<path
strokeLinecap="round"
strokeLinejoin="round"
d="M21.75 6.75v10.5a2.25 2.25 0 01-2.25 2.25h-15a2.25 2.25 0 01-2.25-2.25V6.75m19.5 0A2.25 2.25 0 0019.5 4.5h-15a2.25 2.25 0 00-2.25 2.25m19.5 0v.243a2.25 2.25 0 01-1.07 1.916l-7.5 4.615a2.25 2.25 0 01-2.36 0L3.32 8.91a2.25 2.25 0 01-1.07-1.916V6.75"
/>
</svg>
</div>
)
}
```

Next.js with Shadcn UI button with SVG Icon using heroicons icon.
```jsx
import { Button } from "@/components/ui/button"
export default function DemoIcon() {
return (
<Button>
<svg
xmlns="http://www.w3.org/2000/svg"
fill="none"
viewBox="0 0 24 24"
strokeWidth={1.5}
stroke="currentColor"
className="mr-1 w-4 h-4"
>
<path
strokeLinecap="round"
strokeLinejoin="round"
d="M21.75 6.75v10.5a2.25 2.25 0 01-2.25 2.25h-15a2.25 2.25 0 01-2.25-2.25V6.75m19.5 0A2.25 2.25 0 0019.5 4.5h-15a2.25 2.25 0 00-2.25 2.25m19.5 0v.243a2.25 2.25 0 01-1.07 1.916l-7.5 4.615a2.25 2.25 0 01-2.36 0L3.32 8.91a2.25 2.25 0 01-1.07-1.916V6.75"
/>
</svg>
Send Mail
</Button>
)
}
```
 | aaronnfs |
1,916,043 | Best Comic Book Store | Check out One of the best places online for comic books. They have a wide variety of comics and... | 0 | 2024-07-08T16:02:44 | https://dev.to/khadija_fahad_151c2358fb8/best-comic-book-store-4o5e | Check out [](https://onlinecomicbookstore.com)
One of the best places online for comic books. They have a wide variety of comics and graphic novels for every fan. Whether you collect comics or just love to read them, this site has something for you. Explore the selection and find your next favorite comic book! | khadija_fahad_151c2358fb8 | |
1,916,044 | Building Ollama Cloud - Scaling Local Inference to the Cloud | Ollama is primarily a wrapper around llama.cpp, designed for local inference tasks. It's not... | 0 | 2024-07-08T16:43:56 | https://dev.to/samyfodil/building-ollama-cloud-scaling-local-inference-to-the-cloud-2i1a | cloudcomputing, rag, webassembly, go |
Ollama is primarily a wrapper around `llama.cpp`, designed for local inference tasks. It's not typically your first choice if you're looking for cutting-edge performance or features, but it has its uses, especially in environments where external dependencies are a concern.
#### Local AI Development
When using Ollama for local AI development, the setup is straightforward but effective. Developers typically leverage Ollama to run inference tasks directly on their local machines. Here's a visual depiction of a typical local development setup using Ollama:

This configuration allows developers to test and iterate quickly without the complexities of remote server communications. It's ideal for initial prototyping and development phases where quick turnaround is critical.
#### From Local to Cloud
Transitioning from a local setup to a scalable cloud environment involves evolving from a simple 1:1 setup (one user request to one inference host) to a more complex many-to-many (multiple user requests to multiple inference hosts) configuration. This shift is necessary to maintain efficiency and responsiveness as demand increases.
Here's how this scaling looks when moving from local development to production:

Adopting a straightforward approach during this transition can significantly increase the complexity of applications, especially as sessions need to maintain consistency across various states. Delays and inefficiencies may arise if requests are not optimally routed to the best available inference host.
Moreover, the complex nature of distributed applications makes them challenging to test locally, which can slow down the development process and increase the risk of failures in production environments.
#### Serverless
Serverless computing abstracts server management and infrastructure details, allowing developers to focus solely on code and business logic. By decoupling request handling and consistency maintenance from the application, serverless architecture simplifies scaling.
This approach allows the application to remain concentrated on delivering value, solving many common scaling challenges without burdening developers with infrastructure complexities.
#### WebAssembly
WebAssembly (Wasm) addresses the challenge of dependency management by enabling the compilation of applications into self-contained modules. This makes apps easier to orchestrate and test both locally and in the cloud, ensuring consistency across different environments.
#### Tau
[](https://github.com/taubyte/tau)
Tau is a framework to build low-maintenance and highly scalable cloud computing platforms. It excels in simplicity and extendibility. Tau makes deployment straightforward and supports running a local cloud for development, allowing for end-to-end (E2E) testing of both the cloud infrastructure and the applications running on it.
This approach, referred to by Taubyte as "Local Coding Equals Global Production," ensures that what works locally will work globally, significantly easing the development and deployment processes.
#### Integrating Ollama into Tau with the Orbit Plugin System
Tau’s plugin system, known as Orbit, significantly simplifies turning services into manageable components by wrapping them into WebAssembly host modules. This approach allows Tau to take over the orchestration duties, streamlining the deployment and management process.
#### Exporting Functions in Ollama
To make Ollama functions accessible within Tau’s ecosystem, we utilize the Orbit system to export Ollama’s capabilities as callable endpoints. Here’s how you can export an endpoint in Go:
```go
func (s *ollama) W_pull(ctx context.Context, module satellite.Module, modelNamePtr uint32, modelNameSize uint32, pullIdptr uint32) Error {
model, err := module.ReadString(modelNamePtr, modelNameSize)
if err != nil {
return ErrorReadMemory
}
id, updateFunc := s.getPullId(model)
if updateFunc != nil {
go func() {
err = server.PullModel(s.ctx, model, &server.RegistryOptions{}, updateFunc)
s.pullLock.Lock()
defer s.pullLock.Unlock()
s.pulls[id].err = err
}()
}
module.WriteUint64(pullIdptr, id)
return ErrorNone
}
```
For a straightforward example of exporting functions, you can refer to the [hello_world example](https://github.com/taubyte/tau/tree/main/pkg/vm-orbit/examples/hello_world).
Once defined, these functions, now called via `satellite.Export`, enable the seamless integration of Ollama into Tau’s environment:
```go
func main() {
server := new(context.TODO(), "/tmp/ollama-wasm")
server.init()
satellite.Export("ollama", server)
}
```
#### Writing Tests for the Ollama Plugin
Testing the plugin is streamlined and straightforward. Here's how you can write a serverless function test in Go:
```go
//export pull
func pull() {
var id uint64
err := Pull("gemma:2b-instruct", &id)
if err != 0 {
panic("failed to call pull")
}
}
```
Using Tau's test suite and Go builder tools, you can build your plugin, deploy it in a test environment, and execute the serverless functions to verify functionality:
```go
func TestPull(t *testing.T) {
ctx := context.Background()
// Create a testing suite to test the plugin
ts, err := suite.New(ctx)
assert.NilError(t, err)
// Use a Go builder to build plugins and wasm
gob := builder.New()
// Build the plugin from the directory
wd, _ := os.Getwd()
pluginPath, err := gob.Plugin(path.Join(wd, "."), "ollama")
assert.NilError(t, err)
// Attach plugin to the testing suite
err = ts.AttachPluginFromPath(pluginPath)
assert.NilError(t, err)
// Build a wasm file from serverless function
wasmPath, err := gob.Wasm(ctx, path.Join(wd, "fixtures", "pull.go"), path.Join(wd, "fixtures", "common.go"))
assert.NilError(t, err)
// Load the wasm module and call the function
module, err := ts.WasmModule(wasmPath)
assert.NilError(t, err)
// Call the "pull" function from our wasm module
_, err = module.Call(ctx, "pull")
assert.NilError(t, err)
}
```
#### Code
You can find the complete code in the [ollama-as-wasm-plugin repository](https://github.com/ollama-cloud/ollama-as-wasm-plugin/tree/main/tau).
#### What's Next?
You can now build LLM applications with ease. Here are the steps to get started:
- [Start locally using `dream`](https://github.com/ollama-cloud/get-started/blob/main/README.md#set-up-your-local-environment): Set up your local environment to develop and test your application.
- [Create a project](https://tau.how/01-getting-started/02-first-project/): Begin a new project with Tau to harness its full potential.
- [Create your production cloud](https://tau.how/01-getting-started/04-deploy-a-cloud/): Deploy your project in a production cloud environment.
- Drop the plugin binary in the `/tb/plugins` folder.
- Import your project into production
- Show off!
| samyfodil |
1,916,045 | Create File Upload UI in Next.js with Shadcn UI | In this tutorial, we will create a file upload feature in Next.js using Shadcn UI. Before use file... | 0 | 2024-07-12T15:32:00 | https://frontendshape.com/post/create-file-upload-in-nextjs-13-with-shadcn-ui | webdev, nextjs, shadcnui | In this tutorial, we will create a file upload feature in Next.js using Shadcn UI.
Before using file upload in Next.js 13 with Shadcn UI, you need to add the input component by running `npx shadcn-ui@latest add input`.
```
npx shadcn-ui@latest add input
# or
npx shadcn-ui@latest add
```
1. Create a File Upload Feature in Next.js Using Shadcn UI's Input and Label Components.
```jsx
import { Input } from "@/components/ui/input"
import { Label } from "@/components/ui/label"
export default function InputFile() {
return (
<div className="grid w-full lg:max-w-sm items-center gap-1.5">
<Label htmlFor="picture">Picture</Label>
<Input id="picture" type="file" />
</div>
)
}
```

2. File Upload with Blue Color in Next.js Using Shadcn UI.
```jsx
import { Input } from "@/components/ui/input"
import { Label } from "@/components/ui/label"
export default function InputFile() {
return (
<div className="grid w-full lg:max-w-sm items-center gap-1.5">
<Label htmlFor="picture">Picture</Label>
<Input
id="picture"
type="file"
className="file:bg-blue-50 file:text-blue-700 hover:file:bg-blue-100"
/>
</div>
)
}
```

3. File Upload with File Border Color in Next.js Using Shadcn UI.
```jsx
import { Input } from "@/components/ui/input"
import { Label } from "@/components/ui/label"
export default function InputFile() {
return (
<div className="grid w-full lg:max-w-sm items-center gap-1.5">
<Label htmlFor="picture">Picture</Label>
<Input
id="picture"
type="file"
className="file:bg-blue-50 file:text-blue-700 hover:file:bg-blue-100 file:border file:border-solid file:border-blue-700 file:rounded-md border-blue-600"
/>
</div>
)
}
```
 | aaronnfs |
1,916,046 | Building a Hybrid Sign-Up/Subscribe Form with Stripe Elements | I had a user reach out to me on X asking if there was any way to integrate a Stripe credit card entry... | 0 | 2024-07-08T16:13:16 | https://clerk.com/blog/building-a-hybrid-sign-up-and-subscribe-form-with-stripe | clerk, stripe, nextjs, saas | I had a user reach out to me on X asking if there was any way to integrate a Stripe credit card entry field with Clerk's sign-up forms.
{% embed https://twitter.com/koslib/status/1788611782598131950 %}
Kostas is building a Chrome extension that uses AI to let users write responses to LinkedIn posts directly from their browser. To reduce the friction of users who want to sign up for the trial, he presented the following requirements:
1. It should be a single form that accepts an email address, tier selection, and credit card details.
1. The user should complete the sign-up using a one-time passcode sent to their email account.
1. Upon verifying their email, the user should automatically be signed up for a trial of the selected tier with no further interaction.
In this article, I'll walk through the process of building a completely custom sign-up form that matches the above requirements, starting with the end result.
## The final product
Before walking through how this solution was built, it's worth seeing it in action. The first phase of the signup process has the user entering the details outlined above.

Upon completing this form, the user receives an email from Clerk with their sign-up code. The form in the previous screenshot will automatically update to accept a verification code.

After the code is entered, the user is presented with a loading view, indicating that their account is being created in Clerk and the subscription is being registered in Stripe. Although the user experience appears seamless, the process happening behind the scenes is rather complex with a number of moving parts. Let's explore how this solution was built, starting with the front-end part.

## Constructing the form
We'll start by exploring the components of both Clerk and Stripe that are used to build the user-facing part of this flow.
### Custom flows
Clerk has a great set of predesigned components that developers can drop directly into their application to provide a great sign-up and sign-in experience for their users.
In this scenario, however, the default components are not flexible enough to embed a product selection and credit card form, so we'll need to use [Custom flows](https://clerk.com/docs/custom-flows/overview). Custom flows in Clerk allow you to build custom forms with your own logic to both register and sign in users, as well as customize the logic behind these actions to do whatever you need to for your application. Instead of using any of the components, we can instead build an HTML `<form>` with an `onSubmit` function to handle the submit process.
### Stripe Elements
[Stripe Elements](https://stripe.com/payments/elements) is a set of prebuilt components that can be used during the payment processing flow of your application.
One of these components is a credit card entry form that can generate a token for a given set of credit card details, allowing us to securely store a reference to the card and not the card details themselves. This token can be used later in the process to tie the card as a form of payment to the customer in Stripe. In order to use Elements in a Next.js application, the component that renders the form must be wrapped in the `<Elements>` component.
Because we're using a Custom flow, we can create a separate component that renders and handles the form logic and wrap it in `<Elements>` on the page, allowing us to combine the credit card entry form with our sign-in form.
### Unsafe metadata
Users in Clerk have a number of different [metadata](https://clerk.com/docs/users/metadata#user-metadata) categories that are used for different purposes:
- **Public metadata**: readable on the frontend, but writable only from the backend
- **Private metadata**: accessible only from the backend
- **Unsafe metadata**: readable and writable from the frontend; can also store pre-signup info about the user
Since unsafe metadata can be used to store information before the signup process is complete, we can take advantage of this to store information about the selected tier (a “product” in Stripe) and payment details provided in the custom form. When the user completes signup, the data stored locally in unsafe metadata will also be saved with the user on Clerk's systems.
### Exploring the form code
After walking through all the moving parts required to solve this on the front end, let's take a look at the code.
To start, we have the `page.tsx` file, which renders one of two forms based on whether the sign-up attempt is being verified. If `verifying` is true, it means that the user has submitted the required details and the application is just waiting for them to add the OTP code they received via email. Take note that `SignUpForm` is wrapped in the Stripe `<Elements>` node, which is required to use Elements.
```tsx
// src/app/sign-up/[[...sign-up]]/page.tsx
'use client'
import * as React from 'react'
import { useState } from 'react'
import SignUpForm from './SignUpForm'
import { loadStripe } from '@stripe/stripe-js'
import { Elements } from '@stripe/react-stripe-js'
import VerificationForm from './VerificationForm'
export default function Page() {
const [verifying, setVerifying] = useState(false)
const options = {
appearance: {
theme: 'stripe',
},
}
const stripePromise = loadStripe(process.env.NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY as string)
// 👉 Render the verification form, meaning the OTP email has been sent
if (verifying) {
return <VerificationForm />
}
// 👉 Render the signup form by default
return (
<div className="mt-20 flex items-center justify-center">
{/* @ts-ignore */}
<Elements options={options} stripe={stripePromise}>
<SignUpForm setVerifying={setVerifying} />
</Elements>
</div>
)
}
```
Next let's explore the `SignUpForm.tsx` which is the form that accepts an email address, product selection, and credit card information. This component accepts a single prop of `setVerifying` which is only to signal to the page that the form has been submitted and the `VerificationForm` component can be shown instead.
When the form is submitted, three main things happen:
1. The card info is tokenized.
1. Clerk is notified that a signup is being attempted using the provided email address. This is where unsafe metadata is set as well.
1. The `setVerifying` prop is set to true, indicating to the parent that the `VerificationForm` component can now be rendered.
```tsx
// src/app/sign-up/[[...sign-up]]/SignUpForm.tsx
'use client'
import Link from 'next/link'
import { Button } from '@/components/ui/button'
import {
Card,
CardContent,
CardDescription,
CardFooter,
CardHeader,
CardTitle,
} from '@/components/ui/card'
import { Input } from '@/components/ui/input'
import { Label } from '@/components/ui/label'
import { CardElement, useElements, useStripe } from '@stripe/react-stripe-js'
import { useSignUp } from '@clerk/nextjs'
import { useState } from 'react'
import { RadioGroup, RadioGroupItem } from '@/components/ui/radio-group'
type Props = {
setVerifying: (val: boolean) => void
}
function SignUpForm({ setVerifying }: Props) {
const { isLoaded, signUp } = useSignUp()
const stripe = useStripe()
const elements = useElements()
const [priceId, setPriceId] = useState('')
const [email, setEmail] = useState('')
// 👉 Handles the sign-up process, including storing the card token and price id into the users metadata
async function onSubmit(e: React.FormEvent) {
// 👉 Prevent the default form submission, which would reload the page
e.preventDefault()
if (!isLoaded && !signUp) return null
try {
if (!elements || !stripe) {
return
}
let cardToken = ''
const cardEl = elements?.getElement('card')
if (cardEl) {
const res = await stripe?.createToken(cardEl)
cardToken = res?.token?.id || ''
}
await signUp.create({
emailAddress: email,
unsafeMetadata: {
cardToken,
priceId,
},
})
// 👉 Start the verification - an email will be sent with an OTP code
await signUp.prepareEmailAddressVerification()
// 👉 Set verifying to true to display second form and capture the OTP code
setVerifying(true)
} catch (err) {
// 👉 Something went wrong...
}
}
return (
<form onSubmit={onSubmit}>
<Card className="w-full sm:w-96">
<CardHeader>
<CardTitle>Create your account</CardTitle>
<CardDescription>Welcome! Please fill in the details to get started.</CardDescription>
</CardHeader>
<CardContent className="grid gap-y-4">
{/* // 👉 Email input */}
<div>
<Label htmlFor="emailAddress">Email address</Label>
<Input
value={email}
onChange={(e) => setEmail(e.target.value)}
type="email"
id="emailAddress"
name="emailAddress"
required
/>
</div>
{/* // 👉 Product selection radio group */}
<div>
<Label>Select tier</Label>
<RadioGroup
defaultValue="option-one"
className="mt-2"
value={priceId}
onValueChange={(e) => setPriceId(e)}
>
<div className="flex items-center space-x-2">
<RadioGroupItem value="price_1PG1OcF35z7flJq7p803vcEP" id="option-one" />
<Label htmlFor="option-one">Pro</Label>
</div>
<div className="flex items-center space-x-2">
<RadioGroupItem value="price_1PG1UwF35z7flJq7vRUrnOiv" id="option-two" />
<Label htmlFor="option-two">Enterprise</Label>
</div>
</RadioGroup>
</div>
{/* // 👉 Use Stripe Elements to render the card capture form */}
<Label>Payment details</Label>
<div className="rounded border p-2">
<CardElement />
</div>
</CardContent>
<CardFooter>
<div className="grid w-full gap-y-4">
<Button type="submit" disabled={!isLoaded}>
Sign up for trial
</Button>
<Button variant="link" size="sm" asChild>
<Link href="/sign-in">Already have an account? Sign in</Link>
</Button>
</div>
</CardFooter>
</Card>
</form>
)
}
export default SignUpForm
```
Finally, we have the `VerificationForm.tsx` component, which simply accepts the code that was sent to the user's email address. The submit handler for this form sends the code to Clerk, where it is checked for validity. If valid, the user account will be created and the user will be redirected to `/after-sign-up`.
```tsx
// src/app/sign-up/[[...sign-up]]/VerificationForm.tsx
import * as React from 'react'
import { useSignUp } from '@clerk/nextjs'
import { useRouter } from 'next/navigation'
import { Button } from '@/components/ui/button'
import {
Card,
CardContent,
CardDescription,
CardFooter,
CardHeader,
CardTitle,
} from '@/components/ui/card'
import { Input } from '@/components/ui/input'
import { Label } from '@/components/ui/label'
import { useState } from 'react'
function VerificationForm() {
const { isLoaded, signUp, setActive } = useSignUp()
const [code, setCode] = useState('')
const router = useRouter()
// 👉 Handles the verification process once the user has entered the validation code from email
async function handleVerification(e: React.FormEvent) {
e.preventDefault()
if (!isLoaded && !signUp) return null
try {
// 👉 Use the code provided by the user and attempt verification
const signInAttempt = await signUp.attemptEmailAddressVerification({
code,
})
// 👉 If verification was completed, set the session to active
// and redirect the user
if (signInAttempt.status === 'complete') {
await setActive({ session: signInAttempt.createdSessionId })
router.push('/after-sign-up')
} else {
// 👉 If the status is not complete. User may need to complete further steps.
}
} catch (err) {
// 👉 Something went wrong...
}
}
return (
<div className="mt-20 flex items-center justify-center">
<form onSubmit={handleVerification}>
<Card className="w-full sm:w-96">
<CardHeader>
<CardTitle>Create your account</CardTitle>
<CardDescription>Welcome! Please fill in the details to get started.</CardDescription>
</CardHeader>
<CardContent className="grid gap-y-4">
<div>
<Label htmlFor="code">Enter your verification code</Label>
<Input
value={code}
onChange={(e) => setCode(e.target.value)}
id="code"
name="code"
required
/>
</div>
</CardContent>
<CardFooter>
<div className="grid w-full gap-y-4">
<Button type="submit" disabled={!isLoaded}>
Verify
</Button>
</div>
</CardFooter>
</Card>
</form>
</div>
)
}
export default VerificationForm
```
## Registering the subscription in Stripe
Now that we've covered everything the user sees, let's break down what happens behind the scenes to make sure the user is successfully registered for the trial of their chosen tier.
### Clerk webhooks
We'll need a reliable way to signal that a user has been created and something needs to be done about it, and that's where webhooks come in.
Webhooks are HTTP requests that are automatically dispatched to an API endpoint of your choosing when an event happens in Clerk. One of these can be triggered when a user is created, using the `user.created` event. The dispatched request also contains various details about the user that was created, including the unsafe metadata. By configuring a webhook handler in our application to accept the request, we can read in the selected product and payment info, and create the subscription using the Stripe SDK.

Using the `@brianmmdev/clerk-webhooks-handler` utility library, we can define functions that automatically validate the webhook signature and allow you to easily handle the payload, including pulling out the unsafe metadata that was set during the signup process.
```tsx
// src/app/api/clerkhooks/route.ts
import { createWebhooksHandler } from '@brianmmdev/clerk-webhooks-handler'
import { Stripe } from 'stripe'
import { clerkClient } from '@clerk/nextjs/server'
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY as string)
const handler = createWebhooksHandler({
onUserCreated: async (user) => {
// 👉 Parse the unsafe_metadata from the user payload
const { cardToken, priceId } = user.unsafe_metadata
if (!cardToken || !priceId) {
return
}
// 👉 Stripe operations will go here...
},
})
export const POST = handler.POST
```
### Creating the Stripe entities
Creating a subscription in Stripe requires three separate entities:
- Customer
- Payment method
- Subscription
Since the card info was tokenized and stored in the user's unsafe metadata along with the selected product, we can take advantage of the info sent to our application to create these three entities and tie them to each other. The first thing is to create a payment method based on the tokenized card info:
```tsx
// src/app/api/clerkhooks/route.ts
const pm = await stripe.paymentMethods.create({
type: 'card',
card: {
token: cardToken as string,
},
})
```
Next, we can use the captured email address to create a customer in Stripe and tie the payment method to them:
```tsx
// src/app/api/clerkhooks/route.ts
const customer = await stripe.customers.create({
email: user?.email_addresses[0].email_address,
payment_method: pm.id,
})
```
Finally, we can create the subscription entity, attach it to the customer, set the payment method, AND set a trial period:
```tsx
// src/app/api/clerkhooks/route.ts
const subscription = await stripe.subscriptions.create({
customer: customer.id,
default_payment_method: pm.id,
trial_period_days: 14,
items: [
{
price: priceId as string,
},
],
})
```
### Syncing subscription state
Although the frontend and backend flows occur separately, we need a way to signal to the front end that processing has been completed on the backend. To do this, we can use metadata again (public metadata in this case) to set data from the Stripe operations to indicate that the process has been completed:
```tsx
// src/app/api/clerkhooks/route.ts
await clerkClient.users.updateUser(user.id, {
publicMetadata: {
stripeCustomerId: customer.id,
stripeSubscriptionId: subscription.id,
},
})
```
On the front end, our redirect page actually just renders a loading indicator but also polls the user's info from Clerk to redirect them once that data is available. The following is the code that makes up the page for `/after-sign-up`, which is where the user was redirected after the OTP code was entered.
```tsx
// src/app/after-sign-up/page.tsx
'use client'
import { Icons } from '@/components/ui/icons'
import { useUser } from '@clerk/nextjs'
import { useRouter } from 'next/navigation'
import React, { useEffect } from 'react'
async function sleep(ms: number) {
return new Promise((resolve) => {
setTimeout(resolve, ms)
})
}
function AfterSignUp() {
const router = useRouter()
const { user } = useUser()
// 👉 Poll the user data until a stripeSubscriptionId is available
useEffect(() => {
async function init() {
while (!user?.publicMetadata?.stripeSubscriptionId) {
await sleep(2000)
await user?.reload()
}
// 👉 Once available, redirect to /dashboard
router.push('/dashboard')
}
init()
}, [])
return (
<div className="mt-20 flex items-center justify-center">
<Icons.spinner className="size-8 animate-spin" />
</div>
)
}
export default AfterSignUp
```
## Putting it all together
As you can see, quite a few moving parts work together to let this simple form do so much. To put everything into context, let's look at the entire flow step by step:

1. When the user submits the form, the card details are sent to Stripe to tokenize the card. That token, and the selected product, are stored as `unsafeMetadata`.
1. The app will signal to Clerk that a user is trying to sign up.
1. Clerk sends the user an OTP to their email.
1. The user enters the code into the application.
1. The app signals to Clerk that the user completed the signup and the account should be created.
1. The `user.created` webhook is triggered and the payload is sent to an API route in the application.
1. The webhook handler uses the Stripe SDK to create a payment method, customer, and subscription.
1. Once done, the user record is updated from the Next.js app and the user is allowed to proceed.
## Conclusion
Custom flows in Clerk open a world of opportunities, allowing you to create your own forms to handle sign-up and sign-in. By taking advantage of webhooks and using the various types of metadata, you can also build in complex and advanced automation, while creating a seamless experience for your users.
If you enjoyed this, share it on X and let us know by tagging [@clerkdev](https://x.com/clerkdev)! | brianmmdev |
1,916,047 | Gaining a Competitive Edge: The Vital Role of Competitive Market Intelligence | In today's rapidly changing business environment, surpassing the competition across various metrics... | 0 | 2024-07-08T16:13:14 | https://dev.to/linda0609/gaining-a-competitive-edge-the-vital-role-of-competitive-market-intelligence-46ch | strtegyconsultingservices, competitivemarketintelligence | In today's rapidly changing business environment, surpassing the competition across various metrics is essential for long-term success. Consequently, [competitive market intelligence (CMI)](https://www.sganalytics.com/market-research/market-intelligence/) has become increasingly vital. CMI involves gathering, analyzing, and visualizing information about industry rivals, estimating market trends, and identifying disruptive industry developments. This comprehensive process aids leaders in optimizing growth and resilience strategies. This article delves into the benefits and importance of CMI, showcasing how it can guide [strategic decision-making](https://www.sganalytics.com/market-research/strategy-consulting-services/) and provide a significant competitive advantage.
**Benefits of Competitive Market Intelligence**
1. **Data-Backed Decisions for Competitive Advantage**
An informed business development strategy enables brands to explore methods for increasing market share. By leveraging CMI, organizations can make more effective decisions based on reliable data rather than assumptions. CMI can uncover new ideas to enhance product conception, personalize marketing efforts, and develop competitive resilience strategies. For instance, monitoring competitor actions and market dynamics allows corporations to anticipate risks and challenges. This foresight enables leaders to devise preventative and mitigative measures to minimize potential revenue losses and reduce consumer churn.
Understanding competitors’ pricing strategies, product launches, and promotional activities can help businesses adjust their strategies accordingly. CMI also provides insights into the effectiveness of competitors’ campaigns, allowing companies to learn from their successes and failures. This knowledge can be crucial in refining one’s own marketing and sales tactics.
2. **Identifying Relevant and Unmet Demands**
CMI tools with trend analytics assist global businesses in identifying unique trends and opportunities within their target markets. These tools facilitate the development of innovative products and services that meet novel consumer needs. Gap and pain point identification help businesses understand where competitors fall short in fulfilling consumer requests. By targeting underserved customer segments and excelling at satisfying their unmet needs, companies can attract a broader audience. Over time, other consumers may also switch to these superior offerings.
For example, if CMI reveals a growing demand for eco-friendly products that competitors have yet to address, a company can innovate and launch environmentally friendly alternatives. This proactive approach not only attracts environmentally conscious consumers but also positions the company as a leader in sustainability.
3. **Performance Benchmarking**
Performance metrics in CMI enable companies to compare themselves with rivals through benchmarking methods for long-term analysis. Comparative studies based on key performance indicators (KPIs) such as sales numbers, market share percentage, and customer satisfaction (CSAT) scores provide valuable insights. Performance benchmarking reveals competitors’ best practices for addressing business challenges or attracting new audiences. Organizations can adopt and optimize these practices to improve their core operations and strategies.
Benchmarking also helps in setting realistic and achievable goals. By understanding industry standards and competitor performance, businesses can set targets that are ambitious yet attainable. This practice fosters a culture of continuous improvement and drives overall business growth.
4. **Enhancing Marketing, Sales, and Innovation**
Targeted advertisement campaigns powered by CMI insights can outshine competitors’ marketing strategies. Utilizing related toolkits and reports helps increase sales, retain consumers, and customize offers. Additionally, examining competitors’ products and services highlights areas where improvements are necessary to surpass their strengths and mitigate weaknesses. Responsible businesses prioritize enhancing existing products or developing new ones that offer superior value only after evaluating their impact on competitiveness. CMI systems pinpoint areas where competitors are aggressively investing, enabling teams to revise resource allocation and project scope to stay ahead in intellectual property (IP) and patent registration.
CMI also helps in identifying the most effective channels and messages for marketing campaigns. By analyzing competitors’ marketing efforts, businesses can discover which platforms and content types resonate most with their shared target audience. This information can guide the creation of more effective marketing strategies that drive higher engagement and conversion rates.
**The Importance of Competitive Market Intelligence**
In the twenty-first century, maintaining a competitive edge in a hyper-competitive market is a data-driven endeavor. CMI facilitates resilience strategy creation, helping organizations anticipate and react to competitors’ tactics, thereby reducing potential losses. To secure a competitive edge and maintain long-term industry rankings, leveraging CMI techniques to monitor the competitive landscape is nonnegotiable.
Achieving a sustainable competitive advantage requires adaptable strategies due to the frequent emergence of industry-disrupting regulations and technologies. Performance monitoring powered by benchmarks is crucial, making CMI integral to flexible and agile business management approaches. In essence, CMI supports dynamic strategy formulation that can adjust to changing market conditions and competitive pressures.
Furthermore, CMI plays a critical role in risk management. By continuously monitoring the competitive environment, businesses can identify potential threats early and develop contingency plans. This proactive approach minimizes the impact of adverse events and enhances organizational resilience.
**Conclusion**
Organizations seeking strategic growth and risk forecasting must adopt effective intelligence-gathering methods to outpace business rivals. Given its numerous benefits and critical importance, competitive market intelligence is a modern tool that significantly enhances corporate resilience. CMI aids in branding, marketing, customer base comparison, gap analysis, and innovative product development. Consequently, global companies are increasingly collaborating with CMI experts to strengthen their competitive strategies and improve their industry positioning.
By utilizing high-quality data, businesses can enhance resource allocation, expand their audiences, and retain consumers more effectively. This comprehensive approach not only helps companies surpass competitors but also positions them as industry leaders. In summary, competitive market intelligence is indispensable for any organization aiming to thrive in today’s complex and rapidly evolving business landscape.
In conclusion, CMI is not just a tool but a strategic necessity in modern business operations. It empowers organizations to make informed decisions, identify new opportunities, benchmark performance, and enhance marketing and innovation efforts. By continuously monitoring the competitive landscape, businesses can stay ahead of industry trends and competitors, ensuring long-term success and sustainability. | linda0609 |
1,916,048 | Adding Tailwind CSS to Django | The quickest way to get started with Tailwind in CSS is by using Django tailwind. We'll walk you... | 0 | 2024-07-09T03:22:58 | https://dev.to/paul_freeman/adding-tailwind-css-to-django-14a | django, tailwindcss | The quickest way to get started with Tailwind in CSS is by using
[Django tailwind](https://github.com/timonweb/django-tailwind).
We'll walk you through the Django-tailwind setup.
First, you need to install the django-tailwind package. You can do this using pip:
```
pip install django-tailwind
```
Then add `'tailwind'` to `INSTALLED_APPS` in `settings.py`:
```py
INSTALLED_APPS = [
...
'tailwind',
...
]
```
Now create a Tailwind app:
```
python manage.py tailwind init <tailwind_app_name>
```
Now add that app to INSTALLED_APPS
```py
INSTALLED_APPS = [
# other Django apps
'tailwind',
'tailwind_app_name'
]
```
Now go ahead and register the app by adding the following in `settings.py`
```py
TAILWIND_APP_NAME='<tailwind_app_name>'
```
By default, Django Tailwind includes a `base.html` file. If you are not using that `base.html`, you can delete it and add the following to your custom `base.html`:
```html
{% load static tailwind_tags %}
...
<head>
...
{% tailwind_css %} <!-- this adds the css stylesheet -->
...
</head>
```
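With the stylesheet wired into your base template, Tailwind utility classes can be used directly in any template that extends it. A minimal sketch — the template name and classes below are just illustrative:

```html
{% extends "base.html" %}

{% block content %}
<!-- Tailwind utility classes get compiled into the generated stylesheet -->
<div class="mx-auto max-w-md p-6 rounded-lg shadow">
  <h1 class="text-2xl font-bold text-gray-800">Hello, Tailwind!</h1>
  <p class="mt-2 text-gray-600">Styled without writing any custom CSS.</p>
</div>
{% endblock %}
```

This assumes your base template defines a `content` block; adjust the block name to match your own layout.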
Now that that's done, let's start Tailwind in development mode with the following command:
```
python manage.py tailwind start
```
For a production build, use:
```
python manage.py tailwind build
```
That's it! You can now start using Tailwind in Django. For advanced usage, check out the [docs](https://django-tailwind.readthedocs.io/en/latest/installation.html). | paul_freeman |
1,916,049 | Day 8 of 90 DevOps Project: Creating a Private Kubernetes Cluster on AWS EKS with Public Jump Server Access | Hey Everyone, Welcome Back! I'm excited to share my latest project, part of my 90-day DevOps... | 0 | 2024-07-08T16:31:20 | https://dev.to/arbythecoder/day-8-of-90-devops-project-creating-a-private-kubernetes-cluster-on-aws-eks-with-public-jump-server-access-57pf | devops, kubernetes, aws, beginners | #
Hey Everyone, Welcome Back!
I'm excited to share my latest project, part of my 90-day DevOps journey. I know there's been a delay in delivering this article, and I want to be transparent about the reasons. AWS charges led me to close my previous account and open a new one after my free tier expired. Unfortunately, I couldn't afford the services at that time, which caused some setbacks.
Despite these challenges, I'm proud to present the eighth project in my series: creating a private Kubernetes cluster on AWS EKS with public jump server access and IAM role configuration. In this article, I'll walk you through the process, highlighting the obstacles I faced and the solutions I implemented. My goal is to help others avoid similar issues and provide a clear path to achieving this setup. Additionally, I've learned the importance of closely monitoring the free tier usage to avoid unexpected costs.
By the end of this guide, you'll be able to:
- Set up a private EKS cluster on AWS
- Configure a public jump server for secure access
- Implement IAM roles for secure cluster management
Let's dive in!
## Step 1: Create a VPC
### 1.1 Create a VPC
**VPC (Virtual Private Cloud)**: Your isolated network within AWS.
**Steps**:
1. Open the VPC Dashboard.
2. Click on "Create VPC".
3. Choose "VPC with Public and Private Subnets".
4. Configure the CIDR block, subnets, and other settings as needed.
5. Click "Create VPC".
### 1.2 Create Subnets
**Public Subnet**: For the jump server.
**Private Subnet**: For the EKS cluster nodes.
**Steps**:
1. Go to Subnets in the VPC Dashboard.
2. Click "Create subnet".
3. Select your VPC and configure subnets for both public and private.
### 1.3 Configure Route Tables
**Routing**: Ensures proper traffic flow between subnets.
**Steps**:
1. Go to Route Tables.
2. Create a route table for public subnets and associate an internet gateway.
3. Create a route table for private subnets with appropriate routes.

## Step 2: Create Security Groups
### 2.1 Create Security Groups for EKS Nodes
**Security Groups**: Act as virtual firewalls.
**Steps**:
1. Open the EC2 Dashboard.
2. Navigate to Security Groups.
3. Click "Create security group".
4. Define inbound rules for necessary ports (port 22 for SSH, Kubernetes API, etc.).

### 2.2 Create Security Group for Jump Server
**Steps**:
1. Follow the same steps as above to create a security group for the jump server.
2. Allow inbound SSH access from your IP address.
## Step 3: Create an EKS Cluster
### 3.1 Create IAM Roles
**IAM Roles**: Grant permissions to EKS nodes.
**Steps**:
1. Open the IAM Dashboard.
2. Create a role with the AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, and AmazonEC2ContainerRegistryReadOnly policies.

### 3.2 Create the EKS Cluster
**EKS Cluster**: The core of your Kubernetes environment.
**Important Note**: When creating your EKS cluster, select Kubernetes version 1.25. This version's extended support ends in May 2025, ensuring you have ample time for updates and maintenance without immediate upgrade concerns.
**Steps**:
1. Open the EKS Dashboard.
2. Click "Create cluster".
3. Configure the cluster name, Kubernetes version (select 1.25), and VPC settings.
4. Create the cluster and node group using the IAM roles configured.

## Step 4: Deploy the Jump Server
### 4.1 Launch an EC2 Instance
**Jump Server**: An EC2 instance in the public subnet.
**Steps**:
1. Go to the EC2 Dashboard.
2. Launch an instance and select a suitable Amazon Machine Image (AMI).
3. Choose an instance type and configure it to be in the public subnet.
4. Assign the security group created for the jump server.
5. Launch the instance.
### 4.2 Configure SSH Access
**Steps**:
1. Obtain the public DNS of the instance.
2. SSH into your jump server using the key pair created during instance launch.

## Step 5: Configure Access to EKS Cluster
### 5.1 Install kubectl on the Jump Server
**kubectl**: The Kubernetes command-line tool.
**Steps**:
1. SSH into your jump server.
2. Follow the [official documentation](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) to install kubectl.
### 5.2 Configure kubectl for EKS
**Steps**:
1. Update your kubeconfig file to point to your EKS cluster:
```bash
aws eks --region <your-region> update-kubeconfig --name <your-cluster-name>
```
2. Test the configuration:
```bash
kubectl get svc
```
## Step 6: Secure Access with IAM Roles
### 6.1 Create IAM Role for Jump Server
**Steps**:
1. Create a role with the necessary permissions to access EKS.
2. Attach the role to the EC2 instance (jump server).
### 6.2 Verify IAM Role Configuration
**Steps**:
1. SSH into the jump server.
2. Ensure the IAM role has the correct permissions by running a test command:
```bash
aws sts get-caller-identity
```
## Conclusion
You now have a private Kubernetes cluster on AWS EKS that can only be accessed through a public jump server. This setup ensures a secure and controlled environment, leveraging IAM roles for authentication and authorization.
### Resources
- [AWS EKS Documentation](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html)
- [Kubernetes Documentation](https://kubernetes.io/docs/home/)
Feel free to reach out if you have any questions or need further assistance. Happy coding! | arbythecoder |
1,916,050 | 40 Days Of Kubernetes (13/40) | Day 13/40 Static Pods, Manual Scheduling, Labels, and Selectors in... | 0 | 2024-07-11T16:59:12 | https://dev.to/sina14/40-days-of-kubernetes-1340-45gf | kubernetes, 40daysofkubernetes | ## Day 13/40
# Static Pods, Manual Scheduling, Labels, and Selectors in Kubernetes
[Video Link](https://www.youtube.com/watch?v=6eGf7_VSbrQ)
@piyushsachdeva
[Git Repository](https://github.com/piyushsachdeva/CKA-2024/)
[My Git Repo](https://github.com/sina14/40daysofkubernetes)
In this part, `node` selectors, `labels` and `selectors`, static `pods`, and manual `scheduling` will be covered.
There's a component named the `scheduler` that decides which `workload` runs on which `node`.
```console
root@localhost:~# kubectl get po -n kube-system | grep scheduler
kube-scheduler-lucky-luke-control-plane 1/1 Running 1 (6d10h ago) 7d
```
---
In `Kubernetes`, a `static pod` is a concept wherein you can deploy a `pod` that is not managed by the `API-server`.
`Static pods` are directly managed by the `Kubelet` component. The `Kubelet` service is deployed with the configuration path where we can add the pod manifest for the `Kubelet` to deploy.[source](https://devopscube.com/create-static-pod-kubernetes/), [read more](https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/)
**Note** we provisioned our `cluster` with `kind`, which is actually `kubernetes` in `docker`. That means every `node` is a `container`, so we can get a shell on a node with the `docker exec` command.
```console
root@localhost:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c0266722d131 kindest/node:v1.30.0 "/usr/local/bin/entr…" 7 days ago Up 6 days lucky-luke-worker
f791fa85c269 kindest/node:v1.30.0 "/usr/local/bin/entr…" 7 days ago Up 6 days 0.0.0.0:30001->30001/tcp, 127.0.0.1:39283->6443/tcp lucky-luke-control-plane
9c2d43b4f977 kindest/node:v1.30.0 "/usr/local/bin/entr…" 7 days ago Up 6 days lucky-luke-worker2
c9d85c72c573 weejewel/wg-easy "docker-entrypoint.s…" 5 months ago Up 6 days 0.0.0.0:51820->51820/udp, :::51820->51820/udp, 0.0.0.0:51821->51821/tcp, :::51821->51821/tcp wg-easy
```
---
#### 1. Static pods and manual scheduling:
- Run bash on control-plane node
```console
root@localhost:~# docker exec -it lucky-luke-control-plane bash
root@lucky-luke-control-plane:/# pwd
/
root@lucky-luke-control-plane:/# ps ef | grep kubelet
94749 pts/1 S+ 0:00 \_ grep kubelet HOSTNAME=lucky-luke-control-plane PWD=/ container=docker HOME=/root TERM=xterm NO_PROXY= SHLVL=1 HTTPS_PROXY= HTTP_PROXY= KUBECONFIG=/etc/kubernetes/admin.conf PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin _=/usr/bin/grep
```
- Directory of manifests
The `kubelet` monitors this directory: as soon as you add a manifest here, the `pod` is created, and as soon as you remove a file from the directory, the corresponding `container` is removed from the `cluster`.
```console
root@lucky-luke-control-plane:/# cd /etc/kubernetes/manifests/
root@lucky-luke-control-plane:/etc/kubernetes/manifests# ls -lh
total 16K
-rw------- 1 root root 2.4K Jul 1 16:16 etcd.yaml
-rw------- 1 root root 3.9K Jul 1 16:16 kube-apiserver.yaml
-rw------- 1 root root 3.4K Jul 1 16:16 kube-controller-manager.yaml
-rw------- 1 root root 1.5K Jul 1 16:16 kube-scheduler.yaml
```
- Manual scheduling
The `scheduler` checks whether `nodeName` is specified in the manifest (`yaml` file) of a `workload`; if it is not, the scheduler picks a `node` and provisions the workload there. If `nodeName` is already specified, the scheduler is not responsible for scheduling it.
```console
root@lucky-luke-control-plane:~# kubectl run nginx --image=nginx -o yaml > nginx-pod.yaml
```
Define `nodeName`
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2024-07-08T16:55:26Z"
  labels:
    run: nginx
  name: nginx
  namespace: default
  resourceVersion: "915219"
  uid: 17ebe9ec-78db-4886-95ba-3882dd141e5f
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
  nodeName: lucky-luke-worker
```
First, we need to delete the `pod` that was already created (without `--dry-run=client`, the `kubectl run` above actually created the pod as well as writing the manifest):
```console
root@localhost:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 9m45s
nginx-ds-946m4 1/1 Running 0 4d
nginx-ds-rslrm 1/1 Running 0 4d
root@localhost:~# kubectl delete pods/nginx
pod "nginx" deleted
```
Then run the `pod`:
```console
root@lucky-luke-control-plane:~# kubectl apply -f nginx-pod.yaml
pod/nginx created
root@lucky-luke-control-plane:~# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 70s 10.244.1.13 lucky-luke-worker <none> <none>
nginx-ds-946m4 1/1 Running 0 4d 10.244.2.10 lucky-luke-worker2 <none> <none>
nginx-ds-rslrm 1/1 Running 0 4d1h 10.244.1.12 lucky-luke-worker <none> <none>
```
---
#### 2. Labels and Selectors
Labels are metadata that help with filtering resources. We also have labels in the `spec` for `pods`, and a `selector` section that matches the labels of pods with deployments.
```console
root@lucky-luke-control-plane:~# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx 1/1 Running 0 11m run=nginx
nginx-ds-946m4 1/1 Running 0 4d1h controller-revision-hash=76c9ffb96,env=demo,pod-template-generation=1
nginx-ds-rslrm 1/1 Running 0 4d1h controller-revision-hash=76c9ffb96,env=demo,pod-template-generation=1
root@lucky-luke-control-plane:~# kubectl get pods --selector run=nginx
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 11m
```
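Conceptually, a selector is just key/value matching against a resource's label map: a resource matches when every pair in the selector is present in its labels. This is not how kubectl is implemented internally, just a minimal Python illustration of the matching rule:

```python
def matches(labels: dict, selector: dict) -> bool:
    # A pod matches when every key/value pair in the selector
    # is present in the pod's labels (extra labels are fine).
    return all(labels.get(k) == v for k, v in selector.items())

# Label maps taken from the `kubectl get pods --show-labels` output above.
pods = {
    "nginx": {"run": "nginx"},
    "nginx-ds-946m4": {"env": "demo", "pod-template-generation": "1"},
}

selected = [name for name, labels in pods.items()
            if matches(labels, {"run": "nginx"})]
print(selected)  # ['nginx']
```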
We can also see an additional piece of metadata, `annotations`, on a `workload`; annotations are similar to labels and store additional details and information related to that object.
```console
root@localhost:~# kubectl edit pod nginx
```
```yaml
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"creationTimestamp":"2024-07-08T16:55:26Z","labels":{"run":"nginx"},"name":"nginx","namespace":"default","resourceVersion":"915219","uid":"17ebe9ec-78db-4886-95ba-3882dd141e5f"},"spec":{"containers":[{"image":"nginx","imagePullPolicy":"Always","name":"nginx"}],"nodeName":"lucky-luke-worker"}}
  creationTimestamp: "2024-07-08T17:09:50Z"
  labels:
    run: nginx
...
```
| sina14 |
1,916,076 | Generic DB Manager in .net C# | Simplify Your .NET Project with DbManager.EFCore Git Hub : dbmanager DbManager is a... | 0 | 2024-07-08T16:22:25 | https://dev.to/internet_traffic_6ced875e/generic-db-manager-in-net-c-18go | netcore, efcore, csharp | ## **Simplify Your .NET Project with DbManager.EFCore**
Git Hub : [dbmanager](https://github.com/ony19161/dbmanager)
DbManager is a powerful package that simplifies the integration of CRUD (Create, Read, Update, Delete) operations in your .NET projects. It currently supports two database systems, MS SQL Server and MySQL, and leverages the functionalities of Entity Framework Core.
Integrating DbManager into your project is as easy as 1-2-3-4-5:
**Step 1: Add Project Reference**
Begin by adding DbManager to your solution as a project reference.
.NET CLI:
```
dotnet add package DbManager.EFCore
```
Package Manager:
```
Install-Package DbManager.EFCore
```
**Step 2: Register Database Context**
In your Program.cs file, register the AppDbContext class of DbManager library:
```
var builder = WebApplication.CreateBuilder(args);
// Use DbManager -> AppDbContext to register DbContext for your project.
builder.Services.AddDbContext<AppDbContext>();
var app = builder.Build();
```
**Step 3: Configure Connection Strings**
In your appsettings.json file, configure the connection strings based on your database provider and entities project:
For MS SQL Server:
```
{
  "DatabaseProvider": "SqlServer",
  "EntitiesAssemblyName": "Demo.Db", // Assembly name where you have your Entity classes
  "ConnectionStrings": {
    "DefaultConnection": "your_sql_server_connection_string_here"
  }
}
```
For MySQL:
```
{
  "DatabaseProvider": "MySql",
  "EntitiesAssemblyName": "Demo.Db",
  "ConnectionStrings": {
    "DefaultConnection": "your_mysql_connection_string_here"
  }
}
```
**Step 4: Add entity class/classes**
```
[Table("Students")]
public class Student
{
    [Key]
    public int Id { get; set; }
    public string Name { get; set; }
    public int RollNo { get; set; }
    public string Section { get; set; }
    public string BirthDate { get; set; }
    public string BloodGroup { get; set; }
}
```
You must mark your entity classes with the `[Table]` annotation; otherwise, DbManager will not be able to include them in the DbContext.
**Step 5: Inject DbManager into Your Classes**
Inject the AppDbContext class into your desired Controller, Business, or Repository class. For example, in a StudentRepository class:
```
public class StudentRepository : BaseRepository<Student>, IStudentRepository
{
    // Injected DbManager -> AppDbContext into StudentRepository
    public StudentRepository(AppDbContext context) : base(context)
    {
    }
}

public interface IStudentRepository : IRepository<Student>
{
}
```
In the above code, `BaseRepository` comes from the DbManager library, and `IStudentRepository` must inherit from DbManager's `IRepository`.
**Utilize Pre-defined CRUD Operations**
With DbManager integrated into your project, you can take advantage of all the CRUD methods already defined in the BaseRepository class.
**Here is a quick overview:**
- FindAsync : Find a single entity based on a provided predicate.
- FetchListBySPAsync: Fetch a list of entities using a stored procedure and parameters.
- GetAllAsync: Get all entity objects.
- GetByIdAsync: Retrieve an entity object based on its ID.
- InsertAsync: Insert a new entity object into the database.
- UpdateAsync: Update an existing entity object.
- DeleteAsync : Delete a single entity object.
Remember, all methods are asynchronous, so ensure you use async/await properly while calling them.
Congratulations! You've successfully configured DbManager for your project!
For more detailed information and usage examples, refer to our documentation.
Note: Make sure to add a project reference to DbManager in your solution to access the BaseRepository class and other utilities. For any queries or issues, please reach out to our support team.
Happy Coding with DbManager! | internet_traffic_6ced875e |
1,916,077 | A Personalized News Summary AI Tool with me. | Have you wished to read like top 5 latest articles all at once on a single page and in just a few... | 0 | 2024-07-11T22:56:30 | https://dev.to/sababu_/a-personalized-news-summary-ai-tool-with-me-79g | Have you wished to read like top 5 latest articles all at once on a single page and in just a few minutes instead of turning pages 🤔? Nowadays, everything is fast, there are many things to read, from newsletters we subscribed to, flooding into our email inbox every morning to articles we come across shared with us or popping up on our social media timelines, it could be challenging to cover them all.
I recently pondered how I could stay updated with the news from home 🇷🇼, while also exploring new interests and learning. Many reasons can serve as excuses for not finding time to read the news (I honestly don't know mine 🤭). One of the popular online newspapers for Rwandan news in English is [the New Times](https://www.newtimes.co.rw/).
With artificial intelligence (AI) booming in every corner of our lives, it has seemed (from my view) to be facilitating us rather than taking over our jobs (now we are less worried, even though **Devin** is there). That's why I have been interested lately in software engineering with AI/ML. A plethora of models are being released daily, and platforms like HuggingFace have offered free access to a vast number of models, datasets, demo apps, and AI/ML researchers.
I wanted to read the latest articles, but in brief, summarized. I used BART-large-CNN, one of the machine learning (ML) models, for summarization.
```
from transformers import pipeline
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
ARTICLE = 'put-all-your-contents-here'
summarizer(ARTICLE, max_length=130, min_length=30, do_sample=False)
```
you can explore more [here](https://huggingface.co/facebook/bart-large-cnn).
The model mentioned above was used to summarize the 5 latest articles, extracted by web scraping.
> _A web scraper automates the process of extracting information from other websites, quickly and accurately_
One of the libraries that eases this job is BeautifulSoup4, which parses HTML and XML into text; each article is then summarized in at most 130 tokens (the `max_length` in the code above).
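As a rough sketch of the scraping step (the site's actual markup will differ; the tags and class names below are hypothetical), extracting article titles with BeautifulSoup looks like:

```python
from bs4 import BeautifulSoup

# Hypothetical markup; the real site's tags and classes will differ.
html = """
<div class="articles">
  <h2 class="title">First headline</h2>
  <h2 class="title">Second headline</h2>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
titles = [h.get_text(strip=True) for h in soup.find_all("h2", class_="title")]
print(titles)  # ['First headline', 'Second headline']
```

In the real project the `html` string would come from an HTTP request to the newspaper's page, and the article body text would be fed to the summarizer shown earlier.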
[Check how it returns the articles with Postman]

To integrate everything, I built an API with Django on the backend (I wanted to keep it in Python) and React.js for the readers (front end).
Additionally, the reader can also opt to receive these summarised articles via their email at the scheduled time using the SMTP application layer protocol.
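Under the hood, sending the summaries by email boils down to building a MIME message and handing it to an SMTP server. A minimal sketch (the addresses and server names are placeholders, and the actual send is commented out so the snippet stays self-contained):

```python
import smtplib
from email.mime.text import MIMEText

summaries = "1. First article summary...\n2. Second article summary..."

msg = MIMEText(summaries)
msg["Subject"] = "Your daily news summaries"
msg["From"] = "bot@example.com"    # placeholder sender
msg["To"] = "reader@example.com"   # placeholder recipient

# With real credentials, sending would look like:
# with smtplib.SMTP("smtp.example.com", 587) as server:
#     server.starttls()
#     server.login(user, password)
#     server.send_message(msg)
print(msg["Subject"])  # Your daily news summaries
```

A scheduler (cron, Celery beat, etc.) can then run this at the reader's chosen time.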
[ check how they are sent in email]


In conclusion, many models can accomplish similar tasks, as can tools like ChatGPT or Claude. This project was an exploration of what different tools combined could do; thanks to AI/ML for easing the job. Numerous tools are being developed and released to assist us in our work, from generating graphics to composing audio to enhancing the articles we write. It is up to us to know how to integrate these tools into our projects and get the best out of them.
NB. Retrieving all the articles and summarizing them at once takes some seconds, a lot still needs to be improved 😉, remember, it was for fun. Here is the [link](https://github.com/bsababu/summarized-news) to the project.
| sababu_ | |
1,916,079 | Unlocking Innovation: The Future of Low-Code Development | Introduction Low-code development platforms have revolutionized the way applications are built,... | 0 | 2024-07-08T16:23:47 | https://dev.to/engkerollosadel/unlocking-innovation-the-future-of-low-code-development-48o | **Introduction**
Low-code development platforms have revolutionized the way applications are built, enabling developers to create software with minimal hand-coding. This evolution has opened up new opportunities for businesses to innovate quickly and efficiently. As we move forward, the next generation of low-code capabilities promises even greater advancements, enhancing productivity, reducing costs, and democratizing application development.
**The Current State of Low-Code Development**
- Definition and Purpose: Low-code platforms are designed to reduce the complexity of coding by providing visual development tools and pre-built modules.
- Market Growth: The low-code development market is experiencing rapid growth, driven by the need for faster application delivery and the shortage of skilled developers.
- Use Cases: Businesses are leveraging low-code platforms for a wide range of applications, from simple process automation to complex enterprise solutions.
**Emerging Capabilities in Low-Code Development**
- AI-Driven Development: Integration of artificial intelligence to suggest code, automate tasks, and optimize performance.
- Enhanced Collaboration Tools: Features that facilitate better communication and teamwork among developers, designers, and business stakeholders.
- Greater Integration: Improved ability to integrate with various data sources, legacy systems, and third-party services.
- Advanced Customization: More flexibility for developers to customize and extend applications beyond the default capabilities of the platform.
**Benefits of the Next-Gen Low-Code Platforms**
1. Increased Speed: Accelerated development cycles, allowing for rapid prototyping and deployment.
2. Cost Efficiency: Reduced need for extensive coding and testing, lowering overall development costs.
3. Accessibility: Empowering non-developers to create applications, thus reducing the dependency on highly skilled IT professionals.
4. Scalability: Platforms that can handle larger, more complex applications and scale seamlessly with business growth.
**Challenges and Considerations**
1. Security Concerns: Ensuring that low-code applications adhere to robust security standards.
2. Quality Assurance: Maintaining high-quality standards in applications developed with minimal coding.
3. Skill Gap: Training developers and non-developers to effectively use low-code platforms.
**Conclusion**
The next generation of low-code capabilities is set to transform the landscape of application development. By embracing these advancements, businesses can innovate faster, reduce costs, and stay competitive in an ever-evolving digital world. | engkerollosadel | |
1,916,080 | Erecept | Wegovy is an effective drug that has gained considerable popularity in the treatment of obesity and overweight. FlexTouch is... | 0 | 2024-07-08T16:24:12 | https://dev.to/receptax07/erecept-51h | Wegovy is an effective drug that has gained considerable popularity in the treatment of obesity and overweight. FlexTouch is an advanced injection pen for Wegovy, developed by Novo Nordisk, that allows precise administration of the drug. In this article we will discuss how Wegovy FlexTouch works, its uses, dosing, side effects, contraindications, the price of the drug, and where it can be bought.
How Wegovy FlexTouch works
Wegovy, like Ozempic, contains the active substance semaglutide, which belongs to the group of GLP-1 (glucagon-like peptide-1) receptor agonists. The semaglutide in Wegovy works in several ways to support weight loss:
Increased satiety: Wegovy acts on the receptors in the brain responsible for appetite control, which helps reduce hunger and increase the feeling of fullness after meals.
Delayed gastric emptying: The drug slows down the emptying of the stomach, so food stays in the stomach longer, which also contributes to a longer-lasting feeling of fullness.
Reduced appetite: Acting on the central nervous system, Wegovy reduces appetite, which helps lower calorie intake.
**_[Erecept](https://receptax.pl/recepta-online/)_**
Uses of Wegovy FlexTouch
Wegovy FlexTouch is used primarily to treat obesity in adults who have:
A body mass index (BMI) of 30 or more.
A BMI of 27 or more with at least one weight-related comorbidity, such as hypertension, type 2 diabetes, dyslipidemia, or sleep apnea.
The drug is part of an integrated weight-management approach that includes a low-calorie diet and increased physical activity.
Wegovy FlexTouch dosing
The dosing of Wegovy FlexTouch varies depending on the stage of treatment and the patient's tolerance of the drug.
The standard dosing schedule for Wegovy is as follows:
Starting dose: 0.25 mg administered once a week for four weeks.
Gradual dose escalation: After the initial period, the dose is gradually increased to 0.5 mg per week for the next four weeks.
Target dose: After further gradual escalation, the target dose is 2.4 mg once a week.
Increasing the dose in this way helps reduce the risk of adverse effects and allows the body to adjust to the drug.
Wegovy side effects
Like any drug, Wegovy can cause side effects.
The most commonly reported side effects of Wegovy are:
Gastrointestinal problems: nausea, vomiting, diarrhea, constipation.
Headaches: some people experience headaches, especially at the start of treatment.
Dizziness: dizziness may occur, especially when changing body position.
Injection-site reactions: such as redness, swelling, or pain at the injection site.
Less commonly, more serious side effects can occur, such as:
Acute pancreatitis: presenting as severe abdominal pain, nausea, and vomiting.
Kidney problems: worsening of existing kidney problems or their new onset.
Allergic reactions: including rashes, itching, and swelling of the face, lips, tongue, or throat.
If serious side effects occur, contact a doctor immediately.
Contraindications to Wegovy FlexTouch
Not everyone can use Wegovy FlexTouch. There are several contraindications to its use:
Hypersensitivity to semaglutide: people allergic to the drug's active substance should not use it.
History of pancreatitis: people who have had pancreatitis in the past should avoid Wegovy.
Thyroid problems: especially medullary thyroid carcinoma or MEN 2 syndrome (multiple endocrine neoplasia type 2).
Pregnancy and breastfeeding: Wegovy is not recommended for pregnant or breastfeeding women.
Where to buy Wegovy FlexTouch?
Wegovy FlexTouch can be purchased in pharmacies, both brick-and-mortar and online. To get a prescription for Wegovy, make an appointment with a doctor, who will assess whether the drug is suitable for you.
Here are a few steps for buying Wegovy:
Medical consultation: Make an appointment with a general practitioner or a specialist, who will prescribe Wegovy if they consider it appropriate.
Online prescription: The prescription you receive can be filled at a brick-and-mortar pharmacy or sent to an online pharmacy.
Buying at a brick-and-mortar pharmacy: Make sure the pharmacy has the drug in stock. You can call ahead and ask about Wegovy's availability.
Buying at an online pharmacy: Many online pharmacies operate in Poland and offer Wegovy. When buying online, you need to send a scan or photo of the prescription. The drug will be delivered to the address you provide.
Wegovy FlexTouch price
The price of Wegovy FlexTouch may vary by region and pharmacy. In Poland, the cost of Wegovy FlexTouch ranges from about 2000 to 3000 PLN for a month of treatment. Keep in mind, however, that the exact price may differ depending on where you buy it and on current promotions and discounts.
Summary
FlexTouch is a modern injection pen for administering Wegovy, which supports the treatment of obesity and overweight by acting on several levels to help patients achieve and maintain a healthy weight. Although the drug is effective, keep in mind its possible side effects and contraindications. Before starting treatment with Wegovy FlexTouch, it is always worth consulting a doctor to assess whether it is the right therapeutic option for you.
Using Wegovy FlexTouch as part of an integrated weight-management program that includes dietary changes, increased physical activity, and lifestyle modifications can lead to significant improvements in the health and quality of life of patients with obesity. | receptax07 | |
1,916,081 | Day 3: Continuous Integration Explained: How to Integrate Code Efficiently | Introduction to Continuous Integration (CI) Continuous Integration (CI) is a cornerstone... | 0 | 2024-07-08T16:27:43 | https://dev.to/dipakahirav/day-3-continuous-integration-explained-how-to-integrate-code-efficiently-2ohh | devops, cicd, learning, beginners | #### Introduction to Continuous Integration (CI)
Continuous Integration (CI) is a cornerstone of modern software development practices. It involves the regular merging of code changes into a shared repository, followed by automated builds and tests. This process helps identify and fix issues early, ensuring that the codebase remains stable and releasable. In this post, we will delve into the details of CI, its workflow, implementation, and best practices.
please subscribe to my [YouTube channel](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1) to support my channel and get more web development tutorials.
#### What is Continuous Integration (CI)?
Continuous Integration (CI) is the practice of frequently integrating code changes into a central repository. Each integration triggers an automated build and testing process, allowing developers to detect and resolve issues quickly. The primary goals of CI are to improve code quality, reduce integration problems, and accelerate the development process.
**Key Benefits of CI:**
- Early detection of bugs
- Improved collaboration among developers
- Faster development cycles
- Enhanced code quality and reliability
#### CI Workflow
A typical CI workflow includes the following steps:
1. **Code Commit:**
- Developers commit their code changes to a shared repository (e.g., Git).
- Each commit triggers the CI pipeline.
2. **Build:**
- The CI server (e.g., Jenkins) fetches the latest code from the repository.
- The code is compiled and built into executable artifacts.
3. **Automated Testing:**
- Automated tests (unit tests, integration tests) are executed to verify the code changes.
- Test results are reported back to the developers.
4. **Integration:**
- If the build and tests pass, the code changes are integrated into the main branch.
- The application is ready for further testing or deployment.
#### Implementing CI with Jenkins
**Step 1: Create a New Jenkins Job**
1. **New Item:**
- On the Jenkins dashboard, click on "New Item" to create a new job.
- Enter a name for your job (e.g., "My CI Pipeline") and select "Freestyle project" or "Pipeline" as the job type.
2. **Configure SCM:**
- In the job configuration page, scroll down to the "Source Code Management" (SCM) section.
- Select "Git" and enter the repository URL where your code is hosted.
3. **Build Triggers:**
- In the "Build Triggers" section, select "Poll SCM" or "GitHub hook trigger" to automatically trigger builds on code commits.
**Step 2: Define Build Steps**
1. **Build Step:**
- Scroll down to the "Build" section and add a build step.
- For a simple Java project, you might use "Execute shell" to run build commands like:
```sh
mvn clean install
```
2. **Test Step:**
- Add another build step to run your automated tests.
- For example, you can run JUnit tests using:
```sh
mvn test
```
**Step 3: Save and Run**
1. **Save and Build:**
- Save your job configuration and click on "Build Now" to run the pipeline.
- Jenkins will execute the build and test steps, displaying the output in real-time.
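For teams that prefer pipeline-as-code over Freestyle jobs, the same build and test steps can also be expressed as a declarative Jenkinsfile checked into the repository. A minimal sketch (stage names are illustrative; agent and tool configuration depend on your setup):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean install'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
            }
        }
    }
}
```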
#### Best Practices for CI
1. **Frequent Commits:**
- Commit code changes frequently to ensure that the CI pipeline runs often.
- Smaller, more frequent commits help isolate issues and reduce integration conflicts.
2. **Automated Testing:**
- Implement a robust suite of automated tests to catch issues early.
- Include unit tests, integration tests, and, if possible, end-to-end tests.
3. **Maintain a Clean Build:**
- Ensure that the main branch always has a clean, stable build.
- Use branching strategies (e.g., feature branches, pull requests) to manage code changes.
4. **Monitor and Optimize:**
- Regularly monitor your CI pipeline for performance and reliability.
- Optimize build and test times to keep the pipeline efficient.
5. **Continuous Feedback:**
- Provide continuous feedback to developers through build and test results.
- Use notifications (e.g., email, Slack) to keep the team informed of pipeline status.
#### Conclusion
Continuous Integration (CI) is a powerful practice that enhances code quality and accelerates the development process. By integrating code changes frequently and automating builds and tests, CI helps detect and resolve issues early, improving collaboration and productivity. In the next post, we will explore Continuous Deployment (CD) and learn how to automate your software releases.
Stay tuned for more insights and practical tips on mastering CI/CD!
Feel free to leave your comments or questions below. If you found this guide helpful, please share it with your peers and follow me for more web development tutorials. Happy coding!
### Follow and Subscribe:
- **Website**: [Dipak Ahirav](https://www.dipakahirav.com)
- **Email**: dipaksahirav@gmail.com
- **Instagram**: [devdivewithdipak](https://www.instagram.com/devdivewithdipak)
- **YouTube**: [devDive with Dipak](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1)
- **LinkedIn**: [Dipak Ahirav](https://www.linkedin.com/in/dipak-ahirav-606bba128) | dipakahirav |
1,916,082 | I launched a real-time telegram tracker for crypto communities. | The Problems We're Solving: *1. Information Overload: * - Crypto and Web3 enthusiasts often need to... | 0 | 2024-07-08T16:28:07 | https://dev.to/tonyfuchs112/i-launched-a-real-time-telegram-tracker-for-crypto-communities-1fle | javascript, python, cryptocurrency | The Problems We're Solving:
**1. Information Overload:**
- Crypto and Web3 enthusiasts often need to monitor dozens or even hundreds of communities across Telegram and Discord.
- Real-life example: Our platform allows you to track these communities without cluttering your personal Telegram with spam-filled chats. You can view all relevant information through our streamlined interface.

**2. Time-Consuming Research:**
- Manually sifting through thousands of messages daily is incredibly time-intensive.
- Our tool automates this process, saving hours of scrolling and reading.

**3. Difficulty Identifying Genuine Activity:**
- It's challenging to differentiate between real community engagement and artificial hype or bot activity.
- We provide metrics on authentic user interactions and admin involvement.

**4. Scam and Rug Pull Detection:**
- The crypto space is unfortunately rife with scams that can be hard to spot.
- Our platform flags suspicious behavior patterns and potential red flags.

**5. Keeping Up with Rapid Developments:**
- Projects can evolve quickly, making it hard to stay informed.
- We offer real-time updates and summaries of key discussions and announcements.

**6. Limited Visibility into Project Team Activity:**
- It's often unclear how active and responsive project teams really are.
- Our admin and moderator activity tracking provides insights into team engagement.

**7. Difficulty in Comparative Analysis:**
- Comparing multiple projects across different platforms can be challenging.
- Our tool brings data from various sources into one place for easier comparison.
By addressing these issues, our platform aims to empower users with more efficient, comprehensive, and insightful research capabilities in the crypto and Web3 space.
Website: https://nyxcipher.ai
X : https://x.com/nyxcipherai | tonyfuchs112 |
1,916,085 | Lightroom Premium APK for iOS | Adobe Lightroom is a powerful tool for photographers, offering a wide array of features for photo... | 0 | 2024-07-08T16:29:58 | https://dev.to/shams_uddin_54a750943f8e2/lightroom-premium-apk-for-ios-3e2e | learning, mobile, software | Adobe Lightroom is a powerful tool for photographers, offering a wide array of features for photo editing and management. While the official app is available on the Apple App Store, some users seek out modified versions like Lightroom Premium APK to access premium features for free. However, this practice comes with risks and considerations. In this article, we'll explore what Lightroom Premium APK offers, how it compares to the official app, and the implications of using it on your iOS device.
## **What is Lightroom Premium APK?**
Lightroom Premium APK is a modified version of the official Adobe Lightroom app. It is designed to unlock premium features without requiring a subscription. These features typically include advanced editing tools, selective adjustments, healing brushes, cloud storage, and more.
## **Key Features of Lightroom Premium APK**
**Advanced Editing Tools:** Access to professional-grade editing tools such as curves, color grading, and geometry adjustments.
**Selective Adjustments:** Ability to apply edits to specific areas of a photo, enhancing precision and control.
**Healing Brush:** Remove unwanted objects or blemishes seamlessly.
**Cloud Storage:** Free access to Adobe's cloud storage for backing up and syncing photos across devices.
**Presets and Profiles:** A vast library of presets and profiles for one-click photo enhancements.
## **How to Install Lightroom Premium APK on iOS**
1. **Enable Trust for Untrusted Apps:**
   - Go to Settings > General > Profiles & Device Management.
   - Find the profile associated with the downloaded APK and enable trust for it.
2. **Download from a Reliable Source:**
   - Ensure you download the APK from a reputable source to minimize the risk of malware or other security issues.
3. **Install the App:**
   - Use a tool like Cydia Impactor or AltStore to sideload the APK onto your iOS device.
## **Benefits of Using the Official App**
**Security**:
Guaranteed safety and regular updates directly from Adobe.
**Support:**
Access to Adobe’s customer support for troubleshooting and assistance.
**Integration:**
Seamless integration with other Adobe Creative Cloud apps and services.
## **Conclusion**
While [Lightroom Premium APK for iOS](https://lrapkp.com/adobe-lightroom-for-iphone-ipad-ios/) offers an enticing array of features for free, it comes with significant risks. Security vulnerabilities, legal issues, and lack of updates are serious considerations that can outweigh the benefits. For a secure and reliable photo editing experience, the official Adobe Lightroom app remains the best choice. It ensures you have access to the latest features, updates, and support, all within the bounds of Adobe’s terms of service.
| shams_uddin_54a750943f8e2 |
1,916,086 | What are your goals for week 28 of 2024? | It's week 28 of 2024. What are your goals for the week? What are you building? What... | 19,128 | 2024-07-08T17:05:58 | https://dev.to/jarvisscript/what-are-your-goals-for-week-28-of-2024-jil | discuss, motivation | It's week 28 of 2024.
## What are your goals for the week?
- What are you building?
- What will be a good result by week's end?
- What events are happening this week?
* any suggestions for in person or virtual events?
- Any special goals for the quarter?
### Last Week's Goals
Last week was a short week with no local events. I didn't plan out too much for the week; I expected most people would be off Thursday and Friday.
- [:white_check_mark:] Continue Job Search.
- [:white_check_mark:] Project work. Content update. Habit tracker v2 is not saving correctly.
- [:x:] Blog.
- Events.
No events
- [:white_check_mark:] Run a goal setting thread on Virtual Coffee Slack.
### This Week's Goals
- Continue Job Search.
- Project work.
- Blog.
- Events.
* Deploy 2024. On Tuesday.
- Run a goal setting thread on Virtual Coffee Slack.
### Your Goals for the week
Your turn what do you plan to do this week?
- What are you building?
- What will be a good result by week's end?
- What in person or virtual events are happening this week?
```shell
-$JarvisScript git commit -m "What are your goals?"
``` | jarvisscript |
1,916,087 | Deploying a Discord Bot to AWS EC2 Using Terraform | Prerequisites Install Terraform: Ensure that Terraform is installed on your local... | 0 | 2024-07-08T16:32:13 | https://dev.to/aisquare/deploying-a-discord-bot-to-aws-ec2-using-terraform-in5 | ## Prerequisites
1. **Install Terraform**: Ensure that Terraform is installed on your local machine. You can download it from the [Terraform website](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli).
2. **AWS CLI**: Install and configure the AWS CLI with your credentials. Follow the [AWS CLI installation guide](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
3. Discord Bot code, this tutorial is only concerned with the deployment of your code to AWS, not the actual bot code itself.
## AWS Setup with Terraform
1. **Set Up AWS Credentials**: Configure your AWS CLI with the `aws configure` command and provide your AWS Access Key, Secret Key, region, and output format.
2. **Create a Key Pair**: Generate an SSH key pair to access your EC2 instance. Use the following command:
```
ssh-keygen -t rsa -b 2048 -f ~/.ssh/aws -N ""
```
This will create a public key file (_~/.ssh/aws.pub_) and a private key file (_~/.ssh/aws_).
3. **Create a Terraform Directory**: Create a directory for your Terraform files and navigate into it:
```
mkdir terraform-deploy
cd terraform-deploy
```
4. **Discord Bot Code**: Place your Discord bot code in the following directory structure:
```
<terraform-dir>/Code/discordbot.py
<terraform-dir>/Code/.env
<terraform-dir>/Code/requirements.txt
```
Note: It is crucial to have your Discord bot code ready in the specified structure before proceeding. If your files are named differently or located in different directories, you must update the _user_data.sh.tpl_ file accordingly to match your setup.
## Terraform Configuration Files
1. **main.tf**: This file contains the main configuration for deploying an EC2 instance.
```
provider "aws" {
region = var.aws_region
}
resource "aws_instance" "web" {
ami = var.ami_id
instance_type = var.instance_type
key_name = aws_key_pair.generated_key.key_name
security_groups = [aws_security_group.sg_ssh.name]
user_data = templatefile("user_data.sh.tpl", {
python_script = file("${path.module}/Code/discordbot.py"),
env_file = file("${path.module}/Code/.env"),
requirements = file("${path.module}/Code/requirements.txt")
})
tags = {
Name = var.instance_name
}
}
resource "aws_security_group" "sg_ssh" {
name = "allow_ssh"
description = "Allow SSH inbound traffic"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [var.trusted_ip]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_key_pair" "generated_key" {
key_name = "generated-key"
public_key = file(var.public_key_path)
}
```
2. **variables.tf**: This file defines the variables used in the Terraform configuration.
```
variable "aws_region" {
description = "The AWS region to deploy to"
default = "us-east-1"
}
variable "ami_id" {
description = "The AMI ID for the EC2 instance"
default = "ami-0bb84b8ffd87024d8" # Amazon Linux 2 AMI ID
}
variable "instance_type" {
description = "The instance type for the EC2 instance"
default = "t2.micro" # You can update this based on your requirements
}
variable "trusted_ip" {
description = "The IP address allowed to SSH into the instance"
  default     = "<Insert-Your-IPV4-Address>/32" # Use "0.0.0.0/0" to allow SSH from all
}
variable "public_key_path" {
description = "Path to the SSH public key"
default = "~/.ssh/aws.pub"
}
variable "instance_name" {
description = "The name of the EC2 instance"
default = "Discord-Scripts-Instance" # Replace with your desired instance name
}
```
3. **outputs.tf**: This file outputs the public IP address of the EC2 instance.
```
output "instance_ip" {
description = "Public IP of the EC2 instance"
value = aws_instance.web.public_ip
}
```
4. **user_data.sh.tpl**: This script will be executed on the EC2 instance to set up the environment and run the Discord bot.
```
#!/bin/bash
# Update the package index
sudo yum update -y
# Install Python3 and pip
sudo yum install -y python3
# Ensure pip is installed
python3 -m ensurepip --upgrade
# Create a directory for the Python script and environment file
mkdir -p /home/ec2-user/app
# Copy the Python script
cat <<EOF > /home/ec2-user/app/discordbot.py
${python_script}
EOF
# Copy the .env file
cat <<EOF > /home/ec2-user/app/.env
${env_file}
EOF
# Copy the requirements.txt file
cat <<EOF > /home/ec2-user/app/requirements.txt
${requirements}
EOF
# Copy any additional files similarly
# Install required Python packages
sudo pip3 install -r /home/ec2-user/app/requirements.txt
# Run the Python script
python3 /home/ec2-user/app/discordbot.py
```
## Deploying with Terraform
1. **Initialize Terraform**: Run the following command to initialize Terraform. This will download the necessary providers and set up the environment.
```
terraform init
```
2. **Plan the Deployment**: Create an execution plan to ensure everything is configured correctly.
```
terraform plan
```
3. **Apply the Deployment**: Apply the configuration to deploy the resources to AWS.
```
terraform apply
```
4. **Retrieve the Instance IP**: Once the deployment is complete, you can retrieve the public IP of the EC2 instance.
```
terraform output instance_ip
```
5. **Access the EC2 Instance**: Use the public IP to SSH into the instance.
```
ssh -i ~/.ssh/aws ec2-user@<instance_ip>
```
## Conclusion
By following these steps, you can deploy your Discord bot to an AWS EC2 instance using Terraform. This setup ensures that your bot runs automatically on the instance, leveraging the infrastructure as code approach to manage your deployment.
Feel free to customize the provided Terraform configuration files according to your specific requirements. Happy deploying!
## ABOUT AISQUARE
[AISquare](https://aisquare.com/) is an innovative platform designed to gamify the learning process for developers. Leveraging an advanced AI system, AISquare generates and provides access to millions, potentially billions, of questions across multiple domains. By incorporating elements of competition and skill recognition, AISquare not only makes learning engaging but also helps developers demonstrate their expertise in a measurable way. The platform is backed by the Dynamic Coalition on Gaming for Purpose ([DC-G4P](https://intgovforum.org/en/content/dynamic-coalition-on-gaming-for-purpose-dc-g4p)), affiliated with the UN's Internet Governance Forum, which actively works on gamifying learning and exploring the potential uses of gaming across various sectors. Together, AISquare and DC-G4P are dedicated to creating games with a purpose, driving continuous growth and development in the tech industry.
You can reach us at [LinkedIn](https://www.linkedin.com/groups/14431174/), [X](https://x.com/AISquareAI), [Instagram](https://www.instagram.com/aisquarecommunity/), [Discord](https://discord.com/invite/8tJ3aCDYur).
_Author - Jatin Saini_
| aisquare | |
1,916,088 | Tooth Whitening | https://maps.google.com/maps?cid=17481504403585659126 | 0 | 2024-07-08T16:32:23 | https://dev.to/tooth-whitening/tooth-whitening-e8d | [https://maps.google.com/maps?cid=17481504403585659126](https://maps.google.com/maps?cid=17481504403585659126) | tooth-whitening | |
1,916,089 | Node.js is Not Single-Threaded | Node.js is known as a blazingly fast server platform with its revolutionary single-thread... | 0 | 2024-07-08T18:40:39 | https://dev.to/evgenytk/nodejs-is-not-single-threaded-29o1 | node, javascript, webdev, programming | Node.js is known as a blazingly fast server platform with its revolutionary single-thread architecture, utilizing server resources more efficiently. But is it actually possible to achieve that amazing performance using only one thread? The answer might surprise you.
In this article we will reveal all the secrets and magic behind Node.js in a very simple manner.
## Process vs Thread ⚙️
Before we begin, we have to understand what a process and a thread are, and discover their differences and similarities.
A **process** is an instance of a program that is currently being executed. Each process runs independently of others. Processes have several substantial resources:
- **Execution code**;
- **Data Segment** - contains global and static variables that needs to be accessible from any part of the program;
- **Heap** - dynamic memory allocation;
- **Stack** - local variables, function arguments and function calls;
- **Registers** - small, fast storage locations directly within CPU used to hold data temporarily during execution of programs (like program pointer and stack pointer).
A **thread** is a single unit of execution within a process. There might be multiple threads within the process performing different operations simultaneously. The process share execution code, data and heap with threads, but stack and registers are **allocated separately for each thread**.

## JavaScript is Not Threaded ❗️
To avoid misunderstanding of terms, it's important to note that JavaScript itself is **neither single-threaded nor multi-threaded**. The language has nothing to do with threading. It's just a set of instructions for the execution platform to handle. The platform handles these instructions in its own way - whether in a single-threaded or multi-threaded manner.
## I/O operations 🧮
I/O (Input/Output) operations are generally considered to be slow compared to other computer operations. Here are some examples:
- write data to the disk;
- read data from the disk;
- wait for user input (like mouse click);
- send HTTP request;
- performing a database operation.
## I/O's are Slow 🐢
You might be wondering why reading data from disk is considered slow? The answer lies in the physical implementation of hardware components.
Accessing the RAM is in the order of **nanoseconds**, while accessing data on the disk or the network is in the order of **milliseconds**.
The same applies to the bandwidth. RAM has a transfer rate consistently in the order of **GB/s**, while the disk or network varies from **MB/s** to optimistically GB/s.
On top of that, we have to consider the **human factor**. In many circumstances, the input of an application comes from a real person (like, a key press). So the speed and frequency of I/O doesn't only depend on technical aspects.
## I/O's Block the Thread 🚧
I/O's can significantly slow down a program. The thread remains blocked, and no further operations will be executed until the I/O is completed.

## Create More Threads! 🤪
Okay, why not just spawn more threads inside the program and handle each request separately? Well, it seems like a good idea. Now, each client request has its own thread, and the server can handle multiple requests simultaneously.

The program needs to allocate additional memory and CPU resources for each thread. This sounds reasonable. However, a **significant issue** arises when threads perform I/O operations - they become idle and spend most of their time using 0% of resources, waiting for the operation to complete. The more threads there are, the more resources are inefficiently utilized.
On top of that, managing threads is a challenging tasks leading to potential issues such as race conditions, deadlocks, and livelocks. The operating system needs to switch between threads, which can add overhead and reduce the efficiency gains from multithreading.
## What's the Solution? 🤔
Luckily, humanity has already invented smart mechanisms to perform these kinds of operations in an efficient manner.
Welcome the **Event Demultiplexer**. It involves a process called **Multiplexing** - a method by which signals are combined into one signal over a shared resource. The aim is to share a scarce resource (in our case, CPU and RAM). For example, in telecommunications, several telephone calls may be carried using one wire.

The responsibilities of the Event Demultiplexer are divided into the following steps:
- **Identify event Sources**. Each source can generate events;
- **Register event Sources**. The registration involves specifying which events to monitor for each source;
- **Wait for events**;
- **Send event notification**.
Important! The Event Demultiplexer is not a component or device that exists in the real world. It's more like a theoretical model used to explain how to handle numerous simultaneous events efficiently.
To understand this complex process, let's go back to the past. Imagine an old phone switchboard: it identifies and registers sources of events (phones) and waits for new events (calls). Once there is a new event (a phone call), the switchboard delivers a notification (lights up a bulb). Then, the switchboard operator reacts to the notification by checking the target phone number and forwarding the call to its desired destination.

For computers, the principle is the same. However, the role of sources is played by things such as file descriptors, network sockets, timers, or user input devices. Each source can generate events like data available to read, space available to write, or connection requests.
Each operating system has already implemented the Event Demultiplexer mechanism: [epoll](https://en.wikipedia.org/wiki/Epoll#:~:text=epoll%20is%20a%20Linux%20kernel,45%20of%20the%20Linux%20kernel.) (Linux), [kqueue](https://en.wikipedia.org/wiki/Kqueue) (macOS), event ports (Solaris), [IOCP](https://en.wikipedia.org/wiki/Input/output_completion_port) (Windows).
But Node.js is cross-platform. To govern this entire process while supporting cross-platform I/O, there is an abstraction layer that encapsulates these inter-platform and intra-platform complexities and exposes a generalized API for the upper layers of Node.
## Libuv the King 🏆

Welcome [libuv](https://docs.libuv.org/en/v1.x/) - a cross-platform library (written in C) originally developed for Node.js to provide a consistent interface for non-blocking I/O across various operating systems. Libuv not only interfaces with the system's Event Demultiplexer but also incorporates two important components: the **Event Queue** and the **Event Loop**. These components work together to efficiently handle concurrent non-blocking resources.
The **Event Queue** is a data structure where all events are placed by the Event Demultiplexer, ready to be enqueued and processed sequentially by the Event Loop until the queue is empty.
The **Event Loop** is a continuously running process that waits for messages in the Event Queue and then dispatches them to the appropriate handlers.
## Problem Solved? 🥳
This is what happens when we call an I/O operation:
1. Libuv initializes the appropriate event demultiplexer depending on the operating system;
2. The Node.js interpreter scans the code and puts every operation into the call stack;
3. Node.js sequentially executes operations in the call stack. However, for I/O operations, Node.js sends them to the Event Demultiplexer in a non-blocking way. This approach ensures that the I/O operation does not block the thread, allowing other operations to be executed concurrently.
4. The Event Demultiplexer identifies the source of the I/O operation and registers the operation using the OS's facilities;
5. The Event Demultiplexer continuously monitors the source (e.g., network sockets) for events (e.g., when data is available to read);
6. When the event occurs (such as data becoming available to read), the Event Demultiplexer signals and adds the event with the associated callback to the Event Queue;
7. The Event Loop continuously checks the Event Queue and processes the event callback.
While one request is waiting, Node.js can handle another. It does not wait for a request to complete before processing all other requests. By default, all requests you make in Node.js are concurrent - they do not wait for other requests to finish before executing.

Hooray! It seems like the problem is solved. Node.js can run efficiently on a single thread since most of the complexities of blocking I/O operations have been solved by OS developers. Thank you!
## Problem is NOT Solved 🫠
But if we take a closer look at the libuv structure, we find an interesting aspect:

Wait, **Thread Pool**? What? Yes, now we've delved deep enough to answer the main question - why Node.js is not (entirely) single-threaded.
## Unveiling The Secret 🤫
Okay, we have a powerful tool and OS utilities that allow us to run asynchronous code in a single thread.
But here is a problem with the Event Demultiplexer. Since the implementation of the Event Demultiplexer differs on each OS, some parts of I/O operations are **not fully supported** in terms of asynchrony. It is difficult to support all the different types of I/O across all the different OS platforms. These issues are especially related to the [file I/O implementations](https://blog.libtorrent.org/2012/10/asynchronous-disk-io/). This also has an impact on some of [Node.js's DNS functions](https://nodejs.org/api/dns.html#dns_implementation_considerations).
Not only that: there are other kinds of I/O that cannot be completed in an asynchronous manner, like:
- DNS Operations, like `dns.lookup` can block because they might need to query a remote server;
- CPU-bound tasks, like cryptography;
- ZIP compression.
For these kinds of cases, the thread pool is used to perform the I/O operations in separate threads (typically there are 4 threads by default). So, the complete Node.js architecture diagram would look like this:

Yes, Node.js itself is single-threaded, but the libraries it uses internally, such as libuv with its thread pool for **some** I/O operations, are not.
The **Thread Pool**, in conjunction with the **Tasks Queue**, is used to handle blocking I/O operations. By default, the Thread Pool includes 4 threads, but this behavior can be modified by providing an additional environment variable:
```
UV_THREADPOOL_SIZE=8 node my_script.js
```
This is what happens when an I/O operation cannot be performed asynchronously, but the key differences are:
1. When the Event Demultiplexer identifies the source of I/O operation it registers the operation in the Tasks Queue;
2. The Thread Pool continuously monitors the Tasks Queue for new tasks;
3. When a new task is placed in the Tasks Queue, the Thread Pool reacts by handling it with one of the pre-defined threads asynchronously;
4. After finishing the operation, the Thread Pool signals and adds the event with the associated callback to the Event Queue.
---
There is no magic here. I/O cannot actually be non-blocking, and there is no way to achieve that (at least for now). Data cannot be transferred faster than physics constraints dictate. Nothing is perfect, so until we find ways to increase data transfer speeds at the hardware level, we use a set of optimised algorithms to perform asynchronous operations in the most efficient way possible.
Thank you for reading and have a wonderful day :) | evgenytk |
1,916,090 | Using NgModule vs Standalone components | I have been studying Angular from a free YouTube course by Procademy. We are working on different dev... | 0 | 2024-07-08T16:35:04 | https://dev.to/yash_saxena_/using-ngmodule-vs-standalone-components-8d8 | I have been studying Angular from a free YouTube course by Procademy. We are working on different dev versions(node - 18.19.1 npm-10.8.1 ng-18.0.6). The creator created a sample project and there was an app.module.ts file by default but when I created the project it wasn't there. I started by creating components and using documentation I figured that If I create standalone components, I can just use imports to include it in another component. There are a few questions bugging me-
1. Do I need to use or create the app.module.ts file sooner or later?
2. I also tried to create the module.ts file myself, but it did not automatically register the components and started throwing errors around imports and standalone components. I removed the standalone property from the components and included them in the ts file, but it still throws errors. Can someone experienced give me a holistic view, and is there someplace I can learn with other people for free online? | yash_saxena_ | |
1,916,091 | Best Surrogacy Centres in Panaji - Ekmifertility | Ekmifertility is the best surrogacy centres in Goa . Contact us now, to know more about surrogacy... | 0 | 2024-07-08T16:35:06 | https://dev.to/nikhil_kumarsingh_214aef/best-surrogacy-centres-in-panaji-ekmifertility-7f1 | Ekmifertility is the best surrogacy centres in Goa . Contact us now, to know more about surrogacy treatment options. Surrogacy is a widely known process where another woman has to carry your child in her womb until birth. Ekmi Fertility, being the best surrogacy centre in Panaji has a large pool of surrogate mothers who are ready to help you complete your family. To book your appointment, call us now at +91-8448841271.
https://ekmifertility.com/blog/best-surrogacy-centre-in-panaji/
| nikhil_kumarsingh_214aef | |
1,916,092 | Python Introduction Course with Kaniyam | Day1 Introduction to Python and its usages How to install python in windows, Linux and MacOS How to... | 0 | 2024-07-08T16:43:18 | https://dev.to/mansoor_hussain_24fa27251/python-introduction-course-with-kaniyam-5b5i | python, kaniyam, week1 | Day1
1. Introduction to Python and its usages
2. How to install python in windows, Linux and MacOS
3. How to raise questions
- Use Google search
- Connect with online forums - https://forums.tamillinuxcommunity.org/
- Class chat - Whatsapp channel/Class Channel
4. How to check python version
 - Open a terminal and type `python`; the installed version is shown in the console banner (or run `python --version`)
5. Install visual studio code and run python file using Terminal window
6. Run the first print command.
7. Overall - the walkthrough from Saeed is excellent.
8. The FOSS introduction from Shrini was also brilliant, and the journey his team has taken is truly remarkable.
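A minimal version of that first print command from Day 1 might look like this (illustrative):

```python
# print() writes its argument to standard output followed by a newline.
message = "Hello, world!"
print(message)
```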
Day2
1. Recap first day lesson.
2. Print function general introduction.
3. How to use sep and end parameters.
4. How to use format and concatenate in print function.
5. How to use variables.
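The Day 2 topics above can be sketched in a few lines (the variable names are illustrative):

```python
name = "Kaniyam"                               # a variable holding a string

# sep sets the separator between arguments; end replaces the trailing newline.
print("python", "course", sep="-", end="!\n")  # python-course!

# An f-string formats a value into a string; + concatenates strings.
greeting = f"Hello, {name}"
print(greeting + " learners")                  # Hello, Kaniyam learners
```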
| mansoor_hussain_24fa27251 |
1,916,093 | What are Usna and Arwa Rice and their Health Benefits | Rice is consumed as a primary food source around the world, and comes in different varieties to... | 0 | 2024-07-08T16:43:38 | https://dev.to/veeroverseas_fc6c8680453a/what-are-usna-and-arwa-rice-and-their-health-benefits-5dme | basmati, news | Rice is consumed as a primary food source around the world, and comes in different varieties to choose from. Rice comes in two different options: Arwa and Usna rice. Arwa rice Vs Usna rice can pull out an interesting debate among rice lovers. If you’re planning a family get together, but confused between Usna and Arwa rice, then this blog is for you.
The two can be confusing. In fact, Arwa and Usna are not varieties of rice; they are two ways of processing paddy into rice. Any rice variety can be Usna or Arwa. In this blog, we'll talk about Arwa and Usna rice, the differences between them, their nutritional advantages, and much more. Let's dive in and explore the health benefits of Usna and Arwa rice.
What Is Arwa Rice?
Arwa rice is also known as white rice, and by other names in different states of India. Arwa rice is clear and white in texture, and is consumed with lentils, curries, and stir-fried veggies in India. Arwa rice is extracted from paddy by removing its outer husk.
Arwa rice is thoroughly processed to remove dust, contaminants, husk, and foreign grains. Arwa rice has no preservatives added and is free from harmful pollutants. It has a delicate texture and a delicious flavour. Arwa rice has several health benefits, such as fibre, carbohydrates, and good calories. On the downside, Arwa rice also has some side effects that you should know about.
Side effects of Arwa rice
Arwa rice, also known as aromatic rice, is generally safe to consume and is a staple in many cuisines around the world. However, like any food, there can be some considerations and potential side effects associated with its consumption:
Calorie Content: Arwa rice, like most rice varieties, is a source of carbohydrates and calories. If consumed in excessive quantities without appropriate portion control, it can contribute to weight gain and obesity.
Blood Sugar Levels: While Arwa rice has complex carbohydrates that provide sustained energy, it can still affect blood sugar levels. People with diabetes should monitor their intake and consider portion control to manage blood sugar effectively.
Gluten-Free: Arwa rice is naturally gluten-free, which is excellent for those with celiac disease or gluten sensitivity. However, cross-contamination during processing or preparation can be a concern, so individuals with severe gluten allergies should ensure they are consuming rice that has been certified gluten-free.
Digestive Issues: Rice, including Arwa rice, contains dietary fiber that can aid digestion for most people. However, some individuals may experience digestive discomfort such as gas, bloating, or diarrhea if they consume rice in excessive amounts.
Arsenic Concerns: Like many rice varieties, Arwa rice may contain trace amounts of arsenic, a naturally occurring element in soil and water. Chronic exposure to high levels of arsenic can be harmful. To mitigate this, rinse the rice thoroughly before cooking and consider using a higher water-to-rice ratio when cooking, which can help reduce arsenic levels.
Nutrient limitations: While Arwa rice has its unique aroma and flavor, it may not be as nutrient-dense as some other whole grains. If you rely heavily on Arwa rice in your diet, make sure to balance it with a variety of other foods to ensure you’re getting a broad spectrum of nutrients.
Health Benefits of Arwa Rice
If you haven’t heard about the top health benefits of Arwa rice, then this section is for you. Keep reading till the end, to explore the additional health benefits of Arwa rice
• Arwa rice is a rich source of Gamma-linolenic acid (GLA), which will improve your skin health.
• The GLA element reduces wrinkles, age spots, and other skin problems.
• Additionally Arwa rice is a primary source of vitamins and fiber, like niacin, vitamin B6, thiamin, and folate.
• Consuming Arwa rice will reduce inflammation, improve heart health, and promote blood sugar control.
What Is Usna Rice?
Usna rice is a little smoky in flavour and light on the stomach, and the best part is that it has fewer carbohydrates, which makes it well suited for weight management. The process of extracting Usna rice is different from that of Arwa rice, which is why it has more nutritional value.
Usna rice is extracted by parboiling the paddy. The parboiled rice is then dried in sunlight until it hardens again. After drying, the husk is removed, and the extracted rice is known as Usna rice.
Differences Between Usna and Arwa Rice
Processing Method: The primary difference between Usna and Arwa rice lies in their processing methods. Usna rice is parboiled, which helps retain more nutrients, while Arwa rice is long-grain aromatic rice, known for its distinct aroma.
Texture: Usna rice tends to be slightly firmer in texture compared to the fluffier Arwa rice. The texture of the rice can influence the dishes you choose to prepare with them.
Nutrient Content: Usna rice typically contains more nutrients like B vitamins and minerals due to the parboiling process. Arwa rice, on the other hand, is prized for its aromatic qualities but may not have the same nutrient retention.
Usna vs. Arwa Rice: Which is Healthier?
The choice between Usna and Arwa rice depends on your dietary preferences and health goals. If you are looking for a rice variety that retains more nutrients and has a lower glycemic index, Usna rice may be the better choice; it is especially beneficial for individuals managing diabetes or aiming to increase their fiber intake.
If you are searching for the **[Best Basmati Rice Exporter in India](https://veeroverseas.com/)** or the **[Best Basmati Rice Supplier in India](https://veeroverseas.com/)**, Veer Overseas Ltd. is one of the **[Best Basmati Rice Manufacturers in India](https://veeroverseas.com/)**.
On the other hand, if you are seeking a rice variety that enhances the aroma and overall dining experience of your meals, Arwa rice is an excellent option. Its fluffy texture and aromatic qualities make it a favorite for dishes like biryani, pilaf, and other rice-based delicacies.
| veeroverseas_fc6c8680453a |
1,916,094 | Unleashing the Potential of JS Voice User Interface with Sista AI | Unleash the potential of JS Voice User Interface with Sista AI! Explore how AI can transform user interactions and revolutionize engagement 🚀 | 27,994 | 2024-07-08T16:45:34 | https://dev.to/sista-ai/unleashing-the-potential-of-js-voice-user-interface-with-sista-ai-bb6 | ai, react, javascript, typescript | <h2>Unlocking the Power of Voice User Interfaces with Sista AI</h2><p>Sista AI, an end-to-end AI integration platform, revolutionizes user interactions by seamlessly integrating AI voice assistants into apps within minutes, enhancing user engagement and accessibility. With cutting-edge solutions like conversational AI agents and multi-tasking UI controllers, Sista AI transforms user interactions.</p><h2>Revolutionizing User Experience with Advanced AI</h2><p>Sista AI's multi-tasking UI controller and automatic screen reader revolutionize user experience, while real-time data integration and personalized customer support elevate app functionality. The voice user interface supports commands in over 40 languages, ensuring a dynamic and engaging user experience.</p><h2>Creating Seamless Voice User Interfaces</h2><p>Designing user-centered voice interfaces involves critical concepts like conversation and visual interface design. Minimize the number of options in system prompts, provide definitive choices, and mimic actual speech cadence for a seamless experience.</p><h2>The Future of AI-Driven Interactions</h2><p>Sista AI leads the future of Artificial General Intelligence (AGI) with groundbreaking innovations. 
Our advanced AI solutions set new industry standards, offering unprecedented capabilities and seamless integration, revolutionizing interactions between businesses and users.</p><p>Unlock the power of AI-driven interactions today with <a href='https://smart.sista.ai/?utm_source=sista_blog&utm_medium=blog_post&utm_campaign=Unleashing_Potential_of_JS_Voice_User_Interface_with_Sista_AI'>Sista AI</a>. Experience the future of AI integration with <a href='https://admin.sista.ai'>Sista AI Admin Panel</a>.</p><br/><br/><a href="https://smart.sista.ai?utm_source=sista_blog_devto&utm_medium=blog_post&utm_campaign=big_logo" target="_blank"><img src="https://vuic-assets.s3.us-west-1.amazonaws.com/sista-make-auto-gen-blog-assets/sista_ai.png" alt="Sista AI Logo"></a><br/><br/><p>For more information, visit <a href="https://smart.sista.ai?utm_source=sista_blog_devto&utm_medium=blog_post&utm_campaign=For_More_Info_Banner" target="_blank">sista.ai</a>.</p> | sista-ai |
1,916,095 | Why ‘Screw Optimization’ is My New Mantra | Before the LeetCoders jump in, let me clarify: I’m not advocating for sloppy algorithms. Instead, I’m... | 0 | 2024-07-11T15:15:00 | https://dev.to/nmiller15/why-screw-optimization-is-my-new-mantra-1lip | productivity, programming, learning, career | Before the LeetCoders jump in, let me clarify: I’m not advocating for sloppy algorithms. Instead, I’m challenging the obsession with finding the perfect solution before taking any action.
### **The Perfectionist’s Dilemma**
For the past two months, I’ve been telling myself to stop obsessing over the optimal solution and just get started. As a perfectionist, I’ve always wanted to ensure every system I implement is flawless—whether it’s a productivity tool, a gym routine, or learning a new programming language. This obsession often led to paralysis.
Instead of hitting the gym, I’d spend weeks perfecting the ideal workout plan. Instead of writing code, I’d get lost in researching the best resources, ending up with nothing but wasted time. I bet some of you can relate!
### **When Planning Becomes Procrastination**
This phenomenon is called productive procrastination. Have you experienced it? It feels good, like you’re making progress. But at the end of the day, you haven’t moved the needle on your meaningful work. Worse yet, sometimes diving in headfirst without a plan just means spinning your wheels in familiar but unproductive patterns.
### **The Cost of Chasing Perfection**
The key is finding a balance between planning and action. If you’re spending two hours planning each day, you’re stuck in productive procrastination. If you constantly switch directions after every task, you’re suffering from a lack of planning.
In my experience, endless hours spent searching for the “right” way to do something often results in producing less meaningful work.
### **Embracing Imperfection: My New Approach**
So, what’s the alternative? Create a non-optimal plan and don’t stick to it rigidly. It sounds counterintuitive, but sometimes all you need is a rough idea and the courage to get started.
Let's say I want to learn Python. I just have to start with a basic outline and a project idea. As I encounter problems, I can turn to my resources and adjust my plan. This approach allows me not only to learn concepts but also to apply them in real-world situations, leaving me with practical knowledge and a working project, even if I backtrack occasionally.
### **Just Get Started**
The point is simple: just get started. You’re going to make mistakes and take wrong turns, but that’s part of the learning process. Trying to optimize every second will paradoxically waste more time than if you had just begun.
So, to all my fellow perfectionists out there: start now, refine as you go, and don’t let the quest for perfection hold you back.
| nmiller15 |
1,916,096 | The if and else statement | The complete form of the if statement is: The else clause is optional. If the conditional expression is... | 0 | 2024-07-09T22:02:21 | https://dev.to/devsjavagirls/a-instrucao-if-e-else-j8c | java | - The complete form of the if statement is:

- The else clause is optional.
- If the conditional expression is true, the statements inside the if are executed. Otherwise, the statements inside the else, if present, are executed.
- The two branches are never both executed.
- The conditional expression that controls the if must produce a boolean result.
- Example: to demonstrate if and other control statements, a simple guessing game will be created. In the first version of the game, the program asks the player for a letter between A and Z. If the player presses the correct letter on the keyboard, the program displays the message Right.

- This program interacts with the player and reads a character from the keyboard.
- Using an if statement, it compares the character with the answer (K).
- If K is entered, the message Right is displayed. The K must be entered in uppercase for the program to work correctly (Java is case-sensitive).
- Example: the next version uses else to display a message when the wrong letter is chosen.

**Nested ifs**
- A nested if is an if statement that is the target of another if or else.
- Nested ifs are very common in programming.
- In Java, an else statement always refers to the nearest if statement within the same block that is not already associated with an else.
- Example:

- The final else is not associated with if(j < 20) because it is not in the same block.
- The final else is associated with if(i == 10).
- The inner else refers to if(k > 100) because it is the nearest if within the same block.
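The fragment in the image can be reconstructed along these lines (variable names come from the description above; the values passed in are arbitrary scaffolding):

```java
public class NestedIf {
    static int demo(int i, int j, int k) {
        int a = 0, b = 1, c = 2, d = 3;
        if (i == 10) {
            if (j < 20) a = b;
            if (k > 100) c = d;
            else a = c;      // this else refers to if(k > 100)
        }
        else a = d;          // this else refers to if(i == 10), not if(j < 20)
        return a;
    }

    public static void main(String[] args) {
        System.out.println(demo(10, 5, 50));   // inner else runs: a = c
        System.out.println(demo(9, 5, 50));    // outer else runs: a = d
    }
}
```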
- Example: an if will be added to improve the guessing game by giving the player feedback about a wrong guess.

**The if-else-if ladder**
- A common programming construct based on the nested if is the if-else-if ladder.
- The conditional expressions are evaluated from top to bottom.
- As soon as a true condition is found, the associated statement is executed and the rest of the ladder is skipped.
- If none of the conditions is true, the final else statement is executed.
- The final else often acts as a default condition.
If there is no final else and all the conditions are false, no action takes place.
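A minimal ladder of this shape can be sketched as follows (a reconstruction, since the original listing appears only as an image; the class and method names are mine):

```java
public class Ladder {
    static String classify(int x) {
        if (x == 1) return "x is one";
        else if (x == 2) return "x is two";
        else if (x == 3) return "x is three";
        else if (x == 4) return "x is four";
        else return "x is not between 1 and 4";  // the final else acts as the default
    }

    public static void main(String[] args) {
        for (int x = 0; x < 6; x++)
            System.out.println(classify(x));
    }
}
```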
- Example:

The program produces the output below:
x is not between 1 and 4
x is one
x is two
x is three
x is four
x is not between 1 and 4 | devsjavagirls |
1,916,097 | Sahipro controls for Automation testing | Hi All, Today I learn about sahipro for automation testing Text input... | 0 | 2024-07-08T16:48:34 | https://dev.to/karthick_ravi/sahipro-controls-for-automation-testing-2dkn | automaton, selenium, testing | Hi All,
Today I learned about Sahi Pro for automation testing.

Text input and basic window controls:

```js
// Set a value in a text box identified by "fieldloc"
_setValue(_textbox("fieldloc"), "Test1");
// Select and lock the current window
_selectWindow();
_lockWindow();
```
| karthick_ravi |
1,916,098 | How to connect keycloak and Nuxt | While working in an internal project, I got the task of getting the connection between keycloak and... | 0 | 2024-07-08T16:51:06 | https://dev.to/leamsigc/how-to-connect-keycloak-and-nuxt-3blc | nuxt, vue, tutorial |
While working on an internal project, I got the task of connecting Keycloak to our Nuxt application.
After an hour of research, I found two feasible options to get this working quickly and easily.
→ Using `keycloak-js`
1. Manage Keycloak manually:
```vue
<script setup>
import Keycloak from 'keycloak-js'
import { useKeycloak } from '@/stores/keycloak'
useHead({
title: 'Home page'
})
const config = useRuntimeConfig()
const store = useKeycloak()
const state = reactive({
loggedIn: false
})
if (config.public.keycloakDisabled) {
state.loggedIn = true
} else {
const initOptions = {
url: config.public.keycloakUrl,
realm: config.public.keycloakRealm,
clientId: config.public.keycloakClientId,
onLoad: 'login-required'
}
const keycloak = new Keycloak(initOptions)
keycloak
.init({ onLoad: initOptions.onLoad })
.then((auth) => {
if (!auth) {
window.location.reload()
} else {
store.setup(keycloak)
state.loggedIn = true
}
})
}
</script>
<template>
<div>
<div v-if="state.loggedIn">
<Header />
<NuxtPage />
</div>
</div>
</template>
```
With this option you don’t have public pages; every route requires login.
→ Using the `nuxt-openid-connect` module,
which is built on `node-openid-client`.
With this option, you can have public routes by just extending the `nuxt-config`
```ts
openidConnect: {
addPlugin: true,
op: {
issuer: "http://keycloak:8080/realms/dev-realm", // change to your OP addrress
clientId: "CLIENT_ID",
clientSecret: "SECRET_KEY",
callbackUrl: "", // optional
scope: ["email", "profile", "address"],
},
config: {
debug: true,
response_type: "code",
secret: "oidc._sessionid",
cookie: { loginName: "" },
cookiePrefix: "oidc._",
cookieEncrypt: true,
cookieEncryptKey: "SECRET_KEY",
cookieEncryptIV: "ab83667c72eec9e4",
cookieEncryptALGO: "aes-256-cbc",
cookieMaxAge: 24 * 60 * 60, // default one day
cookieFlags: {
access_token: {
httpOnly: true,
secure: false,
},
},
},
},
```
Then create a `middleware/auth.global.ts`
```ts
export default defineNuxtRouteMiddleware((to, from) => {
if (import.meta.server) {
return;
}
const isAuthRequired = to.meta.auth || false;
const oidc = useOidc();
if (isAuthRequired && !oidc.isLoggedIn) {
oidc.login(to.fullPath);
}
});
```
for public pages, you can set the meta attribute:
```vue
<script lang="ts" setup>
/**
*
* Component Description:Desc
*
* @author Reflect-Media <ismael@leamsigc.com>
* @version 0.0.1
*
* @todo [ ] Test the component
* @todo [ ] Integration test.
* @todo [✔] Update the typescript.
*/
definePageMeta({
auth: false,
layout: "public-view",
});
</script>
<template>
<div class="grid place-items-center">
<RegistrationForm />
</div>
</template>
<style scoped></style>
```
for the pages that need authentication:
```vue
<script lang="ts" setup>
/**
*
* Component Description:Desc
*
* @author Reflect-Media <reflect.media GmbH>
* @version 0.0.1
*
* @todo [ ] Test the component
* @todo [ ] Integration test.
* @todo [✔] Update the typescript.
*/
definePageMeta({
auth: true,
});
</script>
<template>
<div>content</div>
</template>
<style scoped></style>
```
The other option is to create two layouts:
- `layouts/default.vue` → sets auth to true by default
- `layouts/publicView.vue` → sets auth to false
Resources:
[Module](https://github.com/aborn/nuxt-openid-connect)
[Example with keycloak-js](https://github.com/FAIRDataTeam/TrainHandler-client)
*Happy hacking!*
{% gist https://gist.github.com/leamsigc/68d3f891d9298c35de273caa2b21e453 %}
> Working on the audio version
[The Loop VueJs Podcast](https://podcasters.spotify.com/pod/show/the-loop-vuejs) | leamsigc |
1,916,099 | 🚀 Launching FidForward! 🚀 | Today, Bernardo and I are excited to launch FidForward in private beta! We're looking for companies... | 0 | 2024-07-08T16:54:15 | https://dev.to/rbatista19/launching-fidforward-54gc | react, node | Today, Bernardo and I are excited to launch FidForward in private beta!
We're looking for companies with 10-50 employees willing to pilot the next generation of performance management.
With two pilot programs already running successfully (achieving a 30-50% improvement in eNPS), we’re eager to expand FidForward to more companies to validate these results further.
If you’re a founder seeking to implement performance management or a larger company aiming to elevate your current system, please reach out to me!
🔗 Learn more about FidForward at [FidForward.com](https://fidforward.com) | rbatista19 |
1,916,100 | The switch statement | The second Java selection statement is switch. The switch statement provides a multiway branch, allowing the program... | 0 | 2024-07-09T22:02:40 | https://dev.to/devsjavagirls/a-instrucao-switch-5bed | java | - The second Java selection statement is switch.
- The switch statement provides a multiway branch, allowing the program to choose among several alternatives.
- Although a series of nested if statements can perform multiway tests, in many situations switch is a more efficient approach.
- It works by checking the value of an expression against a list of constants.
- When a match is found, the statement sequence associated with that match is executed.

- In versions of Java prior to JDK 7, the expression controlling the switch must be of type byte, short, int, char, or an enumeration. Beginning with JDK 7, the expression can also be of type String.
- Each value specified in the case statements must be a unique constant expression.
- Duplicate case values are not allowed, and the type of each value must be compatible with the type of the expression.
- The default statement sequence is executed when no case constant matches the expression.
- The default statement is optional; if it is not present, no action takes place when all the comparisons fail.
- When a match is found, the statements associated with that case are executed until a break is encountered or, for default or the last case, until the end of the switch is reached.

The output produced by this program is shown here:
i is zero
i is one
i is two
i is three
i is four
i is five or more
i is five or more
i is five or more
i is five or more
i is five or more
- On each pass through the loop, the statements associated with the case constant that matches i are executed.
- All other case statements are bypassed.
- When i is five or greater, no case statement matches, so the default statement is executed.
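The listing appears only as an image, but a program consistent with the output above is easy to reconstruct (class and method names are mine):

```java
public class SwitchDemo {
    // Returns the message printed for a given i.
    static String label(int i) {
        String msg;
        switch (i) {
            case 0: msg = "i is zero"; break;
            case 1: msg = "i is one"; break;
            case 2: msg = "i is two"; break;
            case 3: msg = "i is three"; break;
            case 4: msg = "i is four"; break;
            default: msg = "i is five or more";
        }
        return msg;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++)
            System.out.println(label(i));  // matches the ten lines of output above
    }
}
```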
- Technically, the break statement is optional, although it is commonly used in a switch.
- A break statement inside a case causes the program to exit the switch.
- If there is no break after a matching case, the statements of the following cases are executed until a break is encountered or the end of the switch is reached.

This program displays the output below:
i is less than one
i is less than two
i is less than three
i is less than four
i is less than five
i is less than two
i is less than three
i is less than four
i is less than five
i is less than three
i is less than four
i is less than five
i is less than four
i is less than five
i is less than five
- As the program illustrates, execution falls through to the next case when no break statement is present.
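A reconstruction consistent with the output above (the original listing is shown only as an image; the helper returning a String is mine, added so the cascade is easy to inspect):

```java
public class NoBreak {
    // With no break statements, a matching case falls through all later cases.
    static String cascade(int i) {
        StringBuilder sb = new StringBuilder();
        switch (i) {
            case 0: sb.append("i is less than one\n");
            case 1: sb.append("i is less than two\n");
            case 2: sb.append("i is less than three\n");
            case 3: sb.append("i is less than four\n");
            case 4: sb.append("i is less than five\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++)
            System.out.print(cascade(i));  // 5 + 4 + 3 + 2 + 1 = 15 lines in total
    }
}
```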
- You can have empty cases.

- In this fragment, if i has the value 1, 2, or 3, the first println() statement is executed.
- If i equals 4, the second println() statement is executed.
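The fragment described above appears only as an image; here it is rewritten with returns instead of println()/break so the selection logic is easy to check (the messages and class name are assumptions):

```java
public class EmptyCases {
    // Cases 1 and 2 are empty, so they fall through to the statement under case 3.
    static String classify(int i) {
        switch (i) {
            case 1:
            case 2:
            case 3:
                return "i is one, two or three";
            case 4:
                return "i is four";
            default:
                return "i is something else";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(2));
        System.out.println(classify(4));
    }
}
```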
**Nested switch statements**
- A switch can be part of the statement sequence of an outer switch; this is called a nested switch.
- Even if the case constants of the inner and outer switch have values in common, no conflict arises.
- Example:

| devsjavagirls |
1,916,101 | Complete Guide to the Django Services and Repositories Design Pattern with the Django REST Framework | Introduction to the Django Services and Repositories Design Pattern with Django REST... | 0 | 2024-07-08T16:59:18 | https://mateoramirezr.hashnode.dev/django-services-and-repositories-design-pattern-with-rest-api | webdev, django, backend, api | ## Introduction to the Django Services and Repositories Design Pattern with Django REST Framework
[**You can find the complete code and structure of the project in the following GitHub link:** *<mark>Click</mark>*](https://github.com/MateoRamirezRubio1/mini-blog-rest-api).
In the world of software development, code organisation and maintainability are crucial to the long-term success of any project. In particular, when working with frameworks like Django and the Django REST Framework to build robust web applications and APIs, it's essential to follow design patterns that help us keep our code clean, modular and easy to scale.
In this blog, we will explore one of these widely used design patterns: Services and Repositories. This pattern allows us to separate the concerns of data access and business logic, improving the structure and clarity of our code. Through this approach, we not only make our applications easier to maintain and test, but also more flexible and future-proof.
Join us as we break down this pattern step-by-step, from initial project setup to service and repository deployment, and discover how it can transform the way you develop your Django apps.
The realisation of the project is divided into three main sections: the structuring and design of the project; the coding of the structured project; testing.
[**You can find the complete code and structure of the project in the following GitHub link:** *<mark>Click</mark>*](https://github.com/MateoRamirezRubio1/mini-blog-rest-api).
---
## Creation and design of the project
In this first section, we will see how to create a new Django project with DRF (Django REST Framework) and we will analyse how the main parts of the project and REST APIs will be structured.
1. ### Start and creation of the Django project
Before starting the project, you must install Django and Django REST Framework if you don't already have it:
```bash
pip install django djangorestframework
```
Now that we've installed the main tools we're going to use for the project, we'll create a new Django project using the `django-admin startproject` command:
```bash
django-admin startproject my_project_blog
```
This command generates the basic structure of a Django project, including configuration files and a project directory.
2. ### Creation of Django applications
For this small blog project, we will have two main functionalities which we will divide into two Django apps: `Posts` and `Comments`.
For this, in the terminal, inside the main project directory (my\_project\_blog), create two applications: `Posts` and `Comments` using the python command `manage.py startapp`:
```bash
python manage.py startapp posts
python manage.py startapp comments
```
Apps are modular Django components that group related functionalities. In this case, `Posts` will handle blog posts and `Comments` will handle comments.
3. ### **File settings:**
Add the created apps and the DRF to the `INSTALLED_APPS` section in the project settings file: `/my_project_blog/my_project_blog/settings.py` :
```python
INSTALLED_APPS = [
"rest_framework",
"apps.posts",
"apps.comments",
...
```
4. ### Project structure:
Next, I will show you how the project will be structured. You can create all the directories/folders and files now (ignore `README.md`, `.env`, `.gitignore`); we will fill them with code as we go.
* **Main structure:**

* **Structure of the Comments app:**

* **Structure of the Posts app:**

5. ### System design:
For the realisation of the project we opted for an architecture based on layers and classic design patterns such as Repository and Services.
The Services and Repositories pattern divides business logic (services) and data access logic (repositories) into separate layers:
| Layer | Description |
| --- | --- |
| **Web Layer** | It presents the data to users. In this case, the views are for both REST APIs (using DRF) and web views (using Django Views). ***<mark>Note:</mark>*** REST APIs views handle all the API logic (CRUD, etc), while web views only handle sending data to the user via templates. |
| **Service layer** | Contains the business logic. It communicates with the `Repository Layer` to retrieve data and performs additional operations (without interacting directly with the database) before returning data to the views or controllers. |
| **Repository Layer** | It is responsible for interacting directly with the database. This layer is responsible for basic CRUD operations. ***<mark>Note:</mark>*** This layer is the only one in charge of interacting directly with the database. |
| **Model Layer** | Data models representing the database tables. |

We will delve a little deeper into each layer later as we code.
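Before wiring this into Django, the flow between the layers can be sketched in plain Python. Everything below (the class names, the dict-backed store, the `title` rule) is illustrative only, not part of the project:

```python
class InMemoryPostRepository:
    """Repository layer: the only layer that touches the data store (a dict here)."""

    def __init__(self):
        self._rows = {}
        self._next_id = 1

    def create(self, data):
        post = {"id": self._next_id, **data}
        self._rows[self._next_id] = post
        self._next_id += 1
        return post

    def get(self, post_id):
        # Return None when the post is missing, so callers decide what a miss means.
        return self._rows.get(post_id)


class PostService:
    """Service layer: business rules; it never queries the store directly."""

    def __init__(self, repo):
        self._repo = repo

    def create_post(self, data):
        if not data.get("title"):
            raise ValueError("title is required")  # an example business rule
        return self._repo.create(data)

    def get_post(self, post_id):
        return self._repo.get(post_id)


# "Web layer": a view would call the service and serialize the result.
service = PostService(InMemoryPostRepository())
post = service.create_post({"title": "Hello", "content": "First post"})
print(post["id"])  # → 1
```

The view never sees the store, and the repository never enforces business rules; each layer can be tested or swapped independently.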
6. ### Design of REST APIs
REST APIs allow communication between the frontend and the backend. We use the Django REST Framework to create RESTful APIs. This is the structure we have for their respective naming and functions:
| HTTP Method | Endpoint | Description |
| --- | --- | --- |
| **GET** | `/posts/` | Gets a list of all posts. |
| **POST** | `/posts/` | Create a new post in the database. |
| **GET** | `/posts/{post_id}/` | Gets the details of a specific post. |
| **PUT** | `/posts/{post_id}/` | Update a specific post. |
| **DELETE** | `/posts/{post_id}/` | Delete a specific post. |
| **GET** | `/posts/{post_id}/comments/` | Gets a list of comments for a post. |
| **POST** | `/posts/{post_id}/comments/` | Create a new comment for a post. |
| **GET** | `/posts/{post_id}/comments/{comment_id}/` | Gets the details of a comment on a post. |
| **PUT** | `/posts/{post_id}/comments/{comment_id}/` | Update a comment on a post. |
| **DELETE** | `/posts/{post_id}/comments/{comment_id}/` | Remove a comment from a post. |
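These routes could be wired up with a URL configuration along the following lines. This is a sketch: the project's actual `urls.py` files are not shown in this section, so the layout here is an assumption, while the kwarg names `post_id` and `comment_pk` come from the view code later in the article:

```python
# Hypothetical URL configuration for the endpoints in the table above.
from django.urls import path

from apps.posts.views.api_views import (
    PostListCreateAPIView,
    PostRetrieveUpdateDestroyAPIView,
)
from apps.comments.views.api_views import (
    CommentListCreateAPIView,
    CommentRetrieveUpdateDestroyAPIView,
)

urlpatterns = [
    path("posts/", PostListCreateAPIView.as_view()),
    path("posts/<int:post_id>/", PostRetrieveUpdateDestroyAPIView.as_view()),
    path("posts/<int:post_id>/comments/", CommentListCreateAPIView.as_view()),
    path(
        "posts/<int:post_id>/comments/<int:comment_pk>/",
        CommentRetrieveUpdateDestroyAPIView.as_view(),
    ),
]
```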
---
## Coding and deepening
Now that we are clear about the layout of the project, for the second section, we will code our blogging project by delving into several of the layers we talked about earlier.
1. ### Model:
The Model represents the data. It defines the database tables. In Django, models are classes that inherit from `models.Model`.
For this project we will create two models, one for each Django app:
| Model | Fields | Description |
| --- | --- | --- |
| **Post** | `'title', 'content'` | It represents a blog post. |
| **Comment** | `'post', 'content'` | Represents a comment associated with a post. |
* **In the file** `/my_project_blog/apps/posts/models.py`:
```python
from django.db import models


class Post(models.Model):
title = models.CharField(max_length=200)
content = models.TextField()
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
def __str__(self):
return self.title
```
* **In the file** `/my_project_blog/apps/comments/models.py`:
```python
from django.db import models

from apps.posts.models import Post

class Comment(models.Model):
post = models.ForeignKey(Post, related_name="comments", on_delete=models.CASCADE)
content = models.TextField()
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
def __str__(self):
return self.content[:20]
```
2. ### Views
The View handles user requests and returns responses. Django offers both class-based views (CBV) and function-based views (FBV).
`CommentListView` and `CommentDetailView` are examples of Class-Based Views (CBV).
Separation of concerns is a design principle that promotes the separation of a program into distinct sections, with each section addressing a separate concern.
As we saw earlier for this project, we separated the views into two:
* **API REST Views:** Handle API-specific logic, such as serialisation, validation and returning JSON responses.
* **Traditional Django Views (Web Views):** They handle rendering templates, session management and other web-specific logic.
**For the Posts app:**
* **In the file** `/my_project_blog/apps/posts/views/api_views.py`:
```python
from rest_framework import generics
from rest_framework.exceptions import NotFound

from ..serializers import PostSerializer
from ..services.post_service import PostService
# API view for listing all posts and creating a new post.
# Utilizes Django REST Framework's ListCreateAPIView for listing and creating resources.
class PostListCreateAPIView(generics.ListCreateAPIView):
serializer_class = PostSerializer # Defines the serializer class used for converting model instances to JSON and vice versa.
# Fetches all posts from the database.
# `get_queryset` method specifies the queryset for listing posts.
def get_queryset(self):
return (
PostService.get_all_posts()
) # Delegates the database query to the PostService layer.
# Handles the creation of a new post.
# `perform_create` is called after validation of the serializer.
def perform_create(self, serializer):
# Creates a new post using validated data from the serializer.
PostService.create_post(
serializer.validated_data
) # Uses PostService to handle creation logic.
# API view for retrieving, updating, and deleting a specific post.
# Extends RetrieveUpdateDestroyAPIView for detailed operations on a single resource.
class PostRetrieveUpdateDestroyAPIView(generics.RetrieveUpdateDestroyAPIView):
serializer_class = PostSerializer # Specifies the serializer class for retrieving, updating, and deleting resources.
# Retrieves a post object based on the provided post_id.
# `get_object` method returns the post instance for the specified post_id.
def get_object(self):
post_id = self.kwargs.get("post_id") # Extracts post_id from the URL kwargs.
post = PostService.get_post_by_id(
post_id
) # Fetches the post using PostService.
if post is None:
raise NotFound(
"Comment not fund"
) # Raises a 404 error if the post does not exist.
return post # Returns the post instance.
# Updates an existing post instance.
# `perform_update` is called after the serializer's data is validated.
def perform_update(self, serializer):
post_id = self.kwargs["post_id"] # Extracts post_id from the URL kwargs.
# Updates the post with new data.
PostService.update_post(
serializer.validated_data, post_id
) # Delegates the update logic to PostService.
# Deletes a post instance.
# `perform_destroy` is called to delete the specified post.
def perform_destroy(self, instance):
post_id = self.kwargs["post_id"] # Extracts post_id from the URL kwargs.
# Deletes the post using PostService.
PostService.delete_post(post_id) # Delegates the deletion logic to PostService.
```
* **In the file** `/my_project_blog/apps/posts/views/web_views.py`:
```python
from django.views.generic import ListView, DetailView
from ..models import Post
from ..services.post_service import PostService
# Class-based view for listing all posts in the web interface.
# Utilizes Django's ListView to handle displaying a list of posts.
class PostListView(ListView):
model = Post # Specifies the model to be used in the view.
template_name = (
"posts/post_list.html" # Path to the template for rendering the list of posts.
)
context_object_name = "posts" # Context variable name to be used in the template.
# Overrides the default get_queryset method to fetch all posts from the service layer.
def get_queryset(self):
return (
PostService.get_all_posts()
) # Delegates the database query to the PostService.
# Class-based view for displaying the details of a single post.
# Utilizes Django's DetailView to handle displaying detailed information of a single post.
class PostDetailView(DetailView):
model = Post
template_name = "posts/post_detail.html"
context_object_name = "post"
# Overrides the default get_object method to fetch a specific post based on post_id.
def get_object(self, queryset=None):
post_id = self.kwargs.get("post_id") # Extracts post_id from the URL kwargs.
return PostService.get_post_by_id(
post_id
) # Fetches the post using the PostService.
```
**For the Comments app:**
* **In the file** `/my_project_blog/apps/comments/views/api_views.py`:
```python
from rest_framework import generics
from ..serializers import CommentSerializer
from ..services.comment_service import CommentService
from rest_framework.exceptions import NotFound
class CommentListCreateAPIView(generics.ListCreateAPIView):
serializer_class = CommentSerializer
def get_queryset(self):
# Retrieve the 'post_id' from the URL kwargs. This ID is used to filter comments related to a specific post.
post_id = self.kwargs.get("post_id")
# Fetch comments related to the given post ID using the CommentService. The repository layer handles actual data fetching.
return CommentService.get_comments_by_post_id(post_id)
def perform_create(self, serializer):
# Retrieve the 'post_id' from the URL kwargs. This ID is used to associate the new comment with a specific post.
post_id = self.kwargs.get("post_id")
# Create a new comment for the specified post using the CommentService. The service layer handles data manipulation.
CommentService.create_comment(serializer.validated_data, post_id)
class CommentRetrieveUpdateDestroyAPIView(generics.RetrieveUpdateDestroyAPIView):
serializer_class = CommentSerializer
def get_object(self):
# Retrieve the 'post_id' and 'comment_pk' from the URL kwargs.
# 'post_id' is used to ensure the comment belongs to the post, while 'comment_pk' identifies the specific comment.
post_id = self.kwargs.get("post_id")
comment_id = self.kwargs.get("comment_pk")
# Fetch the specific comment for the given post ID and comment ID using the CommentService.
# Raise a 404 error if the comment is not found.
comment = CommentService.get_comment_by_post_and_id(post_id, comment_id)
if comment is None:
            raise NotFound("Comment not found")
return comment
def perform_update(self, serializer):
# Retrieve the 'comment_pk' from the URL kwargs for updating the specific comment.
comment_id = self.kwargs["comment_pk"]
# Update the specified comment using the CommentService. The service layer handles data manipulation.
CommentService.update_comment(serializer.validated_data, comment_id)
def perform_destroy(self, instance):
# Retrieve the 'comment_pk' from the URL kwargs for deleting the specific comment.
comment_id = self.kwargs["comment_pk"]
# Delete the specified comment using the CommentService. The service layer handles data manipulation.
CommentService.delete_comment(comment_id)
```
* **In the file** `/my_project_blog/apps/comments/views/web_views.py`:
```python
from django.views.generic import ListView, DetailView
from ..models import Comment
from ..services.comment_service import CommentService
class CommentListView(ListView):
model = Comment
template_name = "comments/comment_list.html"
context_object_name = "comments"
def get_queryset(self):
# Extract 'post_id' from URL parameters to fetch comments associated with a specific post.
post_id = self.kwargs.get("post_id")
# Use the CommentService to retrieve comments for the specified post.
# The service layer handles data access logic, keeping the view simple.
return CommentService.get_comments_by_post_id(post_id)
def get_context_data(self, **kwargs):
# Get the default context from the parent class and add additional context for 'post_id'.
context = super().get_context_data(**kwargs)
# Include 'post_id' in the context so it can be used in the template.
# This is necessary for rendering links or forms related to the post.
context["post_id"] = self.kwargs.get("post_id")
return context
class CommentDetailView(DetailView):
model = Comment
template_name = "comments/comment_detail.html"
context_object_name = "comment"
def get_object(self, queryset=None):
# Extract 'post_id' and 'comment_id' from URL parameters to retrieve a specific comment for a post.
post_id = self.kwargs.get("post_id")
comment_id = self.kwargs.get("comment_id")
# Use the CommentService to retrieve the specific comment based on 'post_id' and 'comment_id'.
# If the comment is not found, `CommentService.get_comment_by_post_and_id` will return None.
# In this case, a 404 error will be raised automatically by the `DetailView` if the object is not found.
return CommentService.get_comment_by_post_and_id(post_id, comment_id)
```
3. ### Repositories
Repositories handle data persistence and encapsulate data access logic. They are defined in separate files (`repositories.py`) within each application.
**For the Post app:**
* **In the file** `/my_project_blog/apps/posts/repositories/post_repository.py`:
```python
from ..models import Post
class PostRepository:
# Fetch all posts from the database
@staticmethod
def get_all_posts():
return Post.objects.all()
    # Fetch a specific post by its primary key (ID); return None when it does not
    # exist, matching the None check in PostRetrieveUpdateDestroyAPIView.get_object
    @staticmethod
    def get_post_by_id(post_id):
        try:
            return Post.objects.get(pk=post_id)
        except Post.DoesNotExist:
            return None
# Create a new post with the provided data
@staticmethod
def create_post(data):
return Post.objects.create(**data)
# Update an existing post with the provided data
@staticmethod
def update_post(data, post_id):
post = Post.objects.get(pk=post_id)
for attr, value in data.items():
setattr(post, attr, value) # Dynamically update each attribute
post.save()
return post
# Delete a post by its primary key (ID)
@staticmethod
def delete_post(post_id):
post = Post.objects.get(pk=post_id)
post.delete()
```
**For the Comments app:**
* **In the file:** `/my_project_blog/apps/comments/repositories/comment_repository.py`:
```python
from ..models import Comment


class CommentRepository:
    @staticmethod
    def get_comments_by_post_id(post_id):
        # Retrieve all comments associated with a specific post_id
        return Comment.objects.filter(post_id=post_id)

    @staticmethod
    def get_comment_by_post_and_id(post_id, comment_id):
        # Retrieve a specific comment by post_id and comment_id
        try:
            return Comment.objects.get(post_id=post_id, id=comment_id)
        except Comment.DoesNotExist:
            # Return None if no comment is found
            return None

    @staticmethod
    def create_comment(data, post_id):
        # Create a new comment associated with a specific post_id
        return Comment.objects.create(post_id=post_id, **data)

    @staticmethod
    def update_comment(data, comment_id):
        # Update an existing comment identified by comment_id
        comment = Comment.objects.get(pk=comment_id)
        for attr, value in data.items():
            setattr(comment, attr, value)
        comment.save()
        return comment

    @staticmethod
    def delete_comment(comment_id):
        # Delete an existing comment identified by comment_id
        comment = Comment.objects.get(pk=comment_id)
        comment.delete()
```
4. ### Services
Services contain the business logic and act as intermediaries between views and repositories. They are defined in separate files (`services.py`) within each application.
**For the Post app:**
* **In the file:** `/my_project_blog/apps/posts/services/post_service.py`:
```python
from ..repositories.post_repository import PostRepository


class PostService:
    @staticmethod
    def get_all_posts():
        # Retrieve all posts from the repository
        return PostRepository.get_all_posts()

    @staticmethod
    def get_post_by_id(post_id):
        # Retrieve a post by its ID from the repository
        return PostRepository.get_post_by_id(post_id)

    @staticmethod
    def create_post(data):
        # Create a new post with the given data
        return PostRepository.create_post(data)

    @staticmethod
    def update_post(data, post_id):
        # Update an existing post with the given data
        return PostRepository.update_post(data, post_id)

    @staticmethod
    def delete_post(post_id):
        # Delete a post by its ID from the repository
        return PostRepository.delete_post(post_id)
```
**For the Comments app:**
* **In the file:** `/my_project_blog/apps/comments/services/comment_service.py`:
```python
from ..repositories.comment_repository import CommentRepository


class CommentService:
    @staticmethod
    def get_comments_by_post_id(post_id):
        # Delegate the task of retrieving comments by post_id to the repository layer
        return CommentRepository.get_comments_by_post_id(post_id)

    @staticmethod
    def get_comment_by_post_and_id(post_id, comment_id):
        # Delegate the task of retrieving a specific comment by post_id and comment_id to the repository layer
        return CommentRepository.get_comment_by_post_and_id(post_id, comment_id)

    @staticmethod
    def create_comment(data, post_id):
        # Delegate the task of creating a new comment to the repository layer
        return CommentRepository.create_comment(data, post_id)

    @staticmethod
    def update_comment(data, comment_id):
        # Delegate the task of updating an existing comment to the repository layer
        return CommentRepository.update_comment(data, comment_id)

    @staticmethod
    def delete_comment(comment_id):
        # Delegate the task of deleting a comment to the repository layer
        return CommentRepository.delete_comment(comment_id)
```
5. ### Serializers
Serializers convert complex data into native data formats (JSON) and vice versa. They are a key part of building REST APIs with DRF and are not typically needed in a plain Django project.
**For the Post app:**
* **In the file:** `/my_project_blog/apps/posts/serializers.py`:
```python
from rest_framework import serializers

from .models import Post


# Serializer for the Post model.
# This class is responsible for converting Post instances into JSON data and
# validating incoming data for creating or updating posts.
class PostSerializer(serializers.ModelSerializer):
    # Meta class specifies the model and fields to be used by the serializer.
    class Meta:
        # The model associated with this serializer; it tells DRF which model
        # the serializer will be handling.
        model = Post
        # Include all fields from the model in the serialization and
        # deserialization process.
        fields = "__all__"
```
**For the Comments app:**
* **In the file:** `/my_project_blog/apps/comments/serializers.py`:
```python
from rest_framework import serializers

from .models import Comment


# CommentSerializer is a ModelSerializer that automatically creates fields and
# methods for the Comment model.
class CommentSerializer(serializers.ModelSerializer):
    class Meta:
        model = Comment  # The model that this serializer will be based on.
        fields = "__all__"  # Automatically include all fields from the Comment model.
```
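Stripped of DRF, the core idea behind serialization is just object to JSON and back. A plain-Python sketch of that round trip (field names here are made up for illustration, and a dict stands in for a model instance):

```python
import json

# A dict stands in for a model instance; DRF's ModelSerializer automates this mapping.
post = {"id": 1, "title": "Hello", "content": "First post"}

# Serialization: Python object -> JSON string (what the API returns).
payload = json.dumps(post)

# Deserialization: JSON string -> Python object (what the API receives and validates).
restored = json.loads(payload)
assert restored == post
```

DRF adds the validation, field typing, and model mapping on top of this basic conversion.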
6. ### Configuration of URLs
URLs are organised into two separate files for the API and web views, allowing a clear separation between accessing data via the API and presenting data via web views. This organisation facilitates code management and maintenance by distinguishing between paths that serve JSON data and paths that present HTML views.
API routes are defined only in the `Posts` application to centralise the management of resources related to posts and their comments. By not giving the `Comments` application its own `urls.py` file, redundancy is reduced and the project structure is simplified by grouping all related API paths in one place. This makes it easier to understand the data flow and the relationship between posts and comments, promoting a coherent, maintainable architecture that follows REST best practices for nested resources.
**URLs API REST:**
* **In the file:** `/my_project_blog/apps/posts/urls/api_urls.py`:
```python
from django.urls import path

from ..views.api_views import PostListCreateAPIView, PostRetrieveUpdateDestroyAPIView
from ...comments.views.api_views import (
    CommentListCreateAPIView,
    CommentRetrieveUpdateDestroyAPIView,
)

urlpatterns = [
    # Route for listing all posts or creating a new post
    path("", PostListCreateAPIView.as_view(), name="post-list-create"),
    # Route for retrieving, updating, or deleting a specific post by post_id
    path(
        "<int:post_id>/",
        PostRetrieveUpdateDestroyAPIView.as_view(),
        name="post-retrieve-update-destroy",
    ),
    # Route for listing all comments for a specific post or creating a new comment for that post
    path(
        "<int:post_id>/comments/",
        CommentListCreateAPIView.as_view(),
        name="post-comment-create",
    ),
    # Route for retrieving, updating, or deleting a specific comment by comment_pk for a specific post
    path(
        "<int:post_id>/comments/<int:comment_pk>/",
        CommentRetrieveUpdateDestroyAPIView.as_view(),
        name="post-comment-retrieve-update-destroy",
    ),
]
```
**URLs web:**
* **In the file:** `/my_project_blog/apps/posts/urls/web_urls.py`:
```python
from django.urls import path

from ..views.web_views import PostListView, PostDetailView
from ...comments.views.web_views import CommentListView, CommentDetailView

urlpatterns = [
    # Route for listing all posts.
    # The URL is "web/posts/", and it maps to PostListView to display a list of all posts.
    path("", PostListView.as_view(), name="post-list"),
    # Route for displaying details of a specific post identified by post_id.
    # The URL is "web/posts/<post_id>/", and it maps to PostDetailView to display details of a single post.
    path("<int:post_id>/", PostDetailView.as_view(), name="post-detail"),
    # Route for listing all comments for a specific post identified by post_id.
    # The URL is "web/posts/<post_id>/comments", and it maps to CommentListView to display a list of comments for the given post.
    path(
        "<int:post_id>/comments",
        CommentListView.as_view(),
        name="post-comments",
    ),
    # Route for displaying details of a specific comment identified by comment_id for a specific post.
    # The URL is "web/posts/<post_id>/comments/<comment_id>/", and it maps to CommentDetailView to display details of a single comment.
    path(
        "<int:post_id>/comments/<int:comment_id>/",
        CommentDetailView.as_view(),
        name="post-comment-detail",
    ),
]
```
**General project URLs:**
* **In the file:** `/my_project_blog/my_project_blog/urls.py`:
```python
from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path("admin/", admin.site.urls),
    path("posts/", include("apps.posts.urls.web_urls")),
    path("api/posts/", include("apps.posts.urls.api_urls")),
]
```
---
## Web application and API testing
Now, in the third section of this blog, we will create some posts and add several comments to each one to verify that the web application, the API URLs, and the web URLs all work correctly.
1. ### Execute the project
- **Create migrations:**
```bash
python manage.py makemigrations
```
- **Apply migrations:**
```bash
python manage.py migrate
```
- **Run the Django development server:**
```bash
python manage.py runserver
```
***<mark>Note:</mark>*** Open your browser and navigate to `http://127.0.0.1:8000/`. From now on, the URLs I give you are relative to that domain (in this case our `localhost`). For example, if I give you the URL `/api/posts/`, in the browser you will enter `http://127.0.0.1:8000/api/posts/`.
2. **Creation of posts**
1. **Access the Django REST API interface:**
* **URL:**`/api/posts/`
* **Action:** Fill in the form with the details of the post to be created and click on the "POST" button.

3. ### **Add comments to posts:**
1. **Access the Django REST API interface:**
* **URL:**`/api/posts/{post_id}/comments/`
* **Action:** Fill in the form with the details of the comment to be created for a specific post and click on the "POST" button.

You can repeat these steps to create as many posts and comments as you want.
Also, in this same Django REST API interface you can edit or delete posts and comments.
4. ### Verify the creation of posts and comments:
Having already created several posts, and comments on some of them, via the REST API, we can now view them through the web URLs that render the HTML templates.
To see the same design when you visit the URLs below, check the HTML code of each template in the project's HTML files.
[**Full project code on GitHub**](https://github.com/MateoRamirezRubio1/mini-blog-rest-api): [*<mark>Click</mark>*](https://github.com/MateoRamirezRubio1/mini-blog-rest-api).
1. **See the list of posts:**
* **URL:**`/posts/`

2. **See the details of a specific post:**
* **URL:**`/posts/{post_id}/`

3. **See comments on a specific post:**
* **URL:**`/posts/{post_id}/comments/`

4. **View the details of a comment on a specific post:**
* **URL:**`/posts/{post_id}/comments/{comment_id}/`

---
## Conclusions
### Benefits of the Services and Repositories Pattern
1. **Separation of Responsibilities:**
Separation of responsibilities is one of the fundamental principles in software design. In the `Services` and `Repositories` pattern, we clearly separate data access concerns (repositories) from business logic (services).
**Benefits:**
* **Clarity and Organisation:** The code is organised so that each class and function has a single, clearly defined responsibility. This makes the code easier to understand and maintain.
* **Maintainability:** When the business logic changes, only the services need to be modified, while the repositories remain unchanged. This reduces the number of changes and the risk of introducing errors.
* **Team Collaboration:** In a development team, different members can work at different layers (e.g. one on services and one on repositories) without interfering with each other, facilitating collaboration and integration.
2. **Re-use of the Code:**
The pattern encourages code reuse by centralising business logic and data access in specific classes. This avoids code duplication and facilitates code reuse in different parts of the application.
**Benefits:**
* **Duplication Reduction:** By having business logic in services and data access in repositories, we avoid repeating code in multiple places, making the code more DRY (Don't Repeat Yourself).
* **Consistency:** Reusing the same code in different parts of the application ensures that the same logic and rules are followed everywhere, which increases consistency and reduces errors.
* **Ease of Update:** If you need to change the way data is accessed or business logic is implemented, you only need to update the corresponding repository or service. The other parts of the application that depend on them will automatically benefit from the changes.
3. **Ease of Testing:**
The separation of business logic and data access logic facilitates the creation of unit and integration tests. Services and repositories can be tested in isolation, which simplifies the testing process.
**Benefits:**
* **Unit testing:** By having business logic in services and data access in repositories, it is easier to create unit tests for each component independently. This makes it possible to quickly detect errors and ensure that each part works correctly.
* **Mocks and Stubs:** During testing, it is easy to use mocks and stubs to simulate the behaviour of repositories or services. This allows testing business logic without relying on the database or other external services.
* **Reduced Testing Complexity:** By having clear and well-defined responsibilities, testing becomes less complex and more specific. This improves test coverage and software reliability.
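To make the mocking point concrete, here is a minimal sketch using `unittest.mock` with stand-in classes that mirror the service/repository pair shown earlier (this is illustrative, not the project's actual test code):

```python
from unittest import mock

# Minimal stand-ins mirroring the repository/service pair from this project.
class PostRepository:
    @staticmethod
    def get_post_by_id(post_id):
        raise RuntimeError("would hit the database")

class PostService:
    @staticmethod
    def get_post_by_id(post_id):
        return PostRepository.get_post_by_id(post_id)

# Patch the repository so the service's logic is exercised without a database.
with mock.patch.object(
    PostRepository, "get_post_by_id", return_value={"id": 1, "title": "Hello"}
):
    post = PostService.get_post_by_id(1)
    assert post["title"] == "Hello"
```

Because the service only talks to the repository's interface, the patch isolates the business logic completely from persistence.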
4. **Maintainability and Extensibility:**
The `Services` and `Repositories` pattern makes code more maintainable and extensible. This is especially important in long-term projects, where requirements may change over time.
**Benefits:**
* **Code Evolution:** When you need to add new functionality, it is easy to extend existing services and repositories without affecting the rest of the application. This allows the code to evolve in a controlled and safe way.
* **Easy Refactoring:** If you identify an improvement in the code structure or business logic implementation, it is easy to refactor services and repositories without great risk. The separation of responsibilities makes it easy to identify which parts of the code need to be changed.
* **Adaptability to New Technologies:** If at some point you decide to change the persistence technology (for example, from a SQL database to a NoSQL one), you only need to modify the repositories without affecting the business logic implemented in the services.
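As a sketch of that adaptability, a repository with the same interface but a different backend can replace the database-backed one without touching the service (all names here are illustrative, not from the project):

```python
# A database-free repository that honors the same interface as PostRepository.
class InMemoryPostRepository:
    _posts = {}
    _next_id = 1

    @classmethod
    def create_post(cls, data):
        post = {"id": cls._next_id, **data}
        cls._posts[cls._next_id] = post
        cls._next_id += 1
        return post

    @classmethod
    def get_post_by_id(cls, post_id):
        return cls._posts[post_id]

# The service depends only on the interface, so swapping backends is one line.
class PostService:
    repository = InMemoryPostRepository

    @classmethod
    def create_post(cls, data):
        return cls.repository.create_post(data)

    @classmethod
    def get_post_by_id(cls, post_id):
        return cls.repository.get_post_by_id(post_id)

created = PostService.create_post({"title": "Hello"})
assert PostService.get_post_by_id(created["id"])["title"] == "Hello"
```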
---
Finally, in closing, I thank you for your time and attention in following this guide to the Services and Repositories design pattern in a Django project with the Django REST Framework. I hope this detailed explanation has given you a solid understanding of how to structure your applications in an efficient and maintainable way.
Implementing this pattern will not only improve the organization of your code, but also facilitate its evolution and scalability as your project grows. If you have any additional questions or opinions on the topic, feel free to post them in the comments; they are open for discussion.
Good luck with your development and with your future projects!
See you next time! | mateoramirezr |
1,916,102 | Common Lisp VS C: a testimony | I like testimonies. Here's one on Lisp vs C. About execution time, speed of development, length of... | 0 | 2024-07-08T16:58:40 | https://dev.to/vindarel/common-lisp-vs-c-a-testimony-42ga | lisp, commonlisp, c, programming | _I like testimonies. Here's one on Lisp vs C. About execution time, speed of development, length of programs, ease of development._
---
I find SBCL produces highly performant code, and is even faster with a small number of well-placed type declarations. I have a Lisp vs C story: I'm a mathematician and was doing some research on the Cops and Robbers game in graph theory with a friend of mine who is a computer science professor and has worked in the past as a professional programmer. We needed some data on which graphs have winning strategies for the cops and decided to independently write code to compute them so we could vet the results against each other.
I wrote my code in Common Lisp and ran it with SBCL; he wrote his code in C.
My program was 500 lines and his was 4,000 lines. My program was faster than his and it's no mystery why: I tried several different optimizations my friend also thought of, but didn't implement because it would have been another 1,000 lines or so. I also find my program much more readable than his, just because of the length: even if you are 5 times as fast at reading C than Lisp, his program is still 8 times as long!
---
by @oantolin@mathstodon.xyz on Mastodon (https://framapiaf.org/@oantolin@mathstodon.xyz/112746475805471440)
and... that's it o/ | vindarel |
1,916,103 | Evotto: Drive Your Dreams - Transforming Journeys into Unforgettable Adventures | Embarking on the journey of "Evotto" -an evolution to automobile , offers an opportunity to connect... | 0 | 2024-07-08T17:00:01 | https://dev.to/evotto_official/evotto-drive-your-dreams-transforming-journeys-into-unforgettable-adventures-jg2 | Embarking on the journey of "Evotto" -an evolution to automobile , offers an opportunity to connect with transport readers by making them feel the freedom and excitement of exploring new destinations in the comfort of our rental vehicles. We engage our audience by sharing vivid ways, highlighting unique vehicle features, and offering expert travel tips that resonate with their desires and needs. The key is to create an immersive experience that speaks directly to their wanderlust, using evocative option that ignites their imagination. By focusing on the emotions and experiences our services facilitate, you transform mundane transactions into memorable adventures. Let our automobile evolution be a beacon, guiding potential customers through the vast sea of rental options to your doorstep, promising an unforgettable journey. | evotto_official | |
1,916,105 | Try This 3-1: Build a Help System | This project builds a simple help system that displays the syntax of the control... | 0 | 2024-07-09T22:02:55 | https://dev.to/devsjavagirls/tente-isso-3-1-construa-um-sistema-de-ajuda-45ng | java | This project builds a simple help system that displays the syntax of Java's control statements. The program displays a menu containing the control statements and then waits for one to be selected. After the selection, the syntax of that statement is displayed. In this first version of the program, help is available only for the if and switch statements. The other control statements will be added in subsequent projects.
1. Create a file called Help.java
2. The program starts by displaying the following menu:
Help on:
1. if
2. switch
Choose one:
To display it, you will use the sequence of statements shown here:
System.out.println("Help on:");
System.out.println(" 1. if");
System.out.println(" 2. switch");
System.out.print("Choose one: ");
3. Next, the program reads the user's selection by calling System.in.read():
choice = (char) System.in.read();
4. Once the selection has been read, the program uses the switch statement shown below to display the syntax of the selected statement.

Note how the default clause catches invalid choices. For example, if the user enters 3, there is no matching case constant, so the default sequence executes.
5. Here is the entire listing of the Help.java program:

6. Result:
Help on:
1. if
2. switch
Choose one: 1
The if:
if(condition) statement;
else statement;
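Since the full listing above appears only as an image, here is a reconstruction in Java based on the steps described (the helper method `helpText` and the exact help strings are assumptions; the switch help text is abbreviated):

```java
import java.io.IOException;

// Hypothetical reconstruction of Help.java from the steps above; the exact
// strings and layout in the original listing (shown as an image) may differ.
public class Help {
    // Returns the syntax help for a menu choice; the default case catches
    // invalid choices such as '3'.
    static String helpText(char choice) {
        switch (choice) {
            case '1':
                return "The if:\n\nif(condition) statement;\nelse statement;";
            case '2':
                return "The switch:\n\nswitch(expression) {\n  case constant:\n"
                     + "    statement sequence\n    break;\n  // ...\n}";
            default:
                return "Selection not found.";
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("Help on:");
        System.out.println("  1. if");
        System.out.println("  2. switch");
        System.out.print("Choose one: ");
        char choice = (char) System.in.read();
        System.out.println("\n" + helpText(choice));
    }
}
```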
| devsjavagirls |
1,916,105 | Gin and router example | Install Gin with the following command: go get -u github.com/gin-gonic/gin Enter... | 0 | 2024-07-08T17:34:09 | https://dev.to/hieunguyendev/gin-and-router-example-2939 | go, gin, beginners, backend |

- Install Gin with the following command:
```
go get -u github.com/gin-gonic/gin
```
- After installation, we proceed to code in the “main.go” file with a simple function as follows:
```
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default()
	r.GET("/ping", func(c *gin.Context) {
		c.JSON(http.StatusOK, gin.H{
			"message": "pong",
		})
	})
	r.Run() // listen and serve on 0.0.0.0:8080 (for windows "localhost:8080")
}
```
- Then run the main file with the following command:
```
go run cmd/server/main.go
```
**Note:** Depending on your folder organization, the command to run the main.go file may differ from mine. I have introduced how to organize the structure in the article: [here](https://dev.to/hieunguyendev/backend-project-structure-go-1ph8)
- After running the above command, we will receive the following result:

- Great, let's try calling the ping endpoint with the following command:
```
curl http://localhost:8080/ping
```
- The result received will be:

- Now, let's split the routes into a simple router group:
```
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default()
	v1 := r.Group("/v1")
	{
		v1.GET("/ping", Pong) // curl http://localhost:8080/v1/ping
	}
	r.Run() // listen and serve on 0.0.0.0:8080 (for windows "localhost:8080")
}

func Pong(c *gin.Context) {
	c.JSON(http.StatusOK, gin.H{
		"message": "pong",
	})
}
```
**Note:** After changing the code as above, remember to run the command: "go run cmd/server/main.go" to apply the new code.
- Next, let's try running the new router with the following command:
```
curl http://localhost:8080/v1/ping
```
- The result received will be:

- Now, let's read a path parameter with `Param`:
```
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default()
	v1 := r.Group("/v1")
	{
		v1.GET("/ping/:name", Pong) // curl http://localhost:8080/v1/ping/hieunguyen
	}
	r.Run() // listen and serve on 0.0.0.0:8080 (for windows "localhost:8080")
}

func Pong(c *gin.Context) {
	name := c.Param("name")
	c.JSON(http.StatusOK, gin.H{
		"message": "pong:::: " + name,
	})
}
```
**Note:** After changing the code as above, remember to run the command: "go run cmd/server/main.go" to apply the new code.
- Try to get the param with the following command:
```
curl http://localhost:8080/v1/ping/hieunguyen
```
- The result received will be:

- Next, we will read a parameter from the query string with `Query`:
```
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default()
	v1 := r.Group("/v1")
	{
		v1.GET("/ping", Pong) // curl http://localhost:8080/v1/ping?id=12345
	}
	r.Run() // listen and serve on 0.0.0.0:8080 (for windows "localhost:8080")
}

func Pong(c *gin.Context) {
	id := c.Query("id")
	c.JSON(http.StatusOK, gin.H{
		"message": "pong:::: " + id,
	})
}
```
**Note:** After changing the code as above, remember to run the command: "go run cmd/server/main.go" to apply the new code.
- Try to get the query with the following command:
```
curl http://localhost:8080/v1/ping\?id=12345
```
- The result received will be:

You can directly refer to Gin on GitHub with the following link:[here](https://github.com/gin-gonic/gin)
**Give me a reaction and a follow for motivation** 😘
**If you found this article useful and interesting, please share it with your friends and family. I hope you found it helpful. Thanks for reading** 🙏
Let's get connected! You can find me on:
- Medium: [Quang Hieu (Bee) -- Medium](https://quanghieudev.medium.com/)
- Dev: [Quang Hieu (Bee) -- Dev](https://dev.to/hieunguyendev)
- Linkedin: [Quang Hieu (Bee) -- Linkedin](https://www.linkedin.com/in/hiếu-nguyễn-1a38132bb/)
- Buy Me a Coffee: [Quang Hieu (Bee) -- buymeacoffee](https://buymeacoffee.com/quanghieudev) | hieunguyendev |
1,916,107 | Python_In_Tamil-001 | Dear All, I am Govindarajan from Thanjavur. I teach Basic Python through OnLine. I am a PCEP and... | 0 | 2024-07-08T17:06:10 | https://dev.to/govi1964/pythonintamil-001-54f | python, learning, programming, basic | Dear All,
I am Govindarajan from Thanjavur. I teach basic Python online, and I hold the PCEP and PCAP certifications.
Ok, let us start learning Basic Python.
1. Install Python from python.org, choosing the build that matches your computer system.
2. On Windows, type cmd in the search bar and open the Command Prompt. Type python at the cursor and press Enter. If you see Python along with its version number, Python is installed on your system. If not, go back to step 1 above.
3. Then, in the search bar, type IDLE. Right-click IDLE and choose Run as administrator. A new window titled "IDLE Shell" with the Python version number will open. This window is called the Shell window or Console window.
4. In the Shell window, click File => New File. Another window titled "untitled" will open. This window is called the Editor window. We will write all our code/programs in this Editor window, and it will be saved as a Python file (.py extension).
5. Type print('Hello World'), save the file as BP001 (Ctrl + S), click Run, then Run Module F5 (or Fn + F5). The code will execute and you will see Hello World in your Shell window.
6. Great! You have successfully completed your first program in Python. Congratulations.
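The whole program from step 5 is a single line:

```python
# BP001.py - the first program: print a greeting to the console.
print('Hello World')
```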
Let us meet tomorrow in 'Python_In_Tamil-002'.
Thanks and Regards.
Govi.
https://www.youtube.com/@Python_In_Tamil | govi1964 |
1,916,108 | Git User Credentials | So i always wondered why i do a git push to a certain repository but notice that i see a different... | 0 | 2024-07-08T17:09:21 | https://dev.to/debuggingrabbit/git-credentials-23fc | So I always wondered why I would do a git push to a certain repository but see a different user as the committer, and not the account I cloned from. So the problem is this: I clone from repo A with username A1, but while doing a git push I noticed that the commit was made by username B1. Even though both accounts belong to me, the commit should have been made by username A1.
The issue is that the Git credential on your local machine was set globally, so the terminal uses it for commits. So how do we change that...
Navigate to the git directory
```
cd <git directory>
```
Check the credentials list to see the defaults:
```
git config --list
```
Here you can see the global username and email.
All you need to do is add a local username. In our case, that's the username and email of A1:
```
git config user.name "<username>"
git config user.email "<email>"
```
If you want to override the default and make A1 the global default:
```
git config --global user.name "<username>"
git config --global user.email "<email>"
```
To switch to a different user credential, run the following commands with the desired user name and email:
```
git config user.name "New Name"
git config user.email "new.email@example.com"
```
If you want to switch to a global user credential, add the --global flag to the commands:
```
git config --global user.name "New Name"
git config --global user.email "new.email@example.com"
```
Remember that the user credential settings are stored in the .git/config file within your local repository. If you want to view or edit this file directly, you can open it with a text editor.
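A quick way to see which file each setting comes from is git's `--show-origin` flag:

```shell
# Show every config value together with the file it was read from,
# making it obvious whether user.name comes from the local or global config.
git config --list --show-origin
```

Values listed from `.git/config` are local overrides; values from your home directory's `.gitconfig` are the global defaults.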
What does this solution solve? It can fix some SSH issues, wrong-user commits, and some serious headaches, lol.
1,916,109 | Dockerizing a Laravel App: Nginx, MySql, PhpMyAdmin, and Php-8.2 | What is Docker? Docker is an open-source platform that enables developers to automate the... | 0 | 2024-07-08T18:13:47 | https://dev.to/kamruzzaman/dockerizing-a-laravel-app-nginx-mysql-phpmyadmin-and-php-82-43ne | laravel, nginx, mysql, phpmyadmin |

**What is Docker?**
Docker is an open-source platform that enables developers to automate the deployment, scaling, and management of applications using containerization. Containers package an application and its dependencies into a single, lightweight unit that runs consistently across different computing environments.
**Why Do We Need Docker?**
1. Consistency Across Environments: Docker ensures that your application runs the same way regardless of where it’s deployed — whether on a developer’s local machine, a testing server, or in production. This eliminates the “it works on my machine” problem.
2. Isolation: Containers encapsulate everything needed to run an application, ensuring that dependencies, libraries, and configurations are isolated from other applications. This isolation prevents conflicts between different applications on the same host.
3. Scalability: Docker makes it easy to scale applications horizontally by running multiple container instances. This can be managed dynamically based on load and demand, leading to better resource utilization and performance.
4. Efficiency: Containers are lightweight and share the host system’s kernel, making them more efficient than traditional virtual machines (VMs) which include a full operating system.
5. Rapid Deployment: Docker containers can be quickly created, started, stopped, and destroyed. This rapid provisioning accelerates development, testing, and deployment cycles.
6. Portability: Containers can run on any system that supports Docker, providing great flexibility in deployment choices, whether on-premises or in the cloud.
### Prerequisite: Laravel App Setup with Docker
**1. Install Docker**
- For Windows: Download and install Docker Desktop from the [Docker website](https://docs.docker.com/build-cloud/).
- For macOS: Download and install Docker Desktop from the [Docker website](https://docs.docker.com/build-cloud/).
- For Linux: Follow the instructions on the [Docker website](https://docs.docker.com/build-cloud/) for your specific Linux distribution.
Ensure Docker is running by executing the following command in your terminal:
```
docker --version
```
**2. Install Docker Compose**
Docker Compose is a tool for defining and running multi-container Docker applications. It’s often used for setting up complex environments with multiple services (like web servers, databases, etc.).
- For Windows and macOS: Docker Compose is included with Docker Desktop.
- For Linux: Install Docker Compose following the instructions on the Docker Compose installation page.
Verify the installation by running:
```
docker-compose --version
```
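As a tiny illustration of what a compose file defines before we build the real one (a generic sketch, not this project's file):

```yaml
# Minimal compose sketch: one web service and one database service.
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"   # host:container
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
```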
### Set Up a Laravel Project
If you don’t have a Laravel project already, you can create a new one. If you already have a Laravel project, skip this step.
```
composer create-project laravel/laravel example-app
cd example-app
```
### Dockerize the Laravel Application
**1. Create a Dockerfile in your root project**
A Dockerfile is a script that contains instructions on how to build a Docker image for your application.
**Dockerfile**
```
FROM php:8.2-fpm-alpine
ARG user
ARG uid
RUN apk update && apk add \
curl \
libpng-dev \
libxml2-dev \
zip \
unzip \
shadow # Add shadow package to install useradd
RUN docker-php-ext-install pdo pdo_mysql \
&& apk --no-cache add nodejs npm
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
#USER root
#RUN chmod 777 -R /var/www/
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
WORKDIR /var/www
USER $user
```
Here we pass two build arguments, user and uid, which are used to create a non-root user with the correct ownership and permissions.
**2. Create a docker-compose folder, and inside it create the folders below**
- mysql
- nginx
  - ssl
- php
- redis
  - data
**Like this way**

Here are other files we will create later.
**3. Create a docker-compose.yml file in your root project**
**docker-compose.yml**
```
version: "3.7"
services:

  ####################################################################################################
  # app
  ####################################################################################################
  app:
    build:
      args:
        user: developer
        uid: 1000
      context: ./
      dockerfile: Dockerfile
    image: app
    container_name: app-rifive-laravel
    restart: unless-stopped
    environment:
      VIRTUAL_HOST: laravel.test
    working_dir: /var/www/
    volumes:
      - ./:/var/www
      - ~/.ssh:/root/.ssh
    depends_on:
      - db
      - redis
    networks:
      - laravel

  ####################################################################################################
  # DATABASE (MySQL)
  ####################################################################################################
  db:
    image: mysql:8.0
    container_name: mysql-rifive-laravel
    restart: unless-stopped
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: ${DB_DATABASE}
      MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
      MYSQL_PASSWORD: ${DB_PASSWORD}
      MYSQL_USER: ${DB_USERNAME}
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - ./docker-compose/mysql/data:/var/lib/mysql
      - ./docker-compose/mysql/logs:/var/log/mysql
      - ./docker-compose/mysql/ql:/docker-entrypoint-initdb.d
    networks:
      - laravel

  ####################################################################################################
  # Nginx
  ####################################################################################################
  nginx:
    image: nginx:alpine
    container_name: nginx-rifive-laravel
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./:/var/www
      - ./docker-compose/nginx:/etc/nginx/conf.d
      - ./docker-compose/nginx/ssl:/etc/nginx/conf.d/ssl
      - ./docker-compose/nginx/phpmyadmin.conf:/etc/nginx/conf.d/phpmyadmin.conf
    networks:
      - laravel

  ####################################################################################################
  # phpMyAdmin
  ####################################################################################################
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: phpmyadmin-rifive-laravel
    ports:
      - 8080:80
    links:
      - db
    restart: unless-stopped
    environment:
      PMA_HOST: db
      #PMA_USER: ${DB_USERNAME}
      #PMA_PASSWORD: ${DB_PASSWORD}
      PMA_PORT: 3306
      PMA_ARBITRARY: 1
    networks:
      - laravel

  ####################################################################################################
  # Redis
  ####################################################################################################
  redis:
    image: "redis:alpine"
    container_name: ri-rifive-redis
    restart: unless-stopped
    volumes:
      - ./docker-compose/redis/data:/data
    ports:
      - "6379:6379"
    networks:
      - laravel

networks:
  laravel:
    driver: bridge
```
Here we define five services:
1. Laravel App
2. Database MySQL
3. Nginx
4. PhpMyAdmin
5. Redis
**4. Create the necessary Nginx config files. Go to the docker-compose/nginx folder and create these files**
**laravel.conf**
```
server {
    listen 80;
    server_name laravel.test;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name laravel.test;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;
    ssl_certificate /etc/nginx/conf.d/ssl/self-signed.crt;
    ssl_certificate_key /etc/nginx/conf.d/ssl/self-signed.key;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
```
**phpmyadmin.conf**
```
server {
    listen 80;
    server_name phpmyadmin.laravel.test;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name phpmyadmin.laravel.test;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /usr/share/nginx/html;
    ssl_certificate /etc/nginx/conf.d/ssl/self-signed.crt;
    ssl_certificate_key /etc/nginx/conf.d/ssl/self-signed.key;

    location / {
        proxy_pass http://phpmyadmin:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
## Now run the Docker containers
```
docker compose up --build
```
**When the build is complete, run this command**
```
docker compose exec -it app sh
```

You now have a shell inside the container. If you run the `whoami` command, you should see `developer` as the user. Next, run the commands below:
```
cd docker-compose/nginx/ssl/
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout self-signed.key -out self-signed.crt
```
This command creates two files in your docker-compose/nginx/ssl folder.
1. self-signed.crt
2. self-signed.key
Like this picture

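If you prefer to skip the interactive prompts, the certificate can also be generated non-interactively with `-subj`. A sketch, assuming the same file names as above (the `CN` value is just an example):

```shell
# Generate a self-signed cert without prompts, then inspect it.
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
  -keyout self-signed.key -out self-signed.crt -subj "/CN=laravel.test"
# Verify the subject and validity dates of what was generated.
openssl x509 -in self-signed.crt -noout -subject -dates
```

Run it inside `docker-compose/nginx/ssl/` so the files land where the Nginx config expects them.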
Now run these commands
```
cd /var/www
php artisan migrate
```
Now exit the shell, and your application will be ready to run. Open these URLs:
**Laravel App**
```
https://localhost
```
**PhpMyAdmin**
```
http://localhost:8080/
```
That’s all. Happy learning! :)
If this was helpful, consider giving the repository a star 😇:
https://github.com/kamruzzamanripon/docker-laravel-nginx
| kamruzzaman |
1,916,115 | Internal Error: No such file or directory @ rb_sysopen - /box/script.js when using Judge0 API | I'm encountering an issue while trying to execute code using the Judge0 API. The response I receive... | 0 | 2024-07-08T17:21:52 | https://dev.to/nischal_kshaj_f2c1d595ea/internal-error-no-such-file-or-directory-rbsysopen-boxscriptjs-when-using-judge0-api-2la4 | I'm encountering an issue while trying to execute code using the Judge0 API. The response I receive from the API is as follows:
```json
{
  "stdout": null,
  "time": null,
  "memory": null,
  "stderr": null,
  "token": "4957159e-0921-44fa-82a6-b9d4b202f276",
  "compile_output": null,
  "message": "No such file or directory @ rb_sysopen - /box/script.js",
  "status": {
    "id": 13,
    "description": "Internal Error"
  }
}
```
**Setup**
I set up Judge0 using the provided .config and docker-compose.yml files. There were no additional files included in the configuration.
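For reference, a minimal submission body for this endpoint looks roughly like the sketch below (field names per the Judge0 CE API; `language_id` 63 is JavaScript/Node.js in the default configuration — the values here are purely illustrative):

```
{
  "language_id": 63,
  "source_code": "console.log('hello')",
  "stdin": ""
}
```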
**Issue**
When I use the following route: `submissions/?base64_encoded=false&wait=false`, I get the error mentioned above. It seems that the Judge0 server cannot find or create the `script.js` file.
| nischal_kshaj_f2c1d595ea |
1,916,116 | Buy verified cash app account | https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash app... | 0 | 2024-07-08T17:22:13 | https://dev.to/gomon87305/buy-verified-cash-app-account-5b2g | webdev, javascript, beginners, programming | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. 
If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. 
How To Buy Verified Cash App Accounts.\n\nFor entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.\n\nWhen it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts. With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.\n\nThis article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.\n\n \n\nIs it safe to buy Cash App Verified Accounts?\nCash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.\n\nUnfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. 
How To Buy Verified Cash App Accounts.\n\nCash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.\n\nLeveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.\n\n \n\nWhy you need to buy verified Cash App accounts personal or business?\nThe Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.\n\nTo address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.\n\nIf you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.\n\nImproper payment practices can lead to potential issues with your employees, as they could report you to the government. 
However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.\n\nA Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.\n\nThis accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.\n\n \n\nHow to verify Cash App accounts\nTo ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.\n\nAs part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. 
Buy verified cash app account.\n\n \n\nHow cash used for international transaction?\nExperience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.\n\nNo matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.\n\nUnderstanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.\n\nAs we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.\n\nOffers and advantage to buy cash app accounts cheap?\nWith Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.\n\nWe deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.\n\nEnhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. 
Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.\n\nTrustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.\n\nHow Customizable are the Payment Options on Cash App for Businesses?\nDiscover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.\n\nExplore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.\n\nDiscover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.\n\nWhere To Buy Verified Cash App Accounts\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. 
It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nThe Importance Of Verified Cash App Accounts\nIn today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.\n\nBy acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.\n\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nConclusion\nEnhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.\n\nChoose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. 
Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com" | gomon87305 |
1,916,117 | Architecting a Secure and Scalable Network with AWS VPCs and Subnets | Building a secure and scalable network in the cloud is critical for any organization that leverages... | 0 | 2024-07-08T17:22:18 | https://dev.to/harshana_vivekanandhan_88/architecting-a-secure-and-scalable-network-with-aws-vpcs-and-subnets-3kcc | webdev, aws, cloudcomputing | Building a secure and scalable network in the cloud is critical for any organization that leverages cloud services. AWS Virtual Private Cloud (VPC) and its associated subnets provide the foundational infrastructure to achieve these goals. This blog post will guide you through the process of architecting a secure and scalable network using AWS VPCs and subnets.
## Project Prerequisites
- An AWS account that is free-tier eligible because we don’t want to spend money on this project.
- Basic knowledge of VPCs, subnets, Network ACLs, routing and security groups.
## My Architecture Diagram

## Creating a custom VPC and Subnets
I will launch all the resources I use in this project in the North Virginia region. If you want to follow along to the last detail, make sure you are also launching your resources in the North Virginia region.
Now let’s create our VPC. Creating a VPC has been made easier as you can create your VPC and subnets, and define route tables and other VPC resources in one go.
Here, change the IPv4 CIDR block, set the number of Availability Zones and subnets to 1, and set both NAT gateways and VPC endpoints to None.


## Configuring Security Groups
To locate the security group console, search for and navigate to the EC2 management console. Once in the EC2 dashboard, go to the Security Groups tab and create a security group as shown in the image below. You can use a site such as whatismyipaddress.com to find your public IP address.
Here, allow SSH from your local computer to the bastion host launched in the public subnet.



## Setting up a Bastion Host
The next step in the project is to launch a bastion host in the public subnet via which we are going to connect to an EC2 instance launched in the private subnet. So let’s get that done. Within your management console, navigate to the EC2 window. To make sure we don’t accrue any cost, we are going to use an AMI that is free-tier eligible.
While launching the EC2 instance, be sure to select the security group we created earlier as its security group. After having filled in all the details, clicking on the Launch instance button launches our bastion host EC2 instance.




## Launch Private EC2 Instance
We need another EC2 instance in our private subnet, which we are going to access over SSH via the bastion host. This instance will use the same key pair and security group as the bastion host in the public subnet.


## SSH to Bastion Host
The time to start testing our work has come. We are going to access our bastion host from our computer. So open a terminal window and run the following commands.
```
chmod 400 /path/to/private/key.pem
```
This command will secure the key pair file that was downloaded when we created our key pair. After that, the next command to be run is:
```
ssh -i path/to/key.pem ec2-user@bastion-public-ip
```
Make sure you edit the command as needed before running it. With that, we will be connected to our bastion host and we can now connect to the private instance via SSH from the bastion host. To do that, run the following command:
```
ssh ec2-user@private-instance-private-ip
```
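One caveat with this second hop: the private key exists only on your local machine, so `ssh ec2-user@private-ip` from the bastion will only authenticate if you forward your agent (e.g. `ssh -A -i path/to/key.pem ec2-user@bastion-public-ip`) or copy the key to the bastion. An alternative is a local `~/.ssh/config` entry using `ProxyJump`, which tunnels through the bastion while authenticating with your local key — a sketch, where the host aliases, IPs, and key path are placeholders:

```
Host bastion
    HostName <bastion-public-ip>
    User ec2-user
    IdentityFile ~/path/to/key.pem

Host private-ec2
    HostName <private-instance-private-ip>
    User ec2-user
    ProxyJump bastion
    IdentityFile ~/path/to/key.pem
```

With this in place, `ssh private-ec2` reaches the private instance in one step.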
That’s all there is to it. Now close the connection to your EC2 instances by running the `exit` command.
| harshana_vivekanandhan_88 |
1,916,118 | python learning-D1 | hi day 1 '''python print(welcome)''' | 0 | 2024-07-08T17:22:22 | https://dev.to/perumal_s_9a6d79a633d63d4/python-learning-d1-mal | hi
day 1
```python
print("welcome")
```
| perumal_s_9a6d79a633d63d4 |
1,916,119 | Git Dual Boot Files Modified Solution | So there is a problem that you might face which is if you use dual boot or switched from windows to... | 0 | 2024-07-08T17:23:26 | https://dev.to/abdulmateenzwl/git-dual-boot-files-modified-solution-32ee | git, linux, microsoft, dualboot | So there is a problem that you might face which is if you use dual boot or switched from windows to linux or vice versa That the files in your git are shown as modified but when you try to pull it says already uptodate.
This issue often occurs due to differences in how Windows and Linux handle line endings in text files. Windows typically uses CRLF (Carriage Return and Line Feed) for line endings, while Linux uses LF (Line Feed) only. Git can detect these differences and mark files as modified even if their contents haven't changed.
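A tiny illustration of the difference (plain Python, no Git required):

```python
# Windows-style vs Unix-style line endings for the same logical content.
windows_text = "hello\r\nworld\r\n"   # CRLF endings
linux_text = "hello\nworld\n"         # LF endings

print(windows_text == linux_text)     # False: the bytes on disk differ

# Normalizing CRLF to LF makes them compare equal again, which is
# essentially what Git's autocrlf settings do at checkout/commit time.
print(windows_text.replace("\r\n", "\n") == linux_text)  # True
```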
On a Windows machine, configure:
> `git config --global core.autocrlf true`
On Ubuntu, configure:
> `git config --global core.autocrlf input`
If the issue still persists:
> `git add --renormalize .`
Lastly, if the issue remains, you can make a commit, which will resolve it. If you do not want to make that commit, just delete the folder and clone it from the remote (GitHub) again.
Thanks
| abdulmateenzwl |
1,916,120 | Transmute negative emotions into personal growth | I had a bad day at work today. Corporate games, lazy management, and unmeaningful work is what got... | 0 | 2024-07-08T22:46:43 | https://dev.to/dellboyan/transmute-negative-emotions-into-personal-growth-4c1p | productivity, learning, programming | I had a bad day at work today. Corporate games, lazy management, and unmeaningful work is what got me. There's no need to go into details but this situation left me with a lot of anger. It did help that I was biking from work so a lot of that steam went out, but it also got me thinking, what's the point of this state? After all, this is just another form of energy that I can convert into something useful, and that's just what I'm doing right now with this post. I think this is the healthiest way to handle these situations that anyone can use and train themselves. Here's the list of common negative emotions with examples how you can transmute them into something useful.
## Anger
Like I mentioned from the first example, anger is one of the most powerful emotions you can use because it's very actionable. Whenever you notice you are feeling mad, just start coding or working on something else. For me, I notice this is when I'm feeling the most productive. It's like your brain is on fire and you just want to crush whatever's in front of you, and that is just free energy you get, so why not use it for something beneficial for you. Ultimate state is when you can call upon this energy whenever you want when you need it, Goggins style.

## Fear
Fear is also a common one. It's important to recognize fear in certain situations; it can come in the form of anxiety, which I usually feel in the gut. Previously I would just ignore it and let it pass, but now I try to analyze why I feel that way and take action. In most cases it's related to fear, and for this emotion the best medicine is action. Scared of speaking with your users? Contact one right away. Afraid of cold calling? Get that phone and call. The point is, fear is just pointing out where you need to grow.
## Current relationship
This is more related to men than women but I'm sure some women can relate too. If you had a fight with your significant other instead of going into those little wars trying to win, try to take the other person's view. You'll be able to recognize your shortcomings and take action instead of just fighting. It's like playing chess with yourself - you might lose, but you'll definitely learn something.
## Previous relationships
Love is one of the most powerful energies there is. If you were in a relationship that broke your heart, don't waste your time with wishful thinking, instead use that energy to better yourself and improve. Hit the gym, learn a new skill, or code that new app nobody will use. Nothing says "I'm over you" like becoming a better version of yourself.
## Greed & Jealousy
These emotions often get a bad rap, but they're just signposts pointing to what you really want. Jealous of your friend's new job? That's your subconscious telling you it's time for a career change. Greedy for more money? Maybe it's time to start that side hustle you've been dreaming about. The trick is to use these feelings as motivation, not let them eat you up inside. Turn that green-eyed monster into a money-making machine.
## Doubt from others
I don't know about anyone else, but one of the greatest motivators for me is when someone is doubting me or thinks I can't achieve something. Use this doubt from others as an energy source to become even better. It's like they're handing you free rocket fuel. Next time someone says "You can't do that," mentally thank them for the boost and prove them wrong. Nothing tastes sweeter than success seasoned with a little "I told you so."
Remember, emotions are just energy. And energy can't be created or destroyed, but it can be transformed. So next time you're feeling down, angry, or scared, don't waste that energy - transmute it. Turn that frown upside down, and then use it to power your personal growth rocket. Who knows? You might just find that your worst days become the launching pad for your best self. | dellboyan |
1,916,121 | Unlock the Power of Cryptography with the 'Polybius Square Encryption in Python' Project | The article is about the 'Polybius Square Encryption in Python' project, a captivating programming practice course offered on LabEx. It delves into the intricacies of the Polybius square encryption algorithm, guiding readers through its implementation in Python. The article highlights the key skills learners will acquire, including understanding the Polybius square, developing the encryption algorithm, handling various input scenarios, and encrypting and decrypting text. By completing this project, readers will not only master the art of cryptography but also enhance their Python programming abilities, opening up new opportunities in the fields of cybersecurity and data protection. The article provides a comprehensive overview of the course, making it an enticing read for those interested in exploring the world of cryptography through hands-on coding projects. | 27,678 | 2024-07-08T17:28:19 | https://dev.to/labex/unlock-the-power-of-cryptography-with-the-polybius-square-encryption-in-python-project-57n4 | labex, programming, course, python |
Embark on an exciting journey into the world of cryptography with the 'Polybius Square Encryption in Python' project on [LabEx](https://labex.io/courses/project-chessboard-encryption). This captivating programming practice course will equip you with the knowledge and skills to implement the Polybius square encryption algorithm, a powerful tool for securing your digital communications.

## Unravel the Mysteries of the Polybius Square
The Polybius square is a 5x5 grid that maps each letter of the English alphabet to a pair of coordinates. By encrypting text using this method, you can create a coded message that can only be decrypted by someone with knowledge of the Polybius square. In this project, you will dive deep into the inner workings of this encryption technique, learning how to:
### Understand the Polybius Square
Explore the fundamental principles of the Polybius square and how it can be used to encrypt and decrypt messages.
### Implement the Encryption Algorithm
Develop the Python code to encrypt text using the Polybius square, ensuring that your implementation is robust and efficient.
### Handle Various Input Scenarios
Learn to handle empty or `None` input, as well as other edge cases, to ensure your encryption solution is versatile and reliable.
### Encrypt and Decrypt Text
Put your newfound knowledge into practice by encrypting and decrypting text using the Polybius square, unlocking the power of this cryptographic technique.
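As a rough illustration of the technique the course covers (not the course's own code), the Polybius square can be sketched in a few lines of Python, using the common convention of merging I and J so the 26 letters fit a 5x5 grid:

```python
# Build a 5x5 Polybius square; I and J share a cell (common convention).
ALPHABET = "ABCDEFGHIKLMNOPQRSTUVWXYZ"  # 25 letters, J omitted

# Map each letter to its 1-based (row, column) coordinates, and back.
SQUARE = {ch: (i // 5 + 1, i % 5 + 1) for i, ch in enumerate(ALPHABET)}
REVERSE = {coords: ch for ch, coords in SQUARE.items()}

def encrypt(text):
    """Encode letters as row-column digit pairs; J is treated as I, other characters are skipped."""
    pairs = []
    for ch in text.upper():
        ch = "I" if ch == "J" else ch
        if ch in SQUARE:
            row, col = SQUARE[ch]
            pairs.append(f"{row}{col}")
    return " ".join(pairs)

def decrypt(code):
    """Decode space-separated digit pairs back into letters."""
    return "".join(REVERSE[(int(p[0]), int(p[1]))] for p in code.split())

print(encrypt("HELLO"))            # 23 15 31 31 34
print(decrypt("23 15 31 31 34"))   # HELLO
```

The course itself walks through a fuller implementation, including the edge cases (empty or `None` input) mentioned above.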
## Enhance Your Coding Skills and Unlock New Opportunities
By completing the 'Polybius Square Encryption in Python' project on [LabEx](https://labex.io/courses/project-chessboard-encryption), you will not only gain a deep understanding of cryptography but also enhance your Python programming skills. This project is designed to challenge and inspire you, providing a hands-on learning experience that will prepare you for future opportunities in the field of cybersecurity, data protection, and beyond.
Embark on this captivating journey and unlock the secrets of the Polybius square encryption algorithm. Enroll in the 'Polybius Square Encryption in Python' project on [LabEx](https://labex.io/courses/project-chessboard-encryption) today and take the first step towards mastering the art of cryptography.
## Empowering Learners with LabEx's Interactive Playground and AI-Driven Support
LabEx is a renowned online learning platform that sets itself apart with its immersive, hands-on programming courses. Each course is accompanied by a dedicated Playground environment, allowing learners to dive into practical coding exercises and put their newfound knowledge into immediate practice.
Designed with beginner-friendly principles in mind, LabEx's step-by-step tutorials guide learners through the learning process, making it accessible for those new to programming. The platform's automated verification system provides timely feedback, ensuring learners can quickly assess their progress and identify areas for improvement.
But LabEx's support doesn't stop there. The platform also features an AI learning assistant, offering invaluable services such as code error correction and concept explanation. This AI-driven support empowers learners to overcome challenges, deepen their understanding, and accelerate their learning journey.
By combining interactive Playgrounds, structured tutorials, and AI-powered assistance, LabEx creates a comprehensive and engaging learning experience that helps aspiring programmers of all levels unlock their full potential.
---
## Want to learn more?
- 🌳 Explore [20+ Skill Trees](https://labex.io/learn)
- 🚀 Practice Hundreds of [Programming Projects](https://labex.io/projects)
- 📖 Read More [Tutorials](https://labex.io/tutorials)
Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx) 😄 | labby |
1,916,122 | Deep Dive into PandApache3: Code de lancement | A post by Mary 🇪🇺 🇷🇴 🇫🇷 | 0 | 2024-07-08T17:30:41 | https://dev.to/pykpyky/deep-dive-into-pandapache3-code-de-lancement-3chm | webdev, csharp, dotnet | pykpyky | |
1,916,123 | Buy Verified Paxful Account | https://dmhelpshop.com/product/buy-verified-paxful-account/ Buy Verified Paxful Account There are... | 0 | 2024-07-08T17:31:51 | https://dev.to/gomon87305/buy-verified-paxful-account-58e9 | webdev, javascript, beginners, programming | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-paxful-account/\n\n\nBuy Verified Paxful Account\nThere are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.\n\nMoreover, Buy verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence.\n\nLastly, Paxful’s verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to Buy Verified Paxful account can greatly enhance one’s overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with. Buy Verified Paxful Account.\n\nBuy US verified paxful account from the best place dmhelpshop\nWhy we declared this website as the best place to buy US verified paxful account? Because, our company is established for providing the all account services in the USA (our main target) and even in the whole world. With this in mind we create paxful account and customize our accounts as professional with the real documents. Buy Verified Paxful Account.\n\nIf you want to buy US verified paxful account you should have to contact fast with us. 
Because our accounts are-\n\nEmail verified\nPhone number verified\nSelfie and KYC verified\nSSN (social security no.) verified\nTax ID and passport verified\nSometimes driving license verified\nMasterCard attached and verified\nUsed only genuine and real documents\n100% access of the account\nAll documents provided for customer security\nWhat is Verified Paxful Account?\nIn today’s expanding landscape of online transactions, ensuring security and reliability has become paramount. Given this context, Paxful has quickly risen as a prominent peer-to-peer Bitcoin marketplace, catering to individuals and businesses seeking trusted platforms for cryptocurrency trading.\n\nIn light of the prevalent digital scams and frauds, it is only natural for people to exercise caution when partaking in online transactions. As a result, the concept of a verified account has gained immense significance, serving as a critical feature for numerous online platforms. Paxful recognizes this need and provides a safe haven for users, streamlining their cryptocurrency buying and selling experience.\n\nFor individuals and businesses alike, Buy verified Paxful account emerges as an appealing choice, offering a secure and reliable environment in the ever-expanding world of digital transactions. Buy Verified Paxful Account.\n\nVerified Paxful Accounts are essential for establishing credibility and trust among users who want to transact securely on the platform. They serve as evidence that a user is a reliable seller or buyer, verifying their legitimacy.\n\nBut what constitutes a verified account, and how can one obtain this status on Paxful? In this exploration of verified Paxful accounts, we will unravel the significance they hold, why they are crucial, and shed light on the process behind their activation, providing a comprehensive understanding of how they function. 
Buy verified Paxful account.\n\n \n\nWhy should to Buy Verified Paxful Account?\nThere are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.\n\nMoreover, a verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence. Buy Verified Paxful Account.\n\nLastly, Paxful’s verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to buy a verified Paxful account can greatly enhance one’s overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with.\n\n \n\nWhat is a Paxful Account\nPaxful and various other platforms consistently release updates that not only address security vulnerabilities but also enhance usability by introducing new features. Buy Verified Paxful Account.\n\nIn line with this, our old accounts have recently undergone upgrades, ensuring that if you purchase an old buy Verified Paxful account from dmhelpshop.com, you will gain access to an account with an impressive history and advanced features. This ensures a seamless and enhanced experience for all users, making it a worthwhile option for everyone.\n\n \n\nIs it safe to buy Paxful Verified Accounts?\nBuying on Paxful is a secure choice for everyone. However, the level of trust amplifies when purchasing from Paxful verified accounts. 
These accounts belong to sellers who have undergone rigorous scrutiny by Paxful. Buy verified Paxful account, you are automatically designated as a verified account. Hence, purchasing from a Paxful verified account ensures a high level of credibility and utmost reliability. Buy Verified Paxful Account.\n\nPAXFUL, a widely known peer-to-peer cryptocurrency trading platform, has gained significant popularity as a go-to website for purchasing Bitcoin and other cryptocurrencies. It is important to note, however, that while Paxful may not be the most secure option available, its reputation is considerably less problematic compared to many other marketplaces. Buy Verified Paxful Account.\n\nThis brings us to the question: is it safe to purchase Paxful Verified Accounts? Top Paxful reviews offer mixed opinions, suggesting that caution should be exercised. Therefore, users are advised to conduct thorough research and consider all aspects before proceeding with any transactions on Paxful.\n\n \n\nHow Do I Get 100% Real Verified Paxful Accoun?\nPaxful, a renowned peer-to-peer cryptocurrency marketplace, offers users the opportunity to conveniently buy and sell a wide range of cryptocurrencies. Given its growing popularity, both individuals and businesses are seeking to establish verified accounts on this platform.\n\nHowever, the process of creating a verified Paxful account can be intimidating, particularly considering the escalating prevalence of online scams and fraudulent practices. This verification procedure necessitates users to furnish personal information and vital documents, posing potential risks if not conducted meticulously.\n\nIn this comprehensive guide, we will delve into the necessary steps to create a legitimate and verified Paxful account. 
Our discussion will revolve around the verification process and provide valuable tips to safely navigate through it.\n\nMoreover, we will emphasize the utmost importance of maintaining the security of personal information when creating a verified account. Furthermore, we will shed light on common pitfalls to steer clear of, such as using counterfeit documents or attempting to bypass the verification process.\n\nWhether you are new to Paxful or an experienced user, this engaging paragraph aims to equip everyone with the knowledge they need to establish a secure and authentic presence on the platform.\n\nBenefits Of Verified Paxful Accounts\nVerified Paxful accounts offer numerous advantages compared to regular Paxful accounts. One notable advantage is that verified accounts contribute to building trust within the community.\n\nVerification, although a rigorous process, is essential for peer-to-peer transactions. This is why all Paxful accounts undergo verification after registration. When customers within the community possess confidence and trust, they can conveniently and securely exchange cash for Bitcoin or Ethereum instantly. Buy Verified Paxful Account.\n\nPaxful accounts, trusted and verified by sellers globally, serve as a testament to their unwavering commitment towards their business or passion, ensuring exceptional customer service at all times. Headquartered in Africa, Paxful holds the distinction of being the world’s pioneering peer-to-peer bitcoin marketplace. Spearheaded by its founder, Ray Youssef, Paxful continues to lead the way in revolutionizing the digital exchange landscape.\n\nPaxful has emerged as a favored platform for digital currency trading, catering to a diverse audience. One of Paxful’s key features is its direct peer-to-peer trading system, eliminating the need for intermediaries or cryptocurrency exchanges. 
By leveraging Paxful’s escrow system, users can trade securely and confidently.\n\nWhat sets Paxful apart is its commitment to identity verification, ensuring a trustworthy environment for buyers and sellers alike. With these user-centric qualities, Paxful has successfully established itself as a leading platform for hassle-free digital currency transactions, appealing to a wide range of individuals seeking a reliable and convenient trading experience. Buy Verified Paxful Account.\n\n \n\nHow paxful ensure risk-free transaction and trading?\nEngage in safe online financial activities by prioritizing verified accounts to reduce the risk of fraud. Platforms like Paxfu implement stringent identity and address verification measures to protect users from scammers and ensure credibility.\n\nWith verified accounts, users can trade with confidence, knowing they are interacting with legitimate individuals or entities. By fostering trust through verified accounts, Paxful strengthens the integrity of its ecosystem, making it a secure space for financial transactions for all users. Buy Verified Paxful Account.\n\nExperience seamless transactions by obtaining a verified Paxful account. Verification signals a user’s dedication to the platform’s guidelines, leading to the prestigious badge of trust. This trust not only expedites trades but also reduces transaction scrutiny. Additionally, verified users unlock exclusive features enhancing efficiency on Paxful. Elevate your trading experience with Verified Paxful Accounts today.\n\nIn the ever-changing realm of online trading and transactions, selecting a platform with minimal fees is paramount for optimizing returns. This choice not only enhances your financial capabilities but also facilitates more frequent trading while safeguarding gains. Buy Verified Paxful Account.\n\nExamining the details of fee configurations reveals Paxful as a frontrunner in cost-effectiveness. 
Acquire a verified level-3 USA Paxful account from usasmmonline.com for a secure transaction experience. Invest in verified Paxful accounts to take advantage of a leading platform in the online trading landscape.\n\n \n\nHow Old Paxful ensures a lot of Advantages?\n\nExplore the boundless opportunities that Verified Paxful accounts present for businesses looking to venture into the digital currency realm, as companies globally witness heightened profits and expansion. These success stories underline the myriad advantages of Paxful’s user-friendly interface, minimal fees, and robust trading tools, demonstrating its relevance across various sectors.\n\nBusinesses benefit from efficient transaction processing and cost-effective solutions, making Paxful a significant player in facilitating financial operations. Acquire a USA Paxful account effortlessly at a competitive rate from usasmmonline.com and unlock access to a world of possibilities. Buy Verified Paxful Account.\n\nExperience elevated convenience and accessibility through Paxful, where stories of transformation abound. Whether you are an individual seeking seamless transactions or a business eager to tap into a global market, buying old Paxful accounts unveils opportunities for growth.\n\nPaxful’s verified accounts not only offer reliability within the trading community but also serve as a testament to the platform’s ability to empower economic activities worldwide. Join the journey towards expansive possibilities and enhanced financial empowerment with Paxful today. Buy Verified Paxful Account.\n\n \n\nWhy paxful keep the security measures at the top priority?\nIn today’s digital landscape, security stands as a paramount concern for all individuals engaging in online activities, particularly within marketplaces such as Paxful. 
It is essential for account holders to remain informed about the comprehensive security protocols that are in place to safeguard their information.\n\nSafeguarding your Paxful account is imperative to guaranteeing the safety and security of your transactions. Two essential security components, Two-Factor Authentication and Routine Security Audits, serve as the pillars fortifying this shield of protection, ensuring a secure and trustworthy user experience for all. Buy Verified Paxful Account.\n\nConclusion\nInvesting in Bitcoin offers various avenues, and among those, utilizing a Paxful account has emerged as a favored option. Paxful, an esteemed online marketplace, enables users to engage in buying and selling Bitcoin. Buy Verified Paxful Account.\n\nThe initial step involves creating an account on Paxful and completing the verification process to ensure identity authentication. Subsequently, users gain access to a diverse range of offers from fellow users on the platform. Once a suitable proposal captures your interest, you can proceed to initiate a trade with the respective user, opening the doors to a seamless Bitcoin investing experience.\n\nIn conclusion, when considering the option of purchasing verified Paxful accounts, exercising caution and conducting thorough due diligence is of utmost importance. It is highly recommended to seek reputable sources and diligently research the seller’s history and reviews before making any transactions.\n\nMoreover, it is crucial to familiarize oneself with the terms and conditions outlined by Paxful regarding account verification, bearing in mind the potential consequences of violating those terms. By adhering to these guidelines, individuals can ensure a secure and reliable experience when engaging in such transactions. Buy Verified Paxful Account.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com" | gomon87305 |
1,916,125 | Python class first day | Hi all, Today I joined the python class run by Kaniyam Foundation. First day of class today they... | 0 | 2024-07-08T17:36:06 | https://dev.to/mahesh_s_369d8f0b1ccd1b9e/python-class-first-day-4o9c | python | Hi all,
Today I joined the python class run by Kaniyam Foundation. First day of class today they taught basic installation and instructions and how to create blogs. It was very useful for me. And they taught basic Python programming.
Basic program
👇👇👇
```
print("hello world")
```
I will write about the daily activities of this python class in this blog in coming days. | mahesh_s_369d8f0b1ccd1b9e |
1,916,126 | Deep Dive into PandApache3: Code de lancement | Have you ever wondered what a web server looks like from the inside? Have you ever dreamed of... | 0 | 2024-07-08T17:36:14 | https://dev.to/pykpyky/deep-dive-into-pandapache3-code-de-lancement-327n | webdev, csharp, dotnet, development | Have you ever wondered what a web server looks like from the inside? Have you ever dreamed of building one yourself? You're in the right place!
Welcome to this first technical article dedicated to the development of PandApache3.
This article and the ones that follow aim to describe the inner workings of PandApache3, a lightweight, minimalist web server ready to compete with Apache2 (here, we're in favor of retirement after 29 years of contributions).
These articles are not documentation. They will not be updated as PandApache3 evolves. Their goal is rather to share and explain code and design choices.
Some parts of the code have been simplified to be more digestible, in order to make the overall project easier to understand.
Before going further: not familiar with PandApache3 yet? If you like, you can learn more by reading this previous article: [PandApache3, le tueur d'Apache](https://dev.to/pykpyky/pandapache3-le-tueur-dapache-580g)
---
But where to start? Describing a project from scratch is always a challenge. We will focus on the essentials to get going, avoiding details that might confuse beginners. We will begin by exploring the most fundamental task of any service: starting up.
---
## Liftoff: PandApache3's first steps

What does PandApache3 do when the service starts? Before it can even accept HTTP requests and listen on a port, several tasks must be carried out. In this section, we focus on the actions taken before the first connection to the service can be established.
Our startup method is called `StartServerAsync`; it is the very first method invoked when our server is launched.
```CSharp
public static async Task StartServerAsync()
{
Logger.Initialize();
Server.STATUS = "PandApache3 is starting";
Logger.LogInfo($"{Server.STATUS}");
ServerConfiguration.Instance.ReloadConfiguration();
_ConnectionManager = new ConnectionManager();
TerminalMiddleware terminalMiddleware = new TerminalMiddleware();
RoutingMiddleware routingMiddleware = new RoutingMiddleware(terminalMiddleware.InvokeAsync);
LoggerMiddleware loggerMiddleware = new LoggerMiddleware(routingMiddleware.InvokeAsync);
Func<HttpContext, Task> pipeline = loggerMiddleware.InvokeAsync;
await _ConnectionManager.StartAsync(pipeline);
}
```
The first step is to initialize our logger. The logger is an essential class that records all of the server's actions, errors, and messages. This is particularly crucial at startup, since it must be ready to report any problem, as illustrated by the "is starting" status logged on the third line.
Depending on the chosen configuration, log information can be available in two places:
- In log files, the classic configuration for services. A PandApache3.log file is created and every event is recorded there.
- In the console, which is very useful for seeing logs directly on the console output or in the terminal, either alongside or instead of log files.
These two options can also be combined, letting you choose how to handle your logs according to your needs.
---
_Between us_
> Why opt for NoLog or console-only logging rather than a file? At first glance, not keeping logs in a file may seem odd. However, this decision is strategic for PandApache3, which is designed to be PaaS-friendly. When you run a platform as a service (PaaS) with thousands of instances, storing logs on the server can cause accessibility and disk-space problems. It is therefore wiser to redirect the logs the application writes to the console toward a dedicated system such as ADX or Elastic Search.
> This approach also provides quick feedback while developing the application.
> Finally, the ability to use NoLog with PandApache3 (disabling log writing both to the file and to the console) is a direct consequence of the flexibility the service offers.
---
## Diving into the configuration

Like any service, PandApache3 is configurable. After the logs are initialized, loading the configuration is therefore the second mandatory step. This configuration, available as the PandApache3.conf file on the machine, plays an essential role in PandApache3's behavior and features.
```CSharp
public void ReloadConfiguration()
{
string fullPath = Path.Combine(_configurationPath, "PandApache3.conf");
if (!File.Exists(fullPath))
{
throw new FileNotFoundException("The configuration file didn't exist", fullPath);
}
try
{
foreach (var line in File.ReadLines(fullPath))
{
if (string.IsNullOrWhiteSpace(line) || line.Trim().StartsWith("#"))
{
continue;
}
else
{
var parts = line.Split(new[] { ' ' }, 2, StringSplitOptions.RemoveEmptyEntries);
if (parts.Length == 2)
{
var key = parts[0].Trim();
var value = parts[1].Trim();
MapConfiguration(key, value);
}
}
}
Logger.LogInfo("Configuration reloaded");
}
catch (Exception ex)
{
throw new Exception($"Error during configuration reload: {ex.Message}");
}
}
public void MapConfiguration(string key, string value)
{
var actionMap = new Dictionary<string, Action<string>>
{
["servername"] = v => ServerName = v,
["serverip"] = v =>
{
if (IPAddress.TryParse(v, out var parsedIPAddress))
ServerIP = parsedIPAddress;
else
Logger.LogWarning("Server IP invalid");
},
["serverport"] = v => TrySetIntValue(v, val => ServerPort = val, "Server port invalid"),
};
if (actionMap.TryGetValue(key.ToLower(), out var action))
{
action(value);
}
else
{
Logger.LogWarning($"Unknown configuration key: {key}");
}
}
```
The `ReloadConfiguration` function loads each line of the PandApache3.conf file (excluding comments), associating each key with a value. Then the `MapConfiguration` function has at its disposal a dictionary (`actionMap`) that maps each key to an action to perform before assigning the value to the class variable.
For example, for the line: `["servername"] = v => ServerName = v,`
The dictionary key is `servername` and the associated action is `v => ServerName = v`, where `v` represents the value. The action is a lambda function that assigns this value to the `ServerName` property.
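The dictionary-dispatch pattern is language-agnostic; here is a compact Python sketch of the same idea (illustrative only, with made-up keys, not PandApache3's code):

```python
# Dictionary dispatch: map configuration keys to actions that validate
# and store the value, mirroring the actionMap idea above.
config = {}

def set_port(value):
    # Validate the value before storing it, warn otherwise.
    try:
        config["server_port"] = int(value)
    except ValueError:
        print("Server port invalid")

action_map = {
    "servername": lambda v: config.__setitem__("server_name", v),
    "serverport": set_port,
}

def map_configuration(key, value):
    action = action_map.get(key.lower())
    if action:
        action(value)
    else:
        print(f"Unknown configuration key: {key}")

map_configuration("ServerName", "PandApache3")
map_configuration("ServerPort", "8080")
print(config)  # {'server_name': 'PandApache3', 'server_port': 8080}
```

Looking up the handler in a dictionary keeps the parser flat: adding a new configuration key is one new entry, not another branch in an if/else ladder.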
Now equipped with the necessary information, our server is ready to start according to the specifications provided and to give us feedback if anything goes wrong. On to the next step: connection handling!
---
_Between us_
> A parameter error in the configuration is not blocking; a warning will be raised, but the service will still start! If the configuration file is missing, the application's default settings are used.
_Still between us_
> Why choose a plain-text .conf file rather than JSON or YAML? First, for its simplicity: nothing is easier than writing a first configuration file in plain text, whereas editing JSON or YAML can be tricky without a good editor. Moreover, the text format accepts comments, which is very handy for self-documenting the configuration file. In the future, supporting several file formats for the configuration is not ruled out.
---
## The heart of PandApache3: the connection manager

The heart of our PandApache3 server lies in its connection manager, represented by the `ConnectionManager` object.
```CSharp
_ConnectionManager = new ConnectionManager();
```
This relatively simple object has two key attributes: the `TcpListener` and the `pipeline`.
```CSharp
public TcpListener Listener { get; set; }
private Func<HttpContext, Task> _pipeline;
```
The `TcpListener` is a fundamental component that lets clients connect to our server over the TCP protocol. As for our `_pipeline` variable, it represents an asynchronous function that takes an HTTP context (`HttpContext`) as a parameter and returns a task (`Task`). Figuratively speaking, our pipeline is a series of actions we want to execute on each HTTP request. Each action is performed by what is called a middleware.
Indeed, next in our code we set up the middlewares to use for each HTTP request received:
```CSharp
TerminalMiddleware terminalMiddleware = new TerminalMiddleware();
RoutingMiddleware routingMiddleware = new RoutingMiddleware(terminalMiddleware.InvokeAsync);
LoggerMiddleware loggerMiddleware = new LoggerMiddleware(routingMiddleware.InvokeAsync);
Func<HttpContext, Task> pipeline = loggerMiddleware.InvokeAsync;
```
So we have three middlewares here:
- TerminalMiddleware
- RoutingMiddleware
- LoggerMiddleware
Each middleware calls the next one in a well-defined chain (Logger calls Routing, then Routing calls Terminal). This chain of middlewares (our pipeline) is assigned to our connection manager (`ConnectionManager`).
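To make the pattern concrete outside of C#, here is a minimal, hedged Python sketch of the same idea (illustrative only, not PandApache3's code): each middleware wraps the next handler, so a request passes down the chain and the response bubbles back up through the same middlewares in reverse order.

```python
# Minimal middleware chain: each middleware receives the next handler
# and returns a new handler that can run code before and after it.

def terminal(context):
    # Last link in the chain: produce the response.
    context["response"] = f"200 OK for {context['path']}"

def routing(next_handler):
    def handler(context):
        # Normalize the path into a route, then hand off to the next middleware.
        context["route"] = context["path"].rstrip("/") or "/"
        next_handler(context)
    return handler

def logger(next_handler):
    def handler(context):
        context.setdefault("log", []).append(f"--> {context['path']}")
        next_handler(context)  # request goes down the chain...
        context["log"].append(f"<-- {context['response']}")  # ...response comes back up
    return handler

# Build the pipeline in the same order as above: Logger -> Routing -> Terminal.
pipeline = logger(routing(terminal))

ctx = {"path": "/index.html"}
pipeline(ctx)
print(ctx["log"])  # ['--> /index.html', '<-- 200 OK for /index.html']
```

Note how the logger sees the request first and the response last, which is exactly the round-trip behavior described in the "between us" note below.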
Now that everything is in place, we can start our connection manager:
```CSharp
await _ConnectionManager.StartAsync(pipeline);
```
The `StartAsync` function simply configures our `TcpListener` to listen on the port and IP address defined in the configuration, then starts it:
```CSharp
public async Task StartAsync(Func<HttpContext, Task> pipeline)
{
Listener = new TcpListener(ServerConfiguration.Instance.ServerIP, ServerConfiguration.Instance.ServerPort);
Logger.LogInfo($"Web server listening on {ServerConfiguration.Instance.ServerIP}:{ServerConfiguration.Instance.ServerPort}");
Listener.Start();
_pipeline = pipeline;
}
```
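For readers unfamiliar with TCP listeners, the equivalent of bind-and-listen in Python's standard library looks like this (an illustrative sketch under assumed defaults, not PandApache3's code):

```python
import socket

def start_listener(ip="127.0.0.1", port=0):
    """Create and start a TCP listener, like TcpListener(ip, port) + Start().

    port=0 asks the OS to pick any free port, handy for demos and tests.
    """
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind((ip, port))   # attach to the configured IP and port
    listener.listen()           # start accepting incoming connections
    host, bound_port = listener.getsockname()
    print(f"Web server listening on {host}:{bound_port}")
    return listener

listener = start_listener()
listener.close()
```

From this point, a server loop would call `accept()` on the listener to pick up each incoming connection, which is exactly where the pipeline takes over.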
And there we have it: our server is now started and ready to receive incoming connections.
---
_Between us_
> What the middlewares do is not crucial for now. What matters is that our ConnectionManager, responsible for handling the connections received on its TCP listener, will pass all of them through this chain of middlewares, in this order.
However, the names are fairly explicit and you can guess each middleware's role:
- Logger: records the incoming request in the logs.
- Routing: directs the request to the right resource.
- Terminal: the last middleware in the chain, which does nothing in particular but is there.
_Still between us_
> A request that traverses the middlewares does so on the way in, but also on the way back (in reverse order). In our example, this means the request is first logged by the first middleware, and the response obtained is then also logged by that same middleware, now the last one in the chain.
---
Thank you so much for exploring the inner workings of PandApache3 with me! Your thoughts and support are essential in advancing this project. 🚀
Feel free to share your ideas and impressions in the comments below. I look forward to hearing from you!
Follow my adventures on Twitter [@pykpyky](https://x.com/PykPyky) to stay updated on all the news.
You can also explore the full project on [GitHub](https://github.com/MarieLePanda/PandApache3) and join me for live coding sessions on [Twitch](https://www.twitch.tv/pykpyky) for exciting and interactive sessions. See you soon behind the screen!
---
| pykpyky |
1,916,129 | Buy Negative Google Reviews | https://dmhelpshop.com/product/buy-negative-google-reviews/ Buy Negative Google Reviews Negative... | 0 | 2024-07-08T17:41:46 | https://dev.to/gomon87305/buy-negative-google-reviews-1ecg | javascript, webdev, beginners, programming | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-negative-google-reviews/\n\n\nBuy Negative Google Reviews\nNegative reviews on Google are detrimental critiques that expose customers’ unfavorable experiences with a business. These reviews can significantly damage a company’s reputation, presenting challenges in both attracting new customers and retaining current ones. If you are considering purchasing negative Google reviews from dmhelpshop.com, we encourage you to reconsider and instead focus on providing exceptional products and services to ensure positive feedback and sustainable success.\n\nWhy Buy Negative Google Reviews from dmhelpshop\nWe take pride in our fully qualified, hardworking, and experienced team, who are committed to providing quality and safe services that meet all your needs. Our professional team ensures that you can trust us completely, knowing that your satisfaction is our top priority. With us, you can rest assured that you’re in good hands.\n\nIs Buy Negative Google Reviews safe?\nAt dmhelpshop, we understand the concern many business persons have about the safety of purchasing Buy negative Google reviews. We are here to guide you through a process that sheds light on the importance of these reviews and how we ensure they appear realistic and safe for your business. Our team of qualified and experienced computer experts has successfully handled similar cases before, and we are committed to providing a solution tailored to your specific needs. 
Contact us today to learn more about how we can help your business thrive.\n\nBuy Google 5 Star Reviews\nReviews represent the opinions of experienced customers who have utilized services or purchased products from various online or offline markets. These reviews convey customer demands and opinions, and ratings are assigned based on the quality of the products or services and the overall user experience. Google serves as an excellent platform for customers to leave reviews since the majority of users engage with it organically. When you purchase Buy Google 5 Star Reviews, you have the potential to influence a large number of people either positively or negatively. Positive reviews can attract customers to purchase your products, while negative reviews can deter potential customers.\n\nIf you choose to Buy Google 5 Star Reviews, people will be more inclined to consider your products. However, it is important to recognize that reviews can have both positive and negative impacts on your business. Therefore, take the time to determine which type of reviews you wish to acquire. Our experience indicates that purchasing Buy Google 5 Star Reviews can engage and connect you with a wide audience. By purchasing positive reviews, you can enhance your business profile and attract online traffic. Additionally, it is advisable to seek reviews from reputable platforms, including social media, to maintain a positive flow. We are an experienced and reliable service provider, highly knowledgeable about the impacts of reviews. Hence, we recommend purchasing verified Google reviews and ensuring their stability and non-gropability.\n\nLet us now briefly examine the direct and indirect benefits of reviews:\nReviews have the power to enhance your business profile, influencing users at an affordable cost.\nTo attract customers, consider purchasing only positive reviews, while negative reviews can be acquired to undermine your competitors. 
Collect negative reports on your opponents and present them as evidence.\nIf you receive negative reviews, view them as an opportunity to understand user reactions, make improvements to your products and services, and keep up with current trends.\nBy earning the trust and loyalty of customers, you can control the market value of your products. Therefore, it is essential to buy online reviews, including Buy Google 5 Star Reviews.\nReviews serve as the captivating fragrance that entices previous customers to return repeatedly.\nPositive customer opinions expressed through reviews can help you expand your business globally and achieve profitability and credibility.\nWhen you purchase positive Buy Google 5 Star Reviews, they effectively communicate the history of your company or the quality of your individual products.\nReviews act as a collective voice representing potential customers, boosting your business to amazing heights.\nNow, let’s delve into a comprehensive understanding of reviews and how they function:\nGoogle, with its significant organic user base, stands out as the premier platform for customers to leave reviews. When you purchase Buy Google 5 Star Reviews , you have the power to positively influence a vast number of individuals. Reviews are essentially written submissions by users that provide detailed insights into a company, its products, services, and other relevant aspects based on their personal experiences. In today’s business landscape, it is crucial for every business owner to consider buying verified Buy Google 5 Star Reviews, both positive and negative, in order to reap various benefits.\n\nWhy are Google reviews considered the best tool to attract customers?\nGoogle, being the leading search engine and the largest source of potential and organic customers, is highly valued by business owners. Many business owners choose to purchase Google reviews to enhance their business profiles and also sell them to third parties. 
Without reviews, it is challenging to reach a large customer base globally or locally. Therefore, it is crucial to consider buying positive Buy Google 5 Star Reviews from reliable sources. When you invest in Buy Google 5 Star Reviews for your business, you can expect a significant influx of potential customers, as these reviews act as a pheromone, attracting audiences towards your products and services. Every business owner aims to maximize sales and attract a substantial customer base, and purchasing Buy Google 5 Star Reviews is a strategic move.\n\nAccording to online business analysts and economists, trust and affection are the essential factors that determine whether people will work with you or do business with you. However, there are additional crucial factors to consider, such as establishing effective communication systems, providing 24/7 customer support, and maintaining product quality to engage online audiences. If any of these rules are broken, it can lead to a negative impact on your business. Therefore, obtaining positive reviews is vital for the success of an online business\n\nWhat are the benefits of purchasing reviews online?\nIn today’s fast-paced world, the impact of new technologies and IT sectors is remarkable. Compared to the past, conducting business has become significantly easier, but it is also highly competitive. To reach a global customer base, businesses must increase their presence on social media platforms as they provide the easiest way to generate organic traffic. Numerous surveys have shown that the majority of online buyers carefully read customer opinions and reviews before making purchase decisions. In fact, the percentage of customers who rely on these reviews is close to 97%. Considering these statistics, it becomes evident why we recommend buying reviews online. 
In an increasingly rule-based world, it is essential to take effective steps to ensure a smooth online business journey.\n\nBuy Google 5 Star Reviews\nMany people purchase reviews online from various sources and witness unique progress. Reviews serve as powerful tools to instill customer trust, influence their decision-making, and bring positive vibes to your business. Making a single mistake in this regard can lead to a significant collapse of your business. Therefore, it is crucial to focus on improving product quality, quantity, communication networks, facilities, and providing the utmost support to your customers.\n\nReviews reflect customer demands, opinions, and ratings based on their experiences with your products or services. If you purchase Buy Google 5-star reviews, it will undoubtedly attract more people to consider your offerings. Google is the ideal platform for customers to leave reviews due to its extensive organic user involvement. Therefore, investing in Buy Google 5 Star Reviews can significantly influence a large number of people in a positive way.\n\nHow to generate google reviews on my business profile?\nFocus on delivering high-quality customer service in every interaction with your customers. By creating positive experiences for them, you increase the likelihood of receiving reviews. These reviews will not only help to build loyalty among your customers but also encourage them to spread the word about your exceptional service. It is crucial to strive to meet customer needs and exceed their expectations in order to elicit positive feedback. If you are interested in purchasing affordable Google reviews, we offer that service.\n\n\n\n\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com" | gomon87305 |
1,916,130 | Glue work makes the dream work | Glue (Verb): To integrate different parts of a system together that would otherwise be... | 0 | 2024-07-08T17:42:05 | https://dev.to/ajparadith/glue-work-makes-the-dream-work-4g6n | womenintech, testing, career | Glue (Verb): To integrate different parts of a system together that would otherwise be incompatible.
**Glue is complex**
Being the glue can mean a lot of different things, from coding and design to just being the human being amongst it all, and people are not always rewarded for doing it well. In testing or Quality Engineering, glue work is often seen as just piecing things together, but in reality it takes a lot of communication, documentation, writing, and evaluating vendors or designs and how to implement them. It is so much more complicated than just "piecing things together". In testing or Quality Engineering we often have to solve problems for ourselves; testing tool evaluation, for example.
Your job title says “Quality Engineer”, but you seem to spend most of your time in meetings. You’d like to have time to prevent bugs, but nobody else is onboarding the junior engineers, updating the roadmap, talking to the users, noticing the things that got dropped, asking questions on design documents, and making sure that everyone’s going roughly in the same direction. If you stop doing those things, the team won’t be as successful. But now someone’s suggesting that you might be happier in a less technical role. If this describes you, congratulations: you’re the glue. If it’s not, have you thought about who is filling this role on your team?
> “the less glamorous and often less promotable work, that needs to make a team successful.” — Tanya Reilly
Glue work is not a new concept in tech; women have been talking about it for years. My favourite is Tanya Reilly, who gave a great talk about it in 2018.
Every senior person in an organisation should be aware of the less glamorous — and often less-promotable — work that needs to happen to make a team successful. Managed deliberately, glue work demonstrates and builds strong technical leadership skills. Left unconscious, it can be extremely career limiting. It pushes people into less technical roles and even out of the industry. We need to talk about how to allocate glue work deliberately, frame it usefully and make sure that everyone is choosing a career path they actually want to be on.
If you are not being paid for this Glue Work and it is detrimental to your progression — stop doing it. Stop being the unofficial lead. If you keep doing glue work — you will only get better at being glue. Make sure to focus on your own technical skills.
I am still a Staff Quality Engineer at heart, awesome woman in tech, UN Women Delegate and I believe in the value of curiosity and empathy in testing. I do all my own stunts, love food, travel, my friends, family, music and art.
If you enjoyed this story, please share to help others find it. Feel free to leave a comment — I am open to insight, learning and discussion. | ajparadith |
1,916,131 | Deep Dive into PandApache3: Launch Code | Have you ever wondered what a web server looks like from the inside? Have you ever dreamt of creating... | 0 | 2024-07-08T17:44:58 | https://dev.to/pykpyky/deep-dive-into-pandapache3-launch-code-43mh | webdev, csharp, dotnet, development |
Have you ever wondered what a web server looks like from the inside? Have you ever dreamt of creating one yourself? You've come to the right place!
Welcome to this first technical article dedicated to the development of PandApache3.
This article and the following ones aim to describe the internal workings of PandApache3, a lightweight and minimalist web server ready to compete with Apache2 (we're in favor of retirement after 29 years of service).
These articles are not documentation. They will not be updated as PandApache3 evolves. Their goal is rather to share and explain code and design choices. Some parts of the code will be simplified to be more digestible, facilitating the general understanding of our project.
---
Before we proceed, are you familiar with PandApache3? If not, you can learn more about it in our previous article: [PandApache3, the Apache killer](https://dev.to/pykpyky/pandapache3-the-apache-killer-39ai).
---
But where to begin? Describing a project from scratch is always a challenge. We will focus on the essential elements to get started, avoiding getting lost in details that might confuse beginners. Let's start by exploring the most fundamental task of a service: startup.
---
## Liftoff: The First Steps of PandApache3

What does PandApache3 do when starting the service? Before accepting HTTP requests and listening on a port, several tasks need to be completed. In this part, we focus on the actions taken before the first connection to the service can be established.
Our startup method is called `StartServerAsync`, which is the very first method called when our server is launched.
```CSharp
public static async Task StartServerAsync()
{
Logger.Initialize();
Server.STATUS = "PandApache3 is starting";
Logger.LogInfo($"{Server.STATUS}");
ServerConfiguration.Instance.ReloadConfiguration();
_ConnectionManager = new ConnectionManager();
TerminalMiddleware terminalMiddleware = new TerminalMiddleware();
RoutingMiddleware routingMiddleware = new RoutingMiddleware(terminalMiddleware.InvokeAsync);
LoggerMiddleware loggerMiddleware = new LoggerMiddleware(routingMiddleware.InvokeAsync);
Func<HttpContext, Task> pipeline = loggerMiddleware.InvokeAsync;
await _ConnectionManager.StartAsync(pipeline);
}
```
The first step is to initialize our logger. The logger is an essential class that records all actions, errors, and messages of the server. This is particularly crucial during startup, as it needs to be ready to report any potential issues, as illustrated by logging the "is starting" status on the third line.
Logging information can be available in two places depending on the chosen configuration:
- In log files, which is the classic service configuration. A PandApache3.log file is created, and each event is logged there.
- In the console, which is very useful for directly viewing logs on the console output or terminal, in addition to or instead of log files.
These two options can also be combined, allowing you to choose how to manage your logs according to your needs.
---
_Between us_
> Why opt for NoLog or logs only in the console rather than in a file? At first glance, it may seem strange not to keep logs in a file. However, this decision is strategic for PandApache3, designed to be PaaS-friendly. When managing a platform as a service (PaaS) with thousands of instances, storing logs on the server can pose accessibility and disk space issues. It is therefore wiser to redirect application-generated logs from the console to a dedicated system such as ADX or Elastic Search.
> This approach also facilitates quick feedback during application development.
> Finally, the ability to use NoLog with PandApache3 (by disabling log writing both in the file and in the console) is a direct consequence of the flexibility offered by the service.
---
## Diving into Configuration:

Like any service, PandApache3 is configurable. After initializing the logs, loading the configuration becomes the second mandatory step. This configuration, available in the form of the PandApache3.conf file on the machine, plays a crucial role in the behavior and functionality of PandApache3.
```CSharp
public void ReloadConfiguration()
{
string fullPath = Path.Combine(_configurationPath, "PandApache3.conf");
if (!File.Exists(fullPath))
{
throw new FileNotFoundException("The configuration file didn't exist", fullPath);
}
try
{
foreach (var line in File.ReadLines(fullPath))
{
if (string.IsNullOrWhiteSpace(line) || line.Trim().StartsWith("#"))
{
continue;
}
else
{
var parts = line.Split(new[] { ' ' }, 2, StringSplitOptions.RemoveEmptyEntries);
if (parts.Length == 2)
{
var key = parts[0].Trim();
var value = parts[1].Trim();
MapConfiguration(key, value);
}
}
}
Logger.LogInfo("Configuration reloaded");
}
catch (Exception ex)
{
throw new Exception($"Error during configuration reload: {ex.Message}");
}
}
public void MapConfiguration(string key, string value)
{
var actionMap = new Dictionary<string, Action<string>>
{
["servername"] = v => ServerName = v,
["serverip"] = v =>
{
if (IPAddress.TryParse(v, out var parsedIPAddress))
ServerIP = parsedIPAddress;
else
Logger.LogWarning("Server IP invalid");
},
["serverport"] = v => TrySetIntValue(v, val => ServerPort = val, "Server port invalid"),
};
if (actionMap.TryGetValue(key.ToLower(), out var action))
{
action(value);
}
else
{
Logger.LogWarning($"Unknown configuration key: {key}");
}
}
```
This `ReloadConfiguration` function loads each line from the PandApache3.conf file (excluding comments), mapping each key to a value. Then, the `MapConfiguration` function uses a dictionary (`actionMap`) to map each key to an action to perform before assigning the value to the class variable.
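To make the format concrete, a PandApache3.conf file following this key/value, `#`-comment convention might look like the following (hypothetical values, not the actual defaults):

```
# PandApache3.conf
# Lines starting with '#' are comments and are skipped.
ServerName PandApache3
ServerIP 127.0.0.1
ServerPort 8080
```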
For example, for the line: `["servername"] = v => ServerName = v,`, the dictionary key is `servername`, and the associated action is `v => ServerName = v`, where `v` represents the value. The action is a lambda function that assigns this value to the `ServerName` property.
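The `TrySetIntValue` helper used for `serverport` is not shown in this excerpt; a minimal sketch of what it might look like (an assumption, not PandApache3's actual implementation) would be:

```CSharp
private void TrySetIntValue(string value, Action<int> setter, string warningMessage)
{
    // Parse the raw string from the configuration file; warn instead of failing,
    // so a bad value is non-blocking, consistent with the rest of the loader.
    if (int.TryParse(value, out var parsed))
        setter(parsed);
    else
        Logger.LogWarning(warningMessage);
}
```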
Now equipped with the necessary information, our server is ready to start according to the provided specifications and to provide feedback in case of issues. Let's move on to the next step: connection management!
---
_Between us_
> An error in the configuration parameter is not blocking; a warning will be issued, but the service will still start! In case the configuration file is missing, the application's default parameters will be used.
_Still between us_
> Why choose a .conf file in text format instead of JSON or YAML? Firstly, for its simplicity: it's easier to write a first configuration file in text format than editing JSON or YAML, which can be problematic without a good editor. Moreover, text format allows comments, which is very convenient for self-documentation of the configuration file. In the future, supporting multiple file formats to manage configuration is not excluded.
---
## The Heart of PandApache3: The Connection Manager

The heart of our PandApache3 server lies in its connection manager, represented by the `ConnectionManager` object.
```CSharp
_ConnectionManager = new ConnectionManager();
```
This relatively simple object has two key attributes: `TcpListener` and `pipeline`.
```CSharp
public TcpListener Listener { get; set; }
private Func<HttpContext, Task> _pipeline;
```
The `TcpListener` is a fundamental component that allows clients to connect to our server via the TCP protocol. As for our `_pipeline` variable, it represents an asynchronous function that takes an HTTP context (`HttpContext`) as a parameter and returns a task (`Task`). In a figurative sense, our pipeline is a series of actions we want to execute on each HTTP request. Each action is performed by what we call middleware.
In fact, in the following code, we set up the middlewares to be used for each received HTTP request:
```CSharp
TerminalMiddleware terminalMiddleware = new TerminalMiddleware();
RoutingMiddleware routingMiddleware = new RoutingMiddleware(terminalMiddleware.InvokeAsync);
LoggerMiddleware loggerMiddleware = new LoggerMiddleware(routingMiddleware.InvokeAsync);
Func<HttpContext, Task> pipeline = loggerMiddleware.InvokeAsync;
```
So we have three middlewares here:
- TerminalMiddleware
- RoutingMiddleware
- LoggerMiddleware
Each middleware calls the next one in a well-defined chain (Logger calls Routing, then Routing calls Terminal). This chain of middlewares (our pipeline) is assigned to our connection manager (`ConnectionManager`).
Now that everything is set up, we can start our connection manager:
```CSharp
await _ConnectionManager.StartAsync(pipeline);
```
The `StartAsync` function simply configures our `TcpListener` to listen on the IP address and port defined in the configuration, and then starts it:
```CSharp
public async Task StartAsync(Func<HttpContext, Task> pipeline)
{
Listener = new TcpListener(ServerConfiguration.Instance.ServerIP, ServerConfiguration.Instance.ServerPort);
Logger.LogInfo($"Web server listening on {ServerConfiguration.Instance.ServerIP}:{ServerConfiguration.Instance.ServerPort}");
Listener.Start();
_pipeline = pipeline;
}
```
There you have it, our server is now started and ready to receive incoming connections.
---
_Between us_
> What the middlewares do is not crucial at the moment. What matters is that our `ConnectionManager`, responsible for handling incoming connections on its TCP listener, will pass them all through this chain of middlewares and in this order.
However, the names are quite self-explanatory, and you can guess the role of each middleware:
- Logger: Logs the incoming request.
- Routing: Directs the request to the correct resource.
- Terminal: The last middleware in the chain, which does nothing particular but is there.
_Still between us_
> A request that goes through the middlewares does so both on the way in and on the way back (in reverse order). In our example, this means the request is first logged by the first middleware, and then the obtained response is also logged by this same middleware now become the last in the chain.
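This in-and-out behaviour comes from each middleware doing work both before and after awaiting the next one. A minimal sketch (a simplified illustration, not PandApache3's actual LoggerMiddleware) could look like this:

```CSharp
public class LoggerMiddleware
{
    private readonly Func<HttpContext, Task> _next;

    public LoggerMiddleware(Func<HttpContext, Task> next)
    {
        _next = next;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        Logger.LogInfo("Request received");   // on the way in
        await _next(context);                 // rest of the pipeline runs here
        Logger.LogInfo("Response produced");  // on the way back
    }
}
```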
---
Thank you so much for exploring the inner workings of PandApache3 with me! Your thoughts and support are crucial in advancing this project. 🚀
Feel free to share your ideas and impressions in the comments below. I look forward to hearing from you!
Follow my adventures on Twitter [@pykpyky](https://x.com/PykPyky) to stay updated on all the news.
You can also explore the full project on [GitHub](https://github.com/MarieLePanda/PandApache3) and join me for live coding sessions on [Twitch](https://www.twitch.tv/pykpyky) for exciting and interactive sessions. See you soon behind the screen!
| pykpyky |
1,916,132 | Pergunte ao especialista - If/else ou switch | Sob que condições devo usar uma escada if-else-if em vez de um switch ao codificar uma ramificação... | 0 | 2024-07-09T22:03:19 | https://dev.to/devsjavagirls/pergunte-ao-especialista-ifelse-ou-switch-c71 | java | Sob que condições devo usar uma escada if-else-if em vez de um switch ao codificar uma ramificação com vários caminhos?
Resposta:
Em geral, use uma escada if-else-if quando as condições que controlam o processo de seleção não dependerem de um único valor.
Exemplo:
if(x < 10) // ...
else if(y != 0) // ...
else if(!done) // ...
This sequence cannot be recoded as a switch because all three conditions involve different variables, and different types. Which variable would control the switch?
You will also have to use an if-else-if ladder when testing floating-point values or other objects whose types are not valid in a switch expression.
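Conversely, when every branch depends on a single value of a valid switch type, a switch is the natural fit. A small illustrative sketch (hypothetical values):

```java
public class SwitchDemo {
    // Branching on one int value: a natural fit for switch.
    static String classify(int code) {
        switch (code) {
            case 0:  return "zero";
            case 1:
            case 2:  return "small";
            default: return "other";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(0)); // zero
        System.out.println(classify(2)); // small
        System.out.println(classify(9)); // other
    }
}
```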
| devsjavagirls |
1,916,133 | Navigating Common Git Errors: A Guide for Developers | As developers, we often encounter various errors when using Git. One such error involves attempting... | 0 | 2024-07-08T17:49:57 | https://dev.to/saint_vandora/navigating-common-git-errors-a-guide-for-developers-35bk | webdev, javascript, git, github | As developers, we often encounter various errors when using Git. One such error involves attempting to fetch all branches or create a new branch in a directory that is not a Git repository. In this article, we will explore a common scenario and provide a step-by-step guide on how to resolve it.
## The Scenario
You are working on a project and trying to fetch all branches or create a new branch using the following commands:
```bash
git fetch --all
git checkout -b dev-pilot
```
However, you encounter the following errors:
```bash
fatal: not a git repository (or any of the parent directories): .git
```
These errors indicate that the current directory is not recognized as a Git repository. Let's explore how to resolve this issue.
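Before anything else, you can confirm whether the current directory is inside a Git work tree; `git rev-parse` prints `true` inside a repository and fails with the same "not a git repository" message outside of one:

```bash
# Prints "true" inside a repository; exits non-zero outside of one.
git rev-parse --is-inside-work-tree
```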
## Step-by-Step Guide to Fixing the Errors
### Step 1: Determine Your Goal
Before fixing the errors, you need to decide whether you want to:
1. **Initialize a new Git repository** in your current directory.
2. **Navigate to an existing Git repository**.
3. **Clone an existing Git repository** and checkout a specific branch.
### Step 2: Fixing the Errors
### Option 1: Initialize a New Git Repository
If you want to start a new Git repository in your current directory, follow these steps:
1. **Initialize a New Git Repository:**
```bash
git init
```
2. **Add a Remote Repository:**
```bash
git remote add origin https://gitlab.com/your_username/your_repository.git
```
3. **Fetch All Branches:**
```bash
git fetch --all
```
4. **Checkout a New Branch:**
```bash
git checkout -b dev-pilot
```
This sequence of commands initializes a new Git repository, links it to a remote repository, fetches all branches from the remote, and creates a new branch named `dev-pilot`.
### Option 2: Navigate to an Existing Git Repository
If you have an existing Git repository and need to navigate to it, follow these steps:
1. **Navigate to the Repository Directory:**
```bash
cd path/to/your/repository
```
2. **Fetch All Branches:**
```bash
git fetch --all
```
3. **Checkout a New Branch:**
```bash
git checkout -b dev-pilot
```
By navigating to the correct directory, you ensure that you are working within the context of an existing Git repository.
### Option 3: Clone the Repository and Checkout a Branch
If you need to clone a repository and work on a specific branch, use the following steps:
1. **Clone the Repository and Checkout the Branch:**
```bash
git clone -b dev-pilot https://gitlab.com/your_username/your_repository.git
```
This command clones the repository and directly checks out the `dev-pilot` branch, saving you additional steps.
## Conclusion
Encountering errors while using Git is common, but with the right approach, they can be resolved quickly. Whether you need to initialize a new repository, navigate to an existing one, or clone a repository and checkout a branch, following the steps outlined above will help you overcome these issues and get back to coding efficiently.
By understanding the root cause of the error and taking appropriate action, you can ensure that your workflow remains smooth and productive.
Thanks for reading...
Happy coding! | saint_vandora |
1,916,134 | Building 100% reusable UI kits | Hey devs :) Introduction With 15+ years of UI kit building & design systems, my team is... | 0 | 2024-07-08T17:51:19 | https://dev.to/joshua_mcmillan_3e37d7136/building-100-reusable-ui-kits-1nj6 | javascript, webdev, react, programming | Hey devs :)
**Introduction**
With 15+ years of experience building UI kits & design systems, my team is launching a platform (free beta access) that allows developers to build consistent & fully functional UI kits / design systems in days instead of months. It gives you best practices with an advanced system design for reusing UI components, combined with access to 500+ open-source components that you can build anything with.
Yes, it's open-source tech with no vendor lock-in, which means you can export any part of your UI kit / project at any time. Our goal is to enhance your tech stack, not to lock you in.
**Anyways, why should you care?**
Well, first: what are the problems we are trying to solve, and how can we possibly build a UI kit this fast?
Reusability and consistency being the umbrella challenges that devs face on a daily basis.
For example, a few common issues…
- How do I rebrand my entire component library for a new project that has a totally different set of branding requirements?
- Collaborating with non-developers such as designers on available properties (colour tokens) to use is a pain
- How do I easily organise and inspect my library, to avoid rebuilding components that are already available (especially as a team)
- If the component has too many properties, it becomes hard to scale and reuse. Therefore encourages new components being made, which degrades the efficiency of the design system
- While component libraries such as Tailwind and Bootstrap are good off the shelf component libraries, they are typically hard and time consuming to customise to your own personal requirements, because they are built for certain use cases and styling
- The rise of design system and documentation tools such as Storybook and Zeroheight solves part of these issues, mainly communication between teams. However, the effectiveness of a design system is based on the system design itself, which none of these tools offer, so you still have to spend huge amounts of time and money developing a system that promotes reusability across your entire UI kit and design system code architecture.
**How does Symbols solve this challenges?**
A few key points…
- Aside from the 500+ components provided for you to build whatever you want, the components you build are 100% customisable; their properties can easily be overridden at any time
- Review and organise your component library into tags, to avoid building existing ones
- A highly advanced design system that does the heavy lifting of complex tasks at the code level, such as auto-generated type scales for both typography and spacing. This makes creating responsive designs much easier, as you no longer have to set up the spacing sequence for each component. Apply global design changes such as dark mode across your entire library, and more
- Use and fetch token names across your entire UI kit, which significantly reduces the lines of code = code becomes more reusable and scalable
- Easily build, test, and document any part of your project in isolation, including functions, pages, individual components and the colours in your design system
- Rebrand your entire UI kit/ project with automatic spacing/responsive changes, just by reconfiguring the design system menu
- With real-time collaboration and no-code / low-code tools, you can build your project with other developers and non-developers at the same time
- Avoid having to set up and maintain multiple design system tools/documents, as the platform's UX is set up so your project can easily be viewed as the documentation across all tech/non-tech teams
- Changes are seen instantly in the dev version of your web project before publishing to production
To note, the platform is highly configurable to your preferences. You do not need to follow the design system if you choose not to, and you can switch off certain settings of the design system, as it is treated as an overlay.
This month, we added canvas mode to the platform, making Symbols a "Figma for developers", with your entire functional/animated/interactive UI kit available on a canvas. That also means the ability to test your entire UI kit in different modes, such as left-to-right accessibility, responsive devices, global modes and more.
**Want to see more functionality and try it out?**
Here is a quick landing page I put together (I’m the marketing founder) that showcases Symbols in action via videos.
https://symbolsapp.framer.website/
Feedback is appreciated!
Any questions, let me know :) | joshua_mcmillan_3e37d7136 |
1,916,135 | Building the Foundation: Your Pathway to Mastery in Prompt Engineering. | Building the Foundation: Your Pathway to Mastery in Prompt Engineering Greetings, courageous... | 27,995 | 2024-07-08T17:51:27 | https://dev.to/ameet/building-the-foundation-your-pathway-to-mastery-in-prompt-engineering-oh3 | aws, promptengineering, genai, anthropic | **Building the Foundation: Your Pathway to Mastery in Prompt Engineering**
Greetings, courageous adventurer! You are about to embark on an intriguing journey through the domain of AI prompt engineering, poised to
learn, experiment, and innovate. This series serves as your compass, a powerful guide that will navigate you from novice to adept. Remember, each stride on this journey is a steppingstone towards mastering prompt engineering, a skill that you are absolutely capable of acquiring.
Though some of the initial code examples are from my book on prompt engineering (a revised edition will be released by the end of the month), the basic concepts of prompt engineering remain the same.
**We will apply these principles while using Bedrock and the foundation models to develop real-life products. Follow along with me here and other places where I will share my own product development experience.**
I am on a mission to build a tribe of AI product developers so that you rule the job market, not the other way around. A sure path to get a job fast if you follow along with me. I believe in you, and I am thrilled to walk alongside you on this exhilarating journey. Your fundamentals will be solid, your knowledge expansive, and your future as a prompt engineering expert bright. Here's to the journey ahead, and the mastery that awaits at the end!

**Here's a sneak peek at the treasures awaiting you in this journey!**
1. Super-Basic Prompts: Just like a baby's first words, we will start with the simple, foundational aspects of AI prompts.
2. The Recipe for a Prompt: Discover the ingredients of a successful prompt and how they blend together.
3. Placing Instructions at the Forefront of the Prompt: Understand the importance of clear, upfront instructions for garnering desired responses.
4. Prompts Aimed at Extracting Specific Info: Unleash the power of targeted prompts for precision extraction of information.
5. Zero-Shot Prompting: Delve into the mysterious world of zero-shot prompting and understand its intricacies.
6. One-Shot Prompting: Take a step further into one-shot prompting, witnessing the power of example-driven interactions.
7. Few-Shot Prompting: Traverse the nuanced landscape of few-shot prompting, where multiple examples guide your AI.
8. Setting a Role for ChatGPT: Play director in your own AI movie, setting roles and dictating performances.
9. Encouraging Questions in a Prompt: Discover how to spark curiosity within the AI model, encouraging a more dynamic exchange.
10. Explicit Prompts: Master the art of directness, commanding your AI to provide the exact response you seek.
11. Ambiguous Scenarios: Learn to navigate the foggy waters of ambiguity, drawing clarity from the AI's understanding.
12. Emulating a Dialogue: Step into the theatre of conversation, learning to orchestrate engaging back-and-forth with your AI.
13. Setting Your Own Role in a Prompt: Put yourself into the script, enhancing the personalization and relevance of the AI's responses.
14. And now, for the grand finale - an impressive array of 40 practice assignments, carefully distributed with 5 at the conclusion of each topic. This deliberate arrangement ensures that you thoroughly comprehend and master each topic before progressing to the next. These exercises are purposefully designed to cement your burgeoning skills, setting you on the path to true proficiency in the art of prompt engineering.
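As a small taste of topics 5 through 7 above, the practical difference between zero-shot and few-shot prompting is simply whether the prompt includes worked examples. A minimal sketch (the review texts and labels here are illustrative, not from the book):

```python
task = "Classify the sentiment of the review as positive or negative."

# Zero-shot: the instruction alone, with no worked examples
zero_shot = f"{task}\nReview: 'The battery died in a day.'\nSentiment:"

# Few-shot: the same instruction, preceded by a handful of worked examples
examples = [
    ("Loved it, works perfectly!", "positive"),
    ("Broke after one week.", "negative"),
]
demos = "\n".join(f"Review: '{r}'\nSentiment: {s}" for r, s in examples)
few_shot = f"{task}\n{demos}\nReview: 'The battery died in a day.'\nSentiment:"

print(few_shot)
```

The model sees the demonstrations as part of the same input text; nothing else changes between the two styles.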
Feel the buzz of anticipation yet? Let's forge ahead, for this journey is one of discovery, transformation, and immense learning. You're about to become an expert, and we can't wait to see the magic you will create. Onward, prompt engineer!
| ameet |
1,916,137 | Crafting an Effective Cover Letter 📨💯🚀 | Hello, everyone! ⭐ Today, we are diving into a topic that is a bit outside from my usual... | 27,996 | 2024-07-09T13:00:00 | https://dev.to/pachicodes/crafting-an-effective-cover-letter-2mj9 | career, beginners, discuss |
Hello, everyone! ⭐
Today, we are diving into a topic that is a bit outside of my usual discussions.
While technical skills are crucial, understanding the nuances of the job application process, specifically the creation of a compelling cover letter, is equally important, so today I want to share a bit of what worked for me.
If you prefer, I have a video with this content and you can watch it here: {%embed https://www.youtube.com/watch?v=L1YvX8R3iEI&t=1s %}
---
## 1. The Role of Cover Letters
Many applicants wonder about the necessity and content of cover letters. Though not always required, a cover letter provides a unique opportunity to showcase your personality beyond the resume's formal structure.
It's your space to narrate your professional story and connect more personally with potential employers.
---
## 2. Crafting Your Cover Letter with Tools
One of my top tips for writing cover letters is using tools like Grammarly. These tools help refine your writing and ensure you present your best self.
Start your cover letter by addressing the hiring manager by name, if possible, which adds a personal touch right from the beginning.
---
## 3. Introducing Yourself
Your cover letter should open with a brief introduction about yourself. For instance, mention your name, a bit about your background, and any personal interests that relate to the job or company values. This helps create a connection with the reader and sets the tone for the rest of the letter. In your resume you probably won't mention that you have 3 cats, but here it is okay to share this piece of information, to help people connect with you.
---
🌟 **Support Open Source Innovation** 🌟
I am happy to work for an Open Source community that is building an ecosystem of plugins for the JavaScript community. You can support and encourage us to continue developing resources and fostering a vibrant community with only one click:
👉 [Star Webcrumbs on GitHub](https://github.com/webcrumbs-community/webcrumbs) ⭐
---
## 4. Highlighting Your Experience and Values
Discuss your previous positions and how they align with the job you are applying for.
For example, I have worked as a developer advocate at GitHub, so I highlight what I achieved in that role and how it prepares me for the prospective job.
Also, mention aspects like support for diversity and inclusion if they align with the company’s values and it is something you care about.
---
## 5. Aligning with the Company's Culture
Tailoring your cover letter to fit the company's culture and values can make a big difference, but do not do this just for show; if you don't particularly care for a certain value, don't mention it.
Research the company and echo its language and priorities in your letter if they make sense to you. This demonstrates your interest and alignment with the company, making your application stand out.
---
## 6. Concluding with a Strong Call to Action
End your cover letter by reiterating your passion for the role and how your experiences make you an ideal fit. Encourage the hiring manager to consider your application and invite them to discuss your potential contribution to the company further.
---
## Testing New Tools
In my pursuit of effective cover letter writing, I tried an AI tool called [CoverLetterGPT](https://coverlettergpt.xyz), which utilizes AI to craft personalized cover letters. It's a novel approach that can add a creative twist to your applications. It can give you a really nice base for your cover letter, and you can just edit and build on that.
I found it on @vincanger profile, so shout-out to him!
---
### Final Thoughts
A well-crafted cover letter can significantly enhance your job application, providing a narrative that complements your resume. Use the opportunity to highlight how your unique skills and experiences make you the perfect candidate for the position.
[You can find my last cover letter here](https://github.com/pachicodes/cover-letter/blob/main/pachi-cover-letter.md)
If you found these tips helpful, or if you need assistance with your cover letter, feel free to comment below.
I am here to help!
Good luck with your application, and stay tuned for more insights into navigating the job application process.
---
### 🌟 **Let's Build Together!** 🌟
Do you believe in the power of Open Source and community-driven development? We are a group of developers building some cool tools for the JavaScript community!
Star us on GitHub to stay connected and contribute to our growing ecosystem of tools and resources.
👉 [Star Webcrumbs on GitHub](https://github.com/webcrumbs-community/webcrumbs)
**Thanks for reading**
Pachi 💚 | pachicodes |
1,916,138 | What a W.I.T.C.H! | Exploratory testing using a Witch. Erm - pardon? I love a heuristic. Does not mean I... | 0 | 2024-07-08T17:53:05 | https://dev.to/ajparadith/what-a-witch-5gmp | testing, exploratorytesting, webdev | ## Exploratory testing using a Witch.
Erm - pardon?
I love a heuristic. That does not mean I always remember them, so take my advice — I'm not using it.
Here’s an exploratory testing heuristic using the acronym W.I.T.C.H.
## **W.I.T.C.H Heuristic**
**W — Workflows**
- Explore common and edge workflows: Identify typical user journeys and uncommon paths. Test each step, transition, and decision point. If you have enough experience, then hell yeah — use your Oracle knowledge!
- Integration points: Check interactions between different system components and third-party services. Ensure data consistency and flow.
- Error Handling: Validate how the system handles errors, including user errors, system errors, and edge cases.
**I — Inputs**
- Variety of inputs: Test with different data types, lengths, and special characters. Include valid, invalid, and boundary inputs.
- Data formats: Ensure the system can handle different formats like JSON, XML, CSV, etc., if applicable.
- Input sources: Test inputs from various sources such as forms, files, databases, APIs, etc.
**T — Time**
- Time-based conditions: Check how the system behaves at different times, such as leap years, time zones, and daylight saving changes. Yes it’s 2024 but you might be surprised how often hidden third parties do not cope with time changes.
- Performance testing: Assess the system’s response time, throughput, and load handling capabilities.
- Timeouts and delays: Verify the handling of network delays, server timeouts, and user session timeouts.
- How much time do you have to do your exploratory testing? What is the priority based on that.
**C — Context**
- User context: Understand different user scenarios, roles, and permissions. Test features specific to each user type.
- Environment context: Test across various environments like development, staging, and production. Ensure compatibility with different operating systems, browsers, and devices.
- Business context: Ensure alignment with business rules, workflows, and objectives. Validate against business requirements and user expectations.
**H — Heuristics and History**
- Common heuristics: Use known heuristics like RCRCRC (Recent, Core, Risky, Configuration, Repaired, and Chronic) to guide testing. There are so many available online or via Ministry of Testing.
- Historical data: Analyse past bugs, user feedback, incident reports and anomaly detection logs — well any logs. Focus on areas with frequent issues or something you think is strange...
- Exploratory testing: Continuously adapt testing based on findings. Use exploratory charters to guide and document testing sessions.
I am a Staff Quality Engineer, awesome woman in tech, UN Women Delegate and I believe in the value of curiosity and empathy in testing. I do all my own stunts, love food, travel, my friends, family, music and art.
If you enjoyed this story, please share to help others find it. Feel free to leave a comment — I am open to insight, learning and discussion.
| ajparadith |
1,916,139 | Say Goodbye to Solo Coding 👋: Collaborate with Others on SocialCode 🤝 | We Are Live 🚀: Announcing the Launch of SocialCode We are thrilled to announce that... | 0 | 2024-07-08T17:55:53 | https://dev.to/socialcodeclub/say-goodbye-to-solo-coding-collaborate-with-others-on-socialcode-4ip7 | coding, collaboration, learning, startup | ## We Are Live 🚀: Announcing the Launch of SocialCode
We are thrilled to announce that SocialCode is now live! After many long days of hard work and dedication, we are excited to introduce a platform **designed to help developers collaborate, create, and connect**. SocialCode is more than just a tool; it’s a community where innovation thrives, knowledge is shared, and meaningful projects come to life.
---
## What is SocialCode?
Our mission is to provide **a space where programmers of all skill levels can come together to work on meaningful coding projects**. Whether you’re a seasoned developer looking to share your expertise, a student eager to learn through real-world projects, or someone looking to connect with like-minded individuals, SocialCode is the place for you.

---
## What Can You Do on Socialcode?
**1. Create and Share Project Ideas**
SocialCode members can create and share their project ideas or goals as posts. This feature **invites other members to join them on their coding journey, eliminating the solitude of coding alone**. Projects can be customized to fit specific needs, with options to select difficulty levels, relevant languages or frameworks, and more.
**2. Defined Roles Within Projects**
When a member requests to join a project, they can choose between four roles:
- **Developer**: Collaborate on coding assignments and bring the technical aspects to life.
- **Designer**: Contribute your UI/UX skills to enhance the project’s visual and user experience.
- **Product Guru**: Manage the to-do list, set priorities, and drive the project forward.
- **Mentor**: Share your wisdom, provide advice, and support the team with your expertise.

**3. Personalized Challenges**
Take on challenges that can be forked and tailored to your needs. Post these adjusted challenges as projects and invite others to collaborate. It’s a great way to learn and grow while working on something meaningful.

**4. Integrated Task Boards**
Every posted project on SocialCode comes with an integrated task board. This allows your team to easily track progress, manage tasks, and stay organized throughout the project lifecycle.

**5. Dedicated Discussion Spaces**
Communication is key to successful collaboration. That’s why each project includes its own dedicated discussion space, making it easier to share ideas, provide feedback, and solve problems together.
And that’s just the beginning. SocialCode is packed with features designed to enhance your coding experience and foster a collaborative, supportive community. Join us and explore all the ways we can help you succeed.
---
Join us today and become part of a vibrant, supportive, and innovative community. Together, we can push the boundaries of technology and create something truly remarkable.
Visit [SocialCode](https://socialcode.club/) to get started. We can’t wait to see what you’ll build with SocialCode! | socialcodeclub |
1,916,140 | Funções de escopo no Kotlin | As funções de escopo do no Kotlin é um recurso poderoso que nos permite melhor a legibilidade dos... | 0 | 2024-07-08T19:04:53 | https://dev.to/renatocardosoalves/funcoes-de-escopo-no-kotlin-2i1a | kotlin, programming | Scope functions in Kotlin are a powerful feature that lets us improve the readability of our code. They make it easy to execute blocks of code within the context of a specific object, eliminating the need for temporary variables. The five main scope functions are: **'let'**, **'run'**, **'with'**, **'apply'** and **'also'**. Let's explore each of them in detail and discuss when to use them.
## 1. let
The **'let'** function is used to execute a block of code in the context of an object. The object reference is passed as an argument (**'it'** by default) to the code block.
### Typical use
- Executing code on a non-null object.
- Chaining method calls.
- Limiting the scope of a variable.
**Example:**
```
val name: String? = "Kotlin"
name?.let {
println("The length of the name is ${it.length}")
}
```
In this example, the code block inside **'let'** will only be executed if the **'name'** variable is not null.
## 2. run
The **'run'** function is similar to **'let'**, but instead of passing the object reference as an argument, it uses **'this'** inside the code block. It is commonly used for initialization and for returning values computed from the object.
### Typical use:
- Initializing complex objects.
- Computing and returning a value based on the object.
**Example:**
```
val result = "Hello".run {
this.length + 5
}
println(result) // Output: 10
```

## 3. with
The **'with'** function is a scope function that is not an extension. It takes an object as an argument and executes a code block with **'this'** as the reference to the object.
### Typical use
- When you have a non-null object and want to call several of its functions or properties.
- Improving readability by avoiding multiple method calls on the same object.
**Example:**
```
val builder = StringBuilder()
with(builder) {
append("Hello, ")
append("World!")
}
println(builder.toString()) // Output: Hello, World!
```
## 4. apply
The **'apply'** function is used to configure an object. It returns the object itself, allowing method calls to be chained.
### Typical use:
- Configuring objects, especially when creating instances.
**Example:**
```
val person = Person().apply {
name = "John"
age = 30
}
```
In this example, **'apply'** is useful for configuring the **'Person'** object without having to explicitly return the object on each assignment.
## 5. also
The **'also'** function is similar to **'let'** and also uses **'it'** to reference the object, but it returns the object itself rather than the lambda result. This is useful for performing additional actions that do not change the object, such as logging or validation.
### Typical use
- Performing additional actions on an object without changing the result of the expression.
- Debugging and logging.
**Example:**
```
val name = "Kotlin".also {
println("The original name is $it")
}
println(name) // Overall output: "The original name is Kotlin", then "Kotlin"
```
### When to use each scope function
- **let**: Use when you need to execute a block of code on a non-null object and want to limit the scope of temporary variables.
- **run**: Use for initialization and when you need to compute and return a value based on the object.
- **with**: Use when you have a non-null object and want to call several of its functions or properties without repeating its name.
- **apply**: Use to configure objects during creation, especially when you need to return the configured object.
- **also**: Use to perform additional actions (such as logging) that do not change the object, keeping the original object as the result of the expression.
## Conclusion
Scope functions in Kotlin are valuable tools that help you write more concise and readable code. Each has its own specific purpose, and understanding when and how to use them can lead to cleaner, more efficient code. When choosing the appropriate scope function, consider the context of the object and what you want to achieve with the code block. | renatocardosoalves |
1,916,141 | What’s the Difference Between Kubernetes Namespaces and VCluster? | There are several options when it comes to containerized application orchestration and management.... | 0 | 2024-07-08T18:03:55 | https://dev.to/signadot/whats-the-difference-between-kubernetes-namespaces-and-vcluster-5d6n | kubernetes, microservices, containerorchestration, virtualclusters | There are several options when it comes to containerized application orchestration and management. Two of the most popular are Kubernetes namespaces and virtual clusters. But which is better for your use case? In this blog, we’ll be comparing the differences and similarities between these two solutions so you can make the best decision for your project. Read on to learn more, and click below to learn how Signadot can help you develop and test your microservices more easily and cost-effectively.
## What are Virtual Clusters?
Virtual clusters can be a powerful tool in container orchestration and management. A virtual cluster allows you to isolate resources in a Kubernetes cluster, giving you additional separation and control beyond the level that namespaces provide.
Using virtual clusters, you can create multiple logical clusters grouped together and set their own resources and configurations. This is great when you need separate environments for different teams or projects that need to run on the same underlying infrastructure.
Virtual clusters can isolate workloads at the cluster level, so there is no risk of interference when you have other independent clusters running on your infrastructure. This also allows you to allocate your resources based on usage priorities more easily. Virtual clusters are also highly scalable, allowing you to add more clusters easily. It also simplifies administration since you can test new features and perform upgrades without affecting your entire cluster.
Of course, they’re not without their drawbacks. Virtual clusters can be complex to manage when you have numerous clusters for a large, complex project. This can also reduce overhead efficiency.
## What are Kubernetes Namespaces?
Kubernetes namespaces are another solution for container orchestration and management. Namespaces also allow developers to isolate resources within a Kubernetes cluster, albeit at the application level rather than the more granular cluster level.
Namespaces allow you to organize clusters into virtual sub-clusters which provides easy resource-sharing and simplified management overall. Logical separation between applications allows teams to have their own isolated environment and lets them work without having to worry about interference or resource contention.
Namespaces also allow role-based access controls, which gives you more control over the access different users have to resources within a namespace. Namespaces also tend to be more simplistic than virtual clusters, allowing for minimal configuration.
However, the level of isolation that namespaces allow is limited compared to virtual clusters. If you need a more fine-tuned, granular level of isolation, namespaces may not suffice for your needs.
## How Are They Similar?
Kubernetes namespaces and virtual clusters are quite similar, as they serve similar purposes in container orchestration and management. Here are some of the features that both solutions share:
- **Isolation in a Kubernetes cluster** — Both virtual clusters and namespaces allow developers to partition resources within a Kubernetes cluster. This allows you to create separate environments for teams, projects and applications.
- **Resource allocation** — Both solutions allow you to define resource quotas and limits. This ensures resources are being used efficiently. The isolated environments also ensure there is no resource cannibalization or contention between teams.
- **Security** — Namespaces allow you to assign role-based access controls to ensure the right users can access resources. Virtual clusters can have independent clusters running on the same infrastructure, making it safer from unauthorized access.
- **Simplified management** — Both solutions are easy to implement and make management simpler. Since both solutions allow you to apply updates and configurations independently, it’s easier to test features, make upgrades, roll back changes and more without disruption.
- **Kubernetes support** — Of course, both solutions are supported within the Kubernetes ecosystem. They are well-tested, have community support and there is extensive documentation. Any developer using Kubernetes can typically find the information they need about namespaces or virtual clusters to implement them and use them efficiently.
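To make the resource-allocation point above concrete, this is what a namespace-level quota looks like in Kubernetes: the `hard` limits cap what all workloads in a namespace may request in total (the namespace name and numbers here are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a        # the quota applies to everything in this namespace
spec:
  hard:
    requests.cpu: "4"      # total CPU all pods may request
    requests.memory: 8Gi   # total memory all pods may request
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"             # cap on the number of pods
```

Once applied with `kubectl apply -f quota.yaml`, Kubernetes rejects new workloads that would push the namespace past these totals.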
## How are They Different?
Now that we've explored the similarities between Kubernetes namespaces and virtual clusters, let's take a look at the differences.
- **Level of isolation** — The key difference between Kubernetes namespaces and virtual clusters is the level of isolation they offer. Namespaces isolate workloads at the application level, which is handy for creating separate environments within a physical cluster. Virtual clusters take isolation even further by isolating at the cluster level, meaning you can have independent clusters running on the same underlying infrastructure without interfering with each other.
- **Resource allocation capabilities** — While both solutions do allow you to set resource quotas and limits, the more granular nature of virtual clusters gives you more fine-tuned control over resource allocation. This means there is a higher risk of resource contention when using namespaces, but the finer degree of control virtual clusters offer does come at the cost of being more complex to manage.
- **Complexity** — While namespaces may not isolate at a deeper level than the application level, this does make them easier to create and manage, with minimal configuration. Virtual clusters can be more complex to manage and require additional monitoring by comparison.
## Where Signadot Comes In
If you’re looking for the best way to create and test your microservices, Signadot is here to help. While most developer environment platforms in Kubernetes make full copies of your environment, Signadot doesn’t make full clones or copies. Instead, you can use Signadot to create multiple sandboxes in one environment, allowing you to spin up lightweight developer environments in your staging cluster within seconds.
Sandboxes allow you to easily develop and test your most recent dependencies that exist in a remote Kubernetes environment. Not only do these sandboxes save you time and money, but they also increase developer productivity. In short, you can ship microservices more effectively and efficiently.
## Get Started for Free Today!
Ready to learn more about Signadot and how our sandboxes can help you ship your microservices faster? Find out how to scale pre-merge testing with microservices and try our Kubernetes native platform for free today!
Originally posted on [Signadot's blog](https://www.signadot.com/blog/whats-the-difference-between-kubernetes-namespaces-and-cluster-v). | signadot |
1,916,143 | Hello World | Today, I joined a free online course on Python by the Kaniyam Foundation. And, this blog is to... | 0 | 2024-07-08T18:01:26 | https://dev.to/amotbeli/hello-world-9hp | programming, python, beginners, learning | Today, I joined a free online course on Python by the Kaniyam Foundation. And, this blog is to document my progress throughout the course.
In the first live session this evening, a lot of course details were shared. Helpful information was provided regarding the installation of Python on our machines. As is customary, we began by learning to print "Hello, World!" using the `print` statement. Towards the end of the session, the importance of free and open-source software (FOSS) was also emphasized. | amotbeli |
1,916,144 | Introducing CurlDock: Simplify API Testing with Docker and Curl | Hey Dev.to community! I'm excited to share CurlDock, a lightweight open-source tool I've been... | 0 | 2024-07-08T18:26:33 | https://dev.to/ietxaniz/introducing-curldock-simplify-api-testing-with-docker-and-curl-5ajo | curl, api, docker, devtools | Hey Dev.to community!
I'm excited to share CurlDock, a lightweight open-source tool I've been working on to simplify API testing and curl script management.
## What is CurlDock?
CurlDock combines the power of curl with the convenience of Docker, offering a user-friendly interface for creating, editing, and executing curl commands. Built with Rust and React, it's designed for simplicity and ease of use.

## Key Features
- 🐳 **Dockerized**: Easy setup and consistent environments
- 🖥️ **User-Friendly Interface**: Similar to Postman or Insomnia
- 📁 **Git-Friendly**: Store scripts as .sh files for easy version control
- 🔒 **Network Isolation**: Access endpoints within your Docker network
## Quick Start
Getting started with CurlDock is as easy as running a Docker command:
```bash
docker run --name curldock --rm -e SCRIPTSFOLDER="/scripts" -v $(pwd)/scripts:/scripts -e PORT="2080" -p 2080:2080 inigoetxaniz/curldock
```
Then, just open `http://localhost:2080` in your browser!
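Because CurlDock stores scripts as plain `.sh` files in the mounted folder, you can also seed that folder yourself before starting the container. A sketch (the filename and endpoint are illustrative; inside the Docker network you would target a service name rather than localhost):

```shell
# Create the scripts folder that gets mounted into the container
mkdir -p scripts

# A curl script for CurlDock to list and execute; the endpoint is a placeholder
cat > scripts/health-check.sh <<'EOF'
#!/bin/sh
curl -s -X GET "http://api:8080/health" -H "Accept: application/json"
EOF

chmod +x scripts/health-check.sh
```

Scripts created this way are ordinary shell files, so they version cleanly in Git alongside the rest of your project.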
## Why CurlDock?
- **Simplicity**: Focus on core functionality without unnecessary complexity
- **Local-First**: Designed for local development environments
- **No Authentication**: Reduced complexity for improved usability
- **Developer Control**: You manage your own security measures
## Try It Out!
I'd love for you to give CurlDock a spin and share your thoughts.
Check out the [GitHub repository](https://github.com/ietxaniz/curldock) for more details!
Happy API testing! 🚀
| ietxaniz |
1,916,145 | Animated text underline in CSS only | .animated-underline { position: relative; text-decoration:... | 0 | 2024-07-08T18:06:57 | https://dev.to/nureon22/animated-text-underline-in-css-only-4joo | css, web |

```css
.animated-underline {
position: relative;
text-decoration: none;
}
.animated-underline::after {
content: "";
position: absolute;
left: 50%;
top: 100%;
width: 100%;
height: 0.125em;
background-color: hsl(200deg, 100%, 50%);
transition: transform 320ms ease;
transform: translate(-50%, 0%) scaleX(0);
transform-origin: right;
}
.animated-underline:hover::after {
transform-origin: left;
transform: translate(-50%, 0%) scaleX(1);
}
```
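To see the effect, the class just needs to sit on an element that can host the `::after` pseudo-element, for example a link:

```html
<!-- Minimal markup to try the effect; the class name matches the CSS above -->
<a class="animated-underline" href="#">Hover over this link</a>
```

On hover the underline grows out from the left edge; when the pointer leaves, the default `transform-origin: right` makes it retract toward the right instead of snapping back.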
| nureon22 |
1,916,147 | Unleashing the Ultimate Style with Purple Brand: Hoodies, Shirts, and Jeans | In the ever-evolving world of fashion, Purple Brand has emerged as a beacon of style and quality.... | 0 | 2024-07-08T18:08:08 | https://dev.to/ayshanoor445/unleashing-the-ultimate-style-with-purple-brand-hoodies-shirts-and-jeans-3bec | react | In the ever-evolving world of fashion, [Purple Brand](https://purplebrandofficial.com/) has emerged as a beacon of style and quality. Renowned for its unique designs and impeccable craftsmanship, Purple Brand offers a range of clothing that caters to fashion enthusiasts who seek to make a statement. This article delves into the allure of Purple Brand, with a special focus on their iconic hoodies, stylish shirts, and trendsetting jeans. Discover why Purple Brand is a must-have in your wardrobe and how each piece can elevate your style quotient.
### The Rise of Purple Brand
Purple Brand has swiftly gained popularity among fashion-forward individuals. Founded on the principles of creativity and innovation, Purple Brand has carved a niche for itself in the competitive fashion industry. The brand's commitment to excellence is evident in every piece of clothing they produce, making them a favorite among celebrities and fashion aficionados alike.
#### The Iconic Purple Brand Hoodie
When it comes to hoodies, Purple Brand stands out for its unique blend of comfort, style, and durability. Crafted from high-quality materials, [Purple Brand hoodies](https://purplebrandofficial.com/purple-brand-hoodie/) are designed to provide ultimate comfort while ensuring you look effortlessly stylish. Whether you're heading out for a casual day or need a cozy layer for those chilly evenings, a Purple Brand hoodie is your go-to option.
### Key Features of Purple Brand Hoodies
**1. Premium Fabric**
Purple Brand hoodies are made from premium fabrics that offer a soft touch and superior warmth. The fabric's breathability ensures you stay comfortable, whether you're lounging at home or out and about.
**2. Unique Designs**
Each hoodie features unique designs that reflect contemporary fashion trends. From minimalist styles to bold patterns, Purple Brand offers a variety of options to suit your personal taste.
**3. Perfect Fit**
The hoodies are tailored to provide a perfect fit, enhancing your silhouette and ensuring you look sharp. With sizes ranging from small to extra-large, there's a hoodie for everyone.
### How to Style Your Purple Brand Hoodie
Styling a Purple Brand hoodie is a breeze, thanks to its versatile design. Pair it with jeans for a casual look or layer it under a leather jacket for an edgier vibe. For a sporty look, combine it with joggers and sneakers. The possibilities are endless, making it a valuable addition to your wardrobe.
#### The Appeal of Purple Brand Shirts
Purple Brand shirts are synonymous with sophistication and elegance. Whether you need a shirt for a formal event or a casual outing, Purple Brand has got you covered. The shirts are designed to make you stand out, with attention to detail that is second to none.
### Key Features of Purple Brand Shirts
**1. Superior Quality**
[Purple Brand shirts](https://purplebrandofficial.com/purple-brand-shirt/) are crafted from the finest materials, ensuring durability and comfort. The high-quality fabric feels luxurious against the skin and is built to last.
**2. Elegant Designs**
From classic button-downs to modern prints, Purple Brand shirts are designed to cater to diverse fashion preferences.
**3. Flawless Fit**
The shirts are tailored to perfection, offering a flawless fit that flatters your body shape. With precise cuts and stitching, Purple Brand shirts exude class and style.
### Styling Tips for Purple Brand Shirts
A Purple Brand shirt can be styled in numerous ways. For a formal look, pair it with tailored trousers and dress shoes. For a more relaxed outfit, wear it with chinos and loafers. The versatility of Purple Brand shirts makes them a wardrobe essential.
#### Trendsetting Purple Brand Jeans
Jeans are a staple in every wardrobe, and [Purple Brand jeans](https://purplebrandofficial.com/purple-brand-jeans/) take this classic piece to the next level. Known for their innovative designs and exceptional quality, Purple Brand jeans are a favorite among fashion enthusiasts.
### Key Features of Purple Brand Jeans
**1. Premium Denim**
Purple Brand jeans are made from premium denim that offers the perfect balance of stretch and structure. The superior fabric guarantees a snug fit and exceptional durability, providing long-lasting comfort.
**2. Distinctive Styles**
From classic cuts to contemporary styles, Purple Brand offers a wide range of jeans to suit every taste. Whether you prefer a slim fit or a relaxed look, there's a pair of Purple Brand jeans for you.
**3. Attention to Detail**
Every pair of Purple Brand jeans is crafted with meticulous attention to detail. From the stitching to the hardware, each element is designed to enhance the overall look and feel of the jeans.
### How to Style Your Purple Brand Jeans
Styling Purple Brand jeans is effortless, thanks to their versatile design. For a casual look, pair them with a Purple Brand hoodie and sneakers. For a more polished outfit, wear them with a Purple Brand shirt and loafers. Add a statement belt to complete the look. The timeless appeal of Purple Brand jeans makes them a versatile addition to any wardrobe.
### Frequently Asked Questions
**1. Where can I buy Purple Brand clothing?**
Purple Brand clothing is available on their official website and select high-end retailers. You can also find their collections on popular online fashion platforms.
**2. Are Purple Brand hoodies true to size?**
Yes, Purple Brand hoodies are designed to be true to size. It's recommended to refer to the size chart provided on their website for the best fit.
**3. How do I care for my Purple Brand jeans?**
To ensure the longevity of your Purple Brand jeans, it's best to follow the care instructions on the label. Generally, it's advisable to wash them inside out in cold water and avoid using harsh detergents.
**4. Do Purple Brand shirts shrink after washing?**
Purple Brand shirts are made from high-quality materials that are less prone to shrinking. However, to maintain their shape and fit, it's best to follow the washing instructions provided.
### Conclusion
Purple Brand has established itself as a leader in the fashion industry, thanks to its commitment to quality, style, and innovation. Whether you're looking for a cozy hoodie, an elegant shirt, or a pair of trendsetting jeans, Purple Brand offers something for everyone. By incorporating Purple Brand pieces into your wardrobe, you can effortlessly elevate your style and make a lasting impression. Explore the world of Purple Brand and discover why it's a favorite among fashion enthusiasts.
| ayshanoor445 |
1,916,148 | 1823. Find the Winner of the Circular Game | 1823. Find the Winner of the Circular Game Medium There are n friends that are playing a game. The... | 27,523 | 2024-07-08T18:09:43 | https://dev.to/mdarifulhaque/1823-find-the-winner-of-the-circular-game-5aa2 | php, leetcode, algorithms, programming | 1823\. Find the Winner of the Circular Game
Medium
There are `n` friends that are playing a game. The friends are sitting in a circle and are numbered from `1` to `n` in **clockwise order**. More formally, moving clockwise from the <code>i<sup>th</sup></code> friend brings you to the <code>(i+1)<sup>th</sup></code> friend for <code>1 <= i < n</code>, and moving clockwise from the <code>n<sup>th</sup></code> friend brings you to the <code>1<sup>st</sup></code> friend.
The rules of the game are as follows:
1. **Start** at the <code>1<sup>st</sup></code> friend.
2. Count the next `k` friends in the clockwise direction **including** the friend you started at. The counting wraps around the circle and may count some friends more than once.
3. The last friend you counted leaves the circle and loses the game.
4. If there is still more than one friend in the circle, go back to step `2` **starting** from the friend **immediately clockwise** of the friend who just lost and repeat.
5. Else, the last friend in the circle wins the game.
Given the number of friends, `n`, and an integer `k`, return _the winner of the game_.
**Example 1:**

- **Input:** n = 5, k = 2
- **Output:** 3
- **Explanation:** Here are the steps of the game:
1. Start at friend 1.
2. Count 2 friends clockwise, which are friends 1 and 2.
3. Friend 2 leaves the circle. Next start is friend 3.
4. Count 2 friends clockwise, which are friends 3 and 4.
5. Friend 4 leaves the circle. Next start is friend 5.
6. Count 2 friends clockwise, which are friends 5 and 1.
7. Friend 1 leaves the circle. Next start is friend 3.
8. Count 2 friends clockwise, which are friends 3 and 5.
9. Friend 5 leaves the circle. Only friend 3 is left, so they are the winner.
**Example 2:**
- **Input:** n = 6, k = 5
- **Output:** 1
- **Explanation:** The friends leave in this order: 5, 4, 6, 2, 3. The winner is friend 1.
**Constraints:**
- `1 <= k <= n <= 500`
**Follow up:**
Could you solve this problem in linear time with constant space?
**Solution:**
```php
class Solution {

    /**
     * @param Integer $n
     * @param Integer $k
     * @return Integer
     */
    function findTheWinner($n, $k) {
        $winner = 0;
        for ($i = 1; $i <= $n; $i++) {
            $winner = ($winner + $k) % $i;
        }
        return $winner + 1; // +1 because array index starts from 0
    }
}
```
**Contact Links**
If you found this series helpful, please consider giving the **[repository](https://github.com/mah-shamim/leet-code-in-php)** a star on GitHub or sharing the post on your favorite social networks 😍. Your support would mean a lot to me!
If you want more helpful content like this, feel free to follow me:
- **[LinkedIn](https://www.linkedin.com/in/arifulhaque/)**
- **[GitHub](https://github.com/mah-shamim)**
| mdarifulhaque |
1,916,149 | Python Introduction Course In Tamil By Kaniyam | Today's introductory Python class was great. Mr. Syed explained very clearly what we are going to... | 0 | 2024-07-08T18:14:10 | https://dev.to/rajeshmurugan95/python-introduction-course-in-tamil-by-kaniyam-g22 | python, kaniyam, learning, beginners | Today's introductory Python class was great. Mr. Syed explained very clearly what we are going to learn in the upcoming classes. Mr. Shrini also gave excellent clarification of many doubts. Thanks to them for taking the time to share their knowledge. | rajeshmurugan95 |
1,916,150 | 3 Amazing Productivity Apps for Linux | I use Ubuntu 24.04 as my daily driver on my main laptop and these are some of the best native Linux... | 0 | 2024-07-08T18:59:20 | https://dev.to/syedumaircodes/3-amazing-productivity-apps-for-linux-pac | linux, ubuntu, productivity, beginners | I use Ubuntu 24.04 as my daily driver on my main laptop and these are some of the best native Linux apps that I use on a daily basis.
## 1. Iotas
Iotas is a note taking application for Linux distributions that is available in the Ubuntu repositories or as a Flatpak that you can download from Flathub.
It's the Linux equivalent of Apple Notes for me, it doesn't have all of the features but is enough for me. Iotas allows you to write in plain text as well as markdown and offers an editing and viewing mode to see how your text looks.
Iotas allows you to categorize your notes using categories, which you can add to a note by right-clicking on it; Iotas will then create a new folder and store the note in it.
> Iotas also offers syncing to Nextcloud but I haven't used it since I don't use Nextcloud.

## 2. Foliate
Foliate is an epub reader for Linux distributions available as a Flatpak and in some distro's repositories as well. Foliate offers a great reading experience with multiple viewing options, themes and font options so that you can customize your reading experience.
Foliate is a great epub reader but is not limited to E-pub files alone: it can also open Mobipocket, Kindle, PDF, and CBZ files, which makes it an all-in-one solution for reading books and documents.
Foliate's features don't end there, it also has bookmarks, dictionary lookup, progress slider with chapter marks and much more.

## 3. Only office
Ubuntu and other Linux distributions comes with an office suite pre-installed which is known as LibreOffice. Although it's a solid option but it's not a replacement for Microsoft Office 365 for me.
I use Apache Only office as my desktop office suite of choice, It has _Waaay_ better compatibility with office 365 documents the Libreoffice and also has a nice PDF editor as well.
Only office is available on Windows and Mac as well and is not a Linux only application so you can try it as well on your computer and see if its worth switching over from office 365.

---
Which of these apps you liked the most? Share your thoughts in the comments and let me know how you plan to use any of them yourself. Don't forget to like and share if you found this helpful!
| syedumaircodes |
1,916,178 | Process kill commands | Kill PID ;kill process by process id Kill -9 PID ; kill forcefully a process Kill -15 ; kill... | 0 | 2024-07-08T18:20:43 | https://dev.to/m_abdullah/process-kill-commands-2k10 | redhat, centos, linux, processkillin | Kill PID ;kill process by process id
Kill -9 PID ; kill forcefully a process
Kill -15 ; kill gracefully a process
pkill NAMEOFProcess ; kill by name of process | m_abdullah |
1,916,213 | Understanding Proprietary and Open Source Software | Introduction: Software plays a crucial role in our daily lives, but not all software is the same.... | 0 | 2024-07-08T18:26:50 | https://dev.to/jakvel/understanding-proprietary-and-open-source-software-4m6 | beginners, opensource | Introduction:
Software plays a crucial role in our daily lives, but not all software is the same. There are two main types: proprietary software and open source software. Each has its own characteristics, benefits, and limitations. This blog will explain what these types of software are, their differences, and why they matter.

Proprietary Software:
Proprietary software is software that is only available to selected users. Users get this software in a ready-to-use format called binary form. They do not have access to the source code, which is the code written by programmers that makes the software work. Only the owner, or proprietor, of the software can fix issues, provide updates, and offer support.

There are many limitations for users of proprietary software:
* Users cannot share the software with others.
* Users cannot modify or change the software to suit their needs.
Examples of proprietary software include Microsoft Windows and Adobe Photoshop. These programs often come with good support and regular updates, but users have less freedom to customize or share them.
Open Source Software
Open source software is different because it is available for free and users have access to the source code. This means anyone can see how the software works, make changes to it, and share it with others. Open source software is not just for use; it encourages users to study, modify, and improve it.

Benefits of open source software include:
* Fewer errors because many people can find and fix bugs.
* Diverse ideas and contributions from a wide community.
* Faster development and more stable products.
* Openness to new changes and improvements from the community.

The General Public License (GPL) provides four key freedoms to users:
* Use the software for any purpose.
* Study and modify the software.
* Distribute the original software.
* Distribute modified versions of the software.
Open source software emphasizes software freedom. It believes that software, which is a mix of knowledge and science, should be open and accessible to all humans, not owned by one person or organization.
Opportunities for Monetization
While open source software is free, there are still ways to make money from it:
* Providing services for the software, such as installation or customization.
* Offering online or offline support and charging for it.
* Charging for customizations made to the open source software.
History of GNU/Linux
The history of open source software is rich and interesting. Richard M. Stallman (RMS) started the movement for software freedom in the 1980s while working at the MIT AI Lab. He created the GNU Project, which stands for "GNU's Not Unix," to ensure the four software freedoms.
The GNU Project began with tools like compilers, editors, languages, network tools, servers, databases, and more. Andrew S. Tanenbaum wrote a book on operating systems design that inspired Linus Torvalds to create the Linux kernel, combining it with GNU tools to form what we now call GNU/Linux.
Conclusion
Understanding the differences between proprietary and open source software helps us appreciate the choices we have. Proprietary software offers controlled, supported environments, while open source software offers freedom, collaboration, and continuous improvement. Both have their place in the world of technology, and knowing their strengths and weaknesses allows us to make better decisions for our software needs.
Call to Action
Feel free to share your thoughts on proprietary vs. open source software in the comments below. If you found this blog helpful, share it with others to spread the knowledge!
Author Bio
Jayaram is a materials researcher with a passion for exploring different types of software. Jayaram is especially interested in how open source software can be used for materials discovery and materials-based data science, pushing the boundaries of innovation and collaboration in these fields.
Reference: [YouTube video](https://www.youtube.com/watch?v=mYIeblLsd38)
 | jakvel |
1,916,214 | The Ultimate Guide to Yoga Stretches for Improved Flexibility | Flexibility is a vital component of overall health and fitness, and yoga is one of the most effective... | 0 | 2024-07-08T18:27:11 | https://dev.to/samer_samer_5201fdb838bce/the-ultimate-guide-to-yoga-stretches-for-improved-flexibility-56c3 | Flexibility is a vital component of overall health and fitness, and yoga is one of the most effective methods to enhance it. This ultimate guide to yoga stretches will help you understand how yoga can improve your flexibility, reduce tension, and contribute to a more balanced and healthy lifestyle. Whether you are a beginner or a seasoned yogi, this guide will provide you with insights and practical steps to deepen your practice.
The Importance of Flexibility
Why Focus on Flexibility?
Flexibility improves range of motion, reduces the risk of injuries, enhances muscle coordination, and can even alleviate chronic pain. For those who spend many hours at a desk or engaged in repetitive tasks, flexibility exercises can counteract the stiffness and discomfort associated with prolonged inactivity or monotonous physical activities.
How Yoga Enhances Flexibility
Yoga stretches not only elongate the muscles but also enhance the elasticity of the connective tissues surrounding the muscles and bones. Regular practice can lead to significant improvements in how you move and feel on a daily basis.
Fundamental Yoga Stretches for Beginners
Starting with the Basics
[Cat-Cow Stretch (Marjaryasana-Bitilasana)](https://blogking.uk/the-best-lower-back-yoga-stretches/)
Benefits: Enhances spinal flexibility and relieves tension in the torso.
How to: Begin on your hands and knees. Alternate between arching your back towards the ceiling (Cat) and lifting your chest and tailbone towards the sky (Cow).
Downward-Facing Dog (Adho Mukha Svanasana)
Benefits: Stretches the hamstrings, calves, and spine.
How to: Start on all fours, then lift your hips up and back, forming an inverted V-shape with your body.
Standing Forward Bend (Uttanasana)
Benefits: Stretches the hamstrings and calves; calms the mind.
How to: From a standing position, bend forward from the hip joints, not from the waist, with hands on the ground or your legs.
Gentle Expansion
Warrior I (Virabhadrasana I)
Benefits: Stretches the chest, lungs, shoulders, neck, belly, and groin.
How to: Step one foot back and bend the front knee to a 90-degree angle, raising your arms above your head.
Child’s Pose (Balasana)
Benefits: Gently stretches the hips, thighs, and ankles; calms the brain.
How to: Kneel on the floor, touch your big toes together, sit on your heels, then lay your torso down between your thighs and stretch your arms forward.
Intermediate Yoga Stretches for Increased Flexibility
Deepening Your Practice
Pigeon Pose (Eka Pada Rajakapotasana)
Benefits: Stretches the thighs, groins, and psoas.
How to: Start from all fours, bring your right knee forward and place it more or less behind your right wrist, extend your left leg back.
Cobra Pose (Bhujangasana)
Benefits: Stretches the chest while strengthening the spine and shoulders.
How to: Lie prone on the floor, stretch your legs back, tops of the feet on the floor, and lift your chest off the floor by straightening your arms.
Seated Forward Bend (Paschimottanasana)
Benefits: Stretches the spine, shoulders, and hamstrings.
How to: Sit on the floor with legs extended forward, exhale and bend forward from the hip joints.
Challenges and Considerations
Importance of Breathing: Always maintain a smooth and even breath; never hold your breath.
Don’t Rush: Moving slowly into and out of poses enhances muscular control and prevents injuries.
Advanced Yoga Stretches for Peak Flexibility
Pushing the Boundaries
King Pigeon Pose (Kapotasana)
Benefits: A deep hip and chest opener.
How to: From pigeon pose, bend your back leg, reach back with your arms, and catch your foot or ankle.
Monkey Pose (Hanumanasana)
Benefits: Intense hamstring stretch and groin opener.
How to: From a kneeling position, extend one leg forward and lower the hips towards the floor, keeping the back leg straight and long.
Safety Tips
Warm-Up: Always warm up with gentler stretches or light physical activity before attempting deep stretches.
Listen to Your Body: Never force a posture that causes pain. Yoga is about harmony and mindfulness, not competition.
Conclusion
Incorporating yoga stretches into your routine can significantly improve your flexibility, health, and well-being. This guide has provided you with tools to begin or deepen your yoga practice for better flexibility. Remember, consistency is key. Regular practice, combined with proper technique and awareness, will enable you to reap the maximum benefits of yoga. Embrace the journey of yoga as a path not just to physical flexibility but to mental and spiritual growth as well.
Enjoy your practice and the many benefits it brings!
| samer_samer_5201fdb838bce | |
1,916,216 | Showing more Article info on Daily.dev | Daily.dev is a very good extension that helps us aggregate news from several sources. When... | 0 | 2024-07-08T18:31:50 | https://dev.to/jacktt/showing-more-article-info-on-dailydev-5351 | 
Daily.dev is a very good extension that helps us aggregate news from several sources.
When browsing news, I usually scan the `Title -> Thumbnail -> Description`. However, the current view of Daily.dev has only `Title` and `Thumbnail` in Grid view and only `Title` in Listing view. This requires me to click on an article to read more in a popup, which consumes more reading time.
Fortunately, Daily.dev is open source, so we can submit feature requests or even customize the design to suit our needs.
In this case, I have submitted a feature request on the [dailydotdev/apps](https://github.com/dailydotdev/apps) repository and also implemented a new design that serves my needs. You can review my pull request here: [pull/2060](https://github.com/dailydotdev/apps/pull/2060).
They have mentioned that they need to pass this request to the Design team before reviewing my merge request or implementing a new UI.
In the meantime, you can pull my request, build it, and install it locally using these easy steps:
### Step 1: Clone my code
```bash
git clone --branch feat/show-more-metadata git@github.com:huantt/daily-dot-dev-apps.git
```
### Step 2: Build
```shell
pnpm install
cd packages/extension
pnpm run build:chrome
```
The output will be located at `packages/extension/dist/chrome`.
### Step 3: Install
- Open Chrome.
- Click the Extension button > Manage Extensions. Alternatively, you can enter the following URL directly: [chrome://extensions](chrome://extensions/).

- Enable `Developer Mode`.

- Click on `Load unpacked`.

- Point to `packages/extension/dist/chrome`.
Open a new tab, and you will see that all articles in Grid view or Listing View now have a Title, Description, and Thumbnail.

I hope that it's helpful for you, and I also hope that Daily.dev releases a new UI soon.
{% github huantt/daily-dot-dev-apps %}
| jacktt | |
1,916,218 | Creating a DynamoDB Table and Setting Up IAM Access Control | Introduction to DynamoDB Tables Amazon DynamoDB is a fully managed NoSQL database service provided by... | 0 | 2024-07-08T18:33:40 | https://dev.to/rashmitha_v_d0cfc20ba7152/creating-a-dynamodb-table-and-setting-up-iam-access-control-4f1k | **_Introduction to DynamoDB Tables_**
Amazon DynamoDB is a fully managed NoSQL database service provided by AWS (Amazon Web Services). It offers high performance, seamless scalability, and low latency for applications needing consistent, single-digit millisecond response times. DynamoDB is particularly well-suited for applications requiring fast and predictable performance with seamless scalability.
**_Architecture:_**

**_Step 1:Create a DynamoDB table for student details _**
Navigate to the DynamoDB dashboard and click “Create table”, as show below.

- When creating your DynamoDB table, you need to choose a name for the table and specify a Partition key.
- The Partition key determines the partition in which the data is stored, while the Sort key helps in sorting the data within the partition based on its value.
- Once you have named your table and assigned a partition key, navigate to the “Table settings” section.
- Finally, click the orange "Create table" button to initiate the table creation process.
**_Step2 :Add items to DynamoDB table_**
Choose your recently created table from the list. In the left panel, click on “Explore items” and then select “Create item” to begin creating a new item within the table.

We are adding 6 student detail to the table.

**_Step 3: Launch EC2 with IAM role to scan DynamoDB table_**
Launching an EC2 instance with an IAM role that allows it to scan a DynamoDB table involves several steps, including creating an IAM role with appropriate permissions, launching the EC2 instance with that IAM role, and writing code on the EC2 instance to interact with DynamoDB.
In the next screen, give your role a name and click “Create role.”
**_step 4: Attach Policy_**
In the permissions tab, click "Attach policies directly".

Search for and attach the AmazonDynamoDBReadOnlyAccess policy (or create a custom policy with specific dynamodb:Scan permissions).
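
If you prefer least privilege over the broad managed policy, a custom policy that allows only `dynamodb:Scan` on a single table might look like the following (the account ID, region, and table name here are placeholders you would replace with your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:Scan",
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/StudentDetails"
    }
  ]
}
```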
**_Step 5: Use the AWS CLI in the EC2 instance to scan the DynamoDB table_**
Navigate to the EC2 dashboard and select your EC2 instance. Then connect to your instance by selecting one of these options:

I selected EC2 Instance Connect. We are now connected to our instance and ready to scan the table using the following command.
```shell
aws dynamodb scan --table-name <table_name> --region <region_name>
```

**_Benefits:_**
_Scalability_: DynamoDB scales automatically to handle any amount of traffic and storage.
_Performance_: Provides single-digit millisecond latency for read and write operations.
_Security_: IAM enables granular access control to DynamoDB resources, enhancing security posture.
**_Conclusion:_**
Setting up a DynamoDB table and configuring IAM access control ensures your application’s data storage is scalable, performant, and secure. By following best practices in defining table schemas, creating IAM policies, and integrating services, you establish a robust foundation for building cloud-native applications on AWS. This architecture supports modern application requirements with flexibility and resilience in data management.
| rashmitha_v_d0cfc20ba7152 | |
1,916,221 | Colecciones y bases de dinámicas en base a campos de tipo fecha | En algunas ocasiones deseamos generar bases de datos y colecciones dinámicas en base a una... | 0 | 2024-07-08T19:38:13 | https://dev.to/avbravo/colecciones-y-bases-de-dinamicas-en-base-a-campos-de-tipo-fecha-45d6 | En algunas ocasiones deseamos generar bases de datos y colecciones dinámicas en base a una fecha.
Por ejemplo generar una base datos para cada año y las colecciones por mes y por cada empresa. De manera que contamos con una mejor clasificación de los documentos , lo que genera un mejor desempeño de la aplicación al distribuir los documentos en varias bases de datos y colecciones.
En el ejemplo hipotético asuma que cuenta con un modelo como el siguiente:
```
@Entity
public class Venta {

    @Id(autogeneratedActive = AutogeneratedActive.ON)
    private Long idventa;

    @Column
    private Long idempresa;

    @Column
    private Double total;
}
```
Example:
* Database: **ventas_2024db**
Collections:
```
transaccion_1_enero
transaccion_2_enero
transaccion_1_febrero
transaccion_2_febrero
```
* Database: **ventas_2025db**
Collections:
```
transaccion_1_enero
transaccion_2_enero
transaccion_1_febrero
transaccion_2_febrero
```
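
The naming scheme above can be sketched in Java (the `CollectionNamer` helper and the `fecha` date value are assumptions for illustration; they are not part of any framework):

```java
import java.time.LocalDate;
import java.time.format.TextStyle;
import java.util.Locale;

public class CollectionNamer {

    // Builds names like "ventas_2024db": one database per year.
    static String databaseName(LocalDate fecha) {
        return "ventas_" + fecha.getYear() + "db";
    }

    // Builds names like "transaccion_1_enero": one collection per company and month.
    static String collectionName(long idempresa, LocalDate fecha) {
        String mes = fecha.getMonth()
                .getDisplayName(TextStyle.FULL, new Locale("es"))
                .toLowerCase();
        return "transaccion_" + idempresa + "_" + mes;
    }

    public static void main(String[] args) {
        LocalDate fecha = LocalDate.of(2024, 1, 15);
        System.out.println(databaseName(fecha));      // ventas_2024db
        System.out.println(collectionName(1, fecha)); // transaccion_1_enero
    }
}
```

The resulting names could then be passed to your MongoDB driver's `getDatabase(...)` and `getCollection(...)` calls when storing each document.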
| avbravo | |
1,916,222 | Step-by-Step to AI Engineering Mastery | Hey there, your boy Nomadev is back with another AI blog. I've been away for a bit, but now I'm here... | 0 | 2024-07-08T18:38:41 | https://dev.to/thenomadevel/step-by-step-to-ai-engineering-mastery-1mpn | webdev, javascript, ai, rag | Hey there, your boy Nomadev is back with another AI blog. I've been away for a bit, but now I'm here to stay with regular updates on all things AI!
Generative AI is the buzzword these days, heralding a new era in technology. Its transformative potential is reshaping our interactions with digital systems and opening up new possibilities across diverse sectors.
Today, we're diving into the essential stages every aspiring AI engineer should climb to achieve mastery.
Let's explore the five key levels you'll need to conquer on your journey to becoming a top-tier AI engineer.

---
## Level 1: Basic Q&A Bots

Begin your AI engineering journey by understanding and building Basic Q&A Bots. This level involves:
- **Understanding LLMs**: Learn how large language models (LLMs) function as sophisticated next-word prediction engines.
- **Developing Simple Bots**: Create bots that respond to user queries with precise answers.
- **Foundation Skills**: Gain skills in programming languages like Python and familiarize yourself with AI frameworks like TensorFlow or PyTorch.
---
## Level 2: Conversational Bots

Hope you've mastered the basic Q&A bots! Now, let's crank up your AI skills with Conversational Bots. This next step will make your bots much more dynamic:
- **Building on Context**: Unlike basic bots, conversational bots utilize the "context window" that incorporates previous dialogues. This enables the AI to maintain a continuous thread throughout interactions, making conversations flow more naturally and reducing the chances of forgetting earlier exchanges.
- **Enhanced Engagement**: Equip your bots to engage more deeply by understanding and responding to the context of ongoing conversations.
- **Dialogue Management**: Learn to manage and structure conversations using AI, enhancing user experience by making interactions smoother and more intuitive.
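
A minimal sketch of the context-window idea in plain Python (no LLM attached; the class and its API are invented for illustration):

```python
from collections import deque

# A "context window": keep the last few turns so the bot's next prompt
# includes earlier dialogue instead of starting from scratch.
class ConversationContext:
    def __init__(self, max_turns: int = 5):
        self.history = deque(maxlen=max_turns)  # oldest turns drop off automatically

    def add(self, role: str, text: str) -> None:
        self.history.append((role, text))

    def prompt(self, user_message: str) -> str:
        # This is the text a real LLM would be asked to continue.
        lines = [f"{role}: {text}" for role, text in self.history]
        lines.append(f"user: {user_message}")
        return "\n".join(lines)

ctx = ConversationContext(max_turns=4)
ctx.add("user", "Hi, I'm Ana.")
ctx.add("assistant", "Nice to meet you, Ana!")
print(ctx.prompt("What's my name?"))
```

Because the earlier turns are included in the prompt, the model has what it needs to answer "What's my name?" correctly.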
---
## Level 3: Retrieval Augmented Generation (RAG)

Advance to integrating external knowledge bases with your bots:
- **External Databases**: Learn to incorporate external data sources to provide more accurate and informed responses.
- **Custom Knowledge Integration**: Develop skills to tailor knowledge bases specific to business or technical needs.
- **Advanced Query Handling**: Enhance your bots to handle complex queries by retrieving and generating responses from a broad range of data.
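
Here is a toy version of the retrieval step (simple word overlap stands in for the vector-embedding search a real RAG system would use, and the documents are made up):

```python
# Toy retrieval for RAG: return the stored document that shares the most
# words with the query; its text would then be injected into the LLM prompt.
DOCS = [
    "DynamoDB is a managed NoSQL database on AWS.",
    "Foliate is an epub reader for Linux.",
    "RAG augments an LLM with knowledge retrieved from external sources.",
]

def _tokens(text: str) -> set:
    # Lowercase and strip basic punctuation so "AWS." matches "aws".
    return set(text.lower().replace(".", " ").replace("?", " ").split())

def retrieve(query: str) -> str:
    q = _tokens(query)
    return max(DOCS, key=lambda doc: len(q & _tokens(doc)))

print(retrieve("which database does AWS offer"))
# → DynamoDB is a managed NoSQL database on AWS.
```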
---
## Level 4: Agents and Function Calling

At this stage, expand your bots into AI agents capable of performing specific functions:
- **Building Agents**: Create AI agents that can perform tasks beyond simple conversations, such as making reservations or conducting searches.
- **Function Calling**: Implement the capability for bots to call external functions and services, enhancing their utility.
- **Tool Integration**: Learn to integrate various tools and APIs to broaden the functional scope of your applications.
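
The function-calling flow can be sketched like this (the tool name and the JSON shape are illustrative assumptions, not any particular vendor's API):

```python
import json

# Instead of replying in prose, the model emits a structured "call";
# the application dispatches it to a real function by name.
def get_time(city: str) -> str:
    return f"It is 12:00 in {city}."  # stub standing in for a real lookup

TOOLS = {"get_time": get_time}

# Pretend the LLM produced this JSON string:
model_output = '{"tool": "get_time", "arguments": {"city": "Panama City"}}'

call = json.loads(model_output)
result = TOOLS[call["tool"]](**call["arguments"])
print(result)  # → It is 12:00 in Panama City.
```

The function's return value would normally be fed back to the model so it can phrase a final answer for the user.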
---
## Level 5: LLM Operating Systems

Reach the pinnacle of AI engineering by mastering LLM Operating Systems:
- **System Design**: Understand how to design and structure large-scale AI systems that integrate various components of AI.
- **Multi-agent Systems**: Develop systems where multiple AI agents interact and perform coordinated tasks.
- **Future Technologies**: Stay ahead of the curve by researching and implementing the latest advancements in AI technology.
---
Climbing the AI engineering ladder is a challenging yet rewarding journey. Each level builds upon the last, requiring a solid understanding and mastery before moving on to more complex applications.
As you progress, remember that persistence and continuous learning are your best friends. The field of AI is always evolving, and staying updated with the latest trends and technologies is crucial.
Join AI communities, participate in forums, and collaborate with fellow AI enthusiasts. Sharing knowledge and experiences can provide new insights and open doors to exciting opportunities.
Don't be afraid to experiment and push the boundaries of what's possible. The skills and knowledge you acquire at each level will not only make you a better AI engineer but also equip you to tackle complex problems and create innovative solutions.
I hope you found this guide helpful and inspiring. If you have any questions or need further guidance, drop a comment below or reach out to me [directly on X](https://x.com/thenomadevel). Let's keep this conversation going!
See you in the next post, and happy coding! | thenomadevel |
1,916,224 | Data Science? Never Heard Of It. | So, the truth is, last year when I first picked up Data Science, I had no idea what it was. I always... | 0 | 2024-07-12T13:00:00 | https://dev.to/cyrildotexe/data-science-never-heard-of-it-4epf | datascience, beginners | So, the truth is, last year when I first picked up Data Science, I had no idea what it was.
I always wanted to take programming more seriously. I tried a little HTML and CSS a few years back but stopped and I did some simple robotics coding during secondary (high) school.
Anyway, last year I had some time on my hands, and my Aunt suggested I take this course my cousins did. She couldn’t remember the specifics, but she knew they were teaching Python.
We contacted them, and that was the first time I heard about Data Science. So, what is Data Science, and should you learn it?
I like to define it as “finding the meaning behind data”. Whether you should learn it might depend on the kind of person you are.
Although I’m sure there are other branches when it comes to programming, I mainly consider these four when someone tells me that they are interested in getting into tech, specifically programming: Data Science, Web/Software Development (yes, I group them together — sue me), Game development and Cybersecurity.
Usually it’s Web/Software Dev that they’re thinking about, but I always try to mention Data Science as well so they at least have another option to consider.
So, which should you pick?
First, ask yourself: are you An Explorer or A Builder?
An explorer loves diving into the unknown, uncovering hidden insights, and searching for answers in uncharted territories. If that sounds like you, then you might be drawn to Data Science, where you’ll analyze data to discover patterns and meanings.
A builder, on the other hand, thrives on creating and constructing. Whether you're starting from scratch, collaborating on existing projects, or finding solutions on Stack Overflow, you love bringing ideas to life. If this resonates with you, then Web/Software Development might be your ideal path.
I use the word “might” because it’s not set in stone. Personally, I have interests in all four branches, although some more than others. I’m currently focusing on Data Science, but I do plan on exploring Web/Software Dev later down the line.
This is tech. Don’t be afraid to pivot and explore. There’s so much of it to see.
Stay Curious. | cyrildotexe |
1,916,226 | Unveiling the Secrets of CSS Preprocessors: Master the Art of Web Styling with Sass | Introduction In the dynamic world of web development, styling with CSS is fundamental to creating... | 0 | 2024-07-08T18:44:59 | https://dev.to/dienik/desvendando-os-segredos-dos-pre-processadores-css-domine-a-arte-da-estilizacao-web-com-sass-5bn7 |
**Introduction**
In the dynamic world of web development, styling with CSS is fundamental to creating attractive, functional interfaces. But what if you could go beyond plain CSS and explore new possibilities with preprocessors? In this talk, we will explore Sass, one of the most popular and powerful preprocessors, and learn to master the art of web styling.
**What is Sass?**
Sass (Syntactically Awesome Stylesheets) is a CSS preprocessor that extends the capabilities of CSS, making your code more organized, efficient, and reusable. It offers features such as variables, mixins, functions, and intuitive nesting, which make it easier to create and maintain complex styles.
**Benefits of Sass**
- **Organization and Readability:** Reduce code repetition and make your CSS files easier to understand and maintain.
- **Code Reuse:** Create reusable blocks of code (mixins) to save time and ensure consistency across the entire project.
- **Variables and Functions:** Use variables to store values and functions to perform calculations, making your code more dynamic and flexible.
- **Intuitive Nesting:** Nest selectors intuitively, making it easier to organize the hierarchy of your style rules.
- **Compilation to Plain CSS:** Sass compiles your code into plain CSS, compatible with all browsers.
### Exploring Sass Features
#### Variables
Store reusable values in variables to keep the code consistent and easy to update.
```scss
$primary-color: #007bff; // Primary theme color
$secondary-color: #6c757d; // Secondary theme color
```
#### Mixins
Create reusable blocks of code to avoid repetition and keep your styles organized.
```scss
@mixin button($color) {
  background-color: $color;
  color: white;
  padding: 10px 20px;
  border: none;
  cursor: pointer;
}
```
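For illustration (an example added here, not from the original talk), the mixin could be applied with `@include` to produce two button variants from the theme variables defined earlier:

```scss
.btn-primary {
  @include button($primary-color);
}

.btn-secondary {
  @include button($secondary-color);
}
```

Each `@include` expands to the full set of declarations from the mixin, with `$color` substituted.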
#### Functions
Perform calculations and more complex operations inside your CSS code.
```scss
@use "sass:color";

// Lighten a color by mixing in white; $amount is the percentage of white.
// Named tint to avoid shadowing Sass's built-in lighten().
@function tint($color, $amount) {
  @return color.mix(white, $color, $amount);
}
```
#### Intuitive Nesting
Nest selectors in a more intuitive, organized way, making the code easier to read.
```scss
.container {
  padding: 20px;
  background-color: $primary-color;

  .header {
    color: white;
    font-size: 24px;
    margin-bottom: 10px;
  }

  .content {
    color: #333;
    line-height: 1.5;
  }
}
```
### Hands-On Demo: Building a Contact Form with Sass
1. **Creating Variables for Colors and Basic Styles:**
```scss
$primary-color: #007bff; // Primary theme color
$secondary-color: #6c757d; // Secondary theme color
$text-color: #333; // Text color
$font-family: 'Arial', sans-serif;
$form-padding: 20px;
$form-border-radius: 5px;
$form-border: 1px solid #ccc;
$input-padding: 10px;
$input-border: 1px solid #ccc;
$input-border-radius: 4px;
$button-padding: 10px 20px;
$button-border: none;
$button-border-radius: 4px;
```
2. **Styling the Form:**
```scss
.form {
  padding: $form-padding;
  border: $form-border;
  border-radius: $form-border-radius;
}
```
3. **Styling the Labels:**
```scss
.form label {
  display: block;
  margin-bottom: 5px;
  font-weight: bold;
}
```
4. **Styling the Input Fields:**
```scss
.form input[type="text"],
.form input[type="email"],
.form input[type="password"] {
  width: 100%;
  padding: $input-padding;
  border: $input-border;
  border-radius: $input-border-radius;
  margin-bottom: 10px;
}
```
5. **Styling the Submit Button:**
```scss
.form button {
  background-color: $primary-color;
  color: white;
  padding: $button-padding;
  border: $button-border;
  border-radius: $button-border-radius;
  cursor: pointer;
}
```
6. **Styling Focus on the Input Fields:**
```scss
.form input[type="text"]:focus,
.form input[type="email"]:focus,
.form input[type="password"]:focus {
  outline: none;
  border-color: $primary-color;
}
```
7. **Styling the Submit Button on Hover:**
```scss
.form button:hover {
  background-color: darken($primary-color, 10%);
}
```
8. **Styling Error Messages:**
```scss
.form .error {
  color: red;
  margin-bottom: 10px;
}
```
### Conclusion
In this hands-on demo, we built a complete contact form using Sass. We saw how Sass variables, mixins, and functions can simplify and streamline the styling process, resulting in code that is more organized, efficient, and reusable.
| dienik |
1,916,227 | Automating CNPJ Data Collection with Python Using ReceitaWS | In this article, we will explore how to automate CNPJ data collection using Python. We will use the... | 0 | 2024-07-08T18:45:34 | https://dev.to/madrade1472/automacao-de-coleta-de-dados-de-cnpj-com-python-utilizando-receitaws-5e39 | In this article, we will explore how to automate the collection of CNPJ (Brazilian company registry) data using Python. We will use the ReceitaWS API to obtain detailed information about each CNPJ and store that data in a CSV file. This guide is aimed at developers who want to automate the extraction and storage of CNPJ information.
**Libraries Used**
- `requests`: to make HTTP requests to the ReceitaWS API.
- `pandas`: for data manipulation and formatting.
- `time`: to control the interval between requests and respect the API's rate limits.
**Complete Code**
Using https://receitaws.com.br
```python
import requests
import pandas as pd
import time


def get_cnpj_data(cnpj):
    """
    Makes a request to the ReceitaWS API to fetch data for a CNPJ.

    Args:
        cnpj (str): The CNPJ number to look up.

    Returns:
        dict: The CNPJ data as JSON if the request succeeds.
        None: If the request fails.
    """
    url = f"https://receitaws.com.br/v1/cnpj/{cnpj}"
    response = requests.get(url)
    if response.status_code == 200:
        return response.json()
    else:
        return None


def process_cnpj_data(cnpj_list):
    """
    Processes a list of CNPJs, fetching the data for each one while
    respecting the API's request limit.

    Args:
        cnpj_list (list): List of CNPJ numbers to look up.

    Returns:
        list: List of CNPJ records obtained from the API.
    """
    data = []
    for cnpj in cnpj_list:
        cnpj_data = get_cnpj_data(cnpj)
        if cnpj_data:
            data.append(cnpj_data)
        time.sleep(20)  # Wait 20 seconds between requests to respect the limit of 3 requests per minute
    return data


def format_data_for_sql(data):
    """
    Formats the CNPJ data into a pandas DataFrame suitable for export.

    Args:
        data (list): List of CNPJ records obtained from the API.

    Returns:
        pandas.DataFrame: DataFrame with the formatted data.
    """
    df_rows = []
    for item in data:
        row = {
            'cnpj': item.get('cnpj', ''),
            'nome': item.get('nome', ''),
            'fantasia': item.get('fantasia', ''),
            'logradouro': item.get('logradouro', ''),
            'numero': item.get('numero', ''),
            'complemento': item.get('complemento', ''),
            'cep': item.get('cep', ''),
            'bairro': item.get('bairro', ''),
            'municipio': item.get('municipio', ''),
            'uf': item.get('uf', ''),
            'telefone': item.get('telefone', ''),
            'email': item.get('email', ''),
        }
        df_rows.append(row)
    df = pd.DataFrame(df_rows)
    return df


def save_to_csv(df, file_name='cnpj_data.csv'):
    """
    Saves the DataFrame to a CSV file.

    Args:
        df (pandas.DataFrame): DataFrame containing the CNPJ data.
        file_name (str): Name of the CSV file to save.
    """
    df.to_csv(file_name, index_label='ID')


if __name__ == "__main__":
    cnpj_list = [
        '10869047000140',  # Test CNPJ generated with 4devs
        # Add more CNPJs as needed
    ]
    cnpj_data = process_cnpj_data(cnpj_list)
    df = format_data_for_sql(cnpj_data)
    save_to_csv(df)
```
**Code Walkthrough**
`get_cnpj_data(cnpj)`:
- Makes a request to the ReceitaWS API with the given CNPJ.
- Returns the data as JSON if the request succeeds, or `None` if it fails.
`process_cnpj_data(cnpj_list)`:
- Iterates over a list of CNPJs, calling `get_cnpj_data` for each one.
- Respects the API's request limit by waiting 20 seconds between requests.
- Returns a list of CNPJ records.
`format_data_for_sql(data)`:
- Formats the CNPJ data into a pandas DataFrame with the columns relevant for storing the information.
- Prepares the data for export.
`save_to_csv(df, file_name='cnpj_data.csv')`:
- Saves the DataFrame to a CSV file with the given name.
`if __name__ == "__main__":` block:
- Defines a list of CNPJs to process.
- Calls the functions to process the data and save it to a CSV file.
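Since each lookup costs a request plus a 20-second wait, it can help to reject malformed input before calling the API. The helper below is a hypothetical addition, not part of the original script: it only checks the shape of the number (14 digits after stripping punctuation), not the official CNPJ check digits.

```python
import re

def looks_like_cnpj(value: str) -> bool:
    """Return True if the string has the shape of a CNPJ (14 digits).

    Accepts formatted input such as '10.869.047/0001-40'. This does NOT
    verify the official check digits, only the digit count.
    """
    digits = re.sub(r"\D", "", value)
    return len(digits) == 14

# Filter a candidate list before calling the API
candidates = ["10869047000140", "10.869.047/0001-40", "12345"]
valid = [c for c in candidates if looks_like_cnpj(c)]
```

A list filtered this way can then be passed to `process_cnpj_data` as usual.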
| madrade1472 | |
1,916,234 | Understanding Docker Architecture: A Beginner's Guide | For most beginners, Docker's architecture is hard to understand. In this blog post, you will learn... | 0 | 2024-07-08T18:50:48 | https://dev.to/rakibtweets/understanding-docker-architecture-a-beginners-guide-fi1 | docker, dockerarchitecture, devops, containerization | For most beginners, Docker's architecture is hard to understand. In this blog post, you will learn everything about Docker architecture and how it works, as easily as possible, with proper visual aids.
### What is Docker?
> Docker is a platform for building, running, and shipping applications in a consistent manner. Docker solves the famous problem: “It works on my machine!”
### Now let’s talk about Docker Engine
Docker uses a `client`-`server` architecture. That means Docker has a `client` component that talks to the `server` component using a `REST API`.

The server is called the Docker Engine; it sits in the background and takes care of building and running Docker containers.
Take a look at the image below

Technically, a container is a process. It’s a special kind of process, and we are not going to dive into its internals in this blog. For now, we should know that unlike virtual machines (VMs), containers don’t contain a full-blown operating system. Instead, all containers on a host share the operating system (OS) of the host. More accurately, all of the host’s containers share the host’s kernel.
## What is Kernel ?
A kernel is the core of an operating system (OS), like the engine of a car. The kernel manages applications and hardware resources.
Every OS (macOS, Windows, Linux) has its own kernel/engine, and these kernels have different APIs. That’s why we cannot run a macOS application on Windows or Linux: under the hood, the application talks to the kernel of the underlying OS. This means that on Linux machines we can only run Linux containers, because those containers need Linux. On a Windows machine, however, we can run both Windows and Linux containers, because Windows 10 ships with a custom-built Linux kernel. Docker on macOS uses a lightweight Linux virtual machine (VM).

## Now let’s see the Docker architecture all together

Let's explain some terms that we have seen in the docker architecture.
### What is docker daemon ?
The Docker daemon (`dockerd`) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.
### What is the docker client?
The Docker client (`docker`) is the primary way that many Docker users interact with Docker. When you use commands such as `docker run`, the client sends these commands to the Docker daemon (`dockerd`), which carries them out. The `docker` command uses the Docker API. The Docker client can communicate with more than one daemon.
### Docker registries
A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker looks for images on Docker Hub by default. You can even run your own private registry.
When you use the `docker pull` or `docker run` commands, Docker pulls the required images from your configured registry. When you use the `docker push` command, Docker pushes your image to your configured registry. | rakibtweets |
1,916,235 | ntegrating SignalR with Angular for Seamless Video Calls: A Step-by-Step Guide | Hi everyone! Today, we'll explore how to create a simple video call web app using WebRTC, Angular,... | 0 | 2024-07-08T18:52:51 | https://dev.to/abdullah_khrais_97a2c908d/ntegrating-signalr-with-angular-for-seamless-video-calls-a-step-by-step-guide-mdh | signalr, api, angular, videocall | Hi everyone! Today, we'll explore how to create a simple video call web app using WebRTC, Angular, and ASP.NET Core. This guide will walk you through the basics of setting up a functional application with these technologies. WebRTC enables peer-to-peer video, voice, and data communication, while SignalR will handle the signaling process needed for users to connect. We'll start with the backend by creating a .NET Core web API project and adding the SignalR NuGet package. Check out the repository links at the end for the complete code.
## Backend Setup
- **Step 1: Create a .NET Core API Project**
First, create a .NET Core web API project and install the SignalR package:
`dotnet add package Microsoft.AspNetCore.SignalR.Core`
- **Step 2: Create the VideoCallHub Class**
Next, create a class VideoCallHub:
```
using Microsoft.AspNetCore.SignalR;
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
namespace Exam_Guardian.API
{
public class VideoCallHub : Hub
{
private static readonly ConcurrentDictionary<string, string> userRooms = new ConcurrentDictionary<string, string>();
public override async Task OnConnectedAsync()
{
await base.OnConnectedAsync();
await Clients.Caller.SendAsync("Connected", Context.ConnectionId);
}
public override async Task OnDisconnectedAsync(Exception exception)
{
if (userRooms.TryRemove(Context.ConnectionId, out var roomName))
{
await Groups.RemoveFromGroupAsync(Context.ConnectionId, roomName);
}
await base.OnDisconnectedAsync(exception);
}
public async Task JoinRoom(string roomName)
{
await Groups.AddToGroupAsync(Context.ConnectionId, roomName);
userRooms.TryAdd(Context.ConnectionId, roomName);
await Clients.Group(roomName).SendAsync("RoomJoined", Context.ConnectionId);
}
public async Task SendSDP(string roomName, string sdpMid, string sdp)
{
if (userRooms.ContainsKey(Context.ConnectionId))
{
await Clients.OthersInGroup(roomName).SendAsync("ReceiveSDP", Context.ConnectionId, sdpMid, sdp);
}
else
{
await Clients.Caller.SendAsync("Error", "You are not in a room");
}
}
public async Task SendICE(string roomName, string candidate, string sdpMid, int sdpMLineIndex)
{
if (userRooms.ContainsKey(Context.ConnectionId))
{
await Clients.OthersInGroup(roomName).SendAsync("ReceiveICE", Context.ConnectionId, candidate, sdpMid, sdpMLineIndex);
}
else
{
await Clients.Caller.SendAsync("Error", "You are not in a room");
}
}
}
}
```
- **Step 3: Register the Hub in Program.cs**
Register the SignalR hub and configure CORS in `Program.cs`:
```
builder.Services.AddSignalR();
builder.Services.AddCors(options =>
{
options.AddPolicy("AllowAngularDev", builder =>
{
builder.WithOrigins("http://localhost:4200", "http://[your_ip_address]:4200")
.AllowAnyHeader()
.AllowAnyMethod()
.AllowCredentials();
});
});
app.UseCors("AllowAngularDev");
app.UseEndpoints(endpoints =>
{
endpoints.MapHub<VideoCallHub>("/videoCallHub");
endpoints.MapControllers();
});
```
With this, the backend setup for SignalR is complete.
## Frontend Setup
- **Step 1: Create an Angular Project**
Create an Angular project and install the required packages:
`npm install @microsoft/signalr cors express rxjs simple-peer tslib webrtc-adapter zone.js`
- **Step 2: Create a service called `SignalRService`**
Inside this service, add the following code:
```
import { Injectable } from '@angular/core';
import { HubConnection, HubConnectionBuilder } from '@microsoft/signalr';
import { Subject } from 'rxjs';
@Injectable({
providedIn: 'root'
})
export class SignalRService {
private hubConnection: HubConnection;
private sdpReceivedSource = new Subject<any>();
private iceReceivedSource = new Subject<any>();
private connectionPromise: Promise<void>;
sdpReceived$ = this.sdpReceivedSource.asObservable();
iceReceived$ = this.iceReceivedSource.asObservable();
constructor() {
this.hubConnection = new HubConnectionBuilder()
.withUrl('http://[your_local_host]/videoCallHub')
.build();
this.connectionPromise = this.hubConnection.start()
.then(() => console.log('SignalR connection started.'))
.catch(err => console.error('Error starting SignalR connection:', err));
this.hubConnection.on('ReceiveSDP', (connectionId: string, sdpMid: string, sdp: string) => {
this.sdpReceivedSource.next({ connectionId, sdpMid, sdp });
});
this.hubConnection.on('ReceiveICE', (connectionId: string, candidate: string, sdpMid: string, sdpMLineIndex: number) => {
this.iceReceivedSource.next({ connectionId, candidate, sdpMid, sdpMLineIndex });
});
}
private async ensureConnection(): Promise<void> {
if (this.hubConnection.state !== 'Connected') {
await this.connectionPromise;
}
}
async joinRoom(roomName: string): Promise<void> {
await this.ensureConnection();
return this.hubConnection.invoke('JoinRoom', roomName)
.then(() => console.log(`Joined room ${roomName}`))
.catch(err => console.error('Error joining room:', err));
}
async sendSDP(roomName: string, sdpMid: string, sdp: string): Promise<void> {
await this.ensureConnection();
return this.hubConnection.invoke('SendSDP', roomName, sdpMid, sdp)
.catch(err => {
console.error('Error sending SDP:', err);
throw err;
});
}
async sendICE(roomName: string, candidate: string, sdpMid: string, sdpMLineIndex: number): Promise<void> {
await this.ensureConnection();
return this.hubConnection.invoke('SendICE', roomName, candidate, sdpMid, sdpMLineIndex)
.catch(err => {
console.error('Error sending ICE candidate:', err);
throw err;
});
}
}
```
- **Step 3: Create a component called `VideoCallComponent`**
Inside the component’s TypeScript file (`video-call.component.ts`), add the following code:
```
import { Component, OnDestroy, OnInit } from '@angular/core';
import { Subscription } from 'rxjs';
import { SignalRService } from '../../../core/services/video-call-signal-r.service';
@Component({
selector: 'app-video-call',
templateUrl: './video-call.component.html',
styleUrls: ['./video-call.component.css']
})
export class VideoCallComponent implements OnInit, OnDestroy {
roomName: string = 'room1'; // Change this as needed
private sdpSubscription: Subscription;
private iceSubscription: Subscription;
private localStream!: MediaStream;
private peerConnection!: RTCPeerConnection;
constructor(private signalRService: SignalRService) {
this.sdpSubscription = this.signalRService.sdpReceived$.subscribe(data => {
console.log('Received SDP:', data);
this.handleReceivedSDP(data);
});
this.iceSubscription = this.signalRService.iceReceived$.subscribe(data => {
console.log('Received ICE Candidate:', data);
this.handleReceivedICE(data);
});
}
async ngOnInit(): Promise<void> {
await this.signalRService.joinRoom(this.roomName);
this.initializePeerConnection();
}
ngOnDestroy(): void {
this.sdpSubscription.unsubscribe();
this.iceSubscription.unsubscribe();
this.endCall();
}
async startCall() {
try {
await this.getLocalStream();
if (this.peerConnection.signalingState === 'stable') {
const offer = await this.peerConnection.createOffer();
await this.peerConnection.setLocalDescription(offer);
await this.signalRService.sendSDP(this.roomName, 'offer', offer.sdp!);
console.log('SDP offer sent successfully');
} else {
console.log('Peer connection not in stable state to create offer');
}
} catch (error) {
console.error('Error starting call:', error);
}
}
async getLocalStream() {
this.localStream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
const localVideo = document.getElementById('localVideo') as HTMLVideoElement;
localVideo.srcObject = this.localStream;
this.localStream.getTracks().forEach(track => this.peerConnection.addTrack(track, this.localStream));
}
initializePeerConnection() {
this.peerConnection = new RTCPeerConnection();
this.peerConnection.ontrack = (event) => {
const remoteVideo = document.getElementById('remoteVideo') as HTMLVideoElement;
if (remoteVideo.srcObject !== event.streams[0]) {
remoteVideo.srcObject = event.streams[0];
console.log('Received remote stream');
}
};
this.peerConnection.onicecandidate = (event) => {
if (event.candidate) {
this.signalRService.sendICE(this.roomName, event.candidate.candidate, event.candidate.sdpMid!, event.candidate.sdpMLineIndex!)
.then(() => console.log('ICE candidate sent successfully'))
.catch(error => console.error('Error sending ICE candidate:', error));
}
};
}
async handleReceivedSDP(data: any) {
const { connectionId, sdpMid, sdp } = data;
try {
const remoteDesc = new RTCSessionDescription({ type: sdpMid === 'offer' ? 'offer' : 'answer', sdp });
await this.peerConnection.setRemoteDescription(remoteDesc);
if (sdpMid === 'offer') {
const answer = await this.peerConnection.createAnswer();
await this.peerConnection.setLocalDescription(answer);
await this.signalRService.sendSDP(this.roomName, 'answer', answer.sdp!);
console.log('SDP answer sent successfully');
}
} catch (error) {
console.error('Error handling received SDP:', error);
}
}
async handleReceivedICE(data: any) {
const { connectionId, candidate, sdpMid, sdpMLineIndex } = data;
try {
await this.peerConnection.addIceCandidate(new RTCIceCandidate({ candidate, sdpMid, sdpMLineIndex }));
console.log('ICE candidate added successfully');
} catch (error) {
console.error('Error handling received ICE candidate:', error);
}
}
endCall() {
if (this.peerConnection) {
this.peerConnection.close();
console.log('Call ended');
}
}
}
```
- **Step 4: Add the template markup**
In the component’s HTML file, add the following code:
```
<div>
<button (click)="startCall()">Start Call</button>
</div>
<video id="localVideo" autoplay muted></video>
<video id="remoteVideo" autoplay></video>
```
| abdullah_khrais_97a2c908d |
1,916,243 | help | https://idx.google.com/majalah-5125170 | 0 | 2024-07-08T18:57:42 | https://dev.to/yayan/help-48fd | https://idx.google.com/majalah-5125170 | yayan | |
1,916,245 | Building an Angular E-commerce App: A Step-by-Step Guide to Understanding and Refining Requirements | Introduction Imagine you're tasked with building a web application. The project manager... | 28,006 | 2024-07-08T19:28:56 | https://dev.to/cezar-plescan/building-an-angular-e-commerce-app-a-step-by-step-guide-to-understanding-and-refining-requirements-hoe | ## Introduction
Imagine you're tasked with building a web application. The project manager hands you a list of requirements, but some of them are a bit... fuzzy. What do you do? Panic? Start coding blindly? Or maybe, you take a step back and start asking questions. In this series of articles, we'll follow the journey of an Angular developer as they tackle the challenge of turning vague requirements into a real, working product. We'll explore the thought processes, the questions they ask, the decisions they make, and ultimately, how they create a successful e-commerce experience for their users.
### The Approach
I'll take a pragmatic, hands-on approach, starting with a Minimum Viable Product (MVP) and gradually adding complexity and features. Along the way, I'll tackle real-world challenges, explore different architectural patterns, and share practical tips for building robust and scalable Angular applications.
### What you'll learn
By the end of this series, you'll gain a deeper understanding of:
- How to analyze and refine e-commerce requirements.
- The importance of user stories and acceptance criteria.
- The role of collaboration between developers and stakeholders.
- Architectural decisions for data flow, state management, and UI design.
- How to prioritize features and deliver value early and often.
So, buckle up and join me on this journey as we unravel the complexities of building a real-world Angular application, one architectural decision at a time.
## Describing the application
Let’s imagine I’m tasked with building an Angular e-commerce application from scratch. The product owner (PO) provides the following requirements for the Minimum Viable Product (MVP):
- **Display a list of products with their images, names, and prices**.
- **Allow users to view product details (description, images, specifications) on a separate page**.
- **Enable users to find or filter products easily**.
- **Users should be able to add products to their cart from both the product listing and product details pages**.
- **Allow users to proceed to checkout from the cart**.
This is the initial set of requirements, and more are expected to follow. The primary goal is to deliver a functional application as soon as possible.
#### Understanding the MVP
In simple terms, an MVP is the most basic version of a product that can be released to users to gather feedback and validate the concept. It focuses on core features and functionality, allowing for rapid development and iterative improvement. You can learn more about MVPs here: https://www.productplan.com/glossary/minimum-viable-product/.
#### My plan of action
With these initial MVP requirements in hand, my next steps are as follows:
1. **Thorough analysis** - Carefully review each requirement to grasp the application's core purpose and intended functionality.
2. **Clarify and refine** - Identify any unclear, conflicting, or missing details in the requirements. If needed, I'll initiate discussions with stakeholders to resolve these ambiguities and ensure a shared understanding of the project scope.
3. **Task breakdown** - Analyze the requirements to identify the data involved, the data retrieval mechanisms, and the necessary user interactions. This will help me break down the work into actionable tasks.
4. **Effort estimation** - Estimate the time and effort required for each task and communicate this with stakeholders. This allows for prioritization and potential adjustments to the scope if necessary.
5. **Implementation** - Once the requirements are clearly understood and the tasks are defined, I'll begin the actual development process.
In this article, I'll focus on the first two steps: **thorough analysis** and **clarification of the requirements**.
## Understanding and clarifying the requirements
The purpose of the app is straightforward: to build a simple e-commerce website.
I'll start by analyzing the first requirement: _"Display a list of products with their images, names, and prices."_
This statement represents a **functional requirement**. In essence, it defines a specific action or capability the system must perform to meet user needs and business objectives. It focuses on **_what_** the system does, **_not how_** it does it.
_**Note**: For a deeper dive into functional requirements, you can refer to this guide: https://qat.com/guide-functional-requirements/._
In our case, the requirement clearly states the expected outcome: _displaying a product list_. However, it leaves out crucial details about how the list should be presented and where the data will come from. To fully understand this requirement, I’ll need to clarify a few points with the PO.
### Clarification and decisions points
While the requirement states the desired outcome, it raises several questions I need to address:
- **visual presentation** - How should the list be displayed? Should it be a grid, a simple list, or something else?
- **data source** - Where will I get the product data from? Is there an existing API, or do I need to use mock data for now?
- **performance** - How many products are we dealing with? If it's a large dataset, should I implement features like pagination or infinite scrolling to optimize performance?
- **responsiveness** - How should the list adapt to different screen sizes (desktop, mobile, etc.)?
Let's delve into each of these questions and how we might approach them.
#### Visual Presentation
Since the requirement doesn't specify the layout, I'll consult with the PO. There are two likely scenarios.
**Scenario 1: The PO has design guidelines**. Ideally, the PO might have already collaborated with a designer or have a clear vision for the product list's appearance. In this scenario, they would provide detailed specifications on the layout, styling, and any specific interactions. This makes my job easier as I can directly translate these guidelines into code.
**Scenario 2: The PO needs guidance**. More often, especially in early MVP stages, the PO may need guidance. My approach would be:
1. Propose options: I'll suggest common layouts like grid and list, potentially with simple mockups to visualize them.
2. User Experience (UX) Focus: We'll discuss the pros and cons of each layout regarding UX. For instance, grids are visually appealing but might be less space-efficient on mobile devices.
3. Guided Decision: By presenting options and their implications, I'll help the PO make an informed decision.
In this case, let's say the PO decides on a grid layout with a maximum of 4 items per row. We'll also leverage the Angular Material UI library for its pre-built components and consistent styling.
#### Data source
Next, I need to figure out where the product data is coming from, since the requirement doesn't mention it. Do we have a backend API ready, or should I use mock data for now? Also, what specific data fields (e.g., descriptions, categories) should be included?
I'll inquire about the availability of a backend API.
**API exists**: If there's a ready-to-use API, I'll need to understand the endpoint, response format, and any authentication requirements.
**No API Yet**: If the backend is still under development, I'll propose using mock data for the initial implementation. This allows us to move forward with the frontend while the backend is being built.
Since there's no backend API yet, we've decided to hardcode the products on the frontend.
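Since we've agreed to hardcode products for now, here's a minimal sketch of what that mock data might look like. The `Product` shape and the sample values are hypothetical — the actual fields should come out of the PO discussion:

```typescript
// Hypothetical product shape for the MVP; the real fields should be agreed with the PO.
interface Product {
  id: number;
  name: string;
  price: number; // in the shop's base currency
  imageUrl: string;
}

// Hardcoded mock catalog, standing in for the future backend API.
const MOCK_PRODUCTS: Product[] = [
  { id: 1, name: 'Wireless Mouse', price: 24.99, imageUrl: 'assets/mouse.jpg' },
  { id: 2, name: 'Mechanical Keyboard', price: 89.99, imageUrl: 'assets/keyboard.jpg' },
  { id: 3, name: 'USB-C Hub', price: 39.5, imageUrl: 'assets/hub.jpg' },
];
```

If components consume this data through a service rather than importing the array directly, swapping in a real API later only changes the data source, not the consumers.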
#### Performance considerations
The number of products can significantly impact how we display the list. I'll ask the PO about the expected product volume.
**Scenario 1: Small List**. If we're dealing with a limited number of products (e.g., 10-20), a simple list or grid should suffice.
**Scenario 2: Large List**. If we anticipate hundreds or thousands of products, I'll need to consider performance optimization techniques like pagination, infinite scrolling, or virtual scrolling. Given that this is an MVP, I'll likely recommend starting with the simplest approach (no pagination/scrolling) to prioritize a quick launch. We can revisit this as the product catalog grows.
To keep things simple for the MVP, we’ve decided to have no pagination or scrolling mechanisms for the product list at this stage.
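Although pagination is out of scope for the MVP, a later iteration could start with a simple client-side helper like the following — a hypothetical sketch, not part of the agreed scope, and for a genuinely large catalog server-side pagination would be preferable:

```typescript
// Returns the items belonging to a 1-based page of the given size.
// Out-of-range pages simply yield an empty array.
function paginate<T>(items: T[], page: number, pageSize: number): T[] {
  const start = (page - 1) * pageSize;
  return items.slice(start, start + pageSize);
}
```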
#### Responsiveness
Most e-commerce sites need to look good on various devices. I'll check if the PO has any specific requirements for mobile or tablet views.
**Scenario 1: Responsiveness Required**. If so, I'll need to ensure the chosen layout (grid or list) is responsive and adapts gracefully to different screen sizes.
**Scenario 2: Desktop-First**. If the focus is on desktop for the MVP, I'll still design with responsiveness in mind, but we might postpone full mobile optimization for later iterations.
The PO confirms they want a visually appealing product list on both mobile and desktop devices. Therefore, our chosen grid layout will be made responsive to adapt to various screen sizes.
### The importance of thorough requirement analysis
The process I've just walked through highlights several crucial aspects of effective requirement analysis:
- **Proactive communication**: Engaging in open and direct communication with the PO or stakeholders is key. By asking questions and seeking clarification, we avoid making assumptions and ensure the feature is built to meet the intended goals. This proactive communication not only clarifies the "**what**" of the requirement but also uncovers the underlying "**why**", allowing us to tailor solutions that truly address user needs.
- **Balancing user needs and technical constraints**: Building software is a delicate balancing act between what users desire and what is technically feasible within given constraints (time, budget, resources). By carefully considering both perspectives, we can make informed decisions that prioritize delivering value to users while also ensuring a sustainable and maintainable implementation.
- **Iterative development and the MVP mindset**: Recognizing that we are building an MVP (Minimum Viable Product) allows us to prioritize the most essential features and functionality. By starting with a simplified solution and iterating based on user feedback, we can quickly validate our assumptions, learn what works best, and make informed refinements to the product over time. This agile approach not only speeds up development but also ensures that the final product is more aligned with user expectations.
This requirement analysis process demonstrates that:
- Effective communication is vital for successful project outcomes.
- Collaboration between developers and stakeholders is key to building the right thing.
- Prioritization and iteration are fundamental principles of agile development, allowing us to deliver value quickly and adapt to changing needs.
By embracing these principles, we can create a solid foundation for our e-commerce application, ensuring that it not only meets the initial requirements but also evolves to satisfy the evolving needs of our users and business goals.
### Introducing user stories and acceptance criteria
The clarifications I've gathered are a solid foundation for development, but they primarily focus on the technical **_what_** – what the system needs to do. However, behind every technical requirement, there's a crucial **_why_** – the user's motivations and goals.
#### Personal reflection: the importance of "_why_"
In my own experience, I've found that simply following instructions without understanding the underlying purpose can be demotivating and unproductive. Whether it was homework in school, theoretical problems in university, or tasks at previous jobs, knowing the "why" behind my work always made a significant difference. It gave my efforts meaning and helped me see the bigger picture.
That's why, when working on personal projects, my biggest challenge was always to clearly define the problem I wanted to solve. Understanding the "why" gave my work direction and ensured I was building something truly valuable.
As developers, it's easy to get caught up in the technical details and forget the bigger picture. We may have been trained to focus on completing tasks and implementing features without always understanding the underlying reasons behind them. This "just do it" mindset can be demotivating and lead to solutions that miss the mark.
In real-world projects, it's essential to grasp the "why" behind our work. What problems are we solving for the users? How will our implementation bring value to them and the business? By answering these questions, we can build features that are not only technically sound but also meaningful and impactful.
#### The pitfalls of a purely technical focus
In the context of our e-commerce application, focusing solely on the technical **_what_** can lead to several pitfalls:
- **Misaligned solutions**: If we don't understand the user's underlying motivations, we might build a technically impressive feature that doesn't actually solve their problem or address their needs. This leads to wasted effort and resources.
- **Missed opportunities**: You might miss opportunities to enhance the user experience if you don't explicitly consider the user's goals and motivations.
- **Misaligned priorities**: Without a clear understanding of the user's needs, it's harder to prioritize features effectively. You might end up spending time on technical details that don't significantly impact the user experience.
- **Communication barriers**: Technical jargon can create communication barriers between stakeholders and developers, leading to misunderstandings and delays.
- **Disengaged developers**: When developers don't understand the "why" behind their work, they can become disengaged and less invested in the project's success. This can negatively impact their motivation and productivity.
#### Shifting to a user-centric perspective with user stories
**User stories** help us bridge this gap. They shift the focus from the technical implementation to the user's perspective. Let's see how we can transform our first requirement into a user story:
> _As a potential customer, I want to see a list of products with their images, names, and prices so that I can quickly browse and compare products before making a purchase decision._
This user story clearly articulates the user's goal (browsing and comparing products) and the reason behind it (making an informed purchase decision). By understanding this "why," we can make better design and implementation choices that truly cater to the user's needs.
_**Note**: You can find more information about user stories at https://www.visual-paradigm.com/guide/agile-software-development/what-is-user-story/ or https://www.atlassian.com/agile/project-management/user-stories._
### Adding clarity with Acceptance Criteria
Alongside the user story, we have to define a set of conditions that must be met for the story to be considered complete and accepted by the product owner or stakeholders. These conditions are called **Acceptance Criteria**.
Here is how they can be composed:
> - The product listing page displays a grid of products.
> - Each product card shows the product image, name, and price.
> - The product list has a maximum of 4 items per row.
> - The list does not have pagination or infinite scrolling functionality (for the MVP).
_**Note**: You can learn more about acceptance criteria at https://www.productplan.com/glossary/acceptance-criteria/ or https://www.altexsoft.com/blog/acceptance-criteria-purposes-formats-and-best-practices/._
#### How to compose effective Acceptance Criteria
**Focus on user outcomes**: Instead of specifying technical details, focus on what the user should be able to achieve or experience when the feature is implemented.
**Use clear and concise language**: Write in plain language that everyone, including non-technical stakeholders, can understand. Avoid vague terms like "_user-friendly_" or "_intuitive_." Be specific about what the user should see, do, or experience.
**Each criterion should be verifiable**: Write criteria that can be objectively tested to determine if they have been met. Use specific values or conditions whenever possible.
**Collaborate with stakeholders**: Work with the product owner, designers, and other stakeholders to ensure the acceptance criteria accurately reflect their expectations. Share your draft acceptance criteria with stakeholders early on to gather feedback and make necessary adjustments.
### Benefits of using User Stories and Acceptance Criteria
**User-Centric Focus**: The core principle of user stories is to shift the perspective from a purely functional description to understanding the user's needs and motivations. By framing the requirement as "As a potential customer, I want to see a list of products... so that I can quickly browse and compare products," you immediately highlight the value the feature brings to the user. This helps the team focus on delivering solutions that directly address user needs.
**Shared understanding**: User stories create a common language between stakeholders (including the PM, designers, and developers). The simple "_As a... I want... so that..._" format is easy to understand and avoids technical jargon. This ensures everyone is aligned on the purpose of the feature and what it needs to accomplish.
**Clearer priorities**: User stories are easier to prioritize than purely functional requirements. By understanding the "why" behind the feature, stakeholders can assess its importance relative to other features and prioritize development efforts accordingly.
**Flexibility and adaptability**: User stories are inherently flexible. They are not rigid specifications but rather invitations for conversation and collaboration. As the team learns more about user needs and technical constraints, the user story can be refined and adapted without requiring a complete rewrite of the requirements.
**Testable outcomes**: User stories naturally lead to the definition of acceptance criteria, which are specific, measurable conditions that must be met for the story to be considered complete. This creates a clear definition of "done" and facilitates testing and quality assurance.
### Documenting the discussions
Once I've thoroughly discussed and clarified the requirements, it's crucial to formally document them in a shared location accessible to the entire team. This documentation serves as a single source of truth, ensuring everyone is aligned and preventing misunderstandings.
Here is how I can translate the first requirement into a user story:
> _**User Story**_:
>
> _As a potential customer, I want to see a clear and visually appealing list of products with their images, names, and prices, so that I can quickly browse and compare items before making a purchase decision._
>
> _**Acceptance Criteria**:_
> - _Layout_:
> - _The product listing page displays products in a grid layout._
> - _Each row of the grid contains a maximum of 4 product items._
> - _The layout is responsive and adapts to different screen sizes._
> - _Product Card Content:_
> - _Each product card displays a clear product image._
> - _The product name is prominently displayed below the image._
> - _The price is displayed clearly, using the appropriate currency symbol._
> - _Functionality:_
> - _The product list is initially loaded when the user navigates to the product listing page._
> - _MVP Considerations:_
> - _The product list does not include pagination or infinite scrolling in this initial version._
> - _Error handling for data fetching issues will be addressed in a later iteration._
> - _Technical Notes:_
> - _Data Source: Product data will be initially hardcoded on the frontend._
> - _UI Framework: The Angular Material library will be used for styling and components._
In this e-commerce example, I've translated the first functional requirement into a single user story. However, this is not always the case. A single functional requirement can often give rise to multiple user stories, each representing different perspectives and needs of various users.
In an Agile environment, the responsibility of writing user stories is often shared. While the Product Owner is ultimately accountable for the product backlog, developers can contribute by creating user stories and validating them with the PO to ensure they are clear, concise, and actionable.
### The importance of clarifying and refining requirements
As you've seen, even a seemingly simple requirement like "_Display a list of products with their images, names, and prices_" can raise numerous questions. Before a single line of code is written, it's crucial to address these questions through collaborative discussions with the Product Owner and other stakeholders.
**Why not assume? The value of shared understanding**
While it might be tempting to make assumptions based on experience or best practices (e.g., assuming a grid layout for the product list), it's essential to resist that urge. Explicitly discussing design choices and technical considerations with stakeholders ensures everyone is on the same page.
For instance, while a grid layout might seem like the obvious choice, the PO might have a different vision in mind or specific design constraints to consider. By collaborating early on, we avoid misunderstandings, reduce the risk of rework, and build a product that aligns with everyone's expectations.
**Business value first: Balancing innovation and practicality**
As developers, we're often eager to implement cutting-edge features or solve complex technical challenges. However, it's crucial to remember that our primary goal is to deliver business value. This means prioritizing features that directly address user needs and contribute to the product's success.
In the context of an MVP (Minimum Viable Product), the focus should be on getting the core functionality working quickly and efficiently. We can always add more sophisticated features and optimizations in later iterations, based on user feedback and evolving business needs.
**The Agile mindset: Embracing iteration and feedback**
This approach aligns with the core principles of Agile development methodologies:
- **Incremental Delivery**: Start with a basic version of the product and gradually add features and refinements.
- **Continuous Feedback**: Regularly gather feedback from users and stakeholders to validate assumptions and guide development.
- **Adaptability**: Be prepared to adjust plans and priorities based on feedback and changing requirements.
Remember, we're not just building software; we're building a product that solves real problems for real people. By fostering open communication, clarifying assumptions, and prioritizing user needs, we set ourselves up for success, creating a product that delights users and drives business growth.
### Recap of the clarification process
As I delved into the first requirement, "Display a list of products with their images, names, and prices," several questions and decisions emerged. This seemingly simple requirement opened up a wealth of considerations:
1. Visual Presentation: We recognized the need to clarify the desired layout of the product list. By proposing options and mockups, we ensured a shared understanding of the visual presentation with stakeholders.
2. Data Source: We proactively addressed the absence of a backend API by suggesting mock data for the initial development phase. This keeps the project moving forward while the backend is being developed.
3. Performance and Scalability: We considered the potential for the product list to grow over time. While simpler solutions were prioritized for the MVP, we acknowledged the need for future optimizations like pagination or infinite scrolling if the list becomes extensive.
4. Responsiveness: We confirmed with the PO that the product list should look good on both mobile and desktop, so the chosen grid layout will be made responsive from the start.
5. Documentation: We emphasized the importance of documenting these decisions and assumptions, creating a clear reference point for the team and stakeholders.
This clarification process revealed a crucial shift in perspective. Initially, we focused on the technical **_what_** - the specific steps to implement the product list. However, by engaging in discussions and exploring different options, we naturally began to consider the **_why_** - the user's needs and the value this feature would bring to them.
This shift in focus leads us to the next logical step: framing the requirement as a user story and defining clear acceptance criteria. User stories provide a powerful tool for encapsulating the user's perspective and goals, while acceptance criteria serve as a checklist to ensure we meet those expectations.
## Summary and looking ahead
In this article, I've explored the initial steps of building an Angular e-commerce application. By carefully analyzing and clarifying even a seemingly simple requirement like "_Display a list of products_", I've demonstrated the importance of understanding the underlying user needs and business goals. I've also highlighted the value of collaborative discussions with stakeholders, documenting decisions, and prioritizing features for the MVP.
In the next article, I'll dive deeper into the **technical design** and **implementation** of the product listing page. We'll explore how to structure our Angular components, fetch data from a mock source, and leverage Angular Material to create a visually appealing and user-friendly interface. Stay tuned as I continue the journey towards building a successful e-commerce platform.
______
What are your thoughts on the importance of user stories and acceptance criteria? Have you encountered similar challenges in your own projects? Share your experiences and insights in the comments below!
| cezar-plescan | |
1,916,250 | Top-Rated Physiotherapy Clinic in Toronto | Experience unparalleled physiotherapy clinic toronto care at Toronto's leading physiotherapy clinic,... | 0 | 2024-07-08T19:03:55 | https://dev.to/brett_wyman_845b6cb9a8c82/top-rated-physiotherapy-clinic-in-toronto-4hni | Experience unparalleled [physiotherapy clinic toronto](https://www.bodydynamics.ca/physiotherapy/) care at Toronto's leading physiotherapy clinic, where our expert team is dedicated to your recovery and wellness. | brett_wyman_845b6cb9a8c82 | |
1,916,254 | Apparently everyone speaks English here | That's a great skill, I am planning to break down here every step and what I did to become fluent (I... | 0 | 2024-07-08T19:05:24 | https://dev.to/ocelotmocha93/apparently-everyone-speaks-english-here-4dje | That's a great skill, I am planning to break down here every step and what I did to become fluent (I said fluent, not master) in English.
| ocelotmocha93 | |
1,916,255 | Trying Kotlin Multiplatform for the First Time: Step by Step Building an App with KMP | After getting inspired by KotlinConf, I decided to try Kotlin Multiplatform (KMP) for the first time... | 0 | 2024-07-08T19:06:27 | https://dev.to/gadipuranto/trying-kotlin-multiplatform-for-the-first-time-step-by-step-building-an-app-with-kmp-5845 | kotlin, kmp, android, mobile | After getting inspired by KotlinConf, I decided to try Kotlin Multiplatform (KMP) for the first time and build an app with this technology. KMP is a technology developed by JetBrains that allows developers to write shared code for various platforms using the Kotlin programming language. The main goal is to reduce code duplication and increase productivity by sharing business logic across multiple platforms, while still providing flexibility for platform-specific implementations where needed.
### Key Aspects of Multiplatform Kotlin:
1. **Code Sharing:** You can write common code once and use it on different platforms such as Android, iOS, web, and desktop.
2. **Platform-Specific Code:** KMP allows you to write platform-specific code if needed, providing optimal flexibility and performance.
3. **Supported Platforms:** KMP supports a wide range of platforms including Android, iOS, JVM, JavaScript, and native desktop applications.
4. **Gradle-based:** KMP projects use Gradle for build automation, making it easy to manage dependencies and build processes.
5. **Interoperability:** Multiplatform Kotlin code can easily interact with existing platform-specific code, such as Java for Android and Swift for iOS.
### Benefits of Using Multiplatform Kotlin:
1. **Reduced Development Time:** By sharing code across multiple platforms, you can significantly reduce development and maintenance time.
2. **Consistency:** Shared business logic ensures consistency across various platform-specific applications.
3. **Flexibility:** You can still use platform-specific libraries and frameworks if needed.
4. **Kotlin features:** You can use modern features of the Kotlin language on all platforms.
5. **Growing Ecosystem:** There are more and more multiplatform libraries available for common tasks.
### Getting to Know Multiplatform Kotlin
In this first step, I started by understanding the basic concepts of KMP and its advantages. I learned how KMP enables efficient application development by reducing code duplication and ensuring consistency across multiple platforms, while still providing the flexibility to write platform-specific code where needed.
This is the first chapter of my journey to learn Multiplatform Kotlin. In the next article, I'll talk about how to set up your development environment with KMP, including practical steps on using Gradle for build automation and dependency management, and then building an app with KMP.
Be sure to follow the next topic, "Setting Up Your Development Environment," where we'll go into more detail about how to get started with your KMP project and set up all the necessary tools for multiplatform development. Thank you for reading, and I hope this article about trying Kotlin Multiplatform (KMP) for the first time and building an app gives you a clear picture of the potential of Kotlin Multiplatform in app development. Leave a question and stay tuned for the next discussion! | gadipuranto |
1,916,260 | Automate Github Pull Requests With NodeJS API | Integrating with the GitHub API using Node.js, TypeScript, and NestJS involves several steps. Here's... | 0 | 2024-07-08T19:07:42 | https://dev.to/redbonzai/automate-github-pull-requests-with-nodejs-api-48gj | nextjs, javascript, programming, typescript | Integrating with the GitHub API using Node.js, TypeScript, and NestJS involves several steps. Here's a detailed guide to achieve this:
### 1. Set Up a New NestJS Project
First, create a new NestJS project if you don't have one already:
```bash
npm i -g @nestjs/cli
nest new github-pr-automation
cd github-pr-automation
```
### 2. Install Axios and Other Necessary Dependencies
You need Axios to make HTTP requests:
```bash
npm install axios
npm install @nestjs/config
```
### 3. Configure Environment Variables
Create a `.env` file in the root of your project and add your GitHub personal access token and other necessary configurations:
```env
GITHUB_TOKEN=your_personal_access_token
GITHUB_OWNER=your_github_username_or_org
GITHUB_REPO=your_repository_name
```
### 4. Create a Configuration Module
Set up a configuration module to read environment variables. Update `app.module.ts` to import the configuration module:
```typescript
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { GithubModule } from './github/github.module';
@Module({
imports: [
ConfigModule.forRoot({
isGlobal: true,
}),
GithubModule,
],
})
export class AppModule {}
```
### 5. Create the GitHub Module
Generate a GitHub module and service:
```bash
nest generate module github
nest generate service github
```
### 6. Implement the GitHub Service
In `src/github/github.service.ts`, implement the service to create a pull request:
```typescript
import { Injectable } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import axios from 'axios';
@Injectable()
export class GithubService {
private readonly githubToken: string;
private readonly owner: string;
private readonly repo: string;
constructor(private configService: ConfigService) {
this.githubToken = this.configService.get<string>('GITHUB_TOKEN');
this.owner = this.configService.get<string>('GITHUB_OWNER');
this.repo = this.configService.get<string>('GITHUB_REPO');
}
async createPullRequest(head: string, base: string, title: string, body: string): Promise<void> {
const url = `https://api.github.com/repos/${this.owner}/${this.repo}/pulls`;
const data = {
title,
head,
base,
body,
};
const headers = {
Authorization: `token ${this.githubToken}`,
Accept: 'application/vnd.github.v3+json',
};
try {
const response = await axios.post(url, data, { headers });
console.log('Pull request created successfully:', response.data.html_url);
} catch (error) {
console.error('Error creating pull request:', error.response ? error.response.data : error.message);
}
}
}
```
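The `response.data` logged above is GitHub's pull request object. If you want some type safety around it, you can model just the fields you actually use — a minimal sketch; the real response contains many more fields:

```typescript
// Minimal slice of GitHub's pull request response (the full object has many more fields).
interface PullRequestResponse {
  number: number;
  state: string;
  html_url: string;
}

// Small helper to turn the response into a log-friendly summary line.
function summarizePullRequest(pr: PullRequestResponse): string {
  return `PR #${pr.number} (${pr.state}): ${pr.html_url}`;
}
```

Typing the response like this also makes the success log in `createPullRequest` easier to keep consistent if more fields are surfaced later.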
### 7. Create a Controller to Trigger the Pull Request
Generate a GitHub controller:
```bash
nest generate controller github
```
In `src/github/github.controller.ts`, implement the controller:
```typescript
import { Controller, Post, Body } from '@nestjs/common';
import { GithubService } from './github.service';
@Controller('github')
export class GithubController {
constructor(private readonly githubService: GithubService) {}
@Post('create-pull-request')
async createPullRequest(
@Body('head') head: string,
@Body('base') base: string,
@Body('title') title: string,
@Body('body') body: string,
): Promise<void> {
await this.githubService.createPullRequest(head, base, title, body);
}
}
```
### 8. Testing the API
You can test the API using tools like Postman or curl. Start your NestJS application:
```bash
npm run start
```
Send a POST request to `http://localhost:3000/github/create-pull-request` with the required data:
```json
{
"head": "feature-branch",
"base": "main",
"title": "Automated Pull Request",
"body": "This is an automated pull request."
}
```
### 9. Full Code Structure
Here’s a summary of the code structure:
```
src/
├── app.module.ts
├── github/
│ ├── github.controller.ts
│ ├── github.module.ts
│ ├── github.service.ts
├── main.ts
.env
```
### `app.module.ts`
```typescript
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { GithubModule } from './github/github.module';
@Module({
imports: [
ConfigModule.forRoot({
isGlobal: true,
}),
GithubModule,
],
})
export class AppModule {}
```
### `github.module.ts`
```typescript
import { Module } from '@nestjs/common';
import { GithubService } from './github.service';
import { GithubController } from './github.controller';

@Module({
  providers: [GithubService],
  controllers: [GithubController],
})
export class GithubModule {}
```
### `github.service.ts`
```typescript
import { Injectable } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import axios from 'axios';

@Injectable()
export class GithubService {
  private readonly githubToken: string;
  private readonly owner: string;
  private readonly repo: string;

  constructor(private configService: ConfigService) {
    this.githubToken = this.configService.get<string>('GITHUB_TOKEN');
    this.owner = this.configService.get<string>('GITHUB_OWNER');
    this.repo = this.configService.get<string>('GITHUB_REPO');
  }

  async createPullRequest(head: string, base: string, title: string, body: string): Promise<void> {
    const url = `https://api.github.com/repos/${this.owner}/${this.repo}/pulls`;
    const data = {
      title,
      head,
      base,
      body,
    };
    const headers = {
      Authorization: `token ${this.githubToken}`,
      Accept: 'application/vnd.github.v3+json',
    };
    try {
      const response = await axios.post(url, data, { headers });
      console.log('Pull request created successfully:', response.data.html_url);
    } catch (error) {
      console.error('Error creating pull request:', error.response ? error.response.data : error.message);
    }
  }
}
```
### `github.controller.ts`
```typescript
import { Controller, Post, Body } from '@nestjs/common';
import { GithubService } from './github.service';

@Controller('github')
export class GithubController {
  constructor(private readonly githubService: GithubService) {}

  @Post('create-pull-request')
  async createPullRequest(
    @Body('head') head: string,
    @Body('base') base: string,
    @Body('title') title: string,
    @Body('body') body: string,
  ): Promise<void> {
    await this.githubService.createPullRequest(head, base, title, body);
  }
}
```
With this setup, you can automate pull requests using the GitHub API within a NestJS application written in TypeScript.
Further, here's how you can extend the existing NestJS service to include the following functions in order to make this more comprehensive.
1. **Create Pull Requests**
2. **Approve Pull Requests**
3. **Close Pull Requests**
4. **Comment on Pull Requests**
5. **Request Changes on Pull Requests**
### 1. Extending the GitHub Service
Update the `GithubService` to include methods for each of these actions:
```typescript
import { Injectable } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import axios from 'axios';

@Injectable()
export class GithubService {
  private readonly githubToken: string;
  private readonly owner: string;
  private readonly repo: string;

  constructor(private configService: ConfigService) {
    this.githubToken = this.configService.get<string>('GITHUB_TOKEN');
    this.owner = this.configService.get<string>('GITHUB_OWNER');
    this.repo = this.configService.get<string>('GITHUB_REPO');
  }

  private getHeaders() {
    return {
      Authorization: `token ${this.githubToken}`,
      Accept: 'application/vnd.github.v3+json',
    };
  }

  async createPullRequest(head: string, base: string, title: string, body: string): Promise<void> {
    const url = `https://api.github.com/repos/${this.owner}/${this.repo}/pulls`;
    const data = {
      title,
      head,
      base,
      body,
    };
    try {
      const response = await axios.post(url, data, { headers: this.getHeaders() });
      console.log('Pull request created successfully:', response.data.html_url);
    } catch (error) {
      console.error('Error creating pull request:', error.response ? error.response.data : error.message);
    }
  }

  async approvePullRequest(pullRequestNumber: number): Promise<void> {
    const url = `https://api.github.com/repos/${this.owner}/${this.repo}/pulls/${pullRequestNumber}/reviews`;
    const data = {
      event: 'APPROVE',
    };
    try {
      const response = await axios.post(url, data, { headers: this.getHeaders() });
      console.log('Pull request approved successfully:', response.data);
    } catch (error) {
      console.error('Error approving pull request:', error.response ? error.response.data : error.message);
    }
  }

  async closePullRequest(pullRequestNumber: number): Promise<void> {
    const url = `https://api.github.com/repos/${this.owner}/${this.repo}/pulls/${pullRequestNumber}`;
    const data = {
      state: 'closed',
    };
    try {
      const response = await axios.patch(url, data, { headers: this.getHeaders() });
      console.log('Pull request closed successfully:', response.data);
    } catch (error) {
      console.error('Error closing pull request:', error.response ? error.response.data : error.message);
    }
  }

  async commentOnPullRequest(pullRequestNumber: number, comment: string): Promise<void> {
    const url = `https://api.github.com/repos/${this.owner}/${this.repo}/issues/${pullRequestNumber}/comments`;
    const data = {
      body: comment,
    };
    try {
      const response = await axios.post(url, data, { headers: this.getHeaders() });
      console.log('Comment added successfully:', response.data);
    } catch (error) {
      console.error('Error adding comment:', error.response ? error.response.data : error.message);
    }
  }

  async requestChangesOnPullRequest(pullRequestNumber: number, comment: string): Promise<void> {
    const url = `https://api.github.com/repos/${this.owner}/${this.repo}/pulls/${pullRequestNumber}/reviews`;
    const data = {
      body: comment,
      event: 'REQUEST_CHANGES',
    };
    try {
      const response = await axios.post(url, data, { headers: this.getHeaders() });
      console.log('Requested changes successfully:', response.data);
    } catch (error) {
      console.error('Error requesting changes:', error.response ? error.response.data : error.message);
    }
  }
}
```
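As a quick reference, the five actions above differ mainly in the HTTP verb and the endpoint they target. The helper below is purely illustrative (it is not part of the service itself) and just summarizes that mapping:

```javascript
// Illustrative mapping of each pull-request action to its GitHub REST endpoint.
// Owner, repo, and PR number are placeholders supplied by the caller.
const base = (owner, repo) => `https://api.github.com/repos/${owner}/${repo}`;

function endpointFor(action, owner, repo, prNumber) {
  switch (action) {
    case 'create':          return { method: 'POST',  url: `${base(owner, repo)}/pulls` };
    case 'approve':         return { method: 'POST',  url: `${base(owner, repo)}/pulls/${prNumber}/reviews` };
    case 'close':           return { method: 'PATCH', url: `${base(owner, repo)}/pulls/${prNumber}` };
    case 'comment':         return { method: 'POST',  url: `${base(owner, repo)}/issues/${prNumber}/comments` };
    case 'request-changes': return { method: 'POST',  url: `${base(owner, repo)}/pulls/${prNumber}/reviews` };
    default: throw new Error(`Unknown action: ${action}`);
  }
}
```

Note that commenting goes through the `issues` endpoint, because GitHub treats every pull request as an issue; approving and requesting changes both hit the same `reviews` endpoint and differ only in the `event` value sent in the body.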
### 2. Update the GitHub Controller
Extend the controller to handle requests for these new methods:
```typescript
import { Controller, Post, Body, Param, Patch } from '@nestjs/common';
import { GithubService } from './github.service';

@Controller('github')
export class GithubController {
  constructor(private readonly githubService: GithubService) {}

  @Post('create-pull-request')
  async createPullRequest(
    @Body('head') head: string,
    @Body('base') base: string,
    @Body('title') title: string,
    @Body('body') body: string,
  ): Promise<void> {
    await this.githubService.createPullRequest(head, base, title, body);
  }

  @Post('approve-pull-request/:number')
  async approvePullRequest(@Param('number') pullRequestNumber: number): Promise<void> {
    await this.githubService.approvePullRequest(pullRequestNumber);
  }

  @Patch('close-pull-request/:number')
  async closePullRequest(@Param('number') pullRequestNumber: number): Promise<void> {
    await this.githubService.closePullRequest(pullRequestNumber);
  }

  @Post('comment-pull-request/:number')
  async commentOnPullRequest(
    @Param('number') pullRequestNumber: number,
    @Body('comment') comment: string,
  ): Promise<void> {
    await this.githubService.commentOnPullRequest(pullRequestNumber, comment);
  }

  @Post('request-changes/:number')
  async requestChangesOnPullRequest(
    @Param('number') pullRequestNumber: number,
    @Body('comment') comment: string,
  ): Promise<void> {
    await this.githubService.requestChangesOnPullRequest(pullRequestNumber, comment);
  }
}
```
### 3. Testing the API
You can now test the new endpoints using tools like Postman or curl.
- **Create Pull Request**
```http
POST http://localhost:3000/github/create-pull-request

{
  "head": "feature-branch",
  "base": "main",
  "title": "Automated Pull Request",
  "body": "This is an automated pull request."
}
```
- **Approve Pull Request**
```http
POST http://localhost:3000/github/approve-pull-request/1
```
- **Close Pull Request**
```http
PATCH http://localhost:3000/github/close-pull-request/1
```
- **Comment on Pull Request**
```http
POST http://localhost:3000/github/comment-pull-request/1

{
  "comment": "This is a comment on the pull request."
}
```
- **Request Changes on Pull Request**
```http
POST http://localhost:3000/github/request-changes/1

{
  "comment": "Please make the following changes..."
}
```
With this setup, you can create, approve, close, comment on, and request changes on pull requests using the GitHub API within your NestJS application. What we are missing now are Jest / Vite tests in order to properly build out this system. | redbonzai |
1,916,263 | Trying Kotlin Multiplatform for the First Time: Step by Step Building an App with KMP | After getting inspired by KotlinConf, I decided to try Kotlin Multiplatform (KMP) for the first time... | 0 | 2024-07-08T19:14:17 | https://dev.to/adeeplearn/trying-kotlin-multiplatform-for-the-first-time-step-by-step-building-an-app-with-kmp-459a | kotlin, kmp, android, mobile | After getting inspired by KotlinConf, I decided to try Kotlin Multiplatform (KMP) for the first time and build an app with this technology. KMP is a technology developed by JetBrains that allows developers to write code together for various platforms using the Kotlin programming language. The main goal is to reduce code duplication and increase productivity by sharing business logic across multiple platforms, while still providing flexibility for platform-specific implementations where needed.
### Key Aspects of Multiplatform Kotlin:
1. **Code Sharing:** You can write common code once and use it on different platforms such as Android, iOS, web, and desktop.
2. **Platform-Specific Code:** KMP allows you to write platform-specific code if needed, providing optimal flexibility and performance.
3. **Supported Platforms:** KMP supports a wide range of platforms including Android, iOS, JVM, JavaScript, and native desktop applications.
4. **Gradle-based:** KMP projects use Gradle for build automation, making it easy to manage dependencies and build processes.
5. **Interoperability:** Multiplatform Kotlin code can easily interact with existing platform-specific code, such as Java for Android and Swift for iOS.
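To make point 2 concrete, platform-specific code in KMP is usually expressed with `expect`/`actual` declarations. The sketch below is illustrative (the function names are made up, and in a real project each `actual` lives in its own platform source set rather than in one file):

```kotlin
// commonMain: shared code declares what every platform must provide.
expect fun platformName(): String

// Shared logic can call the expected function directly.
fun greeting(): String = "Hello from ${platformName()}!"

// androidMain (separate source set): the Android implementation.
actual fun platformName(): String = "Android"

// iosMain (separate source set): the iOS implementation.
actual fun platformName(): String = "iOS"
```

The compiler checks that every platform target supplies an `actual` for each `expect`, which is how KMP keeps shared logic in one place while still allowing platform-specific implementations.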
### Benefits of Using Multiplatform Kotlin:
1. **Reduced Development Time:** By sharing code across multiple platforms, you can significantly reduce development and maintenance time.
2. **Consistency:** Shared business logic ensures consistency across various platform-specific applications.
3. **Flexibility:** You can still use platform-specific libraries and frameworks if needed.
4. **Kotlin features:** You can use modern features of the Kotlin language on all platforms.
5. **Growing Ecosystem:** There are more and more multiplatform libraries available for common tasks.
### Getting to Know Multiplatform Kotlin
In this first step, I started by understanding the basic concepts of KMP and its advantages. I learned how KMP enables efficient application development by reducing code duplication and ensuring consistency across multiple platforms, while still providing the flexibility to write platform-specific code where needed.
This is the first chapter of my journey to learn Multiplatform Kotlin. In the next article, I'll talk about how to set up your development environment for KMP, including practical steps on using Gradle for build automation and dependency management, and on building an app with KMP.
Be sure to follow the next topic, "Setting Up Your Development Environment," where we'll go into more detail about how to get started with your KMP project and set up all the necessary tools for multiplatform development. Thank you for reading, and I hope this article about trying Kotlin Multiplatform (KMP) for the first time and building an app gives you a clear picture of KMP's potential in app development. Stay tuned for the next discussion! | adeeplearn |
1,916,439 | Custom hook for Api calls(Reactjs) | I'm sure during developing complex and big react applications, everybody will struggle with code... | 0 | 2024-07-08T19:32:22 | https://dev.to/a8rts/custom-hook-for-api-callsreactjs-4bnn | webdev, javascript, programming, react | I'm sure that when developing big, complex React applications, everybody struggles with code reusability. One approach is custom hooks for API calls. Let's create them.
First of all, I'm sorry about my writing; English is not my first language :)
Of course, we have to fetch data from the server many times in our application. So how can we handle this need effectively?
> useGet custom hook
The code for this custom hook looks like this:
```
import axios from "axios";
import { useState } from "react";
import { toast } from "react-toastify";
import apiErrors from "../../features/utils/ApiErrorMessages.json";

export default function useGet() {
  const [getRes, setGetRes] = useState({});
  const [getLoading, setGetLoading] = useState(true);
  const [getError, setGetError] = useState({});

  const getData = (url: string | undefined, config: object) => {
    if (url) {
      axios
        .get(url, config)
        .then((res) => {
          setGetRes(res.data);
          setGetLoading(false);
        })
        .catch((err) => {
          setGetError(err);
          setGetLoading(false);
          toast.error(apiErrors.server_network_error);
        });
    }
  };

  return { getData, getRes, getError, getLoading };
}
```
Explanation: when I was figuring out how to create a hook like useGet, I ran into a problem: it caused an infinite loop every time I used the hook! I fixed that by returning a function for the caller to invoke, instead of making the request inside the hook itself.
You can see this clearly in the getData function.
What we are really doing in this hook is:

1. Set up the needed states.
2. Declare the getData function (it sends a GET request to the given URL and saves the response or error in our states).
3. Return all the states and the function we wrote.

How about the usage? It will be something like this:
```
const { getData, getRes, getError, getLoading } = useGet();

useEffect(() => {
  getData("/api/users", {});
}, []);

// your rendering logic
```
I don't know how well I explained what is happening in my code, but I hope you understand it.
> usePost custom hook
The code:
```
import axios from "axios";
import { useState } from "react";
import apiErrors from "../../features/utils/ApiErrorMessages.json";
import { toast } from "react-toastify";

type postDataProps = {
  url: string | undefined;
  data: object;
  config: object;
};

export default function usePost() {
  const [postRes, setPostRes] = useState({});
  const [postLoading, setPostLoading] = useState(true);
  const [postError, setPostError] = useState({});

  const postData = ({ url, data, config }: postDataProps) => {
    if (url) {
      axios
        .post(url, data, config)
        .then((res) => {
          setPostLoading(false);
          setPostRes(res.data);
        })
        .catch((err) => {
          setPostLoading(false);
          setPostError(err);
          toast.error(apiErrors.server_network_error);
        });
    }
  };

  return { postData, postRes, postError, postLoading };
}
```
Good news: the logic is the same! The difference is in the data property. When we post something to the server, we have to send data with it, so we pass data here as well.
These hooks were easy to implement, and now you can reuse your API-calling functionality across your application!
Maybe these hooks don't follow every best practice; I'm still learning. But they work.
Don't forget reusability.
Happy coding. | a8rts |
1,916,264 | The Best 5 Free IP Geolocation APIs for Programmers | Free IP geolocation APIs play a crucial role in enabling developers to integrate location-based... | 0 | 2024-07-08T19:15:28 | https://dev.to/sameeranthony/the-best-5-free-ip-geolocation-apis-for-programmers-l39 | api, geolocation, programmer | Free IP geolocation APIs play a crucial role in enabling developers to integrate location-based services into their applications effortlessly. These APIs provide accurate geographical information based on IP addresses, making them invaluable for a variety of applications, from targeted advertising to fraud prevention. Here's a look at some of the top free IP geolocation APIs that developers can leverage:
## 1. IPstack
IPstack offers a comprehensive **[free IP geolocation API](https://ipstack.com/product)** with features like timezone information, currency details, and even security insights. It's widely used for e-commerce applications and content personalization due to its rich data set and reliability.
## 2. GeoJS
GeoJS provides a straightforward geolocation API that developers can use without any registration or API keys. It offers basic geolocation data like country and region code, making it ideal for quick integrations where simplicity is key.
## 3. IP-API
IP-API is renowned for its accuracy and reliability in IP geolocation lookup. It supports multiple formats including JSON, CSV, XML, and plain text, making it versatile for different integration needs. Its free tier offers sufficient requests per minute, suitable for moderate to heavy usage.
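For illustration, a lookup against IP-API's JSON interface can be built like this (the endpoint pattern follows IP-API's documented JSON format; treat the exact response fields as an assumption and confirm them in the official docs):

```javascript
// Hypothetical helper: build a lookup URL for IP-API's free JSON endpoint.
function ipApiUrl(ip) {
  return `http://ip-api.com/json/${encodeURIComponent(ip)}`;
}

// Usage (needs network access, so it is commented out here):
// fetch(ipApiUrl('8.8.8.8'))
//   .then((res) => res.json())
//   .then((data) => console.log(data.status, data.country, data.city));
```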
## 4. FreeGeoIP
FreeGeoIP is another popular free geolocation API that provides country, region, city, and ISP information based on IP addresses. It's known for its ease of use and quick response times, making it ideal for applications requiring real-time location data.
## 5. IPinfo
IPinfo offers a robust free IP geolocation API that provides details such as city, region, country, and even company data associated with an IP address. Developers appreciate its simplicity and comprehensive documentation, making integration straightforward across various platforms.
## Choosing the Right API for Your Project
When selecting a free **[IP geolocation API](https://ipstack.com/documentation)**, consider factors such as data accuracy, rate limits, ease of integration, and additional features like timezone or currency information. Assess your project's specific needs and choose an API that aligns best with your technical requirements and scalability goals.
## Conclusion
Free IP geolocation APIs empower developers to enhance their applications with location-based services without the overhead costs. Each of the APIs mentioned—IPinfo, GeoJS, IP-API, FreeGeoIP, and IPstack—offers unique features and benefits, catering to diverse development needs. Whether you're building a mobile app, e-commerce platform, or analytics tool, integrating a reliable geolocation API can significantly enrich user experience and operational efficiency. Explore these APIs, experiment with their capabilities, and choose the one that best fits your project's demands for accurate and timely geolocation data.
| sameeranthony |
1,916,434 | Why Jenkins is still MVP among CI/CD tools? | Jenkins is a powerful, open-source automation server used for continuous integration and continuous... | 0 | 2024-07-08T19:20:53 | https://dev.to/mcieciora/why-jenkins-is-still-mvp-among-cicd-tools-2npn | devops, cicd, jenkins |
Jenkins is a powerful, open-source automation server used for continuous integration and continuous delivery. It allows developers to automate the building, testing, and deployment of applications, facilitating rapid development cycles. Jenkins supports a vast array of plugins, making it highly customizable and adaptable to various development environments and workflows. Its strong community support ensures continuous improvements and updates, keeping Jenkins relevant and robust in the ever-evolving DevOps landscape.
Other top CI/CD tools include CircleCI, GitLab CI, TeamCity, and Bamboo. CircleCI is known for its ease of use and seamless integration with GitHub, making it a favorite among small to medium-sized projects. GitLab CI, part of the GitLab ecosystem, offers integrated version control and CI/CD capabilities, providing a cohesive environment for development. TeamCity, developed by JetBrains, is known for its powerful feature set and extensive integrations, particularly beneficial for large enterprise environments. Bamboo, by Atlassian, integrates well with other Atlassian products like Jira and Bitbucket, making it ideal for organizations already using Atlassian's suite of tools.
Jenkins' standout features include its vast plugin ecosystem, which offers unparalleled flexibility and extensibility. It supports distributed builds across multiple machines, allowing for efficient resource utilization and faster build times. Jenkins' pipeline-as-code capability, using Groovy scripts, provides powerful automation and version control for CI/CD pipelines. Additionally, its strong community and comprehensive documentation make troubleshooting and customization more accessible.
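For example, the pipeline-as-code capability lets you keep a declarative `Jenkinsfile` under version control next to the application. A minimal sketch (the stage names and shell commands are illustrative, not from any real project):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'npm ci' }
        }
        stage('Test') {
            steps { sh 'npm test' }
        }
        stage('Deploy') {
            when { branch 'main' }
            steps { sh './deploy.sh' }
        }
    }
}
```

Because this file is versioned with the code, pipeline changes go through the same review process as application changes.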
When comparing the main advantages between these solutions, Jenkins' flexibility and extensive plugin library are significant strengths, allowing it to cater to diverse project needs and environments. While CircleCI and GitLab CI excel in user-friendliness and integration with specific platforms, Jenkins' ability to adapt to various workflows and environments makes it a more versatile choice. TeamCity and Bamboo offer robust enterprise features and integrations, but Jenkins' open-source nature and community-driven development keep it at the forefront of innovation, ensuring it remains the MVP among CI/CD tools.
| mcieciora |
1,916,435 | Fixed Window Counter: Under the Hood of express-rate-limit | To ensure a single user or a client cannot overwhelm the system with too many requests in a short... | 0 | 2024-07-08T19:21:47 | https://dev.to/dkn1ght23/fixed-window-counter-express-rate-limit-280 | javascript, node, express, systemdesign | To ensure a single user or a client cannot overwhelm the system with too many requests in a short period of time, we use rate limiting. [express-rate-limit](https://www.npmjs.com/package/express-rate-limit) is popular among developers due to its ease of implementation and understanding. But how does it work under the hood? Let’s dive in.
There are a few popular algorithms used to implement rate limiters. express-rate-limit uses the **Fixed Window Counter** algorithm to manage rate limits.
The Fixed Window Rate Limiting algorithm splits time into fixed-size windows (e.g., one minute or one hour). It tracks the number of requests made by a client within a single window and rejects additional requests after the limit is reached. No further requests will be processed until the window resets.
For example, if the limit is set to five requests per minute, any request after the fifth is rejected until the next window starts and the request count resets to zero.

However, a notable drawback of this limiter is how it handles requests at the edges of the window, leading to a behavior known as the "burstiness problem."
In this algorithm, the request count resets at fixed intervals (e.g., every minute). This becomes problematic if a client makes many requests at the end of one window and then immediately at the beginning of the next, allowing up to twice the limit in a short span and potentially overwhelming the server.

In conclusion, while the Fixed Window Counter Algorithm is straightforward and easy to implement, it has limitations, particularly in handling requests near the edges of the window. Developers should be aware of the "burstiness problem" and consider alternative algorithms or additional strategies to mitigate this issue and ensure a more balanced rate limiting approach.
| dkn1ght23 |