# Kubernetes and AI: 3 Open Source Tools Powered by OpenAI

*By ajeetraina · 2024-05-26 · https://dev.to/ajeetraina/kubernetes-and-ai-3-open-source-tools-powered-by-openai-5144*

As per the recent [Cloud-Native AI report](https://www.cncf.io/wp-content/uploads/2024/03/cloud_native_ai24_031424a-2.pdf) generated by CNCF during the KubeCon + CloudNativeCon Europe conference, AI is empowering operators and developers to work smarter, not harder. The convergence of Cloud Native methodologies with Artificial Intelligence (AI) has emerged as a transformative force reshaping industries and driving innovation. Using AI to improve cloud-native systems is no longer science fiction.

Cloud Native technologies have revolutionized the way applications are developed, deployed, and managed in modern IT environments. By leveraging containerization, microservices architecture, and orchestration platforms like Kubernetes, organizations can achieve scalability, resilience, and agility in their operations. On the other hand, Artificial Intelligence and Machine Learning have paved the way for data-driven decision-making, predictive analytics, and automation across various domains.
[Kubernetes](https://collabnix.com/category/kubernetes), the container orchestration platform, is a powerful tool for managing applications in the cloud. But what if you could add some artificial intelligence muscle to your Kubernetes workflow? This is where OpenAI and these 3 open-source projects come in: Kubectl OpenAI client, K8sGPT, and KoPylot. These innovative tools leverage OpenAI's capabilities to streamline tasks, automate processes, and gain deeper insights from your Kubernetes clusters.
Let's dive in and explore how they can supercharge your Kubernetes experience.
https://www.youtube.com/watch?v=SoEAFawQ9y4
## 1. KoPylot
[KoPylot](https://github.com/avsthiago/kopylot) is an open-source, AI-powered Kubernetes assistant. Built on top of OpenAI's language models, it helps developers and operations teams audit, diagnose, and troubleshoot the resources running in their clusters directly from the command line, and can even translate plain-English requests into kubectl commands.
## Features
KoPylot provides four main features to help teams work with Kubernetes applications:
- **Audit:** analyze a single resource (a pod, deployment, or service) for potential issues and vulnerabilities.
- **Diagnose:** inspect a resource and get help identifying problems and possible fixes.
- **Chat:** describe what you want in plain English and let KoPylot generate the corresponding kubectl command.
- **Ctl:** a thin wrapper around kubectl; arguments are passed straight through to it.
## How it Works
KoPylot works by extracting the relevant information from your Kubernetes resources and sending it to an OpenAI language model for analysis. The model's response, whether an explanation, a suggested fix, or a generated command, is printed back to your terminal.
Because it drives kubectl under the hood, KoPylot works against whatever cluster your kubeconfig points to, regardless of the underlying infrastructure.
To diagnose issues, the `diagnose` subcommand takes a resource such as a pod, deployment, or service, gathers its details, and asks the model to identify problems and suggest solutions, for example pods that are crash-looping or stuck in a pending state.
The `chat` subcommand lets you interact with your cluster in natural language: describe what you want, and KoPylot generates the corresponding kubectl command for you to review and run.
Finally, the `audit` subcommand reviews a single resource for potential issues and vulnerabilities, while the `ctl` subcommand passes its arguments straight through to kubectl for everyday operations.
The following is an example of using KoPylot to diagnose a service:
```
kopylot diagnose service my-service
```
This command sends details about the specified service to the language model. If any issues are found, KoPylot provides a report of the problem along with suggested fixes.
## Installation
To install KoPylot, follow these steps:
### 1. Request an API key from OpenAI
Export the key using the following command:
```
export KOPYLOT_AUTH_TOKEN=your_api_key
```
### 2. Install KoPylot using pip:
```
pip install kopylot
```
### 3. Run KoPylot
```
kopylot --help
```
```
Usage: kopylot [OPTIONS] COMMAND [ARGS]...
╭─ Options ──────────────────────────────────────────────────────────────────────────╮
│ --version │
│ --install-completion [bash|zsh|fish|powershell Install completion for the │
│ |pwsh] specified shell. │
│ [default: None] │
│ --show-completion [bash|zsh|fish|powershell Show completion for the │
│ |pwsh] specified shell, to copy it │
│ or customize the │
│ installation. │
│ [default: None] │
│ --help Show this message and exit. │
╰────────────────────────────────────────────────────────────────────────────────────╯
╭─ Commands ─────────────────────────────────────────────────────────────────────────╮
│ audit Audit a pod, deployment, or service using an LLM model. │
│ chat Start a chat with kopylot to generate kubectl commands based your │
│ inputs. │
│ ctl A wrapper around kubectl. The arguments passed to the ctl subcommand are │
│ interpreted by kubectl. │
│ diagnose Diagnose a resource e.g. pod, deployment, or service using an LLM model. │
╰────────────────────────────────────────────────────────────────────────────────────╯
```
## Local Setup
If you prefer to set up your development environment locally, make sure you have Poetry installed on your system. Then, follow these steps:
### Clone the KoPylot repository
```
git clone https://github.com/avsthiago/kopylot
```
### Navigate to the project folder
```
cd kopylot
```
### Install the project dependencies using Poetry
```
make install
```
## Real Workload Example
Now that we have covered the features and functionality of KoPylot, let’s take a look at a real workload example to see how it can be used in practice.
In this example, we will use KoPylot to diagnose a problem with a Kubernetes deployment.
Deploy a sample workload using the following YAML manifest:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
```
Save the manifest as `nginx-deployment.yaml` and apply it:
```
kubectl apply -f nginx-deployment.yaml
```
Run the following command to diagnose the deployment using KoPylot:
```
kopylot diagnose deployment nginx --namespace default
```
KoPylot will output the following result:
```
===========================================================================
Diagnosing Deployment nginx in namespace default
===========================================================================
---------------------------------------------------------------------------
Deployment nginx is running
---------------------------------------------------------------------------
Reason: The deployment is running correctly.
---------------------------------------------------------------------------
Deployment nginx is accessible
---------------------------------------------------------------------------
Reason: The deployment is accessible via the service.
---------------------------------------------------------------------------
Deployment nginx has enough resources
---------------------------------------------------------------------------
Reason: The deployment has enough resources.
---------------------------------------------------------------------------
Deployment nginx is not outdated
---------------------------------------------------------------------------
Reason: The deployment is using the latest available image.
===========================================================================
Diagnosis complete.
===========================================================================
```
This indicates that the deployment is running correctly and no issues were found.
Overall, KoPylot is a useful tool for diagnosing and troubleshooting Kubernetes workloads. Its natural-language chat interface and simple CLI make it easy to use and accessible to users of all levels.
## 2. K8sGPT
[K8sGPT](https://collabnix.com/k8sgpt-chatgpt-for-your-kubernetes-cluster/) is a tool that uses NLP to analyze logs and other data from Kubernetes clusters to identify and diagnose issues. It has a set of built-in analyzers that are designed to identify common issues such as pod crashes, service failures, and ingress misconfigurations. K8sGPT is built on top of OpenAI's GPT-3 language model, which allows it to understand natural language and provide explanations that are easy to understand.
K8sGPT is focused on triaging and diagnosing issues in your cluster. It is a tool for SRE, Platform, and DevOps engineers to help them understand what is going on in their cluster and find the root cause of an issue. It can help you cut through the noise of logs and multiple tools to find the root cause of an issue.
## How K8sGPT Works
K8sGPT uses a set of analyzers that are designed to identify issues in Kubernetes clusters. These analyzers use NLP to analyze logs, metrics, and other data from your cluster to identify potential issues. When an issue is identified, K8sGPT provides an explanation in natural language that is easy to understand. This allows you to quickly understand the issue and take the necessary steps to resolve it.
K8sGPT is built on top of OpenAI's GPT-3 language model, which allows it to understand natural language. This means that you can ask K8sGPT questions about your cluster in plain English, and it will provide a response that is easy to understand. For example, you can ask K8sGPT "Why is my pod crashing?" and it will provide an explanation of why the pod is crashing and what steps you can take to fix the issue.
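Conceptually, the `--explain` flow can be pictured as: take a structured finding from an analyzer, render it into a prompt, and let the model translate it into plain English. Here is a simplified, hypothetical sketch in Python; the field names and prompt wording are illustrative, not K8sGPT's actual code:

```python
# Hypothetical sketch of turning an analyzer finding into an LLM prompt.
# The finding fields mirror what k8sgpt prints (Kind, Name, Reason, Message).

def build_explain_prompt(finding: dict) -> str:
    """Turn a structured analyzer finding into a plain-English question."""
    return (
        "Simplify the following Kubernetes error message and suggest a fix.\n"
        f"Resource: {finding['kind']}/{finding['name']} "
        f"in namespace {finding['namespace']}\n"
        f"Reason: {finding['reason']}\n"
        f"Message: {finding['message']}"
    )

prompt = build_explain_prompt({
    "kind": "Pod",
    "name": "nginx-66b6c48dd5-zk6jt",
    "namespace": "default",
    "reason": "CrashLoopBackOff",
    "message": "Back-off 5m0s restarting failed container=nginx",
})
# The prompt is then sent to the model, and the completion becomes the
# human-readable explanation shown to the user.
```

The key point is that the model never talks to your cluster directly; it only sees the text the tool extracts and sends.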
## Installation
K8sGPT can be installed on Linux, Mac, and Windows. The easiest way to install K8sGPT on Linux or Mac is via Homebrew. To install K8sGPT via Homebrew, run the following commands:
```
brew tap k8sgpt-ai/k8sgpt
brew install k8sgpt
```
If you encounter an error when installing K8sGPT on WSL or Linux, you may need to install the build-essential package. You can do this by running the following commands:
```
sudo apt-get update
sudo apt-get install build-essential
```
If you are running on Windows, you can download the latest Windows binaries from the Release tab on GitHub.
Once k8sgpt is installed, you can verify that it is working by running the version command:
```
k8sgpt version
```
## Generating an API key
Before we can use k8sgpt, we need to generate an API key from OpenAI. Running the first command below opens a link in your default web browser where you can generate the key; the second command then lets you register it with k8sgpt:
```
k8sgpt generate
k8sgpt auth
```
This will prompt you to enter your API key. Once you have entered your API key, k8sgpt will be able to use it to analyze your Kubernetes clusters.
## Analyzing your Kubernetes clusters
Now that we have k8sgpt installed and authenticated, we can start analyzing our Kubernetes clusters. To analyze a Kubernetes cluster, run the following command:
```
k8sgpt analyze
```
This will scan your Kubernetes cluster and look for any issues. By default, k8sgpt will use all of its built-in analyzers to analyze your cluster.
If k8sgpt finds any issues, it will output a summary of the issues it found. For example:
```
Analyzer: podAnalyzer
Namespace: default
Name: nginx-66b6c48dd5-zk6jt
Kind: Pod
Reason: CrashLoopBackOff
Message: Back-off 5m0s restarting failed container=nginx pod=nginx-66b6c48dd5-zk6jt_default(25f13c57-04eb-4a0a-a2f7-17b7564a7944)
Refer: https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/
```
This output tells us that there is an issue with the nginx pod in the default namespace. The pod is in a CrashLoopBackOff state: the nginx container keeps failing, and Kubernetes is now waiting 5 minutes between restart attempts.
If you want to see more information about the issue, you can use the `--explain` flag:
```
k8sgpt analyze --explain
```
This will output a more detailed explanation of the issues that k8sgpt found.
## Filtering resources
By default, k8sgpt will analyze all resources in your Kubernetes cluster. However, you can filter the resources that k8sgpt analyzes by using the --filter flag.
Filters are used to control which Kubernetes resources are analyzed. By default, all filters are enabled. You can manage filters using the filters subcommand.
To list, add, or remove filters, use the `filters` subcommand:
```
k8sgpt filters list
k8sgpt filters add [filter]
k8sgpt filters add Service,Pod
k8sgpt filters remove [filter]
k8sgpt filters remove Service
```
You can filter the resources by namespace using the --namespace flag:
```
k8sgpt analyze --namespace=default
```
## Custom Analyzers
k8sgpt comes with a set of built-in analyzers that cover a variety of Kubernetes resources. However, you can also write your own custom analyzers to analyze your own resources.
To write a custom analyzer, you will need to create a new Go package that implements the Analyzer interface:
```
type Analyzer interface {
// Analyze analyzes the given Kubernetes resource and returns any issues found
Analyze(resource *unstructured.Unstructured) ([]*Issue, error)
// Name returns the name of the analyzer
Name() string
// Enabled returns whether the analyzer is enabled
Enabled() bool
}
```
The Analyze method takes a Kubernetes resource as input and returns an array of issues found. The Name method returns the name of the analyzer, and the Enabled method returns whether the analyzer is enabled.
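To make the contract concrete, here is a toy analyzer sketched in Python rather than Go (purely illustrative: real k8sgpt analyzers are written in Go, and the resource shape below is a simplified stand-in for `unstructured.Unstructured`). It flags any container stuck in CrashLoopBackOff:

```python
class PodCrashAnalyzer:
    """Toy analogue of the Analyzer interface above (illustrative only)."""

    def name(self) -> str:
        return "PodCrashAnalyzer"

    def enabled(self) -> bool:
        return True

    def analyze(self, resource: dict) -> list:
        """Return a list of issues found in a (simplified) Pod resource."""
        issues = []
        statuses = resource.get("status", {}).get("containerStatuses", [])
        for status in statuses:
            waiting = status.get("state", {}).get("waiting", {})
            if waiting.get("reason") == "CrashLoopBackOff":
                issues.append({
                    "reason": "CrashLoopBackOff",
                    "message": f"container {status['name']} is crash-looping",
                })
        return issues

pod = {
    "kind": "Pod",
    "status": {"containerStatuses": [
        {"name": "nginx", "state": {"waiting": {"reason": "CrashLoopBackOff"}}},
    ]},
}
issues = PodCrashAnalyzer().analyze(pod)
```

The same structure (a name, an enabled flag, and an analyze method returning issues) is what the Go interface expresses.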
Once you have written your custom analyzer, you can register it with k8sgpt by calling the RegisterAnalyzer function:
```
RegisterAnalyzer(name string, analyzer Analyzer)
k8sgpt.RegisterAnalyzer("MyAnalyzer", &MyAnalyzer{})
```
You can then use the `--filter` flag to filter by your custom analyzer:
```
k8sgpt analyze --filter=MyAnalyzer
```
You can also combine multiple filters together to create more complex filters. For example, the following command will only analyze Pod resources in the "default" namespace with a label of "app=myapp":
```
k8sgpt analyze --filter=Pod --namespace=default --label-selector=app=myapp
```
K8sGPT is a powerful tool that can help you diagnose and triage issues in your Kubernetes clusters. Its ability to analyze logs and Kubernetes resources using natural language processing and AI makes it stand out from other monitoring tools.
By installing and configuring K8sGPT, you can easily scan your clusters, identify issues, and get recommendations on how to fix them. Additionally, its built-in analyzers and filters make it easy to customize the analysis to fit your specific needs. Whether you're an SRE, Platform, or DevOps engineer, K8sGPT can help you gain insights into your Kubernetes clusters and make your job easier. Try it out today and see how it can help you improve your Kubernetes monitoring and troubleshooting workflows!
## 3. Kubectl OpenAI Client
The [Kubectl OpenAI client](https://github.com/sozercan/kubectl-ai) project is a kubectl plugin to generate and apply Kubernetes manifests using OpenAI GPT.
## Getting Started
- Install Docker Desktop
- Install Kubectl-ai
You can install it on your MacBook directly using Homebrew:
```
brew tap sozercan/kubectl-ai
brew install kubectl-ai
```
### Get OpenAI Keys
You can get the OpenAI keys from `https://platform.openai.com/account/api-keys`
Please Note: kubectl-ai requires an OpenAI API key or an Azure OpenAI Service API key and endpoint, and a valid Kubernetes configuration.
```
export OPENAI_API_KEY=<your OpenAI key>
```
## Installing on CentOS
```
yum install wget
wget https://github.com/sozercan/kubectl-ai/releases/download/v0.0.10/kubectl-ai_linux_amd64.tar.gz
tar xvf kubectl-ai_linux_amd64.tar.gz
mv kubectl-ai /usr/local/bin/kubectl-ai
```
## Setting up Kubeview
### Using Helm
Assuming that you have already installed Git and Helm on your laptop, follow the steps below:
```
git clone https://github.com/benc-uk/kubeview
cd kubeview/charts/
helm install kubeview kubeview
```
### Testing it locally
```
kubectl port-forward svc/kubeview -n default 80:80
```
## Deploying a Pod in a Namespace
```
kubectl ai "Create a namespace called ns1 and deploy a Nginx Pod"
✨ Attempting to apply the following manifest:
---
apiVersion: v1
kind: Namespace
metadata:
name: ns1
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
namespace: ns1
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
Use the arrow keys to navigate: ↓ ↑ → ←
? Would you like to apply this? [Reprompt/Apply/Don't Apply]:
+ Reprompt
▸ Apply
Don't Apply
```
Selecting **Apply** applies the generated manifests, creating the `ns1` namespace and a single-replica Nginx Deployment inside it.
### Difference between “Create” and “Deploy” [Be Careful]
```
kubectl ai "Create a namespace called ns1 and create a Nginx Pod"
✨ Attempting to apply the following manifest:
apiVersion: v1
kind: Namespace
metadata:
name: ns1
---
apiVersion: v1
kind: Pod
metadata:
name: nginx
namespace: ns1
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
✔ Apply
```
## Accessing the Nginx Pod via Web Browser
```
kubectl port-forward nginx 8000:80 -n ns1
Forwarding from 127.0.0.1:8000 -> 80
Forwarding from [::1]:8000 -> 80
Handling connection for 8000
Handling connection for 8000
```
## Deployment
This is an example of deploying 3 replicas in a specific namespace:
```
kubectl ai "create an nginx deployment with 3 replicas under namespace ns1"
✨ Attempting to apply the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
namespace: ns1
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: webpage
image: ajeetraina/webpage
ports:
- containerPort: 80
✔ Apply
```
## Services
```
kubectl ai "create an nginx deployment with 3 replicas under namespace ns1 and this time create service type as NodePort"
✨ Attempting to apply the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
namespace: ns1
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: webpage
image: ajeetraina/webpage
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx-service
namespace: ns1
spec:
type: NodePort
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 30080
Use the arrow keys to navigate: ↓ ↑ → ←
? Would you like to apply this? [Reprompt/Apply/Don't Apply]:
+ Reprompt
▸ Apply
Don't Apply
```
## Listing the Kubernetes Resources
```
kubectl get po,deploy,svc -n ns1
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-58945458f5-5pk6b 1/1 Running 0 28s
pod/nginx-deployment-58945458f5-7htd7 1/1 Running 0 28s
pod/nginx-deployment-58945458f5-s6cxm 1/1 Running 0 28s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 3/3 3 3 28s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-service NodePort 10.100.230.251 <none> 80:30080/TCP 28s
```

# Slack vs. Discord: Choosing Your Tech Community's Playground

*By ajeetraina · 2024-05-26 · Tags: slack, discord, chatgpt · https://dev.to/ajeetraina/slack-vs-discord-choosing-your-tech-communitys-playground-2d1*

Imagine this: You're a newbie tech enthusiast, eager to learn and connect with others to build your tech community. You click an invitation link from a tech influencer, hop on Slack, and bam! It feels like a professional playground. The interface is sleek, organized, and screams "business." Threaded conversations keep things tidy, and integrations with tools like GitHub make collaboration a breeze.
Now, you switch gears and venture into Discord. Whoa! It's like stepping into a bustling arcade. The interface is colorful, playful, and packed with features like voice channels, customizable servers and [ChatGPT integration](https://collabnix.com/how-to-integrate-chatgpt-to-a-discord-server-and-run-as-a-chatbot/). It's a paradise for gamers, but is it the right fit for a tech community?
Both Slack and Discord are powerful platforms for building online communities, but they cater to slightly different vibes. Let's delve into what each offers to help you pick the perfect platform for your tech community.
## Slack: The Streamlined Collaboration Hub
At its core, Slack is a workplace messaging tool designed for seamless communication and file sharing. Think instant messaging on steroids, facilitating both one-on-one chats and group discussions within organized channels. It offers powerful features like:
- Slack Connect: Break down email silos and collaborate seamlessly with partners, vendors, or customers.
- Slack Canvas: Create and share collaborative visual documents directly within Slack, fostering real-time brainstorming and project discussions.
- Slack Clips: Share short audio, video, or screen recordings for a more personal and engaging communication experience.
## Benefits of using Slack:
- Increased Productivity: Slack cites studies showing a 37% increase in productivity among teams using features like huddles.
- Consolidated Communication: Ditch email, text messages, and fragmented chat platforms for a unified solution.
- Streamlined Workflow: Integrations with project management tools and code repositories like GitHub make collaboration a breeze.
- Community Building: Foster a sense of community with casual channels and shared interests.
## Slack Analytics: Data-Driven Insights
Slack goes beyond just communication, offering valuable analytics to understand how your team uses the platform. Track user activity, channel engagement, message volume, and more to identify areas for improvement and optimize your tech community's experience.
## Who should use Slack?
- Tech startups and established companies: For teams that value organization, productivity, and seamless integration with business tools.
- Communities focused on professional development: Slack's professional environment fosters learning and knowledge sharing.
## Where Slack might not be the best fit:
- Large-scale, complex projects: Slack can feel limited for extensive document collaboration.
- Highly casual, social communities: The structured interface might not resonate with a laid-back community vibe.
## Discord: The Community Arcade
Originally designed for gamers, Discord has evolved into a versatile platform for various communities, including tech enthusiasts. Here's what sets it apart:
- Extensive Bot Ecosystem: Discord's thriving [bot community](https://collabnix.com/top-5-effective-discord-bot-for-your-server-in-2022/) creates a whole new dimension of functionality. These bots can automate tasks, enhance server features, and add a layer of fun. Imagine a bot that welcomes new members, plays music upon request, or even moderates discussions!
- Customizable Servers and Channels: Create virtual spaces (servers) tailored to your community's needs. Organize text and voice channels, define roles and permissions, and make it your own!
- Robust Features and Integrations: Enjoy high-quality voice and video calls, explore a vast bot ecosystem for automation and entertainment, and integrate seamlessly with services like Twitch and YouTube.
- Fostering Community and Connection: Discord's core strength lies in building a vibrant, connected community. With features like custom emojis and persistent voice channels, it fosters a sense of belonging and encourages casual interaction.
## Who should use Discord?
- Tech communities with a strong social element: If casual conversations, brainstorming sessions, and a touch of fun are key, Discord delivers.
- Communities focused on real-time interaction: Persistent voice channels make Discord ideal for discussions, meetings, and Q&A sessions.
## Discord might not be the best fit for:
- Highly professional environments: The playful interface might not align with a serious business tone.
- Communities requiring extensive file management: While file sharing is possible, Discord's focus leans towards real-time interaction.
## The Verdict: Finding Your Perfect Platform
Choosing between Slack and Discord depends on your tech community's unique needs.
For a structured, professional environment with a focus on productivity and collaboration, Slack is a great choice.
If fostering a vibrant, social community with real-time interaction is your priority, Discord might be the better fit.
Ultimately, the best way to decide is to experiment. Explore both platforms and see which one resonates with your community and fosters the kind of environment you envision. Remember, the perfect platform is the one that helps your tech community connect, collaborate, and thrive.

# What is OpenLLM and what problem does it solve?

*By ajeetraina · 2024-05-26 · https://dev.to/ajeetraina/what-is-openllm-and-what-problem-does-it-solve-5aml*

[OpenLLM](https://github.com/bentoml/OpenLLM) is a powerful platform that empowers developers to leverage the potential of open-source large language models (LLMs). It is like a Swiss Army knife for LLMs: a set of tools that helps developers overcome common deployment hurdles.
OpenLLM supports a vast array of open-source LLMs, including popular choices like Llama 2 and Mistral. This flexibility allows developers to pick the LLM that best aligns with their specific needs. The beauty of OpenLLM is that you can fine-tune any LLM with your own data to tailor its responses to your unique domain or application.
OpenLLM adopts an API structure that mirrors OpenAI's, making it a breeze for developers familiar with OpenAI to transition their applications to leverage open-source LLMs.
## Is OpenLLM a standalone product?
No. It's a building block designed to integrate easily with other powerful tools. It currently offers integrations with OpenAI-compatible endpoints, LlamaIndex, LangChain, and Transformers Agents.
OpenLLM goes beyond just running large language models. It's designed to be a versatile tool that can be integrated with other powerful AI frameworks and services. This allows you to build more complex and efficient AI applications. Here's a breakdown of the integrations OpenLLM currently offers:
- [OpenAI's Compatible Endpoints](https://platform.openai.com/docs/api-reference/completions/object): This integration allows OpenLLM to mimic the API structure of OpenAI, a popular cloud-based platform for LLMs. This lets you use familiar tools and code designed for OpenAI with your OpenLLM models.
- [LlamaIndex](https://www.llamaindex.ai/): a data framework for connecting LLMs to your own data sources. Integrating OpenLLM with LlamaIndex lets you build retrieval-augmented applications, such as search and question answering over your documents, on top of your own models.
- [LangChain](https://github.com/hwchase17/langchain): a framework for chaining together LLM calls and other NLP (Natural Language Processing) components. With LangChain integration, you can create multi-step workflows that combine OpenLLM's capabilities with other tools for more advanced tasks.
- [Transformers Agents](https://huggingface.co/docs/transformers/transformers_agents): an integration with the agent framework of Hugging Face's Transformers library, a popular framework for building and using NLP models. This allows you to leverage the functionalities of Transformers along with OpenLLM for building robust NLP applications.
By taking advantage of these integrations, you can unlock the full potential of OpenLLM and create powerful AI solutions that combine the strengths of different tools and platforms.
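Because the API shape mirrors OpenAI's, pointing a client at a local OpenLLM server is mostly a matter of changing the base URL. Below is a hedged sketch using only the Python standard library; the port comes from OpenLLM's `docker run -p 3000:3000` example, and the exact endpoint path and model name are assumptions you should check against your OpenLLM version:

```python
import json
from urllib import request

BASE_URL = "http://localhost:3000/v1"  # assumed: matches the -p 3000:3000 mapping

def completion_request(prompt: str, model: str = "facebook/opt-1.3b") -> request.Request:
    """Build an OpenAI-style /completions request for a local OpenLLM server."""
    body = json.dumps({"model": model, "prompt": prompt, "max_tokens": 64})
    return request.Request(
        f"{BASE_URL}/completions",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = completion_request("Explain Kubernetes in one sentence.")
# urllib.request.urlopen(req) would send it once the server is running.
```

Existing OpenAI client code can often be reused the same way, by swapping the base URL for the local server's address.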

## What problems does OpenLLM solve?
- OpenLLM works with a bunch of different LLMs, from Llama 2 to Flan-T5. This means developers can pick the best LLM for their specific needs.
- Deploying LLMs can be a headache, but OpenLLM streamlines the process. It's like having a clear instruction manual for setting things up.
- Data security is a big concern with AI. OpenLLM helps ensure that LLMs are deployed in a way that follows data protection regulations.
- As your LLM-powered service gets more popular, you need it to handle the extra traffic. OpenLLM helps build a flexible architecture that can grow with your needs.
- The world of AI throws around a lot of jargon. OpenLLM integrates with various AI tools and frameworks, making it easier for developers to navigate this complex ecosystem.
### Blazing-Fast Performance
- OpenLLM is meticulously designed for high-throughput serving, ensuring efficient handling of a large number of requests simultaneously.
- OpenLLM leverages cutting-edge serving and inference techniques to deliver the fastest possible response times.
## Getting Started
- Download and Install Docker Desktop
## Start running OpenLLM
```
$ docker run --rm -it -p 3000:3000 ghcr.io/bentoml/openllm start facebook/opt-1.3b --backend pt
```
You might encounter this issue if you try to run it on your Mac:
```
docker: Error response from daemon: no match for platform in manifest: not found.
```
That means this Docker image doesn't follow best practices and wasn't compiled for Arm chips.
Let's make it work.
Try adding the parameter `--platform=linux/amd64` to the `docker run` command.
```
docker run --rm -it --platform=linux/amd64 -p 3000:3000 ghcr.io/bentoml/openllm start facebook/opt-1.3b --backend pt
```
You will see the following result:
```
latest: Pulling from bentoml/openllm
e15cf30825b5: Download complete
24756bf79e78: Download complete
8a1e25ce7c4f: Download complete
e45919fa6a04: Download complete
aeea5c3a418f: Download complete
1ac41e12d207: Download complete
1103112ebfc4: Download complete
0b5b82abb9e8: Download complete
cc7f04ac52f8: Download complete
0d2012b79227: Download complete
101d4d666844: Download complete
2310831cf643: Download complete
87b8bf94a2ac: Download complete
b4b80ef7128d: Download complete
d30c94e4bd79: Download complete
8f05d7b02a83: Download complete
5ec312985191: Download complete
a6df4f5266e9: Download complete
Digest: sha256:efef229a1167e599955464bc6053326979ffc5f96ab77b2822a46a64fd8a247e
Status: Downloaded newer image for ghcr.io/bentoml/openllm:latest
PyTorch backend is deprecated and will be removed in future releases. Make sure to use vLLM instead.
config.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 653/653 [00:00<00:00, 2.80MB/s]
tokenizer_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 685/685 [00:00<00:00, 504kB/s]
vocab.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 899k/899k [00:00<00:00, 1.08MB/s]
merges.txt: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 456k/456k [00:00<00:00, 718kB/s]
special_tokens_map.json: 100%|███████████████████████████████████████████████████████████████████████████████████████| 441/441 [00:00<00:00, 1.49MB/s]
generation_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 137/137 [00:00<00:00, 439kB/s]
Fetching 7 files: 29%|███████████████████████████▋ | 2/7 [00:01<00:03, 1.31it/s]
pytorch_model.bin: 55%|█
```
You might see these messages:
```
vLLM is available, but using PyTorch backend instead. Note that vLLM is a lot more performant and should always be used in production (by explicitly set --backend vllm).
🚀Tip: run 'openllm build facebook/opt-1.3b --backend pt --serialization legacy' to create a BentoLLM for 'facebook/opt-1.3b'
2024-05-04T06:01:58+0000 [INFO] [cli] Prometheus metrics for HTTP BentoServer from "_service:svc" can be accessed at http://localhost:3000/metrics.
2024-05-04T06:01:58+0000 [INFO] [cli] Starting production HTTP BentoServer from "_service:svc" listening on http://0.0.0.0:3000 (Press CTRL+C to quit)
/usr/local/lib/python3.11/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
```
The message indicates that you're running OpenLLM, a tool for deploying large language models (LLMs), but it's currently using the PyTorch backend instead of the recommended vLLM backend.
## Multiple Runtime Support
Different Large Language Models (LLMs) can be implemented using various runtime environments. OpenLLM offers support for these variations.
### vLLM for Speed
vLLM is a high-performance runtime specifically designed for LLMs. If a model supports vLLM, OpenLLM will prioritize it by default for faster inference.
#### vLLM Hardware Requirements:
Using vLLM requires a GPU with at least Ampere architecture and CUDA version 11.8 or newer. This ensures compatibility with vLLM's optimizations.
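If you're unsure whether your GPU qualifies, one way to check (a small sketch assuming PyTorch is installed; it degrades gracefully when it isn't) is to read the device's compute capability, since Ampere corresponds to compute capability 8.x:

```python
def vllm_ready():
    """Return True/False based on the GPU's compute capability, or None if it can't be read."""
    try:
        import torch  # optional dependency; we only probe for it
    except ImportError:
        return None
    if not torch.cuda.is_available():
        return None
    # Ampere cards (A100, RTX 30xx, ...) report compute capability 8.x or higher.
    return torch.cuda.get_device_capability() >= (8, 0)

print(vllm_ready())
```

You would still need to confirm the CUDA toolkit version (11.8 or newer) separately, for example with `nvidia-smi`.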
### PyTorch Fallback
If vLLM isn't available for a particular model, OpenLLM seamlessly falls back to PyTorch, a popular deep learning framework.
### Manual Backend Selection
You can use the `--backend` option when starting your LLM server to explicitly choose between vLLM and PyTorch. This is useful if you want to ensure vLLM is used even when it isn't the default for that model.
Discovering Backend Options: To explore the supported backend options for each LLM, refer to the "Supported models" section of the OpenLLM documentation, or simply run the `openllm models` command to get a list of available models and their compatible backends.
While both vLLM and PyTorch backends are available, vLLM generally offers superior performance and is recommended for production deployments.
## Viewing it on the Docker Dashboard

## Checking the container stats

By now, you should be able to access the frontend:


## Using GPU
OpenLLM allows you to start your model server on multiple GPUs and specify the number of workers per resource assigned using the `--workers-per-resource` option. For example, if you have 4 available GPUs, you set the value to 0.25 (one divided by the number of GPUs), as only one instance of the Runner server will be spawned across them.
The amount of GPUs required depends on the model size itself. You can use the Model Memory Calculator from Hugging Face to calculate how much vRAM is needed to train and perform big model inference on a model and then plan your GPU strategy based on it.
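As a back-of-envelope version of that calculation (a rough heuristic of my own, not the Hugging Face calculator itself), inference memory is dominated by the weights: parameter count times bytes per parameter, plus some headroom for activations and the KV cache:

```python
def estimate_inference_vram_gb(params_billions, bytes_per_param=2.0, overhead=1.2):
    """Rough vRAM needed to serve a model.

    bytes_per_param: 2 for fp16/bf16, 1 for int8, 0.5 for 4-bit quantization.
    overhead: loose ~20% headroom for activations and the KV cache.
    """
    return params_billions * bytes_per_param * overhead

# A 7B model in fp16 needs roughly 17 GB, so a single 24 GB GPU is enough;
# the same model quantized to int8 fits in roughly half of that.
print(round(estimate_inference_vram_gb(7), 1))                     # 16.8
print(round(estimate_inference_vram_gb(7, bytes_per_param=1), 1))  # 8.4
```

Treat the result only as a starting point for planning; the real calculator also accounts for model architecture and sequence length.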
Given you have access to GPUs and have set up nvidia-docker, you can additionally pass in `--gpus` to use a GPU for faster inference and optimization:
```
docker run --rm --gpus all -p 3000:3000 -it ghcr.io/bentoml/openllm start HuggingFaceH4/zephyr-7b-beta --backend vllm
```
## Quantization
Quantization is a technique used to make machine learning models smaller and faster, especially when used for making predictions (inference). It works by converting the numbers used by the model (typically floating-point numbers) into smaller representations, often integers (quantized values).
### Benefits of Quantization:
- **Faster computations:** Calculations with integers are simpler than calculations with floating-point numbers, leading to faster model execution.
- **Reduced memory footprint:** Smaller numbers require less storage space, making the model lighter and easier to deploy on devices with limited memory.
- **Deployment on resource-constrained devices:** By reducing size and computation needs, quantization allows large models to run on devices with less power or processing capability.
### OpenLLM's Supported Quantization Techniques
OpenLLM offers several quantization techniques to optimize your LLM for performance and resource usage:
- LLM.int8(): This technique focuses on 8-bit integer matrix multiplication, likely achieved through libraries like bitsandbytes. This simplifies the core mathematical operation of LLMs.
- SpQR: This technique uses a Sparse-Quantized Representation for near-lossless compression of LLM weights, again likely using bitsandbytes. It aims to reduce the model size while maintaining accuracy.
- AWQ: This stands for Activation-aware Weight Quantization. It considers both the weights and activations of the model during quantization for potentially better accuracy compared to weight-only quantization methods.
- GPTQ: This refers to Accurate Post-Training Quantization. It suggests a method for quantizing a pre-trained model while aiming to minimize the loss in accuracy.
- SqueezeLLM: This technique combines Dense and Sparse Quantization for potentially even greater model size reduction.
Overall, by understanding quantization and the specific techniques offered by OpenLLM, you can optimize your large language models for deployment across various platforms and resource constraints.
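To make the core idea concrete, here is a minimal, dependency-free sketch of 8-bit affine quantization. This only illustrates the principle; it is not how OpenLLM, bitsandbytes, or the techniques above actually implement it:

```python
def quantize(values, num_bits=8):
    """Map floats to unsigned ints via an affine (scale + zero-point) scheme."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # avoid a zero scale for constant inputs
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale + zero_point))) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the quantized integers."""
    return [(x - zero_point) * scale for x in q]

weights = [-1.2, -0.4, 0.0, 0.7, 1.5]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
# Each restored weight differs from the original by at most about one quantization
# step, while each value now fits in a single byte instead of 4 or 8.
```

Techniques like GPTQ and AWQ are far more sophisticated (per-channel scales, activation statistics, error compensation), but they all make the same basic trade: a small loss of precision in exchange for a much smaller, faster model.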
| ajeetraina | |
1,843,091 | Run ffmpeg within a Docker Container: A Step-by-Step Guide | Ffmpeg is a powerful multimedia framework, but installing it directly on your system can lead to... | 0 | 2024-05-26T04:26:12 | https://dev.to/ajeetraina/run-ffmpeg-within-a-docker-container-a-step-by-step-guide-c0l | ffmpeg, docker, containers | Ffmpeg is a powerful multimedia framework, but installing it directly on your system can lead to dependency conflicts. Docker offers a cleaner solution: running ffmpeg within a container. This blog will guide you through the process in two approaches:
## Using a Pre-built ffmpeg Docker Image
### Pull the Image:
Open your terminal and use the following command to pull the widely used ffmpeg image maintained by jrottenberg:
```
docker pull jrottenberg/ffmpeg
```
### Run a Container:
Now, to run a container from this image and use ffmpeg commands inside it, use this command:
```
docker run -it jrottenberg/ffmpeg bash
```
This will start a container, launch a bash terminal within it, and provide access to ffmpeg.
### Use ffmpeg:
Once inside the container, you can use ffmpeg commands just like you would on your system. For example:
```
ffmpeg -i input.mp4 output.avi
```
This will convert the video file "input.mp4" to "output.avi".
If the video file is on your host machine and you want to mount it directly, the recommended way is to run the command in one shot:
```
docker run --rm -v $(pwd):/data jrottenberg/ffmpeg ffmpeg -i /data/input.mp4 /data/output.avi
```
This command achieves the same conversion as before, but it runs ffmpeg within a temporary Docker container. The `-v $(pwd):/data` part mounts your current directory (`$(pwd)`) as `/data` within the container, allowing you to access your files using the `/data` path in the ffmpeg command.
### Exit the Container:
When you're done, simply type `exit` to leave the container and return to your terminal.
## Building a Custom Dockerfile with ffmpeg
### Create a Dockerfile:
Create a text file named Dockerfile with the following content:
```
FROM ubuntu:latest
RUN apt-get update && apt-get install -y ffmpeg
```
This Dockerfile specifies using the Ubuntu image as the base and then installing ffmpeg during the build process.
### Build the Image:
In your terminal, navigate to the directory containing the Dockerfile and run:
```
docker build -t my-ffmpeg-image .
```
This builds a custom image named "my-ffmpeg-image" with ffmpeg installed.
### Run a Container from your Image:
Use the following command to run a container from your newly built image:
```
docker run -it my-ffmpeg-image bash
```
This will start a container based on your custom image, providing access to ffmpeg within the container.
### Choosing the Right Approach:
- If you need ffmpeg occasionally, using the pre-built image (Approach 1) is quicker.
- If you need more control over the ffmpeg version or want to include other tools, building a custom image (Approach 2) is better.
## Additional Considerations:
- You can mount volumes to share files between your system and the container for both approaches.
- Remember to replace "input.mp4" and "output.avi" with your actual file names.
This blog equips you to leverage ffmpeg within Docker containers, offering a clean and isolated environment for your multimedia processing tasks. | ajeetraina |
1,858,189 | The Ollama Docker Compose Setup with WebUI and Remote Access via Cloudflare | Want to run powerful AI models locally and access them remotely through a user-friendly interface?... | 0 | 2024-05-26T04:23:56 | https://dev.to/ajeetraina/the-ollama-docker-compose-setup-with-webui-and-remote-access-via-cloudflare-1ion | ollama, llm, docker, containers | Want to run powerful AI models locally and access them remotely through a user-friendly interface? This guide explores a seamless Docker Compose setup that combines Ollama, Ollama UI, and Cloudflare for a secure and accessible experience.
## Prerequisites:
- Supported NVIDIA GPU (for efficient model inference)
- NVIDIA Container Toolkit (to manage GPU resources)
- Docker Compose (to orchestrate containerized services)
## Understanding the Services:
- **webui (ghcr.io/open-webui/open-webui:main):**
This acts as the web interface, allowing you to interact with your Ollama AI models visually.
- **ollama (Optional - ollama/ollama):**
This is the AI model server itself. It can leverage your NVIDIA GPU for faster inference tasks.
- **tunnel (cloudflare/cloudflared:latest):**
This service establishes a secure tunnel to your web UI via Cloudflare, enabling safe remote access.
## Volumes and Environment Variables:
- Two volumes, `ollama` and `open-webui`, are defined to store data persistently across container restarts. This ensures your models and configurations remain intact.
- The crucial environment variable is `OLLAMA_BASE_URL` (set in the Compose file below). Make sure it points to the correct internal network URL of the ollama service. If Ollama runs directly on your Docker host, you can use `host.docker.internal` as the address.
## Deployment and Access:
- **Deployment:** Execute `docker compose up -d` to start all services in detached mode, running them in the background.
- **Local Access:** If you just need to access the web UI locally, simply navigate to `http://localhost:8080` in your web browser.
- **Remote Access:** To access your AI models remotely, locate the Cloudflare Tunnel URL printed in the Docker logs. Run `docker compose logs tunnel` to retrieve this URL. Now, you can access your models from anywhere with an internet connection, provided you have the URL.
## Benefits:
- **Simplified AI Model Management:** Easily interact with your AI models through the user-friendly Ollama UI.
- **Remote Accessibility:** Securely access your models from any location with a web browser thanks to Cloudflare's tunneling capabilities.
- **GPU Acceleration (Optional):** Leverage your NVIDIA GPU for faster model inference, speeding up tasks.
## Getting Started
- Install Docker
```
curl -sSL https://get.docker.com/ | sh
```
## Writing a Docker Compose file
```
services:
webui:
image: ghcr.io/open-webui/open-webui:main
expose:
- 8080/tcp
ports:
- 8080:8080/tcp
environment:
- OLLAMA_BASE_URL=http://host.docker.internal:11434
volumes:
- open-webui:/app/backend/data
depends_on:
- ollama
ollama:
image: ollama/ollama
expose:
- 11434/tcp
ports:
- 11434:11434/tcp
healthcheck:
test: ollama --version || exit 1
command: serve
volumes:
- ollama:/root/.ollama
deploy:
resources:
reservations:
devices:
- driver: nvidia
device_ids: ['all']
capabilities: [gpu]
tunnel:
image: cloudflare/cloudflared:latest
restart: unless-stopped
environment:
- TUNNEL_URL=http://webui:8080
command: tunnel --no-autoupdate
depends_on:
- webui
volumes:
ollama:
open-webui:
```
The Compose file defines the individual services that make up the entire application. Here, we have three services:
- webui,
- ollama ,
- and tunnel.
The webui service acts as your user interface for interacting with Ollama AI models. It fetches data from the optional ollama service (the AI model server) running on the same network, and lets you manage and use your models visually. You can access the web interface at http://localhost:8080 if running locally. The ollama service itself (optional) handles running your models, and can leverage your NVIDIA GPU for faster computations. Finally, the tunnel service provides a secure way to access the web interface remotely through Cloudflare.
## Bringing up the Stack
```
docker compose up -d
```
You will see the following services:
```
docker compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
cloudflare-ollama-1 ollama/ollama "/bin/ollama serve" ollama About a minute ago Up About a minute (healthy) 0.0.0.0:11434->11434/tcp
cloudflare-tunnel-1 cloudflare/cloudflared:latest "cloudflared --no-au…" tunnel About a minute ago Up About a minute
cloudflare-webui-1 ghcr.io/open-webui/open-webui:main "bash start.sh" webui About a minute ago Up About a minute 0.0.0.0:8080->8080/tcp
```

## Conclusion
This setup empowers you to unlock the potential of your AI models both locally and remotely. With Ollama, Ollama UI, and Cloudflare working in tandem, you gain a powerful and accessible platform for exploring and utilizing AI technology.
| ajeetraina |
1,865,356 | Enhanced Container Isolation(ECI) vs. Rootless Docker: Securing Your Docker Desktop Workflows | Docker Desktop is a popular tool for developers working with containers on Windows, Linux and macOS.... | 0 | 2024-05-26T04:21:39 | https://collabnix.com/enhanced-container-isolationeci-vs-rootless-docker-securing-your-docker-desktop-workflows/ | docker, security, containers, developer | [Docker Desktop](https://www.docker.com/products/docker-desktop/) is a popular tool for developers working with containers on Windows, Linux and macOS. But as containerized applications become more prevalent, so does the need for robust security. This is where Enhanced Container Isolation (ECI) and Rootless Docker come into play. Both offer security benefits, but they work in different ways. Let's break down the key differences:
## Rootless Docker: Running Lean on Linux
Imagine a scenario where you want to leverage Docker on a bare-metal Linux machine. Rootless Docker allows you to do just that. It essentially strips Docker Engine of its root privileges, enabling regular users to install and manage containers without needing administrative access. This enhances security by minimizing the attack surface.
However, Rootless Docker has limitations. It's not currently supported within Docker Desktop because Docker Desktop already virtualizes the Docker Engine, providing a layer of isolation. Additionally, Rootless Docker might encounter restrictions on certain functionalities that require elevated privileges.
## Enhanced Container Isolation: Fortifying Docker Desktop
ECI takes a different approach. It focuses on isolating the containers themselves, not the Docker Engine. When enabled in Docker Desktop, ECI ensures that containers run within a Linux User Namespace. This creates a stronger barrier between the containers and the underlying Docker Desktop Linux VM.
Think of it like this: ECI prevents a containerized application from reaching outside its designated space and potentially tampering with the virtual machine's security settings. This is particularly valuable for developers who rely on Docker Desktop for development workflows and want an extra layer of protection.
Now, let's delve deeper into the inner workings of ECI and discover the technology that fuels its power: Sysbox.
## Sysbox: A Tailored Runtime for Enhanced Isolation
While Docker traditionally relies on the OCI runc runtime to manage containers, ECI leverages a customized version of Sysbox. Sysbox itself is a derivative of runc, but with key modifications designed to elevate container isolation standards and optimize workload execution. This customized version, included in Docker Desktop since version 4.13, acts as the backbone for ECI's security enhancements.
## Seamless Transition for Users: Sysbox Runs Under the Hood
The beauty of ECI lies in its user-friendliness. When enabled, containers launched through familiar commands like `docker run` or `docker create` automatically utilize Sysbox instead of the standard runc runtime. This happens behind the scenes, requiring no additional configuration from the user. Developers can continue working with containers as usual, while ECI silently strengthens the security posture.
## Taming the Privileged Beast: Secure Execution Even with "--privileged"
Containers launched with the previously risky `--privileged` flag can now be harnessed securely with ECI. This flag typically grants elevated privileges, posing a potential security threat. However, ECI effectively mitigates this risk by ensuring such containers cannot breach the Docker Desktop virtual machine (VM) or compromise other containers.
## A Note on Compatibility and Differentiation
It's important to remember that ECI is distinct from Docker Engine's userns-remap mode and Rootless Docker. While they share the goal of improved security, they operate differently. We'll explore these distinctions in more detail in a future post.
Sysbox serves as the invisible force behind ECI's enhanced container isolation. It seamlessly integrates with Docker Desktop, offering robust security without compromising user experience. Stay tuned for further exploration of ECI's unique approach compared to other security mechanisms.
## Key Takeaways:
- Rootless Docker: Ideal for native Docker usage on Linux, offering improved security by running containers without root privileges.
- Enhanced Container Isolation: Designed specifically for Docker Desktop, strengthening the isolation between containers and the Docker Desktop Linux VM.
## Choosing the Right Approach
The best option depends on your environment. If you're using Docker natively on Linux and want a lightweight solution with improved security, Rootless Docker might be the way to go. But for developers leveraging Docker Desktop and seeking enhanced container isolation within the virtualized environment, ECI is the clear winner.
Remember, security is an ongoing process. Both ECI and Rootless Docker offer valuable security features, but they should be used in conjunction with other security best practices to ensure your containerized workflows remain safe and sound. | ajeetraina |
1,865,362 | How important is having great soft skills as a Software Engineer for personal growth and career success | As an Engineer, I would like to understand what I need to do to ensure that my soft skills are... | 0 | 2024-05-26T04:16:24 | https://dev.to/aws-builders/how-important-is-having-great-soft-skills-as-a-software-engineer-for-personal-growth-and-career-success-46o1 | > As an Engineer, I would like to understand what I need to do to ensure that my soft skills are equally as good as my technical skills. Technical skills are the fundamental requirements for engineers; however, soft skills also play a vital role in career success, advancement, and effectiveness in a working environment.
In the modern workplace, strong soft skills are vital for career success and personal growth. Soft skills such as clear and effective communication with colleagues, teamwork, problem-solving, and adaptability are now distinguishing factors among otherwise great engineers.
I will walk you through some practical tips and suggestions on how you can enhance your soft skills as a software engineer. These suggestions come from my own experience and from studying the success stories of some great software engineers.
**Step 1: Active Listening**
It is very important to cultivate the habit of active listening and being fully present in a conversation. Avoid interrupting, and focus on understanding the message the speaker is trying to pass across. One example is taking the time to fully understand a client's requirements before starting to execute an assignment.
Additionally, active listening means not only hearing the speaker out but also confirming that you understand the message, by summarizing what the speaker has said and asking clarifying questions where necessary.
**Step 2: Effective Communication**
When delivering your message, ensure it is clear and concise: avoid jargon and irrelevant details, and be direct. It is also very important to pay attention to body language and to maintain eye contact with your audience.
Also, practice engaging in conversation with non-technical stakeholders and getting feedback to ensure that your stakeholders can understand your message.
**Step 3: Teamwork and Collaboration**
It is essential to understand your team's goals and objectives; this will help you see how you can add value and be an effective member of your team. Develop the habit of working toward a shared purpose, and carry every member of the team along when you are working on an important feature. Ensure you regularly showcase your work and contribute positively to the growth and development of the team. During conflict resolution, it is vital to use empathy to understand the different options and find an agreed common resolution that every member of the team signs off on.
**Step 4: Developing Problem-Solving and Critical Thinking Skills**
Cultivate the skills of breaking down problems into manageable and smaller parts or deliverables; this will help you to achieve your tasks faster and boost your confidence toward working on more complex tasks. Use frameworks like SWOT analysis to evaluate your situation.
Critical thinking is a crucial soft skill; you should regularly engage in brainstorming sessions and be open to unconventional solutions to solving problems.
You should also stay open to continuous learning as technologies and methodologies change and emerge.
**Step 5: Adaptability and Flexibility**
Always be on the lookout for opportunities to embrace change whenever it presents itself, and see change as an opportunity for growth and development. As an engineer, embrace every opportunity to learn new technologies, and subscribe to learning platforms such as Coursera, Udemy, Udacity, and LinkedIn Learning, to mention but a few, to acquire new technical and soft skills.
**Step 6: Leadership and Management Skills**
Taking ownership of a project and seeing it through to successful completion is a critical soft skill; it shows that you are responsible and reliable. Whenever you have an opportunity to lead projects and teams, take it on effectively and see it as a chance to showcase your skills and be at the top of your game.
**Conclusion**
Engineers need to invest time and effort in developing their soft skills to achieve a successful career and stay ahead of their peers. Improving your soft skills requires conscious, intentional effort and continuous practice. By focusing on active listening, effective communication, teamwork and collaboration, problem-solving and critical thinking, adaptability and flexibility, and leadership and management skills, you can significantly enhance your effectiveness and succeed in your workplace. Embrace the challenge by investing in your personal growth, and see how your soft skills pave the way for new opportunities and achievements.
| igeadetokunbo | |
1,865,360 | This SUPERIOR way ov defining enums in JavaScript! | The Standard Way ov Defining An Enum In JavaScript, the standard way to define an enum –... | 0 | 2024-05-26T04:02:24 | https://dev.to/baenencalin/this-superior-way-ov-defining-enums-in-javascript-4ok9 | javascript, webdev | ## The Standard Way ov Defining An Enum
In JavaScript, the standard way to define an enum – according to the TypeScript compiler – is:
```javascript
"use strict";
var Fruit;
(function (Fruit) {
Fruit[Fruit["BANANA"] = 0] = "BANANA";
Fruit[Fruit["ORANGE"] = 1] = "ORANGE";
Fruit[Fruit["APPLE"] = 2] = "APPLE";
})(Fruit || (Fruit = {}));
```
Or, if we strip the (unnecessary) mapping from number to string, we can do:
```javascript
"use strict";
var Fruit = {
"BANANA": 0,
"ORANGE": 1,
"APPLE": 2
};
```
## The Better Way ov Defining An Enum
... However, there is a better way to define an enum – and the method will ensure:
1. the enum values are unique,
2. the values can't be overridden,
3. the definition is hoisted.
```javascript
var Fruit;
{
Fruit = Object.defineProperties((Fruit = Object.create(null)), {
BANANA: {writable: false, value: Symbol("BANANA")},
ORANGE: {writable: false, value: Symbol("ORANGE")},
APPLE: {writable: false, value: Symbol("APPLE")}
});
}
``` | baenencalin |
1,865,358 | JavaScript Concept : Memory Allocation(Stack and Heap) and Behavior | Good day everyone! Today we will discuss a JavaScript Concept Memory Allocation and Behavior. This... | 0 | 2024-05-26T03:54:57 | https://dev.to/sromelrey/javascript-concept-memory-allocation-and-behavior-485b | javascript, programming, beginners, deeplearning | Good day everyone! Today we will discuss a JavaScript Concept Memory Allocation and Behavior. This concept is crucial to understand deeply on how JavaScript works. Learn alongside me and Enjoy reading!
### Goals and Objectives for this Topic:
- Understand how **_memory allocation_** and **_behavior_** function in:
- Primitive types
- Reference types
In JavaScript engines, there are two primary memory spaces used for data allocation: the stack and the heap.
#### Primitive Types
- **Storage:** Primitive types are stored in **_Stack Memory_** . The stack is generally used for static memory allocation, which includes fixed-size variables.
- **Value-based:** When you assign or pass a `primitive type`, it is _done by value_. This means that the actual value is copied.
##### Example:
```javascript
let a = 10;
let b = a; // Copies the value of a into a new memory spot for b
b += 5; // Changes in b do not affect a
console.log(a); // 10
console.log(b); // 15
```
#### Reference types
- **Storage:** Reference types are stored in **_Heap Memory_**. The heap is used for dynamic memory allocation, where the size of the structure can change over time.
- **Reference-based:** When you assign or pass reference types, it is _done by reference_. This means that instead of copying the actual data, a reference (or a pointer) to the object in memory is copied.
Example:
```javascript
let obj1 = { value: 10 };
let obj2 = obj1; // Copies the reference, not the actual object
obj2.value = 15; // Changes the value via obj2 also affect obj1
console.log(obj1.value); // 15
console.log(obj2.value); // 15
```
## How JavaScript Manages Memory
JavaScript automatically manages memory with a garbage collector, freeing developers from manually deallocating memory. The garbage collector periodically frees memory used by data that is no longer accessible.
#### 1. Stack
- The stack is a Last-In-First-Out (LIFO) data structure used for storing:
- Local variables declared within the function.
- Arguments passed to the function.
- The return address (where to return after the function finishes execution).
- It's fast and has a fixed size, pre-allocated by the OS. Once a function completes, its data is automatically removed, freeing space for new calls.
#### 2. Heap
- The heap is an unstructured, flexible memory area for dynamic allocation. It's slower than the stack but stores:
- Global variables
- Objects and arrays
- Dynamically allocated data
- Garbage collection automatically reclaims unused memory, preventing leaks.
##### Summary of the key differences:

| | Stack | Heap |
| --- | --- | --- |
| Stores | Primitive values, local variables, call frames | Objects, arrays, dynamic data |
| Allocation | Static, fixed size (LIFO) | Dynamic, flexible size |
| Speed | Fast | Slower |
| Cleanup | Automatic when a function returns | Garbage collected |
Understanding the roles of the stack and heap is crucial for writing efficient JavaScript code. You should strive to minimize the use of global variables and large data structures within functions to optimize memory usage and avoid stack overflows. Thanks for reading ❤️❤️❤️!
| sromelrey |
1,865,354 | How to Fix React Router's Scroll Position on Page Transitions | When working on a React project recently, I noticed an odd behavior: whenever I navigated from one... | 0 | 2024-05-26T03:41:58 | https://dev.to/kingjames_x/how-to-fix-react-routers-scroll-position-on-page-transitions-7cb | react, reactjsdevelopment, webdev | When working on a React project recently, I noticed an odd behavior: whenever I navigated from one page to another using the navigation bar, the new page wouldn't start from the top. If I was on the footer section or anywhere below the navbar, the transition seemed abrupt, as if there was no smooth scrolling to the top of the new page.
The quick fix for this issue is to create a root layout if you don't have one already, and then make all other routes children of that layout element. Let's dive into the code.
Initially, my router configuration looked like this:
```
const router = createBrowserRouter([
{
path: "/",
element: <Root />,
errorElement: <ErrorPage />,
},
{
path: "/about",
element: <About />,
errorElement: <ErrorPage />,
},
{
path: "/contact us",
element: <Contact />,
errorElement: <ErrorPage />,
},
]);
```
To resolve the scrolling issue, the first step is to create a layout component (e.g., layout.tsx or layout.jsx):
```
import { Outlet, useLocation } from "react-router-dom";
import { useLayoutEffect } from "react";
const Layout = () => {
const location = useLocation();
useLayoutEffect(() => {
document.documentElement.scrollTo({ top: 0, left: 0, behavior: "instant" });
}, [location.pathname]);
return (
<div>
{/* you could render your navbar component here */}
<Outlet />
{/* you could render your footer component here */}
</div>
);
};
export default Layout;
```
Note: It's recommended to render your navbar and footer components within this layout instead of repeating them on every component.
Next, in your main entry point (e.g., main.tsx or wherever you have your router configuration), import the layout and use it as follows:
```
const router = createBrowserRouter([
{
path: "/",
element: <Layout />,
children: [
{
path: "/",
element: <Root />,
errorElement: <ErrorPage />,
},
{
path: "/about",
element: <About />,
errorElement: <ErrorPage />,
},
// Add other routes here
],
},
]);
```
With this configuration, whenever you navigate from one page to another, the new page should always start from the top, providing a smooth scrolling experience.
| kingjames_x |
1,865,353 | Windows 11 Wildest Shortcuts - Longest Keyboard Shortcuts | Windows 11 Shortcut Keys: We've all heard of the legendary shortcuts that supposedly save time and... | 0 | 2024-05-26T03:41:51 | https://dev.to/winsidescom/windows-11-wildest-shortcuts-longest-keyboard-shortcuts-588h | windowsshortcut, keyboardshortcut, windows11, microsoft | <strong>Windows 11 Shortcut Keys</strong>: We've all heard of the legendary shortcuts that supposedly save time and make our digital lives a breeze. But what happens when these shortcuts turn into <strong>cryptic incantations</strong>, requiring the dexterity of a concert pianist to execute? Windows 11 doesn’t just dip its toes in the water; it dives headfirst into the deep end. We’re talking about combinations so <strong>intricate</strong> that they might as well be the secret handshake of the tech elite. Let’s unravel these mind-boggling Windows 11 Shortcut Keys and see if they are the ultimate productivity hacks or just plain overkill. <strong>Check out: <a href="https://winsides.com/how-to-create-shutdown-shortcut-button-windows-11/">How to create a Shutdown Shortcut button in Windows 11?</a></strong>
Keyboard shortcuts are designed to save us time and make our lives easier, right? But what happens when these shortcuts start to feel like a full-body workout for your fingers? Here are some of the most elaborate combinations Windows 11 has to offer:
<img class="wp-image-670 size-full" src="https://winsides.com/wp-content/uploads/2024/05/Windows-11-Keyboard-Shortcuts.jpg" alt="Windows 11 Keyboard Shortcuts" width="1920" height="1080" /> Windows 11 Keyboard Shortcuts
<kbd>SHIFT</kbd> + <kbd> CTRL </kbd> + <kbd> ALT </kbd> + <kbd> WINDOWS KEY </kbd> + <kbd> L </kbd> = LinkedIn
<kbd>SHIFT</kbd> + <kbd> CTRL </kbd> + <kbd> ALT </kbd> + <kbd> WINDOWS KEY </kbd> + <kbd> T </kbd> = Microsoft Teams
<strong>Imagine this scenario</strong>: you're in the midst of a frantic multitasking session, juggling emails, spreadsheets, and video calls. Suddenly, a wild thought appears: "I need to open LinkedIn right now!" Do you reach for the mouse, navigate to your browser, and click on the bookmark like a mere mortal? Or do you unleash the full fury of the above-mentioned "Shortcut" and teleport to LinkedIn in a blaze of glory? Well, the shortcut opens LinkedIn in your default web browser. In the same manner, it opens Microsoft Teams (the collaboration and productivity application from Microsoft) in your <strong>default web browser</strong>. Even if you have the dedicated application for Microsoft Teams installed from the <strong><a href="https://apps.microsoft.com/" target="_blank" rel="noopener">Windows Store</a></strong>, the above shortcut will open Teams in your web browser, and you may be asked to log in to your account.
Well, it is not the end of it. Here are some of the interesting Windows 11 Shortcut keys.
<img class="wp-image-672 size-full" src="https://winsides.com/wp-content/uploads/2024/05/Windows-11-Long-Keyboard-Shortcuts.jpg" alt="Windows 11 Long Keyboard Shortcuts" width="1920" height="1080" /> Windows 11 Long Keyboard Shortcuts
<kbd>SHIFT</kbd> + <kbd> CTRL </kbd> + <kbd> ALT </kbd> + <kbd> WINDOWS KEY </kbd> + <kbd> W </kbd> = Microsoft 365
<kbd>SHIFT</kbd> + <kbd> CTRL </kbd> + <kbd> ALT </kbd> + <kbd> WINDOWS KEY </kbd> + <kbd> X </kbd> = Microsoft Excel
<kbd>SHIFT</kbd> + <kbd> CTRL </kbd> + <kbd> ALT </kbd> + <kbd> WINDOWS KEY </kbd> + <kbd> P </kbd> = Microsoft PowerPoint
Let’s not kid ourselves, these shortcuts are a bit on the wild side. All the above-mentioned applications will open in your default web browser and you may be asked to log in to your account. They’re impressive to know and can certainly make you feel like a tech wizard, but are they genuinely useful in the hustle and bustle of daily tasks?
<h2>Take away:</h2>
Despite the fun and flair of these <strong>Windows 11 Shortcut Keys</strong>, they do highlight an essential aspect of productivity: sometimes, less is more. Shortcuts are supposed to be quick and intuitive, not a <strong>finger-twisting exercise</strong>. While mastering these combos can give you a sense of accomplishment, the real question is whether they make your workflow smoother or just add an extra layer of complexity. So, <strong>should you embrace these long shortcuts?</strong> It’s up to you. Share your thoughts. <strong>Happy Coding! Peace out!</strong>
Source: [Windows 11 Wildest Shortcuts](https://winsides.com/windows-11-shortcut-keys-longest-keyboard-shortcuts/) | winsidescom |
1,865,352 | sqdqsd | qsdqsdq | 0 | 2024-05-26T03:33:48 | https://dev.to/issam_assiyadi/sqdqsd-36ng | javascript, webdev, beginners | qsdqsdq | issam_assiyadi |
1,865,351 | Breaking Down DeFi Barriers: MIMI Facilitates Efficient Value Growth for Small Asset Cross-Chain | In the new era of digital finance, DeFi has gradually become mainstream. However, despite significant... | 0 | 2024-05-26T03:32:12 | https://dev.to/mimi_official/breaking-down-defi-barriers-mimi-facilitates-efficient-value-growth-for-small-asset-cross-chain-44pf | In the new era of digital finance, DeFi has gradually become mainstream. However, despite significant growth in the DeFi market over the past few years, users still face some major challenges while enjoying its many conveniences and profits. These challenges include complex cross-chain operations, high liquidity yield thresholds, and a lack of transparent profit distribution mechanisms.
MIMI is a liquidity protocol based on multi-chain aggregation, committed to building a decentralized financial platform for the widest range of Web3 applications. MIMI aims to provide a seamless experience for all users in managing multi-chain assets by simplifying DeFi processes, lowering entry barriers, and efficiently managing cross-chain small assets.
Advantages of Low Entry Thresholds for MIMI:
Simplified DeFi Processes:
MIMI greatly simplifies the staking and lending process of DeFi through innovative technological means and user-friendly design. Traditional DeFi platforms often require users to have high technical knowledge and operational skills, while MIMI aims to lower this barrier. With an intuitive user interface and automated operation processes, users can easily complete staking and lending operations without complex maneuvers, allowing them to focus more on investment and returns.
Low Threshold Participation:
In traditional financial products and most DeFi platforms, users with small amounts of funds are often overlooked due to high entry barriers. However, MIMI enables all users, regardless of their funds, to easily participate and earn substantial profits by offering low-threshold liquidity yield products. MIMI's smart contracts and AI-driven technology ensure that every user can fairly participate in the DeFi ecosystem and enjoy the benefits of decentralized finance.
User-Friendly Interface:
MIMI's user interface design focuses on simplicity and usability, aiming to provide the best experience for users. Users do not need complex operational steps; they can complete staking, lending, and profit viewing operations with just a few clicks. This user-friendly design not only lowers the usage threshold but also greatly improves user efficiency, enabling more users to quickly get started and enjoy the convenience of DeFi.
Utilizing Small Asset Cross-Chain:
Convenience of Cross-Chain Operations:
In the DeFi ecosystem, the importance of cross-chain operations is self-evident. Users often need to transfer assets between different blockchain networks to optimize investment strategies and diversify risks. However, traditional cross-chain operations are complex and costly, often discouraging users. MIMI greatly simplifies this process through efficient cross-chain protocols. Users can seamlessly manage multi-chain assets on one platform without cumbersome operation steps, greatly improving the efficiency and convenience of asset management.
Efficient Liquidity Strategies:
MIMI introduces unique liquidity strategies and intelligent algorithms to maximize the profits of users' small assets. Through advanced AI data analysis and smart contract technology, MIMI can automatically optimize liquidity allocation, ensuring that every user's funds are efficiently utilized. Regardless of the size of the user's funds, MIMI can help them achieve the best investment returns through intelligent strategies. This efficient liquidity strategy not only enhances user investment returns but also strengthens the overall liquidity of the platform.
Technical Safeguards:
To ensure the security of user funds, MIMI adopts fully homomorphic encryption and AI intelligent risk control technology. Fully homomorphic encryption technology ensures the security of user data during transmission and storage, while AI intelligent risk control systems can monitor on-chain data in real-time and provide timely risk warnings. These technological measures effectively reduce operational risks, protect user asset security, and enable users to confidently engage in cross-chain operations and investments.
User Profit and Platform Transparency:
Transparent Profit Distribution:
MIMI adheres to the principle of complete transparency in profit distribution, using blockchain technology to ensure that all profit distribution processes are open and traceable. Users can view their investment status and profit in real-time, ensuring the transparency and traceability of every transaction. This transparent mechanism not only enhances user trust but also enhances the credibility of the platform.
Real-Time Profit Viewing:
On the MIMI platform, users can view their profit status at any time. The platform provides an intuitive profit display interface, allowing users to track their investment returns in real-time. This real-time viewing function enables users to timely understand their financial status and make wiser investment decisions. Meanwhile, the platform also provides detailed profit analysis to help users optimize their investment strategies and further increase profits.
Maximizing Profits:
Through diversified financial products and smart contract technology, MIMI helps users optimize asset allocation and maximize profits. The platform offers various investment products such as stablecoins and index tokens, allowing users to choose suitable investment portfolios according to their needs. Smart contracts automatically execute optimal investment strategies, ensuring that users' funds are efficiently utilized. Regardless of market fluctuations, MIMI can help users achieve stable and substantial investment returns.
MIMI's transparent profit distribution mechanism and real-time profit viewing function not only enhance user participation and trust but also provide users with more options for profit optimization. Through these innovative measures, MIMI ensures that every user can enjoy a fair and just investment environment and maximize profits.
With its unique advantages of low entry thresholds and cross-chain utilization of small assets, MIMI has successfully addressed the main challenges of the current DeFi market. The platform simplifies DeFi processes, lowers participation barriers, and enables more users to easily enter this innovative financial ecosystem. Meanwhile, MIMI's efficient cross-chain operations and intelligent liquidity strategies help users maximize asset profits, providing an excellent investment experience.
In the future, MIMI will continue to optimize platform functions and user experience, launch more innovative financial products and services, and meet the needs of different user groups. The platform will further expand its ecosystem, cooperate with more Web3 content platforms and financial institutions, and create a comprehensive encrypted symbiotic ecosystem. Through continuous technological innovation and market expansion, MIMI will lead the development of decentralized finance and achieve mutual growth for users and the platform.
| mimi_official | |
1,865,350 | protflio.test | this post is just for test | 0 | 2024-05-26T03:24:43 | https://dev.to/issam_assiyadi/protfliotest-3ean | this post is just for test | issam_assiyadi | |
1,865,349 | Why Dockerize a React App? | I'm a web developer and I've faced the challenge many times of ensuring my React applications run... | 0 | 2024-05-26T03:23:29 | https://dev.to/snehasishkonger/why-dockerize-a-react-app-17h1 | docker, webdev, react, javascript | I'm a web developer and I've faced the challenge many times of ensuring my React applications run smoothly across different environments. Making my app consistent in different environments is always difficult, especially when dealing with complex applications. Recently, I decided to Dockerize my React app, and it changed my development and deployment process. So, I decided to share why I believe you should also Dockerize your React app.
## The Problems Without Dockerizing a React App
Before I get to the benefits, let me talk about the issues I encountered without Docker.
**Inconsistent Environments**
One of the most frustrating problems is the inconsistency between development, testing, and production environments. We've all experienced the "_it works on my machine_" dilemma. Different operating systems, software versions, and configurations can lead to unexpected behaviour and bugs that are hard to replicate.
**Dependency Management**
For me, managing dependencies across various environments is a nightmare. Each environment can have different versions of Node.js, npm packages, or system libraries. These discrepancies often lead to conflicts and version mismatches that are time-consuming to resolve.
**Complex Deployment Processes**
Deploying a React app without Docker can be cumbersome. It involves manually configuring servers, setting up environments, and ensuring all dependencies are correctly installed. This manual process is not only error-prone but also time-consuming.
**Scalability Issues**
Scaling a React application without Docker presents another set of challenges. Traditional deployment methods struggle with horizontal scaling and load balancing, making it difficult to handle increased traffic and demand.
## How Does Dockerizing a React App Fix These Problems?
When I Dockerized my React app, these problems started to fade away.
**Consistency Across Environments**
Docker ensures that my app runs the same way in all environments. I can replicate the exact setup across development, staging, and production by encapsulating the application and its dependencies within a Docker container. This consistency significantly reduces bugs and makes the debugging process more straightforward.
**Simplified Dependency Management**
With Docker, all dependencies are encapsulated within the container. This encapsulation helps to manage version conflicts or missing packages. I can define all necessary dependencies in the Dockerfile, to ensure that they are installed and configured correctly every time.
**Streamlined Deployment Process**
Another great advantage of deploying a Dockerized app is that with a single `docker run` command, I can run my application in any environment. Using Docker Compose further simplifies the process by allowing me to define and manage multi-container setups with ease. No more manual server configurations or environment-specific setups.
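As a concrete illustration, here is a minimal multi-stage Dockerfile sketch for a React app. It assumes a standard Create React App-style project (a `package-lock.json` for `npm ci`, an `npm run build` script, and a `build/` output directory; adjust these for your own setup):

```
# Build stage: install dependencies and compile the React app
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Serve stage: ship only the static build output with nginx
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
```

With this in place, deployment really is a couple of commands: `docker build -t my-react-app .` followed by `docker run -p 8080:80 my-react-app` (the image name and port mapping here are just examples).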
**Enhanced Scalability**
Also, Docker's lightweight nature and portability make it ideal for scaling applications. I can easily replicate containers to handle increased traffic, and tools like Kubernetes can automate the orchestration, making horizontal scaling and load balancing efficient and effective.
### Additional Benefits of Dockerizing a React App
**Isolation**
Docker provides excellent isolation between applications. This isolation not only enhances security by keeping applications separated but also ensures that the behaviour of one application does not affect another.
**Portability**
Docker images are highly portable. I can build an image once and run it anywhere, be it on my local machine, a staging server, or a production environment in the cloud. This portability ensures consistent deployments across different platforms and providers.
**Resource Efficiency**
Docker containers are more resource-efficient compared to traditional virtual machines. They share the host system's kernel, which reduces overhead and allows for more efficient use of system resources.
**Simplified Testing and Debugging**
Testing and debugging become more manageable with Docker. I can quickly spin up isolated environments for testing, ensuring that the test environment closely mimics production. Docker Compose allows me to set up complex test environments with multiple services, making integration testing more straightforward.
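For example, a minimal `docker-compose.yml` for such an isolated test environment might look like the sketch below; the service names and the Postgres backing service are illustrative assumptions, not part of any specific project:

```
services:
  web:
    build: .                    # the Dockerized React app
    ports:
      - "8080:80"
  db:
    image: postgres:16-alpine   # example backing service for integration tests
    environment:
      POSTGRES_PASSWORD: example
```

Running `docker compose up` then brings up the whole test environment with one command, and `docker compose down` tears it down cleanly.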
## Real-World Use Cases
Seeing the theoretical benefits of Dockerizing a React app is one thing, but understanding its practical impact through real-world use cases can provide a clearer picture. Here, I'll share some experiences from colleagues and industry examples that highlight the tangible advantages of Dockerization.
**Startup Efficiency and CI/CD Pipelines**
A colleague working at a tech startup experienced a significant boost in their development workflow after adopting Docker. The startup had been struggling with maintaining consistency across different environments. Developers often found themselves fixing bugs that only appeared in production, which led to frustrating delays.
By Dockerizing their React app, they created a uniform environment that mirrored production on each developer's machine. This consistency drastically reduced environment-specific bugs. Moreover, their CI/CD pipeline became more robust. They used Jenkins to automate the building and testing of Docker images. Each commit triggered a build process, creating a new Docker image that was then tested and, upon passing all tests, deployed to staging or production environments. This streamlined workflow not only saved time but also improved the overall reliability of their deployments.
**Scalability in a Microservices Architecture**
Another friend working at a large enterprise reported how Dockerization facilitated scaling their React application within a microservices architecture. Before Docker, deploying new features or scaling services to meet increased demand was a cumbersome process that often required manual intervention and was prone to human error.
With Docker, they could encapsulate each microservice, including their React front-end, within its own container. This encapsulation made it easy to replicate and scale services horizontally. They used Kubernetes for orchestration, which automated the deployment, scaling, and management of containerized applications. Kubernetes' self-healing capabilities ensured that if a container failed, it would automatically be restarted, enhancing the reliability of their system.
**E-commerce Platform Deployment**
An e-commerce company struggled with deploying updates to their website without causing downtime. Their traditional deployment process was error-prone and often led to service interruptions, which affected user experience and revenue.
By Dockerizing their React application, they could deploy updates seamlessly. They utilized a blue-green deployment strategy, where the new version of their application was deployed to a separate environment (blue) while the current version (green) continued to serve traffic. Once the new version was verified to be running correctly, traffic was switched over to the blue environment, effectively updating the site with zero downtime. Docker's portability and consistent environment ensured that the deployment process was smooth and reliable every time.
These real-world examples show the practical benefits of Dockerizing React applications.
## Conclusion
Dockerizing my React app has been a transformative experience. It addressed many of the pain points I faced with environment inconsistencies, dependency management, and complex deployments. The added benefits of scalability, isolation, and resource efficiency make Docker an indispensable tool for modern web development.
If you haven't tried Dockerizing your React app yet, I highly recommend giving it a shot. The improvements in consistency, deployment ease, and scalability are well worth the effort.
For a more detailed guide, read this: [How to Dockerize a React App? - Scientyfic World](https://scientyficworld.org/how-to-dockerize-a-react-app/)
| snehasishkonger |
1,865,348 | Hi I am Kevin Iim | A post by Kevin Lim | 0 | 2024-05-26T03:21:53 | https://dev.to/kevin_lim_1bb99ded34a71ad/hi-i-am-kevin-iim-5ba1 | kevin_lim_1bb99ded34a71ad | ||
1,865,347 | Worst Advertising Platform Ever | If you are considering running online ads, and you are wondering if Microsoft Bing Advertising is as good... | 25,196 | 2024-05-26T03:18:33 | https://dev.to/mosbat/worst-advertising-platform-ever-33f0 | bing, advertising, microsoft, bad | If you are considering running online ads and are wondering whether Microsoft Bing Advertising is as good as it seems, I already tried it so you won't have to.
When I finally decided to try advertising online with Microsoft, I was excited for the fact that unlike Google, there is someone that you can actually talk to, a human ...
When you begin, you will be contacted by an "Account Manager" or support, which sounded great at the beginning. The UI is not impressive, but OK; you can learn it and get used to it after some time.
The setup is pretty much similar to Google Ads in almost everything. Their keyword suggestions aren't that great, but after some tweaking, you kind of get the hang of it.
I finished setting up everything, and next morning, it all started ....
I opened my inbox and found an email from Microsoft about an "Egregious Policy Violation". It was a very vague email telling me that I had violated one of their policies, followed by a list of dozens of possible reasons why the violation might have taken place, without any specific details.

I contacted their support about the email and asked them to tell me what the violation was and how I could fix it. However, I was shocked to learn that the support people had no idea what the violation was; they said it was handled by their "Policy team".
Just like Google Ads, there is an appeal form. I filled out the form without even knowing what the violation was, in the hope that they would take another look and help me somehow.
Few hours later, they said it was a mistake and my access was restored to the account.
The following 3 weeks were all about learning, tweaking, and creating new campaigns for my products.
I was feeling relaxed and had a peace of mind, just to wake up the next morning with another "Egregious policy violation" email.
I was absolutely furious, because I knew that my business was fully legitimate. I reached out to them in anger and frustration, still not even knowing what the issue was.
Their support, as usual, had no idea and asked me to submit an appeal form again!!
After going back and forth, they doubled down and said that I was banned for violating their terms, without providing any reason. I was literally treated like trash without even knowing why or what had happened.
So, eventually, I decided to move to another platform and asked them to close my account. I was absolutely dumb for even asking, since they responded that they were unable to do so until I resolved the issue they had banned me for.
I realized that Microsoft Advertising is a big scam, and there are countless other content creators who have had similarly awful experiences. I finally decided to put all their emails in my spam folder and moved on with my life to other platforms such as TikTok and others.

I'm sorry if you thought that Microsoft or Bing Advertising are going to be better. My recommendation is to stay away from them.
Do not allow platforms to take us for granted as small businesses or small creators.
Do not waste your time experimenting; do not repeat my mistake. I'd argue that you are better off spending as much time as you can researching the best platforms and which ones are indie friendly, because most of those big famous platforms will treat you like garbage if you are not a big mega corporation or don't have millions of dollars to spend.
We need alternatives to those terrible platforms, and before we finally have a small business friendly platform, they will keep treating us like this.
I won't dive deeper into the lack of professionalism in Microsoft. | mosbat |
1,865,345 | PAYMENT CONTROL | In the payment control. To use it correctly, you must enter the amount you are going to transfer or... | 0 | 2024-05-26T03:15:34 | https://dev.to/cristian_guzmn_60314f2eb/control-de-pago-ni5 | codepen | This is the payment control.
To use it correctly,
you must enter the amount you are going to transfer or charge.
Remember this ...
If you press transfer and select whichever boxes you want, the amount is sent to the state, which is the gray box 10.
If you press charge, the selected boxes charge the state, which is the gray box 10.
Perform the movements carefully so as not to cause losses or errors.
{% codepen https://codepen.io/Cristian-Guzm-n/pen/xxNEJEK %} | cristian_guzmn_60314f2eb |
1,865,343 | The King Plus Casino 2024 | King Plus Casino has a rich history dating back to its founding in the early 2000s. It started out as a small online platform, but adapted to technological advances and market trends... | 0 | 2024-05-26T03:14:01 | https://dev.to/luckygamble/deo-king-peulreoseu-kajino-2024-4ogl |

King Plus Casino has a rich history dating back to its founding in the early 2000s. It started out as a small online platform, but grew exponentially as it adapted to technological advances and market trends. Key milestones include the launch of its mobile app in 2010, the introduction of live dealer games in 2015, and a major site overhaul in 2020 that set a new standard for user experience.
Gaming Experience
When it comes to games, variety and quality are hallmarks of King Plus Casino. Players can choose from hundreds of slot games, ranging from classic fruit machines to the latest video slots with stunning graphics and immersive themes. Table game enthusiasts will find a wide range of options, including blackjack, roulette, baccarat, and poker, each available in multiple variants. For those who want a more authentic casino experience, the live dealer section offers real-time interaction with professional dealers, bringing the thrill of a real casino to your screen. [더킹플러스](https://luckygambleclub.com/더킹플러스/)
User Interface and Usability
The design and usability of King Plus Casino are top-notch. The website features a sleek, modern interface with intuitive navigation, making it easy for players to find their favorite games and features. The mobile app, available on both iOS and Android, delivers a seamless gaming experience on the go with all the functionality of the desktop version. Whether you access the casino from a computer or a mobile device, the user experience is consistently smooth and engaging.
Bonuses and Promotions
King Plus Casino is known for its generous bonuses and promotions. New players receive a substantial welcome bonus that includes a match on their first few deposits and free spins on popular slots. Regular promotions keep the excitement going with reload bonuses, cashback, and special event rewards. The loyalty program is another highlight, giving consistent players points that can be exchanged for cash, bonuses, and exclusive perks.
Payment Options
Flexibility and security in payment options are crucial for any online casino, and King Plus Casino excels in this area. Players can choose from a variety of deposit methods, including credit and debit cards, e-wallets such as PayPal and Skrill, bank transfers, and even cryptocurrencies. Withdrawals are processed efficiently, and most methods let players receive their winnings within a few business days. The casino uses advanced encryption technology to protect all financial transactions, giving players peace of mind.
Customer Support
Customer support at King Plus Casino is available 24/7, so help is always at hand. The support team can be reached via live chat, email, or phone, and is known for its quick, helpful responses. User feedback consistently praises the professionalism and friendliness of the support staff, making it clear that customer satisfaction is a top priority.
Security and Fair Play
King Plus Casino takes security and fair play seriously. The casino is fully licensed and regulated by reputable authorities and complies with all necessary legal standards. Random Number Generator (RNG) technology is used to ensure the fairness of all games, and regular audits by independent agencies provide further assurance of that fairness. Player protection policies are in place to safeguard personal and financial information and foster a secure gaming environment.
VIP Program
For players who want a higher level of gaming experience, King Plus Casino's VIP program offers a range of benefits. VIP members enjoy personalized customer service, higher withdrawal limits, exclusive bonuses, and invitations to special events. The program is designed to reward the most loyal players with a gaming experience fit for a king.
Responsible Gambling Initiatives
King Plus Casino is committed to promoting responsible gambling. The site offers a variety of tools to help players manage their gaming activity, including deposit limits, time limits, and self-exclusion options. In addition, there are resources for those struggling with gambling-related problems, including links to professional support organizations and helplines.
Community and Social Features
One of the standout aspects of King Plus Casino is its vibrant community. Players can interact with each other through chat features during live games, take part in community events and tournaments, and follow the casino's active social media channels for the latest news and updates. These features foster camaraderie and make the gaming experience more engaging.
Software Providers
King Plus Casino partners with some of the most respected software developers in the industry, including NetEnt, Microgaming, and Playtech. These collaborations ensure a diverse, high-quality game library in which each game delivers excellent performance and entertainment value. The reliability and innovation of these software providers contribute significantly to the casino's overall appeal.
Comparison with Competitors
In a fiercely competitive market, King Plus Casino differentiates itself through its extensive game selection, excellent user experience, and strong security measures. Compared with other leading online casinos, King Plus consistently ranks higher in player satisfaction and loyalty. Unique selling points, such as its comprehensive VIP program and outstanding customer support, set it apart from competitors.
User Reviews and Testimonials
What do players say about King Plus Casino? User reviews are overwhelmingly positive, with players frequently praising the variety of games, generous bonuses, and prompt customer support. Common complaints are relatively minor, often concerning occasional technical glitches or specific game preferences. Overall, the feedback highlights the casino's commitment to providing an enjoyable and reliable gaming experience.
King Plus Casino has solidified its position as a top online casino for 2024. With its rich history, impressive array of games, user-friendly design, and strong commitment to security and fair play, it offers an unmatched gaming experience. Whether you are a new player or a seasoned gambler, King Plus Casino has something to offer. Join the community today and see why casino enthusiasts around the world choose it.
Visit here: https://luckygambleclub.com/더킹플러스/ | luckygamble | |
1,864,930 | Docker Image vs Docker Layer | Docker image is a static file that contains everything needed to run an application, including the... | 0 | 2024-05-25T15:19:55 | https://dev.to/vaibhavhariaramani/docker-image-vs-docker-layer-39dn | **Docker image** is a static file that contains everything needed to run an application, including the application code, libraries, dependencies, and the runtime environment. It's like a snapshot of a container that, when executed, creates a Docker container.
A **Docker image** is composed of multiple **layers** stacked on top of each other. **_Each layer_** represents a specific modification to the file system (inside the container), such as adding a new file or modifying an existing one. Once a layer is created, it becomes immutable, meaning it can't be changed.
**Docker Layer**
The layers of a Docker image are stored in the Docker engine's cache, which ensures the efficient creation of Docker images.
Layers are what compose the file system for both Docker images and Docker containers.
It is thanks to layers that when we pull a image, you eventually don't have to download all of its filesystem. If you already have another image that has some of the layers of the image you pull, only the missing layers are actually downloaded. | vaibhavhariaramani | |
1,865,338 | Monsgeek M7W Unboxing, Review | Monsgeek M7W Unboxing, Review_哔哩哔哩_bilibili | 0 | 2024-05-26T02:42:35 | https://dev.to/loveuyeah/monsgeek-m7w-unboxing-review-a4j | <iframe src="//player.bilibili.com/player.html?isOutside=true&aid=1505117596&bvid=BV1qD42137Tf&cid=1558170676&p=1" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"></iframe>
[Monsgeek M7W Unboxing, Review_哔哩哔哩_bilibili](https://www.bilibili.com/video/BV1qD42137Tf/?spm_id_from=333.999.0.0&vd_source=710f85603b2d16e5fc915798062195d8) | loveuyeah | |
1,865,294 | MelodyLink: A Social Media App for Music Producers | This is a submission for the The AWS Amplify Fullstack TypeScript Challenge What I... | 0 | 2024-05-26T02:40:31 | https://dev.to/logarithmicspirals/melodylink-a-social-media-app-for-music-producers-22km | devchallenge, awschallenge, amplify, fullstack | *This is a submission for the [The AWS Amplify Fullstack TypeScript Challenge ](https://dev.to/challenges/aws)*
## What I Built
My app is a very simple social media app for music producers. Producers are able to post their tracks for others to see. Visitors can play the tracks and read what the artist has to say about them.
## Demo and Code
The demo of my app is at https://main.d1evgv1d8mr3bs.amplifyapp.com.
The code for the app is at https://github.com/h93xV2/melody-link.
For convenience, here are some screenshots. This one is a view of the homepage as seen by an anonymous guest:

This one shows a list of posts from the perspective of a logged-in user; note the additional "Delete" button not visible on the homepage:

The delete option is only available to the creator of a post.
## Integrations
My app uses data, authentication, serverless functions, and file storage.
- Posts are stored using the Amplify data API.
- Users are able to log in and out via Amplify authentication. Anybody can view posts, but only authenticated users are able to upload and remove posts.
- A serverless function is used to check whether or not a given username is already taken.
- File storage is used for storing the tracks which are attached to posts.
**Connected Components and/or Feature Full**
My project used the Authenticator connected component to create a simple login/logout UX pattern; it also allowed me to give users the option for a preferred username. Additionally, I used the Storage Manager connected component for managing file uploads during post creation. See the following screenshot for a look at where the Storage Manager is used on the profile page.

My project is feature full because it uses all four integrations. One of the hardest parts of this for me was figuring out how to check whether or not a username was already taken. To do this successfully, I had to figure out how to allow a serverless Lambda function access to Cognito's UserPool user list.
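The username check described above can be sketched roughly as follows. This is not the project's actual Lambda: the real handler would call Cognito's `ListUsers` API (via `ListUsersCommand` in `@aws-sdk/client-cognito-identity-provider`), and here the client, function names, and mock data are all illustrative so the logic can be shown self-contained:

```typescript
// Stand-in for the Cognito client so the flow can run without AWS.
type UserPoolClient = { listUsers: (filter: string) => Promise<{ Users: unknown[] }> };

// Cognito's ListUsers accepts a filter such as: preferred_username = "name"
function buildFilter(username: string): string {
  return `preferred_username = "${username.replace(/"/g, '')}"`;
}

async function isUsernameTaken(client: UserPoolClient, username: string): Promise<boolean> {
  const result = await client.listUsers(buildFilter(username));
  return result.Users.length > 0;
}

// Mocked client for illustration: pretends "alice" is already registered.
const mockClient: UserPoolClient = {
  listUsers: async (filter) =>
    filter.includes('"alice"') ? { Users: [{}] } : { Users: [] },
};

isUsernameTaken(mockClient, 'alice').then((taken) => console.log(taken)); // true
isUsernameTaken(mockClient, 'bob').then((taken) => console.log(taken));   // false
```

The filter-string approach works because `preferred_username` is one of Cognito's filterable attributes; the Lambda's execution role also needs permission to list users in the pool.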
| logarithmicspirals |
1,865,331 | File accessed using keys and managed identities - Azure Files and Azure Blobs | Create the storage account and managed identity Provide a storage account for the web... | 0 | 2024-05-26T02:28:15 | https://dev.to/olawaleoloye/file-accessed-using-keys-and-managed-identities-azure-files-and-azure-blobs-4ck4 | azure, fileaccess, identity, keys | ### Create the storage account and managed identity
**Provide a storage account for the web app.**
_In the portal, search for and select Storage accounts.
Select + Create.
For Resource group select Create new. Give your resource group a name and select OK to save your changes.
Provide a Storage account name. Ensure the name is unique and meets the naming requirements._
Follow our previous [tutorial](https://dev.to/olawaleoloye/file-shares-with-limited-access-corporate-virtual-networks-azure-files-and-azure-blobs-4p3p) for the above
**Select Review + Create.**

**Wait for the resource to deploy.**
_**Provide a managed identity for the web app to use.**_
**Search for and select Managed identities.
Select Create.
Select your resource group.**

**Give your managed identity a name.
Select Review and create, and then Create.**


_Assign the correct permissions to the managed identity. The identity only needs to read and list containers and blobs._


**Search for and select your storage account.
Select the Access Control (IAM) blade.
Select Add role assignment (center of the page).**

**On the Job functions roles page, search for and select the Storage Blob Data Reader role.**


**On the Members page, select Managed identity.
Select Select members, in the Managed identity drop-down select User-assigned managed identity.
Select the managed identity you created in the previous step.**

**Click Select and then Review + assign the role.
Select Review + assign a second time to add the role assignment.**

_Your storage account can now be accessed by a managed identity with the Storage Blob Data Reader permission._
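For reference, the same setup can be scripted. The following is a rough Azure CLI equivalent of the portal steps above; all resource names are placeholders, and you would need to be logged in with `az login`:

```sh
# Rough CLI equivalent of the portal steps above; names are placeholders.
az group create --name my-rg --location eastus
az storage account create --name mystorageacct --resource-group my-rg
az identity create --name my-identity --resource-group my-rg

# Grant the managed identity read/list access to blobs in the account.
az role assignment create \
  --assignee "$(az identity show -g my-rg -n my-identity --query principalId -o tsv)" \
  --role "Storage Blob Data Reader" \
  --scope "$(az storage account show -g my-rg -n mystorageacct --query id -o tsv)"
```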
### Secure access to the storage account with a key vault and key
**To create the key vault and key needed for this part of the lab, your user account must have Key Vault Administrator permissions.**
**In the portal, search for and select Resource groups.
Select your resource group, and then the Access Control (IAM) blade.
Select Add role assignment (center of the page).**

**On the Job functions roles page, search for and select the Key Vault Administrator role.**

**On the Members page, select User, group, or service principal.
Select Select members.
Search for and select your user account. Your user account is shown in the top right of the portal.
Click Select and then Review + assign.**


**Select Review + assign a second time to add the role assignment.
You are now ready to continue with the lab.**
_**Create a key vault to store the access keys.**_
**In the portal, search for and select Key vaults.**

**Select Create.**

**Select your resource group.**
**Provide the name for the key vault. The name must be unique.**

**Ensure on the Access configuration tab that Azure role-based access control (recommended) is selected.**

**Select Review + create.**

**Wait for the validation checks to complete and then select Create.**

**After the deployment, select Go to resource.**
**On the Overview blade ensure both Soft-delete and Purge protection are enabled.**

_**Create a customer-managed key in the key vault.**_
**In your key vault, in the Objects section, select the Keys blade.**

**Select Generate/Import and Name the key.
Take the defaults for the rest of the parameters, and Create the key**

### Configure the storage account to use the customer managed key in the key vault
_**Before you can complete the next steps, you must assign the Key Vault Crypto Service Encryption User role to the managed identity.**_
**In the portal, search for and select Resource groups.
Select your resource group, and then the Access Control (IAM) blade.
Select Add role assignment (center of the page).**

**On the Job functions roles page, search for and select the Key Vault Crypto Service Encryption User role.**

**On the Members page, select Managed identity.**
**Select Select members, in the Managed identity drop-down select User-assigned managed identity.
Select your managed identity.**

**Click Select and then Review + assign.
Select Review + assign a second time to add the role assignment.**

**Configure the storage account to use the customer-managed key in your key vault.**
**In the Security + networking section, select the Encryption blade.**
**Select Customer-managed keys.
Select a key vault and key. Select your key vault and key.**

**Select to confirm your choices.
Ensure the Identity type is User-assigned.
Select an identity.**

**Select your managed identity then select Add.
Save your changes.**

_If you receive an error that your identity does not have the correct permissions, wait a minute and try again._
### Configure a time-based retention policy and an encryption scope
**The developers require a storage container where files can't be modified, even by the administrator.**
**Navigate to your storage account.
In the Data storage section, select the Containers blade.**

**Create a container called hold. Take the defaults. Be sure to Create the container.**

**Upload a file to the container.**

**In the Settings section, select the Access policy blade.
In the Immutable blob storage section, select + Add policy.
For the Policy type, select time-based retention.
Set the Retention period to 5 days.
Be sure to Save your changes.**

**Try to delete the file in the container.
Verify you are notified that the blobs failed to delete due to the policy.**

_The developers require an encryption scope that enables infrastructure encryption._

**Navigate back to your storage account.
In the Security + networking blade, select Encryption.
In the Encryption scopes tab, select Add.
Give your encryption scope a name.
The Encryption type is Microsoft-managed key.
Set Infrastructure encryption to Enable.
Create the encryption scope.**

| olawaleoloye |
1,865,297 | Pathfinding Algorithms Part 2 with A* | This is a continuation of our discussion on pathfinding. In the first part of our discussion, we... | 0 | 2024-05-26T02:15:00 | https://excaliburjs.com/blog/Pathfinding%20Algorithms%20Part%202 | gamedev, typescript, pathfinding, astar |
This is a continuation of our discussion on pathfinding. In the first part of our discussion, we investigated Dijkstra's algorithm. This time, we are digging into A\* pathfinding.
[Link to Part 1](https://dev.to/excaliburjs/pathfinding-algorithms-part-1-55jk)
[Link to Pathfinding Demo](https://excaliburjs.com/sample-pathfinding/)
## Pathfinding, what is it?
Quick research on pathfinding gives a plethora of resources discussing it. Pathfinding is calculating the shortest path through some 'network'. That network can be tiles on a game level, it could be roads across the country, it could be aisles and desks in an office, etc. etc.
Pathfinding is also an algorithmic tool for calculating the shortest path through a graph network. A graph network is a series of nodes and edges forming a graph. For more information on this: [click here](https://www.google.com/search?q=Graph%20Theory).
For the sake of clarity, there are two algorithms we specifically dig into with this demonstration: Dijkstra's Algorithm and A\*. We studied Dijkstra's Algorithm in [Part 1](https://dev.to/excaliburjs/pathfinding-algorithms-part-1-55jk).
### A\* Algorithm
A\* is an algorithm for finding the shortest path through a graph that presents weightings (distances) between different nodes. The algorithm requires a starting node and an ending node, and uses a few metrics for each node to systematically find the shortest path. The properties of each node are fCost, gCost, and hCost. We will cover those in a bit.

## Quick History
The A\* algorithm originated as part of the Shakey project, which aimed to design a robot that could plan its own path and actions. The first paper on A\* was published in 1968, describing its initial heuristic function for calculating node costs.
A heuristic function is a practical, though not necessarily perfect, means of solving a problem.
Over the years the A\* algorithm has been refined slightly to become more optimized.
## Algorithm Walkthrough
### Load the Graph
We first load our graph, understanding which nodes are clear to traverse, and which nodes are blocked. We also need to understand the starting node and ending node as well.
### Cost the nodes
We first will assess the cost properties for each node. Cost is a term we are using that represents a distance between nodes. This will be a method that assigns the fCost, gCost, and hCost to each node.
Let's discuss these costs first. The costs are a weighting of each node with respect to its positioning between the starting and ending nodes.
The fCost of a tile is equal to the gCost plus the hCost. This is represented as such:
`f=g+h`
The gCost of the node is the distance cost between the current node and the starting node.
The hCost of the node is the 'theoretical' distance from the current node to the ending node. This is why we discussed heuristics earlier: this value is an estimate of the distance, a best guess. Guessing is easy for a rectangular tilemap: since all tiles are distance 1 from their neighbors in a grid, we can take the tile positions of the two nodes and use the Pythagorean theorem to estimate the distance. If the grid is irregular, some spatial data may need to be injected into the graph's creation to facilitate this heuristic, for example x/y coordinate locations.
Thus, the fCost is the sum of these two values. While simplistic, this is the value that is leveraged in the algorithm to determine the 'best' path.
### Setup Buffers
After we've looped through all the nodes and costed them appropriately, we will utilize a buffer called openNodes. We will push the starting node into this, as it is the only node we 'know' about as of yet. We will use this openNodes buffer for much of the iterations we conduct in this algorithm.
We will leverage another buffer, which we will call the 'checked' or 'closed' buffer; this is where the results of our algorithm will live as we process tiles from openNodes into it.
### Iteration
Then we get into the repeating part of the algorithm.
1. Look for the lowest F cost square in the open list. Make it the current square.
2. Move the current square to the closed buffer (list). Remove from openNodes, move to 'checked' nodes.
3. Check if the new current node is the end node; this is the finishing condition. If it is, use the parent node properties of each node to walk backwards to the starting node: that's the shortest path.
4. If it is not the end node, review all neighbor squares of the current square; if a neighbor is not traversable, ignore it.
5. Check whether each neighbor is in the checked/closed list of nodes; if not, assign its parent and add it to the open node list.
This series continues to iterate while neighbors are being added to the open node list.
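Put together, the iteration above can be sketched in plain JavaScript. This is a simplified 4-way grid version for illustration (with the usual cheaper-route update when a node is rediscovered), not the plugin's actual code:

```js
// Minimal A* over a square grid (0 = open, 1 = blocked); 4-way movement.
function aStar(grid, start, end) {
  const key = (n) => `${n.x},${n.y}`;
  const h = (n) => Math.hypot(end.x - n.x, end.y - n.y); // heuristic distance
  const open = [{ ...start, g: 0, f: h(start), parent: null }];
  const closed = new Map();
  while (open.length > 0) {
    open.sort((a, b) => a.f - b.f);
    const current = open.shift();           // 1. lowest-fCost node
    closed.set(key(current), current);      // 2. move it to the closed list
    if (current.x === end.x && current.y === end.y) {
      const path = [];                      // 3. walk parents back to start
      for (let n = current; n; n = n.parent) path.unshift({ x: n.x, y: n.y });
      return path;
    }
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nx = current.x + dx, ny = current.y + dy;
      if (ny < 0 || ny >= grid.length || nx < 0 || nx >= grid[0].length) continue;
      if (grid[ny][nx] === 1) continue;         // 4. ignore blocked neighbors
      if (closed.has(`${nx},${ny}`)) continue;  // 5. skip checked nodes
      const g = current.g + 1;
      const existing = open.find((n) => n.x === nx && n.y === ny);
      if (!existing) {
        open.push({ x: nx, y: ny, g, f: g + h({ x: nx, y: ny }), parent: current });
      } else if (g < existing.g) {              // found a cheaper route
        existing.g = g;
        existing.f = g + h(existing);
        existing.parent = current;
      }
    }
  }
  return null; // open list exhausted: no path exists
}

const grid = [
  [0, 0, 0, 0],
  [0, 1, 1, 0],
  [0, 0, 0, 0],
];
const path = aStar(grid, { x: 0, y: 0 }, { x: 3, y: 2 });
console.log(path.length); // 6 (start and end nodes included)
```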
### Example
Let's start with this example graph network.

We will manage our walkthrough with two different lists, open nodes and checked nodes. Black tiles above represent nodes that are not traversable. Let's define our start and stop nodes as indicated by the green S node and the blue E node.
The first step of A\* algorithm is costing all the nodes, and let's see if we can show this easily.
For more clarity on the 'costing' step, let's talk through the core loop that is applied to each tile.
My process is to loop through each tile and, assuming it has either coordinates or an index, determine its distance from the start node and the end node.
Let's do the first tile together. The first tile is at coordinates (x:0, y:0), the start node is at (x:1, y:1), and the end node is at (x:4, y:5). For the gCost of this tile, we can use the Pythagorean theorem to calculate the distance as the hypotenuse.
```js
gCost = Math.sqrt(Math.pow(1 - 0, 2) + Math.pow(1 - 0, 2));
```
This gives us a gCost of ~1.41.
We can repeat this equation for the hCost, but it is with respect to the end node coordinates.
```js
hCost = Math.sqrt(Math.pow(4 - 0, 2) + Math.pow(5 - 0, 2));
```
This yields a hCost of ~6.40.
Knowing both, we can determine the fCost of that node or tile by adding the two together, making the fCost 7.82 with rounding.
We can repeat this process for each tile in the graph.
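Wrapped up as a helper, the costing step for one node might look like this (the node shape is illustrative, not the plugin's actual data structure):

```js
// Cost a single node on a square grid: g from start, h to end, f = g + h.
function costNode(node, start, end) {
  const dist = (a, b) => Math.sqrt(Math.pow(a.x - b.x, 2) + Math.pow(a.y - b.y, 2));
  const gCost = dist(node, start); // distance from the start node
  const hCost = dist(node, end);   // heuristic distance to the end node
  return { ...node, gCost, hCost, fCost: gCost + hCost };
}

const costed = costNode({ x: 0, y: 0 }, { x: 1, y: 1 }, { x: 4, y: 5 });
console.log(costed.fCost.toFixed(2)); // "7.82", matching the worked example
```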

Why am I using floating point values here? There's a reason: if I simply used integers, the distances wouldn't have enough resolution, creating more unoptimized iterations, as the number of cells with equal fCosts would increase. With floats, the fCosts are more distinct, and we reduce the iterations. Simply put, if every fCost between 5.02 and 5.98 is represented as the integer 5, it muddies how the algorithm moves through and prioritizes the 'next' cell to visit. With floating points, this is explicit. Being a grid, all the distances are simple hypotenuse calculations using the Pythagorean theorem.
Before we jump into the overall repetitive loop, we will add the startnode into our list of opennodes.
Now the algorithm can start to be repetitive. We set the startnode to the current node, and move it from open to checked lists.
We first check if our current node is the end node, which it is not, so we proceed.
The next step is to select the lowest-fCost node, and since the starting node is the only node in the open list, it gets selected; otherwise we would select randomly among the lowest-fCost nodes in the open node list. Now we look at all the neighbors. I will designate pale yellow as our 'open node' list. We will use different colors for 'checked'.

None are in the checked list, so we add them all to the open nodes list and assign the current node as each node's parent. Note that if a node is not traversable (black), it gets ignored at this point and is not added to the list.
This then repeats as long as nodes remain in the open node list; if we run out of open nodes without hitting the end node, there's no path. When we hit the end node, we build our return list by looping back through the parent nodes: starting at the end node, it will have a parent, that parent will have a parent... and so on until you hit the start node.
Let's walk through the example. Let's pick a tile with lowest f cost. As we select new 'current' nodes, we move that node to our checked list so it no longer is in the open node pool.

The lowest cost is 5.02, so we select that node and grab its neighbors. Along the way we assign parent nodes and add the new neighbors to the openNodes list.

...but we keep selecting the lowest-cost node (an fCost of 5.06 is now the lowest to this point); we add its neighbors to openNodes and assign them parent nodes...

.. the next iteration, the fCost of 5.24 is now lowest, so it gets 'checked', and we grab its neighbors, assign parents..

.. the next iteration, there are two nodes of 5.4 cost, so let's see how this CAN play out, and the algorithm starts to make sense at this point.
Let's pick the high road...

The new neighbors are assigned parents, and are added to the overall list of open nodes to assess. Which is the new lowest fCost now? 5.4 is still the lowest fCost.

Yes, the algorithm went back to the other path and found a better next 'current' node in the list of open nodes. The process is almost complete. The next lowest fCost is 5.47, and there is more than one node with that value, so for the sake of being a completionist...

Still the lowest fCost is 5.47, so we select the next node, grab neighbors, and assign parents... One thing I did differently in this table is showing the fCost of the ending node, which until now wasn't necessary. Showing it here lets one understand how the overall algorithm loops, because the end node HAS to be selected as the next lowest-cost node: the check for the end node is at the beginning of the iteration, not in the neighbor evaluation. So in this next loop, I don't make it yellow, but the end node has now been placed AS A NEIGHBOR into the list of open nodes for evaluation.

We now have our path, because on the next iteration, the first thing we'll do is pick the node with the lowest fCost (5.0), make it the current tile, and then test whether it is the end node, which is now true.
We can return its path by walking back through all the parent node properties and see how we got there along the way.
## The test

[Link to Demo](https://excaliburjs.com/sample-pathfinding/)
[Link to Github Project](https://github.com/excaliburjs/sample-pathfinding)
The demo is a simple example of using an Excalibur Tilemap and the pathfinding plugin. When the player clicks a tile that does NOT have a tree on it, the selected pathfinding algorithm is used to calculate the path. Displayed in the demo are the number of tiles to traverse and the overall duration of the calculation.
Also included is the ability to add diagonal traversals in the graph, which simply modifies the created graph with extra edges. Please note, diagonal traversal is slightly more expensive than straight up/down, left/right traversal.
## Why Excalibur
Small Plug...

[ExcaliburJS](https://excaliburjs.com/) is a friendly, TypeScript 2D game engine that can produce games for the web. It is free and open source (FOSS), well documented, and has a growing, healthy community of gamedevs working with it and supporting each other. There is a great discord channel for it [HERE](https://discord.gg/ScX52wD4eM), for questions and inquiries. Check it out!!!
You can also find it on [GitHub](https://github.com/excaliburjs/Excalibur).
## Conclusion
For this article, we briefly reviewed the history of the A\* algorithm, walked through the steps of the algorithm, and then applied it to an example graph network.
I have found this algorithm to be faster than Dijkstra's algorithm, but it can be tricky if you're not using a nice grid layout. The trick comes in the 'guessing' heuristic for the distance between the current node and the end node (hCost). If you're using a grid, you can use the coordinates of each node and calculate the hypotenuse as the hCost. If it is an unorganized, non-standard-shaped graph network, this becomes trickier. For the moment, in the library I created, I am limiting A\* to grid-based tilemaps to keep this much simpler. If the grid is not simple, I use Dijkstra's algorithm.
| jyoung4242 |
1,865,296 | Mutable vs Immutable in Kotlin: Why Immutability Matters | Understanding the difference between mutable and immutable can significantly impact your code's... | 0 | 2024-05-26T02:05:40 | https://dev.to/phatvoong296/kotlin-20-5b1g | Understanding the difference between mutable and immutable can significantly impact your code's reliability, readability, and maintainability. Here's a deep dive into these concepts, with examples to illustrate why immutability is often the preferred choice.
**Mutable vs Immutable: The Basics:**
- **Mutable**: Refers to anything whose state or data can be changed after it is created. This can apply to variables, collections, and objects.
- **Immutable**: Refers to anything whose state or data cannot be changed once it is created. This immutability applies to variables, collections, and objects.
Example:
```kotlin
// mutable
var mutableList = mutableListOf("A", "B", "C")
mutableList.add("D")
println(mutableList) // Output: [A, B, C, D]
```
There are two mutable aspects here. First, the `var` keyword allows the list to be reassigned to another value. Second, `mutableListOf` allows the list to be modified after initialization.
```kotlin
// immutable
val immutableList = listOf("A", "B", "C")
// immutableList.add("D") // This will not compile
println(immutableList) // Output: [A, B, C]
```
**Why Mutable is Bad?**
**1- Thread Safety Issues:**
Mutable objects can lead to concurrent modification problems in a multi-threaded environment. This can cause unpredictable behavior and hard-to-find bugs.
```kotlin
var sharedList = mutableListOf<Int>()
// Thread 1
Thread {
for (i in 1..1000) {
sharedList.add(i)
}
}.start()
// Thread 2
Thread {
for (i in 1..1000) {
sharedList.add(i * 10)
}
}.start()
// Result may vary and can lead to data corruption
```
**2- Unexpected Side Effects - Unpredictable Code:**
Mutable objects can be changed from different parts of a program, leading to unexpected side effects and bugs that are difficult to trace.
```kotlin
fun modifyList(list: MutableList<String>) {
list.add("D")
}
val list = mutableListOf("A", "B", "C")
modifyList(list)
println(list) // Output: [A, B, C, D]
```
Since the list is mutable, its values can change unpredictably. To understand all the modifications made to the list, you must trace the code from its initialization to every point where it is used.
**3- Ease of Testing:**
Immutable objects simplify testing because their state cannot change. This eliminates side effects and makes it easier to write reliable unit tests.
```kotlin
fun addElementToList(list: List<String>, element: String): List<String> {
return list + element
}
val originalList = listOf("A", "B", "C")
val newList = addElementToList(originalList, "D")
assert(originalList == listOf("A", "B", "C"))
assert(newList == listOf("A", "B", "C", "D"))
```
**Conclusion**
In Kotlin, prefer using `val` over `var` and immutable collections like `List` instead of `MutableList` whenever possible. This simple practice can lead to significant improvements in your code quality.
While mutable objects have their place, especially in scenarios requiring frequent updates, immutability often leads to safer, more maintainable, and predictable code.
What's ahead: Understanding immutability sets the stage for the next things - **Declarative code** and **Functional programming**.
| phatvoong296 | |
1,865,295 | Cyanic Job Book - Project management software for land surveyors done right | Cyanic Job Book is the leading project management solution for land surveyors and field service... | 0 | 2024-05-26T02:04:51 | https://dev.to/cyanic-job-book/cyanic-job-book-project-management-software-for-land-surveyors-done-right-5e7b | invoicing, scheduling, timesheet, maps | Cyanic Job Book is the leading project management solution for land surveyors and field service companies. It offers real-time monitoring of projects, faster invoicing, and improved error reduction and staff accountability. The software features a comprehensive job and client database, and makes it easy to search past jobs by legal address and show them on a map. It provides robust work-in-progress reporting tools and timesheet management, making payroll and invoicing easy. Additionally, it has a tasking and scheduling system to easily schedule and coordinate project tasks across teams in the field and the office.
[https://getjobbook.com](https://getjobbook.com) | cyanic-job-book |
1,865,292 | Exploring Web Development: Python + Django | The more I have learned about full-stack development with stacks like MERN, the more curious I have... | 0 | 2024-05-26T01:51:56 | https://dev.to/alexphebert2000/exploring-web-development-python-django-acb | The more I have learned about full-stack development with stacks like MERN, the more curious I have become about how other tech stacks compare, how they look, and how hard would it be for me to learn them now that I have a foundational understanding of full-stack development. So I decided to challenge myself over the next 6 weeks: to learn 3 new tech stacks and write a todo app with each one. After 2 weeks of learning the tech and writing the app I will log my experience here, give a brief overview of what I learned, and share my opinions on working with the technologies.
The first tech stack I chose is Django. I chose Django for 2 main reasons:
1. I already know a decent amount of python from my college projects
2. Python is one of the most popular languages right now and freshening up a bit on my Python knowledge seemed like a pretty good use of my time
Excited to dive back into python after being fully immersed in JavaScript, I dove in head first.
# Django Basics
## Welcome to Django
Django is a python web-app framework that focuses on development speed. Their homepage says: "Django was designed to help developers take their applications from concept to completion as quickly as possible," and I feel that they are definitely succeeding in that goal. Developing with Django felt incredibly straight-forward and it did not take much time at all for me to get from an empty directory to basic functionality.
## Setting up the app
Django makes setting up your workspace super easy. Once you're in your project directory, run `django-admin startproject todo_project` to start a new project called todo_project (note that Django project names can't contain hyphens). Django will make a new directory called todo_project with this file structure:

`manage.py` is a key part of Django; running it is how we will make changes to our database, start our server, and do most other interactions with our app. `__init__.py` is an empty file that's used by Python to denote the directory as a Python package. For the time being we won't worry about settings, asgi, or wsgi; instead we should start our first app. In Django, projects are collections of apps, and apps are the functional components. Apps are like modules that can be implemented into many projects, like web app Legos!
To start our app we can use `manage.py` to set up our app skeleton with `python manage.py startapp todo`, which will generate this directory:

After our app is set up by Django we need to add it to our project. To do this we need to edit the `INSTALLED_APPS` array in the `settings.py` in the todo_project directory.
```py
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'todo'
]
```
Next we need to add a URL for the project to use the todo app. In `urls.py`, add a new path to the end of the `urlpatterns` array.
```py
from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('admin/', admin.site.urls),
path('todos/', include('todo.urls'))
]
```
And just like that our app is integrated into our project! We can now start working on our functionality.
## Building out the App
To get something rendering we need to add a view to our app. In `views.py` we need to `from django.http import HttpResponse`. This will allow us to render a string to our page. All views take in a request and return some response. This response could be
a simple string like `return HttpResponse('This is where I would put my todo... If I had one!')`, or it can respond with HTML via Django's render shortcut. For now let's send back that string by adding this to `views.py`:
```py
from django.http import HttpResponse
def todo_render(req):
return HttpResponse('This is where I would put my todo... If I had one!')
```
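For comparison, the render-shortcut version of the same view might look like the sketch below. The template path is hypothetical (it assumes a template at `todo/templates/todo/index.html`), and this fragment only runs inside a Django project:

```py
from django.shortcuts import render

def todo_render(req):
    # Renders a template instead of a raw string; the context dict
    # becomes variables available inside the template.
    return render(req, 'todo/index.html', {'todos': []})
```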
Now to actually access this view we need to set up an endpoint to use it. We can start by creating a new file in the `todo` directory called `urls.py`. This file is similar to the `urls.py` in the project directory but these endpoints will be added to the end of /todos. We can set up the view we just made as the default view for /todos by adding this to `urls.py` in todo:
```py
from django.urls import path
from . import views
app_name = "todos"
urlpatterns = [
path("", views.todo_render, name="index"),
]
```
And just like that we have an incredibly simple but functional app with Django! Now let's go check out all our hard work.
## Running the app
To start our server and view our site in the browser, all we need to do is run `python manage.py runserver` and navigate to `localhost:8000/todos/`. That's where this little tutorial ends, but if this interests you at all, you should definitely go to the [Django Docs](https://docs.djangoproject.com/en/5.0/intro/tutorial01/) and follow their tutorial, where they walk through building a polls app.
# My Thoughts
I found working with Django to be an extremely streamlined process. Django handled a lot of the tedium of set up and lets you get right into the coding. This is both a blessing and a curse. I like to know how things are working behind the scenes and the degree of abstraction Django uses to get the development as quick as possible feels very hand wavy. I walked away from making the app knowing how to get all the code in the right places but not sure why or how the code works, which is a feeling I am not a big fan of. It reminds me of this gem from 2015. {% embed https://tenor.com/view/todd-howard-it-just-works-bethesda-this-all-just-works-gif-20598651 %}
With that said, I can see myself learning more and using Django in the future to quickly get an idea into practice. One of the things I really like about Django is the included admin view that allows you to interact with your app in an easy-to-use GUI. I also like the HTML templating, which I find very straightforward and easy to pick up. Django is a well-oiled machine and makes web development much more accessible by removing the most intimidating part: setup.
# Resources
I relied on the [Django Docs](https://docs.djangoproject.com/en/5.0/intro/tutorial01/) to learn. The documentation is great and the tutorial covers everything you need for the basics.
I hope you try out Django, it is a powerful tool to have in your back pocket if you need an idea in code quickly. Have fun and good luck! | alexphebert2000 | |
1,865,291 | 🚀 Exploring Front-End Development with HTML, CSS, and React.js! 🌟 | Hey, #COdeWithKOToka! Ready to dive into the world of front-end development? Whether you're a... | 0 | 2024-05-26T01:36:31 | https://dev.to/erasmuskotoka/exploring-front-end-development-with-html-css-and-reactjs-4h7k |
Hey, #COdeWithKOToka! Ready to dive into the world of front-end development? Whether you're a beginner or looking to refresh your skills, here’s a quick guide to get you started!
🔹 HTML: The backbone of any website. It structures the content and gives meaning to your web pages. Think of it as the skeleton that holds everything together!
🔹 CSS: The artist of the web. CSS brings style, colors, and layout to your HTML, making your website look stunning and user-friendly. It's like the clothes your website wears!
🔹 React.js: The powerhouse for building dynamic and interactive UIs. Developed by Facebook, React helps you create reusable components, making your code cleaner and more efficient. It's the magic behind those slick, modern web apps!
💡 Why Learn These?
1. Career Boost: Front-end development skills are in high demand.
2. Creative Freedom: Bring your ideas to life with beautiful and functional designs.
3. Community: Join a vibrant community of developers, designers, and creators.
🔥 Pro Tip: Start small! Build a personal project, like a portfolio website.
#KEEPcodiNG | erasmuskotoka | |
1,865,280 | Why I came over to dev.to | Since starting my career as a software engineer, I've also started to write development blog posts,... | 0 | 2024-05-26T01:35:32 | https://dev.to/kination/why-i-came-over-to-devto-3kdm | programming, productivity, development | Since starting my career as a software engineer, I've also started to write development blog posts, to remind what I've done during work, and also thought it could be helpful when I decide to look over better opportunity.
Now more than 10 years has been passed. And during that time, I've changed my blogging platform several times(Blogger, Wordpress, Medium, GitHub Pages, and more), for several reasons.
Until last year, I was using `GitHub Pages` as my blog platform. Well, it looks something cool as a developer, it can make design template as you want(as long as you have time and skill), and no limitation on registering Adsense.
But recently I've thought about why I'm trying to write posts spending my time. There are some profits from Adsense, but it is very small, enough to ignore if there's other reason. And now, I'm starting to write my next post in `dev.to`. Here's the reason.
## Biggest reason: Deep dive into community
`dev.to` seems to have a vibrant and engaged community. On this platform, interaction with readers and other writers feels more genuine and collaborative. The platform fosters an environment where feedback is constructive and discussions are meaningful. This ambience makes it an ideal place for continuous learning and growth, and it motivates like-minded people to join and communicate. This opportunity to build relationships is invaluable, and hard to find in other places.
## Comfortable to write technical posts
`dev.to` is specifically designed with developers in mind. It excels in supporting technical content, making it easy to share code snippets and tutorials. The platform's markdown support is robust, and it integrates well with GitHub gists, making it a breeze to include code in your posts. Additionally, the audience here is more likely to be familiar with technical concepts, which means your content reaches readers who can fully appreciate and understand it.
### Focus on writing
Because of the comfort I've described above, you can just focus on writing the things you want, without thinking about installation or settings for your technical snippets.
## Open Source Ethos
`dev.to` is built with an open-source mindset, which aligns with the values of many developers, including myself. The platform itself is open-source, meaning that anyone can contribute to its improvement. This transparency and inclusivity resonate with the principles of sharing knowledge and collaborating on projects that are so integral to the tech community.
## And now...
These are the features that made me start over with my stories on `dev.to`. Well, I'm just at the starting line, and maybe I'll find several disadvantages later.
But till then, I'm planning to write about my knowledge, challenges during work, and projects I'm working on.
| kination |
1,865,290 | JavaScript Promises: Explaining then & catch to a 5 year old. | 1. Promise.catch() is not try{}...catch(){}. The .catch() method of promise is just... | 0 | 2024-05-26T01:31:28 | https://dev.to/geny/javascript-promises-explaining-then-catch-to-a-5-year-old-3agc | javascript, beginners, tutorial, learning | ## 1. Promise.catch() is not try{}...catch(){}.
The .catch() method of promise is just .then(void 0, onRejected). It may seem like it "catches" some errors, but that is just because of the special handling of .then() logic.
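A quick sketch of that equivalence (my addition, not from the original post): attaching a rejection handler via `.catch(fn)` and via `.then(void 0, fn)` delivers the same rejection.

```javascript
// Both handlers below receive the same rejection from the same promise.
var p = Promise.reject(new Error('nope'));

p.catch(function(e){ console.log('via catch:', e.message); });
p.then(void 0, function(e){ console.log('via then:', e.message); });
// via catch: nope
// via then: nope
```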
## 2. Promise.then()
A confusing quirk of .then(onFulfilled, onRejected) is that if **EITHER** of the callbacks throws an error, its promise will be rejected with that error. For example:
```js
var promise = new Promise(function(resolve, reject){
setTimeout(reject, 1000);
});
promise
.then(function(){
// onFulfilled
}, function(){
// onRejected
throw new Error('oops rejected');
})
.then(void 0, function(e){ // aka .catch
console.log('caught', e);
});
// caught Error: oops rejected
```
or
```js
var promise = new Promise(function(resolve, reject){
setTimeout(resolve, 1000);
});
promise
.then(function(){
// onFulfilled
throw new Error('oops fulfilled');
}, function(){
// onRejected
})
.then(void 0, function(e){ // aka .catch
console.log('caught', e);
});
// caught Error: oops fulfilled
```
In both cases, the error will be **caught by the first then()**, and passed as a rejection to the 2nd (our catch-then).
## 3. Chaining: Unless your then() callback returns a Promise, it will be immediately resolved.
When you chain promises
`promise.then(cb1).then(cb2)...`
If cb1 returns a promise, cb2 will wait for it. If cb1 returns any other value (or nothing), the next promise is resolved immediately.
If cb1 throws an error, it will be caught by the then() logic and passed to the onRejected of the 2nd then(). If we don't specify one, the **default onRejected** will be used, which just re-throws the error. That will be caught again by then(), and passed further down. Example:
```js
var promise = new Promise(function(resolve, reject){
setTimeout(resolve, 1000);
});
promise
.then(function(){
throw new Error('boom');
})
.then(function(){
// we never get here, default onRejected will be called, throw an error, catch it, and pass it further down
})
.then(function(){
// we never get here, default onRejected will be called, throw an error, catch it, and pass it further down
})
.then(function(){
// we never get here, default onRejected will be called, throw an error, catch it, and pass it further down
})
.then(function(){
// we never get here, default onRejected will be called, throw an error, catch it, and pass it further down
})
.then(void 0, function(e){
console.log('caught', e);
});
// caught Error: boom
```
Be aware that this catching mechanic is special to then(); for example, this won't work:
```js
var promise = new Promise(function(resolve, reject){
setTimeout(function(){
throw new Error('wont-work');
}, 1000);
});
promise
.then(void 0, function(e){
console.log('caught', e);
});
// Uncaught Error: wont-work
```
To illustrate, if we use a **custom onRejected** and do not throw an error, we can recover the chain (which may, or may not be, what you want):
```js
var promise = new Promise(function(resolve, reject){
setTimeout(resolve, 1000);
});
promise
.then(function(){
throw new Error('boom');
})
.then(function(){
}, function(e){
console.log('interesting, we caught an error', e); // since we don't throw a new one, the chain will recover
})
.then(function(){
console.log('recover')
})
.then(void 0, function(e){
console.log('caught', e);
});
// interesting, we caught an error Error: boom
// recover
``` | geny |
1,853,753 | TASK 15 | Q1 SELENIUM IDE: Selenium IDE is a browser extension used for Recording, Editing, and playing back... | 0 | 2024-05-15T09:46:56 | https://dev.to/subash129/task-15-1d71 | testing, beginners, learning, css | **Q1**
**SELENIUM IDE:**
1. Selenium IDE is a browser extension used for recording, editing, and playing back automated tests in a web browser.
2. Selenium IDE provides a user-friendly interface for creating automated test scripts without the need for programming knowledge. Users can record their interactions with a web application and generate test scripts in various programming languages supported by Selenium WebDriver.
3. Selenium IDE is primarily used for creating simple test cases and performing quick validations. It lacks advanced features for complex test scenarios and does not support dynamic test data generation or external data source integration.
4. Selenium IDE is suitable for beginners, manual testers, and quick test prototyping. It is often used for creating basic regression tests and performing ad-hoc testing during development.
**SELENIUM WEBDRIVER:**
1. Selenium WebDriver is a powerful automation tool used for writing and executing automated tests for web applications.
It provides a programming interface that allows testers to write code in various programming languages (such as Java, Python, C#, etc.) to interact with web elements, simulate user actions, and perform assertions. WebDriver supports advanced features such as handling dynamic elements, waits, alerts, pop-ups, and browser navigation.
2. WebDriver offers flexibility and control over test execution, allowing testers to create complex test scenarios, implement data-driven testing, integrate with testing frameworks, and execute tests in parallel across multiple browsers and environments.
3. WebDriver is scalable and suitable for testing web applications of all sizes, from simple websites to complex enterprise applications. It supports a wide range of browsers and platforms, including desktop and mobile browsers.
4. Selenium WebDriver is widely used by automation testers, developers, and quality assurance professionals for writing robust and maintainable automated tests, implementing continuous testing, and ensuring the quality of web applications.
**Selenium Grid:**
1. Selenium Grid is a distributed test execution environment used for running tests in parallel across multiple machines and browsers simultaneously.
2. It consists of a hub and multiple nodes, where the hub acts as a central control point that distributes test execution requests to available nodes. Selenium Grid facilitates cross-browser testing, scalability, and faster test execution by leveraging multiple machines and browsers.
3. Selenium Grid allows testers to execute tests in parallel across different browsers, browser versions, and operating systems, reducing test execution time and improving efficiency.
4. Selenium Grid is suitable for teams and organizations that require cross-browser testing, scalability, and faster feedback on the quality of web applications. It is often used in conjunction with Selenium WebDriver to scale test automation efforts, perform cross-browser testing, and execute tests in a distributed environment.
**Q2, Q5**
https://github.com/Subashk129/Task15.git
**Q3**
Selenium is a popular open-source tool used for automating web browsers. It provides a suite of tools and libraries for automating web applications across different browsers and platforms. Selenium supports multiple programming languages, including Java, Python, C#, Ruby, and JavaScript, making it versatile and widely adopted in the software industry.
**Selenium is useful in automation testing for several reasons:**
1. Cross-Browser Testing: Selenium allows testers to write automated tests that can run across different web browsers, including Chrome, Firefox, Safari, Internet Explorer, and Edge. This ensures that web applications behave consistently across various browser environments.
2. Platform Independence: Selenium WebDriver, the most commonly used component of Selenium, provides a platform-independent API that can interact with web browsers on different operating systems, such as Windows, macOS, and Linux. This enables testers to write tests once and run them on multiple platforms.
3. Automated Testing: Selenium automates interactions with web elements such as buttons, text fields, dropdowns, links, and more. Testers can simulate user actions like clicking, typing, submitting forms, navigating between pages, and validating page content without manual intervention.
4. Regression Testing: Selenium is well-suited for regression testing, where tests are repeatedly executed to ensure that recent changes to the codebase do not introduce new bugs or regressions. Automated regression testing with Selenium helps maintain software quality and stability over time.
5. Parallel Testing: Selenium Grid allows testers to execute tests in parallel across multiple browser and operating system combinations. Parallel testing improves test execution speed, reduces testing time, and enables efficient use of testing resources.
6. Integration with Testing Frameworks: Selenium integrates seamlessly with popular testing frameworks such as JUnit, TestNG, NUnit, and PyTest, enabling testers to organize and execute tests, generate test reports, and manage test suites effectively.
7. Continuous Integration and Delivery (CI/CD): Selenium tests can be integrated into CI/CD pipelines to automate the testing process as part of the software development lifecycle. Continuous testing with Selenium helps identify defects early, validate changes quickly, and deliver high-quality software releases more frequently.
**Q4**
**1. ChromeDriver:** ChromeDriver is used to automate the Google Chrome browser. It provides a WebDriver implementation for Chrome and allows Selenium tests to interact with Chrome browser instances.
**2. GeckoDriver (Firefox):** GeckoDriver is used to automate the Mozilla Firefox browser. It provides a WebDriver implementation for Firefox and enables Selenium tests to interact with Firefox browser instances.
**3. WebDriver for Microsoft Edge:** Microsoft WebDriver is used to automate Microsoft Edge browser. It provides a WebDriver implementation for Edge and allows Selenium tests to interact with Edge browser instances.
**4. InternetExplorerDriver:** InternetExplorerDriver is used to automate Internet Explorer browser. It provides a WebDriver implementation for Internet Explorer and allows Selenium tests to interact with IE browser instances.
**5. Safari Driver:** Safari Driver is used to automate Safari browser on macOS. It provides a WebDriver implementation for Safari and allows Selenium tests to interact with Safari browser instances.
**6. Opera Driver:** Opera Driver is used to automate Opera browser. It provides a WebDriver implementation for Opera and allows Selenium tests to interact with Opera browser instances.
**7. Edge Chromium Driver:** Microsoft introduced Edge Chromium Driver to automate the Chromium-based Microsoft Edge browser. It is different from the WebDriver for the legacy version of Microsoft Edge.
**Q5**
1. Set Up the Environment: Download the necessary WebDriver executables for the browsers you intend to automate (e.g., ChromeDriver for Google Chrome, GeckoDriver for Firefox). Ensure that the WebDriver executables are added to the system PATH, or specify the path to the WebDriver executable in your script.
2. Create a WebDriver Instance: Instantiate a WebDriver object for the desired browser (e.g., ChromeDriver, GeckoDriver) to establish a connection with the browser.
3. Navigate to a Web Page: Use the WebDriver instance to navigate to a specific URL or web page.
4. Interact with Web Elements: Locate and interact with web elements (buttons, text fields, links) using various methods provided by the WebDriver API (findElement, sendKeys, click).
5. Perform Actions: Perform actions such as clicking on elements, entering text, submitting forms, navigating between pages, etc.
6. Verify Results: Verify the expected outcomes by validating the content, behavior, or state of web elements on the page.
7. Close the Browser: Close the browser window or quit the WebDriver session to release system resources.
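The seven steps above can be sketched as a small Python helper (a hypothetical example of mine, not part of the task: it assumes the `selenium` package is installed, ChromeDriver is on the PATH, and a `driver` has been created with `webdriver.Chrome()`; the `"name"`/`"q"` locator is purely illustrative):

```python
def search_and_verify(driver, url, query):
    """Steps 3-6: navigate, interact with elements, perform actions, verify."""
    driver.get(url)                               # step 3: navigate to a web page
    box = driver.find_element("name", "q")        # step 4: locate the search box
    box.send_keys(query)                          # step 5: enter text
    box.submit()                                  # step 5: submit the form
    return query.lower() in driver.title.lower()  # step 6: verify the result
```

Steps 1-2 and 7 wrap the call: create the driver beforehand (`driver = webdriver.Chrome()`) and release resources afterwards with `driver.quit()` in a `finally` block.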
| subash129 |
1,865,288 | Check PyTorch version, CPU and GPU(CUDA) in PyTorch | *Memos: My post explains how to create a tensor. My post explains how to access a tensor. My... | 0 | 2024-05-26T01:22:20 | https://dev.to/hyperkai/check-pytorch-version-cpu-and-gpucuda-in-pytorch-6jk | pytorch, version, cpu, gpu | *Memos:
- [My post](https://dev.to/hyperkai/create-a-tensor-in-pytorch-127g) explains how to create a tensor.
- [My post](https://dev.to/hyperkai/access-a-tensor-in-pytorch-1f4e) explains how to access a tensor.
- [My post](https://dev.to/hyperkai/istensor-numel-and-device-in-pytorch-2eha) explains [is_tensor()](https://pytorch.org/docs/stable/generated/torch.is_tensor.html), [numel()](https://pytorch.org/docs/stable/generated/torch.numel.html) and [device()](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device).
- [My post](https://dev.to/hyperkai/type-conversion-with-type-to-and-a-tensor-in-pytorch-2a0g) explains type conversion with [type()](https://pytorch.org/docs/stable/generated/torch.Tensor.type.html), [to()](https://pytorch.org/docs/stable/generated/torch.Tensor.to.html) and a tensor.
- [My post](https://dev.to/hyperkai/type-promotion-resulttype-promotetypes-and-cancast-in-pytorch-33p8) explains type promotion, [result_type()](https://pytorch.org/docs/stable/generated/torch.result_type.html), [promote_types()](https://pytorch.org/docs/stable/generated/torch.promote_types.html) and [can_cast()](https://pytorch.org/docs/stable/generated/torch.can_cast.html).
- [My post](https://dev.to/hyperkai/device-conversion-fromnumpy-and-numpy-in-pytorch-1iih) explains device conversion, [from_numpy()](https://pytorch.org/docs/stable/generated/torch.from_numpy.html) and [numpy()](https://pytorch.org/docs/stable/generated/torch.Tensor.numpy.html).
- [My post](https://dev.to/hyperkai/setdefaultdtype-setdefaultdevice-and-setprintoptions-in-pytorch-55g8) explains [set_default_dtype()](https://pytorch.org/docs/stable/generated/torch.set_default_dtype.html), [set_default_device()](https://pytorch.org/docs/stable/generated/torch.set_default_device.html) and [set_printoptions()](https://pytorch.org/docs/stable/generated/torch.set_printoptions.html).
- [My post](https://dev.to/hyperkai/manualseed-initialseed-and-seed-in-pytorch-5gm8) explains [manual_seed()](https://pytorch.org/docs/stable/generated/torch.manual_seed.html), [initial_seed()](https://pytorch.org/docs/stable/generated/torch.initial_seed.html) and [seed()](https://pytorch.org/docs/stable/generated/torch.seed.html).
`__version__` can check PyTorch version as shown below. *`__version__` can be used with [torch](https://pytorch.org/docs/stable/torch.html) but not with a tensor:
```python
import torch
torch.__version__ # 2.2.1+cu121
```
[cpu.is_available()](https://pytorch.org/docs/stable/generated/torch.cpu.is_available.html), [cpu.device_count()](https://pytorch.org/docs/stable/generated/torch.cpu.device_count.html) or [cpu.current_device()](https://pytorch.org/docs/stable/generated/torch.cpu.current_device.html) can check if CPU is available, getting a scalar as shown below:
*Memos:
- `cpu.is_available()`, `cpu.device_count()` or `cpu.current_device()` can be used with [torch](https://pytorch.org/docs/stable/torch.html) but not with a tensor.
- `cpu.device_count()` can get the number of CPUs. *It always gets `1`:
- `cpu.current_device()` can get the index of a currently selected CPU. *It always gets `cpu`:
```python
import torch
torch.cpu.is_available() # True
torch.cpu.device_count() # 1
torch.cpu.current_device() # cpu
```
[cuda.is_available()](https://pytorch.org/docs/stable/generated/torch.cuda.is_available.html) or [cuda.device_count()](https://pytorch.org/docs/stable/generated/torch.cuda.device_count.html) can check if GPU(CUDA) is available, getting a scalar as shown below:
*Memos:
- `cuda.is_available()` or `cuda.device_count()` can be used with `torch` but not with a tensor.
- `cuda.device_count()` can get the number of GPUs.
```python
import torch
torch.cuda.is_available() # True
torch.cuda.device_count() # 1
```
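A common device-agnostic pattern builds directly on `cuda.is_available()` (my sketch, not from the memos above): select the GPU when one is present, otherwise fall back to the CPU, and create tensors on that device.

```python
import torch

# Pick CUDA when available, otherwise fall back to CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x = torch.tensor([1.0, 2.0, 3.0], device=device)
print(x.device.type)  # 'cuda' on a GPU machine, 'cpu' otherwise
```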
In addition, you can use [cuda.current_device()](https://pytorch.org/docs/stable/generated/torch.cuda.current_device.html), [cuda.get_device_name()](https://pytorch.org/docs/stable/generated/torch.cuda.get_device_name.html) or [cuda.get_device_properties()](https://pytorch.org/docs/stable/generated/torch.cuda.get_device_properties.html), getting a scalar as shown below:
*Memos:
- `cuda.current_device()`, `cuda.get_device_name()` or `cuda.get_device_properties()` can be used with `torch` but not with a tensor.
- `cuda.current_device()` can get the index of a currently selected GPU.
- `cuda.get_device_name()` can get the name of a GPU.
*Memos:
- The 1st argument with `torch` is `device` (Optional-Type: `str`, `int` or [device()](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device)).
- If `device` is not given, the `device` of `cuda.current_device()` is used.
- The number must be zero or positive.
- Only `cuda` can be set to `device`.
- [My post](https://dev.to/hyperkai/istensor-numel-and-device-in-pytorch-2eha) explains `device()`.
- `cuda.get_device_properties()` can get the properties of a GPU.
*Memos:
- The 1st argument with `torch` is `device`(Required-Type:`str`, `int` or [device()](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device)).
- The number must be zero or positive.
- Only `cuda` can be set to `device`.
- [My post](https://dev.to/hyperkai/istensor-numel-and-device-in-pytorch-2eha) explains `device()`.
```python
torch.cuda.current_device() # 0
torch.cuda.get_device_name()
torch.cuda.get_device_name(device='cuda:0')
torch.cuda.get_device_name(device='cuda')
torch.cuda.get_device_name(device=0)
torch.cuda.get_device_name(device=torch.device(device='cuda:0'))
torch.cuda.get_device_name(device=torch.device(device='cuda'))
torch.cuda.get_device_name(device=torch.device(device=0))
torch.cuda.get_device_name(device=torch.device(type='cuda'))
torch.cuda.get_device_name(device=torch.device(type='cuda', index=0))
# Tesla T4
torch.cuda.get_device_properties(device='cuda:0')
torch.cuda.get_device_properties(device='cuda')
torch.cuda.get_device_properties(device=0)
torch.cuda.get_device_properties(device=torch.device(device='cuda:0'))
torch.cuda.get_device_properties(device=torch.device(device='cuda'))
torch.cuda.get_device_properties(device=torch.device(device=0))
torch.cuda.get_device_properties(device=torch.device(type='cuda'))
torch.cuda.get_device_properties(device=torch.device(type='cuda', index=0))
# _CudaDeviceProperties(name='Tesla T4', major=7, minor=5,
# total_memory=15102MB, multi_processor_count=40)
```
[!nvidia-smi](https://developer.nvidia.com/system-management-interface#:~:text=The%20NVIDIA%20System%20Management%20Interface,monitoring%20of%20NVIDIA%20GPU%20devices.) can get the information about GPUs as shown below:
```python
!nvidia-smi
Wed May 15 13:18:15 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05 Driver Version: 535.104.05 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 56C P0 28W / 70W | 105MiB / 15360MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
+---------------------------------------------------------------------------------------+
``` | hyperkai |
1,865,287 | Creating my portfolio my way | Text Example. | 0 | 2024-05-26T01:13:22 | https://dev.to/thekrmichaels/creating-my-portfolio-my-way-253e | Text Example. | thekrmichaels | |
1,865,285 | First post! | This is my first post. I 'm just starting again in the programming world and I wanna learn more and... | 0 | 2024-05-26T01:01:19 | https://dev.to/oscarvege/first-post-5574 | android, kotlin, java | This is my first post. I 'm just starting again in the programming world and I wanna learn more and more. | oscarvege |
1,865,284 | I made a simple JavaScript to Python Converter with AI | Hey everyone! It's been a while since I've made a post. Mostly, I've been busy with random projects... | 0 | 2024-05-26T00:59:32 | https://dev.to/best_codes/i-made-a-simple-javascript-to-python-converter-with-ai-3069 | javascript, python, ai, webdev | Hey everyone! It's been a while since I've made a post.
Mostly, I've been busy with random projects — analytics tools, AI stuff, and random bots…
Finally, I decided to make a side project, so I have something to post about. 😀
First things first — if you just clicked on this post so you could see my project, here's the link:
[https://ai-code-converter-best.vercel.app/convert/js-to-python](https://the-best-codes.github.io/BC_LATS/?url=https://ai-code-converter-best.vercel.app/convert/js-to-python)
## About the project
First, I needed a hosting provider. That wasn't hard to find! I just used [Vercel](https://the-best-codes.github.io/BC_LATS/?url=https://vercel.com/home), a free website for deploying React pages, APIs, serverless functions, and more. I chose React with Next.js for my frameworks.
Making this project was, in fact, very easy. The hardest part was the code editors. Textareas just look so boring! I wanted some code syntax highlighting, formatting features, etc.
I considered making something myself using Prism JS, but that got complicated. Instead, I just used Monaco editor for React, which is what VS Code is built on. If you haven't seen that, be sure to check it out!
[https://microsoft.github.io/monaco-editor/](https://the-best-codes.github.io/BC_LATS/?url=https://microsoft.github.io/monaco-editor/)
Ugly textarea:

🔥 Awesome Monaco Editor 🔥

Well, after creating the code editors, the rest was a breeze. Of course, I can't have bots using my website, so you will have to complete a rather annoying bot challenge every time you convert a code. I picked [hCaptcha](https://the-best-codes.github.io/BC_LATS/?url=https://www.hcaptcha.com/), which was super easy to set up and use, plus it was free!
Then, I made a basic page layout. I'm new to React, so it took me a while to get something I liked. It's not the most mobile-friendly website in the world (sorry, phone users), but at least it works.
I had to create a simple API endpoint to handle captcha verification and API requests to my awesome AI provider, [ConvoAI](https://the-best-codes.github.io/BC_LATS/?url=https://convoai.tech/?ref=bestcodes). (ConvoAI is awesome, in case you didn't already catch that.)
Right now, the converter uses `GPT-3.5-TURBO` by OpenAI. I would use `GPT-4` (or even `GPT-4o`), but that would not be very cost-effective. 😔
The finished project looks like this:

At some point, I might make a Python to JavaScript converter (the opposite of what this one does).
Of course, this is far from perfect! There are multitudes of things that JavaScript can do which Python cannot, so not everything can be converted. Also, the AI will refuse to help you convert 'unethical' codes. I tried to convert a code that used TensorFlow to generate AI image tags for an image, but was randomly told that my code was unethical. 🤣
{% embed https://ai-code-converter-best.vercel.app/convert/js-to-python %}
## Provide feedback!
I love to know what you think about my projects. If there's a way I can improve it, or you just want to ask me about it, be sure to do so in the comments area below this post.
Thanks for reading!
_This post is 100% by me **without** AI assistance. ✨_
_Article by [BestCodes](https://the-best-codes.github.io?dev.to=js2py_1)._ | best_codes |
1,865,283 | Fixing Laravel CSS and HTML Not Displaying After Deployment | If you're working with Laravel and notice that your CSS and HTML files are not displaying correctly,... | 0 | 2024-05-26T00:56:16 | https://dev.to/japhethjoepari/resolving-css-and-html-not-displaying-in-laravel-hosted-files-2605 | webdev, html, laravel, php | If you're working with Laravel and notice that your CSS and HTML files are not displaying correctly, especially after deploying your application, you are not alone. This issue is common, particularly when transitioning from a local development environment to a production server. In this article, we'll explore the cause of this problem and provide a straightforward solution.
## Understanding the Issue
When your Laravel application is hosted, you might encounter a situation where the CSS and HTML files do not load properly. This often happens due to an incorrect URL scheme being used for your asset links. In a local environment, the application typically runs over HTTP. However, in a production environment, it often runs over HTTPS.
### Common Symptoms
- Stylesheets (CSS) not applying to your HTML.
- HTML files not rendering correctly.
- Browser console errors indicating mixed content issues.
These symptoms are usually caused by the application serving assets over HTTP while the site itself is loaded over HTTPS, resulting in blocked resources due to security restrictions.
## The Solution
The solution to this problem involves ensuring that all URLs generated by Laravel use the correct scheme (HTTP or HTTPS) based on the environment. This can be achieved by forcing HTTPS in the production environment. Here’s how you can do it:
### Step-by-Step Guide
1. **Open the App Service Provider**
Navigate to `app/Providers/AppServiceProvider.php` in your Laravel project.
2. **Modify the App Service Provider**
Update the `AppServiceProvider` class to force HTTPS in non-local environments. Here’s the code you need to add or modify:
```php
<?php

namespace App\Providers;

use Illuminate\Support\ServiceProvider;
use Illuminate\Pagination\Paginator;
use Illuminate\Support\Facades\URL;

class AppServiceProvider extends ServiceProvider
{
    /**
     * Register any application services.
     *
     * @return void
     */
    public function register()
    {
        if (env('APP_ENV') !== 'local') {
            URL::forceScheme('https');
        }
    }

    /**
     * Bootstrap any application services.
     *
     * @return void
     */
    public function boot()
    {
        // Use Bootstrap styling for pagination links
        Paginator::useBootstrap();
    }
}
```
### Explanation of the Code
- **Namespace and Imports**: Ensure you have the correct namespace and import statements at the top of your file.
- **Register Method**: In the `register` method, we check if the application environment (`APP_ENV`) is not `local`. If this condition is true, we force HTTPS using `URL::forceScheme('https');`.
- **Boot Method**: The `boot` method is where we can perform additional bootstrapping. In this example, we're using Bootstrap for pagination, but you might have other configurations here as well.
### Why This Works
By forcing the URL scheme to HTTPS in non-local environments, we ensure that all asset links are correctly generated with the HTTPS scheme. This prevents mixed content issues and ensures that your CSS and HTML files are loaded correctly.
## Additional Tips
- **Environment Configuration**: Ensure your `.env` file is correctly set up with `APP_ENV=production` for your production environment.
- **Clear Config Cache**: After making changes to your configuration, clear the config cache with `php artisan config:clear` (or rebuild it with `php artisan config:cache`).
## Conclusion
By modifying the `AppServiceProvider` to force HTTPS in production environments, you can resolve issues with CSS and HTML files not displaying correctly in your Laravel application. This simple yet effective change ensures that your assets are always served over the correct scheme, maintaining the integrity and security of your application.
Feel free to ask any questions or share your experiences in the comments below! Happy coding! | japhethjoepari |
1,865,282 | Best sites to practice your programming logic 💻 | Hello everyone! Here are some of the best options (in my opinion) to practice and improve your... | 0 | 2024-05-26T00:53:51 | https://dev.to/miguelrodriguezp99/best-sites-to-practice-your-programming-logic-4pgc | learning, train, javascript, python | Hello everyone! Here are some of the best options (in my opinion) to practice and improve your programming logic:
- **[Codewars](https://www.codewars.com)**: Tackle programming challenges known as "katas" that range from beginner to advanced levels, in various languages.
- **[Adventjs](https://adventjs.dev/es)**: Participate in daily challenges during the holiday season. It's a great way to stay motivated and improve your skills with JavaScript.
- **[Hackerrank](https://www.hackerrank.com)**: A wide variety of programming problems and technical interview challenges in multiple languages. Perfect for preparing for job interviews.
- **[Exercism](https://exercism.org)**: Offers practical exercises in over 50 programming languages, with the option to receive personalized mentorship.
- **[Edabit](https://edabit.com/challenges)**: Practice with short and quick challenges, suitable for both beginners and more experienced programmers.
- **[Project Euler](https://projecteuler.net)**: Mathematical and programming challenges that will help you improve your logical and algorithmic skills.
I hope you find these platforms useful and enjoy enhancing your programming skills. Happy coding! | miguelrodriguezp99 |
1,865,279 | Advanced CSS Grid Layout Techniques | Introduction CSS Grid is a powerful layout tool in web development that allows designers... | 0 | 2024-05-26T00:34:46 | https://dev.to/kartikmehta8/advanced-css-grid-layout-techniques-35o9 | webdev, javascript, beginners, programming | ## Introduction
CSS Grid is a powerful layout tool in web development that allows designers to create complex and dynamic layouts. It offers a more streamlined and effective way of designing websites, compared to the traditional float-based layouts. While most people are familiar with the basic concepts of CSS Grid, there are some advanced techniques that can take your layout game to the next level.
## Advantages
One of the main advantages of CSS Grid is its ability to create responsive layouts without the need for media queries. With the use of grid templates and the grid auto-placement feature, designers can easily create layouts that adapt to different screen sizes. This enhances the overall user experience and makes the website more accessible to all devices.
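As an illustration of that claim, a pattern like the following (the class name is hypothetical) reflows from several columns down to a single column as the container narrows, with no media queries at all:

```css
.card-grid {
  display: grid;
  /* Fit as many 200px-minimum columns as the container allows;
     remaining items wrap automatically via grid auto-placement. */
  grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
  gap: 1rem;
}
```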
## Disadvantages
One drawback of CSS Grid is its lack of browser support for older versions of Internet Explorer. This can limit the use of some advanced features and require the use of fallback methods for older browsers. Additionally, the learning curve for CSS Grid can be steep for those who are new to web development.
## Features
Some advanced CSS Grid techniques include the use of named grid lines, grid area, and grid spanning. Named grid lines allow for more control over the placement of items within the grid. Grid area allows for the creation of visual sections within the grid, making it easier to visualize and organize the layout. Grid spanning allows for items to span across multiple grid areas, creating more complex and dynamic designs.
### Examples of Advanced CSS Grid Techniques
#### Named Grid Lines
```css
.grid-container {
display: grid;
grid-template-columns: [start] 1fr [middle] 1fr [end];
grid-template-rows: [top] 100px [middle] auto [bottom];
}
.item1 {
grid-column: start / end;
grid-row: top;
}
```
This example demonstrates how to use named grid lines to place an item that spans from the start to the end of the grid horizontally, and sits at the top row.
#### Grid Area
```css
.grid-container {
display: grid;
grid-template-areas:
"header header header"
"main main sidebar"
"footer footer footer";
}
.header {
grid-area: header;
}
.main {
grid-area: main;
}
.sidebar {
grid-area: sidebar;
}
.footer {
grid-area: footer;
}
```
This example shows how to define grid areas and assign grid items to these areas for a clear and organized layout.
#### Grid Spanning
```css
.item {
grid-column: span 2; /* Spans across two column tracks */
grid-row: span 3; /* Spans across three row tracks */
}
```
Grid spanning allows elements to occupy more than one grid cell, which is useful for creating more visually interesting and dynamic layouts.
## Conclusion
CSS Grid offers a plethora of advanced techniques that can greatly enhance the layout capabilities of a website. While there may be some drawbacks and challenges, the benefits far outweigh them. With its ability to create responsive layouts and its advanced features, CSS Grid is a must-know for any modern web designer. So, if you haven't explored the full potential of CSS Grid yet, now is the time to do so. | kartikmehta8 |
1,865,278 | File shares with limited access (corporate virtual networks) - Azure Files and Azure Blobs | Create and configure a storage account for Azure Files. Create a storage account for the... | 0 | 2024-05-26T00:31:21 | https://dev.to/olawaleoloye/file-shares-with-limited-access-corporate-virtual-networks-azure-files-and-azure-blobs-4p3p | virtualmachine, virtualnetwork, access, azure | ### Create and configure a storage account for Azure Files.
**Create a storage account for the finance department’s shared files.**
In the portal, **search for and select Storage accounts**.

**Select + Create.**

**For Resource group select Create new**. _Give your resource group a name and select OK to save your changes._

**Provide a Storage account name. Ensure the name meets the naming requirements.**
**Set the Performance to Premium.**
**Set the Premium account type to File shares.**
**Set the Redundancy to Zone-redundant storage.**

**Select Review and then Create the storage account.**

**Wait for the resource to deploy.**

**Select Go to resource.**

### Create and configure a file share with directory.
**Create a file share for the corporate office.**
**In the storage account, in the Data storage section, select the File shares blade.**

**Select + File share and provide a Name.**

**Review the other options, but take the defaults.**

**Select Create**

**Add a directory to the file share for the finance department.** _For future testing, upload a file._


**Select your file share and select + Add directory.**
**Name the new directory finance**.

**Select Browse and then select the finance directory.
Notice you can Add directory to further organize your file share.**

**Upload a file of your choosing.**


### Configure and test snapshots.
Similar to blob storage, you need to protect against accidental deletion of files. You decide to use snapshots.
Select your file share.
**In the Operations section, select the Snapshots blade.**
**Select + Add snapshot.** _The comment is optional._
**Select OK.**


**Select your snapshot and verify your file directory and uploaded file are included.**

_Practice using snapshots to restore a file._
**Return to your file share.**
**Browse to your file directory.**

**Locate your uploaded file and in the Properties pane select Delete. Select Yes to confirm the deletion.**


**Select the Snapshots blade and then select your snapshot.**

**Navigate to the file you want to restore.**

**Select the file and the select Restore.
Provide a Restored file name.**

**Verify your file directory has the restored file.**

### Configure restricting storage access to selected virtual networks.
_The tasks in this section require a virtual network with a subnet. In a production environment these resources would already be created._
**Search for and select Virtual networks.**

**Select Create, select your resource group, and give the virtual network a name.**


**Take the defaults for other parameters, select Review + create, and then Create.**

**Wait for the resource to deploy.
Select Go to resource.**

**In the Settings section, select the Subnets blade.**
**Select the default subnet.**

_In the Service endpoints section choose **Microsoft.Storage** in the Services drop-down._
**Do not make any other changes.**
**Be sure to Save your changes.**

_The storage account should only be accessed from the virtual network you just created. Learn more about using private storage endpoints._
**Return to your files storage account.**
**In the Security + networking section, select the Networking blade.
Change the Public network access to Enabled from selected virtual networks and IP addresses.**
**In the Virtual networks section, select Add existing virtual network.
Select your virtual network and subnet, select Add.**

**Be sure to Save your changes.**

| olawaleoloye |
1,865,277 | Yogify : Your Yoga community builder app | This is a submission for the Deployed Link for Yogify What I Built The inspiration to... | 0 | 2024-05-26T00:30:02 | https://dev.to/dailydev/yogify-your-yoga-community-builder-app-jb5 | devchallenge, awschallenge, amplify, fullstack | *This is a submission for the [Deployed Link for Yogify ](https://main.d3nrs7i5oosne0.amplifyapp.com)*
## What I Built
The inspiration to build Yogic Yoga came from a personal experience. My brother struggled with restlessness and hyperactivity, constantly moving around and unable to stay still. After numerous consultations with doctors, one suggested trying yoga and meditation. To our surprise, these practices made significant improvements in his health.
One morning, on my way to work, I noticed a group of elderly people practicing yoga in the park. Intrigued, I thought it would be beneficial to take my brother to join them. However, the next day, no one showed up. I realized there was a major communication gap among the elderly participants. While many use WhatsApp to communicate, organizing regular sessions was still a challenge.
As someone who enjoys yoga, I often attend group classes included in my gym membership. After discussing with my trainer, he agreed to offer free classes to these elderly people for a month, planning to charge a minimal fee afterward. Despite the willingness, there was a disconnect between the trainer and the potential participants.
This experience highlighted several real problems related to yoga that led to the development of Yogify Yoga:
**Communication Gaps:** Organizing and communicating about yoga sessions, especially among elderly people, can be challenging without a centralized platform.
**Access to Trainers:** Finding and connecting with qualified yoga trainers who are willing to offer their services, particularly at affordable rates, is often difficult.
**Health Benefits:** Yoga and meditation have proven health benefits, but many people are unaware or unsure how to start or where to find resources.
**Community Support:** There is a need for a supportive community where individuals can share their progress, encourage each other, and stay motivated.
## Demo
[Video Link](https://youtu.be/87QzA-l3XSo): https://youtu.be/87QzA-l3XSo
[Github Link](https://github.com/AdityaGupta20871/Yogify) : https://github.com/AdityaGupta20871/Yogify

Authentication page (Google login is not working for now)

Exercise Page listing all basic yoga poses

Workshop Page listing all workshops for people who are willing to conduct workshops

Form page for user registration for workshop

Workshop Form Page for creating workshops

HomePage

Users Page listing all users

Figma Experience
## Journey
**Initial Setup:**
- I Started building Yogify on May 19 with no prior AWS experience.
- Cloned the repository and explored the documentation to get started.
**Challenges with Amplify Sandbox**
- Struggled with configuring IAM and setting up the Amplify Sandbox.
- Slow laptop performance made redeployment time-consuming and frustrating.
- Eventually found the Amplify documentation helpful in navigating these challenges.
**Configuring Authentication**
- Set up authentication using the Authenticator.
- Faced difficulties with implementing Google authentication due to sandbox setup issues.
- Realized the importance of configuring the sandbox first and setting keys in secrets, not environment variables.
- Overcame these hurdles with the help of detailed Amplify guides.
**Data and Storage Services Integration**
- Found setting up the data service straightforward with Amplify Gen 2 documentation.
- Encountered complexity in integrating the storage service.
- Managed to use StorageImage and StorageManager components after considerable effort.
- Look forward to exploring how to use data and storage services together more effectively.
**Serverless Functions and Customizations**
- Successfully added a custom message for sending confirmation links during authentication.
- This addition enhanced the user experience with a personalized touch.
**Using Amplify's Figma Plugin**
- Utilized the Figma plugin to generate code and create reusable components.
- Appreciated the ease of editing and adding code using the Amplify UI builder plugin, though refinement was still necessary.
- Had a positive experience with the AWS Amplify UI Builder.
**Overall Experience and Future Prospects**
- Despite the hurdles, found the comprehensive Amplify documentation crucial for progress.
- Gained substantial knowledge about AWS Amplify Gen 2.
- Proud of the robust foundation built for Yogify.
- Excited to continue developing and refining the app.
**Connected Components and/or Feature Full**
Yes, I used various components from AWS Amplify, including StorageImage, StorageManager, Authenticator, ThemeProvider, Flex, TextField, Button, PhoneNumberField, Table, TableHead, TableRow, TableCell, TableBody, and Geo, among others. My project incorporates all four core features: data, authentication, serverless functions, and file storage, according to the Gen 2 React documentation.
**Future**
I would like to add a blog section and an AI assistant. I also saw an in-app messaging feature on AWS that excited me; although I could not add it within this short deadline, I would love to add these in the future, along with a dedicated community forum for yoga lovers.
| dailydev |
1,865,215 | LEGITIMATE RECOVERY TEAM FOR BITCOIN/CRYPTO ASSETS RECOVERY/FOLKWIN EXPERT RECOVERY. | In the symphony of life, where discordant notes of betrayal and mistrust threaten to drown out the... | 0 | 2024-05-25T22:03:19 | https://dev.to/annette_oconnor_4252a11/legitimate-recovery-team-for-bitcoincrypto-assets-recoveryfolkwin-expert-recovery-p9i |

In the symphony of life, where discordant notes of betrayal and mistrust threaten to drown out the melody of trust and harmony, our tale unfolds as a crescendo of reconciliation and redemption. Amidst the cacophony of accusations and recriminations, my partner and I found ourselves ensnared in a tempest of financial misfortune and relational strife. Our journey into the labyrinthine world of investments began with the promise of prosperity and the allure of potential gains. Yet, as the curtain fell on our euphoria, the harsh reality of deceit and betrayal cast a shadow over our once-idyllic partnership. Accusations of financial malfeasance and emotional turmoil ensued, threatening to tear the fabric of our relationship asunder. Amidst the chaos of our discord, a serendipitous encounter with {FOLKWIN} Expert Recovery emerged as a beacon of hope amidst the darkness. Introduced to us by my repentant partner, their reputation as virtuosos in the realm of financial restitution preceded them, offering a glimmer of redemption amidst the wreckage of our shattered dreams. Initially met with skepticism and trepidation, their assurances of legitimacy and efficacy gradually permeated the walls of our doubt, paving the way for a tentative alliance born of necessity and desperation. With meticulous precision and unwavering dedication, they embarked on a journey of discovery, unraveling the intricate threads of our financial entanglement with surgical precision. Like virtuoso conductors orchestrating a symphony of redemption, {FOLKWIN} Expert Recovery deftly navigated the complex nuances of our case, conducting a harmonious melody of forensic acumen and digital prowess. Each transaction was meticulously dissected, each vulnerability unearthed, as they delved into the heart of darkness, shedding light on the clandestine machinations of our adversaries. 
In the crucible of adversity,{FOLKWIN} Expert Recovery emerged as stalwart guardians of integrity and champions of justice, restoring our faith in the possibility of redemption amidst the wreckage of our shattered dreams. With their assistance, we reclaimed our lost funds and salvaged the remnants of our fractured relationship, emerging stronger and more resilient in the aftermath of our trials. In conclusion, the symphony of redemption orchestrated by {FOLKWIN} Expert Recovery stands as a testament to the transformative power of forgiveness and the unwavering resolve of the human spirit.
To those adrift in the tumultuous seas of financial misfortune or relational discord, I implore you to heed the clarion Whatsapp:# +1 (740)705-0711 {FOLKWIN} Expert Recovery.{Or} Email:# Folkwinexpertrecovery @ tech-center . com , Website:# ww w.folkwinexpertrecovery.com .
In their capable hands, the cacophony of chaos gives way to a harmonious crescendo of reconciliation and redemption, offering solace and resolution to all who seek it. Kisses....
God Bless,
Annette O' Connor. | annette_oconnor_4252a11 | |
1,865,276 | PHP vs. Node.js: A Full-Stack Developer’s Guide to Choosing the Right Technology | Choosing the right technology stack is crucial for the success of any web development project. Two... | 0 | 2024-05-26T00:28:32 | https://dev.to/callumdev1337/php-vs-nodejs-a-full-stack-developers-guide-to-choosing-the-right-technology-25c0 | php, javascript, webdev, programming | Choosing the right technology stack is crucial for the success of any web development project. Two popular options for full-stack development are PHP and Node.js. Each has its strengths and weaknesses, and understanding these can help you make an informed decision based on your project requirements. In this post, we’ll explore the key differences between using PHP and Node.js for full-stack development.
## PHP: A Veteran in Web Development
### Overview
PHP (Hypertext Preprocessor) is a server-side scripting language that has been around since 1995. It was designed specifically for web development and has powered a significant portion of the web, including platforms like WordPress, Joomla, and Drupal.
### Strengths
1. **Maturity and Stability**: PHP has been around for decades, which means it is stable and has a vast ecosystem of libraries, frameworks, and tools.
2. **Ease of Use**: PHP is relatively easy to learn and use, making it a good choice for beginners. The language’s syntax is straightforward and well-documented.
3. **Wide Hosting Support**: PHP is supported by almost all web hosting providers, often with pre-configured environments that make deployment straightforward.
4. **Frameworks**: PHP boasts several robust frameworks like Laravel, Symfony, and CodeIgniter, which speed up development and enforce good practices.
### Weaknesses
1. **Concurrency**: PHP is inherently single-threaded, which can be a limitation for handling a large number of simultaneous connections.
2. **Performance**: While PHP 7 and later versions have improved performance significantly, PHP generally lags behind Node.js in raw performance benchmarks.
3. **Asynchronous Programming**: PHP lacks built-in support for asynchronous programming, which can be a limitation for real-time applications.
## Node.js: The Modern Challenger
### Overview
Node.js is a runtime environment that allows you to run JavaScript on the server side. Since its release in 2009, Node.js has gained massive popularity, especially for building scalable and high-performance applications.
### Strengths
1. **Performance**: Node.js is built on the V8 JavaScript engine, which is known for its high performance. Its non-blocking, event-driven architecture makes it ideal for handling concurrent connections.
2. **JavaScript Everywhere**: With Node.js, you can use JavaScript for both client-side and server-side development, which simplifies the development process and allows for code reuse.
3. **Asynchronous Programming**: Node.js natively supports asynchronous programming, making it a great choice for real-time applications like chat apps, online gaming, and live streaming.
4. **Rich Ecosystem**: Node.js has a vast ecosystem of packages available through npm (Node Package Manager), allowing developers to easily add functionality to their applications.
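To make the non-blocking model concrete, here is a minimal sketch (the names are illustrative, and `setTimeout` stands in for real I/O such as a database query): three simulated requests are started together, and the slowest one does not hold up the others.

```javascript
// Minimal sketch of Node's non-blocking model: setTimeout stands in for
// I/O. All three "requests" start together; each completes as soon as
// its own work is done, independent of the others.
const completionOrder = [];

function handleRequest(id, delayMs) {
  return new Promise((resolve) => {
    setTimeout(() => {
      completionOrder.push(id);
      resolve(id);
    }, delayMs);
  });
}

Promise.all([
  handleRequest("slow", 60),
  handleRequest("medium", 30),
  handleRequest("fast", 10),
]).then(() => {
  // Completion order reflects each task's own latency, not start order.
  console.log(completionOrder.join(", ")); // fast, medium, slow
});
```

Total wall-clock time here is roughly the slowest task (60 ms), not the sum of all three, which is exactly why this model suits I/O-heavy servers.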
### Weaknesses
1. **Maturity**: While rapidly growing, Node.js is still younger than PHP, meaning it might not have the same level of stability and long-term support.
2. **Learning Curve**: JavaScript, especially with its asynchronous nature, can be challenging to master, which might steepen the learning curve for new developers.
3. **CPU-Intensive Tasks**: Node.js can struggle with CPU-intensive tasks since its single-threaded nature means it can only handle one task at a time. While this can be mitigated with worker threads or clustering, it adds complexity.
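A small sketch of that last limitation (durations are illustrative): a synchronous CPU-bound loop stalls the event loop, so a timer scheduled for 10 ms cannot fire until the loop finishes.

```javascript
// Sketch: CPU-bound synchronous work blocks the event loop. A timer due
// after 10 ms is held up until the busy loop below returns.
let timerDelay = 0;

function busyWork(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {
    // spinning; the event loop cannot process anything meanwhile
  }
}

const start = Date.now();
setTimeout(() => {
  timerDelay = Date.now() - start;
  console.log(`timer fired after ~${timerDelay} ms (scheduled for 10 ms)`);
}, 10);

busyWork(100); // blocks; the timer callback runs only after this returns
```

Moving `busyWork` into a worker thread (or a separate clustered process) would let the timer fire on schedule, which is the mitigation mentioned above.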
## When to Use PHP?
1. **Content Management Systems (CMS)**: PHP is a great choice for CMS-driven websites due to its wide adoption in platforms like WordPress.
2. **Shared Hosting Environments**: If you plan to deploy on a shared hosting environment, PHP is often the better choice due to its widespread support.
3. **Legacy Projects**: Maintaining or upgrading existing PHP projects can benefit from sticking with PHP to leverage existing codebases and developer familiarity.
## When to Use Node.js?
1. **Real-Time Applications**: For applications requiring real-time capabilities, such as chat apps, live streaming, or collaborative tools, Node.js’s asynchronous nature shines.
2. **Single Language Development**: If your team is proficient in JavaScript, using Node.js for both front-end and back-end development can streamline the development process.
3. **Scalability**: Node.js is well-suited for microservices architectures and applications expected to scale horizontally.
## Conclusion
Both PHP and Node.js have their place in full-stack development. PHP’s maturity, stability, and ease of use make it a reliable choice for traditional web applications and content management systems. On the other hand, Node.js offers superior performance and scalability for real-time, data-intensive applications, making it a strong contender for modern web development.
Ultimately, the choice between PHP and Node.js should be guided by your specific project requirements, team expertise, and long-term maintenance considerations. Happy coding!
---
Feel free to share your experiences or ask questions in the comments below. Let’s continue the discussion on the best use cases for PHP and Node.js in full-stack development! | callumdev1337 |
1,865,255 | Good alternatives to Heroku | A while ago, I attempted to deploy a side project on Heroku, and, to my surprise, a lot had changed... | 0 | 2024-05-26T00:23:30 | https://dev.to/diogoviana19/good-alternatives-to-heroku-4ach | A while ago, I attempted to deploy a side project on Heroku, and, to my surprise, a lot had changed on the platform. Basically, Heroku's free-tier service was no longer available. This was not only bad news for me, but also for every Rails developer who had side projects running there.
So, I went to Twitter to express my concerns about these changes and received many replies suggesting good alternatives. As a result, I have compiled all these suggestions here for you and for my future reference.
#### Here's the list:
1. [Digital Ocean](https://www.digitalocean.com/) -
Good prices and documentation, and their App Platform offers a Heroku-like experience. Good for small and big projects.
1. [Hetzner](https://www.hetzner.com/de/) -
Very similar to Digital Ocean, you can create VPS, databases, load balancers and more. Good prices and it can handle both big and small projects.
1. [Vercel](https://vercel.com) -
Never used it myself, but it seems geared specifically toward frontend developers.
1. [Fly.io](https://fly.io/) -
Very similar to Heroku too, easy to use and support for multiple stacks/languages.
1. [Pocketbase](https://pocketbase.io/) -
Never used it before, but their home page says the following: "Open Source backend for your next SaaS and Mobile app in 1 file". Seems powerful.
1. [Planetscale](https://planetscale.com) -
Directly from their website: "PlanetScale is a MySQL-compatible serverless database that brings you scale, performance, and reliability — without sacrificing developer experience."
1. [Render](https://render.com/) -
I think Render is more of a cloud-agnostic build/run platform; this means your application needs to be hosted somewhere else.
1. [Railway](https://railway.app/) -
Language agnostic and for projects big and small. Never used but it seems very easy to use like Heroku.
1. [Supabase](https://supabase.com/) -
Like PlanetScale, this is only for databases. It is an open-source Firebase alternative for building secure and performant Postgres backends with minimal configuration.
1. [Oracle Cloud](https://www.oracle.com/cloud/free/) -
Oracle Cloud offers a free-tier for a year.
1. [AWS Amplify](https://docs.amplify.aws/) -
Very easy to use, connects to your github repository and can also create review apps based on branches.
1. [Cyclic](https://www.cyclic.sh/) -
"Connect your GitHub repo. We will build, deploy and manage the hosting."
And last but not least, [Netlify](https://www.netlify.com/), which is the one I use to host this website (for free). [Hugo](https://gohugo.io/) + Netlify is a powerful combination.
If you have any suggestions that I might have missed, feel free to post in the comments. | diogoviana19 | |
1,865,250 | Desvendando o Futuro do Desenvolvimento de Aplicativos com .NET MAUI | No mundo dinâmico e em constante evolução do desenvolvimento de software, a busca por frameworks... | 0 | 2024-05-26T00:01:12 | https://dev.to/jucsantana05/desvendando-o-futuro-do-desenvolvimento-de-aplicativos-com-net-maui-h14 | csharp, xamarinforms, beginners, softwaredevelopment |
In the dynamic, ever-evolving world of software development, the search for efficient and versatile frameworks never stops. It is in this context that .NET MAUI (Multi-platform App UI) emerges as a true revolution, providing a unified platform for building native applications that run across multiple platforms. In this article, we will explore the advantages and innovations that .NET MAUI brings to developers and how it can transform your approach to application development.
What is .NET MAUI?
.NET MAUI is the evolution of Xamarin.Forms, designed to simplify the development of applications that run on Android, iOS, macOS, and Windows. It unifies the different UI APIs and frameworks into a single code base, allowing developers to write once and run anywhere. With .NET MAUI, you can build native user interfaces, ensuring exceptional performance and user experience on every supported platform.
Key Benefits of .NET MAUI
1. Unified Development: With .NET MAUI, you can use a single C# code base to build applications that run on multiple platforms, significantly reducing development time and cost.
2. Native Performance: By leveraging the power of native APIs, .NET MAUI ensures that applications have native performance and appearance on each platform, delivering a high-quality user experience.
3. Reusable Components: .NET MAUI's modular architecture enables the creation and reuse of components, making applications easier to maintain and evolve.
Innovations and Features of .NET MAUI:
- MVU (Model-View-Update) Support: In addition to the MVVM (Model-View-ViewModel) pattern, .NET MAUI supports the MVU pattern, offering greater flexibility when choosing an application architecture.
- Graphics and Drawing with SkiaSharp: Integration with SkiaSharp makes it possible to create complex graphics and animations, delivering a rich, interactive visual experience.
- Blazor Integration: .NET MAUI supports Blazor, allowing developers to build user interfaces using web components and modern technologies such as WebAssembly.
Conclusion:
.NET MAUI represents a major step forward in cross-platform application development, offering a powerful combination of efficiency, flexibility, and performance. For developers, this means being able to create robust, attractive applications with less effort and greater productivity. By adopting .NET MAUI, you position yourself at the forefront of technology, ready to face future challenges with confidence and innovation.
Take the opportunity to explore .NET MAUI and discover how it can transform your projects and boost your software development career. The future is multi-platform, and with .NET MAUI you will be ready to build applications that delight and impress on any device.
| jucsantana05 |
1,865,249 | You're parsing URLs wrong. | There are somethings that you should never build by yourself. Not because it's difficult, but because... | 0 | 2024-05-26T00:00:51 | https://dev.to/gewenyu99/youre-parsing-urls-wrong-1eld | javascript, webdev, webapis, beginners | There are somethings that you should never build by yourself. Not because it's difficult, but because it's time consuming and filled with gotchas.
One of these things is URL parsing.
## Try implementing your own URL parsing
Raise your hand if you've done this before :hand:
```js
const baseUrl = "https://example.com";
const endpoint = "api";
const resourceId = "123";
const fullUrl = `${baseUrl}/${endpoint}/${resourceId}`;
console.log(fullUrl); // Output: https://example.com/api/123
```
But you're not a barbarian, maybe you abstract this into a function:
```js
function joinUrlParts(...parts) {
return parts.map(part => part.trim()).join('/');
}
// Example usage:
const baseUrl = "https://example.com";
const endpoint = "api";
const resourceId = "123";
const fullUrl = joinUrlParts(baseUrl, endpoint, resourceId);
console.log(fullUrl); // Output: https://example.com/api/123
```
Well, that's certainly better, but this example can break with a single stray `/`:
```js
// Example usage that breaks:
const baseUrl = "https://example.com";
const endpoint = "api/";
const resourceId = "123";
const fullUrl = joinUrlParts(baseUrl, endpoint, resourceId);
console.log(fullUrl); // Output: https://example.com/api//123
```
So maybe you tried again, you add sanitation to your input.
```js
function joinUrlParts(...parts) {
// Trim leading and trailing slashes from each part
const sanitizedParts = parts.map(part => part.trim().replace(/^\/+|\/+$/g, ''));
// Join the sanitized parts with slashes
return sanitizedParts.join('/');
}
```
This looks bullet proof? Right? Not that many edge cases!
Well, yes, but actually no.

You still have to support some interesting use cases:
- What about joining `https://example.com/cats` and `../dogs/corgie`?
- This doesn't understand `https://example.com/dogs/corgie#origins`.
- This doesn't escape characters `http://www.example.com/d%C3%A9monstration.html`
- This doesn't accept query parameters `https://some.site/?id=123`
- This doesn't let you **parse** URLs.
The list goes on. You **CAN** implement all of this yourself, but you really shouldn't.
## But why does this matter?
If you write modern JavaScript, you know that the [URL object](https://developer.mozilla.org/en-US/docs/Web/API/URL) exists from JS Web APIs.
But JavaScript didn't always have a good way to construct and parse URLs built in. The URL object (defined in the WHATWG URL Standard) only became a reliable built-in around the era of the [ECMAScript 2015 specs](https://262.ecma-international.org/6.0/).
There are still lots of older videos and blogs that parse URLs with all kinds of fancy magic, like [JavaScript - How to Get URL Parameters - Parse URL Variables](https://www.youtube.com/watch?v=ZZH20MO1yP8&ab_channel=HelpVideoGuru), and some fun workarounds like [The lazy man's URL parsing](https://www.joezimjs.com/javascript/the-lazy-mans-url-parsing/).
When I wrote my first few JavaScript projects in 2016, I parsed and built URLs with all kinds of giant loops and regexes. The solutions were hard to read at best and extremely buggy at worst.
Then I used [Node.js path.join() Method](https://www.w3schools.com/nodejs/met_path_join.asp) in my later projects. Until this week, I had assumed this was still the way to go. I even tried importing [browserify's path implementation for the browser](https://github.com/browserify/path-browserify).
## The modern JS way to URL handling
If you've been sleeping under a rock like me, here's a quick overview of the [URL object](https://developer.mozilla.org/en-US/docs/Web/API/URL) from Web APIs.
```js
// Create a new URL object
const url = new URL('https://example.com/path?param1=value1&param2=value2#section');
// Parse the URL
console.log('Host:', url.host); // Host: example.com
console.log('Path:', url.pathname); // Path: /path
console.log('Search Params:', url.searchParams.toString()); // Search Params: param1=value1&param2=value2
console.log('Hash:', url.hash); // Hash: #section
// Update parts of the URL
url.protocol = 'https';
url.host = 'new.example.com';
url.searchParams.set('param3', 'value3');
url.hash = '#updated-section';
// Recreate the URL
const rebuiltUrl = new URL(url.href);
// Update the URL object directly
rebuiltUrl.searchParams.set('param2', 'updatedValue');
// Print the updated URL
console.log('Rebuilt URL:', rebuiltUrl.href); // Rebuilt URL: https://new.example.com/path?param1=value1&param2=updatedValue&param3=value3#updated-section
```
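Every edge case from the handwritten version above is handled for free. For instance, the relative-path join (`../dogs/corgie`) just works with the two-argument constructor:

```javascript
// Resolve a relative path against a base URL — no manual '..' handling
const resolved = new URL('../dogs/corgie', 'https://example.com/cats/');
console.log(resolved.href); // https://example.com/dogs/corgie

// Query parameters are parsed for you as well
const withQuery = new URL('https://some.site/?id=123');
console.log(withQuery.searchParams.get('id')); // 123
```

The same constructor also throws on invalid URLs, which makes validation a `try`/`catch` instead of a regex.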
## Browser compatibility
All remotely recent, modern browsers support the URL API. Some methods may not be fully implemented, but the basic usage remains consistent. Find the [full browser compatibility table here](https://developer.mozilla.org/en-US/docs/Web/API/URL#browser_compatibility).
If **you must support ancient browsers**, the [core-js project](https://github.com/zloirock/core-js#url-and-urlsearchparams) provides a polyfill for older browsers.
## Bottom line
Please use native Web APIs. Avoid building one-off utility classes for common actions. JavaScript is constantly changing; I wrote my first lines of JS in 2016, and I constantly find myself leaning on old, outdated information.
If you have a moment, take a look through the [Web API docs](https://developer.mozilla.org/en-US/docs/Web/API) from MDN. I guarantee you will find something new that solves a problem you needed to build your own solution for in the past.
## A fun aside
I still hate how the JavaScript [Date object](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date)'s interface is so lacking.
Try finding a date 5 days in the past, or comparing if two events happened 3 days apart.
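For reference, here is what those two tasks look like with nothing but the built-in `Date` (a sketch; the raw millisecond math deliberately ignores DST and calendar edge cases):

```javascript
// Five days in the past: raw millisecond arithmetic
const fiveDaysAgo = new Date(Date.now() - 5 * 24 * 60 * 60 * 1000);

// Did two events happen 3 days apart? More millisecond math.
const eventA = new Date('2024-05-01T00:00:00Z');
const eventB = new Date('2024-05-04T00:00:00Z');
const daysApart = Math.abs(eventB - eventA) / 86_400_000;
console.log(daysApart); // 3
```

The TC39 Temporal proposal should eventually make this sane, but until it ships everywhere, the libraries stay.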
The fact that [moment.js](https://momentjs.com/) or [day.js](https://day.js.org/) needs to exist in 2024 bothers me a lot.
## More fun stuff
Come chat with me in these places:
- [Twitter](https://x.com/WenYuGe1)
- [Linkedin](https://www.linkedin.com/in/wen-yu-ge/)
- [GitHub](https://github.com/gewenyu99) | gewenyu99 |
1,865,965 | Mastering AWS Batch: A .NET Developer Guide to Batch File Processing | TL;DR In this blog post, we will explore how to leverage AWS Batch and Amazon S3 to... | 0 | 2024-05-27T10:03:48 | https://nikiforovall.github.io/dotnet/aws/2024/05/26/aws-batch-dotnet.html | dotnet, aws, s3, awsbatch | ---
title: Mastering AWS Batch: A .NET Developer Guide to Batch File Processing
published: true
date: 2024-05-26 00:00:00 UTC
tags: dotnet, aws, s3, awsbatch
canonical_url: https://nikiforovall.github.io/dotnet/aws/2024/05/26/aws-batch-dotnet.html
---
## TL;DR
In this blog post, we will explore how to leverage AWS Batch and Amazon S3 to efficiently process files using .NET
Source code: [https://github.com/NikiforovAll/aws-batch-dotnet](https://github.com/NikiforovAll/aws-batch-dotnet)
- [TL;DR](#tldr)
- [Introduction](#introduction)
- [Part 1: Understanding AWS Batch and S3](#part-1-understanding-aws-batch-and-s3)
- [AWS Batch](#aws-batch)
- [Components of AWS Batch](#components-of-aws-batch)
- [Jobs](#jobs)
- [Job Definitions](#job-definitions)
- [Job Queues](#job-queues)
- [Compute Environments](#compute-environments)
- [S3](#s3)
- [Part 2: Building a .NET CLI for AWS Batch operations](#part-2-building-a-net-cli-for-aws-batch-operations)
- [Building the CLI](#building-the-cli)
- [Define Commands](#define-commands)
- [Creating a Docker Image](#creating-a-docker-image)
- [Part 3: Setting up AWS with Terraform IaC](#part-3-setting-up-aws-with-terraform-iac)
- [Part 4: Running AWS Batch Jobs with CLI Commands](#part-4-running-aws-batch-jobs-with-cli-commands)
- [Conclusion](#conclusion)
- [References](#references)
## Introduction
It is common to have a task that needs to process a large number of files. Here are some real-world examples:
- A deep learning model used for natural language processing (NLP). The model might be trained on a dataset consisting of millions of text files, each containing a piece of text, such as a book, an article, or a conversation. Each of these text files would need to be processed and fed into the model for it to learn and understand the structure of the language.
- Genomics researchers often have to process massive amounts of data. For instance, they might need to analyze the genomes of thousands of individuals to identify patterns and variations. This task involves processing and analyzing a large number of files, each containing the genetic information of an individual.
- A financial institution that needs to process transaction data for millions of their customers for fraud detection. Each transaction would be a separate file, and sophisticated algorithms would need to process these files to detect any irregular patterns that could indicate fraudulent activity.
## Part 1: Understanding AWS Batch and S3
_AWS Batch_ allows you to efficiently distribute the workload across multiple compute resources. By combining it with Amazon S3, you can scale your file processing tasks to handle any number of files while taking advantage of the automatic resource provisioning and management provided by the service.
With AWS Batch, you can parallelize the processing of your files, significantly reducing the overall processing time. This is particularly useful when dealing with large datasets or computationally intensive tasks: by distributing the workload across multiple compute resources, AWS Batch lets you process multiple files simultaneously, maximizing the throughput of your file processing pipeline.
### AWS Batch
_AWS Batch_ helps you run batch computing workloads on the AWS Cloud. Batch computing is a common way for developers, scientists, and engineers to access large amounts of compute resources. _AWS Batch_ removes the undifferentiated heavy lifting of configuring and managing the required infrastructure, similar to traditional batch computing software. The service can efficiently provision resources in response to submitted jobs, letting you parallelize the processing of your files and significantly reduce the overall processing time. Additionally, _AWS Batch_ integrates seamlessly with other AWS services, such as _Amazon S3_, allowing you to easily access and process your files stored in the cloud.
<center>
<img src="https://nikiforovall.github.io/assets/aws-batch/aws-batch-service.png" alt="aws-batch-service" width="10%" style="margin: 15px;">
</center>
For more information on how to use _AWS Batch_ you can refer to the [AWS Batch documentation](https://docs.aws.amazon.com/batch/).
<center>
<img src="https://nikiforovall.github.io/assets/aws-batch/batch-arch.png" alt="batch-arch" width="75%" style="margin: 15px;">
</center>
#### Components of AWS Batch
AWS Batch simplifies running batch jobs across multiple Availability Zones within a Region. You can create AWS Batch compute environments within a new or existing VPC. After a compute environment is up and associated with a job queue, you can define job definitions that specify which Docker container images to use to run your jobs. Container images are stored in and pulled from container registries, which may exist within or outside of your AWS infrastructure.
##### Jobs
A unit of work (such as a shell script, a Linux executable, or a Docker container image) that you submit to AWS Batch. It has a name, and runs as a containerized application on AWS Fargate, Amazon ECS container instances, Amazon EKS, or Amazon EC2 resources in your compute environment, using parameters that you specify in a job definition.
When you submit a job to an AWS Batch job queue, the job enters the `SUBMITTED` state. It then passes through the following states until it succeeds (exits with code 0) or fails. See [Job states - Documentation](https://docs.aws.amazon.com/batch/latest/userguide/job_states.html).
When you submit a job request to AWS Batch, you have the option of defining a dependency on a previously submitted job.
```bash
# Submit job A
aws batch submit-job --job-name jobA --job-queue myQueue --job-definition jdA
# Output
{
"jobName": "example",
"jobId": "876da822-4198-45f2-a252-6cea32512ea8"
}
# Submit job B
aws batch submit-job --job-name jobB --job-queue myQueue --job-definition jdB --depends-on jobId="876da822-4198-45f2-a252-6cea32512ea8"
```
<center>
<img src="https://nikiforovall.github.io/assets/aws-batch/job-states.gif" alt="job-states" width="75%" style="margin: 15px;">
</center>
##### Job Definitions
A job definition is a template that describes various parameters for running a job in AWS Batch. It includes information such as the Docker image to use, the command to run, the amount of CPU and memory to allocate, and more. AWS Batch job definitions specify how jobs are to be run. While each job must reference a job definition, many of the parameters that are specified in the job definition can be overridden at runtime.
##### Job Queues
Jobs are submitted to a job queue where they reside until they can be scheduled to run in a compute environment.
##### Compute Environments
Job queues are mapped to one or more compute environments. Compute environments contain the Amazon ECS container instances that are used to run containerized batch jobs. A specific compute environment can also be mapped to more than one job queue. Within a job queue, the associated compute environments each have an order that's used by the scheduler to determine where jobs that are ready to run will run.
### S3
Amazon Simple Storage Service (_Amazon S3_) is an object storage service that offers industry-leading scalability, data availability, security, and performance. Customers of all sizes and industries can use _Amazon S3_ to store and protect any amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics.
<center>
<img src="https://nikiforovall.github.io/assets/aws-batch/aws-s3-service.png" alt="aws-s3-service" width="10%" style="margin: 15px;">
</center>
For more information on how to use _AWS S3_ you can refer to the [Amazon S3 documentation](https://docs.aws.amazon.com/s3/).
## Part 2: Building a .NET CLI for AWS Batch operations
Let’s say we want to build a pipeline that analyzes files from an S3 bucket for word frequency and finds the most used words in the bucket. This pipeline can be implemented using AWS Batch and a CLI application.
The migration process consists of three stages: initialize a migration plan, run a migration for each item in the plan, and aggregate the results.
```bash
USAGE:
BatchMigration.dll [OPTIONS] <COMMAND>
EXAMPLES:
BatchMigration.dll plan --source s3://source-bucket --destination s3://destination-bucket/output --plan
s3://destination-bucket/plan.json
BatchMigration.dll migrate --plan s3://destination-bucket/plan.json --index 1
BatchMigration.dll merge --source s3://destination-bucket/output
OPTIONS:
-h, --help Prints help information
COMMANDS:
plan Prepares migration plan for a bucket
migrate Run a migration based on migration plan and index
merge Merge the results
```
<center>
<img src="https://nikiforovall.github.io/assets/aws-batch/stepfunctions_graph.png" alt="stepfunctions_graph" width="25%" style="margin: 15px;">
</center>
1. Initialize Migration Plan:
- Use the `plan` command to prepare a migration plan for a bucket.
- Specify the source bucket using the `--source` option.
- Specify the destination bucket for the output using the `--destination` option.
- Use the `--plan` option to generate a migration plan file in JSON format, such as `s3://destination-bucket/plan.json`.
2. Run Migration:
- Use the `migrate` command to run the migration for each item in the plan.
- Specify the migration plan file using the `--plan` option.
- Use the `--index` option to specify the index of the item in the plan to migrate.
3. Aggregate Results:
- Use the `merge` command to merge the migration results.
- Specify the source bucket for the results using the `--source` option.
By following these steps, you can migrate files from the source bucket to the destination bucket using AWS Batch and the provided CLI commands.
Here is the relationship between the jobs: we start one job to build a plan; then, based on the number of files we need to process, we run **N** migration jobs; finally, a merge job aggregates the results.
### Building the CLI
We will be using [Spectre.Console](https://spectreconsole.net/) to build a CLI application. This library provides a convenient way to create command-line interfaces with rich text formatting and interactive features.
```csharp
var services = ConfigureServices();
var app = new CommandApp(new TypeRegistrar(services));
app.Configure(config =>
{
config
.AddCommand<PlanCommand>("plan")
.WithDescription("Prepares migration plan for a bucket")
.WithExample(
"plan",
"--source s3://source-bucket",
"--destination s3://destination-bucket/output",
"--plan s3://destination-bucket/plan.json"
);
config
.AddCommand<MigrateCommand>("migrate")
.WithDescription("Run a migration based on migration plan and index")
.WithExample("migrate", "--plan s3://destination-bucket/plan.json", "--index 1");
config
.AddCommand<MergeCommand>("merge")
.WithDescription("Merge the results")
.WithExample("merge", "--source s3://destination-bucket/output");
});
var result = app.Run(args);
```
#### Define Commands
The basic idea is to scan the contents of the S3 bucket and write the plan back to S3; this lets us distribute the work between jobs. As you will see later, there is a concept of an array job. An array job is a job that shares common parameters, such as the job definition, vCPUs, and memory. It runs as a collection of related yet separate basic jobs that might be distributed across multiple hosts and might run concurrently. At runtime, the `AWS_BATCH_JOB_ARRAY_INDEX` environment variable is set to the container's corresponding job array index number. You can use this index value to control how your array job children are differentiated. In our case, we start the subsequent `migrate` job as an array job sized to the total number of items in the migration plan.
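As a language-agnostic sketch of that index lookup (the real implementation is the C# `migrate` command shown later), a hypothetical plain-text plan would be consumed like this:

```shell
# Hypothetical plan file: one item per line (the real plan is JSON in S3)
printf 'file1.txt\nfile2.txt\n' > plan.txt

# AWS Batch injects AWS_BATCH_JOB_ARRAY_INDEX into each child of an
# array job; default to 0 when running outside Batch.
INDEX="${AWS_BATCH_JOB_ARRAY_INDEX:-0}"
FILE=$(sed -n "$((INDEX + 1))p" plan.txt)
echo "worker $INDEX migrates $FILE"
```

Each child job runs the same command; only the injected index differs, so each one picks a different line of the plan.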
💡 The examples below are abbreviated and modified for simplicity, please refer to the source code [https://github.com/NikiforovAll/aws-batch-dotnet](https://github.com/NikiforovAll/aws-batch-dotnet) for details.
Here is a `plan` command:
```csharp
public class PlanCommand(IAmazonS3 s3) : CancellableAsyncCommand<PlanCommand.Settings>
{
private static readonly JsonSerializerOptions JsonSerializerOptions =
new() { WriteIndented = true };
public class Settings : CommandSettings
{
[CommandOption("-s|--source <SourcePath>")]
public string Source { get; set; } = default!;
[CommandOption("-d|--destination <DestinationPath>")]
public string Destination { get; set; } = default!;
[CommandOption("-p|--plan <PlanPath>")]
public string Plan { get; set; } = default!;
}
public override async Task<int> ExecuteAsync(
CommandContext context,
Settings settings,
CancellationToken cancellation
)
{
var (source, destination, plan) = (
S3Path.Parse(settings.Source),
S3Path.Parse(settings.Destination),
S3Path.Parse(settings.Plan)
);
var files = await this.GetFilesAsync(source.Bucket, source.Key, cancellation);
var migrationPlan = new MigrationPlan(
new(source, destination, plan, files.Count),
files
);
await this.StoreMigrationPlan(migrationPlan, cancellation);
AnsiConsole.MarkupLine($"Running scanning for {source}");
AnsiConsole.MarkupLine($"Result of the scan will be saved to {destination}");
AnsiConsole.MarkupLine($"Plan can be found here {plan}");
return 0;
}
```
Here is the `migrate` command. This is what it does:
1. Loads the migration plan
2. Gets the corresponding file to migrate based on the index (`--index` or the injected job array index)
3. Calculates word occurrences for the file
4. Puts the result into the destination bucket; the file name is copied from the source file
```csharp
public class MigrateCommand(IAmazonS3 s3, IConfiguration configuration)
: CancellableAsyncCommand<MigrateCommand.Settings>
{
public class Settings : CommandSettings
{
[CommandOption("-p|--plan <PlanPath>")]
public string Plan { get; set; } = default!;
[CommandOption("-i|--index <Index>")]
public int? Index { get; set; } = default!;
}
public override async Task<int> ExecuteAsync(
CommandContext context,
Settings settings,
CancellationToken cancellation
)
{
var plan = S3Path.Parse(settings.Plan);
var index = settings.Index ?? configuration.GetValue<int>("JOB_ARRAY_INDEX");
var migrationPlan = await this.GetPlanAsync(plan, cancellation);
var file = migrationPlan!.Items[index];
var fileSourcePath = new S3Path(
migrationPlan.Metadata.Source.Bucket,
Path.Combine(migrationPlan.Metadata.Source.Key, file)
);
var fileDestinationPath = new S3Path(
migrationPlan.Metadata.Destination.Bucket,
Path.Combine(migrationPlan.Metadata.Destination.Key, file)
);
var sourceText = await this.GetTextAsync(fileSourcePath, cancellation);
var destinationText = CalculateWordsOccurrences(sourceText!);
var stream = new MemoryStream(Encoding.UTF8.GetBytes(destinationText));
await s3.PutObjectAsync(
new PutObjectRequest()
{
BucketName = fileDestinationPath.Bucket,
Key = fileDestinationPath.Key,
InputStream = stream
},
cancellation
);
AnsiConsole.MarkupLine($"Plan: {plan}");
AnsiConsole.MarkupLine($"Migrating file([blue]{index}[/]) - {fileSourcePath}");
AnsiConsole.MarkupLine($"Migrating file([blue]{index}[/]) - {fileDestinationPath}");
return 0;
}
}
```
Here is `merge` command:
```csharp
public class MergeCommand(IAmazonS3 s3, ILogger<MergeCommand> logger)
: CancellableAsyncCommand<MergeCommand.Settings>
{
public class Settings : CommandSettings
{
[CommandOption("-s|--source <SourcePath>")]
public string Source { get; set; } = default!;
}
public override async Task<int> ExecuteAsync(
CommandContext context,
Settings settings,
CancellationToken cancellation
)
{
ArgumentNullException.ThrowIfNull(settings.Source);
var sourcePath = S3Path.Parse(settings.Source);
var files = await this.GetFilesAsync(sourcePath.Bucket, sourcePath.Key, cancellation);
// e.g. (word1: 10), (word2: 3)
var occurrences = await this.AggregateFiles(files, cancellation);
var top = occurrences
.Where(x => x.Key is { Length: > 2 })
.Where(x => x.Value >= 3)
.OrderByDescending(x => x.Value)
.Take(100);
WriteTable(top);
return 0;
}
}
```
### Creating a Docker Image
In order to run our task in _AWS Batch_ we need to push our image to [Amazon Elastic Container Registry](https://docs.aws.amazon.com/ecr/).
```dockerfile
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build-env
WORKDIR /App
COPY . ./
# Restore as distinct layers
RUN dotnet restore
# Build and publish a release
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /App
COPY --from=build-env /App/out .
ENTRYPOINT ["dotnet", "BatchMigration.dll"]
CMD ["--help"]
```
And here is how to build it and push it to the public ECR repository:
```bash
docker build -t aws-batch-dotnet-demo-repository .
aws ecr-public get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin public.ecr.aws
docker tag aws-batch-dotnet-demo-repository:latest public.ecr.aws/t7c5r3b7/aws-batch-dotnet-demo-repository:latest
docker push public.ecr.aws/t7c5r3b7/aws-batch-dotnet-demo-repository:latest
```
💡 Note, you will need to create a public repository and get the push instructions from the repository page in the AWS Management Console.
## Part 3: Setting up AWS with Terraform IaC
I’ve decided to prepare a Terraform example of how to provision a full _AWS Batch_ setup, because configuring it from the management console can be somewhat tedious. The code below demonstrates the main parts of the _AWS Batch_ configuration; the code is redacted, so, once again, please consult the source code for the precise configuration.
Below you can find a Terraform script that sets up an AWS Batch environment. Here’s a breakdown of what it does:
1. It specifies the required provider (AWS) and its version, and configures the provider with the region “us-east-1”.
2. It creates two S3 buckets, one named “aws-batch-demo-dotnet-source-bucket” and the other “aws-batch-demo-dotnet-destination-bucket”.
3. It uploads all files from the local “documents” directory to the source bucket.
4. It creates an AWS Batch environment using the “terraform-aws-modules/batch/aws” module. This environment includes:
1. A compute environment named “main\_ec2” with a type of “EC2”, a maximum of 8 vCPUs, and a desired number of 2 vCPUs. The instances launched by this environment will be of type “m4.large”.
2. A job queue named “MainQueue” that is enabled and has a priority of 1. This queue uses the “main\_ec2” compute environment.
3. Three job definitions named “plan”, “migrate”, and “merge”. Each job runs a different command in a container that uses the latest image from the “public\_ecr” repository. Each job requires 1 vCPU and 1024 units of memory.
```terraform
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
# Configure the AWS Provider
provider "aws" {
region = "us-east-1"
}
resource "aws_s3_bucket" "source_bucket" {
bucket = "aws-batch-demo-dotnet-source-bucket"
}
resource "aws_s3_bucket" "destination_bucket" {
bucket = "aws-batch-demo-dotnet-destination-bucket"
}
resource "aws_s3_object" "documents" {
for_each = fileset("./documents", "**/*")
bucket = aws_s3_bucket.source_bucket.bucket
key = each.value
source = "./documents/${each.value}"
}
locals {
region = "us-east-1"
name = "aws-batch-dotnet"
tags = {
Name = local.name
Example = local.name
}
}
module "batch" {
source = "terraform-aws-modules/batch/aws"
compute_environments = {
main_ec2 = {
name_prefix = "ec2"
compute_resources = {
type = "EC2"
min_vcpus = 0
max_vcpus = 8
desired_vcpus = 2
instance_types = ["m4.large"]
security_group_ids = [module.vpc_endpoint_security_group.security_group_id]
subnets = module.vpc.private_subnets
tags = {
# This will set the name on the Ec2 instances launched by this compute environment
Name = "${local.name}-ec2"
Type = "Ec2"
}
}
}
}
# Job queues and scheduling policies
job_queues = {
main_queue = {
name = "MainQueue"
state = "ENABLED"
priority = 1
compute_environments = ["main_ec2"]
tags = {
JobQueue = "Job queue"
}
}
}
job_definitions = {
plan = {
name = "${local.name}-plan"
propagate_tags = true
container_properties = jsonencode({
command = ["plan"]
image = "${module.public_ecr.repository_url}:latest"
resourceRequirements = [
{ type = "VCPU", value = "1" },
{ type = "MEMORY", value = "1024" }
]
})
tags = {
JobDefinition = "Plan"
}
},
migrate = {
name = "${local.name}-migrate"
propagate_tags = true
container_properties = jsonencode({
command = ["migrate"]
image = "${module.public_ecr.repository_url}:latest"
resourceRequirements = [
{ type = "VCPU", value = "1" },
{ type = "MEMORY", value = "1024" }
]
})
tags = {
JobDefinition = "Migrate"
}
},
merge = {
name = "${local.name}-merge"
propagate_tags = true
container_properties = jsonencode({
command = ["merge"]
image = "${module.public_ecr.repository_url}:latest"
resourceRequirements = [
{ type = "VCPU", value = "1" },
{ type = "MEMORY", value = "1024" }
]
})
tags = {
JobDefinition = "Merge"
}
}
}
tags = local.tags
}
```
💡 Note, you don’t need to know Terraform to try it. Simply run `terraform init` and `terraform apply` to provision the environment.
## Part 4: Running AWS Batch Jobs with CLI Commands
```bash
aws batch submit-job \
--job-name aws-batch-dotnet-plan-01 \
--job-queue MainQueue \
--job-definition aws-batch-dotnet-plan \
--share-identifier "demobatch*" \
--scheduling-priority-override 1 \
--container-overrides '{
"command": [
"plan",
"--source",
"s3://aws-batch-demo-dotnet-source-bucket",
"--destination",
"s3://aws-batch-demo-dotnet-destination-bucket/output/",
"--plan",
"s3://aws-batch-demo-dotnet-destination-bucket/plans/plan-01.json"
]
}'
```
Here is an example of produced migration plan:
```json
{
"Metadata": {
"Source": {
"Bucket": "aws-batch-demo-dotnet-source-bucket",
"Key": ""
},
"Destination": {
"Bucket": "aws-batch-demo-dotnet-destination-bucket",
"Key": "output/"
},
"Plan": {
"Bucket": "aws-batch-demo-dotnet-destination-bucket",
"Key": "plans/plan-01.json"
},
"TotalItems": 2
},
"Items": [
"file1.txt",
"file2.txt"
]
}
```
Run the migration:
```bash
aws batch submit-job \
--job-name aws-batch-dotnet-migrate-01 \
--job-queue MainQueue \
--job-definition aws-batch-dotnet-migrate \
--share-identifier "demobatch*" \
--scheduling-priority-override 1 \
--array-properties size=2 \
--container-overrides '{
"command": [
"migrate",
"--plan",
"s3://aws-batch-demo-dotnet-destination-bucket/plans/plan-01.json"
]
}'
```
💡 Note, `--array-properties size=2` is used because we need to process two files. The array job is scheduled to the main queue; once picked up, it spawns sub-jobs that are processed concurrently.
Here is an example of file processing:
```text
and:12
batch:11
aws:7
# the list goes on
```
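The actual counting lives in the C# `CalculateWordsOccurrences` method in the repo; as a rough, hypothetical illustration, the same `word:count` format can be produced with standard Unix tools:

```shell
# Sample input (hypothetical file contents)
printf 'AWS Batch and AWS and and\n' > file1.txt

# One word per line, lowercased, counted, most frequent first, word:count
tr -cs '[:alpha:]' '\n' < file1.txt \
  | tr '[:upper:]' '[:lower:]' \
  | sort | uniq -c | sort -rn \
  | awk '{print $2":"$1}'
# and:3
# aws:2
# batch:1
```

This is only meant to show the shape of the intermediate files that the `merge` step later aggregates.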
Aggregate results:
```bash
aws batch submit-job \
--job-name aws-batch-dotnet-merge-01 \
--job-queue MainQueue \
--job-definition aws-batch-dotnet-merge \
--share-identifier "demobatch*" \
--scheduling-priority-override 1 \
--container-overrides '{
"command": [
"merge",
"--source",
"s3://aws-batch-demo-dotnet-destination-bucket/output/"
]
}'
```
And here is the result that was output to the console:
```text
┌───────────┬───────┐
│ Key │ Value │
├───────────┼───────┤
│ batch │ 11 │
│ aws │ 8 │
│ service │ 6 │
│ amazon │ 5 │
│ storage │ 5 │
│ computing │ 4 │
│ jobs │ 4 │
│ data │ 4 │
│ services │ 3 │
│ simple │ 3 │
│ workloads │ 3 │
└───────────┴───────┘
```
☝️ We can define dependencies between jobs by providing `--depends-on`. Alternatively, we can use AWS Step Functions to orchestrate the submission of jobs.
<center>
<img src="https://nikiforovall.github.io/assets/aws-batch/stepfunctions-console-execution.png" alt="stepfunctions-console-execution" style="margin: 15px;">
</center>
## Conclusion
In this post, we have journeyed through the process of implementing a file processing pipeline. We covered everything from creating a .NET CLI for AWS Batch operations and setting up AWS with Terraform IaC, to running AWS Batch jobs with CLI commands.
Through these guides, I hope to have provided you with a comprehensive understanding and practical skills to confidently navigate AWS Batch in a .NET environment.
Remember, the best way to consolidate your learning is through practice. So, don’t hesitate to apply these concepts in your projects and see the magic happen!
## References
- [https://github.com/NikiforovAll/aws-batch-dotnet](https://github.com/NikiforovAll/aws-batch-dotnet)
- [https://docs.aws.amazon.com/batch/latest/userguide/example\_array\_job.html](https://docs.aws.amazon.com/batch/latest/userguide/example_array_job.html)
- [https://docs.aws.amazon.com/batch/latest/APIReference/API\_SubmitJob.html](https://docs.aws.amazon.com/batch/latest/APIReference/API_SubmitJob.html)
- [https://docs.aws.amazon.com/batch/latest/userguide/what-is-batch.html](https://docs.aws.amazon.com/batch/latest/userguide/what-is-batch.html)
- [https://aws.amazon.com/blogs/hpc/encoding-workflow-dependencies-in-aws-batch/](https://aws.amazon.com/blogs/hpc/encoding-workflow-dependencies-in-aws-batch/) | nikiforovall |
1,864,810 | How to scale a Django app to serve one million users | Wish your Django app could handle a million hits? This post is a compilation of articles, books, and... | 0 | 2024-05-25T23:59:11 | https://coffeebytes.dev/en/how-to-scale-a-django-app-to-serve-one-million-users/ | performance, django, scalability, python | ---
title: How to scale a Django app to serve one million users
published: true
date: 2024-05-26 00:00:00 UTC
tags: performance,django,scalability,python
canonical_url: https://coffeebytes.dev/en/how-to-scale-a-django-app-to-serve-one-million-users/
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b1528nvebgxo5wyq7678.jpg
---
Wish your Django app could handle a million hits? This post is a compilation of articles, books, and videos I’ve read on how to take a Django application to its maximum capabilities, I’ve even implemented some of these recommendations myself.
It’s also a good time to remember that if your application is just starting out, you probably [shouldn’t obsess about its performance… yet](https://coffeebytes.dev/en/dont-obsess-about-your-web-application-performance/).
## Reduce slow queries in Django
As you know, database access is usually the bottleneck of most applications. **The most important action to take is to reduce the number of queries and the impact of each one of them.** You can reduce the impact of your queries by 90%, and I am not exaggerating.
It is quite common to write code that triggers multiple queries to the database, as well as quite expensive lookups.
Identify what queries are being made in your application using [django-debug-toolbar](https://github.com/jazzband/django-debug-toolbar) and reduce them, or make them more efficient:
- **select\_related()** to [avoid multiple searches in foreign key or one-to-one relationships](https://coffeebytes.dev/en/differences-between-select_related-and-prefetch_related-in-django/)
- **prefetch\_related()** to prevent excessive searches on many-to-many or many-to-one relationships
- **annotate()** to add information to each object in a query. I have an entry where I explain [the difference between annotate and aggregate](https://coffeebytes.dev/en/django-annotate-and-aggregate-explained/).
- **aggregate()** to condense all the information from a query into a single value (sums, averages).
- **Object Q** to join queries by OR or AND directly from the database.
- **F expressions** to perform operations at the database level instead of in Python code.

_Django debug toolbar showing the SQL queries of a Django request_
An example using _select\_related_:
``` python
# review/views.py
from .models import Review

def list_reviews(request, product_id):
    # select_related prevents a new query every time we access review.user
    queryset = Review.objects.filter(product__id=product_id).select_related('user')
    # ...
```
## Configure gunicorn correctly
Gunicorn is the most widely used Python WSGI HTTP server for Django applications. But it is not asynchronous; consider combining it with one of its asynchronous counterparts, hypercorn or uvicorn. The latter provides a gunicorn-compatible worker class.
### Set the right number of workers
Make sure you are using the right number of gunicorn workers for the number of cores in your processor. The gunicorn documentation recommends setting the workers to (2 x number of cores) + 1. According to the documentation, **with 4-12 workers you can serve from hundreds to thousands of requests per second**, so that should be enough for a medium to large scale website.
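The (2 x cores) + 1 heuristic can live directly in a gunicorn config file, which is plain Python. A minimal sketch (the bind address and timeout values are illustrative assumptions):

``` python
# gunicorn.conf.py -- minimal sketch of the (2 x cores) + 1 heuristic.
import multiprocessing

# Extra workers can serve requests while others are blocked on I/O;
# (2 x cores) + 1 is the starting point the gunicorn docs suggest.
workers = multiprocessing.cpu_count() * 2 + 1

bind = "127.0.0.1:8000"  # illustrative; adjust to your deployment
timeout = 30
```

Run it with something like `gunicorn -c gunicorn.conf.py myproject.wsgi`.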
## Improve the performance of your serializers
If you use DRF and its generic classes to create serializers, you may not be getting the best performance. The generic serializer classes perform data validation, which can be quite time consuming if you are only going to read data.
Even if you remembered to mark your fields as read\_only, DRF serializers are not the fastest; you might want to check out [Serpy](https://serpy.readthedocs.io/en/latest/) or [Marshmallow](https://marshmallow.readthedocs.io/en/stable/). The topic is quite broad, but stay with the idea that there is major room for improvement over Django's serializers.
I leave you this article that explains [how some developers managed to reduce the time cost of serialization by 99%.](https://hakibenita.com/django-rest-framework-slow)
## Use pagination in your views
It probably sounds pretty obvious, yet I feel I should mention it: you don’t need to return an entire database table if your user only finds the first few records useful.
Use the _paginator_ object provided by Django, or limit the results of a search to a few.
DRF also has an option to [paginate your results](https://www.django-rest-framework.org/api-guide/pagination/), check it out.
``` python
# review/views.py
from django.views.generic import ListView
from .models import Review

class ReviewList(ListView):
    model = Review
    paginate_by = 25
    context_object_name = 'review_list'
```
## Use indexes in your models
Understand your more complex queries and try to create indexes for them. An index will make your Django searches faster, but it will also slightly slow down creations and updates, besides taking up a little more space in your database. Try to strike a healthy balance between speed and storage space used.
``` python
from django.db import models

class Review(models.Model):
    created = models.DateTimeField(
        auto_now_add=True,
        db_index=True,
    )
```
## Use indexes for your searches
If your application makes heavy use of information searches, consider using an efficient [search engine, such as Solr](https://coffeebytes.dev/en/searches-with-solr-with-django-haystack/), rather than implementing the code yourself.
There are many options available:
- ElasticSearch
- Solr
- Whoosh
- Xapian
## Remove unused middleware
Each middleware implies an extra step in each web request, so removing all those middlewares that you do not use will mean a slight improvement in the response speed of your application.
Here are some common middleware that are not always used: messages, flat pages and localization (no, not geographic location, but translating content according to the user's locale).
``` python
MIDDLEWARE = [
    # ...
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.contrib.flatpages.middleware.FlatpageFallbackMiddleware',
    'django.middleware.locale.LocaleMiddleware',
]
```
## Caching in Django
When the response time of your application becomes a problem, you should start caching all time-consuming and resource-intensive results.
Would you like to dig deeper into the caching system? I have a post about [caching in django using memcached](https://coffeebytes.dev/en/caching-in-django-rest-framework-using-memcached/) that you can check out.
If your page has many models that rarely change, it does not make sense to hit the database with each new HTTP request. Just put the response of that request in the cache and your response time will improve: every time the same content is requested, no new query or computation is needed, because the value is returned directly from memory.
Among the options available are:
- Memcached
- Redis
- Database cache
- File system cache
``` python
# settings.py
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
```
The django cache is configurable at many, many levels, from the entire site to views or even small pieces of information.
``` python
# myapp/views.py
from django.shortcuts import render
from django.views.decorators.cache import cache_page

@cache_page(60 * 15)  # cache this view's response for 15 minutes
def my_view(request):
    return render(request, 'myapp/template.html', {
        'time_consuming_data': get_time_consuming_data()
    })
```
Note that **memory-based caches (Memcached, Redis) are ephemeral storage**; the entire cache will disappear if the system is rebooted or shut down.
## Use Celery for asynchronous tasks
Sometimes the bottleneck is the responsibility of third parties. When you send an email or request information from a third party, you have no way of knowing how long your request will take, a slow connection or an oversaturated server can keep you waiting for a response. There is no point in keeping the user waiting tens of seconds for an email to be sent, send them a reply back and transfer the email to a queue to be processed later. [Celery](https://docs.celeryproject.org/en/stable/) is the most popular way to do this.
No idea where to start? I have a couple of posts where I explain [how to run asynchronous tasks with celery and django](https://coffeebytes.dev/en/celery-and-django-to-run-asynchronous-tasks/).
``` python
# myapp/tasks.py  (celery's autodiscovery looks for tasks in tasks.py)
from celery import shared_task

@shared_task
def send_order_confirmation(order_pk):
    email_data = generate_data_for_email(order_pk)
    send_customized_mail(**email_data)
```
## Partition the tables in your database
When your tables exceed millions of records, each search will go through the entire table, taking a very long time in the process. How could we solve this? By splitting the table into parts so that each search is done on only one of them: for example, one table for data from one year ago (or whatever period you prefer), another for data from two years ago, and so on back to the earliest data.
The instructions for implementing partitioning depend on the database you are using. If you are using postgres this feature is only available for Postgres versions higher than 10. You can use [django-postgres-extra](https://django-postgres-extra.readthedocs.io/en/master/table_partitioning.html) to implement those extra features not found in the django ORM.
The implementation is too extensive and would require a full entry. There is an excellent article that explains how to implement [Postgresql partitioning in Django.](https://pganalyze.com/blog/postgresql-partitioning-django/)
Consider also looking into database read replicas: depending on the architecture of your application, you can run multiple replicas for reading and a master for writing. This approach is a whole topic in itself and beyond the scope of a short post, but now you know what to look for.
## Use a CDN (Content Delivery Network)
Serving static images and files can hinder the important part of your application: generating dynamic content. You can delegate the task of serving static content to a content delivery network (CDN).
In addition to benefiting from the geographic locations of CDNs; a server in the same country (or continent) as your user will result in a faster response.
There are many CDN options available, among the most popular options are AWS, [Azure](https://coffeebytes.dev/en/azure-az-900-certification-exam-my-experience/), Digital Ocean, Cloud Flare, among others.
## Denormalization
Sometimes there are quite costly runtime queries that could be solved by adding redundancy, that is, repeated information. For example, imagine you want your home page to show the number of products that contain the phrase “for children”: running a query that searches for the phrase and then counts the matches is fairly straightforward. But if you have 10,000, 100,000 or 1,000,000 products, every time you want that count, your database will scan the entire table.
Instead of performing a count, you could store that number in the database or in memory and return it directly, to keep it updated you could use a periodic count or increment it with each addition.
Of course this brings the problem that you now have more data to maintain and keep in sync, so **you should only use this option to solve your Django performance problems if you have already exhausted the other options.**
``` python
count = my_model.objects.filter(description__icontains="for children").count()

# ... denormalizing: each row of the my_count model stores a description
# and the precomputed total of matching results.
count = my_count.objects.get(description="for children")
total_count = count.total
```
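To keep the stored count from drifting, funnel reads and writes through one place that updates the counter on every insertion. A framework-agnostic sketch in plain Python (not the Django ORM; the class and method names are made up for illustration):

``` python
# Sketch of a denormalized counter: reads are O(1) because the count
# is stored and incremented on write, not recomputed on every read.
class ProductCatalog:
    def __init__(self):
        self._products = []
        self._counts = {}  # phrase -> stored count (the denormalized data)

    def track_phrase(self, phrase):
        # Initialize the counter once, e.g. from a one-off full count.
        self._counts[phrase] = sum(phrase in p for p in self._products)

    def add_product(self, description):
        self._products.append(description)
        # Increment every tracked counter on write instead of
        # re-scanning the whole table on every read.
        for phrase in self._counts:
            if phrase in description:
                self._counts[phrase] += 1

    def count(self, phrase):
        return self._counts[phrase]  # O(1) read

catalog = ProductCatalog()
catalog.track_phrase("for children")
catalog.add_product("wooden puzzle for children")
catalog.add_product("office chair")
```

The trade-off is exactly the one described above: writes do a little more work so reads stay cheap.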
## Review the impact of third-party plugins
Sometimes our website works almost perfectly, but third-party plugins (analytics tools from Facebook or Google, social media chat widgets) hurt the performance of our application. Learn how to delay their loading or modify them to reduce their impact, using async, defer or other HTML attributes, in combination with Javascript.
If the above is impossible, evaluate alternatives or consider eliminating them.
## Consider using another interpreter to improve django performance
It’s not all about the database, sometimes the problem is in the Python code itself.
In addition to the normal Python interpreter, the one offered by default on the official Python website, there are other interpreters that are sure to give you better performance.
[Pypy](https://www.pypy.org/) is one of them, it is responsible for optimizing Python code by analyzing the type of objects that are created with each execution. This option is ideal for applications where Django is in charge of returning a result that was mainly processed using Python code.
But not everything is wonderful: third-party interpreters, including pypy, are usually not 100% compatible with all Python code, though they are compatible with most of it. So, just like the previous option, **using a third-party interpreter should be one of the last options you consider to solve your Django performance problem.**
## Write bottlenecks in a low-level language with Swig
If you’ve tried all of the above and still have a bottlenecked application, you’re probably squeezing too much out of Python and need the speed of another language. But don’t worry, you don’t have to redo your entire application in C or C++. [Swig](http://www.swig.org/) allows you to create modules in C, C++, Java, Go or other lower level languages and import them directly from Python.
Do you want to know how much difference there is between Python and a compiled language like Go? In my post [Python vs Go I compare the speed of both languages](https://coffeebytes.dev/en/python-vs-go-go-which-is-the-best-programming-language/).
If you have a bottleneck caused by some costly mathematical computation, which highlights the lack of speed of Python being an interpreted language, you may want to rewrite the bottleneck in some low-level language and then call it using Python. This way you will have the ease of use of Python with the speed of a low-level language.
## ORMs and alternative frameworks
Depending on the progress of your application, you may want to migrate to another framework faster than Django. Django’s ORM is not exactly the fastest out there, and, at the time of writing, it is not asynchronous. You might want to consider giving [sqlalchemy](https://www.sqlalchemy.org/), [ponyorm](https://ponyorm.org/) a try.
Or, if your application is not very complex at the database level, you may want to write your own sql queries and combine them with some other framework.
The current trend is to separate frontend and backend, so Django is being used in conjunction with Django Rest Framework to create APIs, so if your plans include the creation of an API, you may want to consider FastAPI, if you don’t know it, take a look at my post where I explain [the basics of FastAPI](https://coffeebytes.dev/en/fastapi-tutorial-the-best-python-framework/).
## Bonus: applications with more than 63,000 endpoints
There is a talk from DjangoCon 2019 where the speaker explains how they managed to deal with an application with 63,000 endpoints, each with different permissions.
{% youtube O6-PbTPAFXw %}
## Bonus: Technical blogs
Pinterest and Instagram are two gigantic sites that started out by choosing Django as their backend. You can find information about optimization and very specific problems in their technical blogs.
The instagram blog has a post called [Web Service efficiency at Instagram with Python](https://instagram-engineering.com/web-service-efficiency-at-instagram-with-python-4976d078e366), where they explain some problems encountered when handling 500 million users and how to fix them.
Here are the links to their blogs:
- [Pinterest engineering](https://medium.com/pinterest-engineering)
- [Instagram engineering blog](https://instagram-engineering.com/)
References:
- Definitive Guide to Django: Web Development Done Right by Adrian Holovaty and Jacob Kaplan Moss
- Two scoops of Django 1.8 by Daniel Roy Greenfeld and Audrey Roy Greenfeld
- High performance Django by Peter Baumgartner and Yann Malet | zeedu_dev |
1,851,925 | REST API: Best practices and design | How do I design a REST API? How many levels should I nest my related resources? Relative or full... | 0 | 2024-05-26T06:24:37 | https://coffeebytes.dev/en/rest-api-best-practices-and-design/ | systemdesign, opinion, rest, api | ---
title: REST API: Best practices and design
published: true
date: 2024-05-26 00:00:00 UTC
tags: systemdesign,opinion,rest,api
canonical_url: https://coffeebytes.dev/en/rest-api-best-practices-and-design/
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o4unx09tw8uvkyvgv90y.jpg
---
How do I design a REST API? How many levels should I nest my related resources? Relative or full URLs? This post is a compilation of some recommendations about some good REST API design practices that I have found in books and articles on the internet. I leave the sources at the end of the article in case you are interested in going deeper or see where this information comes from.
Before we get started, there are a number of [basic features of a REST API](https://coffeebytes.dev/en/basic-characteristics-of-an-api-rest-api/), which I laid out in a previous post, check them out if you have questions. In this post I’m going to talk a bit about some more subjective aspects related to REST API design.
Remember that a REST API can return other formats, not just JSON, but I’m going to focus on this one for the examples because it’s quite popular.
I’m going to start with a fairly common question: how do I structure my JSON response?
## Structure for JSON responses
There are different ways to structure the response of a REST API. There is no valid or invalid one; it depends on the taste of each team and the needs of the application. **The important thing here is to maintain consistency and homogeneity across all your responses.**
### According to json:api
There is a [group of people who set out to standardize JSON responses](https://jsonapi.org/) into a single response style, either for returning single or multiple resources. You can take their style as a reference when designing their API to ensure uniformity of responses.
``` json
{
  "products": [{
    "id": 1,
    "title": "title"
  }]
}
```
### Twitter style API
Twitter has its own way of doing things, the response from an individual resource looks like this:
``` json
{
  "id": 1,
  "title": "title"
}
```
For multiple resources, Twitter decided to include them within an array.
``` json
[
  {
    "id": 1,
    "title": "title"
  },
  {
    "id": 2,
    "title": "title"
  }
]
```
### Facebook style API
On the other hand, on Facebook, the syntax for individual resources looks like this, just like Twitter:
``` json
{
  "id": 1,
  "title": "title"
}
```
While an answer for multiple resources is like this:
``` json
{
  "data": [
    {
      "id": 1,
      "title": "title"
    },
    {
      "id": 2,
      "title": "title"
    }
  ]
}
```
Who should you listen to? As you can see, there are differences between companies, and I wouldn't dare say that one or the other is correct, but I believe that if you stay consistent across each of your endpoints and document it well, you shouldn't have any problems.
## Relative or full URLs in HATEOAS?
Remember that HATEOAS is a [feature of REST APIs](https://coffeebytes.dev/en/basic-characteristics-of-an-api-rest-api/)? Well, from what I've researched, there's no clear consensus or official stance on whether it's better to include relative or full URLs. There is a lot of debate about it on Stack Overflow, but Microsoft uses full URLs in its responses; take that into account when designing your REST API.
``` json
{
  "rel": "self",
  "href": "https://adventure-works.com/customers/2"
}
```
## Objects nested in the response
Generally an API does not return isolated resources, but resources related to others at the database level through one-to-one, many-to-many or one-to-many relationships. The question here is: do we include them in the response even if this increases its size? Do we return only the identifiers and let the client fetch them later? It depends.
### Identifiers in the response
This approach to the problem will require that if the user requires access to the information, it is downloaded at a later time. It is ideal for data that is rarely consulted or plentiful.
``` json
{
  "posts": [{
    "id": 1,
    "title": "title",
    "comments": [2, 3, 4]
  }]
}
```
This can bring you the problem of n+1 queries if you don’t handle it well; consider the example above, each request to a post implies a new request to the database to get each comment.
Of course that can be fixed by optimizing your searches so that, instead of getting them individually, you get them in a single query.
```
GET /comments/2,3
```
### Resources in the response
It is also possible to directly add the related objects in a single response, to avoid having to download them later. This will make each response take a little longer, as the server will process more information, but it can save subsequent requests to the API.
``` json
{
  "posts": [
    {
      "id": 1,
      "title": "title",
      "comments": [
        {
          "id": 2,
          "text": "..."
        },
        {
          "id": 3,
          "text": "..."
        },
        {
          "id": 4,
          "text": "..."
        }
      ]
    }
  ]
}
```
If you want flexibility, consider creating an endpoint where the client can tell your API which resources to nest explicitly in the URL, so they are only included in the response when requested.
```
GET /posts/1?embed=comments
```
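Server-side, honoring an `embed` parameter like the one above is just a conditional lookup before building the response. A minimal framework-agnostic sketch (the data and function names are illustrative assumptions):

``` python
# Sketch: only nest related comments when the client asks via ?embed=comments.
POSTS = {1: {"id": 1, "title": "title", "comment_ids": [2, 3]}}
COMMENTS = {2: {"id": 2, "text": "..."}, 3: {"id": 3, "text": "..."}}

def serialize_post(post_id, embed=()):
    post = POSTS[post_id]
    data = {"id": post["id"], "title": post["title"]}
    if "comments" in embed:
        # One batched lookup instead of leaving the client to fetch each id.
        data["comments"] = [COMMENTS[cid] for cid in post["comment_ids"]]
    else:
        data["comments"] = post["comment_ids"]  # identifiers only
    return data
```

Either branch avoids the n+1 problem: the related rows are gathered in one pass, not one query per id.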
## Pagination in APIs
As I’ve already mentioned in previous posts when I talked about Django, for [application performance](https://coffeebytes.dev/en/is-your-django-application-slow-maximize-its-performance-with-these-tips/) reasons, you don’t always want to return the whole database to your users in each request. For large databases it is best to break the response into pages, with a limited number of items per page.
To facilitate the use of your API, consider adding pagination-related information in your response:
- The total number of items
- The number of elements per page
- The total number of pages
- The current page
- A URL to the previous page (if any)
- A URL to the next page (if any)
As well as any other information you consider relevant.
``` json
{
  "data": [
    {}
  ],
  "pagination": {
    "total": 60,
    "items_per_page": 12,
    "current_page": 1,
    "total_pages": 5,
    "next_url": "https://api.example.com/items?page=2"
  }
}
```
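That metadata can be derived from just three inputs. A minimal sketch (the function name and URL template are illustrative assumptions):

``` python
import math

def pagination_meta(total, items_per_page, current_page,
                    base_url="https://api.example.com/items"):
    # Derive the pagination fields shown above.
    total_pages = math.ceil(total / items_per_page)
    meta = {
        "total": total,
        "items_per_page": items_per_page,
        "current_page": current_page,
        "total_pages": total_pages,
    }
    # Only emit the navigation URLs that actually exist.
    if current_page > 1:
        meta["previous_url"] = f"{base_url}?page={current_page - 1}"
    if current_page < total_pages:
        meta["next_url"] = f"{base_url}?page={current_page + 1}"
    return meta
```

Omitting `previous_url` on the first page and `next_url` on the last page lets clients detect the boundaries without extra fields.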
## API Versioning
APIs are not static, they change with business needs, so they can change over time. It is important that the consumers of your API are aware of these changes, so versioning your API is an excellent idea.
### Should I version my API?
Generally you will want to version your API. However, if your API is extremely simple and its structure is extremely stable, or it works in such a way that changes are added as new endpoints, without modifying the old ones, you could leave it unversioned. If you have doubts about whether your API fits into the above, you should probably version it.
### Where to version the API?
For an API to adhere to the [REST architecture requirements](https://coffeebytes.dev/en/basic-characteristics-of-an-api-rest-api/) it must meet certain characteristics, but some companies choose to bypass these requirements for their APIs and still call them REST.
Here are some options for versioning your APIs used by large companies, regardless of whether they are REST compliant or not.
### At url level
Probably the most popular option of all.
Incredibly simple to understand and implement but will cause you problems with clients who store URLs in a database, because with every change you will have to update them. Also, it’s hard to separate them on different servers. Technically, **putting the version in the url is not REST**.
Examples of companies: Twitter, dropbox, youtube, etsy.
``` bash
http://domain.com/api/v1/
```
### Domain level
Quite simple to understand and implement but will bring problems to those customers who store urls in database. Again, technically, **placing the version in the domain is not REST**.
Examples of companies: Twitter, dropbox, youtube, etsy.
``` bash
http://apiv1.domain.com
```
### By means of parameters in the url or in the body
It keeps the same URL; only the parameters change. It causes problems with clients that store URLs and their parameters in a database. Technically, **using parameters for API versioning is not REST**.
Examples of companies: Google data, Paypal.
``` bash
http://domain.com/resource?version=1
```
In the HTTP request body it would look like this:
``` bash
POST /places HTTP/1.1
Host: api.example.com
Content-Type: application/json

{
  "version": "1.0"
}
```
### Through HTTP headers
Retains the same urls but may confuse caching systems.
Example companies: Azure.
``` bash
GET /resources HTTP/1.1
Host: example.com
ApiVersion: 1.0
Vary: ApiVersion
```
Consider that you need to add a Vary header so that [caching systems](https://coffeebytes.dev/en/caching-in-django-rest-framework-using-memcached/) do not store different versions of the API under the same URL.
### In the content negotiation
Remember that mechanism defined in the HTTP protocol that allows you to obtain different representations of a resource? Well, in addition to formats, it can be used to specify versions.
It keeps the same URLs, though it can confuse developers who do not understand headers.
Examples of companies: Github, Adidas.
``` bash
application/vnd.github[.version].param[+json]
```
In REST, the resource is one thing and its representation another; besides the format, resources have another form of representation, the API version, so this approach does comply with REST, although it can be a little more confusing for people unfamiliar with the HTTP protocol.
## How much nesting of API resources?
When we have relationships between our resources, it is quite tempting to create deeply hierarchical URLs, complicating the use of the API.
``` bash
# /resource/<id>/subresource/<id>/subsubresource/<id>/subsubsubresource ❌
/clients/99/orders/88/products/77/variants ❌
```
The [DRF documentation suggests a flat structure](https://www.django-rest-framework.org/api-guide/relations/#example_2) when designing APIs.
The White House API standards guide also advocates succinct nesting, setting this as the limit:
``` bash
resource/<id>/resource
```
[Microsoft also recommends keeping URIs as simple as possible.](https://docs.microsoft.com/en-us/azure/architecture/best-practices/api-design) But how do I refer to resources deeper in the URL? Well you can create an endpoint with one or two levels of nesting and access them directly.
``` bash
# /subresource/<id>
```
### And how to deal with related resources?
Very long URLs, with multiple hierarchies above, can be shortened by accessing them directly using the direct reference to the resource.
Instead of having an endpoint that requires the entire hierarchy in the URI. As in this example:
``` bash
store/99/clients/99/orders/88/products/77
```
Reduce the length of the endpoint to a minimum, the identifier should be enough to access the resource.
``` bash
# /subresource/<id>/subsubresource/<id>
/orders/88/products/77
```
Notice how even in the absence of the initial part of the URI above, we can still access the resource and it is perfectly readable.
## Notify about API updates
Sometimes it is necessary to introduce structural changes in an API; to prevent problems for everyone who consumes it, we need to notify them. But… how?
In the [book Two Scoops of Django](https://coffeebytes.dev/en/the-best-django-book-two-scoops-of-django-review/), the authors recommend the following steps for notifying an API version change.
- Notify users as far in advance as possible via email, blogs or any other means, almost to the point of boredom.
- Replace the deprecated API response with an HTTP 410 error that returns a message containing links to: the new endpoint, the new API documentation and, if it exists, the text explaining why the changes were made.
## Limit your API through a throttling policy
You should limit your API. Users should not have unrestricted access and unlimited requests. Some users can abuse your API, keeping your server busy, preventing the rest of the users from using it and increasing your costs.
One way around this is to set a [throttling policy](https://coffeebytes.dev/en/throttling-on-nginx/) on your server for any user.
You can also make it the center of your business and offer payment plans according to the number of requests per minute to your API.
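To get a feel for what a throttling policy does before reaching for nginx or your framework's throttle classes, a fixed-window counter per client is the simplest form. A minimal in-memory sketch (class and method names are made up for illustration; production setups keep these counters in Redis or in the web server itself):

``` python
import time

class FixedWindowThrottle:
    """Allow at most `limit` requests per client per `window` seconds."""

    def __init__(self, limit, window=60.0):
        self.limit = limit
        self.window = window
        self._hits = {}  # client_id -> (window_start, count)

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        start, count = self._hits.get(client_id, (now, 0))
        if now - start >= self.window:
            start, count = now, 0  # the window expired; start a fresh one
        if count >= self.limit:
            return False  # caller should respond 429 Too Many Requests
        self._hits[client_id] = (start, count + 1)
        return True

throttle = FixedWindowThrottle(limit=2, window=60.0)
```

When `allow` returns `False`, the API responds with 429 and the client must wait for the next window.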
## Special characters in the URI
Use only valid characters in your URI.
According to the specification [RFC 3986](https://datatracker.ietf.org/doc/html/rfc3986#section-3), the only valid characters, i.e., that do not need to be encoded, in a URI are the basic Latin alphabet letters, digits and some special characters (as long as they are used for their intended purpose).
- Secure characters [0-9a-zA-Z]: do not need to be encoded
- Non-reserved characters [- . \_ ~]: do not need to be encoded
- Reserved characters [: / ? # [ ] @ ! $ & ' ( ) * + , ; =] only need to be encoded if they are not used for their intended purpose (e.g. a slash not used to separate path segments)
- Unsafe characters [< > % { } | \ ^ `]: need to be encoded.
- All other characters need to be encoded.
The above is changing and they are trying to add many more symbols from different languages, you can read more about it in the [w3 idn and iri article](https://www.w3.org/International/articles/idn-and-iri/).
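Python's standard library applies these same rules: `urllib.parse.quote` leaves unreserved characters untouched and percent-encodes the rest. A quick sketch:

``` python
from urllib.parse import quote

# Unreserved characters pass through untouched.
print(quote("abc-._~123"))    # abc-._~123

# Unsafe characters and spaces are percent-encoded.
print(quote("a b<c>"))        # a%20b%3Cc%3E

# '/' is kept by default because it usually separates path segments;
# pass safe="" to encode it when it is data rather than a delimiter.
print(quote("a/b"))           # a/b
print(quote("a/b", safe=""))  # a%2Fb
```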
## Consider SEO in your URLs
Search engines consider the URL to rank a web page, if search engine ranking is important to your website, don’t just use identifiers, tell the search engine the topic in the URL. SEO and URLs are a topic too broad to summarize in a few lines, but this should give you an idea of how to search for more information.
``` bash
/posts/post-title ✅
/posts/99-post-title ✅
/posts/99 ❌
```
I hope you found the post useful, or at least that it introduced you to material you hadn’t previously considered when designing an API.
## Reference sources
- [JSON:API Documentation](https://jsonapi.org/)
- [IRI y URI](https://www.w3.org/International/articles/idn-and-iri/)
- [Whitehouse’s API standard](https://github.com/WhiteHouse/api-standards)
- [Best Practices API design from Microsoft](https://docs.microsoft.com/en-us/azure/architecture/best-practices/api-design)
- [Sturgeon, P. (2015). _Build Api’s_. Philip J. Sturgeon.](https://www.amazon.com.mx/Build-APIs-You-Wont-Hate/dp/0692232699/ref=sr_1_1?__mk_es_MX=%C3%85M%C3%85%C5%BD%C3%95%C3%91&crid=2W0ZTSCO349YL&keywords=build+apis&qid=1648756000&sprefix=build+apis%2Caps%2C187&sr=8-1)
- [Massé, M. (2012). REST API design rulebook. Sebastopol, CA: O’Reilly.](https://www.amazon.com.mx/Rest-API-Design-Rulebook-Consistent/dp/1449310508)
- [Two scoops of django](https://www.feldroy.com/books/two-scoops-of-django-3-x) | zeedu_dev |
1,865,235 | Unpacking the Role of a Heating Installation Contractor in Oklahoma City | Essential Services for Oklahoma Climate Amid the varying weather patterns, residents of Oklahoma... | 0 | 2024-05-25T23:25:04 | https://dev.to/austinballard/unpacking-the-role-of-a-heating-installation-contractor-in-oklahoma-city-20k | Essential Services for Oklahoma Climate
Amid the varying weather patterns, residents of Oklahoma City understand the value of having a fully operating heating system. The balance between the extreme summer heat and winter cold requires apt climate control at all times. In comes the indispensable role of a [heating installation contractor in Oklahoma City](https://www.waze.com/live-map/directions/us/ok/oklahoma-city/accutemp-heating-and-air-conditioning?to=place.ChIJ3aUYpNAwcI8R83LX01Z8UUA).
Understanding Heating System Installation
For any edifice, whether it be your home or your office, a robust heating system is crucial to maintain an ideal temperature year-round. An adequately installed heating system ensures minimized utility costs while maximizing comfort by evenly distributing heat during those nippy winters in the city. Therein lies the importance of hiring a professional heating installation contractor who can carry out this task effectively and efficiently.
Ensuring Safe Installation Practices
A primary advantage of hiring skilled heating installation contractors is their strict adherence to safety regulations. As every trained contractor knows, there's no leeway when it comes to utilizing established safety protocols during heating installations. It ensures that everyone involved and occupying the premise remains safe from any potential hazards before, during, and after installation.
Sustainability & Energy Efficiency
The impact of our choices on modern living means thinking about long-term sustainability when making decisions like installing new systems in our homes or offices. That's where a proficient heating installation contractor brings great value by recommending advanced energy-efficient products suited for individual needs - saving you money and having a lesser environmental impact over time.
Expertise & Assurance
Heating installations require intricate tasks involving electricity, removing old systems, precise measuring, and ductwork navigation, all components that benefit greatly from expert handling to prevent malfunctions or long-term problems. By hiring a professional heating installation contractor in Oklahoma City, you ensure services from individuals with years of expertise under their belt, providing you with peace of mind knowing that the job is done right.
Maintenance, Repair & Guidance
No heating system runs flawlessly without timely maintenance. Occasionally, unusual noises or inefficient operation may require immediate attention to prevent further complications. Professional contractors not only handle installations but also take care of essential post-installation responsibilities like maintenance and repairs, walking you through regular upkeep and teaching you to recognize potential issues that might surface over time.
In our everyday living amid the Oklahoma City climate, a well-performing heating system in homes or offices is an indispensable requirement for comfort and efficiency. While it’s easy to choose the first contractor you come across or even try to install the system yourself, it's evident that employing professional heating installation contractors adds significant value in safety, sustainability, proficiency, and lasting assurance. Rest easy in the heart of America - knowing your heating systems have been installed by qualified professionals who guarantee cozy winters while respecting your pocket and the planet.
**[AccuTemp Heating & Air Conditioning](https://www.accutempairok.com/)**
Address: [105 W Charlotte Dr, Oklahoma City, Oklahoma, 73139](https://www.google.com/maps?cid=4634622203904094963) Phone: 405-777-7395 | austinballard |
1,863,006 | DISCOVERING ZAP: An Interesting New Part Of The BLAST Ecosystem (Blockchain World) | With blockchain technology, we know how fast it has been and all the amazing ecosystems that have... | 0 | 2024-05-25T23:17:18 | https://dev.to/uchemma001/discovering-zap-an-interesting-new-part-of-the-blast-ecosystem-blockchain-world-40h5 | block, web3, blastecosyste, zap | Blockchain technology has moved fast, and many amazing ecosystems have been built on it. All of these ecosystems aim to build and enrich a decentralized world, from the layer 1 ecosystem Ethereum, often called the father of all blockchain ecosystems, to layer 2 ecosystems.
This article will brief you on layer 2 ecosystems and introduce you to ZAP, a project on layer 2 ecosystems.
**Layer2 Ecosystems**
The overwhelming load on the layer 1 ecosystem led to the emergence of layer 2 ecosystems such as Optimism, Polygon, and Blast.
This intervention of layer 2 has made the decentralized world:
- Eco-friendly
- Faster transactions
- Lower fees
- Increased scalability
**ZAP On Blast**
**Blast**
Blast is a layer 2 ecosystem, the only Ethereum L2 with a native yield for ETH and stablecoins.
The baseline interest rate on existing L2s is 0%, so by default, the value of your assets depreciates over time. But on Blast, your balance compounds automatically and earns Blast rewards on top.
Blast is an EVM-compatible, optimistic rollup that raises the baseline yield for users and developers without changing the experience crypto natives expect.
I know it's quite complex; that's where ZAP comes in.
**ZAP**
ZAP is not concerned with your skill level—only your desire to learn. The project is guaranteeing there's something for everyone, unlike the source ecosystem, where it seems only developers and SMEs benefit.
ZAP is a community-driven token launch protocol built to alleviate current problems in the token launch space and provide value to both founders and investors.
**ZAP's Plan**
Token launch platforms face challenges in four categories:
- Token-gated launchpads require high capital
- Lottery-based launchpads rely on chance
- Permissionless launchpads lack controls
- Airdrops can be manipulated
These issues affect democratization, fair allocation, risk exposure, and value accrual in early-stage investing.
**Why ZAP**
With the introduction of the Blast chain, the token launch market is growing. Now, long-term participation in the ecosystem is encouraged by the network's native yield, which unlocks additional value.
ZAP combines the powerful features of Blast with a set of new raise mechanisms to provide more value to founders and investors and democratize early-stage investing.
**Conclusion**
ZAP goes beyond the limits that the layer 1 ecosystem has not been able to overcome over time.
This indeed tells us that in no time, with the fast pace of advancement in technology and the blockchain, we will be very close to a complete decentralized world with a fair economy. | uchemma001 |
1,865,224 | How to kill laziness or procrastination | Productivity is moving forward every day and not thinking of moving forward every day. In my... | 0 | 2024-05-25T23:12:50 | https://dev.to/saifullahusmani/how-to-kill-laziness-or-procrastination-2jpc | productivity | > Productivity is moving forward every day and not thinking of moving forward every day.
In my experience, I’ve had many days where I worked all day and then many days where I didn’t work at all.
Anything of passion becomes boring after a deadline gets placed on top of it. At that stage, it all comes down to discipline, consistency, and responsibility.
When I started to procrastinate a lot, what I did was I started my research on procrastination, mental health, and productivity. In that study, I found a few tips that have helped me overcome my urge to be lazy all day and do nothing.
## Tips I learned to overcome procrastination:
### 1. Make it easy to start the work:
What does it mean to make the work easy to start? It means that if you want to go to the gym, you don’t have to change your dress, wear the attire that works best in the gym environment, brush your hair, grab a water bottle, fill it up, put on your best shoes, get a towel, drive the best vehicle to the gym and enter the gym with a happy bouncy mood.
This is too much work and it is not an easy way to start exercising. What I learned from this tip is that the work should be so easy that doing it doesn’t require any prior preparation. For example, if I have to go to the gym, I never look at myself in the mirror or change my shoes or clothes. I get up from my PC and start walking towards the gym. No preparation, no nothing. So easy, right?
Now because of this attitude, I’ve never skipped gym in months and ultimately my physique is getting better and better.
### 2. Make a mental image of what to do:
Some people prefer writing down the to-do list for the coming day/task but I like to make a plan in my mind for what to do before I start the work. The work could be studying, freelancing, learning different skills, giving a lecture, making a tutorial, writing a blog post, going on a meeting with clients, going out with friends/family, etc.
Once I have a mental image of what to do and how to do the first step then all of a sudden it becomes easier to get started and with the first tip combined, I can just hop on to that task without any further hassle and overcome my laziness or procrastination.
### 3. Talk to a mentor:
The world is filled with all kinds of people and whatever you are doing the chances are someone else has done it already. That person who has done it already knows more about that field/thing than you. You can learn from that person, you can ask about what to avoid and what to do in order to reach where he/she has reached.
Once you find that person, always try to ask questions and learn from that person. That person is a blessing from Allah and that person will help you achieve something in 1 year that you would have achieved in 10 years otherwise.
A mentor can kill your procrastination by giving you motivation, discipline, and a reason to think big and think positive.
### 4. Have a bigger picture in mind:
Responsibility is the greatest discipline in the world. You must have a vision, a reason to get up in the morning, and a reason to work.
If you don’t have a burning desire, the chances of you losing interest are 100%.
_**Find your why!**_
| saifullahusmani |
1,865,222 | 志田 橄榄球 | A post by YOKO | 0 | 2024-05-25T22:57:16 | https://dev.to/loveuyeah/zhi-tian-gan-lan-qiu-2bng |



 | loveuyeah | |
1,865,221 | [Game of Purpose] Day 7 | Today I learned about Nanite system, how to use, debug and import it from store. I also learned how... | 27,434 | 2024-05-25T22:52:18 | https://dev.to/humberd/game-of-purpose-day-7-1opi | gamedev | Today I learned about Nanite system, how to use, debug and import it from store.
I also learned how to create blueprints. It's nice that there are `Cast To *` nodes which expose properties from other blueprints.

{% youtube https://youtu.be/_4ZCc38gWQQ %} | humberd |
1,865,219 | Notes on machine learning | Some notes on machine learning It is a system in which a machine can perform tasks... | 27,514 | 2024-05-25T22:48:41 | https://trinca.tornidor.com/it/blog/how-to-learn-machine-learning | machinelearning, google, llm, learning | ## Some notes on machine learning
It is a system in which a machine can perform tasks without explicit programming. The theoretical foundations have existed since the 1960s, but significant results have become possible (roughly from 2013 onwards) thanks to dedicated hardware, algorithms able to exploit its power, and well-structured, well-labeled datasets.
## A classification example
A program explicitly written to classify images would need to identify the object's characteristics explicitly. This can work if all the objects involved are reasonably simple and similar. But what happens if some features of the same object category show high variance?
### An approach without explicit instructions
We could feed the software (that is, the machine learning model) a suitably labeled dataset of image data, running an appropriate number of training and testing cycles on it. Another example could use linear regression to predict house prices based on their size. What these two examples have in common is studying the data to form a hypothesis. Adding more data will likely expose errors in the hypothesis, so we can measure the error gap and form an updated hypothesis with a smaller gap.
The machine has to analyze the dataset using some assumptions, e.g. a linear variable that can be represented as a straight line on a Cartesian plane. The machine's goal is to test every possible outcome and extract the one with the lowest error rate. This is simple with one linear variable, and much more complex with some modern machine learning models that have billions of variables, but the principle is the same: the machine starts from a random point (or parameter set) and moves towards the lowest error rate, cycle after cycle of training and testing.
Each training-and-testing step yields a measure (an "error function" or "loss function"), and repeating these steps is the "gradient descent" towards the solution.
There are various ways to improve the results, such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) to fine-tune the data parameters and [neural node dropout](https://machinelearningmastery.com/dropout-for-regularizing-deep-neural-networks/) to reduce overfitting (when the ML model can no longer generalize well on real-world data).
## Some machine learning categories
- [supervised learning](https://cloud.google.com/discover/what-is-supervised-learning): image classification, text and speech recognition, fraud detection
- [unsupervised learning](https://cloud.google.com/discover/what-is-unsupervised-learning): grouping data into clusters or spotting anomalies
- [reinforcement learning](https://www.mathworks.com/discovery/reinforcement-learning.html): the software finds the best solution for a given problem
## Resources to learn more
- [AI and machine learning products](https://cloud.google.com/products/ai/ml-comic-1)
- [Deep Learning with Python, Second Edition](https://www.manning.com/books/deep-learning-with-python-second-edition), by François Chollet
- [IBM: What is machine learning?](https://www.ibm.com/it-it/topics/machine-learning)
- [Machine learning, cos’è e come funziona questa branca dell’intelligenza artificiale](https://www.agendadigitale.eu/cultura-digitale/machine-learning-cose-e-come-funziona/)
- [Large language models, explained with a minimum of math and jargon](https://www.understandingai.org/p/large-language-models-explained-with)
| trincadev |
1,865,220 | How to learn machine learning | Some notes about machine learning It's a way in which a machine is able to perform tasks... | 27,514 | 2024-05-25T22:48:22 | https://trinca.tornidor.com/blog/how-to-learn-machine-learning | machinelearning, google, llm, learning | ## Some notes about machine learning
It's a way in which a machine is able to perform tasks without explicit programming. The theoretical bases have existed since the 1960s, but significant results are possible thanks to dedicated hardware, algorithms capable of exploiting that hardware's power, and well-structured datasets.
## A classification example
A program written explicitly to perform image classification would need to identify the characteristics of the object explicitly. This can work if all objects are reasonably simple and similar. But what if some features of the same object category have high variance?
### An approach without explicit instructions
We could submit a suitably labeled dataset of image data to the software (i.e. the machine learning model) by performing an appropriate number of training-and-testing cycles on it. Another example could use linear regression to predict house prices by their sizes. The common ground here is studying the data to form a hypothesis. Adding more data will probably expose errors within the hypothesis; then we can measure the error gap and form an updated hypothesis with a smaller error gap.
The machine has to analyze the dataset using some assumptions, e.g. a linear variable that can be represented as a straight line on a Cartesian plane. The machine's goal is to test every possible result and extract the one with the lowest error rate. That's simple with a linear variable, and much more complex with some modern machine learning models that have billions of variables, but the principle is the same: the machine starts from a random point (or parameter set) and moves towards the lowest error rate, cycle after cycle of training and testing.
Every train-and-test step gives a measure (an "error function" or "loss function"), and the repetition of these steps is the "gradient descent" towards the solution.
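To make the loop concrete, here is a minimal sketch of gradient descent for the one-variable linear case described above. The toy data, fixed learning rate, and epoch count are my own illustrative choices, not something from a specific library:

```typescript
// One-variable linear regression fitted by gradient descent on the mean
// squared error. Toy data, a fixed learning rate, and a fixed epoch count,
// all chosen only for illustration.
function fit(
  xs: number[],
  ys: number[],
  lr = 0.01,
  epochs = 5000
): { w: number; b: number } {
  let w = 0; // start from an arbitrary point in parameter space
  let b = 0;
  const n = xs.length;
  for (let e = 0; e < epochs; e++) {
    let gw = 0; // partial derivative of the loss with respect to w
    let gb = 0; // partial derivative of the loss with respect to b
    for (let i = 0; i < n; i++) {
      const err = w * xs[i] + b - ys[i]; // prediction error on sample i
      gw += (2 / n) * err * xs[i];
      gb += (2 / n) * err;
    }
    w -= lr * gw; // step against the gradient: the "descent"
    b -= lr * gb;
  }
  return { w, b };
}

// Data generated from y = 2x + 1; the fit should land close to w=2, b=1.
const { w, b } = fit([0, 1, 2, 3], [1, 3, 5, 7]);
console.log(w.toFixed(2), b.toFixed(2)); // → 2.00 1.00
```

With more variables the inner loop computes one partial derivative per parameter, but the structure (measure the error, step against the gradient, repeat) stays the same.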
There are various ways to improve results, such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) to fine-tune the data parameters and [neural node dropouts](https://machinelearningmastery.com/dropout-for-regularizing-deep-neural-networks/) to reduce overfitting (when the ML model can no longer generalize well on real-world data).
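The dropout trick mentioned above can be sketched in a few lines. This "inverted dropout" form (zero a unit with probability `rate`, scale the survivors) is a standard formulation; the function name is just illustrative:

```typescript
// Inverted dropout: during training, zero each activation with probability
// `rate` and scale the survivors by 1/(1 - rate), so the expected value of
// each unit is unchanged.
function dropout(activations: number[], rate: number): number[] {
  if (rate <= 0) return activations.slice(); // nothing to drop
  return activations.map(a => (Math.random() < rate ? 0 : a / (1 - rate)));
}

console.log(dropout([1, 2, 3, 4], 0.5)); // output is random; survivors are doubled
```

At inference time the layer is simply skipped, since the scaling during training already keeps the expected activations unchanged.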
## Some machine learning categories
- [supervised learning](https://cloud.google.com/discover/what-is-supervised-learning): image classification, speech/text recognition, fraud detection
- [unsupervised learning](https://cloud.google.com/discover/what-is-unsupervised-learning): grouping data into clusters or identifying anomalies
- [reinforcement learning](https://www.mathworks.com/discovery/reinforcement-learning.html): the software finds the best solution for a given problem
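As a toy illustration of the supervised case (my own example, not from the article's sources): a 1-nearest-neighbour classifier predicts the label of the closest labeled sample, so the "model" is just the labeled dataset itself:

```typescript
// Toy supervised learning: classify a point by the label of its nearest
// labeled example (1-nearest-neighbour). There is no training loop; the
// "model" is simply the labeled dataset.
type Sample = { features: number[]; label: string };

function classify(train: Sample[], point: number[]): string {
  let best = train[0];
  let bestDist = Infinity;
  for (const s of train) {
    // squared Euclidean distance is enough for picking the closest sample
    const d = s.features.reduce((acc, f, i) => acc + (f - point[i]) ** 2, 0);
    if (d < bestDist) {
      bestDist = d;
      best = s;
    }
  }
  return best.label;
}

const labeled: Sample[] = [
  { features: [1, 1], label: "cat" },
  { features: [9, 9], label: "dog" },
];
console.log(classify(labeled, [2, 2])); // → cat
```

Real classifiers generalize far better than this, but the shape is the same: labeled examples in, a predicted label out.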
## Sources to learn more
- [AI and machine learning products](https://cloud.google.com/products/ai/ml-comic-1)
- [Deep Learning with Python, Second Edition](https://www.manning.com/books/deep-learning-with-python-second-edition), by François Chollet
- [IBM: What is machine learning?](https://www.ibm.com/topics/machine-learning)
- [Machine learning, cos’è e come funziona questa branca dell’intelligenza artificiale](https://www.agendadigitale.eu/cultura-digitale/machine-learning-cose-e-come-funziona/)
- [Large language models, explained with a minimum of math and jargon](https://www.understandingai.org/p/large-language-models-explained-with)
| trincadev |
1,865,218 | Nginx WebServer on Ubuntu OS running via EC2 (AWS) | Imagine you have a favorite game you like to play on your computer, but sometimes your computer might... | 0 | 2024-05-25T22:36:29 | https://dev.to/olawaleoloye/nginx-webserver-on-ubuntu-os-running-via-ec2-aws-1l5g | nginx, ubuntu, ec2, aws | _Imagine you have a favorite game you like to play on your computer, but sometimes your computer might not be strong enough to run it well. Now, let's pretend there's a super strong computer far away that you can borrow to play your game. You can connect to this super strong computer from your own computer and play your game smoothly._
**Ubuntu** is like a special game or tool that people use on their computers to do a lot of important things. It's a _**type of software called an operating system**_, _just like how Windows or macOS helps you use your computer._
**EC2, which stands for Elastic Compute Cloud**, is like that super strong computer you can borrow. _It's part of a big service called Amazon Web Services (AWS) that has lots of these strong computers all over the world._
So, when we say "**Ubuntu running on EC2**,"_ it means we're using that special tool (Ubuntu) on one of those super strong computers (EC2) far away_. This lets people do their important work or run their programs even if their own computer isn't strong enough. It's like playing your favorite game on a super strong computer somewhere else!
**Launch aws portal**

**Search and Select EC2**

**On EC2 Dashboard, click instance**

**Give your instance a name and select the appropriate Image**

**Provide other parameters according to your need.**

**Create a Key Pair**
**Ensure that you have a key pair**


**To run an Nginx web server, we need to allow HTTP traffic**
**Then, launch the instance**

**Connect to Instance**

**Select Connect**



**Access your instance locally**
_Launch Powershell_



**Change privilege to root**
`sudo su`
**Install Nginx with the following commands**
`apt update && apt install nginx -y`
**Verify Nginx is installed**
`nginx --version`

**Validate that Nginx is running by opening the instance's public IP in a browser**

| olawaleoloye |
1,865,213 | I give up on AngularFire | I just worked really hard to rip AngularFire (@angular/fire) out of my codebase. If you don't know,... | 0 | 2024-05-25T21:56:09 | https://fluin.io/blog/i-gave-up-on-angularfire | angular, v18, webdev, javascript | I just worked really hard to rip AngularFire (`@angular/fire`) out of my codebase.
If you don't know, AngularFire used to be awesome. This was best around 2019-2020, but after being under-served and failing to fully finish several migrations (the Firebase SDK now uses a modular approach), it's kind of a mess.
I tried and failed to update from v6 up to v16, v17, and beyond, because I wanted to use Angular 18. This didn't work (`ng update` would refuse to update to v16 because it forces only updating one major at a time, which is a bit silly), so I ended up deciding to rip it out.
## Steps to remove
* Remove `@angular/fire` from your package.json
* Swap out the `AngularFireModule` imports for your own keys:
```typescript
providers: [
  { provide: BUCKET, useValue: '...' },
  { provide: FIREBASE_APP, useFactory: () => initializeApp({...}) },
]
```
* Create a service that calls `initializeApp` from `firebase/app` and saves your `FirebaseApp`.
```typescript
constructor(@Inject(FIREBASE_APP) private fbApp: FirebaseApp) {}
```
* Create a service (or reuse the same one) for each of the Firebase modules you want to use, and then keep a persistent handle on each service you need.
```typescript
db = getDatabase(this.fbApp);
storage = getStorage(this.fbApp);
auth = getAuth(this.fbApp);
```
Here's [the commit on the fluin.io repo](https://github.com/StephenFluin/fluin.io/commit/2d14ebb82fd47ac0b217267847ed48ca3c379cc0). | stephenfluin |
1,865,211 | Best Hosting Options for Developers in 2024 | Hosting is a crucial factor to consider even before you start developing your application. Having... | 0 | 2024-05-25T21:42:21 | https://sotergreco.com/best-hosting-options-for-developers-in-2024 | hosting | Hosting is a crucial factor to consider even before you start developing your application. Having used most platforms, I am going to give you an honest opinion on the best choices for Indie Hackers and small to medium projects.
For large applications with big teams, you need to reconsider many aspects of hosting. However, for small creators, there are numerous viable and affordable options.
I will cover both frontend and backend hosting solutions. Let's dive into the various use cases you might face as a developer and identify the best hosting platform for each.
## Digital Ocean
We start with [Digital Ocean](https://www.digitalocean.com/), one of the most well-known hosting providers that can kickstart most basic tasks. While not distinguished for exceptional reliability, it is reliable enough for small projects.
**Downtime**
Its downtime is minimal. If you are working with CMSs like WordPress or Joomla, it has a [Marketplace](https://marketplace.digitalocean.com/) with many pre-built solutions.
**Pricing**
Digital Ocean has transparent pricing with no hidden fees, so you know exactly what you are paying.
**Kubernetes**
Digital Ocean is a good choice for Kubernetes because of its user-friendly interface and competitive pricing per cluster.
**Use Cases**
I recommend Digital Ocean for side projects or small applications. It is also a good option for Kubernetes, offering a lot of flexibility. However, I would not consider it for larger projects.
## Hetzner
[Hetzner](https://www.hetzner.com/) is more of a marketplace than a traditional hosting platform. Still, it deserves mention for its affordable Bare Metal Servers.
On their [server auction page](https://www.hetzner.com/sb/), you can find powerful systems for just over $30/month, whereas similar systems on Digital Ocean would cost over $300/month.
**Reliability**
Hetzner offers bare metal servers, so you need to handle some basic uptime systems. They provide backup tools that are adequate for most use cases.
**Pricing**
I once used Hetzner and had a machine for a couple of months. After my billing failed a second time, I received a letter threatening legal action for an unpaid $35 bill. This is understandable, but other providers like Digital Ocean or Vultr usually give multiple warnings.
Despite this, Hetzner has clear rules and no hidden fees, and I would choose them again.
**Use Cases**
Hetzner is ideal for those who need servers for multiple projects and want to host many side projects, mostly for testing purposes. Note that you need substantial server knowledge as you have to configure everything yourself.
## Heroku
Heroku is excellent for deploying Java, Kotlin, Node, or frontend projects. They offer many addons like databases and caching providers such as Redis for an extra charge.
**Pricing**
Heroku might not be the cheapest option because it uses AWS, but the convenience it offers is worth the cost.
**Reliability**
Heroku is very reliable, thanks to AWS. Expect little to no downtime.
**Use Cases**
I recommend Heroku for indie projects, especially for Java or Kotlin. Deploying a Spring Boot API on Heroku is easy, and the built-in CLI simplifies the process. It might not offer extensive functionalities, but for getting your MVP out quickly, it is an excellent option.
## Netlify
Netlify is a popular choice for frontend developers, particularly those working with static sites and JAMstack architecture. It offers seamless deployment and integrates well with Git, allowing for continuous deployment from repositories like GitHub, GitLab, and Bitbucket.
**Pricing**
Netlify offers a generous free tier, perfect for personal projects and small sites. Their pricing scales reasonably for advanced features and higher usage limits.
**Reliability**
Netlify is known for its reliability and speed, using a global CDN to ensure quick load times and minimal downtime.
**Use Cases**
I recommend Netlify for static sites, single-page applications, and front-end projects that benefit from its integration with modern development workflows. It's particularly suited for projects prioritizing speed and ease of deployment.
## Vultr
[Vultr](https://www.vultr.com/) is known for its high-performance cloud infrastructure. It offers various services, including compute instances, block storage, and load balancers, making it suitable for different applications.
**Pricing**
Vultr provides competitive pricing with a pay-as-you-go model, ensuring you only pay for what you use. Their pricing is straightforward, with no hidden fees, and they offer various plans to fit different budgets and needs.
**Reliability**
Vultr boasts a robust infrastructure with a global network of data centers, ensuring high availability and low latency. They claim to provide 100% uptime, but I have experienced some downtime with my K8S cluster.
**Use Cases**
I recommend Vultr for developers seeking a cost-effective yet powerful hosting solution. It's ideal for small to medium-sized applications, development environments, and even production workloads due to its flexibility and reliability. However, working with K8S might present unexpected issues compared to AWS or Google Cloud.
## AWS
[Amazon Web Services](https://aws.amazon.com/) is a leading cloud platform offering a vast array of services, from compute and storage to machine learning and IoT. AWS is known for its scalability, handling anything from small projects to enterprise-level applications.
**Pricing**
AWS uses a pay-as-you-go pricing model, which can be cost-effective for small-scale projects but might become expensive as your usage scales. They offer a Free Tier for limited access to many services, great for starting.
The pricing is not very clear, and unexpected costs due to DDoS attacks are common, so setting up limits is crucial.
**Reliability**
AWS is renowned for its reliability and global presence. With multiple availability zones and a strong focus on redundancy, AWS ensures minimal downtime and high availability.
**Use Cases**
I recommend AWS for any project requiring scalability, flexibility, and a wide range of services. It's suitable for everything from startups to large enterprises, offering the tools and infrastructure needed to support complex and high-traffic applications.
## Final Words
In conclusion, choosing the right hosting platform depends on your project's specific needs and scale. For indie hackers and small to medium projects, platforms like Digital Ocean, Hetzner, Heroku, Netlify, and Vultr offer cost-effective and reliable solutions.
Each platform has its strengths, whether it's ease of deployment, pricing, or flexibility. For larger, more complex applications, AWS provides the scalability and comprehensive services needed to support enterprise-level projects.
Thanks for reading, and I hope you found this article helpful. If you have any questions, feel free to email me at [kourouklis@pm.me](mailto:kourouklis@pm.me), and I will respond.
You can also keep up with my latest updates by checking out my X here: [x.com/sotergreco](http://x.com/sotergreco) | sotergreco |
1,865,171 | Typing env variables on typescript | As a backend developer, I've been working with typescript, on some projects we are required to use... | 0 | 2024-05-25T21:20:53 | https://dev.to/wmartzh/typing-env-variables-on-typescript-5ee4 |
As a backend developer, I've been working with TypeScript. On some projects we are required to use different types of environment files. The most common is the classic `.env` file, but there are some specific cases where we need other types because the tools that we use require YAML, JSON, or other file formats. We already have great tools like [dotenv-yaml - npm](https://www.npmjs.com/package/dotenv-yaml) and [dotenv - npm](https://www.npmjs.com/package/dotenv). But something came to my mind: what if we had a package that can support any env file 🤔...
So I gave it a try and created [ts-env](https://github.com/wmartzh/ts-env), which can manage different types of env files, currently supporting `YAML`, `TOML`, and `JSON`.

## Typing
It also supports typing `process.env`, which helps a lot in knowing which variables we have in the env file.

So we can have a lint suggestion on the editor

## Conclusion
I don't know if this is a real problem; if it's not, at least it was interesting to build, so let me know what you think. | wmartzh |
1,864,357 | CodeBehind Framework Tutorial Series has Started | The content you are viewing is the supporting content of the training set; CodeBehind is free and... | 27,500 | 2024-05-25T21:19:50 | https://dev.to/elanatframework/codebehind-framework-tutorial-series-has-started-k8k | tutorial, dotnet, beginners, backend | The content you are viewing is the supporting content of the training set; [CodeBehind](https://github.com/elanatframework/Code_behind) is a free, open-source back-end framework licensed under GPLv3. The CodeBehind framework is a revolutionary back-end framework that offers a modern approach to developing dynamic web applications.
## CodeBehind Framework Tutorial Series

By referring to the link below, you can access the topics of the CodeBehind framework training series.
[CodeBehind Framework Tutorial Series](https://dev.to/elanatframework/codebehind-framework-tutorial-series-2571)
> We invite the .NET community to share the above link on the web and on social networks.
CodeBehind is a powerful and versatile framework that offers a new perspective on web development, allowing for faster, simpler, and more modular project development compared to traditional ASP.NET Core MVC and Razor Pages frameworks.
The CodeBehind framework can be developed as MVC, Model-View, Controller-View, or View only; there is no need to follow the MVC pattern.

The architecture of the CodeBehind framework is such that the routing is removed and the Controller and Model are determined as attributes of the page in the View; In this architecture, all requests first lead to the View path, and then new instances of the Controller and Model classes are created according to the View attributes.
This tutorial is based on Razor syntax (`@Razor`), but there will be brief references to standard syntax (`<%=Standard%>`). In this course we will use VS Code, and during the training we will also discuss the structural weaknesses of the ASP.NET Core MVC and Razor Pages frameworks.
Please note that the CodeBehind framework inherits all the benefits of .NET Core; therefore, in this tutorial, we do not want to explain .NET Core in detail. In this training, we will fully teach the CodeBehind framework, and during the training, you will learn the things that are necessary in web programming in .NET Core.
Below is the list of things that are trending in web and .NET development. There is no difference in applying these between the CodeBehind framework and other default ASP.NET Core web frameworks. Therefore, the following will not be covered in this tutorial series:
- Custom Validation and Model Validation
- Entity Framework
- Linq and Dapper
- GraphQL
- WebSockets and SignalR
- Web API and RESTful API
- gRPC
- Azure Functions
- AWS Lambda
- Dependency Injection
- Unit Testing and Swagger
- Exception handling
- Logging
- Container and Docker
- Middleware
- Tasks and Asynchrony
- OWASP
- AI
- Performance evaluation
- Nlog
- Redis
- Git
**What is the purpose of this tutorial series?**
- This course will help you learn to work with the CodeBehind framework in ASP.NET Core and create new web projects in the shortest possible time without paying.
**Who is this tutorial series suitable for?**
- Anyone who has learned the basics of a programming language and knows HTML
- .NET developers and C# programming language lovers
- Bloggers and people who produce content in the field of programming and software
- Curious developers of back-end frameworks outside the .NET ecosystem (Django, Laravel, Spring Boot, etc.)
> Note: Professional .NET developers are invited to view this course and explore the similarities and differences of the CodeBehind framework with Microsoft's default .NET Core web frameworks (ASP.NET Core MVC, Razor Pages, and Blazor).
Please note that until the end of this tutorial series, several tutorials will be added in a row on this platform, which may be annoying for some people. We apologize in advance to the member community of this platform.
Each of the lessons of this tutorial series may be edited to reach the optimal level in terms of quality.
### Related links
CodeBehind on GitHub:
https://github.com/elanatframework/Code_behind
CodeBehind in NuGet:
https://www.nuget.org/packages/CodeBehind/
CodeBehind page:
https://elanat.net/page_content/code_behind | elanatframework |
1,841,945 | CodeBehind Framework Tutorial Series | The content you are viewing is a tutorial series that deals with the new back-end framework called... | 27,500 | 2024-05-25T21:19:35 | https://dev.to/elanatframework/codebehind-framework-tutorial-series-2571 | tutorial, dotnet, beginners, backend | The content you are viewing is a tutorial series that deals with the new back-end framework called CodeBehind.
[CodeBehind](https://github.com/elanatframework/Code_behind) was developed by the [Elanat team](https://elanat.net) and is a strong competitor to Microsoft's default .NET Core web frameworks (ASP.NET Core MVC, Razor Pages, and Blazor).

**CodeBehind is .NET Diamond!**
Development of systems using the CodeBehind framework, usually with the C# programming language, is done under ASP.NET Core.
The link below is the address of the supporting content of this course, and the explanations related to this tutorial series are included in it.
[Supporting content of the tutorial series](https://dev.to/elanatframework/codebehind-framework-tutorial-series-has-started-k8k)
## List of trainings:
1. [Introduction](https://dev.to/elanatframework/introduction-of-codebehind-framework-3oco)
- What is CodeBehind Framework
- Why was CodeBehind created
- Advantages of CodeBehind Framework
- Unique MVC architecture in CodeBehind
- Knowledge prerequisites
- Minimum required hardware and software
2. [Configuring the CodeBehind framework in the ASP.NET Core project](https://dev.to/elanatframework/configuring-the-codebehind-framework-in-the-aspnet-core-project-4a0p)
- Installation of Visual Studio Code and necessary items
- Creating a new .NET project
- Add CodeBehind Framework package
- Configure the CodeBehind framework in the Program.cs class
- Run project
3. [View in CodeBehind](https://dev.to/elanatframework/view-in-codebehind-framework-5h8g)
- What is View
- CodeBehind View extension
- CodeBehind View syntax
- Create only View
- HttpContext in View
4. [Add Model in View](https://dev.to/elanatframework/codebehind-framework-add-model-in-view-3i74)
- Create an application with only View and Model
- CodeBehindModel abstract class
- Using CodeBehindConstructor method
- View class without abstract
- Model-View in Visual Studio Code
- Why use ViewData
5. [Controller in CodeBehind](https://dev.to/elanatframework/learning-mvc-once-and-for-all-2d9d)
- Why do some people not understand the concept of MVC?
- How to teach MVC?
- Why do we need a Controller?
- MVC example in CodeBehind
- CodeBehindController abstract class
- Using CodeBehindConstructor method
- MVC in Visual Studio Code
- Add a new View
- Change View in Controller
6. [Example of displaying information with MVC in CodeBehind](https://dev.to/elanatframework/mvc-example-display-information-based-on-url-2309)
- Activation of static files in ASP.NET Core
- Add static file
- Series data
- Series Models
- Series Views
- Prevent access to Default.aspx
- Series Controller
7. [Layout and its benefits](https://dev.to/elanatframework/why-use-layout-43ih)
- Why should we use Layouts?
- Use Layout in CodeBehind Framework
- Add Layout in series project
8. Not mandatory in MVC
9. Applying new View and Model by Controller class
10. Adding a new View without changing the Controller and Model
11. Calling the View in the current View
12. View design with Razor syntax
13. View design with standard syntax
14. Modular structure in CodeBehind
15. Using constructor method
16. HtmlData classes
17. Download file
18. Dynamic Model
19. Working with templates in View
20. Return template
21. Interesting ideas for working with template and return template
22. Transfer template block data with ViewData
23. Send data by ViewData
24. AJAX example in CodeBehind
25. Activate the section feature
26. Send information by submitting form
27. Loading a View page in another View page
28. Error handling
29. Getting to know the settings of the options file
30. Manage urls and remove the aspx extension
31. CodeBehind framework data
32. Namespace and dll for CodeBehind view class
33. Error detection
34. Route configuration
35. CodeBehind configuration alongside Razor Pages and ASP.NET Core MVC
36. Advanced configuration
37. Modularity in the default mode
38. Modularity in the configuration of the controller in the route
39. Separate compilation
40. Applying changes to views without the need to recompile
41. How is the list of views finally made?
42. Cache in CodeBehind
43. Work with database
44. CodeBehind access system
45. Authentication and authorization in CodeBehind
46. Publish project and deploy on web
> Note: This list will be updated over time. Every time a new tutorial is added, its link will be available in the above list.
### Copyright
All trainings in this tutorial series belong to the Elanat team and have no copyright; therefore, the republishing of this tutorial series is free from our point of view; please also check the copyright provisions on this platform before republishing the content.
### Related links
CodeBehind on GitHub:
https://github.com/elanatframework/Code_behind
CodeBehind in NuGet:
https://www.nuget.org/packages/CodeBehind/
CodeBehind page:
https://elanat.net/page_content/code_behind | elanatframework |
1,865,176 | An alternative to technical tests | As a software developer with over a decade of commercial experience, I've found myself on both sides... | 0 | 2024-05-25T21:18:32 | https://dev.to/scottharrisondev/an-alternative-to-technical-tests-1e4m | career, softwaredevelopment, interview | As a software developer with over a decade of commercial experience, I've found myself on both sides of the hiring process over the years. Technical tests are commonplace these days, yet they are a controversial topic. At best, they can be a time-consuming endeavour for all parties involved, and at worst, they have been used to exploit potential candidates for unpaid work. I'm here to propose an alternative approach which aims to reduce the barrier to entry for both candidate and employer, whilst getting a more accurate representation of the candidate's proficiency.
## The problem with the technical test
There are various issues with the tech test as we know it. I’ve picked out a few of the main ones below.
### Bias
The "suggested" time limit on most asynchronous tech tests is often flexible, not a strict rule. Generally speaking, this means there is a bias towards candidates who have more time to spare. These might be unemployed people, students, or younger people who might have fewer responsibilities, for example. This is not ideal when you want to appeal to the broadest set of potential candidates possible. For instance, a parent in their 40s with two young children could be an excellent fit for a senior engineer position, but they might not be able to spare the 3-4 hours a tech test might take.
### Time
As discussed above, tech tests can be quite a time sink for candidates, but they can also consume significant amounts of time from the hiring parties too. Checking out multiple codebases, potentially across various platforms (Github, Gitlab, maybe even the odd zip file!), familiarising yourself with the build tooling, and debugging whether an issue is with your machine or the candidate's implementation, all add up. And that's all before you've even got to the stage of reviewing their actual code.
### Accuracy
With the recent advancements in generative AI, the standard asynchronous tech test is less reliable than ever for producing an accurate representation of a candidate's proficiency. AI tools such as Chat GPT, Copilot, and many more are allowing more people than ever to enter the world of software development. AI-generated code is not always easily spotted, especially in a smaller codebase like many tech tests. This opens up the potential for a candidate to pass a tech test without possessing the necessary skills for the role.
While AI has certainly intensified this issue, it has actually been a problem for as long as tech tests have been around. In the same vein as using generative AI to complete a tech test, a candidate could just as well allow a friend, or hire someone more skilled than themselves to complete the tech test. Even excessive use of Stack Overflow and copy-pasting could lead to a passing tech test without a true understanding of the underlying concepts.
Of course, a well-rounded hiring process would include a follow-up review and discussion with the candidate about their approach, which would hopefully flush out the most egregious deception attempts. Ideally, the process wouldn't include this potential waste of time at all.
## A potential solution
My alternative to the conventional tech test is a process that every developer is already familiar with: the humble code review. The idea is that as part of the standard interview process, you can allow the candidate 20-30 minutes to complete a code review on a pre-prepared pull request in the source control tool of your choice. Then, meet back with the candidate to discuss their review comments, or even let them do it live with you and discuss as they progress through. You could even have multiple pull requests for different roles or levels. For example, a relatively simple pull request which has syntax errors, whitespace inconsistencies, variable name typos, unnecessary repetition, etc. These would all be great opportunities for a more junior developer to pick out improvements. A senior-level pull request might include more in-depth concepts such as ensuring that functions are pure and testable, components are appropriately broken down, the correct types are being used, and avoiding using loose typing or assertions, for example.
This approach respects the time of both the employer and candidate. It requires similar time commitment from both parties and can be completed synchronously. This closes the feedback loop and streamlines the hiring process, eliminating the wait for tech test results from candidates.
The risk of generative AI, copy-pasting, or outside help is significantly reduced, if not completely removed. This is due to the shorter time expectation and synchronous process, but primarily because it requires the candidate to understand the code and suggest improvements, rather than simply adding their own code to meet acceptance criteria.
The post-code review interview isn't all that different from the standard post-tech test interview, as the concepts are broadly the same. You can ask the candidate to explain their thought process and probe for any missed areas to determine if they don't know the topic or simply overlooked it during the interview.
## Conclusion
Like much of software development, there are many solutions to the same problem. The concept proposed in this post presents an alternative to the standard tech test. If your current hiring process is time-consuming and doesn't always give the results you want, then perhaps it's worth trying the code review approach. It could lead to a broader hiring pool and a more accessible, well-rounded process for everyone involved. | scottharrisondev |
1,865,173 | Code is for Humans not for Computers | When I started learning programming the first rule I learned was "if it works don't touch it "Then I... | 0 | 2024-05-25T21:11:34 | https://dev.to/ikbalarslan/code-is-for-humans-not-for-computers-2ohd | programming |

When I started learning programming, the first rule I learned was "_if it works, don't touch it_." Then I realized that 70% of our time spent coding is spent reading code. That first rule looks like it has some issues.
A couple of days ago I watched a presentation related to this topic, and its content was the opposite of what the first rule says. Here is a brief overview.
**If you don't know why your code works, you have no hope of fixing it when it breaks.**
When you don't understand how the code works but need to fix it, an idea will come: _it'd be faster if I just rewrote it_. Rewriting will fix the immediate problem, but when the next bug appears in a code base you still don't understand, you will need to rewrite it again and again.
This slows down development progress; because of that, many code bases end up being rewritten hundreds of times. If your code has to be rewritten to be fixed, improved, or extended, you failed. The only way to ensure your code survives is to make sure it's readable.
The problem is that we write our code primarily for the computer, not for humans. Instead, code should communicate ideas to other people. Readability directly impacts your ability, and that of everyone else, to do their job.
**Readability isn't just a good idea or nice to have. it's the whole point.**
The one thing we will always be better at than the computer is empathetic communication with other people.
I am leaving the link for the presentation [here](https://frontendmasters.com/teachers/kyle-simpson/code-is-for-humans/) it is worth checking it out. | ikbalarslan |
1,865,172 | COOL CMD Features | Hi, This post is about some cool things you can do with curl in cmd. First you have to make sure you... | 0 | 2024-05-25T20:52:23 | https://dev.to/rusandu_dewm_galhena/cool-cmd-features-25j9 | development, coding | Hi,
This post is about some cool things you can do with curl in cmd.
First, make sure you're on Windows 10 or above, as curl may not be available on older versions of Windows.
1. Rickroll in curl: `curl ASCII.live/can-you-hear-me`
2. Dancing parrot: `curl parrot.live`
Hope this is fun!!!!! | rusandu_dewm_galhena |
1,865,169 | Apache Spark 101 | In order to understand Spark let's remember what was the scenario before its creation. A couple of... | 0 | 2024-05-25T20:47:37 | https://dev.to/rubnsbarbosa/apache-spark-101-2p68 | apache, spark, scala, dataengineering | In order to understand Spark let's remember what was the scenario before its creation. A couple of years ago computers became faster every year through processor speed increases. This trend in hardware stopped around 2005 due to hard limits in heat dissipation. So, hardware engineers stopped making individual processors faster, and started adding **parallel CPU cores all running at the same speed**. As a result of this change, applications needed to be modified to add parallelism in order to run faster.
**Google** wanted to run giant computations on high volumes of data across large clusters, because they were creating indexes of all the content of the web in order to identify the most important pages. So, they **designed MapReduce, a parallel data processing framework**, which enabled Google to index the web.
At that time, Hadoop MapReduce was the dominant parallel programming engine for clusters of thousands of nodes. So, why was Spark created? Well, the **MapReduce engine made it challenging and inefficient to build large applications** that needed to chain multiple MapReduce jobs together, which **caused a lot of reading and writing to disk**.
To address this issue, the **Spark** team first designed an API based on functional programming that could express multistep applications. The team then implemented this API over a new engine that **could perform efficient, in-memory data sharing across computation steps**.
### What is Apache Spark?
Apache Spark is an open-source unified computing engine and a set of libraries for parallel data processing on computer clusters.
**Spark is a fast engine for large-scale data processing**. Basically, the idea is that we write code describing how we want to transform a huge amount of data, and Spark figures out how to distribute that work across an entire cluster of computers, i.e., the driver sends tasks to workers, which process them in parallel. Apache Spark takes a massive data set and distributes the processing across an entire set of computers that work together in parallel at the same time.
In a nutshell Spark can execute tasks on data across a cluster of computers.
NOTE: Spark itself is written in Scala and runs on the Java Virtual Machine (JVM). Therefore, to run Spark on either your laptop or a cluster, you need a Java installation.
### Architecture
Spark application architecture at high level

Spark architecture consists of driver process, executors, cluster manager, and worker nodes. Apache Spark follows a master and worker architecture; it has a single master and any number of workers.
There are some key components under the hood, such as: Driver Program, Cluster Manager, Task, Partitions, Executors, and Worker nodes.
### Spark APIs
When working with Spark, we will come across different APIs
1. RDD (Resilient Distributed Datasets) API
2. DataFrame API
3. Dataset API
4. SQL API
5. Structured Streaming API
#### RDDs
* RDDs are distributed collections of objects that can be processed in parallel;
* RDDs support two types of operations: transformations (which produce a new RDD) and actions (which return a value to the driver program after running a computation on the dataset);
* RDDs provides low-level control over data flow, data processing/operations;
* RDDs are fault tolerant, automatically recovers lost data due to node failures using lineage information. (Data lineage is the process of tracking the flow of data over time);
* RDDs don’t infer the schema of the data we need to specify it.
RDD Scala example:
```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession
val spark: SparkSession = SparkSession
.builder()
.master("local[*]")
.appName("rdd")
.getOrCreate()
// I wanna square everything
val rdd = spark.sparkContext.parallelize(List(1,2,3,4))
// we are creating a new RDD called rddSquares
val rddSquares = rdd.map(x => x * x)
println(rddSquares.collect().mkString(", "))
```
`res = 1, 4, 9, 16`
The beauty of this example is that it could be distributed. So, if the RDD was really massive, it could actually split that processing up and handle that squaring in different chunks of that RDD on different nodes within our cluster, and send the result back to your driver script and get the final answers that we want.
Another example:
```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession
val spark: SparkSession = SparkSession
.builder()
.master("local[*]")
.appName("rdd")
.getOrCreate()
val rddNums: RDD[Int] = spark.sparkContext.parallelize(List(1,2,3))
val rddCollect: Array[Int] = rddNums.collect()
println("Action: RDD converted to Array[Int]")
```
Let's talk about the `rdd.collect()` method. In Apache Spark it is a powerful but potentially problematic operation: it retrieves the entire `rdd` from the distributed environment back to the local driver program. Because `collect()` requires the full dataset to fit in memory, it carries significant risks and potential issues, especially when dealing with large datasets.
Issues with `rdd.collect()`
**memory overload**, because it transfers all data from the distributed nodes to the driver node. If the dataset is large, this can cause the **driver program to run out of memory** and crash, because it tries to fit the entire dataset into the limited memory of the driver node. Imagine calling `rdd.collect()` on terabytes of data: it will try to bring all of that data into the memory of a single machine (the driver), which is often impossible, and in that scenario the job will almost certainly fail.
**network bottleneck**, due to transferring large amounts of data over the network from the worker nodes to the driver node. This can lead to slow performance of the Spark job.
**reduced parallelism**: one of the strengths of Spark is its ability to process data in parallel across a cluster, and using `collect()` negates this advantage by aggregating all the data onto a single node, reducing the benefits of distributed processing.
**Rules of thumb**: avoid `collect()` as much as possible; its use should be approached with caution. Instead of collecting the entire dataset, prefer Spark actions such as `take(n)`, `aggregate()`, or `reduce()` to perform computations on the data directly within the distributed environment. Also, persist intermediate results in memory or on disk using `persist()` or `cache()`.
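As a quick sketch of these guidelines (reusing the same local `SparkSession` pattern as the earlier examples; the dataset and variable names are made up for illustration):

```scala
import org.apache.spark.sql.SparkSession

val spark: SparkSession = SparkSession
  .builder()
  .master("local[*]")
  .appName("collect-alternatives")
  .getOrCreate()

val rdd = spark.sparkContext.parallelize(1 to 1000)

// Risky on large data: pulls every element into the driver's memory.
// val everything = rdd.collect()

// Safer: bring back only a bounded number of elements...
val firstTen = rdd.take(10)

// ...or compute the result inside the cluster and return a single value.
val total = rdd.reduce(_ + _)

// Cache the RDD when several actions will reuse it, avoiding recomputation.
rdd.cache()
```

Here only `firstTen` (ten elements) and `total` (a single value) ever travel back to the driver, no matter how large the source RDD is.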
#### DataFrame
* DataFrames is a distributed collection of rows under named columns (similar to a table in a relational database);
* Built on top of RDDs it provides a higher-level abstraction for structured data;
* Simplifies data manipulation with a high-level API;
* Easily integrates with various data sources like JSON, CSV, Parquet, etc;
* It does not provide compile-time type safety, so the user is limited when the structure of the data is not known.
DataFrame makes easier to perform complex data processing tasks
```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
// Initialize SparkSession
val spark: SparkSession = SparkSession
.builder()
.master("local[*]")
.appName("dataframe")
.getOrCreate()
// Create DataFrame from CSV file
val filePath = "path/to/your/csvfile.csv"
val df = spark.read
.option("header", "true")
.option("inferSchema", "true")
.csv(filePath)
// Show the first 5 rows
df.show(5)
```
#### Dataset
* Datasets are a distributed collection of data, combining the best features of RDDs and DataFrames;
* A Dataset is a strongly-typed, immutable collection of objects that are mapped to a relational schema;
* Ensures compile-time type safety and supports object-oriented programming paradigms;
* The main disadvantage of datasets is that they require typecasting into strings;
* We can use it when complex transformations on structured data where compile-time type checking is beneficial.
```scala
import org.apache.spark.sql.{Dataset, SparkSession}
// Define the schema of our data
case class Client(name: String, age: Int, city: String)
// Initialize SparkSession
val spark: SparkSession = SparkSession
.builder()
.master("local[*]")
.appName("dataset")
.getOrCreate()
import spark.implicits._
// Create Dataset from a sequence of case class instances
val data = Seq(
Client("John", 30, "München"),
Client("Jane", 25, "Berlin"),
Client("Mike", 35, "Frankfurt"),
Client("Sara", 28, "Dachau")
)
val ds: Dataset[Client] = spark.createDataset(data)
// Show the content of the Dataset
ds.show()
```
Using Datasets, we can benefit from the best features of RDDs and DataFrames. Such as type safety and object-oriented programming interface of RDDs; and the optimizations execution, ease of use due to a higher level of abstraction from DataFrames for working with structured data in Spark.
#### SQL (via Spark SQL)
* Allows users to run SQL queries directly on DataFrames or Datasets;
* Provides a way to query data using standard SQL syntax;
* Uses standard SQL, which is familiar to many data professionals;
* Queries return DataFrames, enabling further processing using the DataFrame API;
* Ad-hoc querying and data exploration.
```scala
import org.apache.spark.sql.SparkSession
// Initialize SparkSession
val spark: SparkSession = SparkSession
.builder()
.master("local[*]")
.appName("sql")
.getOrCreate()
// Create DataFrame from CSV file
val filePath = "path/to/your/csvfile.csv"
val df = spark.read
.option("header", "true")
.option("inferSchema", "true")
.csv(filePath)
// Register the DF as a temp SQL view
df.createOrReplaceTempView("clients")
// Execute SQL queries
val allRowsDF = spark.sql("SELECT * FROM clients")
allRowsDF.show()
```
Using Spark SQL with Scala allows you to execute SQL queries on your data
#### Structured Streaming
* Built on the Spark SQL engine, it enables the same DataFrame and Dataset API to be used for stream processing;
* Uses the same API for batch and streaming data, simplifying the development process;
* Easy to use due to High-level abstraction for defining streaming computations;
* Real-time data processing and analytics;
* Stream processing applications that require the same APIs and optimizations as batch processing.
```scala
import org.apache.spark.sql.SparkSession
// Initialize SparkSession
val spark: SparkSession = SparkSession
.builder()
.master("local[*]")
.appName("StructuredStream")
.getOrCreate()
val kafkaStream = spark.readStream
.format("kafka")
.option("kafka.bootstrap.servers", "localhost:9092")
.option("subscribe", "kafka_topic_name")
.option("startingOffsets", "earliest")
.load()
val query = kafkaStream
.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
.writeStream
.outputMode("append")
.format("console")
.start()
query.awaitTermination()
```
In this example, the code reads data from the Kafka topic. Then the key and value are written to the console in the output mode append. The `awaitTermination` method is called to start the streaming query and wait for it to terminate.
Structured Streaming in Spark is a scalable and fault-tolerant stream processing engine built on the Spark SQL engine. It allows you to work with streaming data in the same way you would work with batch data.
### Why should we use Spark?
* Spark can run programs up to 100 times faster than Hadoop MapReduce;
* It offers fast processing speeds through in-memory caching and in-memory data processing;
* Spark is a very mature technology, and it’s been out for a while so it’s reliable at this point;
* Spark is not that hard to learn, and applications can be implemented in a variety of programming languages like Scala, Java, and Python;
* Spark puts together powerful libraries.
That's all folks :) | rubnsbarbosa |
1,865,167 | A Comprehensive Guide to Word Analysis, Character Counters, and Productivity Tools | In the digital age, where communication is predominantly text-based, the art of effective writing has... | 0 | 2024-05-25T20:36:18 | https://dev.to/countingtools/a-comprehensive-guide-to-word-analysis-character-counters-and-productivity-tools-178m | calculators, wordcounter, tools, webdev | In the digital age, where communication is predominantly text-based, the art of effective writing has never been more critical. Whether you're a seasoned writer, a budding blogger, or a social media influencer, the ability to craft compelling and impactful content is a valuable skill. Fortunately, the advent of technology has brought forth a myriad of tools and resources to assist writers in honing their craft and optimizing their output.
## Delving into Word Analysis Tools
[Word analysis tools](https://countingtools.com/) are the cornerstone of effective writing. These tools provide writers with valuable insights into various aspects of their writing, allowing them to refine their style and improve their communication skills. From basic word counters to more advanced features like sentence structure analysis and keyword density tracking, word analysis tools offer a comprehensive suite of functionalities to enhance the quality of your writing.
Character counters are another essential tool in the writer's toolkit. In a world where brevity is often prized, keeping track of character limits is essential for crafting concise and impactful messages. Whether you're composing a tweet, drafting a headline, or writing an email subject line, character counters ensure your message remains within the prescribed limits without sacrificing clarity or meaning.
## Optimizing Content Creation with Productivity Tools
In addition to word analysis and character counting tools, writers can also benefit from a variety of productivity tools designed to streamline the content creation process. Grammar checkers, proofreading tools, and style guides help writers ensure their content is error-free, consistent, and polished. Editorial calendars and project management platforms help writers stay organized and on track with their writing goals, while distraction-free writing environments foster creativity and focus.
## Maximizing Efficiency with Language-Specific Analysis
For writers working in multiple languages or targeting specific linguistic audiences, language-specific analysis tools are indispensable. These tools provide insights tailored to the nuances of different languages, allowing writers to optimize their content for maximum impact and resonance with their target audience. Whether you're writing in English, Spanish, French, or any other language, language-specific analysis tools help you craft content that speaks directly to your audience.
## Navigating Social Media Character Limits
In the realm of social media, where character limits reign supreme, character counters play a crucial role in ensuring your message is concise and engaging. Whether you're crafting a tweet, composing an Instagram caption, or writing a LinkedIn post, character counters help you stay within the prescribed limits while maximizing the impact of your message. Additionally, platform-specific tools provide insights into the optimal character length for different social media platforms, helping you tailor your content for maximum visibility and engagement.
## Empowering Writers with Essential Tools
In conclusion, [word analysis, character counters, and productivity tools](https://countingtools.com/) are essential resources for writers and content creators looking to optimize their output and maximize their impact. By harnessing the power of these tools, writers can gain valuable insights into their writing, streamline their workflow, and produce content that resonates with their audience. Whether you're a seasoned writer or just starting on your writing journey, these tools can help you unlock your full potential and take your content to the next level. | countingtools |
1,865,166 | Mario made only with CSS gradients - no JS, no embedded images/data URIs, no external images and using a micro HTML =) | Check out this Pen I made! | 0 | 2024-05-25T20:25:48 | https://dev.to/__d007e49033/mario-made-only-with-css-gradients-no-js-no-embedded-imagesdata-uris-no-external-images-and-using-a-micro-html--4f5f | codepen | Check out this Pen I made!
{% codepen https://codepen.io/alcidesqueiroz/pen/MVJEwd %} | __d007e49033 |
1,865,160 | Creating generic types for API (backend) responses | I wanted to share a little TypeScript tip that I tend to utilize in my projects whenever there's a... | 0 | 2024-05-25T20:19:24 | https://dev.to/lurco/creating-generic-types-for-api-backend-responses-3ho2 | typescript, tip, webdev, restapi | I wanted to share a little TypeScript tip that I tend to utilize in my projects whenever there's a REST API between the frontend and the backend. I hope you'll find it as useful I do!
## Setting the scene
Suppose you're developing a TypeScript web application and your backend exposes a REST API. In fact, maybe they've already built all the endpoints and responses for your future requests, and all you have are Open API (Swagger) docs documenting your API.
You're drowning in data types for all the different endpoints and their methods: GET on `api/v1/blogs/:id`, POST on `api/v1/blogs/`, login on `api/v1/token`, register, refresh token, add user, patch this, etc. etc.
If you are as big a fan of TypeScript as I am, you would probably like your TypeScript static code analysis to help you manage the backend/frontend contracts and keep track of what's what.
But the issue is that the same data can actually have different shapes depending on whether you're posting them to the backend or if you're getting them in a response. How to handle this with TS and not completely lose track while creating tens/hundreds of different types/interfaces?
## The data types
The basic idea is: you start by creating all the types you actually utilize as a frontend application (registration/user profile credentials, login credentials, form submission data shapes for all the different data submission logic you have in your application, etc.).
But you ignore the elements that are handled by the backend e.g. the `id` and `created_at` fields, paginated or queried results etc.
Suppose we have interfaces like this:
```ts
interface Blogpost {
title: string;
content: string;
date: string;
  summary: string;
tags: string[];
// ... you get the idea
}
interface Comment {
parent_id: number;
author: string;
email?: string;
}
```
They're all handled using the create operation in our REST API's CRUD repertoire. That means our backend will handle them all (in this example: both) in the exact same way. We can thus create an auxiliary union type (it makes more sense the more types your application needs, especially if they have similar but different shapes):
```ts
type ApiTypes = Comment | Blogpost;
```
## The ApiResponse types
Now that we've defined what we - the frontend - use, it's time to create a type that handles the "stuff" the backend throws at us. We create a generic type, e.g.:
```ts
type ApiResponse<T extends ApiTypes> = T & {
id: number,
created_at?: string,
}
```
The `extends` in the generic argument limits the types/interfaces we can supply to the ApiResponse (which is what we want as not all of our backend responses will have the same shape, e.g. a login form response might just contain the authorization tokens).
Another schema might be suitable for e.g. paginated result:
```ts
interface PaginatedResults<T extends ApiTypes> {
count: number;
previous: string | null;
next: string | null;
results: T[];
}
```
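To make this concrete, here is a small self-contained sketch (re-declaring trimmed versions of the types above, and - as one possible design choice - typing `results` as `ApiResponse<T>[]` rather than plain `T[]`):

```typescript
interface Blogpost {
  title: string;
  content: string;
}

type ApiTypes = Blogpost;

type ApiResponse<T extends ApiTypes> = T & {
  id: number;
  created_at?: string;
};

interface PaginatedResults<T extends ApiTypes> {
  count: number;
  previous: string | null;
  next: string | null;
  results: ApiResponse<T>[];
}

// The compiler now enforces the full backend shape of a paginated response:
const page: PaginatedResults<Blogpost> = {
  count: 1,
  previous: null,
  next: null,
  results: [{ id: 1, title: "Hello", content: "World" }],
};

console.log(page.results[0].title);
```

More importantly than what it prints, removing `id` from the object literal above becomes a compile-time error.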
Now whenever we need to handle a response from our API we can just swap in the appropriate type into our generic types, e.g. when using TanStack Query and Axios when posting data (the *mutation* in TanStack Query):
The mutation function:
```ts
function sendFormData<T extends ApiTypes>(url: string) {
return async (data: T): Promise<ApiResponse<T>> => {
const response: AxiosResponse<ApiResponse<T>> = await
axios.post<ApiResponse<T>>(url, JSON.stringify(data));
return response.data;
};
}
```
The actual `useMutation` hook:
```tsx
const mutation: UseMutationResult<
ApiResponse<T>,
ErrorResponse
> = useMutation<ApiResponse<T>, ErrorResponse, T>({
mutationFn: sendFormData<T>(URL),
});
```
As a bonus you can see what an explicitly typed TanStack Query `useMutation` looks like. It might seem excessive, but like with all of TypeScript, from now on it does the work for you during development.
And when you need to add a new data type to this scheme, you just start from the top: create the type, add it to the `ApiTypes` union and voilà - you can now utilize the server state functions presented above by just plopping the new type in place of the generic `T` type.
## A comment on types vs. interfaces
As you probably know, types (aka type aliases) and interfaces have [almost the exact same use cases and utility](https://www.typescriptlang.org/play/?#example/types-vs-interfaces). So why was I using them so inconsistently here, specifically breaking the convention of using interfaces by introducing `ApiResponse` as a type alias?
The natural next step on the road to make the world a cleaner and more consistent place would be to introduce `ApiResponse` as something like this:
```ts
interface ApiResponse<T extends ApiTypes> extends T {
id: number;
created_at?: string;
}
```
Unfortunately this doesn't work - the TS static code analysis tool explains why:
```
TS2312: An interface can only extend an object type or intersection of object types with statically known members.
```
That's it. That's the reason. If you're more into TypeScript, be sure to let me and the readers know why TS is designed this way, but as far as I see it, it's just a subtle difference between a type and an interface that we have to accept.
---
source for the picture: [photo by Savvas Stavrinos](https://www.pexels.com/photo/monochrome-photography-of-people-shaking-hands-814544/)
| lurco |
1,865,158 | Controlling user auth flow with Lambda & Cognito | Disclaimer: the hero image of this post was the result of the following prompt AWS lambda and AWS... | 0 | 2024-05-25T20:05:47 | https://dev.to/jodamco/controlling-user-auth-flow-with-lambda-cognito-28k9 | aws, cognito, lambda, javascript | _Disclaimer: the hero image of this post was the result of the following prompt `AWS lambda and AWS cognito logos into a Renaissance paint. Use full logos and a less known painting`. I think I still have much to learn into AI image prompts 😅😅_
---
Authentication is a common topic across many kinds of systems. There are different ways to handle it, and my preferred ones make use of managed services. I found AWS Cognito a really great solution for handling authentication, especially if you are later connecting the authenticated app with other hosted services. Cognito provides built-in ways to manage and cross-validate users against services, and recently I've been using its hooks to build even more complex auth features.
### Cognito triggers
Cognito user pools have a feature named 'Lambda triggers' which lets you use previously created Lambdas to perform custom actions during four types of flow:
1. Sign up
2. Authentication
3. Custom authentication (such as CAPTCHA or security questions)
4. Messaging
Each of these flows have different triggers that will execute lambda code in between specific steps of the flow. Sign up for instance has `Pre sign-up trigger`, `Post confirmation trigger` and `Migrate user trigger` that can be attached to a Lambda function.
To test the capacities of Lambda triggers, we will develop a system that prevents login after 5 consecutive failures.
### Coding the lambdas
We're gonna need two lambdas to control the flow. One of them will take care of updating the user data so we can count how many times the user has tried to log in; this one will also block the user if the number of attempts exceeds the maximum. The second one will be used to reset our counter, so in the future the user will still have the maximum number of attempts left.
The first lambda trigger would be like this
```
module.exports.preAuthTrigger = async (event) => {
if (!(await this.isUserEnabled(event))) throw new Error('Usuário Bloqueado')
const attempts = await this.getLoginAttempts(event)
if (attempts > 4) {
await this.disableUser(event)
throw new Error('Usuário Bloqueado')
}
await this.updateLoginAttempts(event, attempts)
return event
}
```
Our first step is to check whether the user is already blocked by the amount of attempts. We can do it with a separate fn:
```
exports.isUserEnabled = async (event) => {
const getParams = {
UserPoolId: event.userPoolId,
Username: event.userName,
}
const userData = await cognitoService.adminGetUser(getParams).promise()
return userData.Enabled
}
```
With this we are accessing the properties of the user in the Cognito user pool and checking the `Enabled` property, which dictates whether the user is able to use its `username` and `password` to log in. **A disabled user can't log in to a Cognito pool** and that's exactly what we want here.
For the second step, we need to check if the number of attempts is greater than the max permitted.
```
exports.getLoginAttempts = async (event) => {
const getParams = {
UserPoolId: event.userPoolId,
Username: event.userName,
}
const userData = await cognitoService.adminGetUser(getParams).promise()
const attribute = userData.UserAttributes.find(
    (att) => att.Name === 'custom:login_attempts'
)
if (attribute !== undefined && attribute !== null)
return parseInt(attribute.Value)
else return 0
}
```
It is a very similar process to the previous fn, but now we're looking for a custom attribute named `custom:login_attempts` that we will create in our user pool in the next steps. If the user has more than 5 attempts (we start counting at 0), then we should block the user. Piece of cake:
```
exports.disableUser = async (event) => {
await cognitoService
.adminDisableUser({
UserPoolId: event.userPoolId,
Username: event.userName,
})
.promise()
}
```
We also have to throw an Error and stop executing the lambda since this will make the login process fail as we want. Now that we are able to block the user, we just need to update the number of attempts if it isn't blocked:
```
exports.updateLoginAttempts = async (event, attempts) => {
const updateParams = {
UserAttributes: [
{
Name: 'custom:login_attempts',
Value: (attempts + 1).toString(),
},
],
UserPoolId: event.userPoolId,
Username: event.userName,
}
await cognitoService.adminUpdateUserAttributes(updateParams).promise()
}
```
This last function sets everything for the first lambda trigger. Now we are able to perform all the actions from our main lambda function. The final code with all functions will be like this:
```javascript
// The Cognito admin calls go through the AWS SDK v2 client; we assume it is
// available in the Lambda runtime (it is bundled with the Node.js runtimes up to 16.x).
const AWS = require('aws-sdk')
const cognitoService = new AWS.CognitoIdentityServiceProvider()

module.exports.preAuthTrigger = async (event) => {
if (!(await this.isUserEnabled(event))) throw new Error('Usuário Bloqueado')
const attempts = await this.getLoginAttempts(event)
if (attempts > 4) {
await this.disableUser(event)
throw new Error('Usuário Bloqueado')
}
await this.updateLoginAttempts(event, attempts)
return event
}
exports.isUserEnabled = async (event) => {
const getParams = {
UserPoolId: event.userPoolId,
Username: event.userName,
}
const userData = await cognitoService.adminGetUser(getParams).promise()
return userData.Enabled
}
exports.getLoginAttempts = async (event) => {
const getParams = {
UserPoolId: event.userPoolId,
Username: event.userName,
}
const userData = await cognitoService.adminGetUser(getParams).promise()
const attribute = userData.UserAttributes.find(
(att) => att.Name === 'custom:login_attempts'
)
if (attribute !== undefined && attribute !== null)
return parseInt(attribute.Value)
else return 0
}
exports.disableUser = async (event) => {
await cognitoService
.adminDisableUser({
UserPoolId: event.userPoolId,
Username: event.userName,
})
.promise()
}
exports.updateLoginAttempts = async (event, attempts) => {
const updateParams = {
UserAttributes: [
{
Name: 'custom:login_attempts',
Value: (attempts + 1).toString(),
},
],
UserPoolId: event.userPoolId,
Username: event.userName,
}
await cognitoService.adminUpdateUserAttributes(updateParams).promise()
}
```
In my next post we will write the code for the PostAuth lambda trigger and see how can we setup cognito to use both lambdas! | jodamco |
1,865,157 | Time-Series Mastery: Techniques for Precise Predictive Modeling | As a participant in the #SmaZoomcamp, I've delved into the intriguing world of time-series... | 0 | 2024-05-25T20:00:15 | https://dev.to/annaliesetech/time-series-mastery-techniques-for-precise-predictive-modeling-n2e | smazoomcamp, finance, modeling, python | As a participant in the #SmaZoomcamp, I've delved into the intriguing world of time-series predictions, gaining valuable insights and practical knowledge along the way. In this blog post, I'll reflect on the key learnings and techniques explored during the program.
## Framing Hypotheses and Practical Predictions
One of the initial lessons emphasized framing hypotheses and formulating heuristic rules for practical predictions. Understanding the underlying principles behind time-series data and establishing hypotheses based on trends, seasonality, and other patterns are fundamental steps in predictive modeling.
## Unraveling Time-Series Data
The program provided a deep dive into unraveling time-series data, focusing on techniques such as trend analysis, seasonality decomposition, and identifying the remainder component. These techniques play a crucial role in understanding the inherent structure of time-series data, enabling more accurate predictions and informed decision-making.
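As a rough illustration of the decomposition idea, here is a dependency-free Python sketch (the series values are illustrative monthly counts, not data from the program) that estimates a trend with a centered 12-point moving average and then detrends the series, leaving seasonality plus remainder:

```python
# Illustrative monthly series (24 values); in practice this would be your data.
series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118,
          115, 126, 141, 135, 125, 149, 170, 170, 158, 133, 114, 140]

window = 12
half = window // 2

# Trend: averaging over a full seasonal cycle smooths the seasonality away.
trend = [sum(series[i - half:i + half]) / window
         for i in range(half, len(series) - half)]

# Detrended values = seasonality + remainder, for the points where a trend exists.
detrended = [series[i + half] - t for i, t in enumerate(trend)]

print(len(trend))
```

Libraries such as statsmodels wrap this same idea (and more robust variants) in ready-made decomposition functions.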
## Regression Techniques and Data Relationships
Regression techniques emerged as powerful tools for uncovering data relationships. By applying regression analysis, we gained insights into how different variables interact and influence the outcome, paving the way for more nuanced predictions and actionable insights.
## Binary Classification for Growth Direction
A highlight of the program was exploring binary classification models to determine growth direction. This approach enabled us to classify data points into distinct categories, such as positive or negative growth, providing a clear direction for decision-making and strategy development.
## Further Exploration: Neural Networks in Analytical Modeling
For those interested in advanced techniques, the program offered insights into neural networks' role in analytical modeling. Neural networks have shown remarkable capabilities in handling complex data structures and uncovering nonlinear relationships, making them a valuable asset in predictive modeling scenarios.
In conclusion, my journey with the #SmaZoomcamp has been enlightening and empowering, equipping me with practical skills and techniques for time-series predictions. From framing hypotheses to leveraging advanced regression and classification methods, the program has broadened my analytical toolkit and deepened my understanding of predictive modeling in the context of time-series data. | annaliesetech |
1,865,156 | Overcoming Coding Challenges: My Experience and Solution | As a seasoned website developer, I've encountered various coding challenges throughout my career. One... | 0 | 2024-05-25T20:00:02 | https://dev.to/andrenaroy/overcoming-coding-challenges-my-experience-and-solution-41ck | webdev, javascript, beginners, programming | As a seasoned website developer, I've encountered various coding challenges throughout my career. One particular pain point I faced recently was optimizing database queries for improved performance in a large-scale web application, which is crucial for my role in coding and also relevant to jobs in the tech industry.
The issue stemmed from inefficient SQL queries that were causing significant delays in data retrieval, affecting the overall speed and responsiveness of the application. After thorough analysis and testing, I identified the problem areas and implemented optimized query techniques to enhance performance.
Here's an example of how I tackled this issue:
```sql
-- Original inefficient query
SELECT * FROM products WHERE category_id = 1 AND price > 100;

-- Optimized query using indexing
CREATE INDEX idx_category_price ON products (category_id, price);
SELECT * FROM products WHERE category_id = 1 AND price > 100;
```
By creating an index on the columns used in the WHERE clause, I reduced the query execution time significantly, resulting in faster data retrieval and improved user experience.
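To verify that the planner actually uses the new index rather than assuming it does, you can inspect the execution plan; for example, on MySQL (syntax varies by database):

```sql
-- Compare the plan before and after creating the index:
EXPLAIN SELECT * FROM products WHERE category_id = 1 AND price > 100;
-- With the index in place, the plan should reference idx_category_price
-- instead of performing a full table scan.
```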
This experience taught me the importance of constantly optimizing code and leveraging best practices to overcome coding challenges effectively. It's a valuable skill that not only enhances the performance of applications but also demonstrates expertise in database management, a key aspect of [tech jobs](https://jobsincentralqueensland.au/) today.
As developers, we continually learn and adapt to evolving technologies to deliver optimal solutions and stay ahead in the ever-changing tech landscape.
| andrenaroy |
1,865,120 | Machine Learning 101: What You Need to Know | Introduction Welcome 👋 to this blog. Have you heard about the Chat GPT or have you used... | 0 | 2024-05-25T19:48:32 | https://dev.to/ankur0904/machine-learning-101-what-you-need-to-know-1nip | machinelearning, beginners, tutorial, ai | ## Introduction
Welcome 👋 to this blog. Have you heard about **Chat GPT**, or have you used **Chat GPT** for any of your tasks? If your answer is yes, have you ever wondered how these technologies work internally? Why are these technologies so popular nowadays? Then you are in the correct place: this blog will cover the basics of machine learning, which is responsible for these kinds of technologies.
*Note: In this blog, we will not talk much about the mathematics of the algorithms to keep it simple so that everyone must get the basic intuition of the algorithms*
## Machine Learning
Let's break down the words: **Machine Learning = Machine + Learning**. A machine will learn to perform a task for which it has not been explicitly programmed.
*or*
The process of training a machine using data so that it will behave according to the provided new data or an external new environment.

We can also say that *Machine Learning* is a subset of Artificial intelligence.
## Types of Machine Learning Algorithms
We can generally classify the machine learning algorithms into 3 types:
1. Supervised learning algorithms
2. Unsupervised learning algorithms
3. Reinforcement learning algorithms
Let's dive into each of the algorithm types in detail.
## Supervised Learning Algorithms
In layman's terms, we can say that this is a class of algorithms in which *supervision* of the data is required in the process of training the model. Let's take an example for more clarification. Suppose we have a dataset:
```
| Size (sq ft) | Area | No. of Kitchen | Price ($) |
|--------------|-----------------|-----------------|-----------|
| 1200 | Downtown | 2 | 300,000 |
| 1500 | Suburban | 3 | 350,000 |
| 800 | City Center | 1 | 200,000 |
| 2000 | Suburban | 4 | 450,000 |
| 950 | Downtown | 2 | 250,000 |
| 1800 | Rural | 3 | 320,000 |
| 1600 | City Center | 3 | 380,000 |
| 1100 | Rural | 2 | 210,000 |
| 1300 | Suburban | 3 | 300,000 |
| 1400 | Downtown | 2 | 330,000 |
```
In this example, we are trying to build a house-price prediction model to predict the price of a house given its *size, area, and number of kitchens*.
If you look at the dataset carefully, you will notice that each row carries a notion of *supervision* through the **price** column. Each input in the dataset is mapped to a price, giving us the notion of supervision - that's why this class of algorithms is called supervised learning.
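The table above can even be turned into a toy model. Here is a dependency-free sketch that fits price as a linear function of size alone (ignoring the categorical columns) using closed-form ordinary least squares:

```python
# (size, price) pairs from the table above; area and kitchens are ignored here.
sizes = [1200, 1500, 800, 2000, 950, 1800, 1600, 1100, 1300, 1400]
prices = [300000, 350000, 200000, 450000, 250000, 320000, 380000, 210000, 300000, 330000]

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(prices) / n

# Closed-form least squares for a single feature: slope w and intercept b.
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, prices)) \
    / sum((x - mean_x) ** 2 for x in sizes)
b = mean_y - w * mean_x

predicted = w * 1250 + b  # predicted price for an unseen 1250 sq ft house
print(round(predicted))
```

Real models would use all features (and a library such as scikit-learn), but the supervision idea is the same: learn the mapping from inputs to the labelled price column.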
## Unsupervised Learning Algorithms
In this type of learning algorithm, we don't need our learning model to be supervised by the dataset. Our model will automatically learn the meaningful pattern & information from the data. One common example is the segmentation of news articles, where the algorithm groups articles into categories such as politics, sports, and technology without pre-labelled data.
## Reinforcement Learning Algorithms
Reinforcement learning algorithms learn by interacting with an environment, making decisions, and receiving feedback in the form of rewards or penalties. The goal is for the model to learn a strategy that maximizes the cumulative reward over time. A classic example is training a model to play a game, where the algorithm improves its performance by learning from the outcomes of its actions, such as winning or losing.
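A minimal, self-contained sketch of that reward-feedback loop is an epsilon-greedy two-armed bandit - a deliberately simplified stand-in for full reinforcement learning. The agent never sees the true reward probabilities; it only observes rewards and updates its estimates:

```python
import random

random.seed(0)
true_reward = {"a": 0.2, "b": 0.8}   # hidden from the agent
estimates = {"a": 0.0, "b": 0.0}     # the agent's learned action values
counts = {"a": 0, "b": 0}

for _ in range(1000):
    if random.random() < 0.1:                      # explore occasionally
        action = random.choice(list(estimates))
    else:                                          # exploit the current best guess
        action = max(estimates, key=estimates.get)
    reward = 1 if random.random() < true_reward[action] else 0
    counts[action] += 1
    # Incremental mean update of the action-value estimate.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))
```

Over many interactions the agent should come to prefer action "b", purely from reward feedback.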
## 🎉 Conclusion
You have learned the basics of machine learning algorithms. You now understand what machine learning is, how it can be classified, and the significance of each classification.
Thank you for reading! If you enjoyed this content and are looking for a talented writer for your organization, feel free to reach out to me:
📧 Email: [ankursingh91002@gmail.com](mailto:ankursingh91002@gmail.com)
🔗 LinkedIn: [Ankur Singh](https://www.linkedin.com/in/ankur-singh-161458227/)
🔗 Twitter: [@ankur_136](https://twitter.com/ankur_136)
Let's connect and create something great together! | ankur0904 |
1,865,153 | Buttons with CSS Effects | Buttons with CSS Effects Hello, dev.to community! 🚀 I want to share with you my project... | 0 | 2024-05-25T19:48:09 | https://dev.to/rinkon/buttons-with-css-effects-o66 | css, svg, webdesing, sass | ## Buttons with CSS Effects
Hello, dev.to community! 🚀
I want to share with you my project on [CodePen](https://codepen.io/Rincon/full/zardaq), where I have created 5 examples of animated buttons using CSS. These buttons feature roll-over effects utilizing SVG and CSS and have received a lot of positive attention with many views and comments. Here are some highlights:
- **Roll-over Effects**: Each button has a unique roll-over effect that adds dynamism and style to web interfaces.
- **Using SVG and CSS**: The effects are achieved by combining SVG for scalable vector graphics and CSS for animations.
- **Pug and Sass**: I used preprocessors like Pug for HTML and Sass for CSS, which allow me to write code more efficiently and in an organized manner.
### Key Learnings
- **Pug**: Simplifies writing HTML with a cleaner syntax.
- **Sass**: Makes CSS management easier with variables, nesting, and mixins, improving code maintainability.
### Benefits
These buttons are not only visually appealing but also enhance the user experience when interacting with the page. I invite you to explore the examples and see how you can integrate them into your own projects.
### What's Next?
I'm always looking for new ways to improve and learn. I'd love to hear your feedback and suggestions!
[Explore my project on CodePen](https://codepen.io/Rincon/full/zardaq)
| rinkon |
1,865,152 | Use Cases of the Power Platform: | Sales and Marketing: Organizations can use the Power Platform to analyze sales data, automate lead... | 0 | 2024-05-25T19:47:32 | https://dev.to/deransmith/use-cases-of-the-power-platform-4kh2 | Sales and Marketing: Organizations can use the Power Platform to analyze sales data, automate lead management processes, and create custom applications for sales and marketing teams.
Operations and Finance: Power BI ([Microsoft 365](https://www.mrsharepoint.guru/)) can be used to track key performance indicators (KPIs), Power Automate can automate invoice processing, and Power Apps can streamline inventory management processes.
Human Resources: Power Automate can automate employee onboarding processes, Power Apps can create custom HR applications, and Power Virtual Agents can provide self-service support for HR inquiries.
Conclusion:
In conclusion, the Power Platform is a versatile suite of tools that empowers organizations to analyze data, automate processes, and build custom applications to drive digital transformation and innovation. With its low-code/no-code approach, the Power Platform enables users across various business functions to work more efficiently, collaborate more effectively, and achieve more together.
| deransmith | |
1,865,151 | Loved this dentist | Ended up visiting the dentist and loved it. If anyone's interested I've linked their site:... | 0 | 2024-05-25T19:44:36 | https://dev.to/hitechdentist/loved-this-dentist-4k2a | Ended up visiting the dentist and loved it. If anyone's interested I've linked their site: [https://hitechdentist.ca/](https://hitechdentist.ca/) | hitechdentist | |
1,865,149 | Buttons with CSS Effects | Buttons with CSS Effects Hello, dev.to community! 🚀 I want to share with you my... | 0 | 2024-05-25T19:44:15 | https://dev.to/rinkon/botones-con-efectos-css-51gp | codepen, css, animation | ## Buttons with CSS Effects
Hello, dev.to community! 🚀
I want to share with you my project on [CodePen](https://codepen.io/Rincon/full/zardaq), where I have created 5 examples of buttons animated with CSS. These buttons have roll-over effects built with SVG and CSS, and they have been very well received, with many visits and positive comments. Here are some highlights:
- **Roll-over Effects**: Each button features a unique roll-over effect that adds dynamism and style to web interfaces.
- **Use of SVG and CSS**: The effects are achieved by combining SVG for scalable vector graphics and CSS for the animations.
- **Pug and Sass**: I used preprocessors like Pug for HTML and Sass for CSS, which lets me write code in a more organized and efficient way.
### Key Learnings
- **Pug**: Simplifies writing HTML with a cleaner syntax.
- **Sass**: Makes CSS management easier with variables, nesting, and mixins, which improves code maintainability.
### Benefits
These buttons are not only visually attractive, but they also improve the user experience when interacting with the page. I invite you to explore the examples and try integrating them into your own projects.
### What's Next?
I'm always looking for new ways to improve and learn.
I'd love to hear your comments and suggestions!
[Explore my project on CodePen](https://codepen.io/Rincon/full/zardaq)
 | rinkon |
1,864,671 | Create plugins in Go | Of course Go has enabled the possibility to add a plugin feature to your Go application with the... | 0 | 2024-05-25T19:19:06 | https://dev.to/stefanalfbo/create-plugins-in-go-25bd | 100daystooffload, go, plugin, programming | Of course Go has enabled the possibility to add a [plugin](https://pkg.go.dev/plugin) feature to your Go application with the standard library.
By adding a plugin feature to your application you enable third-party development, extensibility, customization and more.
Here is a simple example of using the plugin standard library in an application. We start by creating the plugin, which we will call `simple-plugin`.
```console
mkdir simple-plugin && cd $_
touch plugin.go
go mod init simple.plugin
code .
```
The code will be super simple for this plugin, add this to the `plugin.go` file.
```golang
package main
import "fmt"
func SimplePluginFunc() {
fmt.Println("The simple plugin has been called!")
}
```
So the plugin needs to be a main package with public functions and/or variables. Therefore we use the package name, `main`, here and includes one public function, `SimplePluginFunc`.
A plugin needs to be built with an extra compilation flag, `buildmode`.
```console
go build -buildmode=plugin -o simple-plugin.so ./plugin.go
```
This will compile our plugin and produce a `simple-plugin.so` file as an output artifact of the compilation. The `-o` flag is not needed, but then the output artifact would have been named `plugin.so` instead. One drawback with this solution is that it's only supported on Linux, FreeBSD, and Mac. However you can use WSL on your Windows machine to do this too.
Now we have a plugin that we can consume in our main application, so lets create that application.
```console
mkdir app && cd $_
mkdir plugins
cp ../simple-plugin/simple-plugin.so plugins
touch main.go
go mod init example.app
code .
```
Note that we are creating a directory, `plugins`, that host our `simple-plugin.so` plugin from the previous exercise.
In the main file we first create a function to load the plugin.
```golang
func loadPlugin() func() {
plugin, err := plugin.Open("plugins/simple-plugin.so")
if err != nil {
log.Fatal(err)
}
simplePluginFunc, err := plugin.Lookup("SimplePluginFunc")
if err != nil {
log.Fatal(err)
}
f, ok := simplePluginFunc.(func())
if !ok {
log.Fatal("unexpected type from module symbol")
}
return f
}
```
This function is first using the standard library, `plugin`, to open our newly created plugin which is located in our plugin directory.
Then we are trying to `Lookup` our public function in the plugin, which we called `SimplePluginFunc`. The `Lookup` function will return a `Symbol` which is a pointer to the function `SimplePluginFunc`.
With the help of type assertion we will get the plugin function in a more strongly typed form, which we are doing with `simplePluginFunc.(func())`.
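The assertion here is the same mechanism as asserting any interface value; a standalone illustration of just the assertion pattern (unrelated to plugins - `plugin.Lookup` returns a `plugin.Symbol`, which is essentially an `interface{}` value):

```go
package main

import "fmt"

func main() {
	// A Symbol-like value: an interface{} holding a concrete function.
	// Asserting it back to its concrete function type is what makes it callable.
	var sym interface{} = func() string { return "called" }

	f, ok := sym.(func() string)
	if !ok {
		panic("unexpected type from symbol")
	}
	fmt.Println(f())
}
```

If the asserted type does not match the concrete type, `ok` is false, which is why the plugin loader checks it before calling the function.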
Finally we are returning the function from our `loadPlugin` function. The complete `app` will look like this.
```golang
package main
import (
"log"
"plugin"
)
func loadPlugin() func() {
plugin, err := plugin.Open("plugins/simple-plugin.so")
if err != nil {
log.Fatal(err)
}
simplePluginFunc, err := plugin.Lookup("SimplePluginFunc")
if err != nil {
log.Fatal(err)
}
f, ok := simplePluginFunc.(func())
if !ok {
log.Fatal("unexpected type from module symbol")
}
return f
}
func main() {
simplePlugin := loadPlugin()
simplePlugin()
}
```
And now when we execute our app we will get the output `The simple plugin has been called!`, as expected.

This is a simple example on how to use this feature from the standard library, `plugin`, but it's not harder than that. However, I encourage you to read the `plugin` package [documentation](https://pkg.go.dev/plugin) which also discuss other options depending on your needs for your application.
Happy coding!
| stefanalfbo |
1,865,132 | :has() in CSS | Hi everyone! I recently learned the usefulness of :has() in CSS. It can style an element if any of... | 0 | 2024-05-25T19:12:51 | https://dev.to/larafritosss/has-in-css-anc | Hi everyone! I recently learned the usefulness of :has() in CSS. It can style an element if any of the things we’re searching for inside it are found and accounted for. It’s like saying, “If there’s something specific inside this box, then style the box this way AND only this way.” Definity going to give it a try in my next web project!
| larafritosss | |
1,865,131 | Adding Colour To The Log Output Of Logging Libraries In Go | Logging is an integral part of software development, providing developers with valuable insights... | 0 | 2024-05-25T19:11:26 | https://keploy.io/blog/technology/adding-colour-to-the-log-output-of-logging-libraries-in-go | webdev, javascript, programming, ai |

Logging is an integral part of software development, providing developers with valuable insights into the behaviour and performance of their applications. In the Go programming language, various logging libraries, such as the standard library's log package or third-party options like logrus, zap and zerolog, facilitate the generation of log output. While the primary goal of logging is to convey information, the traditional black-and-white log messages can sometimes make it challenging to quickly discern critical information amidst a sea of logs.
**Need for colouring logs**
**Prioritisation and Highlighting:** Colour can be used to prioritise and highlight critical information. For example, error messages or warning logs can be displayed in attention-grabbing colours like red or yellow, making it immediately apparent when an issue requires urgent attention. This facilitates a faster response to potential problems.
**Enhanced Readability:** Colours can improve the overall readability of log messages by adding structure and visual hierarchy. Differentiating between log levels, timestamps, and contextual information becomes more intuitive, leading to a more user-friendly experience during log analysis and troubleshooting.
**User-Friendly Debugging:** Developers spend a considerable amount of time interacting with logs during debugging. Colour logging contributes to a more user-friendly debugging experience by allowing developers to quickly spot relevant information, errors, or patterns in log outputs, thereby expediting the debugging process.
**Support for colouring log level keywords**
Nearly all logging libraries offer the option to enable colorization for log level keywords such as info, debug, warning and error.
In the case of Zap, you can use the CapitalColorLevelEncoder function to achieve this effect.
Here's an example of how you might configure Zap to enable colourised log levels:
```
logCfg := zap.NewDevelopmentConfig()
logCfg.EncoderConfig.EncodeLevel = zapcore.CapitalColorLevelEncoder
```
**Contextual Logs**
In complicated systems and tricky debugging situations, it's crucial to add extra details to log messages. This additional information, like variable values, timestamps, and user IDs, helps developers grasp what's happening in the application at specific times. Contextual logging jazzes up log messages by connecting them to dynamic key-value pairs, giving more flexibility than sticking to fixed formats. This method is super helpful during debugging, making it quicker to find the main issues and understand how the program is running.
Note: To follow along,
create a `color-logs` directory by running `mkdir color-logs` and `cd color-logs` to change directory.
Start a module by running `go mod init example.com/color-logs`; here we have used "example.com/color-logs" as the module path.
In your text editor, create a file in which to write your code and call it `main.go`.
Ensure that you fetch the necessary dependencies by running `go get` before executing `go run main.go`.
Here is an example of how you can display contextual logs in zerolog logging library:
```
package main
import "github.com/rs/zerolog/log"
func main() {
log.Debug().
Str("Scale", "833 cents").
Float64("Interval", 833.09).
Msg("Fibonacci is everywhere")
}
// output:
// {"level":"debug","Scale":"833 cents","Interval":833.09,"time":"2024-01-24T22:01:03+05:30","message":"Fibonacci is everywhere"}
```
**Lack of native support to colour contextual logs**
While contextual logging provides a powerful means of enhancing information, the visual representation of this context often remains monochromatic. Unfortunately, many logging libraries do not inherently support the colorization of key and value components within log entries. This absence leaves developers with a missed opportunity to leverage visual cues for quick identification and differentiation of critical information, hindering the efficiency of log analysis and debugging processes.
Let's see what happens if we try to add colour to a contextual log in Zap:
```
package main
import (
"github.com/fatih/color"
"go.uber.org/zap"
)
func main() {
PlainLogger, _:= zap.NewDevelopment()
var HighlightGreen = color.New(color.FgGreen).SprintFunc()
PlainLogger.Info("test log", zap.String(HighlightGreen("key"), "value"))
}
```
The output generated would be:
2024-01-24T22:23:16.356+0530 INFO zap-logging/main.go:66 test log {"\u001b[32mkey\u001b[0m": "value"}
The ANSI escape codes for colour formatting are not interpreted because of the way the `EncodeEntry` function is implemented in the zapcore package.
We will now discuss how to solve this misinterpretation.
**Colouring contextual logs**
The primary cause of the mentioned problem lies with the default encoder, which can be either "json" or "console." To address this, we'll begin by crafting our own encoder.
1. Let's create a custom encoder named `colorConsoleEncoder`, which will be initialized by `NewColorConsole`:
```
type colorConsoleEncoder struct {
*zapcore.EncoderConfig
zapcore.Encoder
}
func NewColorConsole(cfg zapcore.EncoderConfig) (enc zapcore.Encoder) {
return colorConsoleEncoder{
EncoderConfig: &cfg,
// Using the default ConsoleEncoder can avoid rewriting interfaces such as ObjectEncoder
Encoder: zapcore.NewConsoleEncoder(cfg),
}
}
```
2. We will then register our encoder using the `RegisterEncoder` function:
```
func init() {
_ = zap.RegisterEncoder("colorConsole", func(config zapcore.EncoderConfig) (zapcore.Encoder, error) {
return NewColorConsole(config), nil
})
}
```
3. The misinterpretation issue discussed earlier is associated with the `EncodeEntry` function within the `Encoder` interface of the zapcore package. To address this, it is necessary to override this function.
```
// EncodeEntry overrides ConsoleEncoder's EncodeEntry
func (c colorConsoleEncoder) EncodeEntry(ent zapcore.Entry, fields []zapcore.Field) (buf *buffer.Buffer, err error) {
buff, err := c.Encoder.EncodeEntry(ent, fields) // Utilize the existing implementation of zap
if err != nil {
return nil, err
}
bytesArr := bytes.Replace(buff.Bytes(), []byte("\\u001b"), []byte("\u001b"), -1)
buff.Reset()
buff.AppendString(string(bytesArr))
return buff, err
}
```
This function utilizes the existing `EncodeEntry` implementation from the embedded `Encoder` field but introduces a correction using `bytes.Replace`. The aim is to replace occurrences of the literal six-character sequence `\u001b` (a backslash followed by the characters "u001b", as produced when the encoder escapes the control character) with the actual ANSI escape character (0x1B). This adjustment ensures accurate handling of ANSI escape codes during log entry encoding, allowing for the intended colorization without disrupting the overall logging system functionality.
4. We can now create a `setupLogger` function that utilizes the registered encoder and produces a logger with colorization.
```
func setupLogger() *zap.Logger {
logCfg := zap.NewDevelopmentConfig()
logCfg.Encoding = "colorConsole"
logger, _ := logCfg.Build()
return logger
}
```
5. Write a log to see the effect:
```
func main() {
ColorLogger := setupLogger()
var HighlightGreen = color.New(color.FgGreen).SprintFunc()
var HighlightYellow = color.New(color.FgYellow).SprintFunc()
ColorLogger.Info("test log", zap.String(HighlightGreen("key"), HighlightYellow("value")))
}
```
To run the full code, execute the following commands in your terminal:
```
git clone https://github.com/AkashKumar7902/coloring-log-output
cd coloring-log-output
go run main.go
```
While we've specifically covered this aspect for the Zap logging library, comparable solutions can be identified for other logging libraries as well :)
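As a rough illustration of the general principle — independent of any particular logging library — the ANSI escape codes themselves can be emitted with nothing but the standard library (the `colorPair` helper below is a made-up example, not part of any library):

```go
package main

import (
	"fmt"
	"log"
	"os"
)

// ANSI color codes; packages like fatih/color build on the same mechanism.
const (
	green  = "\u001b[32m"
	yellow = "\u001b[33m"
	reset  = "\u001b[0m"
)

// colorPair renders a contextual key/value pair with a green key
// and a yellow value.
func colorPair(key, value string) string {
	return fmt.Sprintf("%s%s%s=%s%s%s", green, key, reset, yellow, value, reset)
}

func main() {
	logger := log.New(os.Stdout, "", log.LstdFlags)
	logger.Println("test log", colorPair("key", "value"))
}
```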
Thank you and Happy colouring 🎨 !
**FAQ's**
**Can I color code log messages in Go?**
Yes, coloring log messages improves readability and helps prioritize critical information. Popular libraries like Zap and zerolog offer built-in support or allow customization through custom encoders.
**Should I add coloring to contextual logs?**
Coloring contextual logs (key-value pairs) makes debugging faster. You can easily spot relevant details and differentiate between values, leading to quicker problem identification.
**How do I create a custom encoder for coloring logs?**
While some libraries offer colorization by default, you can create a custom encoder to achieve more control. This typically involves overriding the EncodeEntry function to handle ANSI escape codes correctly. | keploy |
1,864,849 | Bringing Voicemails Back to Life with Amplify Gen 2 | What if we started being extra with our voicemail like we used to?! I asked myself the same question and ended up creating a silly voicemail + scrapbook app with for the Amplify Gen 2 #AWSChallenge | 0 | 2024-05-25T19:11:14 | https://dev.to/maludecks/bringing-voicemails-back-to-life-with-amplify-gen-2-27en | devchallenge, awschallenge, amplify, fullstack | ---
title: Bringing Voicemails Back to Life with Amplify Gen 2
published: true
description: What if we started being extra with our voicemail like we used to?! I asked myself the same question and ended up creating a silly voicemail + scrapbook app with for the Amplify Gen 2 #AWSChallenge
tags: devchallenge, awschallenge, amplify, fullstack
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pwsb35hp6ifirbasie82.JPG
---
*This is a submission for the [AWS Amplify Fullstack TypeScript Challenge](https://dev.to/challenges/awschallenge).*
## What I Built
So the other day -- which might have been any point in time between a week and three years ago because you know, pandemic time warp -- I was on TikTok laughing out loud because of this video:
{% youtube xbmHo5HDB9g %}
...mimicking how extra we used to be recording voicemail greeting messages back in the day.
I remember thinking, "there should be a revival app or something for this!" and when I saw the Amplify Gen 2 AWS Challenge, I thought this could be the perfect opportunity to build one. But, not only that, it gave me the excuse I was missing to start experimenting with Amplify.
For the past week, I’ve been learning a lot but most importantly, having **a lot** of fun trying things out. What I ended up building is a silly mix of voicemails + scrapbook (for the fellow Brazilians/Indians around here, think Orkut in the early days). I gave it some thought, and the technical requirements I settled on were:
- A user can record a greeting message to show on their profile.
- A user profile shows the user’s greeting message and their \*public inbox messages.
- Other users can record and send voicemails to a user profile (and yes, once sent, there is no way back).
## Demo
On the finished product, you can listen to a user's greeting message on their profile:

As well as record and send a voicemail:

And lastly, on your own inbox, see the new ones separated from already opened messages, record a greeting message and manage your account:

You can give it a try yourself [here](https://main.d5wn5ra3pd2wx.amplifyapp.com/maludecks) by signing up and leaving *me* a voicemail. And finally the code can be found [here](https://github.com/maludecks/voicemailer).
*IMPORTANT NOTES IN CASE YOU'RE ACTUALLY TRYING IT OUT:*
- Once you sent a message, there is no way back, only the receiver can delete it
- Messages sent are **public**, anyone can listen to them :)
- Works better on desktop
## Journey
A moment of oversharing: you probably don’t know (because you don't know me) but I was a backend engineer at Spotify for a bit over a year and got laid off back in December. Since then, I have been learning things for the sake of learning, in an attempt to make programming spark some joy for me again. I came to the conclusion that I want to become a fullstack web dev - *I think*. So off I went to learn modern frontend (aka React and friends). I’m very much in the beginning of this new journey, which you might notice when going through my code 👀 but yes, I’m excited to write code again!
Back to the app in this submission, the result is a Next.js 14 app to handle routing + tailwindcss and somewhat of a *neubrutalism* design.
Some learnings along the way:
- It’s kind of hard to find answers on the web with the differentiation of Amplify Gen 1 and Gen 2 sometimes. The docs are very thorough but once you want to try things that are not so explicit in there, you’ll find yourself doing Gen 1 things.
- …But as I said, the docs are very thorough, and a lot of the things I lost myself in google results, I found the actual answer back in the docs (I know, I know, classic).
- Because I couldn’t find a way to use Amplify SDK to list/query Cognito users directly, I introduced the username mapping table to make things easier for me, there are other ways to solve this, but that’s the route I took.
- At first, I wanted to add a lot more features, like the ability for the user to choose which messages go public and which stay private, or record several greetings and choose which one shows in the profile…but I was running out of time for the submission and it kind of felt like overkill.
**Connected Components and/or Feature Full**
I was able to create a **feature-full** app by integrating:
- **Data** -> DynamoDB to store both messages, greetings and a username mapping and AppSync GraphQL API.
- **Storage** -> S3 for the audio recordings of messages and greetings.
- **Authentication** -> Cognito with login through email and the use of auth **Connected UI Components** for the login/signup and account management.
- **Functions** -> Lambda functions to validate/handle usernames.
By using Amplify Gen 2 setting up and deploying all of these components becomes a very simple task.

## That's a wrap
I’m extremely proud of what I built, I got to try out a whole bunch of new things and I now have an even longer list of React/Next.js things I need to dig deeper into. But overall, I’m mostly proud of myself for sticking with the plan throughout the whole week and being able to finish a for-fun project once again.
Amplify Gen 2 makes deploying apps on AWS so freaking simple that it feels like cheating. Once you comprehend what the possibilities are, it's really an incredible tool. I hope you get to send out some voicemails, let me know in the comments if (or should I say when) you find any bugs 🐛 | maludecks |
1,865,129 | Build a multi node Kubernetes Cluster on Google Cloud VMs using Kubeadm, from the ground up! | This article is for advanced Kubernetes and Google Cloud professionals, comfortable with Google Kubernetes Engine and interested in understanding how to build and run a hand configured and managed Kubernetes cluster on GCP using Linux VMs in a VPC Network | 0 | 2024-05-25T19:00:42 | https://dev.to/codewired/running-a-5-node-kubernetes-on-google-cloud-vms-using-kubeadm-3811 | ---
title: Build a multi node Kubernetes Cluster on Google Cloud VMs using Kubeadm, from the ground up!
published: true
description: This article is for advanced Kubernetes and Google Cloud professionals, comfortable with Google Kubernetes Engine and interested in understanding how to build and run a hand configured and managed Kubernetes cluster on GCP using Linux VMs in a VPC Network
tags:
# Linux Vms
# Google Cloud Platform
# Kubernetes
# Kubeadm
---
Bringing in and updating an initial post written two years ago on my other profile here. These are the defined and authentic steps you can use to run a Three, Five, Seven...etc node cluster on GCP using Linux VMs.
Choose your Linux Flavor, recommended is Ubuntu 22.04 (Jammy) (This will work on local dev boxes and on Cloud Compute VMs with Jammy)
On a Google Cloud Web Console, pick your desired project that has billing enabled and setup your cli tool (Google Cloud CLI) and create a VPC, Subnet and Firewall to allow traffik.
(Replace resource names in square brackets without the brackets):
Create a Virtual Private Cloud Network
`gcloud compute networks create [vpc name] --subnet-mode custom`
Create a Subnet with a specific range (10.0.96.0/24)
`gcloud compute networks subnets create [subnet name] --network [vpc name] --range 10.0.96.0/24`
Create a Firewall Rule that allows internal communication across all protocols (10.0.96.0/24, 10.0.92.0/22)
`gcloud compute firewall-rules create [internal network name] --allow tcp,udp,icmp --network [vpc name] --source-ranges 10.0.96.0/24,10.0.92.0/22`
Create a firewall rule that allows external SSH, ICMP, and HTTPS:
`gcloud compute firewall-rules create [external network name] --allow tcp,icmp --network [vpc name] --source-ranges 0.0.0.0/0`
List the firewall rules in the VPC network:
`gcloud compute firewall-rules list --filter="network:[vpc name]"`
Provision Nodes:
Create 3, 5 or 7 compute instances which will host the Kubernetes Proxy, control plane and worker nodes respectively (A proxy is recommended if you are creating 5 nodes or more):
Proxy Plane Node (Optional):
`gcloud compute instances create proxynode --async --boot-disk-size 50GB --can-ip-forward --image-family ubuntu-2204-lts --image-project ubuntu-os-cloud --machine-type n2-standard-2 --private-network-ip 10.0.96.10 --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring --subnet [subnet name] --tags kubevms-node,proxy`
Master Control Plane Node:
`gcloud compute instances create masternode --async --boot-disk-size 200GB --can-ip-forward --image-family ubuntu-2204-lts --image-project ubuntu-os-cloud --machine-type n2-standard-2 --private-network-ip 10.0.96.11 --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring --subnet [subnet name] --tags kubeadm-node,controller`
Worker Nodes: (10.0.96.21+ for the other worker nodes)
`gcloud compute instances create workernode1 --async --boot-disk-size 100GB --can-ip-forward --image-family ubuntu-2204-lts --image-project ubuntu-os-cloud --machine-type n2-standard-2 --private-network-ip 10.0.96.20 --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring --subnet [subnet name] --tags kubeadm-node,worker`
Print the Internal IP address and Pod CIDR range for each worker node
`gcloud compute instances describe workernode1 --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'`
List the compute instances in your default compute zone:
`gcloud compute instances list --filter="tags.items=kubeadm-node"`
Test SSH Into Google Cloud VM Instance (You will need to SSH into all the VMs/Nodes to install software)
`gcloud compute ssh [compute instance name]`
RUN THESE INSTALLATIONS ON ALL NODES
a. `sudo -i`
b. `apt-get update && apt-get upgrade -y`
c. `apt install curl apt-transport-https vim git wget gnupg2 software-properties-common apt-transport-https ca-certificates uidmap lsb-release -y`
d. `swapoff -a`
INSTALL AND CONFIGURE CONTAINER RUNTIME PREREQUISITES ON ALL NODES
Verify that the br_netfilter module is loaded by running
`lsmod | grep br_netfilter`
In order for a Linux node's iptables to correctly view bridged traffic, verify that `net.bridge.bridge-nf-call-iptables` is set to 1 in your sysctl by running the following commands:
`cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF`
`sudo modprobe overlay`
`sudo modprobe br_netfilter`
sysctl params required by setup, params persist across reboots
`cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF`
Apply sysctl params without reboot
`sudo sysctl --system`
INSTALL CONTAINER RUNTIME ON ALL NODES
a. `mkdir -p /etc/apt/keyrings`
b. `curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg`
c. `echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null`
d. `apt-get update`
e. `apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin`
CONFIGURE CGROUP DRIVER FOR CONTAINERD ON ALL NODES (We will use the more advanced SystemD that comes with Ubuntu 2204)
a. `stat -fc %T /sys/fs/cgroup/` (Check to see if you are using the supported cgroupV2)
b. `sudo containerd config default | sudo tee /etc/containerd/config.toml` (Make sure that the config.toml is present with defaults)
c. Set `SystemdCgroup = true` to use the systemd cgroup driver in config.toml:
`[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]`
...
`[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]`
`SystemdCgroup = true`
d. `sudo systemctl restart containerd` (Restart ContainerD)
INSTALL KUBEADM, KUBELET & KUBECTL ON ALL NODES
Download the Google Cloud public signing key:
a. `sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg`
Add the Kubernetes apt repository:
b. `echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list`
c. `sudo apt-get update`
`sudo apt-get install -y kubelet kubeadm kubectl`
`sudo apt-mark hold kubelet kubeadm kubectl`
CONFIGURE CGROUP DRIVER FOR MASTER NODE (Add this section to kubeadm-config.yaml if you are using an OS with SystemD support, and change the kubernetesVersion to the actual one installed by kubeadm)
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.30.0
controlPlaneEndpoint: "masternode:6443"
networking:
  podSubnet: 10.200.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
CONFIGURE HOSTNAME FOR MASTER NODE
Open the hosts file: `nano /etc/hosts`
Add Master Node's Static IP and preferred Hostname (10.0.96.11 masternode)
INITIALIZE KUBEADM ON MASTER NODE (Remember to save the token hash)
`kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out`
Log out from ROOT if you are still ROOT.
To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:
`mkdir -p $HOME/.kube`
`sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config`
`sudo chown $(id -u):$(id -g) $HOME/.kube/config`
INSTALL A POD NETWORKING INTERFACE ON MASTER NODE
Download and Install the Tigera Calico operator and custom resource definitions.
`kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/tigera-operator.yaml`
Download and Install Calico by creating the necessary custom resource.
Before installing remember to change the CALICO_IPV4POOL_CIDR to the POD_CIDR (10.200.0.0/16)
`kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/custom-resources.yaml`
JOIN WORKER NODES TO THE CONTROL PLANE (MASTER NODE)
Run the command below inside each worker node with the token you got from the cli when you initialized kubeadm on Master Node:
`kubeadm join masternode:6443 --token n0smf1.ixdasx8uy109cuf8 --discovery-token-ca-cert-hash sha256:f6bce2764268ece50e6f9ecb7b933258eac95b525217b8debb647ef41d49a898` | codewired | |
1,865,128 | ASP.NET Core Navigation markers | Introduction Learn how to make standard navigation links user friendly and accessible.... | 22,454 | 2024-05-25T18:50:11 | https://dev.to/karenpayneoregon/aspnet-core-navigation-markers-2j38 | javascript, webdev, tutorial, frontend | ## Introduction
Learn how to make standard navigation links user friendly and accessible. With the standard ASP.NET Core project template the navigation appears as shown below.

This is okay if there is a header (H1 tag) or breadcrumbs on each page while without either of these there is no indication on which page a visitor is on.
There is an easy way to show the visitor which page they are on by adding a handful of lines of JavaScript to the project’s site.js file located under the wwwroot/js folder of a project.
## Examples
{% cta https://github.com/karenpayneoregon/bootstrap-samples/tree/master/Project1 %} Sample project {% endcta %}
All of the following examples shown below are easy to implement, copy the example code in wwwroot/js/site.js and run the project. If the example code colors are not desirable simply change the colors to suite the application color schemes.
### Example 1
This version places a top and bottom border on the navigation link for the current page.

```javascript
document.addEventListener('DOMContentLoaded', () => {
document.querySelectorAll('.nav-link').forEach(link => {
link.classList.remove('border-bottom');
link.classList.remove('border-top');
if (link.getAttribute('href').toLowerCase() === location.pathname.toLowerCase()) {
link.classList.add('border-dark');
link.classList.add('border-bottom');
link.classList.add('border-top');
} else {
link.classList.add('text-dark');
}
});
});
```
Here the border-dark has been changed to border-danger.

### Example 2
This example has no border; instead, on the active page the background color is changed to `bg-primary` and the foreground color set to `text-white`.

```javascript
document.addEventListener('DOMContentLoaded', () => {
document.querySelectorAll('.nav-link').forEach(link => {
link.classList.remove('text-dark');
link.classList.remove('bg-primary');
if (link.getAttribute('href').toLowerCase() === location.pathname.toLowerCase()) {
link.classList.add('text-white');
link.classList.add('bg-primary');
} else {
link.classList.add('text-dark');
}
});
})
```
### Example 3
This version places an underline beneath the active navigation link for the current page, see Bootstrap [documentation](https://blog.getbootstrap.com/#new-link-helpers-and-utilities) [New nav underline](https://getbootstrap.com/docs/5.3/components/navs-tabs/#underline).
**Step 1**
Add `nav-underline` to the `nav` element in _Layout.cshtml.

**Step 2**
Add the following code to wwwroot/js.site.js
```javascript
document.addEventListener('DOMContentLoaded', () => {
document.querySelectorAll('.nav-link').forEach(link => {
link.classList.remove('active');
if (link.getAttribute('href').toLowerCase() === location.pathname.toLowerCase()) {
link.classList.add('active');
}
});
});
```

## Special cases
In some cases, accessibility guidelines indicate not to repeat links. The following code adds click event listeners for an About page directly below any of the above code samples.
```javascript
document.querySelectorAll('a#aboutFooter').forEach(link => {
link.addEventListener('click', (e) => {
window.location = 'About';
});
});
document.querySelectorAll('a#aboutNav').forEach(link => {
link.addEventListener('click', (e) => {
window.location = 'About';
});
});
```
## Mechanics
The code in site.js runs in the document's load event, before a page is rendered. There may be cases where this clashes with existing document load events, which means the code needs to be refactored.
## Summary
The basics have been shown and demonstrated to use borders and colors to indicate to a visitor of a web page is the active page for ASP.NET Core projects.
**Next steps**
For accessibility, consider adding `aria-current="page"` and `aria-current="true"` when setting the active page in the code samples presented as some visitors may be vision impaired and can not see colors.
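A sketch of how that might look, with the active-state logic factored into a small helper so both the CSS class and the ARIA attribute are managed in one place (the `applyActiveState` function name is my own, not part of the project template):

```javascript
// Applies or clears the active styling and aria-current attribute
// on a nav link, given the current path. Returns true if the link
// matches the current page.
function applyActiveState(link, currentPath) {
  const isActive =
    link.getAttribute('href').toLowerCase() === currentPath.toLowerCase();
  if (isActive) {
    link.classList.add('active');
    link.setAttribute('aria-current', 'page');
  } else {
    link.classList.remove('active');
    link.removeAttribute('aria-current');
  }
  return isActive;
}

// Guarded so the helper can also be exercised outside a browser.
if (typeof document !== 'undefined') {
  document.addEventListener('DOMContentLoaded', () => {
    document.querySelectorAll('.nav-link').forEach(link => {
      applyActiveState(link, location.pathname);
    });
  });
}
```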
## Resources
- Bootstrap [Navbar](https://getbootstrap.com/docs/5.0/components/navbar/) documentation.
- Bootstrap [colors](https://getbootstrap.com/docs/5.0/utilities/colors/)
- Bootstrap [JavaScript](https://getbootstrap.com/docs/5.0/getting-started/javascript/)
| karenpayneoregon |
1,865,127 | Achieving Success in Online Learning: A Practical Guide | In my last article, I mentioned how I started my programming journey with the CS50 course. As luck... | 0 | 2024-05-25T18:43:09 | https://giftmugweni.hashnode.dev/achieving-success-in-online-learning | onlinelearning, gamedev, planning | In my last article, I mentioned how I started my programming journey with the CS50 course. As luck would have it, in the process of going to their site so I could properly reference it, I learnt they introduced a [bunch of new interesting courses](https://www.edx.org/cs50). For me, my interest was piqued by the below courses
* [CS50's Introduction to Artificial Intelligence with Python](https://www.edx.org/learn/artificial-intelligence/harvard-university-cs50-s-introduction-to-artificial-intelligence-with-python?webview=false&campaign=CS50%27s+Introduction+to+Artificial+Intelligence+with+Python&source=edx&product_category=course&placement_url=https%3A%2F%2Fwww.edx.org%2Fcs50)
* [CS50's Introduction to Game Development](https://www.edx.org/learn/game-development/harvard-university-cs50-s-introduction-to-game-development?webview=false&campaign=CS50%27s+Introduction+to+Game+Development&source=edx&product_category=course&placement_url=https%3A%2F%2Fwww.edx.org%2Fcs50)
* [CS50's Web Programming with Python and JavaScript](https://www.edx.org/learn/web-development/harvard-university-cs50-s-web-programming-with-python-and-javascript?webview=false&campaign=CS50%27s+Web+Programming+with+Python+and+JavaScript&source=edx&product_category=course&placement_url=https%3A%2F%2Fwww.edx.org%2Fcs50)
Although I know all the above subjects at a surface level, I thought it'd be interesting to dive deep into one of the topics and see what new stuff I could learn.
Considering my current experience in programming, the courses I thought I'd learn the most from were the Game Programming and Artificial Intelligence courses. I also wasn't in the mood to learn Artificial Intelligence as I did two courses that dove deep into machine learning at university which gave me a good impression of what to expect from the course. Hence, intending to learn and have fun, I decided to do the Game Programming course.
With the course decided upon, I tried determining what key things I wanted to ensure I obtained so I would approach it with the right frame of mind. For this, I considered my state of being and made the following observations.
1. I'll likely only be able to allocate an average of 1 to 2 hours per day to this course
2. My primary goal is entertainment
3. I don't need the certificate
4. I'd like to be able to know how I can go about making my own unique exciting game that all my 3 users will fall in love with
With my priorities set, I skimmed through the course to get an idea of what to expect, what I might like and what I might hate about the course. From this, I set the following constraints for myself.
1. I don't just want to copy the code exactly
2. Where I can, I should create my game assets (e.g. sounds, music, sprites) within reason.
Aside from this, I noticed the 2D game section was written using the [Lua programming language](https://www.lua.org/), and the 3D game section used the [Unity Game engine](https://unity.com/). Having played around with Lua for a bit, I realised I didn't like using it. There wasn't any rational reason for my dislike. It was mostly vibes but, considering one of my primary goals was entertainment, it was a real issue I had to resolve otherwise I'd likely drop the course as time went on.
And so, I started looking into alternative tools I could use for making 2D games which would satisfy the fun factor for me. After endless Googling with a hint of Bing, Chat GPT,... and Bard, I learnt about these alternative tools I might enjoy more.
* [Unity Game engine](https://unity.com/) which uses C#
* [Pygame](https://www.pygame.org/news) which uses Python
* [Godot](https://godotengine.org/) which uses GDScript or C#
* [Phasor](https://phaser.io/) which uses TypeScript or JavaScript
* [PixiJS](https://pixijs.com/) which uses TypeScript or JavaScript
With my list present, it was time for the hard part, deciding what to use. From what I had seen from the initial course videos, the games were being built using a relatively low-level approach since they used [Love2D](https://www.love2d.org/). This meant things like physics, state machines, and event propagation were being built as we went and I got the impression that the only thing mostly abstracted was the rendering (showing stuff on the screen). This meant I'd need a tool that didn't hide too much stuff from me and allowed me to drown in complexity. This left me with Pygame and PixiJS since they were the most similar in functionality to Love2D as far as I could tell with the only difference being the language used.
Of the remaining two tools, I did enjoy both so ..., I flipped a coin and ended up choosing PixiJS as my tool.
With all this said, this is how I set myself up for potential success in tackling the course and I even went about finishing the first two lessons along the way. You can find the Flappy Bird game here [(Fifty Bird (stelele.github.io))](https://stelele.github.io/pixijs-flappy-bird/) and here is the link to the repo [(https://github.com/Stelele/pixijs-flappy-bird)](https://github.com/Stelele/pixijs-flappy-bird) though considering this is for fun, you're gonna need a keyboard to play it.
Making the flappy bird was quite fun and in my next article, I look forward to explaining how I went about making it and preparing the assets for it. | gift_mugweni_1c055b418706 |
1,865,126 | Looking for Angular Mentor | Looking for a mentor who is well versed with Angular framework and can help me build a dairy farm... | 0 | 2024-05-25T18:41:30 | https://dev.to/kmuppalla/looking-for-angular-mentor-56ig | angular, javascript, webdev, beginners | Looking for a mentor who is well versed with Angular framework and can help me build a dairy farm application. For backend it can be something like firebase. This is for learning Angular and Javascript. I have some experience with Python programming but thats about it. Looking for guidance and best practices. | kmuppalla |
1,865,125 | Nginx Ingress Controller-Part01 | one of opensource projects for Kubernetes ingress controllers, for ex: nginx-ingress-controller What... | 0 | 2024-05-25T18:37:46 | https://dev.to/sambo2021/kubernetes-ingress-in-a-nutshell-part01-28j | kubernetes, nginx, aws, helm | one of opensource projects for Kubernetes ingress controllers, for ex: nginx-ingress-controller
What you truly deploy for your services is ingress resources, but an ingress controller is required so that those ingress resources come to life.
So please keep in mind that an ingress resource is different from an ingress controller.
What is Ingress?
Ingress is an API object for routing and load balancing requests to a kubernetes service. Ingress can run on HTTP or HTTPS protocols and performs redirection by applying the rules we define as developers.
What is Ingress Controller?
Ingress Controller is a backend service developed with the Ingress API. It reads Ingress objects and takes actions to properly route incoming requests. Ingress Controllers can perform load balancing as well as forwarding operations. There are many Ingress Controllers in use
[ingress-controllers](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/)
- **Install Ingress Controller**
First, you need to deploy the nginx controller via its helm chart:
```
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx \
--create-namespace \
--set-string controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"="nlb"
```
-- The default service type in the chart is LoadBalancer: https://github.com/kubernetes/ingress-nginx/blob/3b1908e20693c57a97b55d8a563da284a5dbf36c/charts/ingress-nginx/values.yaml#L482

-- And to make the created load balancer an NLB, it is defined as an annotation on the nginx-ingress-controller Service:
```
controller:
service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
```
For example, suppose we have the public domain "tools.com".
-- To set up SSL/TLS termination on the AWS load balancer:
Simply put, this is a way of abstracting TLS handling: terminate TLS on the load balancer and keep plain HTTP inside the cluster by default.
Request a public ACM certificate for your custom domain `"api.tools.com"` and the wildcard domain `"*.api.tools.com"`, and don't forget to create the certificate's validation CNAME record under your public hosted zone.
Then use the ACM certificate ARN in the controller Service annotations and define the SSL port as "https".
And of course, don't forget to create records for all needed subdomains `"api.tools.com"` and `"*.api.tools.com"` routing to your NLB as type A (alias) records — the certificate validation record, by contrast, is a CNAME.
Now you can set up not just one ingress controller but several, each with its own NLB and hostname, for example:
hostnameA: ui.api.tools.com -> routes to all UI websites
hostnameB: services.api.tools.com -> routes to RESTful API services
And how does an ingress resource know which ingress-nginx controller should handle it?
--- Each one will have a unique nginx controller class.
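A sketch of how this looks in practice (the class name `nginx-services` here is hypothetical): give each controller release its own ingress class in its chart values, and point the Ingress resource at that class by name:

```yaml
# values.yaml for one controller release (hypothetical class name)
controller:
  ingressClassResource:
    name: nginx-services
    controllerValue: "k8s.io/ingress-nginx-services"
---
# Ingress resource that selects that controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: services-demo
spec:
  ingressClassName: nginx-services
  rules:
    - host: services.api.tools.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo
                port:
                  number: 80
```

Any Ingress with a different `ingressClassName` is simply ignored by this controller.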
Another important note: the ingress controller should have the minimum permissions needed to create the load balancer on AWS. This should use an IRSA role, passed as an annotation on the serviceAccount inside the helm chart.
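For that IRSA setup, the role is typically attached via a serviceAccount annotation in the chart values — a sketch, where the account ID and role name are hypothetical:

```yaml
controller:
  serviceAccount:
    create: true
    annotations:
      eks.amazonaws.com/role-arn: "arn:aws:iam::111122223333:role/ingress-nginx-irsa"
```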
```
controller:
service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:55xxxxxxx:certificate/5k0c5513-a947-6cc5-a506-b3yxxx
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
```
-- Choosing Publicly Accessible
This will configure the AWS load balancer for public access
```
controller:
service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
```
-- An NLB in front of the NGINX Ingress Controller may overwrite the client IP. How to retain the actual client IP:
You need proxy protocol enabled on your NLB and the matching configuration in ingress-nginx.
```
controller:
config:
use-proxy-protocol: "true"
real-ip-header: "proxy_protocol"
use-forwarded-headers: "true"
```
-- So finally, this may be all you need:
```
controller:
config:
use-proxy-protocol: "true"
real-ip-header: "proxy_protocol"
use-forwarded-headers: "true"
service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:55xxxxxxx:certificate/5k0c5513-a947-6cc5-a506-b3yxxx
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
```
-- What the chart deploys:
-<u>ingress-nginx</u> namespace
-<u>ingress-nginx-controller-7ed7998c-j2er5</u> pod
-<u>ingress-nginx-controller</u> service of type LoadBalancer
-<u>ingress-nginx-controller-admission</u> service of type ClusterIP (a validating admission controller which helps prevent outages due to wrong ingress configuration)
-EXTERNAL-IP -> points to the AWS load balancer DNS name, which gets created when the Ingress Controller is installed, because the chart creates a Service of type LoadBalancer
-- In detail:
The controller deploys, configures, and manages Pods that contain instances of nginx, which is a popular open-source HTTP and reverse proxy server. These Pods are exposed via the controller’s Service resource, which receives all the traffic intended for the relevant applications represented by the Ingress and backend Services resources. The controller translates Ingress and Services’ configurations, in combination with additional parameters provided to it statically, into a standard nginx configuration. It then injects the configuration into the nginx Pods, which route the traffic to the application’s Pods.
The Ingress-Nginx Controller Service is exposed for external traffic via a load balancer. That same Service can be consumed internally via the usual <u>ingress-nginx-controller.ingress-nginx.svc.cluster.local</u> cluster DNS name.

- **Create Deployment and Expose it as a service**
```
# create deployment
kubectl create deployment demo --image=nginx --port=80
# expose deployment as a service
kubectl expose deployment demo
# Create Ingress resource to route request to demo service
kubectl create ingress demo --class=nginx \
--rule your-public-domain/=demo:80
```

References:
- https://kubernetes.github.io/ingress-nginx/
- https://aws.amazon.com/blogs/containers/exposing-kubernetes-applications-part-3-nginx-ingress-controller/
- https://repost.aws/questions/QUw4SGJL79RO2SMT-LbpDRoQ/nlb-with-nginx-ingress-controller-is-overwriting-client-ip-how-to-retain-actual-client-ip
- https://dev.to/zenika/kubernetes-nginx-ingress-controller-10-complementary-configurations-for-web-applications-ken
- https://aws.amazon.com/blogs/opensource/network-load-balancer-nginx-ingress-controller-eks/ | sambo2021 |
1,865,124 | Understanding the Basics of HTTP Status Codes | When developing web applications, understanding HTTP status codes is essential for effectively... | 0 | 2024-05-25T18:35:44 | https://dev.to/vidyarathna/understanding-the-basics-of-http-status-codes-40b9 | statuscodes, http, beginners, webapplications | When developing web applications, understanding HTTP status codes is essential for effectively managing client-server communication. These codes, sent by the server in response to a client's request, are part of the HTTP protocol and indicate whether a request was successful, if an error occurred, or if further actions are needed. Here’s an overview of HTTP status codes and their significance.
## 1. Informational Responses (100-199)
Informational responses indicate that the request was received and understood, and that the process is continuing.
- **100 Continue**: The initial part of a request has been received and the client can continue with the rest of the request.
- **101 Switching Protocols**: The server is switching protocols as requested by the client.
- **102 Processing**: The server has received and is processing the request, but no response is available yet.
## 2. Successful Responses (200-299)
Successful responses indicate that the request was successfully received, understood, and accepted.
- **200 OK**: The request has succeeded. The meaning of the success depends on the HTTP method (GET, POST, etc.).
- **201 Created**: The request has been fulfilled and has resulted in the creation of a new resource.
- **202 Accepted**: The request has been accepted for processing, but the processing has not been completed.
- **204 No Content**: The server successfully processed the request, but is not returning any content.
## 3. Redirection Messages (300-399)
Redirection messages indicate that further action needs to be taken by the client in order to complete the request.
- **301 Moved Permanently**: The requested resource has been permanently moved to a new URL.
- **302 Found**: The requested resource resides temporarily under a different URL.
- **304 Not Modified**: Indicates that the resource has not been modified since the version specified by the request headers.
## 4. Client Error Responses (400-499)
Client error responses indicate that there was a problem with the request.
- **400 Bad Request**: The server could not understand the request due to invalid syntax.
- **401 Unauthorized**: The client must authenticate itself to get the requested response.
- **403 Forbidden**: The client does not have access rights to the content.
- **404 Not Found**: The server cannot find the requested resource.
- **429 Too Many Requests**: The user has sent too many requests in a given amount of time.
## 5. Server Error Responses (500-599)
Server error responses indicate that the server failed to fulfill a valid request.
- **500 Internal Server Error**: The server encountered an unexpected condition that prevented it from fulfilling the request.
- **502 Bad Gateway**: The server, while acting as a gateway or proxy, received an invalid response from the upstream server.
- **503 Service Unavailable**: The server is not ready to handle the request. Common causes are server overload or maintenance.
- **504 Gateway Timeout**: The server, while acting as a gateway or proxy, did not receive a timely response from the upstream server.
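The five classes above can be captured in a tiny helper — a sketch in plain JavaScript (the function name `statusClass` is my own):

```javascript
// Map an HTTP status code to the class it belongs to.
function statusClass(code) {
  if (code >= 100 && code < 200) return "informational";
  if (code >= 200 && code < 300) return "successful";
  if (code >= 300 && code < 400) return "redirection";
  if (code >= 400 && code < 500) return "client error";
  if (code >= 500 && code < 600) return "server error";
  return "unknown";
}

console.log(statusClass(201)); // → successful
console.log(statusClass(404)); // → client error
```

In a `fetch` handler you would usually branch on `response.ok` (true for 200–299) or on `response.status` directly instead of re-deriving the class.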
HTTP status codes play a crucial role in web development and API design by providing essential information about the result of a client's request. By familiarizing yourself with these codes, you can better handle responses and ensure robust and reliable web applications. Understanding these codes is key for effective development and troubleshooting. | vidyarathna |
1,865,206 | Creating a Docker image of a React application | In this guide we are going to dockerize a React application. Prerequisites: - Have Docker installed... | 0 | 2024-05-30T16:41:59 | https://www.ahioros.info/2024/05/creando-una-imagen-docker-de-una.html | devops, docker, linux, spanish | ---
title: Creating a Docker image of a React application
published: true
date: 2024-05-25 18:24:00 UTC
tags: DevOps,Docker,Linux,spanish
canonical_url: https://www.ahioros.info/2024/05/creando-una-imagen-docker-de-una.html
---
In this guide we are going to dockerize a React application.
Prerequisites:
- Have [Docker](https://www.docker.com/) installed
- A project in [react](https://github.com/ahioros/devops-kubernetes-sr-azure)
<!-- add the read more button -->
1. Create the dockerignore file
First we need to know what the [dockerignore](https://docs.docker.com/reference/dockerfile/#dockerignore-file) file is: it describes a list of files that we don't want copied into the container. For example, the `node_modules` directory will not be copied into the container.
+ Create the .dockerignore file and open it with your favorite text editor, then add the following line:
```bash
node_modules
```
+ Save and close the file.
2. Create the Dockerfile
What is a Dockerfile? The [Dockerfile](https://docs.docker.com/reference/dockerfile/#overview) defines the container we are going to build. In this case, we are going to build a container called rdicidr.
+ Create the Dockerfile and open it with your favorite text editor.
```bash
FROM node:15
WORKDIR /app
# Copy the dependency manifests first so the npm install layer is cached
COPY package.json package-lock.json ./
RUN npm install
# Copy the rest of the source (node_modules is excluded by .dockerignore)
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
```
+ Save and close the file.
3. Build the docker image
```bash
docker build -t rdicidr .
```
4. Test our newly created image
```bash
docker run -p 3000:3000 rdicidr:latest
```
Here is the video of this setup in case you have any questions:
{% youtube 429osQtM-M4 %}
 | ahioros |
1,865,123 | executive limousines | Executive Limousines is one of Melbourne’s most trusted premier chauffeuring services. With over 35... | 0 | 2024-05-25T18:23:24 | https://dev.to/executive_limousines_c7a5/executive-limousines-1khh | Executive Limousines is one of Melbourne’s most trusted [premier chauffeuring services](https://executivelimousines.com.au/). With over 35 years experience in the industry, satisfaction is guaranteed. providing an unrivalled High Class service, reaching your destination has never been so easy.
Our fleet and partners are comprised of the latest high end luxury Sedans, SUVs, Vans, Buses, Limousines and Coaches.
Whether it be Social gatherings, Executive transfers, Wedding rides, Business itineraries, Airport pickups, Deliveries/Courier, Winery/Country tours or even a Sightseeing trip around Australia, with 40 of your friends, we have it covered.
Take the stress out of travel and immerse yourself in the latest luxury comfort with Executive Limousines.
Treat yourself to a service which will leave you feeling zealous. | executive_limousines_c7a5 |
1,865,122 | Overnight Project | My vacation are going on and I dont know what to do as per say so I am going to make a project... | 0 | 2024-05-25T18:23:03 | https://dev.to/anushlinux/overnight-project-46jo | My vacation is going on and I don't know what to do per se, so I am going to make a project assigned to me. I am starting now and will not sleep before finishing it, as it's not a big project, so wish me luck...
> (ps- I also need new ideas on what to make; as of right now I am thinking of making an online poker game in React, so let's see)
| anushlinux | |
1,865,105 | Understanding JUnit: A Comprehensive Guide with Examples | JUnit is one of the most widely used testing frameworks for Java programming. It provides a simple... | 0 | 2024-05-25T18:18:31 | https://dev.to/fullstackjava/understanding-junit-a-comprehensive-guide-with-examples-ei3 | webdev, javascript, beginners, programming | **JUnit** is one of the most widely used testing frameworks for Java programming. It provides a simple yet powerful way to write and run repeatable tests, making it an essential tool for developers to ensure the reliability and correctness of their code. In this blog, we will explore JUnit in detail, covering its features, how to set it up, and how to use it effectively with examples.
### What is JUnit?
JUnit is a unit testing framework for the Java programming language. It plays a crucial role in test-driven development (TDD) and allows developers to write tests for individual units of source code. JUnit promotes the creation of simple, repeatable tests, helping to identify bugs early in the development process.
### Key Features of JUnit
1. **Annotations**: JUnit uses annotations to identify test methods and control the execution of test cases.
2. **Assertions**: Provides methods to check if a condition is true.
3. **Test Runners**: Executes the test methods.
4. **Fixtures**: Common test data setup and teardown methods.
5. **Test Suites**: Grouping multiple test cases to run together.
### Setting Up JUnit
Before we dive into examples, let's set up JUnit in a Java project. There are several ways to do this, including using build tools like Maven or Gradle.
#### Using Maven
Add the following dependency to your `pom.xml` file:
```xml
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.13.2</version>
<scope>test</scope>
</dependency>
```
#### Using Gradle
Add the following to your `build.gradle` file:
```groovy
testImplementation 'junit:junit:4.13.2'
```
### Writing Tests with JUnit
Let's start with a basic example. Consider a simple class `Calculator` with methods for addition and subtraction.
```java
public class Calculator {
    public int add(int a, int b) {
        return a + b;
    }

    public int subtract(int a, int b) {
        return a - b;
    }

    // Integer division; dividing by zero throws ArithmeticException,
    // which the exception-testing examples later in this post rely on.
    public int divide(int a, int b) {
        return a / b;
    }
}
```
#### Basic Test Case
Here's how you can write a test case for the `Calculator` class using JUnit:
```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;
public class CalculatorTest {
@Test
public void testAdd() {
Calculator calculator = new Calculator();
int result = calculator.add(2, 3);
assertEquals(5, result);
}
@Test
public void testSubtract() {
Calculator calculator = new Calculator();
int result = calculator.subtract(5, 3);
assertEquals(2, result);
}
}
```
### JUnit Annotations
JUnit uses various annotations to define and control the test methods. Some of the key annotations are:
- `@Test`: Marks a method as a test method.
- `@Before`: Executed before each test. Used for setting up test data.
- `@After`: Executed after each test. Used for cleanup.
- `@BeforeClass`: Executed once before any test in the class. Used for expensive setup.
- `@AfterClass`: Executed once after all tests in the class. Used for cleanup.
- `@Ignore`: Ignores the test method.
#### Using `@Before` and `@After`
```java
import static org.junit.Assert.assertEquals;
import org.junit.Before;
import org.junit.After;
import org.junit.Test;
public class CalculatorTest {
private Calculator calculator;
@Before
public void setUp() {
calculator = new Calculator();
}
@After
public void tearDown() {
calculator = null;
}
@Test
public void testAdd() {
int result = calculator.add(2, 3);
assertEquals(5, result);
}
@Test
public void testSubtract() {
int result = calculator.subtract(5, 3);
assertEquals(2, result);
}
}
```
### Assertions in JUnit
JUnit provides various assertion methods to verify the test results. Some commonly used assertions are:
- `assertEquals(expected, actual)`: Checks if two values are equal.
- `assertTrue(condition)`: Checks if the condition is true.
- `assertFalse(condition)`: Checks if the condition is false.
- `assertNull(object)`: Checks if the object is null.
- `assertNotNull(object)`: Checks if the object is not null.
### Testing for Exceptions
JUnit allows testing for exceptions using the `expected` attribute of the `@Test` annotation.
```java
import org.junit.Test;
public class CalculatorTest {
@Test(expected = ArithmeticException.class)
public void testDivideByZero() {
Calculator calculator = new Calculator();
calculator.divide(1, 0);
}
}
```
Alternatively, you can use the `assertThrows` method in JUnit 4.13 and later:
```java
import static org.junit.Assert.assertThrows;
import org.junit.Test;
public class CalculatorTest {
@Test
public void testDivideByZero() {
Calculator calculator = new Calculator();
assertThrows(ArithmeticException.class, () -> {
calculator.divide(1, 0);
});
}
}
```
### Running Tests
JUnit tests can be run in various ways:
- **Integrated Development Environment (IDE)**: Most modern Java IDEs (e.g., IntelliJ IDEA, Eclipse) support running JUnit tests directly.
- **Command Line**: Using build tools like Maven or Gradle.
- **Continuous Integration (CI)**: Automated build systems like Jenkins, Travis CI, etc.
### Conclusion
JUnit is a powerful framework for writing and running tests in Java. It promotes good testing practices and helps ensure the reliability and correctness of code. By integrating JUnit into your development workflow, you can catch bugs early, improve code quality, and maintain a robust codebase.
Here’s a quick recap of what we covered:
- What JUnit is and its key features.
- Setting up JUnit using Maven or Gradle.
- Writing basic test cases.
- Using annotations like `@Test`, `@Before`, `@After`, `@BeforeClass`, `@AfterClass`.
- Using assertions to validate test outcomes.
- Testing for exceptions.
By following these practices and examples, you can start incorporating JUnit into your Java projects effectively. | fullstackjava |
1,864,161 | Shopping cart Quantity Component | Note, This post is for beginners  In this post we see how to design a e-commerce shopping app... | 0 | 2024-05-25T18:14:11 | https://dev.to/raguram90/shopping-cart-quantity-component-52jp | react, reactnative, javascript, mobile | Note: this post is for beginners.
In this post we see how to design an e-commerce shopping app quantity (qty) component with regex validation.
This is the basic page with the basic functions; hopefully you know them already.

```javascript
import React, { useState } from 'react';
import {
View,
Text,
StyleSheet,
TextInput,
TouchableOpacity,
} from 'react-native';
export default function App() {
const [qty, setQty] = useState(0);
return (
<View style={styles.container}>
<View style={styles.qtyLine}>
<TouchableOpacity
onPress={() => qty < 100 && setQty(qty + 1)}
style={styles.btn}>
<Text style={styles.btnText}>{'+'}</Text>
</TouchableOpacity>
<TextInput
keyboardType={'number-pad'}
maxLength={3}
style={styles.qtyInput}
value={'' + qty}
onChangeText={(txt) => {
let num = Number(txt);
if (!isNaN(num) && num > 0 && num < 101) {
setQty(num);
} else setQty(0);
}}
/>
<TouchableOpacity
onPress={() => qty > 0 && setQty(qty - 1)}
style={styles.btn}>
<Text style={styles.btnText}>{'-'}</Text>
</TouchableOpacity>
<TouchableOpacity
onPress={() => setQty(0)}
style={[styles.btn, { marginLeft: 5 }]}>
<Text style={styles.btnText}>{'x'}</Text>
</TouchableOpacity>
</View>
</View>
);
}
const styles = StyleSheet.create({
container: {
flex: 1,
backgroundColor: 'snow',
alignItems: 'center',
justifyContent: 'center',
padding: 10,
paddingTop: 20,
},
qtyLine: {
flexDirection: 'row',
alignItems: 'center',
justifyContent: 'center',
},
qtyInput: {
marginHorizontal: 10,
borderWidth: 0.5,
borderColor: 'dodgerblue',
padding: 5,
textAlign: 'center',
width: '20%',
},
btn: {
backgroundColor: 'dodgerblue',
paddingVertical: 5,
paddingHorizontal: 10,
borderRadius: 3,
},
btnText: {
color: 'white',
},
});
```
```javascript
Number(txt)
// also converts an empty string and blank space(s) to 0
```
At present, in the TextInput `onChangeText` event, we are checking 3 conditions: that the text is a number, that the number is > 0, and that it is < 101. This can be simplified into one step by using regex validation.
Let's touch the TextInput component. In the `onChangeText` event we use a **regex** to validate that the given string is a number between 0 and 100.
TextInput with RegEx pattern
```javascript
const qtyRgPtn = /^([0-9][0-9]?|100)$/;
. . .
<TextInput onChangeText={txt =>
qtyRgPtn.test(txt) && setQty(Number(txt))} />
```
so simple isn't it?
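You can sanity-check the pattern in plain Node before wiring it into the component (note it also accepts `"0"`, which the component treats as the reset value):

```javascript
// The same qty pattern used in onChangeText above.
const qtyRgPtn = /^([0-9][0-9]?|100)$/;

for (const s of ["1", "42", "100", "0", "101", "abc", ""]) {
  console.log(JSON.stringify(s), "->", qtyRgPtn.test(s));
}
```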
Now, in addition to that, we show a warning message for a short time when the user inputs an invalid qty.
```javascript
const qtyRgPtn = /^([0-9][0-9]?|100)$/;
export default function App() {
const [qty, setQty] = useState(0);
const [qtyError, setQtyError] = useState('');
. . .
<TextInput
onChangeText={(txt) => {
/*
let num = Number(txt);
if (!isNaN(num) && num > 0 && num < 101) {
setQty(num);
} else {
setQty(0);
}
*/
if (qtyRgPtn.test(txt)) setQty(Number(txt));
else {
setQty(0);
if (!txt || !txt.trim().length) {
setQtyError('Qty should be in between 1-100');
setTimeout(() => setQtyError(''), 2000);
}
}
}}
/>
. . .
{/* final child of container */}
{qtyError ? (
<Text style={{ color: 'salmon', fontSize: 12, marginTop: 5 }}>
{qtyError}
</Text>
) : null}
```
Good! we are done :)

Hope this post will be useful. Source code [here](https://gist.github.com/RaguRam1991/f9645a09e4a93e0d4815164e7fa1d82e). Thank you.
| raguram90 |
1,865,103 | Buy verified cash app account | https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash... | 0 | 2024-05-25T18:11:08 | https://dev.to/maxtyhuilianofrost141/buy-verified-cash-app-account-bha | webdev, javascript, beginners, programming | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\n\n\n\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. 
If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. 
1,865,102 | Spring vs Spring MVC vs Spring Boot: A Detailed Comparison for Java Developers | In the ever-evolving landscape of Java development, the Spring Framework has emerged as a powerhouse,... | 0 | 2024-05-25T18:09:52 | https://dev.to/nikhilxd/spring-vs-spring-mvc-vs-spring-boot-a-detailed-comparison-for-java-developers-39ic | webdev, javascript, programming, springboot | In the ever-evolving landscape of Java development, the Spring Framework has emerged as a powerhouse, providing a robust and comprehensive solution for building enterprise-level applications. However, navigating the Spring ecosystem can be a daunting task, especially for newcomers, as it encompasses a multitude of projects and modules. This blog aims to shed light on three pivotal components of the Spring ecosystem: Spring, Spring MVC, and Spring Boot, exploring their unique features and how they seamlessly integrate to deliver a robust development experience.
#### 1. The Spring Framework
**Overview:**
The Spring Framework is the bedrock upon which the entire Spring ecosystem is built. It is a comprehensive framework that provides a wide range of infrastructure support for developing Java applications, enabling developers to create robust, secure, and scalable solutions.
**Key Features:**
- **Inversion of Control (IoC) and Dependency Injection (DI):** At the heart of Spring lies the IoC principle, which facilitates the creation, configuration, and management of objects within the framework. This is primarily achieved through Dependency Injection, a design pattern that promotes loose coupling and reusability.
- **Aspect-Oriented Programming (AOP):** Spring's AOP framework enables the separation of cross-cutting concerns, such as logging and security, from the application's core business logic, promoting modularity and maintainability.
- **Data Access:** Spring offers a consistent abstraction layer for data access, seamlessly integrating with various data access technologies like JDBC, JPA, Hibernate, and others.
- **Transaction Management:** Enterprise applications often require robust transaction management capabilities, and Spring excels in this area, providing a comprehensive solution for managing transactions across multiple data sources.
- **Spring Core:** The core module of Spring serves as the foundation, providing the fundamental functionalities of the framework, including the IoC container.
**Use Cases:**
- Building enterprise-level applications with complex transaction management requirements.
- Applications that demand a flexible and modular architecture for maintainability and scalability.
- Projects that require integration with multiple data sources and external systems.
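The IoC/DI idea described above can be sketched in plain Java with no Spring dependency at all. Every name below (`MessageService`, `Notifier`, and so on) is illustrative rather than a Spring API — the sketch only shows the constructor injection that Spring's container automates:

```java
// Constructor-based dependency injection, done by hand.
// Spring's IoC container automates exactly this wiring.
interface MessageService {
    String send(String text);
}

class EmailService implements MessageService {
    public String send(String text) {
        // A real implementation would talk to an SMTP server.
        return "EMAIL: " + text;
    }
}

class Notifier {
    // Depend on the abstraction, not the concrete class (loose coupling).
    private final MessageService service;

    // The dependency is injected from outside; with Spring, the container
    // resolves and passes it in based on configuration or annotations.
    Notifier(MessageService service) {
        this.service = service;
    }

    String notifyUser(String text) {
        return service.send(text);
    }
}

public class DiSketch {
    public static void main(String[] args) {
        // Manual wiring here; Spring replaces this with the IoC container.
        Notifier notifier = new Notifier(new EmailService());
        System.out.println(notifier.notifyUser("Server restarted"));
    }
}
```

Because `Notifier` only knows the `MessageService` interface, swapping in an SMS or push implementation requires no change to `Notifier` itself — the loose coupling and reusability the framework promotes.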
#### 2. Spring MVC (Model-View-Controller)
**Overview:**
Spring MVC is a powerful web framework within the Spring ecosystem, designed to simplify the development of web applications. It follows the Model-View-Controller (MVC) architectural pattern, which separates an application into three distinct components: Model, View, and Controller.
**Key Features:**
- **DispatcherServlet:** Acting as the front controller, the DispatcherServlet routes incoming requests to the appropriate controllers for handling.
- **Controllers:** These components handle incoming requests, process data using the model, and return views for rendering the user interface.
- **Model:** Representing the application's data and encapsulating the business logic.
- **View:** Responsible for rendering the user interface, typically using technologies like JSP, Thymeleaf, or FreeMarker.
- **Validation:** Spring MVC provides built-in support for validating request parameters and model attributes, ensuring data integrity.
- **Form Handling:** Simplifying the process of handling form submissions and binding form data to Java objects.
**Use Cases:**
- Developing traditional web applications and RESTful web services.
- Applications that require a clear separation between the user interface and business logic.
- Projects that demand robust request handling and form processing capabilities.
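To make the DispatcherServlet/controller split concrete, here is a toy front controller in plain Java. It is a hypothetical sketch — none of these names are Spring MVC classes — showing the pattern of one entry point receiving every "request" and delegating to the controller registered for that path:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A toy front controller: one entry point routes each request to a handler,
// mirroring how Spring MVC's DispatcherServlet delegates to controllers.
public class MiniDispatcher {
    private final Map<String, Function<String, String>> routes = new HashMap<>();

    // In Spring MVC, @RequestMapping annotations register handlers like this.
    void register(String path, Function<String, String> controller) {
        routes.put(path, controller);
    }

    String handle(String path, String body) {
        Function<String, String> controller = routes.get(path);
        if (controller == null) {
            return "404 Not Found";
        }
        // The "view" in this sketch is simply the returned string.
        return controller.apply(body);
    }

    public static void main(String[] args) {
        MiniDispatcher dispatcher = new MiniDispatcher();
        dispatcher.register("/greet", name -> "Hello, " + name + "!");
        System.out.println(dispatcher.handle("/greet", "Ada"));  // Hello, Ada!
        System.out.println(dispatcher.handle("/missing", ""));   // 404 Not Found
    }
}
```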
#### 3. Spring Boot
**Overview:**
Spring Boot is a project built on top of the Spring Framework that aims to simplify the development of new Spring applications. It embraces an opinionated approach, providing a set of conventions and defaults that expedite the development process, allowing developers to focus on writing actual code rather than dealing with extensive configuration.
**Key Features:**
- **Auto-Configuration:** Spring Boot automatically configures your Spring application based on the dependencies you include in your project, reducing the need for manual configuration.
- **Embedded Servers:** Bundled with embedded servers like Tomcat, Jetty, and Undertow, Spring Boot enables you to run your application as a standalone Java application without the need for a separate web server.
- **Starter POMs:** Simplifying Maven/Gradle dependencies management, Spring Boot offers a set of convenient dependency descriptors known as Starter POMs.
- **Spring Boot CLI:** A command-line tool that empowers developers to quickly prototype Spring applications using Groovy scripts.
- **Actuator:** Spring Boot's Actuator module adds production-ready features such as health checks, metrics, and monitoring, ensuring your applications are ready for deployment.
**Use Cases:**
- Rapid prototyping and development of microservices.
- Projects where time-to-market is critical, and you want to avoid boilerplate configuration.
- Applications that need to be easily deployable as standalone executables with embedded web servers.
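Auto-configuration can be illustrated with a small, hypothetical sketch: inspect the classpath and configure a component only when its prerequisite class is present. Spring Boot does this far more elaborately (through `@Conditional`-style machinery), so treat this purely as an analogy:

```java
// A toy version of Spring Boot's auto-configuration idea: detect what is on
// the classpath and pick a sensible default configuration accordingly.
public class AutoConfigSketch {

    static boolean isPresent(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    static String configureDataAccess() {
        // java.sql ships with the JDK, so this branch is taken here;
        // a missing optional dependency would fall through to the default.
        if (isPresent("java.sql.DriverManager")) {
            return "jdbc-template";
        }
        return "no-op";
    }

    public static void main(String[] args) {
        System.out.println("Configured: " + configureDataAccess());
    }
}
```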
#### Comparison and Integration
**Spring vs. Spring MVC vs. Spring Boot:**
- **Spring:** The core framework providing fundamental features like IoC, DI, and AOP. It is the backbone upon which Spring MVC and Spring Boot are built.
- **Spring MVC:** A module within Spring focused on building web applications using the MVC design pattern. It leverages Spring's core features but is specifically tailored for web layer development.
- **Spring Boot:** A project that simplifies the creation and configuration of Spring applications, including those using Spring MVC. It builds upon Spring and Spring MVC, adding conventions, auto-configuration, and tools to reduce development time and complexity.
**Integration:**
Spring Boot seamlessly integrates with Spring MVC, often including it as part of its auto-configuration process. For example, when developing a web application with Spring Boot, Spring MVC components are automatically configured and set up if they are found on the classpath. This streamlined integration allows developers to leverage Spring MVC's powerful web development features without the need for manual configuration, saving time and effort.
#### Conclusion
The Spring ecosystem is a vast and powerful toolset for Java developers, offering a comprehensive solution for building robust and scalable applications. Spring, Spring MVC, and Spring Boot work in harmony, each serving a distinct purpose while complementing one another.
The Spring Framework lays the foundation with its core features, such as IoC, DI, and AOP, fostering modularity and reusability. Spring MVC extends this foundation by providing a robust web application framework based on the MVC pattern, enabling developers to build intuitive and maintainable web applications. Spring Boot, built atop Spring and Spring MVC, simplifies the development process by embracing an opinionated approach, offering auto-configuration and conventions that streamline application setup and deployment.
Together, these components form a powerful toolkit for modern Java developers, empowering them to create high-performance, scalable, and maintainable applications while reducing development time and complexity. Whether you're building enterprise-level applications, microservices, or traditional web applications, mastering the Spring ecosystem will undoubtedly unlock new horizons of productivity and efficiency in your Java development journey. | nikhilxd |
1,865,101 | Let's learn WebP before introduce it | What's the WebP WebP is a powerful image format developed by Google that prioritizes both... | 0 | 2024-05-25T18:07:43 | https://dev.to/yamashee/lets-learn-webp-before-introduce-it-5c82 | webp, seo, performance | ## What's the WebP
WebP is a powerful image format developed by Google that prioritizes both **smaller file sizes** and **high image quality**.
Launched in 2010, it's recently gained significant traction due to:
### Internet Explorer's Retirement (June 2022)
Previously, Internet Explorer's lack of WebP support hindered its adoption. With its discontinuation, WebP has become a viable option for websites seeking faster loading times.
### SEO and User Experience Benefits
Smaller image sizes translate to faster page loads, a crucial factor for both search engine ranking and user experience.

## Advantages of WebP
### Reduced File Size
Compared to JPEG, WebP offers **20-80% smaller file sizes** while maintaining similar image quality. This significantly improves website loading speed.
### Minimal Quality Loss
WebP utilizes advanced compression techniques that minimize image quality degradation.
### Supports Transparency
Unlike JPEG, WebP offers lossless compression with transparency, similar to PNG. This makes it ideal for graphics with transparent backgrounds.
## Things to Consider with WebP
### Limited Editing Software Support
Older versions of popular image editing software like Photoshop and Illustrator can't open WebP files, so check your tools' versions before adopting the format.
### Compatibility with Applications
WebP files might not be directly compatible with certain applications like Microsoft Office. For instance, WebP images can't be pasted directly into dev.to either.
## Converting Images to WebP
There are several methods to convert images to WebP format:
### Squoosh
This free web app by Google allows you to convert images directly in your browser without uploading them to a server.
This ensures security for sensitive content. ([https://squoosh.app/](https://squoosh.app/))
### Canva Image Converter
This online service offers batch conversion, making it ideal for converting multiple images at once. ([https://www.canva.com/features/image-converter/](https://www.canva.com/features/image-converter/))
### Command Line
The official WebP website provides command-line tools for conversion.
This method offers greater control and automation but might be less user-friendly for beginners. ([https://developers.google.com/speed/webp/docs/using](https://developers.google.com/speed/webp/docs/using))
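For reference, a typical `cwebp` invocation looks like the following (the file names are placeholders; run `cwebp -longhelp` for the full option list):

```shell
# Lossy conversion at quality 80 (scale is 0-100; higher = better quality, larger file)
cwebp -q 80 input.png -o output.webp

# Lossless conversion, which preserves transparency exactly
cwebp -lossless input.png -o output-lossless.webp
```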
## Summary
By adopting WebP, you can significantly enhance your website's loading speed, improve user experience, and potentially boost your SEO ranking.
Remember to consider compatibility with other applications and editing software for a seamless workflow.
| yamashee |
1,865,100 | The Power of CSS in Styling-up Forms | This is the second part of our mini project on creating an Online Flipping Form. Here is the link for... | 0 | 2024-05-25T18:06:59 | https://dev.to/george_kingi/the-power-of-css-in-styling-up-forms-o8m | webdev, html, css, javascript | This is the second part of our mini project on creating an Online Flipping Form. Here is the link for Part 1, (https://dev.to/george_kingi/how-to-create-an-online-flipping-form-using-plain-html-and-css-4ka5.)
In part 1, we focused on the HTML and created the structure of the Front and Back sides of the online form. In this section, we will dive deep into CSS, introduce unique styling on our forms, and give some highlights on Javascript.
## Harnessing the Power of CSS in Styling-up Forms
Remember, we created a CSS file and linked it to the HTML file. Let's make the form flip: first, we apply changes to all elements on the webpage by selecting them with the universal selector (`*`), as below.
We then style the container as below; notice how this styling changes the contents and background of our webpage.
```
*{
margin: 0;
padding: 0;
font-family: sans-serif;
}
.container{
height: 100vh;
background: #f53207;
color: #340707;
display: flex;
align-items: center;
justify-content: center;
}
```
Output:

We additionally select the card for further styling. Remember to include the. (dot)
```
*{
margin: 0;
padding: 0;
font-family: sans-serif;
}
.container{
height: 100vh;
background: #f53207;
color: #340707;
display: flex;
align-items: center;
justify-content: center;
}
.card{
width: 350px;
height: 550px;
box-shadow: 0 0 20px 20px #16010142;
border-radius: 50px;
}
```
Output:

As we wind up styling our form, we apply some styling on the inner-card, card-front, card-back, and so on as below.
```
.card{
width: 350px;
height: 550px;
box-shadow: 0 0 20px 20px #16010142;
border-radius: 50px;
}
.inner-card{
height: 100%;
transform: rotateY(180deg);
transform-style: preserve-3d;
}
.card-front, .card-back{
position: absolute;
height: 100%;
padding: 50px;
background-color: #931c02;
box-sizing: border-box;
backface-visibility:hidden;
border-radius: 50px;
}
.card-back{
transform: rotateY(180deg);
}
.card h2{
font-weight: normal;
font-size: 24px;
text-align: center;
margin-bottom: 20px;
}
.input-box{
width: 100%;
background: transparent;
border: 1px solid #5e0b0b;
margin: 6px 0;
height: 32px;
border-radius: 20px;
padding: 5px;
box-sizing: border-box;
text-align: center;
}
::placeholder{
color: #fff;
font-size: 12px;
}
button{
width: 100%;
background: transparent;
border: 1px solid #fff;
margin: 25px 0 10px;
height: 32px;
font-size: 20px;
border-radius: 20px;
padding: 2px;
box-sizing: border-box;
outline: none;
color: white;
cursor: pointer;
}
.submit-btn{
position: relative;
}
.submit-btn::after{
content: '\27a4';
color: #333;
line-height: 32px;
font-size: 12px;
height: 32px;
width: 32px;
border-radius: 50%;
background: #fff;
position: absolute;
right: -1px;
top: -1px;
}
span{
font-size: 20px;
margin-left: 10px;
}
.card .btn{
margin-top: 10px;
}
.card a{
color: #fff;
text-decoration: none;
display: block;
text-align: center;
font-size: 13px;
margin-top: 8px;
}
```
To learn more about CSS selectors, properties, colors, and more visit https://www.w3schools.com/css/default.asp
The Output:

We need to introduce some JavaScript into our form to provide interactivity, so add the script shown below just before the closing `</body>` tag.
Here are the final HTML and CSS files that make our form flip on click:
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Online Flipping Form</title>
<link rel="stylesheet" href="style.css">
</head>
<body>
<div class="container">
<div class="card">
<div class="inner-card" id="card">
<div class="card-front">
<h2>LOGIN</h2>
<form>
<input type="email" class="input-box" placeholder="Email" required>
<input type="number" class="input-box" placeholder="Phone Number" required>
<input type="password" class="input-box" placeholder="password" required>
<button type="submit" class="submit-btn">Submit</button>
<input type="checkbox"><span>Remember me</span>
<button type="button" class="button" onclick="openLOGIN()">I am New Here</button>
<a href="">Forgot password</a>
</form>
</div>
<div class="card-back">
<h2>REGISTER</h2>
<form>
<input type="text" class="input-box" placeholder="Full Name" required>
<input type="email" class="input-box" placeholder="email" required>
<input type="number" class="input-box" placeholder="Phone Number" required>
<input type="password" class="input-box" placeholder="password" required>
<input type="date" class="input-box" placeholder="DOB" required>
<button type="submit" class="submit-btn">Submit</button>
<input type="checkbox"><span>Remember me</span>
<button type="button" class="button" onclick="openREGISTER()">I have an Account</button>
<a href="">Forgot password</a>
</form>
</div>
</div>
</div>
</div>
<script>
var card =document.getElementById("card");
function openLOGIN(){
card.style.transform = "rotateY(-180deg)";
}
function openREGISTER(){
card.style.transform = "rotateY(0deg)";
}
</script>
</body>
</html>
```
Our Final CSS File should have the below code.
```
*{
margin: 0;
padding: 0;
font-family: sans-serif;
}
.container{
height: 100vh;
background: #f53207;
color: #340707;
display: flex;
align-items: center;
justify-content: center;
}
.card{
width: 350px;
height: 550px;
box-shadow: 0 0 20px 20px #16010142;
border-radius: 50px;
}
.inner-card{
height: 100%;
transform: rotateY(180deg);
transform-style: preserve-3d;
transition: transform 2s;
}
.card-front, .card-back{
position: absolute;
height: 100%;
padding: 50px;
background-color: #931c02;
box-sizing: border-box;
backface-visibility:hidden;
border-radius: 50px;
}
.card-back{
transform: rotateY(180deg);
}
.card h2{
font-weight: normal;
font-size: 24px;
text-align: center;
margin-bottom: 20px;
}
.input-box{
width: 100%;
background: transparent;
border: 1px solid #5e0b0b;
margin: 6px 0;
height: 32px;
border-radius: 20px;
padding: 5px;
box-sizing: border-box;
text-align: center;
color: white;
}
::placeholder{
color: #fff;
font-size: 12px;
}
button{
width: 100%;
background: transparent;
border: 1px solid #fff;
margin: 25px 0 10px;
height: 32px;
font-size: 20px;
border-radius: 20px;
padding: 2px;
box-sizing: border-box;
outline: none;
color: white;
cursor: pointer;
}
.submit-btn{
position: relative;
}
.submit-btn::after{
content: '\27a4';
color: #333;
line-height: 32px;
font-size: 12px;
height: 32px;
width: 32px;
border-radius: 50%;
background: #fff;
position: absolute;
right: -1px;
top: -1px;
}
span{
font-size: 20px;
margin-left: 10px;
}
.card .btn{
margin-top: 10px;
}
.card a{
color: #fff;
text-decoration: none;
display: block;
text-align: center;
font-size: 13px;
margin-top: 8px;
}
```
The output will be as below:
(Never mind the quality 🤭😀)

| george_kingi |
1,865,099 | KISS: Keep It Simple, Stupid | Today, we're going to talk about *the KISS principle in JavaScript. * It's all about simplifying... | 0 | 2024-05-25T18:06:15 | https://dev.to/shehzadhussain/kiss-keep-it-simple-stupid-26o4 | webdev, javascript, programming, beginners | Today, we're going to talk about **the KISS principle in JavaScript**.
It's all about **simplifying your JavaScript code**. You:
- reduce bugs
- enhance readability
- improve maintainability
**You will save time and effort** in the long run.
Many devs struggle with overly complex code, leading to confusion and errors. Embracing simplicity prevents these pitfalls and streamlines your development process.
## Keep It Simple, Stupid = Simplicity = Smoother Development & Easier Maintenance
Keep the code easy to read and understand.
If you keep the code simple, you'll make it easier for everyone to fix things when they go wrong.
Don’t make your code too smart.
Make it simple. You and your teammates will thank you in the future when maintaining the code or adding new features.
Here are some simple code examples to see how to apply this principle:
## Arrow Functions



## Short-Circuit Evaluation

## Implicit Return

## Default Parameters

## Simple functions

## Avoiding Complex Functions

## Using Built-in Functions

## Avoiding the Old 'for' Loop

## Destructuring

## Spread Operator

## Async/Await over Promises

## Map, Filter, Reduce

## Template Literals

## Conclusion
Embracing the KISS principle in your JavaScript coding practices can significantly **improve your productivity and code quality**.
By keeping it simple, you'll write better code and make **your development process more enjoyable and efficient**.
So, next time you're tempted to overcomplicate things…
Remember: **Keep It Simple!**
| shehzadhussain |
1,865,097 | Symfony Station Communiqué — 24 May 2024: A look at Symfony, Drupal, PHP, Cybersec, and Fediverse News! | This article originally appeared on Symfony Station. Welcome to this week's Symfony Station... | 0 | 2024-05-25T17:56:31 | https://symfonystation.mobileatom.net/Symfony-Station-Communique-24-May-2024 | symfony, drupal, php, fediverse | This article [originally appeared on Symfony Station](https://symfonystation.mobileatom.net/Symfony-Station-Communique-24-May-2024).
Welcome to this week's Symfony Station communiqué. It's your review of the essential news in the Symfony and PHP development communities focusing on protecting democracy. That necessitates an opinionated Butlerian jihad against big tech as well as evangelizing for open-source and the Fediverse. We also cover the cybersecurity world. You can't be free without safety and privacy.
There's good content in all of our categories, so please take your time and enjoy the items most relevant and valuable to you. This is why we publish on Fridays. So you can savor it over your weekend.
Or jump straight to your favorite section via our website.
- [Symfony Universe](https://symfonystation.mobileatom.net/Symfony-Station-Communique-24-May-2024#symfony)
- [PHP](https://symfonystation.mobileatom.net/Symfony-Station-Communique-24-May-2024#php)
- [More Programming](https://symfonystation.mobileatom.net/Symfony-Station-Communique-24-May-2024#more)
- [Fighting for Democracy](https://symfonystation.mobileatom.net/Symfony-Station-Communique-24-May-2024#other)
- [Cybersecurity](https://symfonystation.mobileatom.net/Symfony-Station-Communique-24-May-2024#cybersecurity)
- [Fediverse](https://symfonystation.mobileatom.net/Symfony-Station-Communique-24-May-2024#fediverse)
Once again, thanks go out to Javier Eguiluz and Symfony for sharing [our communiqué](https://symfonystation.mobileatom.net/Symfony-Station-Communique-17-May-2024) in their [Week of Symfony](https://symfony.com/blog/a-week-of-symfony-907-13-19-may-2024).
**My opinions will be in bold. And will often involve cursing. Because humans.**
---
## Symfony
As always, we will start with the official news from Symfony.
Highlight -> "This week, the first release candidate version of Symfony 7.1 was published so you can test it in your projects before the stable release in two weeks. Meanwhile, we continued publishing more talks and information about the upcoming SymfonyOnline June 2024."
[A Week of Symfony #907 (13-19 May 2024)](https://symfony.com/blog/a-week-of-symfony-907-13-19-may-2024)
They also have:
[SymfonyLive Berlin 2024 postponed to 2025](https://symfony.com/blog/symfonylive-berlin-2024-postponed-to-2025)
[New in Symfony 7.1: Commands Improvements](https://symfony.com/blog/new-in-symfony-7-1-commands-improvements)
[New in Symfony 7.1: Mapped Route Parameters](https://symfony.com/blog/new-in-symfony-7-1-mapped-route-parameters)
**A good alternative to the MapEntity attribute in certain cases.**
[SymfonyCon Vienna 2024 - Submit your talk before July 8th](https://symfony.com/blog/symfonycon-vienna-2024-submit-your-talk-before-july-8th)
[SymfonyOnline June 2024: Using container's features to manage complexity](https://symfony.com/blog/symfonyonline-june-2024-using-container-s-features-to-manage-complexity)
[New in Symfony 7.1: POSIX Signals Improvements](https://symfony.com/blog/new-in-symfony-7-1-posix-signals-improvements)
[SymfonyOnline June 2024: Designing Security-First Symfony Apps](https://symfony.com/blog/symfonyonline-june-2024-designing-security-first-symfony-apps)
[Introducing Symfony Jobs](https://symfony.com/blog/introducing-symfony-jobs)
SymfonyCasts is back with:
[Kevin Bond (aka Zenstruck) joins SymfonyCasts](https://symfonycasts.com/blog/zenstruck-joins-symfonycasts)
**Great news.**
---
## Featured Item
Taggart writes:
I would never have imagined a year ago that Google would kill web searching as we know it. I did not expect absolutely every product in the tech sector to attempt to increase valuation by tossing LLMs into their product, regardless of functionality or utility. Perhaps I should have.
But the rapid takeover of the web by generative text and images? That I did see coming, and here we are. I don't know about you, but interacting with the wider web these days feels like I'm picking up a device with an exposed wire that zaps me about 50% of the time. It used to be 30%. Next week, it may be 70%. Point is, the web I grew up with, fell in love with, and have—in many ways—built a life around, is being choked out of existence. ...
For the rest of this piece, I'll be referring to this idea of "The Human Web." This is the network of sites and works created by people, without generative assistance. It is art, culture, journalism, history, technical information, and more. Is it commerce? Personally I think it has to be, but we'll get to that.
### [Meditations on The Human Web](https://taggart-tech.com/human-web/)
---
### This Week
Ion Bazan shows us:
[Write Future-Compatible PHP Code with Symfony Polyfills](https://dev.to/ionbazan/write-future-compatible-php-code-with-symfony-polyfills-380b)
Sylvain Blondeau's latest newsletter is out:
[Level 3 : Symfony 7.1 is coming](https://symfonylevelup.substack.com/p/level-3-symfony-71-is-coming)
**Which is unfortunately on Substack. He has great videos as well. Unfortunately, they are on YouTube. And they are both in French. But still check them out! ;)**
David Garcia explores:
[Using Symfony Console and Google Cloud API to translate your projects](https://blog.stackademic.com/using-symfony-console-and-google-cloud-api-to-translate-your-projects-fdddfd699795)
Yonel Ceruto shares:
[Symfony App Config in 2 steps](https://dev.to/yceruto/symfony-app-config-in-2-steps-26dl)
Radhwan Ben Youssef shows us:
[How to Use Traits in Symfony](https://medium.com/@radhwanrouihm/how-to-use-traits-in-symfony-af413edd685e)
Nacho Colomina Torregrosa demonstrates:
[Using a Symfony secret to encode your JWT Tokens](https://dev.to/icolomina/using-a-symfony-secret-to-encode-your-jwt-tokens-3167)
### eCommerce
Sylius examines:
[Expanding eCommerce into International Markets with Sylius](https://sylius.com/blog/expanding-ecommerce-into-international-markets-with-sylius/)
### CMSs
Sulu shows us:
[How To Merge Two Sulu Instances Together](https://sulu.io/blog/how-to-merge-two-sulu-instances-together)
<br/>
Concrete CMS looks at:
[Enhancing On-Site Search Functionality: Best Practices for Websites](https://www.concretecms.com/about/blog/web-design/enhancing-on-site-search-functionality-best-practices-for-websites)
[The Next Marketplace](https://www.concretecms.com/about/blog/news/the-next-marketplace)
<br/>
TYPO3 has:
[My First TYPO3 General Assembly](https://typo3.org/article/my-first-typo3-general-assembly)
[Improve TYPO3 Error Log — Q2 Budget Idea Report](https://typo3.org/article/improve-typo3-error-log-q2-budget-idea-report)
[Status Update on the Asynchronous Image Rendering Initiative for TYPO3](https://typo3.org/article/status-update-on-the-asynchronous-image-rendering-initiative-for-typo3)
<br/>
Joomla has:
[The May Issue](https://magazine.joomla.org/all-issues/may-2024/the-may-issue-2024)
[Content Templates - the Joomla Page Builder you didn't know you already had](https://magazine.joomla.org/all-issues/may-2024/content-templates-the-joomla-page-builder-you-didn-t-know-you-already-had "Content Templates - the Joomla Page Builder you didn't know you already had")
[Templates for Joomla - Episode 1: Templates, Frameworks & Clubs or not…](https://magazine.joomla.org/all-issues/may-2024/templates-frameworks-clubs-or-not "Templates for Joomla - Episode 1: Templates, Frameworks & Clubs or not…")
<br/>
Drupal has:
[DrupalCon Portland 2024 - Recapping Drupal’s most significant North American event!](https://www.drupal.org/association/blog/drupalcon-portland-2024-recapping-drupals-most-significant-north-american-event)
[Drupal GAAD Pledge 2024 Update](https://www.drupal.org/association/blog/drupal-gaad-pledge-2024-update)
Specbee is:
[Starstruck by the Drupal Starshot Initiative](https://www.specbee.com/blogs/drupal-starshot-initiative)
The Drop Times has:
[Acquia Engage London 2024: Insights from Featured Speakers](https://www.thedroptimes.com/40238/acquia-engage-london-2024-insights-featured-speakers)
**There are also stops in Paris, Tokyo, Boston. I attended the one in Miami last year and these are good events.**
[Enhancing Drupal 11: Transitioning Deprecated Modules to Contributed Alternatives](https://www.thedroptimes.com/40259/enhancing-drupal-11-transitioning-deprecated-modules-contributed-alternatives)
[Drupal Launches IXP Fellowship Initiative Survey to Support Inexperienced Developers](https://www.thedroptimes.com/40227/drupal-launches-ixp-fellowship-initiative-survey-support-inexperienced-developers)
On a related note, DrupalEasy opines:
[Drupal needs new, young developers](https://www.drupaleasy.com/blogs/ultimike/2024/05/drupal-needs-new-young-developers)
And:
[Ruminations on Drupal Starshot](https://www.drupaleasy.com/blogs/ultimike/2024/05/ruminations-drupal-starshot)
Wim Leers starts work on:
[XB (Experience Builder) week 1: 0.x branch opened!](https://wimleers.com/blog/xb-week-1)
Agile Drop has:
[Drupal Starshot - what is it & what does it mean for Drupal?](https://www.agiledrop.com/blog/drupal-starshot-what-it-what-does-it-mean-drupal)
**They look at it from the low code perspective. Hence ⬇️.**
[Low-code/no-code & the future of digital experience management](https://www.agiledrop.com/blog/low-codeno-code-future-digital-experience-management)
Gizra shows us:
[How We Made Drupal Starter 2X Faster for Authenticated Users](https://www.gizra.com/content/drupal-caching/)
Tag1 Consulting continues a series:
[Migrating Your Data from Drupal 7 to Drupal 10: Known issues](https://www.tag1consulting.com/blog/migrating-your-data-drupal-7-drupal-10-known-issues)
Golems explores:
[Simplifying Form Work in Drupal 10: Best Practices and Plugins](https://gole.ms/guidance/simplifying-form-work-drupal-10-best-practices-and-plugins)
Salsa Digital asks:
[Why Use Drupal?](https://salsa.digital/insights/why-use-drupal)
**And gives a very comprehensive answer.**
1X Internet lists:
[CMS features every editor and marketer needs](https://www.1xinternet.de/en/highlights/cms-features-every-editor-marketer-needs)
Gregg Boogs demonstrates:
[Transitioning from Drupal 7 to Backdrop CMS](https://www.gregboggs.com/transitioning-from-drupal7-to-backdrop-cms/)
### Previous Weeks
Gavin Murambadoro shows us:
[How to start a Symfony 7 application with Docker without having PHP locally installed on your machine](https://dev.to/gmurambadoro/how-to-start-a-symfony-7-application-with-docker-without-having-php-locally-installed-on-your-machine-28h2)
Vsevolod Girenko examines:
[Consistent validation with API Platform 3](https://dev.to/sauromates/consistent-validation-with-api-platform-3-56oi)
JoliCode shares:
[Ajouter un champ de texte statique dans un formulaire EasyAdmin](https://jolicode.com/blog/ajouter-un-champ-de-texte-statique-dans-un-formulaire-easyadmin)
Lullabot looks at:
[Drupal Release Planning in the Enterprise](https://www.lullabot.com/articles/drupal-release-planning-enterprise)
Debug Academy shows us:
[How to create custom sorting logic for Drupal views](https://debugacademy.com/article/create-custom-drupal-views-sort)
Acquia covers:
[The four big Drupal themes of DrupalCon Portland 2024](https://dev.acquia.com/blog/four-big-drupal-themes-drupalcon-portland-2024)
Capellic continues a series:
[Frontend performance optimization for Drupal websites: Part 4](https://capellic.com/blog/frontend-performance-optimization-drupal-websites-part-4)
Amazee has:
[DrupalCon Portland 2024 in 1,800 Words](https://www.amazee.io/blog/post/drupalcon-portland-2024-recap)
[LagoonCon Portland 2024 Recap](https://www.amazee.io/blog/post/lagooncon-portland-2024-recap)
---
## PHP
### This Week
And announces:
[amazee.io Unveils Self-Sign-Up](https://www.amazee.io/blog/post/self-sign-up-unveiled)
**I am definitely checking this out.**
Metaphorically Speaking explores:
[Primitive Obsession](https://acairns.co.uk/posts/primitive-obsession)
php [architect] examines:
[PHP’s Magic Methods](https://www.phparch.com/2024/05/phps-magic-methods/)
Ion Bazan has:
[How to see what changed in Composer files](https://dev.to/ionbazan/how-to-see-what-changed-in-composer-files-1ih6)
**This is a prequel to the Symfony article above.**
[Turn a country code into an emoji flag](https://dev.to/ionbazan/turn-a-country-code-into-an-emoji-flag-us--360a)
Fernando Castillo says:
[Value Objects in PHP can protect you from bad data](https://medium.com/@fernando_28520/value-objects-in-php-can-protect-you-from-bad-data-056582866333)
Alex Castellano shows us:
[How To Create WebP Images With PHP](https://alexwebdevelop.activehosted.com/social/6da37dd3139aa4d9aa55b8d237ec5d4a.343)
PHPStan announces:
[PHPStan 1.11 with Error Identifiers and New PHPStan Pro UI](https://mailchi.mp/883a4d87c433/phpstan-pro-just-got-a-lot-better-12701308?e=77f3af939a)
Jonas Elias has:
[Substituindo o Redis pelo Valkey em projetos PHP/Hyperf](https://dev.to/jonas-elias/substituindo-o-redis-pelo-valkey-em-projetos-phphyperf-3lh0)
Chris Sprayberry demonstrates:
[Annotated Container Without Attributes](https://www.cspray.io/blog/annotated-container-without-attributes/)
Sarah Savage explores:
[Air Traffic Control: Routing microservices with a single Nginx server](https://sarah-savage.com/air-traffic-control-routing-microservices-with-a-single-nginx-server/)
Mohamed Ahmed is:
[Implementing Feature Flagging in PHP Using AST Parsers](https://medium.com/@.Chromax/implementing-feature-flagging-in-php-using-ast-parsers-d2feec424b84)
Khairu Aqsara demonstrates:
[Avoiding Imports and Aliases in PHP](https://dev.to/khairuaqsara/avoiding-imports-and-aliases-in-php-52m0)
Sohel Ahmed shares:
[Understanding Prepared Statements in PHP and MySQL](https://medium.com/@sohel.ahmed2405/understanding-prepared-statements-in-php-and-mysql-dc009d38c7d3)
Paul Underwood has a quick tip:
[Performance Metrics Using Guzzle](https://paulund.co.uk/performance-metrics-using-guzzle)
Wasmer examines:
[Running PHP blazingly fast at the Edge with WebAssembly](https://wasmer.io/posts/running-php-blazingly-fast-at-the-edge-with-wasm)
**This sounds awesome. You can test drive it with Symfony, Laravel, and WordPress.**
Darko Todorić shows us:
[How to configure PHP in Airflow?](https://dev.to/darkotodoric/how-to-configure-php-in-airflow-5d9i)
Itsimiro is:
[Unlocking the Power of Attributes in PHP](https://itsimiro.medium.com/unlocking-the-power-of-attributes-in-php-a6af57225bbf)
Laravel News looks at:
[New Proposed Array Find Functions in PHP 8.4](https://laravel-news.com/php-8-4-array-find-functions)
Grant Horwood shows us how to do it now:
[php: write php 8.4’s array_find from scratch](https://gbh.fruitbat.io/2024/05/21/php-write-php-8-4s-array_find-from-scratch/)
### Previous Weeks
---
## More Programming
And has:
[bash: splitting tarballs the ‘easy’ way](https://gbh.fruitbat.io/2024/05/21/bash-splitting-tarballs-the-easy-way/)
**This is interesting.**
TechCrunch opines:
[I’m rooting for Melinda French Gates to fix tech’s broken ‘brilliant jerk’ culture](https://techcrunch.com/2024/05/19/melinda-french-gates-fixing-tech-bro-culture-brilliant-jerk/)
**That would be great.**
Bruce Lawson declares:
[CSS :has(), the God Selector](https://brucelawson.co.uk/2024/css-has-the-god-selector/)
My man Jason Knight explores:
[Testing Website Speed And Quality](https://medium.com/codex/testing-website-speed-and-quality-e37622bd5889)
**And as usual, it's not looking good for frontend frameworks.**
Smashing Magazine has:
[Hidden vs. Disabled In UX](https://www.smashingmagazine.com/2024/05/hidden-vs-disabled-ux/)
[Modern CSS Layouts: You Might Not Need A Framework For That](https://www.smashingmagazine.com/2024/05/modern-css-layouts-no-framework-needed/)
[Best Practices For Naming Design Tokens, Components, Variables, And More](https://www.smashingmagazine.com/2024/05/naming-best-practices/)
[Switching It Up With HTML’s Latest Control](https://www.smashingmagazine.com/2024/05/switching-it-up-html-latest-control/)
Frontend Masters asks:
[We’ve Got Container Queries Now, But Are We Actually Using Them?](https://frontendmasters.com/blog/weve-got-container-queries-now-but-are-we-actually-using-them/)
Roman Agabekov shows us:
[How to Check MySQL Database and Table Sizes](https://dev.to/drupaladmin/how-to-check-mysql-database-and-table-sizes-2ep2)
---
## Fighting for Democracy
[Please visit our Support Ukraine page](https://symfonystation.mobileatom.net/Support-Ukraine) to learn how you can help
kick Russia out of Ukraine (eventually, like ending apartheid in South Africa).
### The cyber response to Russia’s War Crimes and other douchebaggery
The Hacker News reports:
[Chinese Nationals Arrested for Laundering $73 Million in Pig Butchering Crypto Scam](https://thehackernews.com/2024/05/chinese-nationals-arrested-for.html)
404 Media reports:
[Hacker Breaches Scam Call Center, Warns Victims They've Been Scammed](https://www.404media.co/hacker-breaches-scam-call-center-emails-its-scam-victims/)
Ars Technica reports:
[Tesla shareholder group opposes Musk’s $46B pay, slams board “dysfunction”](https://arstechnica.com/tech-policy/2024/05/tesla-shareholder-group-opposes-musks-46b-pay-slams-board-dysfunction/)
**They recommend the board vote against Elon Musk's $46 billion pay package and to vote against the reelection of board members Kimbal Musk and James Murdoch. Which the full board would do if they were capitalists as opposed to ass-licking ideologues.**
[Google Search’s “udm=14” trick lets you kill AI search for good](https://arstechnica.com/gadgets/2024/05/google-searchs-udm14-trick-lets-you-kill-ai-search-for-good/)
BitDefender reports:
[23-year-old alleged founder of dark web Incognito Market arrested after FBI tracks cryptocurrency payments](https://www.bitdefender.com/blog/hotforsecurity/23-year-old-alleged-founder-of-dark-web-incognito-market-arrested-after-fbi-tracks-cryptocurrency-payments/)
TechCrunch reports:
[‘Pro-competition’ rules for Big Tech make it through UK’s pre-election wash-up](https://techcrunch.com/2024/05/23/pro-competition-rules-for-big-tech-make-it-through-uks-pre-election-wash-up/)
The Register reports:
[Man behind deepfake Biden robocall indicted on felony charges, faces $6M fine](https://www.theregister.com/2024/05/24/biden_robocall_charges/)
**This sets a good precedent. Because unfortunately, there is more of this coming.**
### ???
Ars Technica reports:
[Lawmakers say Section 230 repeal will protect children—opponents predict chaos](https://arstechnica.com/tech-policy/2024/05/lawmakers-say-section-230-repeal-will-protect-children-opponents-predict-chaos/)
**If they passed a law canceling it for Big Tech only, that would move it up a section.**
### The Evil Empire Strikes Back
DarkReading reports:
[Russia's Turla APT Abuses MSBuild to Deliver TinyTurla Backdoor](https://www.darkreading.com/cyberattacks-data-breaches/russia-turla-apt-msbuild-tinyturla-backdoor)
EuroNews has:
[Russia waging shadow war on West: Estonia PM](https://www.euronews.com/2024/05/22/russia-waging-shadow-war-on-west-estonia-pm)
[Why is Central Europe at heightened risk of fake news ahead of European elections?](https://www.euronews.com/my-europe/2024/05/22/why-is-central-europe-at-a-heightened-risk-of-fake-news-ahead-of-the-european-elections)
Pravda Ukraine reports:
[Russia uses Moldova as testing ground for new influence technologies – Moldovan Foreign minister](https://www.pravda.com.ua/eng/news/2024/05/24/7457395/)
The Markup reports:
[The Inside Story of the YouTube Influencer Who Peddles Misinformation to Vietnamese Communities](https://themarkup.org/languages-of-misinformation/2024/05/22/the-inside-story-of-the-youtube-influencer-who-peddles-misinformation-to-vietnamese-communities)
The Hacker News reports:
[Inside Operation Diplomatic Specter: Chinese APT Group's Stealthy Tactics Exposed](https://thehackernews.com/2024/05/inside-operation-diplomatic-specter.html)
TechDirt opines:
[Decentralized Systems Will Be Necessary To Stop Google From Putting The Web Into Managed Decline](https://www.techdirt.com/2024/05/21/decentralized-systems-will-be-necessary-to-stop-google-from-putting-the-web-into-managed-decline/)
The Verge reports:
[Lawyers say OpenAI could be in real trouble with Scarlett Johansson](https://www.theverge.com/2024/5/22/24162429/scarlett-johansson-openai-legal-right-to-publicity-likeness-midler-lawyers)
404 Media reports:
[Google Is Paying Reddit $60 Million for Fucksmith to Tell Its Users to Eat Glue](https://www.404media.co/google-is-paying-reddit-60-million-for-fucksmith-to-tell-its-users-to-eat-glue/)
**This may be the greatest article title of all time. And fuck both these c^nts.**
[Nonconsensual AI Porn Maker Accidentally Leaks His Customers' Emails](https://www.404media.co/nonconsensual-ai-porn-maker-accidentally-leaks-his-customers-emails/)
[Amazon Kills Shareholder Proposals on Worker Protections and AI Oversight](https://www.404media.co/amazon-kills-shareholder-proposals-on-worker-treatment-transparency/)
Sherwood News reports:
[Facebook's top poster is a Catholic fundamentalist page. Is Meta OK?](https://sherwood.news/tech/catholic-fundamentalism-largest-publisher-on-facebook/)
**Uh, no.**
Vox reports:
[“Everyone is absolutely terrified”: Inside a US ally’s secret war on its American critics](https://www.vox.com/world-politics/24160779/inside-indias-secret-campaign-to-threaten-and-harass-americans)
### Cybersecurity/Privacy
The Register reports:
[With ransomware whales becoming so dominant, would-be challengers ask 'what's the point?'](https://www.theregister.com/2024/05/21/with_ransomware_whales_becoming_so/)
Dark Reading reports:
[Transforming CISOs Into Storytellers](https://www.darkreading.com/cyber-risk/transforming-cisos-into-storytellers)
**This is a good strategy.**
BleepingComputer reports:
[High-severity GitLab flaw lets attackers take over accounts](https://www.bleepingcomputer.com/news/security/high-severity-gitlab-flaw-lets-attackers-take-over-accounts/)
The Next Web reports:
[Dutch cybercops tracked a crypto theft to one of the world’s worst botnets](https://thenextweb.com/news/eset-dutch-police-discover-ebury-malware-in-cryptocurrency)
---
### Fediverse
The Fediverse Report has:
[Last Week in Fediverse – ep 69](https://fediversereport.com/last-week-in-fediverse-ep-69/)
Conspirador Norteño looks at:
[Federation and political spam](https://conspirator0.substack.com/p/federation-and-political-spam)
Hypha announces:
[Social Reader is out!](https://hypha.coop/dripline/social-reader-is-out/)
**Explore this if you aren't quite ready for a Fediverse account.**
TechCrunch reports:
[Meta’s Oversight Board takes its first Threads case](https://techcrunch.com/2024/05/20/metas-oversight-board-takes-its-first-threads-case/)
Not Root explores:
[Adding a Fediverse Share Button to my Emacs Nikola Blog](https://blog.notroot.online/posts/adding-a-fediverse-share-button-to-my-emacs-nikola-blog/)
Digiday reports on:
[Why publishers are preparing to federate their sites](https://digiday.com/media/why-publishers-are-preparing-to-federate-their-sites/)
We Distribute has:
[A Primer on Mastodon’s New Board Members](https://wedistribute.org/2024/05/mastodons-board-members/)
[FediVision 2024 is Live! Listen and Vote!](https://wedistribute.org/2024/05/listen-fedivision-2024/)
**There are only a few days left to vote.**
### Other Federated Social Media
And:
[Bluesky Introduces Direct Messages](https://wedistribute.org/2024/05/bluesky-introduces-dms/)
---
## CTAs (aka show us some free love)
- That’s it for this week. Please share this communiqué.
- Also, please [join our newsletter list for The Payload](https://newsletter.mobileatom.net/). Joining gets you each week's communiqué in your inbox (a day early).
- Follow us [on Flipboard](https://flipboard.com/@mobileatom/symfony-for-the-devil-allupr6jz) or at [@symfonystation@drupal.community](https://drupal.community/@SymfonyStation) on Mastodon for daily coverage.
- Do you like Reddit? Why? Instead, follow us [on kbin](https://kbin.social/u/symfonystation) for a better Fediverse and Symfony-based experience. We have a [Symfony Magazine](https://kbin.social/m/Symfony) and [Collection](https://kbin.social/c/SymfonyUniverse) there.
Do you own or work for an organization that would be interested in our promotion opportunities? Or supporting our journalistic efforts? If so, please get in touch with us. We’re in our toddler stage, so it’s extra economical. 😉
More importantly, if you are a Ukrainian company with coding-related products, we can offer free promotion on [our Support Ukraine page](https://symfonystation.mobileatom.net/Support-Ukraine). Or, if you know of one, get in touch.
You can find a vast array of curated evergreen content on our [communiqués page](https://symfonystation.mobileatom.net/communiques).
## Author

### Reuben Walker
Founder
Symfony Station
@reubenwalker64
---

# Build Real-time transcription app using React Hooks

*Published 2024-05-25 on dev.to · tags: showdev, webdev, react, ai · https://dev.to/video-sdk/build-real-time-transcription-app-using-react-hooks-1coe*

## Step-by-Step Tutorial for the Code
This tutorial will guide you through the setup and functioning of the React app using the VideoSDK.live SDK. The app allows users to join or create meetings and provides functionalities for recording and transcription.
### Prerequisites
1. **Node.js and npm**: Ensure you have Node.js and npm installed.
2. **VideoSDK Account**: Sign up at [VideoSDK.live](https://www.videosdk.live) and get your API key.
### Step 1: Setup the Project
1. **Create a React App**: If you don't already have a React app, create one using the following command:
```sh
npx create-react-app my-video-app
cd my-video-app
```
2. **Install Dependencies**: Install the necessary packages.
```sh
npm install @videosdk.live/react-sdk react-player
```
### Step 2: Setup API Functions
Create an `API.js` file in the `src` directory to handle API interactions.
```js
// API.js
export const authToken = "YOUR_VIDEO_SDK_AUTH_TOKEN";
export const createMeeting = async ({ token }) => {
const response = await fetch("https://api.videosdk.live/v1/meetings", {
method: "POST",
headers: {
Authorization: token,
"Content-Type": "application/json",
},
});
const data = await response.json();
return data.meetingId;
};
```
### Step 3: Build the Components
In your `App.js`, import necessary modules and create components as described below.
#### JoinScreen Component
Allows users to enter a meeting ID or create a new meeting.
```js
import React, { useState } from "react";
function JoinScreen({ getMeetingAndToken }) {
const [meetingId, setMeetingId] = useState(null);
const onClick = async () => {
await getMeetingAndToken(meetingId);
};
return (
<div>
<input
type="text"
placeholder="Enter Meeting Id"
onChange={(e) => {
setMeetingId(e.target.value);
}}
/>
<button onClick={onClick}>Join</button>
{" or "}
<button onClick={onClick}>Create Meeting</button>
</div>
);
}
export default JoinScreen;
```
#### ParticipantView Component
Displays the video and audio of a participant.
```js
import React, { useEffect, useMemo, useRef } from "react";
import { useParticipant } from "@videosdk.live/react-sdk";
import ReactPlayer from "react-player";
function ParticipantView(props) {
const micRef = useRef(null);
const { webcamStream, micStream, webcamOn, micOn, isLocal, displayName } =
useParticipant(props.participantId);
const videoStream = useMemo(() => {
if (webcamOn && webcamStream) {
const mediaStream = new MediaStream();
mediaStream.addTrack(webcamStream.track);
return mediaStream;
}
}, [webcamStream, webcamOn]);
useEffect(() => {
if (micRef.current) {
if (micOn && micStream) {
const mediaStream = new MediaStream();
mediaStream.addTrack(micStream.track);
micRef.current.srcObject = mediaStream;
micRef.current
.play()
.catch((error) =>
console.error("videoElem.current.play() failed", error)
);
} else {
micRef.current.srcObject = null;
}
}
}, [micStream, micOn]);
return (
<div>
<p>
Participant: {displayName} | Webcam: {webcamOn ? "ON" : "OFF"} | Mic:{" "}
{micOn ? "ON" : "OFF"}
</p>
<audio ref={micRef} autoPlay playsInline muted={isLocal} />
{webcamOn && (
<ReactPlayer
playsinline
pip={false}
light={false}
controls={false}
muted={true}
playing={true}
url={videoStream}
height={"300px"}
width={"300px"}
onError={(err) => {
console.log(err, "participant video error");
}}
/>
)}
</div>
);
}
export default ParticipantView;
```
#### Controls Component
Provides buttons to leave the meeting and toggle mic/webcam.
```js
import React from "react";
import { useMeeting } from "@videosdk.live/react-sdk";
function Controls() {
const { leave, toggleMic, toggleWebcam } = useMeeting();
return (
<div>
<button onClick={() => leave()}>Leave</button>
<button onClick={() => toggleMic()}>toggleMic</button>
<button onClick={() => toggleWebcam()}>toggleWebcam</button>
</div>
);
}
export default Controls;
```
#### MeetingView Component
Main component to handle meeting functionalities like transcription, recording, and displaying participants.
```js
import React, { useState } from "react";
import { useMeeting, useTranscription, Constants } from "@videosdk.live/react-sdk";
import ParticipantView from "./ParticipantView";
import Controls from "./Controls";
function MeetingView(props) {
const [transcript, setTranscript] = useState("Transcription");
const [transcriptState, setTranscriptState] = useState("Not Started");
const tConfig = { webhookUrl: "https://www.example.com" };
const { startTranscription, stopTranscription } = useTranscription({
onTranscriptionStateChanged: (data) => {
const { status } = data;
if (status === Constants.transcriptionEvents.TRANSCRIPTION_STARTING) {
setTranscriptState("Transcription Starting");
} else if (status === Constants.transcriptionEvents.TRANSCRIPTION_STARTED) {
setTranscriptState("Transcription Started");
} else if (status === Constants.transcriptionEvents.TRANSCRIPTION_STOPPING) {
setTranscriptState("Transcription Stopping");
} else if (status === Constants.transcriptionEvents.TRANSCRIPTION_STOPPED) {
setTranscriptState("Transcription Stopped");
}
},
onTranscriptionText: (data) => {
let { participantName, text, timestamp } = data;
console.log(`${participantName}: ${text} ${timestamp}`);
// Use the functional form so repeated events don't append to a stale value
setTranscript((prev) => prev + `\n${participantName}: ${text} ${timestamp}`);
},
});
const { startRecording, stopRecording } = useMeeting();
const handleStartRecording = () => {
startRecording("YOUR_WEB_HOOK_URL", "AWS_Directory_Path", {
layout: { type: "GRID", priority: "SPEAKER", gridSize: 4 },
theme: "DARK",
mode: "video-and-audio",
quality: "high",
orientation: "landscape",
});
};
const handleStopRecording = () => stopRecording();
const handleStartTranscription = () => startTranscription(tConfig);
const handleStopTranscription = () => stopTranscription();
const [joined, setJoined] = useState(null);
const { join, participants } = useMeeting({
onMeetingJoined: () => setJoined("JOINED"),
onMeetingLeft: () => props.onMeetingLeave(),
});
const joinMeeting = () => {
setJoined("JOINING");
join();
};
return (
<div className="container">
<h3>Meeting Id: {props.meetingId}</h3>
{joined && joined === "JOINED" ? (
<div>
<Controls />
<button onClick={handleStartRecording}>Start Recording</button>
<button onClick={handleStopRecording}>Stop Recording</button>
<button onClick={handleStartTranscription}>Start Transcription</button>
<button onClick={handleStopTranscription}>Stop Transcription</button>
{[...participants.keys()].map((participantId) => (
<ParticipantView participantId={participantId} key={participantId} />
))}
<p>State: {transcriptState}</p>
<p>{transcript}</p>
</div>
) : joined && joined === "JOINING" ? (
<p>Joining the meeting...</p>
) : (
<button onClick={joinMeeting}>Join</button>
)}
</div>
);
}
export default MeetingView;
```
#### App Component
Main component that manages the meeting state and provides the necessary context.
```js
import React, { useState } from "react";
import { MeetingProvider } from "@videosdk.live/react-sdk";
import JoinScreen from "./JoinScreen";
import MeetingView from "./MeetingView";
import { authToken, createMeeting } from "./API";
function App() {
const [meetingId, setMeetingId] = useState(null);
const getMeetingAndToken = async (id) => {
const meetingId =
id == null ? await createMeeting({ token: authToken }) : id;
setMeetingId(meetingId);
};
const onMeetingLeave = () => setMeetingId(null);
return authToken && meetingId ? (
<MeetingProvider
config={{
meetingId,
micEnabled: true,
webcamEnabled: true,
name: "C.V. Raman",
}}
token={authToken}
>
<MeetingView meetingId={meetingId} onMeetingLeave={onMeetingLeave} />
</MeetingProvider>
) : (
<JoinScreen getMeetingAndToken={getMeetingAndToken} />
);
}
export default App;
```
### Step 4: Run the Application
1. **Start the React App**:
```sh
npm start
```
2. **Navigate to Your Browser**: Open [http://localhost:3000](http://localhost:3000) to view the app.

### Optional Step: Add CSS for Styling
To enhance the visual appeal of your app, you can add the following CSS.
Create an `App.css` file in your `src` directory with the following content:
```css
/* App.css */
body {
background-color: #121212;
color: #e0e0e0;
font-family: 'Roboto', sans-serif;
margin: 0;
padding: 0;
}
input, button {
background-color: #1e1e1e;
border: 1px solid #333;
color: #e0e0e0;
padding: 10px;
margin: 5px;
border-radius: 5px;
}
button:hover {
background-color: #333;
cursor: pointer;
}
.container {
max-width: 800px;
margin: auto;
padding: 20px;
text-align: center;
}
h3 {
color: #f5f5f5;
}
p {
margin: 10px 0;
}
audio, .react-player__preview {
background-color: #333;
border: 1px solid #555;
border-radius: 5px;
margin: 10px 0;
}
.react-player__preview img {
border-radius: 5px;
}
.react-player__shadow {
border-radius: 5px;
}
```
### Summary
You have created a functional video meeting application using the VideoSDK.live SDK. This app allows users to join or create meetings, manage participants, and control functionalities like recording and transcription. The optional CSS step ensures a consistent and visually appealing user interface.

*Author: arjunkava*
---

# JavaScript vs. TypeScript: A Comprehensive Comparison

*Published 2024-05-25 on dev.to by dipakahirav · tags: javascript, typescript, angular, learning · https://dev.to/dipakahirav/javascript-vs-typescript-a-comprehensive-comparison-c97*

## 🚀 Check Out My YouTube Channel! 🚀
Please subscribe to my [YouTube channel](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1) to support my channel and get more web development tutorials.
JavaScript and TypeScript are two programming languages that play significant roles in web development. Although they share some similarities, they also have key differences that set them apart.
### JavaScript
JavaScript is a versatile scripting language used to make web pages interactive. It is a dynamically typed language, meaning the data type of a variable is determined at runtime rather than at compile time. JavaScript is executed by web browsers or Node.js, a JavaScript runtime environment, making it a cornerstone of modern web development.
### TypeScript
TypeScript, on the other hand, is a statically typed language that extends JavaScript. It introduces optional static typing and additional features to enhance JavaScript's capabilities, making it more suitable for large and complex applications. TypeScript code is compiled into JavaScript, ensuring it can run on any platform that supports JavaScript.
### Key Differences
Let's explore the key differences between JavaScript and TypeScript:
#### Typing
JavaScript's dynamic typing means the data type of a variable is determined during runtime. This flexibility can sometimes lead to unexpected errors. Conversely, TypeScript’s static typing ensures the data type of a variable is determined at compile time, making TypeScript code safer and less prone to errors.
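A minimal sketch of the difference (the variable and function names are illustrative): in TypeScript the annotation pins the type at compile time, so the reassignment shown in the comment is rejected before the program ever runs, whereas plain JavaScript would happily accept it.

```typescript
// Static typing: the annotation fixes the type at compile time.
let count: number = 5;
// count = "five"; // compile-time error: Type 'string' is not assignable to type 'number'.

// In plain JavaScript the reassignment above is legal, and any resulting
// bug only surfaces at runtime — if it surfaces at all.
function double(n: number): number {
  return n * 2;
}

console.log(double(count)); // 10
```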
#### Compilation
JavaScript code is executed directly by web browsers or Node.js without the need for prior compilation. In contrast, TypeScript code must be compiled into JavaScript before it can be executed, adding an extra step in the development process but enhancing code reliability and error checking.
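To make the extra step concrete, here is a small TypeScript file (file and function names are illustrative). Compiling it with `tsc` erases the type annotations and emits plain JavaScript that any browser or Node.js can run; the comment sketches roughly what the emitted output looks like.

```typescript
// greet.ts — TypeScript source; annotations exist only at compile time.
function greet(name: string): string {
  return `Hello, ${name}!`;
}

console.log(greet("Ada")); // "Hello, Ada!"

// After `tsc greet.ts`, the emitted greet.js contains no types, roughly:
//   function greet(name) {
//       return "Hello, ".concat(name, "!");
//   }
```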
#### Error Handling
JavaScript, as an interpreted language, only detects errors at runtime, which can lead to runtime failures that are harder to debug. TypeScript, being a compiled language, catches errors during the compile time, making it easier to identify and fix issues early in the development cycle.
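As an illustration (names are made up for the example), the commented-out calls below are rejected by the TypeScript compiler before the program runs; the equivalent plain JavaScript would execute them and quietly produce `NaN`, a failure you would only discover at runtime.

```typescript
function area(width: number, height: number): number {
  return width * height;
}

// area(3);        // compile-time error: Expected 2 arguments, but got 1.
// area("3", "4"); // compile-time error: 'string' is not assignable to 'number'.
// In plain JavaScript, area(3) runs and returns NaN (3 * undefined) —
// a bug that only shows up at runtime.

console.log(area(3, 4)); // 12
```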
#### Readability and Maintainability
TypeScript’s static typing and additional features, such as interfaces and type annotations, improve code readability and maintainability, especially in large and complex applications. These features help developers understand the codebase better and maintain consistency across the project.
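For example, an interface documents the expected shape of a value in one place, and the compiler enforces it everywhere the type is used (the `User` shape below is a hypothetical example, not from the original article):

```typescript
// An interface declares the shape of a value; the compiler checks
// every object passed where a User is expected.
interface User {
  id: number;
  name: string;
  email?: string; // optional property
}

function displayName(user: User): string {
  return user.email ? `${user.name} <${user.email}>` : user.name;
}

console.log(displayName({ id: 1, name: "Grace" }));
console.log(displayName({ id: 2, name: "Alan", email: "alan@example.com" }));
```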
#### Compatibility
While all JavaScript code is valid TypeScript code, not all TypeScript code is valid JavaScript. This means you can easily convert JavaScript projects to TypeScript, leveraging its advanced features without losing existing functionality.
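A quick sketch of that asymmetry: the snippet below is plain JavaScript, yet it compiles unchanged as TypeScript (the types are inferred), while the TypeScript-only annotated version in the comment would be a syntax error to a JavaScript engine.

```typescript
// Plain JavaScript — also valid TypeScript as-is; types are inferred.
const langs = ["JavaScript", "TypeScript"];
const upper = langs.map(l => l.toUpperCase());

// TypeScript-only syntax that a JavaScript engine cannot parse:
// const upper: string[] = langs.map((l: string): string => l.toUpperCase());

console.log(upper);
```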
#### Learning Curve
JavaScript is generally easier for beginners to learn due to its simpler syntax and dynamic nature. TypeScript requires a solid understanding of JavaScript and additional concepts like static typing and interfaces, which can pose a steeper learning curve for newcomers.
#### Use Cases
JavaScript is well-suited for small to medium-sized projects due to its simplicity and flexibility. TypeScript, with its robust type system and advanced features, is more appropriate for large-scale and complex applications where maintainability and error prevention are critical.
### Conclusion
In summary, JavaScript and TypeScript each have their strengths and ideal use cases. JavaScript’s dynamic nature and ease of learning make it perfect for smaller projects and quick prototyping. TypeScript’s static typing and enhanced features provide a more robust framework for large, complex applications, ensuring safer and more maintainable code. By understanding these differences, developers can choose the right language for their project’s needs.
*— dipakahirav*