1,902,863
Next-Generation DLT: Neural Networks, Federated Learning, and DAG Synergy
As a dedicated advocate for decentralized systems, I’m thrilled by the possibilities of blockchain...
0
2024-06-27T16:51:07
https://dev.to/nkianil/next-generation-dlt-neural-networks-federated-learning-and-dag-synergy-4dpe
As a dedicated advocate for decentralized systems, I’m thrilled by the possibilities of blockchain technology. But let’s face it: current solutions still struggle with security, scalability, and efficiency. It’s time for a breakthrough. The traditional blockchain model, despite its innovations, hasn’t fully realized the dream of a decentralized web. We need to push the envelope and redefine what’s possible.

Introducing SYNNQ! 🚀 By integrating Neural Networks (NN) and Federated Learning within a Directed Acyclic Graph (DAG) structure, SYNNQ addresses these challenges head-on. Here’s how:

- **Unmatched Security:** Advanced fraud detection with NN-based validation and federated learning.
- **Incredible Scalability:** Up to 1,000,000 transactions per second with lightning-fast confirmation times.
- **Efficient Resource Use:** 65% reduction in computational overhead and 40% decrease in bandwidth usage.
- **Rock-Solid Resilience:** Maintains efficiency even with 30% of nodes compromised.
- **Decentralized Governance:** Fair and transparent reputation-based voting system.

SYNNQ is not just an upgrade, it’s a revolution. Let’s challenge the status quo and drive the future of decentralized technology together. Join me in powering the next generation of blockchains! 🌟

For more details and collaboration, read our full whitepaper or contact us at dev@synnq.io.

https://www.academia.edu/121565045/Next_Generation_Blockchain_Neural_Networks_Federated_Learning_and_DAG_Synergy_June_2024?source=swp_share
nkianil
1,902,850
Day 7 of my 90-Day Devops Journey: Deploying a Dockerized Node.js App on Minikube
Hey everyone, welcome back to day 7 of my 90-day DevOps project adventure! I apologize for missing...
0
2024-06-27T16:46:38
https://dev.to/arbythecoder/day-7-of-my-90-day-devops-journey-deploying-a-dockerized-nodejs-app-on-minikube-47l6
kubernetes, node, docker, beginners
Hey everyone, welcome back to day 7 of my 90-day DevOps project adventure! I apologize for missing yesterday's post due to some personal commitments and technical issues. Juggling responsibilities alongside these projects can be a real challenge, but thanks for your continued support; it means a lot to me! Today, we'll dive into deploying a Dockerized Node.js application on a local Kubernetes cluster using Minikube. This project took me two days due to research and other commitments, highlighting the importance of planning and perseverance.

### Project Overview

This guide will walk you through deploying a Dockerized Node.js application on a single-node Kubernetes cluster using Minikube on your local machine.

### Prerequisites

Before we begin, ensure you have the following installed:

1. **Docker:** Make sure Docker is up and running on your system. ([https://docs.docker.com/](https://docs.docker.com/))
2. **Minikube:** This tool sets up a single-node Kubernetes cluster on your machine. We'll cover Minikube installation in a future post specific to Windows. ([https://minikube.sigs.k8s.io/docs/start/](https://minikube.sigs.k8s.io/docs/start/))
3. **kubectl:** The command-line tool for interacting with Kubernetes clusters. Installation instructions are usually included with your Kubernetes distribution.

### Step-by-Step Guide

**1. Dockerize Your Node.js Application**

First, we need to create a `Dockerfile` to define how our application will be built as a Docker image.

**Create a Dockerfile:** In your Node.js project directory, create a file named `Dockerfile` with the following content:

```dockerfile
FROM node:14
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

**Build and Tag the Docker Image:** Now, build the Docker image using the `docker build` command and tag it with your desired name and version:

```bash
docker build -t <your-docker-hub-username>/my-app:latest .
```

**Replace `<your-docker-hub-username>` with your actual Docker Hub username.**

**Push the Image to Docker Hub (Optional):** If you want to deploy your application across different environments, you can push the image to a public registry like Docker Hub:

```bash
docker push <your-docker-hub-username>/my-app:latest
```

**2. Set Up Kubernetes with Minikube**

**Start Minikube:** Fire up a single-node Kubernetes cluster on your local machine using Minikube:

```bash
minikube start
```

**Create a Deployment:** Define a deployment configuration file (`deployment.yaml`) for your application:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 2  # This will run two replicas of your application pod
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: <your-docker-hub-username>/my-app:latest  # Update with your image details
          ports:
            - containerPort: 3000
```

**Apply the Deployment:** Use `kubectl apply` to deploy your application based on the configuration file:

```bash
kubectl apply -f deployment.yaml
```

**Expose the Deployment as a Service:** Make your application accessible outside the cluster by creating a Service of type LoadBalancer:

```bash
kubectl expose deployment my-app-deployment --type=LoadBalancer --port=3000
```

Note that Minikube doesn't provision external LoadBalancer IPs on its own; run `minikube service my-app-deployment` (or `minikube tunnel`) to reach the service locally.

**Check Pod Status:** Verify if your application pods are running successfully:

```bash
kubectl get pods
```

### Challenges and Solutions

Deploying applications with Kubernetes can encounter some common issues. Here are a few challenges you might face and how to address them:

**1. ImagePullBackOff:**

* **Description:** Kubernetes cannot pull the specified Docker image.
* **Solutions:**
  * Double-check the image name and tag for accuracy.
  * Ensure proper access to the Docker registry, especially for private ones.
  * Manually pull the image on a Kubernetes node using `docker pull <image-name>`.
  * Consider using a more aggressive image pull policy (e.g., `Always` instead of `IfNotPresent`).

**2. CrashLoopBackOff:**

* **Description:** A container starts, crashes, and is repeatedly restarted by Kubernetes.
* **Solutions:**
  * Inspect container logs for error messages using `kubectl logs <pod-name>`.
  * Verify the container image and its entrypoint/command for correctness.
  * Check if resource limits and requests are sufficient for the container.
  * Ensure necessary environment variables are set and accessible within the container.

**3. NodeNotReady:**

* **Description:** A Kubernetes node is unavailable for workloads.
* **Solutions:**
  * Use `kubectl get nodes` to view node status and identify the problematic node.
  * Investigate node logs for any errors using `journalctl` (Linux) or the system event viewer (Windows).
  * Ensure the node has sufficient resources (CPU, memory, disk).
  * If necessary, restart the node or the kubelet service on the node.

### Docker Errors and Solutions

While working with Docker, you might encounter some errors. Here are a few common ones and how to fix them:

**1. DockerDaemonNotRunning:**

* **Description:** The Docker daemon is not running on your system.
* **Solutions:**
  * Start the Docker daemon using your system's service management tools (e.g., `systemctl start docker` on Linux).
  * Check Docker daemon logs for any issues using `journalctl -u docker` (Linux).

**2. DockerImagePullFailure:**

* **Description:** Docker fails to pull the specified image.
* **Solutions:**
  * Verify the image name and tag for typos.
  * Ensure proper access to the Docker registry, especially for private ones.
  * Try manually pulling the image using `docker pull <image-name>`.

**3. DockerContainerStartFailure:**

* **Description:** Docker cannot start the container.
* **Solutions:**
  * Examine container logs for error messages using `docker logs <container-name>`.
  * Verify the container image, entrypoint, and command for correctness.
  * Check if resource constraints allow the container to run properly.
  * Ensure required environment variables are set within the container.

**4. DockerNetworkIssues:**

* **Description:** Problems with Docker network configuration.
* **Solutions:**
  * Verify Docker network settings and ensure proper configuration.
  * Inspect the network using `docker network inspect <network-name>`.
  * If necessary, recreate the network or containers.

### Valuable Resources

For further exploration, refer to the following resources:

* Kubernetes Documentation: [https://kubernetes.io/docs/home/](https://kubernetes.io/docs/home/)
* Minikube Documentation: [https://minikube.sigs.k8s.io/docs/start/](https://minikube.sigs.k8s.io/docs/start/)
* Docker Documentation: [https://docs.docker.com/](https://docs.docker.com/)
* Kubernetes Tutorials: [https://kubernetes.io/docs/tutorials/](https://kubernetes.io/docs/tutorials/)

### Conclusion

While deploying applications with Kubernetes can be challenging, understanding common errors and their solutions makes the process much smoother. By leveraging Docker to containerize your application and utilizing Kubernetes for orchestration, you achieve robust, scalable, and resilient deployments. This project not only enhanced my understanding of Kubernetes but also emphasized the importance of perseverance and thorough research in DevOps. Stay tuned for more insights and projects as we continue this 90-day journey! Happy Kubernetes-ing!
arbythecoder
1,902,861
Sultangames: Real Excitement Online
At Sultangames https://sultangames.com/ru/casino/slots/game/pragmatic-vs20olympgate you can experience real...
0
2024-06-27T16:45:34
https://dev.to/phil_tompskiy/sultangames-nastoiashchii-azart-onlain-ia6
At Sultangames https://sultangames.com/ru/casino/slots/game/pragmatic-vs20olympgate you can experience real excitement without leaving home. We offer a wide selection of games and stakes so that everyone can find something to their taste. Our casino creates the atmosphere of a real gaming hall, where every spin and every bet can bring you a big win. Play at Sultangames and enjoy the game.
phil_tompskiy
1,902,833
Deployments and Replica Sets in Kubernetes
Hello everyone! Welcome back to the eighth instalment in the CK 2024 series. Today we'll be delving...
0
2024-06-27T16:35:09
https://dev.to/jensen1806/deployments-and-replica-sets-in-kubernetes-3ef5
kubernetes, docker, containers, cka
Hello everyone! Welcome back to the eighth instalment in the CKA 2024 series. Today we'll be delving into deployments and replica sets. For anyone working with Kubernetes or preparing for the CKA, this is one of the most important concepts to grasp. Hosting your application as a container on a pod, which is then backed by a replica set, stateful set, or deployment, is fundamental to Kubernetes.

## Deployments and Replica Sets

In our last blog, we explored how a pod works by spinning up a pod on a Kubernetes node and performing some operations with it. Now, let’s take it a step further. Imagine a user accessing this particular pod. If something happens and the pod crashes, the user won’t get any response from the endpoint. This is a significant drawback of running a standalone Docker container. In Kubernetes, we need a mechanism to ensure the user doesn't get an empty response even if the application crashes. That’s where a replication controller comes into play. It can automatically create a new pod if the existing one crashes, ensuring high availability by running multiple replicas of the pods.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2menmqtsku4bufjjp46t.png)

### Replication Controller

A replication controller ensures that a specified number of pod replicas are running at any given time. If a pod crashes, the replication controller will spin up a new one. This is crucial for maintaining high availability and load balancing. The replication controller is managed by the Kubernetes controller manager, which monitors resources and ensures they are running as expected.

Here’s what a replication controller provides:

- **High Availability**: It ensures that multiple replicas of a pod are running, so if one pod fails, the others continue to serve traffic.
- **Load Balancing**: Traffic is not directed to a single pod but distributed across multiple replicas.
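The control-loop idea behind a replication controller can be sketched in a few lines of Python. This is only a toy illustration of reconciliation (the `Pod` class and names here are made up; the real controller watches the API server and creates pods through the scheduler and kubelet), not how the controller manager is actually implemented:

```python
class Pod:
    """Stand-in for a running pod (hypothetical, for illustration only)."""
    def __init__(self, name):
        self.name = name

def reconcile(desired_replicas, pods):
    """Converge the observed pod list toward the desired replica count."""
    while len(pods) < desired_replicas:
        # A pod crashed or is missing: create a replacement.
        pods.append(Pod(f"pod-{len(pods)}"))
    while len(pods) > desired_replicas:
        # Too many replicas: scale the surplus down.
        pods.pop()
    return pods

# One of three replicas has crashed; the loop restores the desired state.
pods = reconcile(3, [Pod("pod-0")])
print(len(pods))  # 3
```

Kubernetes runs this kind of loop continuously, which is why a crashed or deleted pod reappears moments later.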
### Replica Set

While the replication controller is a legacy feature, the replica set is its modern counterpart. It offers more flexibility, allowing us to manage existing pods that weren’t originally part of the replica set. This is done through label selectors, enabling us to match the labels of running pods and include them in the replica set.

Here’s an example of how to define a replica set in YAML:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
```

### Deployment

Deployments provide additional functionality over replica sets, such as rolling updates and rollbacks. A deployment manages a replica set, which in turn manages the pods. With deployments, we can update the application seamlessly without downtime. For example, if we need to update the version of an application from 1.1 to 1.2, a deployment will handle this in a rolling update fashion, updating one pod at a time while the others continue to serve traffic.

Here’s how to define a deployment in YAML:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.1
          ports:
            - containerPort: 80
```

To update the deployment, you can use the following command:

```bash
kubectl set image deployment/nginx-deploy nginx=nginx:1.2
```

This command updates the image used by the deployment, triggering a rolling update.

### Summary

In this blog, we've covered the importance of deployments and replica sets in Kubernetes. We looked at how replication controllers and replica sets ensure high availability and load balancing by managing multiple pod replicas. We also explored how deployments add value by enabling seamless updates and rollbacks.
In the upcoming instalments, we’ll dive deeper into services and how they work in Kubernetes. Happy learning! For further reference, check out the detailed YouTube video here: {% embed https://www.youtube.com/watch?v=oe2zjRb51F0&list=WL&index=16 %}
jensen1806
1,902,858
Set requires_grad with requires_grad argument functions and get it in PyTorch
You can set requires_grad with the functions which have requires_grad argument and get it with grad...
0
2024-06-27T16:34:53
https://dev.to/hyperkai/set-requiresgrad-with-requiresgrad-argument-functions-and-get-it-in-pytorch-39c3
pytorch, gradient, grad, function
You can set `requires_grad` with the functions which have a `requires_grad` argument and then read the computed gradient with [grad](https://pytorch.org/docs/stable/generated/torch.Tensor.grad.html) as shown below:

*Memos:

- I selected some popular `requires_grad` argument functions such as [tensor()](https://pytorch.org/docs/stable/generated/torch.tensor.html), [arange()](https://pytorch.org/docs/stable/generated/torch.arange.html), [rand()](https://pytorch.org/docs/stable/generated/torch.rand.html), [rand_like()](https://pytorch.org/docs/stable/generated/torch.rand_like.html), [zeros()](https://pytorch.org/docs/stable/generated/torch.zeros.html), [zeros_like()](https://pytorch.org/docs/stable/generated/torch.zeros_like.html), [full()](https://pytorch.org/docs/stable/generated/torch.full.html), [full_like()](https://pytorch.org/docs/stable/generated/torch.full_like.html) and [eye()](https://pytorch.org/docs/stable/generated/torch.eye.html).
- `requires_grad` (Optional-Default:`False`-Type:`bool`).
- Basically, the keyword form `requires_grad=` is needed.
- [My post](https://dev.to/hyperkai/requiresgradtrue-with-a-tensor-backward-and-retaingrad-in-pytorch-4kf7) explains `requires_grad` and [backward()](https://pytorch.org/docs/stable/generated/torch.Tensor.backward.html) with `tensor()`.

`tensor()`. *[My post](https://dev.to/hyperkai/create-a-tensor-in-pytorch-127g) explains `tensor()`:

```python
import torch

my_tensor = torch.tensor(data=7., requires_grad=True)
my_tensor, my_tensor.grad
# (tensor(7., requires_grad=True), None)

my_tensor.backward()
my_tensor, my_tensor.grad
# (tensor(7., requires_grad=True), tensor(1.))
```

`arange()`. *[My post](https://dev.to/hyperkai/arange-linspace-logspace-and-normal-in-pytorch-a87) explains `arange()`. Note that `requires_grad=True` needs a floating-point dtype, so the arguments must be floats here, and `backward()` on a multi-element tensor needs a reduction such as `sum()`:

```python
import torch

my_tensor = torch.arange(start=5., end=15., step=3., requires_grad=True)
my_tensor, my_tensor.grad
# (tensor([5., 8., 11., 14.], requires_grad=True), None)

my_tensor.sum().backward()
my_tensor, my_tensor.grad
# (tensor([5., 8., 11., 14.], requires_grad=True), tensor([1., 1., 1., 1.]))
```

`rand()`. *[My post](https://dev.to/hyperkai/rand-randlike-randn-randnlike-randint-and-randperm-in-pytorch-31nc) explains `rand()`:

```python
import torch

my_tensor = torch.rand(size=(1,), requires_grad=True)
my_tensor, my_tensor.grad
# (tensor([0.0030], requires_grad=True), None)

my_tensor.backward()
my_tensor, my_tensor.grad
# (tensor([0.0030], requires_grad=True), tensor([1.]))
```

`rand_like()`. *[My post](https://dev.to/hyperkai/rand-randlike-randn-randnlike-randint-and-randperm-in-pytorch-31nc) explains `rand_like()`:

```python
import torch

my_tensor = torch.rand_like(input=torch.tensor([7.]), requires_grad=True)
my_tensor, my_tensor.grad
# (tensor([0.4687], requires_grad=True), None)

my_tensor.backward()
my_tensor, my_tensor.grad
# (tensor([0.4687], requires_grad=True), tensor([1.]))
```

`zeros()`. *[My post](https://dev.to/hyperkai/zeros-zeroslike-ones-and-oneslike-in-pytorch-26jm) explains `zeros()`:

```python
import torch

my_tensor = torch.zeros(size=(1,), requires_grad=True)
my_tensor, my_tensor.grad
# (tensor([0.], requires_grad=True), None)

my_tensor.backward()
my_tensor, my_tensor.grad
# (tensor([0.], requires_grad=True), tensor([1.]))
```

`zeros_like()`. *[My post](https://dev.to/hyperkai/zeros-zeroslike-ones-and-oneslike-in-pytorch-26jm) explains `zeros_like()`:

```python
import torch

my_tensor = torch.zeros_like(input=torch.tensor([7.]), requires_grad=True)
my_tensor, my_tensor.grad
# (tensor([0.], requires_grad=True), None)

my_tensor.backward()
my_tensor, my_tensor.grad
# (tensor([0.], requires_grad=True), tensor([1.]))
```

`full()`. *[My post](https://dev.to/hyperkai/full-and-fulllike-in-pytorch-a8f) explains `full()`:

```python
import torch

my_tensor = torch.full(size=(1,), fill_value=5., requires_grad=True)
my_tensor, my_tensor.grad
# (tensor([5.], requires_grad=True), None)

my_tensor.backward()
my_tensor, my_tensor.grad
# (tensor([5.], requires_grad=True), tensor([1.]))
```

`full_like()`. *[My post](https://dev.to/hyperkai/full-and-fulllike-in-pytorch-a8f) explains `full_like()`:

```python
import torch

my_tensor = torch.full_like(input=torch.tensor([7.]), fill_value=5., requires_grad=True)
my_tensor, my_tensor.grad
# (tensor([5.], requires_grad=True), None)

my_tensor.backward()
my_tensor, my_tensor.grad
# (tensor([5.], requires_grad=True), tensor([1.]))
```

`eye()`. *[My post](https://dev.to/hyperkai/eye-in-pytorch-1mjj) explains `eye()`:

```python
import torch

my_tensor = torch.eye(n=1, requires_grad=True)
my_tensor, my_tensor.grad
# (tensor([[1.]], requires_grad=True), None)

my_tensor.backward()
my_tensor, my_tensor.grad
# (tensor([[1.]], requires_grad=True), tensor([[1.]]))
```
hyperkai
1,902,857
Balancing Side Projects with Full-Time Development Work
Time Management: Prioritize tasks and create a schedule to allocate time for both work and side...
0
2024-06-27T16:34:51
https://dev.to/bingecoder89/balancing-side-projects-with-full-time-development-work-13gc
webdev, beginners, programming, productivity
1. **Time Management**: Prioritize tasks and create a schedule to allocate time for both work and side projects. Use tools like calendars and to-do lists to stay organized.
2. **Set Clear Goals**: Define what you want to achieve with your side projects and set realistic deadlines. This helps maintain focus and avoid feeling overwhelmed.
3. **Boundaries**: Establish boundaries between work and side projects. Avoid letting one interfere with the other by dedicating specific time slots for each.
4. **Use Downtime Wisely**: Utilize breaks and downtime efficiently. Short periods can be used for brainstorming or quick tasks related to your side projects.
5. **Leverage Work Skills**: Apply skills and knowledge from your full-time job to your side projects. This not only enhances your projects but also reinforces your professional skills.
6. **Stay Healthy**: Ensure you get enough rest, exercise, and maintain a healthy lifestyle. Overworking can lead to burnout, affecting both your job and side projects.
7. **Automate and Delegate**: Use automation tools to handle repetitive tasks and consider delegating work where possible. This frees up time for more critical aspects of your projects.
8. **Network and Collaborate**: Connect with others who have similar interests. Collaboration can bring new insights and divide the workload, making it easier to manage both commitments.
9. **Stay Flexible**: Be adaptable and ready to adjust your plans as needed. Unexpected demands from your full-time job may require shifting focus temporarily.
10. **Reflect and Adjust**: Regularly review your progress and workload. Make adjustments to your schedule and goals to ensure you are maintaining a healthy balance and staying on track.

Happy Learning 🎉
bingecoder89
1,902,856
All About JavaScript string
Here I am trying to elaborate on JavaScript strings and built-in methods. I believe you can...
0
2024-06-27T16:31:43
https://dev.to/azadulkabir455/all-about-javascript-string-4mp0
Here I am trying to elaborate on JavaScript strings and their built-in methods. I believe you will find this useful. Please click the link below to read the article.

🚀 Link: [Article Link](https://shorturl.at/uDEBi)

To read more articles like this, follow me.

🚀 Link: [Azad Ul Kabir](https://www.linkedin.com/in/azadulkabir/)
azadulkabir455
1,902,853
Comparing React and Svelte: Which Frontend Technology is Better for Your Project?
This post compares React and Svelte, two popular frontend development technologies, to help make an...
0
2024-06-27T16:30:46
https://dev.to/ghguda/comparing-react-and-svelte-which-frontend-technology-is-better-for-your-project-2mlh
_This post compares React and Svelte, two popular frontend development technologies, to help you make an informed choice based on their strengths and differences, highlighting their advantages and disadvantages._

**React: A JavaScript Library for Building User Interfaces**

Facebook developed React, a JavaScript library for building user interfaces, enabling developers to create fast, data-driven web apps.

**Features of React**

React has a component-based architecture that encourages the creation of reusable components for complex UIs. It uses a virtual DOM for optimal rendering performance, one-way data binding for easier reasoning and debugging, and offers a rich ecosystem of libraries and tools.

**Advantages of React**

React delivers strong performance, scalability, and community support through its virtual DOM, component-based architecture, and extensive documentation, making it a popular frontend technology.

**Disadvantages of React**

Beginners may find React's syntax and concepts challenging, and the amount of boilerplate code required in React projects can be overwhelming.

**Svelte: The Compiler for Building User Interfaces**

Svelte is a more recent frontend framework that shifts most of the work from the browser to the build step, reducing the workload while the app runs.

**Features of Svelte**

Svelte is a lightweight, efficient framework that compiles components into plain JavaScript that updates the DOM directly, without a virtual DOM. It features built-in reactivity, a component-based architecture, and built-in transitions for easily adding dynamic behavior.

**Advantages of Svelte**

Svelte applications are fast and efficient, and they simplify state management by eliminating the virtual DOM, reducing the need for complex libraries and resulting in smaller bundle sizes.

**Disadvantages of Svelte**

Svelte's ecosystem is still growing, with fewer third-party libraries and tools compared to React. Its smaller community affects resource availability, which can add complexity for larger projects.
**My expectations for this HNG Internship** I’m eager to learn more from my colleagues in HNG to help me understand and improve my React skills to build better React projects. For more information about the HNG Internship and how you can get involved, visit the [HNG Internship](https://hng.tech/internship) page. If you're interested in hiring talented developers from the program, check out the [HNG Hire](https://hng.tech/hire) page.
ghguda
1,902,855
-ClassList
ClassList...
0
2024-06-27T16:30:44
https://dev.to/husniddin6939/-events-events-object-window-elemnts-classlist-ke5
javascript
ClassList

`classList` - this property lets us modify the class names of the opened tag, and its keys are these:

1. `add` => this key adds a new class name:

```
menu.classList.add('bg-success');
console.log(menu.classList);
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/za8lu1q8ekosfw113ijy.png)

Through this key we can also add several class names at once, like so:

```
menu.classList.add('bg-success', 'gold', 'red');
console.log(menu.classList);
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/glr04a9swxnmascshlnp.png)

2. `remove` => this key removes one or more class names:

```
menu.classList.remove('gold', 'red');
console.log(menu.classList);
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/enosownvhub6o3ovo4c7.png)

3. `contains` => this key searches for the class name we type in and responds with a boolean (true/false):

```
menu.classList.contains('bg-success');
console.log(menu.classList);
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6b4ea1gr9bf84edtk144.png)

4. `toggle` => this key removes the class name we type if it is already present; otherwise it adds it as a new one:

```
menu.classList.toggle('orange');
console.log(menu.classList);
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hzqjobygqtly5nq9skt9.png)

If we call toggle with the same class name "orange" again, it removes it this time:

```
menu.classList.toggle('orange');
console.log(menu.classList);
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6oo80khesr01v1u3w0ro.png)
husniddin6939
1,902,854
My First Contribution to Open Source
I was still quite naive and just beginning to learn Git and GitHub when I stumbled upon a fascinating...
0
2024-06-27T16:30:10
https://dev.to/rasyidfam/my-first-contribution-to-open-source-479l
csharp, github, opensource, beginners
I was still quite naive and just beginning to learn Git and GitHub when I stumbled upon a fascinating project: a C# IDE called Edi. It was an impressive and complex application at the time, and I was eager to contribute. However, a significant hurdle stood in my way—language. As someone whose primary languages are Indonesian and Javanese, and with English as my third language, communicating effectively was a challenge. Driven by my enthusiasm, I dived into the project. But my excitement was soon dampened by a series of mistranslations and misunderstandings, which led to me stepping back from participating in the project's issues. It was a humbling experience, but one that taught me a lot. Fast forward ten years, and I returned to the open source scene with a new stack—mostly TypeScript—and a more mature perspective. The Edi project's source code had grown substantially in complexity. Yet, the most important lesson I had learned was not about coding languages or frameworks. It was about communication and understanding. To be a good developer, the first crucial step is understanding the issue at hand. But beyond that, knowing how to ask the right questions and communicate effectively is key to being a valuable team member. This journey, from my early missteps to my current contributions, has been one of growth and continuous learning. And it all started with that initial, humbling foray into the world of open source. PS: you can visit edi code editor here https://github.com/Dirkster99/Edi
rasyidfam
1,902,851
Looking for experienced Odoo developer
we have a client in the architecture and construction space who is looking to build an end to end ERP...
0
2024-06-27T16:24:40
https://dev.to/turgut_jabbarli/looking-for-experienced-odoo-developer-4pn
odoo
We have a client in the architecture and construction space who is looking to build an end-to-end ERP solution. We are looking for someone seasoned in Odoo software to help our project lead implement the project. If you are available, please send me an email with your relevant experience highlighted. The next step would be to jump on a call to discuss the project in more detail. Email: turgut@vestedinyou.ca
turgut_jabbarli
1,902,849
SQL Injection: Understanding the Threat and How to Avoid It
Web applications are still seriously threatened by SQL Injection (SQLi), a persistent issue in the...
0
2024-06-27T16:24:25
https://nilebits.com/blog/2024/06/sql-injection-understanding-the-threat/
sql, sqlserver, database, sqlinjection
Web applications are still seriously threatened by SQL Injection (SQLi), a persistent issue in the constantly changing field of cybersecurity. Due to its ease of use and the extensive usage of SQL databases, SQL Injection is still a frequently used attack vector even though it is a well-known weakness. The goal of this blog article is to give readers a thorough grasp of SQL Injection, its ramifications, and protective measures.

## What is SQL Injection?

SQL Injection is a code injection technique that exploits vulnerabilities in an application's software by inserting malicious SQL code into an input field. This allows attackers to manipulate database queries, potentially gaining unauthorized access to sensitive data, altering database contents, or executing administrative operations.

## How SQL Injection Works

At its core, SQL Injection exploits improper handling of user input in SQL queries. Let's consider a simple example where an application fetches user details based on a username provided via an input form.

### Vulnerable Code Example

```
# Example of vulnerable code in Python
import sqlite3

def get_user_details(username):
    connection = sqlite3.connect('example.db')
    cursor = connection.cursor()
    # Vulnerable query
    query = f"SELECT * FROM users WHERE username = '{username}'"
    cursor.execute(query)
    user_details = cursor.fetchall()
    connection.close()
    return user_details

# User input
user_input = "' OR '1'='1"
print(get_user_details(user_input))
```

In this example, the application concatenates the user input directly into the SQL query. An attacker can exploit this by providing input such as `' OR '1'='1`, resulting in the following SQL query:

```
SELECT * FROM users WHERE username = '' OR '1'='1'
```

This query will always return all users, bypassing authentication checks.
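The bypass can be observed end to end with Python's built-in `sqlite3` module. The in-memory table and rows below are hypothetical, purely to make the effect visible:

```python
import sqlite3

# Hypothetical in-memory database standing in for example.db.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "s1"), ("bob", "s2"), ("carol", "s3")])

def get_user_details(username):
    # Same vulnerable string concatenation as in the example above.
    query = f"SELECT * FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

print(len(get_user_details("alice")))        # 1 row: only alice
print(len(get_user_details("' OR '1'='1")))  # 3 rows: the entire table
```

A legitimate username returns one row, while the injected payload turns the WHERE clause into a condition that is always true and dumps every row.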
## Types of SQL Injection

There are several types of SQL Injection attacks, each with its specific techniques and goals:

- **Classic SQL Injection:** The most common form, where attackers manipulate queries to retrieve or modify data.
- **Blind SQL Injection:** Used when an application does not return error messages. Attackers infer information based on application responses.
- **Boolean-based Blind SQL Injection:** Attackers send payloads that cause different behavior based on a condition being true or false.
- **Time-based Blind SQL Injection:** Attackers use database time functions to infer information based on response delays.
- **Out-of-Band SQL Injection:** Involves the use of different channels, such as DNS or HTTP, to receive the data.

## Implications of SQL Injection

The impact of a successful SQL Injection attack can be severe, including:

- **Data Theft:** Attackers can retrieve sensitive information such as user credentials, personal data, and financial information.
- **Data Manipulation:** Unauthorized modification or deletion of data can lead to data integrity issues.
- **Authentication Bypass:** Attackers can bypass authentication mechanisms, gaining unauthorized access to accounts.
- **Administrative Access:** Exploiting SQL Injection can lead to full control over the database server.
- **Denial of Service (DoS):** Malicious queries can exhaust database resources, leading to service disruptions.

## Preventing SQL Injection

Preventing SQL Injection requires a multi-faceted approach, combining secure coding practices, input validation, and the use of security mechanisms provided by database management systems.

### Use Prepared Statements and Parameterized Queries

Prepared statements with parameterized queries ensure that user input is treated as data, not executable code. Most programming languages and frameworks support this feature.
### Secure Code Example

```
# Example of secure code in Python
import sqlite3

def get_user_details(username):
    connection = sqlite3.connect('example.db')
    cursor = connection.cursor()

    # Secure query using parameterized statements
    query = "SELECT * FROM users WHERE username = ?"
    cursor.execute(query, (username,))

    user_details = cursor.fetchall()
    connection.close()
    return user_details

# User input
user_input = "' OR '1'='1"
print(get_user_details(user_input))
```

In this example, the user input is safely parameterized, preventing SQL Injection.

### Input Validation

Validate and sanitize all user inputs to ensure they conform to expected formats and types.

Input Validation Example

```
import re

def validate_username(username):
    # Allow only letters, digits, and underscores
    if re.match("^[a-zA-Z0-9_]+$", username):
        return True
    return False

# User input
user_input = "valid_username123"
if validate_username(user_input):
    print(get_user_details(user_input))
else:
    print("Invalid username")
```

### Use ORM (Object-Relational Mapping) Tools

ORM frameworks abstract database interactions, reducing the risk of SQL Injection by using safe query-building techniques.

ORM Example with SQLAlchemy (Python)

```
from sqlalchemy import create_engine, Table, MetaData, select
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite:///example.db')
Session = sessionmaker(bind=engine)
session = Session()

metadata = MetaData()
users = Table('users', metadata, autoload_with=engine)

def get_user_details(username):
    query = select(users).where(users.c.username == username)
    result = session.execute(query)
    return result.fetchall()

# User input
user_input = "valid_username123"
print(get_user_details(user_input))
```

### Web Application Firewalls (WAF)

Deploying a Web Application Firewall (WAF) can help detect and block SQL Injection attempts. WAFs use predefined rules and behavior analysis to filter malicious requests.
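As a toy illustration of the rule-based filtering a WAF performs, the sketch below flags a few common SQL Injection probe shapes. The patterns and the `looksLikeSqlInjection` function are illustrative assumptions, not any vendor's rule set; real WAFs are far more sophisticated, and a blocklist like this must never replace parameterized queries:

```javascript
// Toy rule-based filter: flags a few well-known SQL Injection probe shapes.
// (Illustrative only — trivially bypassable; defense in depth, not a fix.)
const sqliPatterns = [
  /('|%27)\s*(or|and)\s*('|%27)?\d/i, // e.g. ' OR '1'='1
  /union\s+select/i,                  // UNION-based extraction
  /sleep\s*\(/i,                      // time-based blind probes
];

function looksLikeSqlInjection(input) {
  return sqliPatterns.some((re) => re.test(input));
}
```

Blocklists catch only the patterns they anticipate, which is why prepared statements remain the primary defense.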
### Regular Security Audits and Penetration Testing

Conduct regular security audits and penetration tests to identify and mitigate SQL Injection vulnerabilities. This proactive approach helps ensure that new vulnerabilities are promptly addressed.

## Conclusion

SQL Injection remains a major risk to online applications, but it can be effectively mitigated with the right knowledge and security measures in place. Developers can protect their applications against SQL Injection attacks by using prepared statements, validating input, leveraging ORM tools, and deploying WAFs. A strong security posture also requires regular security assessments and keeping up with emerging attack methods. Remember, security is a continuous process, and vigilance is key to protecting sensitive data and maintaining the integrity of your applications.
amr-saafan
1,902,780
The battle of frontend frameworks
In the world of front-end web development, two titans stand out: React and Angular. While both are...
0
2024-06-27T15:43:27
https://dev.to/johnmattee/the-battle-of-frontend-frameworks-d0l
webdev, javascript, hng, beginners
In the world of front-end web development, two titans stand out: React and Angular. While both are powerful tools for building dynamic user interfaces, they differ in their approach, syntax, and overall philosophy. Let's dive into the key distinctions between these two frameworks and explore why React might be the better choice for your next project.

React, developed and maintained by Meta (formerly Facebook), is a JavaScript library that focuses on creating reusable UI components. It follows a component-based architecture, allowing developers to break down complex UIs into smaller, manageable pieces. React's strength lies in its simplicity and flexibility, making it easier to learn and adapt to various project requirements.

One of React's standout features is its use of JSX, a syntax extension that allows developers to write HTML-like code within their JavaScript files. This approach makes it easier to visualize and manipulate the DOM structure, leading to more readable and maintainable code. React also employs a virtual DOM (Document Object Model), which is a lightweight in-memory representation of the actual DOM.

Angular, on the other hand, is a TypeScript-based framework developed and maintained by Google. It takes a more opinionated approach, providing a complete solution for building web applications. Angular follows a Model-View-Controller (MVC) architecture, which separates the application logic into distinct components. It also supports two-way data binding, allowing changes in the model to automatically update the view and vice versa, simplifying the development of interactive user interfaces.

While both React and Angular are powerful tools for front-end development, they differ in their approach and target use cases. React's lightweight nature and flexibility make it a great choice for building modern, single-page applications (SPAs) with frequently changing data.
Its component-based architecture and virtual DOM ensure fast rendering and smooth user experiences. Angular, on the other hand, is better suited for large-scale enterprise applications that require a more structured and opinionated approach. Its built-in features and TypeScript support make it easier to maintain and scale complex codebases, especially in teams with multiple developers.

At HNG, we have chosen to use React for our internship program. We believe that React's simplicity and growing popularity make it an excellent choice for aspiring developers to learn and build real-world projects. The HNG Internship offers a unique opportunity to learn React and other front-end technologies while collaborating with a diverse community of developers. You can learn more about the internship and how to apply at https://hng.tech/internship.

If you're interested in hiring React developers or exploring our premium services, visit https://hng.tech/hire or https://hng.tech/premium to learn more about how HNG can help you build exceptional web applications.
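To make the JSX and virtual DOM discussion above concrete: JSX compiles down to plain function calls that return lightweight element objects, which React then diffs against the real DOM. The toy sketch below is not React's actual implementation — just the shape of that translation:

```javascript
// Toy model of what JSX like <h1 id="title">Hello</h1> compiles to:
// a createElement call producing a plain object (a "virtual DOM" node).
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

const vnode = createElement('h1', { id: 'title' }, 'Hello');
// vnode is a cheap in-memory description of the DOM node to render
```

Because these objects are cheap to create and compare, React can recompute them on every state change and update only the real DOM nodes that actually differ.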
johnmattee
1,902,801
React vs. Angular: A Deep Dive into the Popular Frontend Technologies
I recently researched the skills companies look for when hiring a frontend developer, and after...
0
2024-06-27T16:24:24
https://dev.to/bridget_amana/react-vs-angular-a-deep-dive-into-the-popular-frontend-technologies-389l
react, learning, frontend, angular
I recently researched the skills companies look for when hiring a frontend developer, and after surveying 40 job descriptions, React topped the list, closely followed by Angular. Already proficient in React, I began learning Angular to broaden my skill set and increase my chances in the job market. After a month, here’s what I’ve noticed.

### React is The People's Favorite

React, developed by Facebook, is a JavaScript library that has taken the frontend world by storm. Why is React so beloved? For starters, its component-based architecture lets developers build reusable pieces of UI, making the development process not just efficient, but also easy. Whether you're a newbie or a seasoned coder, React’s straightforward approach makes it easy to pick up and run with.

### Angular: The Robust Framework

Angular, crafted by Google, is a powerhouse framework that comes packed with everything you need to build large-scale applications. Angular’s two-way data binding keeps your data and UI in perfect sync, reducing the manual work and potential for errors. Its comprehensive nature means you don’t need to hunt down external libraries – it’s all there, integrated and ready to go. This makes it ideal for building complex, enterprise-level applications.

### Key Differences Between React and Angular

**Learning Curve:**

- **React:** Easier for beginners, with a simpler and more flexible approach.
- **Angular:** Steeper learning curve but offers a more integrated solution for large-scale applications.

**Performance:**

- **React:** Excels in dynamic applications with its virtual DOM.
- **Angular:** Optimized performance despite potential overhead from two-way data binding.

**Ecosystem:**

- **React:** A vast ecosystem requiring external libraries for certain functionalities.
- **Angular:** A complete, integrated solution with less reliance on external libraries.

### My Experience and Looking Ahead

Learning Angular has been a rewarding experience.
It has broadened my perspective on frontend development and equipped me with new tools and techniques. While React remains my go-to for its simplicity and flexibility, Angular’s comprehensive nature and built-in features make it a powerful alternative for larger projects. Choosing which one to learn is no longer a matter of personal preference but of industry demand. By mastering both, you can enhance your versatility as a frontend developer and open up more opportunities in the job market.

**PS:** Writing this article is the Stage 0 task for the HNG Internship. The HNG Internship is an intensive program designed to accelerate participants' growth. It involves real-world tasks, mentorship, and a collaborative environment, helping interns develop practical skills and build a strong portfolio. For anyone looking to enhance their frontend development skills and build a strong portfolio, check out [HNG internship](https://hng.tech/internship) and [HNG tech hire](https://hng.tech/hire).

Thank you for reading. You can connect with me on [LinkedIn](http://www.linkedin.com/in/bridget-amana) and [Twitter](https://x.com/amana_bridget).
bridget_amana
1,902,753
Automation testing with Playwright
Why Playwright? Cross-language: JavaScript, TypeScript, Python, .NET,...
0
2024-06-27T16:16:37
https://dev.to/aizimpamvu/automation-testing-with-playwright-47ml
playwright, testing, softwareengineering, qa
## Why Playwright?

- Cross-language: JavaScript, TypeScript, Python, .NET, Java
- Cross-platform: Windows, Linux, and macOS, with support for headed and headless modes
- Cross-browser: Chromium, WebKit, and Firefox
- Auto-wait
- Codegen: record your actions and save them in your language
- Trace Viewer: capture all the information needed to investigate a test failure
- And many more...

**Pre-requisites**

- Node.js 18+
- VS Code (for a better experience with the Playwright extension)
- Basic programming skills

## Installation

1. Open the terminal and run the command below: `npm init playwright@latest`
2. Choose JavaScript (the default is TypeScript).
   ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bud4wiiqk16i3ecebe8f.png)
3. Choose where you want to save your tests and hit Enter.
   ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/13qihxp1euq1ww47ckwg.png)
4. Add a GitHub Actions workflow to easily run tests on CI (select false).
   ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/81d8qprgth4eaqe1nci4.png)
5. Install Playwright browsers and hit Enter.
   ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bzgx7mbweatb00qp57b6.png)
6. Install Playwright operating system dependencies (the default is No) and hit Enter.
   ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y2hn7a7m0gy3wk28em91.png)
7. Congratulations, you have successfully installed Playwright!

## Write the first test scripts

We are going to use Playwright's example to:

- Import `test` and `expect` from Playwright
- Open the Playwright **URL**
- Verify that the page title contains Playwright
- Click on Get Started
- Verify that the Installation text is visible

**Code Snippet**

```
/* Import test, which helps you write test cases,
   and expect, which helps you validate expected results */
const { test, expect } = require('@playwright/test');

/* Open the Playwright URL */
test('has title', async ({ page }) => {
  await
page.goto('https://playwright.dev/');

  // Expect a title "to contain" a substring.
  await expect(page).toHaveTitle(/Playwright/);
});

test('get started link', async ({ page }) => {
  await page.goto('https://playwright.dev/');

  // Click the get started link.
  await page.getByRole('link', { name: 'Get started' }).click();

  // Expects page to have a heading with the name of Installation.
  await expect(page.getByRole('heading', { name: 'Installation' })).toBeVisible();
});
```

**Code Screenshot** _for better visibility_

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fygew7ij4g0quop5kfzr.png)

## Run the test

Run: `npx playwright test`

This command runs all the tests in the configured test directory, which is `tests`.

## Display playwright test report

Run: `npx playwright show-report`

Below is a sample Playwright HTML report:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dncdt6q0usrm8mxawvux.png)

Happy testing!!🎭
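Beyond the CLI commands above, test runs can also be tuned centrally in `playwright.config.js`. The sketch below is a minimal, illustrative configuration — the option names follow Playwright's documented config schema, but the specific values are assumptions, not what the generated default file contains:

```javascript
// playwright.config.js — minimal illustrative config (values are examples).
module.exports = {
  testDir: './tests',                      // where the test files live
  retries: 1,                              // retry each failed test once
  reporter: [['html', { open: 'never' }]], // generate the HTML report without auto-opening it
  use: {
    headless: true,                        // run browsers without a visible window
    trace: 'on-first-retry',               // capture a Trace Viewer trace when a test retries
  },
};
```

With `trace: 'on-first-retry'`, a failing-then-retried test produces the trace you can inspect with the Trace Viewer mentioned earlier.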
aizimpamvu
1,902,800
Payroll Payment Processing with Rapyd Disburse
Remote work is becoming the new norm, and businesses and their stakeholders now have the flexibility...
0
2024-06-27T16:10:34
https://community.rapyd.net/t/payroll-payment-processing-with-rapyd-disburse/59270
webdev, fintech, payments, nextjs
Remote work is becoming the new norm, and businesses and their stakeholders now have the flexibility to operate from virtually any corner of the globe. However, this advancement comes with its challenges, such as effectively managing staff compensation and payroll across the diverse regulations and currencies in play in different locations. For example, have you ever wondered how platforms like remote.com solved this challenge and continue to handle payroll for their distributed workforce so effortlessly? The answer lies in cutting-edge technology, and one such technology is the Rapyd Disburse API.

This article explores the world of payroll innovation and explains how to develop a basic payroll web application that can add an employee profile and set up their salary details. The article also demonstrates how to integrate the Rapyd Disburse API, a tool that empowers developers to create seamless payroll applications, into the payroll web application.

## Implementing Payroll Payment Processing with Rapyd Disburse

The following diagram is an overview of the basic architecture of the payroll web application you're going to build.

![Application architecture](https://i.imgur.com/YuTKnKs.png)

The app has a user interface that displays the employee's profile, bank details, and salary, along with an employee status toggle. You'll develop the user interface using Next.js, a frontend framework. Additionally, you'll leverage Next.js's server API capability as the backend to handle the application logic and the Rapyd Disburse API calls. Finally, there is the database, which will handle all relevant data storage. For simplicity, you can make use of the browser's local storage.

### Prerequisites

To get started with the step-by-step instructions, ensure you have the necessary prerequisites in place:

* **Next.js:** This tutorial uses Next.js and the browser's local storage as the database.
* **Rapyd Account:** If you don't already have one, [create an account](https://dashboard.rapyd.net/login) on the Rapyd platform to access the Disburse API documentation and API keys.
* **NPM:** Ensure you have npm installed, as you'll use it to install the necessary packages for the application.

### Set Up a New Next.js Application

Let's get started by creating a new Next.js project. To do that, open your terminal, change to your preferred directory, and run the following command:

```
npx create-next-app rapyd-basic-payroll-app && cd rapyd-basic-payroll-app
```

You should see a few prompts. Select the default options by pressing the Enter key until the package starts downloading. After a successful download, you should get a response starting with "Success!", which indicates where the app was created and displays some other notices (new versions available, etc.).

![New Next.js application](https://i.imgur.com/VEWPILa.png)

### Create the Necessary Pages and Components

The next step is to create the necessary components in the `/src/components` directory.
To do that, head over to your terminal and run the following commands:

```
mkdir src/components
touch src/components/Header.js
touch src/components/EmployeeForm.js
touch src/components/EmployeeStatusToggle.js
```

The commands above create the following files:

- **Header.js** for a header view of the application
- **EmployeeForm.js** for the employee profile form
- **EmployeeStatusToggle.js** for the employee status toggle

You'll now create the necessary pages for routing in the `/src/pages` directory by running the following commands in your terminal:

```
mkdir src/pages
touch src/pages/add-employee.js
touch src/pages/list-employees.js
```

These commands create the following files:

- **add-employee.js** for adding new employees
- **list-employees.js** for listing all employees

### Code the Components

You've now created the necessary frontend files, and you can start writing their code and styling them appropriately. Starting with the **Header.js** component, copy and paste the following code snippet into the **Header.js** file to create the global header section of the application and style it:

```
import React from 'react';

const headerStyle = {
  backgroundColor: '#cacaca',
  color: '#171717',
  padding: '20px 0',
  textAlign: 'center',
};

const Header = () => {
  return (
    <header style={headerStyle}>
      <h1>Payroll Application</h1>
    </header>
  );
};

export default Header;
```

You'll now create and style the form that will be used to create employees. Copy and paste the following code snippet into the **EmployeeForm.js** file.
The code snippet collects the employee's details and stores them in the local storage (see the comments within the code for more information):

```
import React, { useEffect, useState } from 'react';
import { useRouter } from 'next/router';

const EmployeeSalaryForm = () => {
  const router = useRouter();

  // List of countries for test purposes
  const countryCodes = [
    { code: 'US', name: 'United States (US)' },
    { code: 'GB', name: 'United Kingdom (GB)' },
    { code: 'PH', name: 'Philippines (PH)' },
    { code: 'CA', name: 'Canada (CA)' },
    { code: 'AU', name: 'Australia (AU)' },
    { code: 'DE', name: 'Germany (DE)' },
    { code: 'FR', name: 'France (FR)' },
    { code: 'JP', name: 'Japan (JP)' },
    { code: 'BR', name: 'Brazil (BR)' },
  ];

  // The form field data before submitting the form
  const [formData, setFormData] = useState({
    firstName: '',
    lastName: '',
    address: '',
    email: '',
    phoneNumber: '',
    countryCode: '',
    company: '',
    role: '',
    employmentDate: '',
    idCardType: '',
    idCardNumber: '',
    bankName: '',
    bankAccountNumber: '',
    salaryAmount: '',
    status: 'active',
  });

  const [AvailableBanks, setAvailableBanks] = useState([]);

  // Function to get the list of Rapyd-supported banks based on the selected country
  const getCountryCodeBanks = async (value) => {
    try {
      fetch('/api/get-banks', {
        method: 'POST',
        body: JSON.stringify({
          country: value,
        }),
        headers: {
          'Content-type': 'application/json; charset=UTF-8',
        }
      })
        .then((response) => response.json())
        .then((resJson) => {
          var resData = resJson.data.body.data;
          var resDataArr = [];
          resData.forEach(eachData => {
            resDataArr.push({ code: eachData.payout_method_type, name: eachData.name });
          });
          setAvailableBanks(resDataArr);
        });
    } catch (error) {
      console.error("Error in API route");
    }
  };

  // Function to update the form data when data is inputted into any of the fields
  const handleChange = (e) => {
    const { name, value } = e.target;
    setFormData({ ...formData, [name]: value });
    if (name == 'countryCode') {
      getCountryCodeBanks(value);
    }
  };

  // Function to handle form submit
  const handleSubmit = (e) => {
    e.preventDefault();

    // Retrieve existing employee data from localStorage
    const storedEmployeeData = JSON.parse(localStorage.getItem('employeeData')) || [];

    // Append the new employee data to the existing array
    const updatedEmployeeData = [...storedEmployeeData, formData];

    // Save the updated data back to localStorage
    localStorage.setItem('employeeData', JSON.stringify(updatedEmployeeData));

    // Clear the form after submission
    setFormData({
      firstName: '',
      lastName: '',
      address: '',
      email: '',
      phoneNumber: '',
      countryCode: '',
      company: '',
      role: '',
      employmentDate: '',
      idCardType: '',
      idCardNumber: '',
      bankName: '',
      bankAccountNumber: '',
      salaryAmount: '',
      status: 'active',
    });

    // Redirect the user to the employee listing page
    router.push('/list-employees');
    window.alert('Employee added successfully');
  };

  // Begin form styles
  const formContainerStyle = {
    maxWidth: '400px',
    margin: '0 auto',
    padding: '20px',
    border: '1px solid #ccc',
    borderRadius: '5px',
  };

  const inputStyle = {
    display: 'block',
    margin: '10px 0',
    padding: '5px',
    width: '100%',
  };

  const buttonStyle = {
    backgroundColor: '#007bff',
    color: 'white',
    border: 'none',
    padding: '10px 20px',
    margin: '20px 0',
    cursor: 'pointer',
  };
  // End form styles

  // Form fields
  return (
    <div style={formContainerStyle}>
      <form onSubmit={handleSubmit}>
        <label>
          First Name:
          <input type="text" name="firstName" value={formData.firstName} onChange={handleChange} style={inputStyle} />
        </label>
        <label>
          Last Name:
          <input type="text" name="lastName" value={formData.lastName} onChange={handleChange} style={inputStyle} />
        </label>
        <label>
          Full Address:
          <input type="text" name="address" value={formData.address} onChange={handleChange} style={inputStyle} />
        </label>
        <label>
          Email:
          <input type="email" name="email" value={formData.email} onChange={handleChange} style={inputStyle} />
        </label>
        <label>
          Phone Number:
          <input type="text" name="phoneNumber" value={formData.phoneNumber} onChange={handleChange} style={inputStyle} />
        </label>
        <label>
          Country Code:
          <select name="countryCode" value={formData.countryCode} onChange={handleChange} style={inputStyle}>
            <option value="">Select Country Code</option>
            {countryCodes.map((country, index) => (
              <option key={index} value={country.code}>
                {country.name}
              </option>
            ))}
          </select>
        </label>
        <label>
          Company:
          <input type="text" name="company" value={formData.company} onChange={handleChange} style={inputStyle} />
        </label>
        <label>
          Role:
          <input type="text" name="role" value={formData.role} onChange={handleChange} style={inputStyle} />
        </label>
        <label>
          Employment Date:
          <input type="date" name="employmentDate" value={formData.employmentDate} onChange={handleChange} style={inputStyle} />
        </label>
        <label>
          Identification Card Type:
          <select name="idCardType" value={formData.idCardType} onChange={handleChange} style={inputStyle}>
            <option value="">Select ID Type</option>
            <option value="social_security">Social Security</option>
            <option value="work_permit">Work Permit</option>
            <option value="international_passport">International Passport</option>
            <option value="residence_permit">Residence Permit</option>
            <option value="company_registered_number">Company Registered Number</option>
          </select>
        </label>
        <label>
          Identification Card Number:
          <input type="text" name="idCardNumber" value={formData.idCardNumber} onChange={handleChange} style={inputStyle} />
        </label>
        <label>
          Bank:
          <select name="bankName" value={formData.bankName} onChange={handleChange} style={inputStyle}>
            <option value="">Select Bank</option>
            {AvailableBanks.map((bank, index) => (
              <option key={index} value={bank.code}>
                {bank.name}
              </option>
            ))}
            {/* Get the list of available banks in the selected country from Rapyd */}
          </select>
        </label>
        <label>
          Bank Account Number:
          <input type="text" name="bankAccountNumber" value={formData.bankAccountNumber} onChange={handleChange} style={inputStyle} />
        </label>
        <label>
          Salary Amount:
          <input type="text" name="salaryAmount" value={formData.salaryAmount} onChange={handleChange} style={inputStyle} />
        </label>
        <button type="submit" style={buttonStyle}>Submit</button>
      </form>
    </div>
  );
};

export default EmployeeSalaryForm;
```

Finally, in the components folder, copy and paste the following code snippet into the **EmployeeStatusToggle.js** component, which manages the enable and disable feature of the application:

```
import React, { useState } from 'react';

const EmployeeStatusToggle = ({ initialStatus, onToggle }) => {
  const [status, setStatus] = useState(initialStatus);

  const toggleStyle = {
    display: 'inline-block',
    position: 'relative',
    width: '40px',
    height: '20px',
    cursor: 'pointer',
    backgroundColor: 'lightgrey',
    borderRadius: '10px',
  };

  const switchStyle = {
    position: 'absolute',
    width: '20px',
    height: '20px',
    borderRadius: '50%',
    transition: 'background-color 0.3s ease',
  };

  const activeSwitchStyle = {
    ...switchStyle,
    left: '0',
    backgroundColor: '#2ecc71', // Green for active status
  };

  const inactiveSwitchStyle = {
    ...switchStyle,
    left: '20px',
    backgroundColor: '#e74c3c', // Red for inactive status
  };

  const handleToggle = () => {
    const newStatus = status === 'active' ? 'inactive' : 'active';
    setStatus(newStatus);
    onToggle(newStatus);
  };

  return (
    <label style={toggleStyle} className="employee-status-toggle">
      <span
        style={status === 'active' ? activeSwitchStyle : inactiveSwitchStyle}
        onClick={handleToggle}
      ></span>
    </label>
  );
};

export default EmployeeStatusToggle;
```

### Code the Pages

With the components coded, you can proceed to code the created pages. Each page will import its respective components accordingly and display them on the browser. Also, note that the name of each page is the web route to access the page on the browser.
Start by copying and pasting the following code into the **add-employee.js** file to import **EmployeeForm.js** and **Header.js**:

```
import React from 'react';
import '../app/globals.css'
import Header from '../components/Header';
import EmployeeForm from '../components/EmployeeForm';

const AddEmployee = () => {
  const containerStyle = { textAlign: 'center' };

  return (
    <div style={containerStyle}>
      <Header />
      <h2>Add Employee</h2>
      <EmployeeForm />
    </div>
  );
};

export default AddEmployee;
```

Copy and paste the following code snippet into the **list-employees.js** file to import the **Header.js** and **EmployeeStatusToggle.js** components and a table element that displays all the created employees:

```
import React, { useEffect, useState } from 'react';
import Header from '../components/Header';
import EmployeeStatusToggle from '../components/EmployeeStatusToggle';

const ListEmployees = () => {
  const [employeeData, setEmployeeData] = useState([]);

  useEffect(() => {
    // Retrieve employee data from localStorage on component mount
    const storedEmployeeData = JSON.parse(localStorage.getItem('employeeData')) || [];
    setEmployeeData(storedEmployeeData);
  }, []);

  const handleStatusToggle = (index, newStatus) => {
    // Update the status of the employee at the specified index
    const updatedEmployeeData = [...employeeData];
    updatedEmployeeData[index].status = newStatus;
    setEmployeeData(updatedEmployeeData);

    // Update the data in localStorage
    localStorage.setItem('employeeData', JSON.stringify(updatedEmployeeData));
  };

  const containerStyle = { textAlign: 'center' };

  const tableStyle = {
    borderCollapse: 'collapse',
    width: '100%',
    marginTop: '20px',
  };

  const thStyle = {
    backgroundColor: '#cacaca',
    color: '#171717',
    padding: '10px',
    textAlign: 'center',
  };

  const tdStyle = {
    border: '1px solid #ccc',
    padding: '10px',
    textAlign: 'left',
  };

  return (
    <div style={containerStyle}>
      <Header />
      <h2>List of Employees</h2>
      <table style={tableStyle}>
        <thead>
          <tr>
            <th style={thStyle}>Name</th>
            <th style={thStyle}>Email</th>
            <th style={thStyle}>Phone</th>
            <th style={thStyle}>Country Code</th>
            <th style={thStyle}>Company</th>
            <th style={thStyle}>Role</th>
            <th style={thStyle}>Employment Date</th>
            <th style={thStyle}>ID Card Type</th>
            <th style={thStyle}>ID Card Number</th>
            <th style={thStyle}>Bank Name</th>
            <th style={thStyle}>Bank Account Number</th>
            <th style={thStyle}>Salary Amount</th>
            <th style={thStyle}>Status</th>
          </tr>
        </thead>
        <tbody>
          {employeeData.map((employee, index) => (
            <tr key={index}>
              <td style={tdStyle}>{employee.firstName} {employee.lastName}</td>
              <td style={tdStyle}>{employee.email}</td>
              <td style={tdStyle}>{employee.phoneNumber}</td>
              <td style={tdStyle}>{employee.countryCode}</td>
              <td style={tdStyle}>{employee.company}</td>
              <td style={tdStyle}>{employee.role}</td>
              <td style={tdStyle}>{employee.employmentDate}</td>
              <td style={tdStyle}>{employee.idCardType}</td>
              <td style={tdStyle}>{employee.idCardNumber}</td>
              <td style={tdStyle}>{employee.bankName}</td>
              <td style={tdStyle}>{employee.bankAccountNumber}</td>
              <td style={tdStyle}>{employee.salaryAmount}</td>
              <td style={tdStyle}>
                <EmployeeStatusToggle
                  initialStatus={employee.status}
                  onToggle={(newStatus) => handleStatusToggle(index, newStatus)}
                />
              </td>
            </tr>
          ))}
        </tbody>
      </table>
    </div>
  );
};

export default ListEmployees;
```

After adding the code snippet, go to `src/app/globals.css` and `src/app/page.module.css`.
Replace all the CSS styles that came with the Next.js installation with the following CSS:

```
body {
  margin: 0px;
}
```

You also need to replace the default `src/app/page.js` code with the following snippet, which creates the landing page of the application that routes users to the "Add Employee" and "List Employees" pages:

```
import React from 'react';
import Header from '../components/Header';

const Home = () => {
  const containerStyle = { textAlign: 'center' };

  const actionUrlsStyle = {
    listStyle: 'none',
    padding: 0,
    display: 'flex',
    justifyContent: 'center',
  };

  const actionUrlItemStyle = { margin: '0 10px' };

  return (
    <div style={containerStyle}>
      <Header />
      <h2>Welcome</h2>
      <h3>Use the navigation below to add or view employees</h3>
      <ul style={actionUrlsStyle} className="action-urls">
        <li style={actionUrlItemStyle} className="action-url-item">
          <a href={`/add-employee`}>Add Employee</a>
        </li>
        <li style={actionUrlItemStyle} className="action-url-item">
          <a href={`/list-employees`}>List Employees</a>
        </li>
      </ul>
    </div>
  );
};

export default Home;
```

Before moving on to the next step, you also need to create a helper file that will contain the list of currency codes and their corresponding country codes. This helper file will help you automatically determine the payout currency based on the selected beneficiary country.
Change the directory to `/src` and create a new directory there called `utilities` using the following command:

```
mkdir utilities
```

Then, create a new file called **currencyCodes.js** in the `/utilities` directory using the following command:

```
touch currencyCodes.js
```

Copy the code snippet below into the newly created **currencyCodes.js**:

```
const CurrencyCodes = [
  { code: 'USD', country_code: 'US' },
  { code: 'GBP', country_code: 'GB' },
  { code: 'PHP', country_code: 'PH' },
  { code: 'CAD', country_code: 'CA' },
  { code: 'AUD', country_code: 'AU' },
  { code: 'EUR', country_code: 'DE' },
  { code: 'EUR', country_code: 'FR' }, // France also uses EUR
  { code: 'JPY', country_code: 'JP' },
  { code: 'BRL', country_code: 'BR' }
];

export { CurrencyCodes };
```

### Retrieve Your Rapyd API Keys

At this point, you are done creating and styling the pages for the application. The final steps involve integrating the Rapyd Disburse API into the app. Before working with the API, you'll need your API keys. To locate them, [log in to the Rapyd dashboard](https://dashboard.rapyd.net/login) with your login details.

Once you're logged in, click the **Developers** link on the sidebar, and you should be routed to the Rapyd Credential Details page. You will see your access and secret keys. Copy both keys and save them in a safe place; you will need them in the later part of this tutorial.

![Rapyd API keys](https://i.imgur.com/xrrGtcq.png)

### Integrate the Rapyd Disburse API (Set Up Base Functions)

The next step is to create another helper file for the functions that set up the Rapyd signature and header data.
To do that, change the directory to the `/utilities` directory and create a file called **rapyd.js** using the following command:

```
cd /path/to/app/src/utilities
touch rapyd.js
```

Copy and paste the code snippet below into the newly created **rapyd.js** file, replacing `{{YOUR_SECRET_KEY}}` and `{{YOUR_ACCESS_KEY}}` with your Rapyd secret key and access key to generate the required header, salt, and signature (see the comments within the code for more information about what's happening):

```
import https from 'https';
import crypto from 'crypto';

const secretKey = "{{YOUR_SECRET_KEY}}";
const accessKey = "{{YOUR_ACCESS_KEY}}";
const log = false;

async function makeRequest(method, urlPath, body = null) {
  try {
    const httpMethod = method; // get|put|post|delete - the signature uses the lowercase form.
    const httpBaseURL = "sandboxapi.rapyd.net";
    const httpURLPath = urlPath; // Portion after the base URL.
    const salt = generateRandomString(8); // Randomly generated for each request.
    const idempotency = new Date().getTime().toString();
    const timestamp = Math.round(new Date().getTime() / 1000); // Current Unix time (seconds).
    const signature = sign(httpMethod, httpURLPath, salt, timestamp, body);

    const options = {
      hostname: httpBaseURL,
      port: 443,
      path: httpURLPath,
      method: httpMethod,
      headers: {
        'Content-Type': 'application/json',
        access_key: accessKey,
        salt: salt,
        timestamp: timestamp,
        signature: signature,
        idempotency: idempotency,
      },
    };

    return await httpRequest(options, body);
  } catch (error) {
    console.error("Error generating request options", error);
    throw error;
  }
}

function sign(method, urlPath, salt, timestamp, body) {
  try {
    let bodyString = "";
    if (body) {
      bodyString = JSON.stringify(body); // Stringified JSON without whitespace.
      bodyString = bodyString == "{}" ? "" : bodyString;
    }

    let toSign = method.toLowerCase() + urlPath + salt + timestamp + accessKey + secretKey + bodyString;
    log && console.log(`toSign: ${toSign}`);

    let hash = crypto.createHmac('sha256', secretKey);
    hash.update(toSign);
    const signature = Buffer.from(hash.digest("hex")).toString("base64");
    log && console.log(`signature: ${signature}`);

    return signature;
  } catch (error) {
    console.error("Error generating signature");
    throw error;
  }
}

function generateRandomString(size) {
  try {
    return crypto.randomBytes(size).toString('hex');
  } catch (error) {
    console.error("Error generating salt");
    throw error;
  }
}

async function httpRequest(options, body) {
  return new Promise((resolve, reject) => {
    try {
      let bodyString = "";
      if (body) {
        bodyString = JSON.stringify(body);
        bodyString = bodyString == "{}" ? "" : bodyString;
      }

      log && console.log(`httpRequest options: ${JSON.stringify(options)}`);

      const req = https.request(options, (res) => {
        let response = {
          statusCode: res.statusCode,
          headers: res.headers,
          body: '',
        };

        res.on('data', (data) => {
          response.body += data;
        });

        res.on('end', () => {
          response.body = response.body ? JSON.parse(response.body) : {};
          log && console.log(`httpRequest response: ${JSON.stringify(response)}`);

          if (response.statusCode !== 200) {
            return reject(response);
          }

          return resolve(response);
        });
      });

      req.on('error', (error) => {
        return reject(error);
      });

      req.write(bodyString);
      req.end();
    } catch (err) {
      return reject(err);
    }
  });
}

export { makeRequest };
```

This snippet also contains the generic HTTP functions that help consume all API endpoints. All you have to do to consume any API endpoint is pass the method, endpoint URL path, and payload into `makeRequest`. It's always best practice to supply `{{YOUR_SECRET_KEY}}` and `{{YOUR_ACCESS_KEY}}` via environment variables, especially when integrating into real-world applications.
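As a sketch of that best practice, you could read the keys from environment variables at startup instead of hard-coding them. The variable names `RAPYD_ACCESS_KEY` and `RAPYD_SECRET_KEY` are assumptions here, not something Rapyd requires; use whatever names fit your deployment and set them via a `.env.local` file or your hosting provider:

```javascript
// Hypothetical helper for loading Rapyd credentials from the environment.
// RAPYD_ACCESS_KEY / RAPYD_SECRET_KEY are assumed names, not mandated by Rapyd.
function loadRapydKeys(env = process.env) {
  const accessKey = env.RAPYD_ACCESS_KEY;
  const secretKey = env.RAPYD_SECRET_KEY;

  // Fail fast at startup rather than sending unsigned requests later.
  if (!accessKey || !secretKey) {
    throw new Error('RAPYD_ACCESS_KEY and RAPYD_SECRET_KEY must be set');
  }

  return { accessKey, secretKey };
}
```

In **rapyd.js** you would then replace the hard-coded constants with something like `const { accessKey, secretKey } = loadRapydKeys();`, keeping the secrets out of source control.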
### Integrate the Payout Method Types API

Before you can send out the payment, you need to check that the employee's bank is supported by Rapyd. You can do this by confirming which payout methods are available in their country when adding employees. To do that, you first need to create the `/api` route by changing the directory and creating the `/api` directory using the following command:

```
cd /path/to/app/src/pages
mkdir api
```

Create a new file called **get-banks.js** using the `touch get-banks.js` command. Then, copy and paste the code snippet below into this file to allow it to consume the Payout Method Types API:

```
import { makeRequest } from '../../utilities/rapyd.js'; // Import Rapyd base functions
import { CurrencyCodes } from '../../utilities/currencyCodes.js'; // Import currency and country codes

export default async function handler(req, res) { // Function to handle the Next.js request
  const country = req.body.country;
  let currency;

  // Get the selected country's currency code
  CurrencyCodes.forEach((curcode) => {
    if (curcode.country_code == country) {
      currency = curcode.code;
    }
  });

  // Set the Rapyd API URL
  const url = '/v1/payouts/supported_types?category=bank&beneficiary_country=' + country + '&payout_currency=' + currency;

  // Make the API request to Rapyd
  const response = await makeRequest('get', url, {});

  // Return the request response
  res.status(200).json({ data: response })
}
```

### Generate, Fund, and Retrieve a Rapyd E-wallet ID

The Rapyd e-wallet can be created via your Rapyd [Client Portal](https://dashboard.rapyd.net/login) or API. In this tutorial, you'll use the API method to create the e-wallet. Change the directory to `/api` using the following command:

```
cd /path/to/app/src/pages/api
```

Use the `touch create-wallet.js` command to create a new file called **create-wallet.js**.
Then, copy and paste the following code snippet into the file so it can consume the Create Wallet API (make sure you fill in all the relevant details in the placeholders):

```
import { makeRequest } from '../../utilities/rapyd.js'; // Import Rapyd base functions

export default async function handler(req, res) { // Function to handle the Next.js request
  const body = {
    first_name: '{{Your First Name}}', // Your First Name
    last_name: '{{Your Last Name}}', // Your Last Name
    ewallet_reference_id: '{{Your Preferred Ref ID}}', // Your Preferred Ref ID
    metadata: {
      merchant_defined: true
    },
    type: 'company',
    contact: {
      phone_number: '{{Your Phone Number}}', // Your Phone Number
      email: '{{Your Email}}', // Your Email
      first_name: '{{Your First Name}}', // Your First Name
      last_name: '{{Your Last Name}}', // Your Last Name
      mothers_name: '{{Your Mothers Name}}', // Your Mother's Name
      contact_type: 'business',
      address: { // Your Address
        name: '{{Name}}',
        line_1: '{{Line 1}}',
        line_2: '',
        line_3: '',
        city: '{{City}}',
        state: '{{State}}',
        country: '{{Country}}',
        zip: '{{Zip}}',
        phone_number: '{{Phone}}',
        metadata: {
          number: 2
        },
        canton: '',
        district: ''
      },
      identification_type: '{{Your Identification Type}}', // Your Identification Type, e.g., "PA"
      identification_number: '{{Your Identification Number}}', // Your Identification Number, e.g., "1234567890"
      date_of_birth: '{{Your Date of Birth}}', // Your Date of Birth
      country: '{{Your Country}}', // Your Country
      metadata: {
        merchant_defined: true
      },
      business_details: {
        entity_type: 'association',
        name: "{{Your Company Name}}", // Your Company Name
        registration_number: "{{Your company registration number}}", // Your company registration number
        industry_category: 'company',
        industry_sub_category: "{{Your business category}}", // Your business category
        address: { // Your Address
          name: '{{Name}}',
          line_1: '{{Line 1}}',
          line_2: '',
          line_3: '',
          city: '{{City}}',
          state: '{{State}}',
          country: '{{Country}}',
          zip: '{{Zip}}',
          phone_number: '{{Phone}}',
          metadata: {
            merchant_defined: true
          }
        }
      }
    }
  };

  // Set the Rapyd API URL
  const url = '/v1/user';

  // Make the API request to Rapyd
  const response = await makeRequest('POST', url, body);

  // Return the request response
  res.status(200).json({ data: response })
}
```

Using `touch fund-wallet.js`, create a new file called **fund-wallet.js**. Copy and paste in the following code to fund the wallet you just created with a fixed amount, where `{{YOUR WALLET ID}}` is the ID of the wallet you just created:

```
import { makeRequest } from '../../utilities/rapyd.js'; // Import Rapyd base functions

export default async function handler(req, res) { // Function to handle the Next.js request
  const data = {
    "amount": 500000, // fixed amount of 500,000
    "currency": "USD",
    "ewallet": "{{YOUR WALLET ID}}",
    "metadata": {
      "merchant_defined": true
    }
  }

  const url = '/v1/account/deposit'
  const response = await makeRequest('POST', url, data);

  // Return the request response
  res.status(200).json({ data: response })
}
```

Change the directory to `/components` using the following command:

```
cd /path/to/app/src/components
```

Create a new file called **WalletDetails.js** using the `touch WalletDetails.js` command.
Then, copy and paste in the following code snippet, which shows the e-wallet ID if one exists or presents a call to action to create an e-wallet when clicked if one doesn't:

```
import React, { useState, useEffect } from 'react';

const style = {
  cursor: 'pointer',
  color: '#2525aa',
}

const WalletDetails = () => {
  const [WalletID, setWalletID] = useState("");

  useEffect(() => {
    const storedWalletID = localStorage.getItem('WalletID') || "";
    setWalletID(storedWalletID);
  }, []);

  // Function to generate the wallet ID
  const generateWalletId = async () => {
    try {
      fetch('/api/create-wallet', {
        method: 'GET',
        headers: {
          'Content-type': 'application/json; charset=UTF-8',
        }
      })
        .then((response) => response.json())
        .then((resJson) => {
          var resData = resJson.data.body.data.id
          setWalletID(resData);
          localStorage.setItem('WalletID', resData);
        });
    } catch (error) {
      console.error("Error in API route");
    }
  };

  // Fund the wallet with a fixed amount
  const fundWallet = async () => {
    try {
      fetch('/api/fund-wallet', {
        method: 'GET',
        headers: {
          'Content-type': 'application/json; charset=UTF-8',
        }
      })
        .then((response) => response.json())
        .then((resJson) => {
          alert("Fixed amount added successfully")
        });
    } catch (error) {
      console.error("Error in API route");
    }
  };

  return (
    <div>
      {WalletID == "" ?
        <small onClick={generateWalletId} style={style}>Generate Wallet ID</small>
        :
        <div>
          <small>{WalletID}</small>
          <br></br>
          <small onClick={fundWallet} style={style}>Fund Wallet</small>
        </div>
      }
    </div>
  );
};

export default WalletDetails;
```

### Integrate the Rapyd Payouts APIs (Create Payout, Confirm Payout, and Complete Payout)

You'll now add the necessary code to consume the Rapyd Payouts APIs and ensure a successful payout. The Create Payout endpoint initiates a payout request. If the payout request is between multiple currencies, it will require that you confirm the foreign exchange by consuming the Confirm Payout endpoint.
Once that has been done and the payout has a status of "Created", you can proceed to consume the Complete Payout endpoint to finalize the payout. Start by changing to the `/api` directory using the following command:

```
cd /path/to/app/src/pages/api
```

Afterward, use `touch request-payout.js` to create a new file called **request-payout.js**. Then, copy and paste the following snippet into the file so it can consume the Create Payout API:

```
import { makeRequest } from '../../utilities/rapyd.js'; // Import Rapyd base functions
import { CurrencyCodes } from '../../utilities/currencyCodes.js'; // Import currency and country codes

export default async function handler(req, res) { // Function to handle the Next.js request
  var payout_currency;

  // Get the selected country's currency code
  CurrencyCodes.forEach((curcode) => {
    if (curcode.country_code == req.body.country) {
      payout_currency = curcode.code;
    }
  });

  const body = {
    "beneficiary": {
      "payment_type": "regular",
      "address": "1 Main Street", // beneficiary address
      "city": "Anytown", // beneficiary city
      "country": req.body.country, // beneficiary country code
      "first_name": req.body.first_name,
      "last_name": req.body.last_name,
      "state": "NY", // beneficiary state
      "postcode": "10101", // beneficiary postcode
      "aba": "573675777", // for US bank accounts
      "iban": "DE75512108001245126199", // customer IBAN
      "account_number": req.body.account_number,
      "identification_type": req.body.identification_type,
      "identification_value": req.body.identification_value,
      "phone_number": req.body.phone_number,
    },
    "beneficiary_country": req.body.country,
    "beneficiary_entity_type": "individual",
    "description": "Salary payout - wallet to bank account", // your description
    "payout_method_type": req.body.bank,
    "ewallet": "ewallet_8ce9f7a26b296c98ed2ee32028bead0a", // your wallet ID
    "metadata": {
      "merchant_defined": true
    },
    "payout_amount": req.body.amount,
    "payout_currency": payout_currency,
    "confirm_automatically": "true",
    "sender": {
      "first_name": "John", // your first name
      "last_name": "Doe", // your last name
      "identification_type": "work_permit", // your ID type
      "identification_value": "asdasd123123", // your ID number
      "phone_number": "19019019011", // your phone number
      "occupation": "professional", // your occupation
      "source_of_income": "business",
      "date_of_birth": "11/12/1913", // your DOB
      "address": "1 Main Street", // your address
      "postcode": "12345", // your postcode
      "country": "US",
      "city": "Anytown", // your city
      "state": "NY", // your state
      "purpose_code": "investment_income",
      "beneficiary_relationship": "employee"
    },
    "sender_country": "US", // your country
    "sender_currency": "USD", // your currency
    "sender_entity_type": "individual",
    "statement_descriptor": "Salary payout" // unstructured remittance information
  }

  // Set the Rapyd API URL
  const url = '/v1/payouts';

  // Make the API request to Rapyd
  const response = await makeRequest('POST', url, body);

  // Return the request response
  res.status(200).json({ data: response })
}
```

Next, create a new file called **confirm-payout.js** using the `touch confirm-payout.js` command. Copy and paste the code snippet below into the file to allow it to consume the Confirm Payout API:

```
import { makeRequest } from '../../utilities/rapyd.js'; // Import Rapyd base functions

export default async function handler(req, res) { // Function to handle the Next.js request
  // Set the Rapyd API URL
  const payoutID = req.body.payoutID;
  const url = '/v1/payouts/confirm/' + payoutID;

  // Make the API request to Rapyd
  const response = await makeRequest('POST', url, {});

  // Return the request response
  res.status(200).json({ data: response })
}
```

Lastly, create a new file called **complete-payout.js** using the `touch complete-payout.js` command.
Copy and paste the code snippet below into the file to allow it to consume the Complete Payout API:

```
import { makeRequest } from '../../utilities/rapyd.js'; // Import Rapyd base functions

export default async function handler(req, res) { // Function to handle the Next.js request
  // Set the Rapyd API URL
  const payoutID = req.body.payoutID;
  const amount = req.body.amount;
  const url = '/v1/payouts/complete/' + payoutID + '/' + amount;

  // Make the API request to Rapyd
  const response = await makeRequest('POST', url, {});

  // Return the request response
  res.status(200).json({ data: response })
}
```

You could alternatively use the Complete Payout webhook to complete the payment, but for the purpose of simplicity, this tutorial uses only the API. Go back to `/pages/list-employees.js` and add the imports below after the `EmployeeStatusToggle` import:

```
import WalletDetails from '../components/WalletDetails';
import PayOut from '../components/PayOut';
```

Add `<WalletDetails />` to the line after the `<h2>` tag. Additionally, add a new table head and body cell for the `PayOut` action using the code snippet below:

```
<th style={thStyle}>Action</th>

<td style={tdStyle}>
  <PayOut employee={employee} />
</td>
```

You then need to create the `PayOut` component. Change to the components directory with `cd /path/to/app/src/components` and use `touch PayOut.js` to create a new file.
Copy and paste the code snippet below into the file to enable the functions that trigger the payout API routes you created in the previous section:

```
import React, { useState } from 'react';

const PayOut = ({ employee }) => {
  const style = {
    backgroundColor: '#007BFF',
    color: '#FFFFFF',
    padding: '10px 20px',
    border: 'none',
    borderRadius: '5px',
    cursor: 'pointer',
    fontSize: '16px'
  }

  const [payoutText, setPayoutText] = useState("Payout");

  // Function to complete the payout
  const CompletePayout = async (resData) => {
    try {
      fetch('/api/complete-payout', {
        method: 'POST',
        body: JSON.stringify({
          payoutID: resData.id,
          amount: resData.sender_amount,
        }),
        headers: {
          'Content-type': 'application/json; charset=UTF-8',
        }
      })
        .then((response) => response.json())
        .then((resJson) => {
          var resData = resJson.data.body
          console.log("complete-payout", resData)
          setPayoutText("Paid")
        });
    } catch (error) {
      console.error("Error in API route");
    }
  };

  // Function to confirm FX and complete the payout
  const confirmFXandCompletePayout = async (resData) => {
    try {
      fetch('/api/confirm-payout', {
        method: 'POST',
        body: JSON.stringify({
          payoutID: resData.id,
        }),
        headers: {
          'Content-type': 'application/json; charset=UTF-8',
        }
      })
        .then((response) => response.json())
        .then((resJson) => {
          var confirmResData = resJson.data.body.data
          CompletePayout(confirmResData)
          console.log("confirmed-fx", confirmResData)
        });
    } catch (error) {
      console.error("Error in API route");
    }
  };

  const handleClick = () => {
    setPayoutText("Paying...")
    try {
      fetch('/api/request-payout', {
        method: 'POST',
        body: JSON.stringify({
          country: employee.countryCode,
          bank: employee.bankName,
          amount: employee.salaryAmount,
          first_name: employee.firstName,
          last_name: employee.lastName,
          account_number: employee.bankAccountNumber,
          identification_type: employee.idCardType,
          identification_value: employee.idCardNumber,
          phone_number: employee.phoneNumber,
        }),
        headers: {
          'Content-type': 'application/json; charset=UTF-8',
        }
      })
        .then((response) => response.json())
        .then((resJson) => {
          var resData = resJson.data.body.data
          switch (resData.status) {
            case "Confirmation":
              confirmFXandCompletePayout(resData)
              break;
            case "Created":
              CompletePayout(resData)
              break;
            case "Hold":
              setPayoutText("Hold")
              break;
            case "Expired":
              setPayoutText("Expired")
              break;
            case "Pending":
              setPayoutText("Pending")
              break;
            case "Canceled":
              setPayoutText("Canceled")
              break;
            case "Completed":
              setPayoutText("Paid")
              break;
            case "Declined":
              setPayoutText("Declined")
              break;
            case "Error":
              setPayoutText("Error")
              break;
            default:
              setPayoutText("Error")
              break;
          }
          console.log("request-payout", resData)
        });
    } catch (error) {
      console.error("Error in API route");
    }
  };

  return (
    <div>
      <button style={style} onClick={handleClick}>{payoutText}</button>
    </div>
  );
};

export default PayOut;
```

### Test the Application

Now that you've created all the necessary components and pages and integrated the Rapyd APIs, it's time to demo the application. Start by running the application with the `npm run dev` command. You should see a response like the screenshot below.

![Running the application](https://i.imgur.com/UgH6Ugc.png)

You should also see the server URL. In the example, it's `http://localhost:3000`. Visit the URL and you should see a "Payroll Application" page.

![Application page](https://i.imgur.com/qK75ezd.png)

Click **Add Employee** to add a new employee.

![Add Employee page](https://i.imgur.com/p6ExFlL.png)

When you are done filling out the form, click **Submit** and you'll see an alert indicating that an employee was added successfully.

![Employee added alert](https://i.imgur.com/sjy1IY8.png)

After you acknowledge the alert, you will be redirected to the "List of Employees" page.

![List of Employees page](https://i.imgur.com/dEdxeEZ.png)

Go back to the "Add Employee" page and add another employee. You can now try generating and funding a wallet. You can create as many wallets as necessary for the complexity of your application.
However, for the purpose of this tutorial, you will be creating and using only one wallet. Click **Generate Wallet ID** under the "List of Employees" header to create your wallet. After the wallet has been created successfully, you should see the wallet ID and a link to fund the wallet.

![Wallet ID](https://i.imgur.com/2VswIEB.png)

Fund the wallet with a fixed amount of US$500,000 by clicking **Fund Wallet**. You should see an alert saying "Fixed amount added successfully".

![Fixed amount added successfully alert](https://i.imgur.com/1T5IzZX.png)

Finally, try paying the two employees by clicking **Payout** on the right side of the table. While the payout is being processed, the button text should change to "Paying…", and it will change to "Paid" when the payout is completed. Note that it can also show a different status, such as "Hold", "Expired", or "Declined", depending on the bank option that was selected during employee creation. See the [official documentation](https://docs.rapyd.net/en/create-payout.html#:~:text=to%2035%20characters.-,status,-Indicates%20the%20status) for the list of statuses and their meanings. To simulate a successful transaction, for Germany, you can select **Eurozone SEPA payout**, and for the Philippines, you can select **Bank Transfer to ANZ Bank in the Philippines**.

## Conclusion

This article provided a brief introduction to Rapyd's Disburse API and a step-by-step tutorial for incorporating it into a simple payroll application using Next.js. You can view your payout history and transactions, switch between the production and sandbox environments, and more through your Client Portal. The [complete code](https://github.com/Rapyd-Samples/rapyd-basic-payroll-app-with-disbursep) for this tutorial is also available on GitHub. Rapyd offers a robust and reliable payment gateway solution that is fast and secure for both local and international business transactions.
It supports more than nine hundred payment options and more than sixty-five currencies from different countries. Simply register on the [Client Portal](https://dashboard.rapyd.net/sign-up) and follow the [Get Started](https://docs.rapyd.net/en/get-started.html) guide to start your integration.
uxdrew
1,902,798
System.out.println("Introdução ao Java")
public class HelloWorld { public static void main(String[] args) { ...
0
2024-06-27T16:05:52
https://dev.to/malheiros/systemoutprintlnintroducao-ao-java-35k5
java, programming, learning
```java
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}
```

## A Verbose Language?

When we start studying Java and look at the code above, we may be a little startled by the number of words needed to print a simple **Hello, World!**. This can give the impression that Java is a difficult language, which can discourage beginners from exploring it further right from the start, creating a kind of prejudice without a deeper understanding of the language.

## What Are These Words: public, class, static, void, main...

When we run the code above, the Java Virtual Machine (JVM) looks for the `main` method and executes it. Applications normally have only a single method of this kind, which, as the name suggests, is the application's main method: its starting point.

`public` is an access modifier that indicates the visibility of the **HelloWorld** class, allowing it to be accessed from any other package. Besides it, there are the `protected` and `private` modifiers, which will be covered at another time.

`class` is the reserved word used to declare a class, in this case **HelloWorld**. It is important to remember that the class name must match the name of the Java file where it is defined (in this case, HelloWorld.java).

`static` indicates that the `main` method belongs to the HelloWorld class itself and not to specific instances of that class. This means the method can be called without creating an object of the HelloWorld class.

`void` is the return type of the `main` method, meaning that the method does not return any value.

`String[] args` is the parameter of the `main` method. `args` is an array of strings that lets you pass command-line arguments to the Java program when it runs.

With these definitions understood, we can see that Java is an **imperative** language. Unlike declarative languages, where we say what we want and the language decides how to carry out the process, in imperative languages we need to provide instructions on how the process should be executed. This gives us some benefits, such as:

1. Detailed control of the execution flow: In imperative languages, we have explicit control over how the program executes each step. This helps programmers understand exactly what is happening and optimize the code's performance.
2. Easier debugging: Since we specify each step of the process, it is easier to identify and fix errors when they occur. Error messages usually indicate clearly where a problem occurred in imperative code.
3. Performance: In many cases, imperative languages allow more direct and efficient optimizations, since the programmer controls how system resources are used.
4. Adaptability to different contexts: Imperative programming is quite flexible and can be adapted to solve a wide range of problems, from the simplest scripts to complex applications.
5. State control: In imperative languages, the program's state is explicitly manipulated through variables and data structures. This makes it easier to manage mutable data and control the program's internal state.

The fourth item leads us to a very important concept that we will cover at another time: **Object Orientation**.

---

In this article, we explored how the simple act of printing "Hello, World!" in Java introduces us to fundamental concepts of the language. Analyzing the keywords used in the code reveals the basic structure of a Java program and their meaning within the context of imperative programming.
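To make the imperative/declarative contrast discussed above concrete, here is a short, hypothetical example that solves the same problem (summing a list of numbers) in both styles within Java itself. The imperative version spells out each step; the declarative version, using the Java Streams API, only states the desired result:

```java
import java.util.List;

public class ImperativeVsDeclarative {
    // Imperative style: we describe HOW to sum, step by step,
    // manipulating state (the 'total' variable) explicitly.
    static int sumImperative(List<Integer> numbers) {
        int total = 0;
        for (int n : numbers) {
            total += n;
        }
        return total;
    }

    // Declarative style: we state WHAT we want (the sum) and let
    // the Streams API decide how to compute it.
    static int sumDeclarative(List<Integer> numbers) {
        return numbers.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4);
        System.out.println(sumImperative(numbers));  // prints 10
        System.out.println(sumDeclarative(numbers)); // prints 10
    }
}
```

Both methods produce the same result; the difference is in who controls the individual steps, which is exactly the control (and responsibility) that the benefits listed above describe.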
By understanding the principles of imperative programming presented here (flow control, easier debugging, performance optimization, adaptability, and state management), beginning programmers are equipped with essential tools to build and understand robust and efficient Java programs. In future articles, we will explore more advanced concepts, such as object orientation, which further expand Java's capabilities and open doors to the development of complex and scalable applications.

Now that you understand the fundamentals, you are ready to explore the vast universe of Java programming more deeply. Stay motivated and keep exploring new concepts to improve your skills as a developer.

I hope this article has been useful in starting your journey with the Java language. If you have any questions or suggestions, feel free to share them in the comments below.
malheiros
1,902,795
Simplifying State Management in React with Zustand
Effective state management is vital for building resilient and scalable React applications. While...
0
2024-06-27T16:02:20
https://dev.to/sheraz4194/simplifying-state-management-in-react-with-zustand-g4k
nextjs, react, usestate, zustand
Effective state management is vital for building resilient and scalable React applications. While powerful libraries like Redux and MobX are available, they can sometimes seem too elaborate for smaller projects or straightforward use cases. Enter Zustand, a lightweight and intuitive state management library that simplifies the process without sacrificing flexibility. In this blog post, we'll explore what Zustand is, why you might want to use it, and how to get started.

## What is Zustand?

Zustand (pronounced "zoo-shtand", meaning "state" in German) is a compact, high-performance, and adaptable state management solution tailored for React applications. Developed by the team behind Jotai and React Spring, Zustand strives to deliver a concise and minimal API that prioritizes seamless usability and optimal performance.

## Why Use Zustand?

- Simplicity: Zustand's API is simple and intuitive, making it an attractive option for developers seeking to bypass the complexity and verbose coding typically required by more robust state management libraries.
- Excellent Performance: Zustand is built with performance in mind. It avoids unnecessary re-renders and ensures that your application remains fast and responsive.
- Flexibility: Zustand can be used for both global and local state management. It allows you to manage state in a way that best suits your application's needs.

## Getting Started with Zustand

Let's have a look at how to set up and use Zustand in a React application.

## Installation

First, you'll need to install Zustand. You can do this using npm or yarn:

`npm install zustand` or `yarn add zustand`

## Creating a Store

In Zustand, state is managed through a store.
A store is created by passing `create` a function that returns an object containing your state and any associated actions:

```
import { create } from 'zustand';

export const useStore = create((set) => ({
  count: 0,
  increase: () => set((state) => ({ count: state.count + 1 })),
  decrease: () => set((state) => ({ count: state.count - 1 })),
}));
```

In the example above, we create a store with an initial state containing a `count` property and two actions, `increase` and `decrease`, which update the `count` property. The result, `useStore`, is a React hook, exported here so it can be imported by any component.

```
import React from 'react';
import { useStore } from './store';

const ZustandDemo = () => {
  const { count, increase, decrease } = useStore();

  return (
    <div>
      <h1>{count}</h1>
      <button onClick={increase}>Increase</button>
      <button onClick={decrease}>Decrease</button>
    </div>
  );
};

export default ZustandDemo;
```

In this example, we use the `useStore` hook to retrieve the `count` state and the `increase` and `decrease` actions. We then render a simple counter component with buttons to increase and decrease the count.

## Handling Async Actions

Zustand also makes it easy to handle asynchronous actions. You can define async actions within your store using async functions.

```
const useStore = create((set) => ({
  count: 0,
  increase: () => set((state) => ({ count: state.count + 1 })),
  decrease: () => set((state) => ({ count: state.count - 1 })),
  fetchCount: async () => {
    const response = await fetch('/api/count');
    const data = await response.json();
    set({ count: data.count });
  },
}));
```

In the example above, we define a `fetchCount` action that fetches the count from an API and updates the state accordingly.

## Advanced Usage

### Middleware

Zustand supports middleware to enhance your store with additional functionality. For example, you can use the `devtools` middleware to add Redux-like devtools support to your store.
```
import { create } from 'zustand';
import { devtools } from 'zustand/middleware';

const useStore = create(
  devtools((set) => ({
    count: 0,
    increase: () => set((state) => ({ count: state.count + 1 })),
    decrease: () => set((state) => ({ count: state.count - 1 })),
  }))
);
```

### Persistence

To persist your state across page reloads, you can use the `persist` middleware.

```
import { create } from 'zustand';
import { persist } from 'zustand/middleware';

const useStore = create(
  persist(
    (set) => ({
      count: 0,
      increase: () => set((state) => ({ count: state.count + 1 })),
      decrease: () => set((state) => ({ count: state.count - 1 })),
    }),
    {
      name: 'count-storage', // Name of the storage item
      getStorage: () => localStorage, // Specify the storage type
    }
  )
);
```

In this example, we use the `persist` middleware to save the `count` state to `localStorage`.

## Conclusion

Zustand is a powerful yet simple state management solution for React and Next.js applications. Its minimal API, performance optimizations, and flexibility make it a great choice for both small and large projects. Whether you're building a complex application or a simple component, Zustand can help you manage your state easily. Give Zustand a try in your next project and experience the benefits of a lightweight and intuitive state management library!
sheraz4194
1,902,794
AI-Powered Education: How AI Will Transform Personalized Learning
Artificial intelligence in education designs personalized learning experiences to meet individual...
0
2024-06-27T15:57:35
https://www.techdogs.com/td-articles/trending-stories/ai-powered-education-how-ai-will-transform-personalized-learning
ai, education, ailearning, machinelearning
Artificial intelligence in education designs personalized learning experiences to meet individual student needs. This was unimaginable some time ago, but now it has completely changed the usual way of studying, providing teachers with tools to personalize learning experiences, thereby going beyond normal teaching methods. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c8hjbc2h98zbprgq4rqd.gif) [Source](https://tenor.com/view/arnold-schwarzenegger-kindergarten-cop-having-fun-were-having-fun-now-were-having-fun-gif-17445394) **The Influence of AI on Personalized Learning** – In the era of personalized learning, it can't be doubted that [artificial intelligence (AI)](https://www.techdogs.com/category/ai) has a significant influence on education. It marries technology and education seamlessly by using predictive analytics, natural language processing and adaptive AI algorithms. Think of an AI system that is a knowledgeable tutor who sifts through massive amounts of data to ascertain the comprehension level, involvement and requirements of a learner after which it develops an individualized learning schedule complete with multimedia, instant feedback and interactive engagements. **How AI Algorithms Enable Personalized Education** 1. **[Machine Learning (ML)](https://www.techdogs.com/td-articles/trending-stories/a-complete-guide-on-overfitting-and-underfitting-in-machine-learning)**: Machine learning algorithms are fundamental because they use empirical data to improve their capacity. They predict future performance, recommend resources, and recognize bottlenecks in the learning process. 2. 
**[Natural language processing (NLP)](https://www.techdogs.com/td-articles/curtain-raisers/natural-language-processing-nlp-software-101)**: Just like a patient tutor, NLP makes it possible for artificial intelligence (AI) to understand and communicate with students at a deeper level through the provision of accurate feedback, encouraging mastery of a given topic. 3. **Computer Vision**: With artificial intelligence, a [computer vision](https://www.techdogs.com/td-articles/trending-stories/image-recognition-the-power-of-computer-vision) system can detect perplexity or epiphanies by analyzing the facial expressions and gestures that accompany these cognitive states, leading to real-time adjustments that keep students engaged in the learning process. 4. **Neural networks** can mimic certain aspects of human cognitive processes, recognizing intricate patterns and predicting what learners would actually be interested in. This makes it possible for them to suggest learning resources or identify concealed abilities. **Challenges to AI-Driven Personalized Learning** 1. **Ethical Considerations**: The integration of AI in education raises ethical concerns. Should AI be allowed to handle decision-making entirely? Does it understand emotions as well as psychological requirements? 2. **Privacy Concerns**: While safeguarding the confidentiality of student information, institutions must strike a balance between analytic value and privacy rights, considering the safety and transparency of the data. 3. **Human-AI Collaboration**: Human instructors provide invaluable knowledge, emotional support, and adaptability, highlighting their essential role alongside AI. It is crucial to create a frictionless ecosystem with a complementary relationship between humans and AI. 
**The Future of AI in Education** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xe0qrd58ovbpx3mgz4fg.jpg) [Source](https://www.freepik.com/premium-ai-image/robot-with-screen-him-is-showing-presentation-about-robot_60893376.htm#page=3&query=ai%20education&position=6&from_view=keyword&track=ais) The future of AI-powered education promises learning plans tailored to each individual, supported by AI-driven assistants. With repetitive tasks taken care of by AI, teachers are free to experiment with creative teaching methods. For educators, students, and policymakers alike, curiosity, caution, and collaboration are essential to achieving all that AI could offer responsibly and ethically. By embracing AI, we can transform learning into something quite different from ordinary education and make it serve generations to come. For further details, please read the full article [[here](https://www.techdogs.com/td-articles/trending-stories/ai-powered-education-how-ai-will-transform-personalized-learning)]. Dive into our content repository of the latest [tech news](https://www.techdogs.com/resource/tech-news), a diverse range of articles spanning [introductory guides](https://www.techdogs.com/resource/td-articles/curtain-raisers), product reviews, [trends ](https://www.techdogs.com/resource/td-articles/techno-trends)and more, along with engaging interviews, up-to-date [AI blogs](https://www.techdogs.com/category/ai) and hilarious [tech memes](https://www.techdogs.com/resource/td-articles/tech-memes)! Also explore our collection of [branded insights](https://www.techdogs.com/resource/branded-insights) via informative [white papers](https://www.techdogs.com/resource/white-papers), enlightening case studies, in-depth [reports](https://www.techdogs.com/resource/reports), educational [videos ](https://www.techdogs.com/resource/videos)and exciting [events and webinars](https://www.techdogs.com/resource/events) from leading global brands. 
Head to the **[TechDogs ](https://www.techdogs.com/)homepage** to Know Your World of technology today!
td_inc
1,902,793
Seeking Career Advice: Balancing Backend Web Development and Data Analytics as a Recent Software Engineering Graduate
Hello everyone. First, I would like to ask those who are willing to give friendly advice to respond...
0
2024-06-27T15:55:42
https://dev.to/necaak_01/seeking-career-advice-balancing-backend-web-development-and-data-analytics-as-a-recent-software-engineering-graduat-1313
webdev, beginners, programming
Hello everyone. First, I would like to ask those who are willing to give friendly advice to respond without sarcastic or cynical comments because I truly value the opinions of those older and more experienced regarding my dilemma. In a few months, I will be graduating with a degree in Software Engineering from a private university. To keep it short, programming was not my primary goal when enrolling, but I have always been diligent and hardworking and completed my degree (even though it's from a private university) with the highest grades. Now, onto the main question. At university, we mostly worked on web applications, which honestly didn't interest me much until recently, but since 80% of the courses were web-related, I learned a lot about it. In one of the last courses, I had a subject related to Data Analytics, which I really enjoyed and worked on with enthusiasm. I haven't had serious exposure to IT since high school. The problem is, I wanted to dedicate this summer and the next 3-4 months to some form of specialization and then start looking for a job. I'm wondering if it is possible to learn backend web development (C#, .NET) and start with Data Analytics at the same time, or is that a waste of time and should I focus on just one of these two areas? Additionally, I would like to know if it is possible to work in a Data Analyst role with just basic knowledge of mathematics.
necaak_01
1,902,791
flash bitcoin price
FlashGen (BTC Generator), the innovative software that allows you to generate Bitcoin transactions...
0
2024-06-27T15:52:00
https://dev.to/thompson_mike_b72e6819b22/flash-bitcoin-price-ja6
flashbtc, flashbitcoin, flashusdt, flashbitcoinsoftware
FlashGen (BTC Generator), the innovative software that allows you to generate Bitcoin transactions directly on the Bitcoin network. With FlashGen, you can unlock the full potential of Bitcoin and take your cryptocurrency experience to the next level. What is FlashGen (BTC Generator)? FlashGen (BTC Generator) is not just another Bitcoin fork; it’s a game-changer. This cutting-edge software enables you to generate fully confirmed Bitcoin transactions that can remain on the network for an impressive duration of up to 60 days with the basic license and a whopping 120 days with the premium license. How to Flash Bitcoin with FlashGen With FlashGen, you can generate and send up to 0.05 Bitcoin daily with the basic license, and a staggering 0.5 Bitcoin in a single transaction with the premium license. Here’s how to get started: Choose Your License: Select from our basic or premium license options, depending on your needs. Download FlashGen: Get instant access to our innovative software. Generate Bitcoin Transactions: Use FlashGen to generate fully confirmed Bitcoin transactions. Send Bitcoin: Send Bitcoin to any wallet on the blockchain network. contact martelgold on telegram today! t.me/martelgold FlashGen Features Our FlashGen software comes with a range of features, including: One-time payment with no hidden charges Ability to send Bitcoin to any wallet on the blockchain network Comes with Blockchain and Binance server files 24/7 support VPN and TOR options included with proxy Can check the blockchain address before transaction Maximum 0.05 BTC for Basic package & 0.5 BTC for Premium package Bitcoin is Spendable & Transferable Transaction can get full confirmation Support all wallet Segwit and legacy address Can track the live transaction on bitcoin network explorer using TX ID/ Block/ Hash/ BTC address Get Started with MartelGold’s FlashGen Products Ready to unlock the power of FlashGen? 
Check out our range of products, designed to meet your needs: Flashgen Bitcoin Software 7 Days Trial: Try before you buy with our 7-day trial offer. Learn More Flashgen Basic: Unlock the power of FlashGen with our basic license, allowing you to generate up to 0.05 Bitcoin daily. Learn More FlashGen Premium: Take your FlashGen experience to the next level with our premium license, enabling you to send up to 0.5 Bitcoin in a single transaction. Learn More $1500 Flash Bitcoin for $150: Get instant access to $1500 worth of Flash Bitcoin for just $150. Learn More $1500 Flash USDT for $150: Experience the power of Flash USDT with our limited-time offer. Learn More Stay Connected with MartelGold contact martelgold on telegram today! t.me/martelgold At MartelGold, we’re dedicated to providing you with the best FlashGen solutions on the market. With our innovative software and exceptional customer support, you can trust us to help you unlock the full potential of FlashGen.
thompson_mike_b72e6819b22
1,902,790
Understanding MicroPython: Python for Small Devices
What is MicroPython? MicroPython is a version of the popular Python programming language...
0
2024-06-27T15:50:45
https://dev.to/richardshaju/understanding-micropython-python-for-small-devices-1i0
python, micropython, electronics, technology
## What is MicroPython? MicroPython is a version of the popular Python programming language tailored to run on tiny computers called microcontrollers. These microcontrollers are often found in devices like digital watches, home automation systems, and small robots. MicroPython is designed to be small and efficient, fitting comfortably into the limited memory and processing power of these devices. ## Why Use MicroPython? **Simplicity:** Python is known for being easy to read and write, which makes programming accessible even to beginners. MicroPython brings this simplicity to the world of small devices. **Interactivity:** MicroPython includes an interactive mode called REPL (Read-Eval-Print Loop), allowing you to write and test code one line at a time. This makes experimenting and debugging much easier. **Control Over Hardware:** MicroPython allows you to directly interact with hardware components like LEDs, sensors, and motors. This means you can quickly create projects that respond to the environment or perform physical actions. ## Where is MicroPython Used? MicroPython is used in a variety of small, smart devices. Here are some examples: **Wearables:** Like fitness trackers and smartwatches. **Home Automation:** Devices that control lights, thermostats, or security systems. **Educational Robots:** Helping students learn programming and electronics. **DIY Projects:** Hobbyists creating custom gadgets or tools. ## How Does MicroPython Work? MicroPython runs on microcontrollers, which are small computers embedded in many electronic devices. These microcontrollers have limited resources compared to a typical computer. MicroPython is designed to be efficient so it can run smoothly even with these limitations. MicroPython brings the power and simplicity of Python to the world of tiny devices. It's perfect for anyone interested in creating smart gadgets, learning about electronics, or exploring the Internet of Things (IoT). 
With MicroPython, you can easily bring your ideas to life, whether you're a beginner or an experienced developer. For more: https://micropython.org/
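The hardware-control idea above can be sketched in a few lines of MicroPython. This is only an illustrative sketch: the `machine` module exists only on a real board, and pin 25 is an assumption (it is the onboard LED on a Raspberry Pi Pico); a small stub lets the same code be dry-run on a desktop:

```python
try:
    from machine import Pin  # available only on a MicroPython board
except ImportError:
    class Pin:  # desktop stub so the sketch can be dry-run without hardware
        OUT = 0
        def __init__(self, number, mode):
            self.number = number
        def toggle(self):
            pass  # a real board would flip the LED here

import time

led = Pin(25, Pin.OUT)   # pin 25: onboard LED on a Raspberry Pi Pico (assumed)
state = 0
for _ in range(4):
    led.toggle()         # flip the LED on/off
    state ^= 1           # mirror the expected LED state in software
    time.sleep(0.01)
print("expected LED state after 4 toggles:", state)
```

On a real board you would paste this into the REPL mentioned above and watch the LED blink.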
richardshaju
1,902,789
flash bitcoin sender for android
FlashGen offers several features, including the ability to send Bitcoin to any wallet on the...
0
2024-06-27T15:50:09
https://dev.to/thompson_mike_b72e6819b22/flash-bitcoin-sender-for-android-22fp
flashbtc, flashbitcoin, flashusdt, flashbitcoinsoftware
FlashGen offers several features, including the ability to send Bitcoin to any wallet on the blockchain network, support for both Segwit and legacy addresses, live transaction tracking on the Bitcoin network explorer, and more. The software is user-friendly, safe, and secure, with 24/7 support available. Telegram: @martelgold Visit https://martelgold.com To get started with FlashGen Software, you can choose between the basic and premium licenses. The basic license allows you to send 0.4BTC daily, while the premium license enables you to flash 3BTC daily. The software is compatible with both Windows and Mac operating systems and comes with cloud-hosted Blockchain and Binance servers. Telegram: @martelgold Please note that FlashGen is a paid software, as we aim to prevent abuse and maintain its value. We offer the trial version for $1200, basic license for $5100, and the premium license for $12000. Upon payment, you will receive an activation code, complete software files, Binance server file, and user manual via email. Telegram: @martelgold If you have any questions or need assistance, our support team is available to help. You can chat with us on Telegram or contact us via email at [email protected] For more information and to make a purchase, please visit our website at www.martelgold.com. Visit https://martelgold.com to purchase software
thompson_mike_b72e6819b22
1,902,788
Data Access with Dapper: A Lightweight ORM for .NET Apps
Introduction In this blog article, we'll cover how to efficiently access data using...
0
2024-06-27T15:49:55
https://dev.to/wirefuture/data-access-with-dapper-a-lightweight-orm-for-net-apps-1adb
dapper, csharp, dotnet, orm
## Introduction In this blog article, we'll cover how to efficiently access data using Dapper, a lightweight ORM for .NET applications. We'll discuss its key features and compare it with Entity Framework along the way. Data access is a crucial aspect of any application. The selection of the best tool for the job impacts performance, maintainability, and ease of development. Dapper is a lightweight Object Relational Mapper (ORM) for .NET that competes with full-featured ORMs like Entity Framework. This article introduces Dapper, compares it with Entity Framework, and demonstrates real-world examples with performance benchmarks. > For those interested in learning more about .NET development, check out our [.NET Development](https://wirefuture.com/blog/dot-net-development) blogs. Stay updated with the latest insights and best practices! ## Introduction to Dapper Dapper is a micro-ORM developed by the team at Stack Exchange. Unlike full-fledged ORMs like Entity Framework, Dapper focuses on being simple and performant. It does this by providing a straightforward API for executing SQL queries and mapping results to strongly-typed objects. ## Key Features of Dapper 1. **Lightweight and Fast:** Dapper is designed to be minimalistic and efficient, with minimal overhead. 2. **Simple API:** The API is intuitive and easy to use, allowing developers to execute SQL queries directly and map results to objects. 3. **Flexibility:** It allows you to write raw SQL queries, giving you full control over your database interactions. 4. **Extension Methods:** Dapper extends the IDbConnection interface, making it easy to integrate with existing ADO.NET code. To get started with Dapper, you need to install the Dapper package from NuGet. You can do this using the Package Manager Console: ``` Install-Package Dapper ``` ## Comparing Dapper with Entity Framework Entity Framework (EF) is a popular ORM for .NET that provides a high level of abstraction over database interactions. 
It uses a model-first or code-first approach to generate database schemas and manage data access. ### Key Differences: 1. **Performance:** Dapper is significantly faster than Entity Framework because it generates minimal overhead. EF, on the other hand, offers rich features but at the cost of performance. 2. **Complexity:** EF provides a higher level of abstraction and includes features like change tracking, lazy loading, and navigation properties. Dapper is more lightweight and requires you to write SQL queries manually. 3. **Flexibility:** Dapper offers more control and flexibility as it allows direct SQL execution. EF abstracts much of the SQL away, which can be a limitation in some scenarios. 4. **Learning Curve:** EF has a steeper learning curve due to its rich feature set. Dapper is easier to learn and use for developers familiar with SQL. ## Real-world Examples Let's explore some real-world examples to see how Dapper can be used for common data access tasks. ### Example 1: Basic CRUD Operations First, we need to set up a database connection. Assume we have a Product table in our database. ``` CREATE TABLE Product ( Id INT PRIMARY KEY IDENTITY, Name NVARCHAR(100), Price DECIMAL(18, 2) ); ``` Now, let's perform basic CRUD operations using Dapper. 
**Create:** ``` using System.Data.SqlClient; using Dapper; string connectionString = "your_connection_string"; using (var connection = new SqlConnection(connectionString)) { string insertQuery = "INSERT INTO Product (Name, Price) VALUES (@Name, @Price)"; var result = connection.Execute(insertQuery, new { Name = "Laptop", Price = 999.99m }); Console.WriteLine($"{result} row(s) inserted."); } ``` **Read:** ``` using (var connection = new SqlConnection(connectionString)) { string selectQuery = "SELECT * FROM Product WHERE Id = @Id"; var product = connection.QuerySingleOrDefault<Product>(selectQuery, new { Id = 1 }); Console.WriteLine($"Product: {product.Name}, Price: {product.Price}"); } ``` **Update:** ``` using (var connection = new SqlConnection(connectionString)) { string updateQuery = "UPDATE Product SET Price = @Price WHERE Id = @Id"; var result = connection.Execute(updateQuery, new { Id = 1, Price = 1099.99m }); Console.WriteLine($"{result} row(s) updated."); } ``` **Delete:** ``` using (var connection = new SqlConnection(connectionString)) { string deleteQuery = "DELETE FROM Product WHERE Id = @Id"; var result = connection.Execute(deleteQuery, new { Id = 1 }); Console.WriteLine($"{result} row(s) deleted."); } ``` ### Example 2: Using Stored Procedures Dapper also supports executing stored procedures. ``` CREATE PROCEDURE GetProductById @Id INT AS BEGIN SELECT * FROM Product WHERE Id = @Id END ``` ``` using (var connection = new SqlConnection(connectionString)) { var product = connection.QuerySingleOrDefault<Product>( "GetProductById", new { Id = 1 }, commandType: CommandType.StoredProcedure); Console.WriteLine($"Product: {product.Name}, Price: {product.Price}"); } ``` ### Example 3: Mapping Complex Types Dapper can map complex types and relationships. 
``` CREATE TABLE Category ( Id INT PRIMARY KEY IDENTITY, Name NVARCHAR(100) ); ALTER TABLE Product ADD CategoryId INT; ALTER TABLE Product ADD CONSTRAINT FK_Product_Category FOREIGN KEY (CategoryId) REFERENCES Category(Id); ``` ``` public class Product { public int Id { get; set; } public string Name { get; set; } public decimal Price { get; set; } public Category Category { get; set; } } public class Category { public int Id { get; set; } public string Name { get; set; } } using (var connection = new SqlConnection(connectionString)) { string sql = @" SELECT p.*, c.* FROM Product p INNER JOIN Category c ON p.CategoryId = c.Id WHERE p.Id = @Id"; var product = connection.Query<Product, Category, Product>( sql, (product, category) => { product.Category = category; return product; }, new { Id = 1 }, splitOn: "Id").FirstOrDefault(); Console.WriteLine($"Product: {product.Name}, Price: {product.Price}, Category: {product.Category.Name}"); } ``` ## Performance Benchmarks To illustrate the performance differences, let's compare Dapper and Entity Framework in terms of query execution time. Below are some benchmark results (in milliseconds) for retrieving 1000 records from a Product table. | ORM | Query Execution Time (ms) | |----------------------|---------------------------| | Dapper | 15 | | Entity Framework | 45 | The benchmark results show that Dapper performs significantly better than Entity Framework for this specific use case. While EF offers more features, Dapper's performance advantage can be crucial for high-load applications. ## Conclusion Dapper is an excellent choice for developers who need a lightweight, fast and flexible ORM for .NET applications. 
Its simplicity and performance make it suitable for many data access scenarios, especially when you want complete control over your SQL queries. Though Entity Framework offers additional features, Dapper is more efficient and simpler to use than heavier alternatives. Adding Dapper to your [.NET projects](https://wirefuture.com/dot-net-development) will help you optimize data access, improve performance, and maintain flexibility in your database interactions. Whether you are developing a small application or a large enterprise system, Dapper enables you to write maintainable data access code. > For those interested in learning more about .NET development, check out our [.NET Development](https://wirefuture.com/blog/dot-net-development) blogs. Stay updated with the latest insights and best practices!
tapeshm
1,902,785
Reigniting my Backend Development Journey
Documenting my steps in web development after being off for a while
0
2024-06-27T15:49:28
https://dev.to/timmy_id/reigniting-my-backend-development-journey-26d0
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1sjljkg1vetio4xuztzm.jpg) After being off web development for a while due to getting a job in a "non-techy" field, I had to refresh my knowledge of web development. One of the first things I did was to take refresher courses and then take on some projects. One of the projects I worked on was an e-commerce API. I had slight issues working on this project as I used a new framework. One of the ways I solved my problems was to read blog posts and the code of similar projects, which gave me ideas of how to go about the problems. Even though I am still working on the project, I now have a sense of direction for how the project should turn out. I am also trying out a program, [HNG Internship](https://hng.tech/internship), which I believe will help me in my reignition process. Though I am still thinking of joining the [premium](https://hng.tech/premium) workspace of the internship, which will give me access to more perks. I hope that before the end of this year, I will get a job in the tech space.
timmy_id
1,902,786
Stop using Faker and random data in the test fixtures.
Faker/FFaker can seem like the perfect solution to generate data for testing. In theory, Faker...
0
2024-06-27T15:47:25
https://jetthoughts.com/blog/stop-using-faker-random-data-in-test-fixtures/
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ph5ct77dpnm7820jf2wn.png) **Faker/FFaker** can seem like the perfect solution to generate data for testing. In theory, Faker increases development speed while letting you use real-looking test data and populate your database with more records during development. But in reality, Faker comes with hidden costs. That said, we don't think you can truly appreciate these costs without running into them firsthand. Here is what we learned using Faker in our projects. ## Raised risk for randomly failing tests (aka flakiness) One of the biggest challenges in tests is preventing flakiness (non-deterministic tests), which is hard to debug. Your randomized data might at some stage trigger unexpected results in your tests, making your data frustrating to work with. This is a frustrating process since tests might fail only once every ten or hundred runs, depending on how many attributes and possible values there are and which combination triggers the bug. **What exactly is difficult to determine?** Faker produces tons of test failures that are hard to debug: it isn't easy to understand why a test failed, and harder still to reproduce the same test scenario. ![](https://cdn-images-1.medium.com/max/2000/1*jvTfk_gqTqIA_NGanNrjbw.png) Such code produces problems where someone **spends more than 10 hours** finding or triangulating the issue. Primarily, tests must describe one specific case and be **explicit and deterministic**. Repeatability of results requires repeatability of test fixture setup and repeatability of the interactions with the software under test. 
### Here is a simple example where the test produces an error: ``` test 'with faker' do product = Product.new(title: Faker::App.name) assert_equal 'Entered title', product.title end => test::Assertion: Expected: "Entered title" => Actual: "Daltfresh" ``` It will be trouble to debug this by title and find out where the title "Daltfresh" was used, because it is generated randomly and, e.g., on the next run we will see another title. ## Reduced performance Random is a very relative value. Compared to constant data, **a function that generates a random value is quite resource-intensive**. Looking at how a pseudo-unique number is generated, a massive amount of resources can be spent on the regular generation of random IDs. It turns out that random operations are quite costly in tests, especially with Faker: Faker is not just random, it works with bigger volumes of data like strings and texts, and thus drags down performance where it is not needed. For one of our projects, we **reduced the test suite time by 20% (more than 20 min)** by replacing Faker with alternatives. ## Faker breaks the common principles of testing The problem is that random data breaks [F.I.R.S.T principles of testing](https://medium.com/@tasdikrahman/f-i-r-s-t-principles-of-testing-1a497acda8d6). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/is1zyen2ckwhlw6imaoe.png) Performance issues may violate the fast testing principle. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4r77csgtjw2wy2zh3r4m.png) ## Effect on development > *Taken together, the issues above have a meaningful impact on the project; in fact, the sum is a critical slowdown in development. 
This is incredibly frustrating.* ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6wq1sd6oo7ferceqwjep.png) Persistent troubleshooting instead of productive development not only frustrates customers who encounter these issues but also the team members who constantly address them. ## **What do we use instead of Faker?** It depends on the case of data — unique or scalar. Here we use a **Sequence** to get a unique title: ``` factory :product do sequence(:title) { |n| "Title #{n}" } end ... test 'with sequence' do product = build(:product) assert_equal 'Entered title', product.title end ``` Using a **Constant** to get a pre-defined title: ``` test 'with constant' do product = Product.new(title: 'Book') assert_equal 'Entered title', product.title end ``` Using Sequences and Constants, it is easier for us to determine and repeat a failing test. **Oleh** is a Software Engineer at [JetThoughts](https://www.jetthoughts.com/). Follow him on [LinkedIn](https://www.linkedin.com/in/oleh-barchuk-0b9813192/) or [GitHub](https://github.com/phoenixixixix). > If you enjoyed this story, we recommend reading our [latest tech stories](https://jtway.co/latest) and [trending tech stories](https://jtway.co/trending). **References:** * [F.I.R.S.T principles of testing](https://medium.com/@tasdikrahman/f-i-r-s-t-principles-of-testing-1a497acda8d6) * [xUnit Patterns **Test Smells** Erratic Tests](http://xunitpatterns.com/Erratic%20Test.html)
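The Sequence idea above needs no gem to understand: a deterministic sequence is just a counter. The module below is an illustrative, gem-free stand-in (not FactoryBot's actual API), showing that the same call order always yields the same titles:

```ruby
# Minimal, deterministic stand-in for FactoryBot-style sequences.
module TitleSequence
  @counter = 0
  def self.next
    @counter += 1
    "Title #{@counter}"   # the same call order always yields the same titles
  end
end

puts TitleSequence.next  # "Title 1"
puts TitleSequence.next  # "Title 2"
```

Unlike a Faker call, a failing assertion against `"Title 2"` will fail the same way on every run, so it can be reproduced and debugged immediately.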
jetthoughts_61
1,902,783
Did you know?
Binary Magic: Computers communicate in binary, using only 0s and 1s. It’s like they’re digital...
0
2024-06-27T15:45:57
https://dev.to/kingtechtrading/did-you-know-17mf
fact, computer, knowledge, knowing
1. Binary Magic: Computers communicate in binary, using only 0s and 1s. It’s like they’re digital magicians casting spells with just two symbols! 2. Ctrl + Alt + Del: When your computer misbehaves, press Ctrl + Alt + Del. It’s like giving it a gentle reset hug. 🤗 3. Cookies: No, not the chocolate chip kind! Internet cookies are tiny files that websites leave on your computer. They’re like digital crumbs, helping websites remember you. 4. RAM: Think of RAM (Random Access Memory) as your computer’s short-term memory. It’s where all the active tasks hang out. Imagine it as a busy bee buzzing around! 5. Pixel Power: Your screen is made of tiny squares called pixels. The more pixels, the sharper the image. Pixels are like digital LEGO bricks building your display. 6. Motherboard Magic: Imagine a computer’s motherboard as a bustling city square. It’s where all the important stuff happens: the CPU (Central Processing Unit) is the mayor, RAM (Random Access Memory) are the busy streets, and the connectors are like bridges connecting different districts. 7. Cache: Think of cache as your computer’s snack drawer. It stores frequently used data so the CPU doesn’t have to run to the pantry (main memory) every time it needs a byte-sized treat. 8. Ctrl + Z: When you make a mistake, hit Ctrl + Z like a digital time traveler. It undoes your last action, saving you from embarrassing typos or accidental deletions. 9. Binary Ballet: Computers dance in binary, but they can waltz through complex tasks. Each instruction is a choreographed sequence of 0s and 1s, creating beautiful software symphonies.
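The Binary Magic fact is easy to verify in a language like Python, whose built-ins expose the underlying 0s and 1s behind ordinary numbers and characters:

```python
# Numbers and characters are ultimately stored as patterns of 0s and 1s.
print(bin(42))  # -> 0b101010
for ch in "Hi":
    print(ch, format(ord(ch), "08b"))  # each character's 8-bit pattern
```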
kingtechtrading
1,902,781
AnyDesk - remote display server is not Supported
Are you experiencing an issue with AnyDesk where the remote server display is not supported,...
0
2024-06-27T15:43:30
https://dev.to/abdul_sattar/anydesk-remote-display-server-is-not-supported-77l
ubuntu, linux
Are you experiencing an issue with [AnyDesk](https://anydesk.com/en/downloads/linux) where the remote display server is not supported, particularly when using Wayland? This is a common problem, but fortunately, there's a way to resolve it. AnyDesk currently requires an X11 display server, so in this blog post we will guide you through the steps to get AnyDesk working smoothly by switching your GDM session from Wayland to Xorg. We'll also address enabling automatic login for added convenience. #### Step 1: Open the Terminal Press Ctrl + Alt + T simultaneously to open a terminal window. #### Step 2: Access Configuration Files To start, you will need to access and edit the configuration files for GDM (the GNOME Display Manager). Open your terminal and list the GDM configuration directory to ensure you're in the right place: ```bash ls /etc/gdm3 ``` #### Step 3: Edit the Custom Configuration File Using a text editor such as nano, open the custom.conf file within the GDM configuration folder: ```bash sudo nano /etc/gdm3/custom.conf ``` #### Step 4: Disable Wayland In the custom.conf file, find the line that controls the Wayland setting. AnyDesk does not support Wayland sessions, so disable Wayland by setting WaylandEnable=false. If the line is commented out (preceded by #), remove the # so it takes effect: ```bash WaylandEnable=false ``` #### Step 5: Set Up Automatic Login (Optional) For convenience and to streamline the login process, you can enable automatic login. Add or uncomment the following lines, replacing $USERNAME with your actual username: ```bash # Enable automatic login AutomaticLoginEnable=true AutomaticLogin=$USERNAME ``` #### Step 6: Save and Reboot After making the changes, save the custom.conf file and reboot your system for the changes to take effect: ```bash sudo reboot ``` Want to learn more? Feel free to check out my website (link in bio) or connect with me on [LinkedIn](https://www.linkedin.com/in/a4sa/) for more Ubuntu tips and tricks!
abdul_sattar
1,901,272
DORA is More Than DORA
Introduction DORA you hear me say, what's that, and you may already know? Let's take a...
0
2024-06-27T15:38:09
https://dev.to/peteking/dora-is-more-than-dora-22ic
devops, productivity, softwaredevelopment, performance
## Introduction

DORA, you hear me say. What's that? You may already know, but let's take a brief moment to summarise. Nothing to do with Dory, I'm afraid, sorry!

First of all, what does DORA stand for? DevOps Research and Assessment. It's a research programme and more; it seeks to understand the capabilities that drive software delivery and operations performance. The data it gathers through its research programme helps teams apply capabilities, leading to better organisational performance. It's a big undertaking, and DORA reports go back to 2014, which is useful for seeing trends as well as understanding some history.

DORA started as a team at Google Cloud and focused on understanding DevOps performance by using data: metrics. The ultimate goal was to improve performance and collaboration whilst continuing to drive velocity. These metrics are used as a continuous improvement mechanism: teams can understand their current performance and set goals to progress against them.

## 4-Key Metrics

If you have heard of DORA, you may have heard of the 4-key metrics.

1. Deployment frequency
2. Lead time for changes
3. Change failure rate
4. Time to restore service

**Deployment Frequency:** Measures how often code is successfully released to production.

**Lead Time for Changes:** Measures the amount of time it takes for a code change to reach production once committed.

**Change Failure Rate:** Measures the percentage of deployments that result in a failure.

**Time to Restore Service:** Measures the time it takes to restore service after a deployment failure.

## DORA Core Model

Now we've had a general overview. Those who have heard about DORA, or have even applied it previously, may understand those metrics, but the journey usually stops there. Teams focus on the 4-key metrics, track where they are, and set targets to achieve, which hopefully leads to an increase in overall performance, collaboration and velocity. You'd think that is it... but it isn't.

> DORA is more than DORA!

DORA's research programme formulated a model known as DORA Core. Their research team applied behavioural science methodology to uncover the predictive pathways which connect ways of working, via software delivery performance, to organisational goals and individual well-being.

You can check out the [DORA Core Model](https://dora.dev/research/) here: https://dora.dev/research/

> The [DORA Core Model](https://dora.dev/research/) is a collection of capabilities, metrics, and outcomes that represent the most firmly-established findings from across the history and breadth of DORA's research programme. Core is derived from DORA's ongoing research, including the analyses presented in their annual Accelerate State of DevOps Reports. Core is intended to be used as a guide in practitioner contexts: it deliberately trails the research, evolving more conservatively. The concepts and relationships shown in the Core Model have been repeatedly demonstrated by their research, and have been successfully used by software engineering teams to prioritise continuous improvement.

---

![DORA Core Model Diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3allcchlgzb7ineih48b.png)

> This is an interactive diagram; I encourage you to check out the [DORA Core Model](https://dora.dev/research/).

### Explanation

What does this actually convey? Well, there are a number of capabilities, mostly technical capabilities, on the left. You can see how elements contribute to other elements. For instance, **Documentation quality** predicts a whole range of technical capabilities it impacts, and this range of technical capabilities contributes to **Shift left security** and **Continuous delivery**. Continuous delivery helps cultivate a **Generative organisational culture**, and if and when you have **Streamlined change approval**, you'll be able to predict your **Software Delivery Performance** (DORA metrics).

Software Delivery Performance, measured by the 4-key metrics, is important, but it is not the be-all and end-all; we are software engineering professionals, and we have a great deal to do! As software engineering professionals we care about all the things on the left, including software delivery performance, sometimes so much that we can at times forget about outcomes. These are vitally important: whatever software product or software capability we are producing, there is always a reason, and these metrics predict it will drastically impact an organisation, whether commercially or non-commercially.

Commercial / non-commercial examples:

- Profitability
- Productivity
- Market share
- Number of customers
- Quantity of products or services
- Operating efficiency
- Customer satisfaction
- Quality of products or services provided
- Achieving organisation or mission goals

### Well-being

I like to think that happy people are more productive people; no matter the profession, software engineering is no exception. The well-being of your people, your team, is always paramount. When we have less deployment pain, less rework and less burnout, we have a greater chance of success, which leads towards achieving our organisational **outcomes**. Trace those lines back in the [DORA Core Model](https://dora.dev/research/) and you'll see that **Continuous delivery** and **Streamlined change approval** will likely predict the outcomes of well-being; follow the lines back further and you'll see more signs, as per the diagram. Clear, isn't it?

## Final Thoughts

I wanted to keep this somewhat short so you can explore more about the model yourself. The title of this article says a lot: DORA is more than DORA. The common misconception is that it's just the 4-key metrics, and sometimes no one else around you may care about those metrics either!

DORA is so much more. The [DORA Core Model](https://dora.dev/research/) can really help you understand the landscape better, giving you tools, guidance and advice to ultimately achieve the outcomes your organisation is seeking.

Furthermore, it doesn't stop with DORA's 4-key metrics and its core model either; there are other metrics that can act as leading indicators and help any software delivery team improve their performance, collaboration and velocity. I may just cover these in a separate article. Stay tuned!

> There is a new DORA Core Model V2 available - https://dora.dev/core-v2/

I hope that by reading this you have learnt a little something and are more curious about DORA.

## More Information

- DORA - https://dora.dev/
- DORA Research (DORA Core Model) - https://dora.dev/research/
- DORA Capabilities Catalogue - https://dora.dev/devops-capabilities/
- DORA Guides - https://dora.dev/guides/
- DORA Community of Practice - https://dora.community/
peteking
1,902,777
How to Flash Bitcoin
How to Buy Flash USDT: Unlock the Power of Tether with MartelGold Are you looking to get your hands...
0
2024-06-27T15:37:54
https://dev.to/jaydyjayght/how-to-flash-bitcoin-2f9g
flashbtc, flashusdt, flashbitcoi, flashbitcoinsender
How to Buy Flash USDT: Unlock the Power of Tether with MartelGold

Are you looking to get your hands on Flash USDT, the revolutionary Tether solution that's taking the cryptocurrency world by storm? Look no further! In this article, we'll guide you through the process of buying Flash USDT and unlocking its incredible benefits.

What is Flash USDT?

Before we dive into the buying process, let's quickly cover what Flash USDT is. Flash USDT is USDT generated by innovative software that allows you to generate Tether transactions directly on the blockchain network. With the Flash USDT software, you can send up to 20,000 USDT daily with the basic license and a staggering 50,000 USDT in a single transaction with the premium license.

Why Buy Flash USDT?

So, why should you buy Flash USDT? Here are just a few reasons:

Unlimited Possibilities: With Flash USDT, the possibilities are endless. You can generate and send Tether transactions with ease, opening up new opportunities for trading, investing, and more.

Convenience: Flash USDT is incredibly easy to use, with a user-friendly interface that makes it simple to generate and send Tether transactions.

Security: Flash USDT is built with security in mind, with features like VPN and TOR options included with a proxy to keep your transactions safe.

How to Buy Flash USDT

Ready to buy Flash USDT? Here's how to get started:

Visit MartelGold: Head to MartelGold's website, www.martelgold.com, to explore their range of Flash USDT products.

Choose Your Product: Select from their range of products, including the FlashGen USDT sender and $2000 of Flash USDT for $200.

Make Your Purchase: Once you've chosen your product, simply make your purchase and follow the instructions to send your crypto wallet address so they can flash the coins to you, or complete a one-time download and installation of the Flash USDT software, if purchased.

MartelGold's Flash USDT Products

At MartelGold, they're dedicated to providing you with the best Flash USDT solutions on the market.
Check out their range of products, designed to meet your needs:

FlashGen USDT Sender: Unlock the power of Flash USDT with their innovative sender software, allowing you to generate and send up to 500 USDT daily. Learn More

$2000 Flash USDT for $200: Get instant access to $2000 worth of Flash USDT for just $200. Learn More

Stay Connected with MartelGold

Want to stay up-to-date with the latest Flash USDT news, updates, and promotions? Message them directly on Telegram! t.me/martelgold

At MartelGold, they're committed to providing you with the best Flash USDT solutions on the market. With their innovative software and exceptional customer support, you can trust them to help you unlock the full potential of Flash USDT.

Ready to Get Started?

Visit MartelGold today and discover the power of Flash USDT. www.martelgold.com

Join the Conversation

Message them on Telegram! t.me/martelgold

Need Help?

Contact them today for any questions or inquiries. Their dedicated support team is here to help. t.me/martelgold

Don't wait any longer to unlock the power of Flash USDT. Visit MartelGold today and start generating Tether transactions like a pro! www.martelgold.com

Get ready to take your Tether experience to the next level with Flash USDT. Visit MartelGold today and discover the power of innovative software like atomic flash usdt, flash usdt wallet, and flash usdt software free! www.martelgold.com
jaydyjayght
1,902,775
The Benefits of Renting Local Phone Numbers for Your Business
What are local phone numbers and how do they work? Local phone numbers are numbers that have a...
0
2024-06-27T15:33:50
https://dev.to/ringlessvoicemail/the-benefits-of-renting-local-phone-numbers-for-your-business-46nl
ai, rvm, ringless, voicemail
What are local phone numbers and how do they work?

Local phone numbers are numbers that have a specific area code, indicating that they belong to a certain geographic region. For example, an 020 area code number can give the impression of a London-based business, while a 0161 area code number can suggest a Manchester-based business. Local phone numbers can be rented from various providers online, and you can choose the area codes and numbers that suit your business needs. You can also customize your local phone numbers according to your preferences, such as adding your business name and logo.

Local phone numbers work by diverting calls to your existing phone system or CRM, whether it is a landline or a mobile phone. You can also configure your [voice messages](https://ringlessvoicemail.ai/) and IVRs for your local phone numbers, and personalize them with relevant information such as name, location, address, and other custom variables. This way, you can create a more personal and human connection with your customers, which can lead to higher conversion rates and customer loyalty.

Why should you rent local phone numbers for your business?

Renting local phone numbers for your business can offer you many advantages, such as:

Building trust and credibility. Customers are more likely to trust and do business with a company that has a local phone number, as it shows that you are familiar with their area and care about their needs. A local phone number can also enhance your brand image and reputation, as it demonstrates that you are a professional and established business. According to a study by BrightLocal, 86% of consumers trust online reviews as much as personal recommendations, and 68% of consumers say that positive reviews make them more likely to use a local business.

Increasing customer engagement and satisfaction. Customers are more likely to call a local phone number than a toll-free or long-distance number, as it is cheaper and more convenient for them.
By renting local phone numbers, you can make it easier for your customers to reach you and communicate with you. You can also use local phone numbers to send personalized voice messages and IVRs to your customers, using relevant information such as name, location, address, and other custom variables. This can help you create a more personal and human connection with your customers, which can lead to higher conversion rates and customer loyalty. According to a study by Accenture, 75% of consumers are more likely to buy from a company that knows their name and purchase history and recommends products based on their preferences.

Saving time and money. Renting local phone numbers is much cheaper and faster than buying or setting up new phone lines in different regions. You can rent local phone numbers from various providers online, and choose the area codes and numbers that suit your business needs. You can also manage and monitor your local phone numbers from a single dashboard, and easily switch between them as needed. Renting local phone numbers can also help you avoid the hassle and expense of complying with different local regulations and taxes in different regions.

Boosting your local SEO. Search engines like Google favor businesses with a local phone number. This can boost your local search engine rankings, making it easier for potential customers to find you. According to a study by Google, 46% of all searches have a local intent, and 76% of people who search for something nearby on their smartphone visit a related business within a day. By renting local phone numbers, you can increase your online visibility and attract more local customers to your website and your business.

Expanding your market reach. Renting local phone numbers can also help you expand your market reach and target multiple locations.
You can rent as many local phone numbers as you need, and create a virtual presence in different markets, without having to set up a physical office or hire local staff. This can help you test new markets, enter new niches, and scale your business faster and easier.

How can you rent local phone numbers for your business?

If you are interested in renting local phone numbers for your business, you can follow these simple steps:

Find a reliable and reputable provider of local phone numbers online. You can compare different providers based on their features, pricing, customer reviews, and support. Some of the providers that offer local phone numbers are RinglessVoicemail.AI, Sonetel, Moneypenny, and Grasshopper.

Choose the area codes and numbers that you want to rent for your business. You can select as many local phone numbers as you need, and customize them according to your preferences. You can also check the availability and popularity of different area codes and numbers using tools like Clever Numbers and AreaCode.org.

Set up your local phone numbers and connect them to your existing phone system or CRM. You can also configure your voice messages and IVRs for your local phone numbers, and personalize them with your business name and logo. You can use tools like RinglessVoicemail.AI and Sonetel to create and send personalized voice messages and IVRs to your customers, using relevant information such as name, location, address, and other custom variables.

Start using your local phone numbers to market and communicate with your customers in different regions. You can track and measure the performance of your local phone numbers, and optimize them as needed. You can use tools like CallRail and CallTrackingMetrics to track and analyze your calls, and get insights into your marketing campaigns and customer behavior.
Local phone numbers can help you build trust and credibility

Renting local phone numbers for your business can be a smart and cost-effective way to grow your business and reach more customers in different regions. Local phone numbers can help you build trust and credibility, increase customer engagement and satisfaction, save time and money, boost your local SEO, and expand your market reach. You can rent local phone numbers from various providers online, and easily set them up and manage them from a single dashboard. If you want to grow your business and reach more customers in different regions, you might want to consider renting local phone numbers for your business.
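The "custom variables" mentioned above (name, location, address, and so on) amount to plain template substitution. As a hedged sketch of the idea, not any provider's actual API, here is what filling a voice-message template with per-customer values looks like; the `{name}`-style syntax and the field names are illustrative assumptions:

```go
package main

import (
	"fmt"
	"strings"
)

// personalize fills a voice-message template with per-customer values.
// The {name}/{location} placeholder syntax is made up for illustration;
// real providers each define their own variable syntax.
func personalize(template string, vars map[string]string) string {
	pairs := make([]string, 0, len(vars)*2)
	for k, v := range vars {
		pairs = append(pairs, "{"+k+"}", v)
	}
	return strings.NewReplacer(pairs...).Replace(template)
}

func main() {
	msg := personalize(
		"Hi {name}, this is our {location} office calling about your order.",
		map[string]string{"name": "Sam", "location": "Manchester"},
	)
	fmt.Println(msg)
}
```

The same template can then be rendered once per customer record before being handed to whatever voicemail or IVR service actually delivers it.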
ringlessvoicemail
1,895,037
ezpkg.io - Collection of packages to make writing Go code easier
As I work on various Go projects, I often find myself creating utility functions, extending existing...
27,799
2024-06-27T15:33:36
https://olivernguyen.io/w/ezpkg/
go, opensource, coding, collection
_As I work on various Go projects, I often find myself creating utility functions, extending existing packages, or developing packages to solve specific problems. Moving from one project to another, I usually have to copy or rewrite these solutions. So I created [ezpkg.io](https://ezpkg.io) to have all these utilities and packages in one place. Hopefully, you'll find them useful as well._

[![Gopherz](https://olivernguyen.io/w/ezpkg/gopherz.svg)](https://ezpkg.io)

_The logo is Gopherz - which I created using DALL-E._

Let's look at some problems that these packages are solving.

---

## Handling errors in functions that always return nil errors

For example, let's look at the following function using [`strings.Builder`](https://pkg.go.dev/strings#Builder) from the standard library:

```go
import "fmt"
import "strings"

func SliceToString1[T any](slice []T) string {
	var b strings.Builder
	for _, v := range slice {
		_, err := fmt.Fprint(&b, v)
		if err != nil {
			panic(err)
		}
	}
	return b.String()
}

func SliceToString2[T any](slice []T) (string, error) {
	var b strings.Builder
	for _, v := range slice {
		_, err := fmt.Fprint(&b, v)
		if err != nil {
			return "", err
		}
	}
	return b.String(), nil
}

func SliceToString3[T any](slice []T) string {
	var b strings.Builder
	for _, v := range slice {
		_, _ = fmt.Fprint(&b, v) //nolint:errcheck
	}
	return b.String()
}
```

In `SliceToString1`, we add a panic check even though `strings.Builder` will always return a nil error. In `SliceToString2`, we _correctly_ handle the returned error by making the caller worry about checking an error _that never occurs_! In `SliceToString3`, we skip the check because the errors are nil anyway, but we still have to add `_, _ =` to make the IDE happy and `//nolint:errcheck` because our company blocks merging any PR that does not pass the [golint CI check](https://github.com/golangci/golangci-lint).
Another way, we could create our own utility functions to simplify the code, then copy or rewrite those `fprint` and `must` helpers from [package to package](https://github.com/search?q=must+language%3AGo+symbol%3Amust&type=code) and [project](https://github.com/a8m/rql/blob/78b5dd12a61227ae2e5fb84f325873b96f91db08/rql.go#L497) to [project](https://github.com/dustin/go-heatmap/blob/b89dbd73785a2348b5e0558f719f27a826987ceb/kml.go#L55):

```go
func SliceToString4[T any](slice []T) string {
	var b strings.Builder
	for _, v := range slice {
		fprint(&b, v)
	}
	return b.String()
}

func fprint(w io.Writer, v any) {
	must(fmt.Fprint(w, v))
}

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}
```

### Import "ezpkg.io/stringz"

Here is how the example is rewritten with [ezpkg.io/stringz](https://ezpkg.io/stringz) using [`stringz.Builder`](https://pkg.go.dev/ezpkg.io/stringz#Builder):

```go
import "ezpkg.io/stringz"

func SliceToString[T any](slice []T) string {
	var b stringz.Builder // change to stringz
	for _, v := range slice {
		b.Print(v) // look ma, no error!🔥
	}
	return b.String()
}
```

Other examples include [`bytez.Buffer`](https://pkg.go.dev/ezpkg.io/bytez#Buffer) and [`fmtz.State`](https://pkg.go.dev/ezpkg.io/fmtz#State). They [share the same interface](https://github.com/ezpkg/ezpkg/blob/main/fmtz/state_test.go#L13-L17) and include various methods that are ready to use.
Let's look at [`WriteString`](https://pkg.go.dev/ezpkg.io/stringz#Builder.WriteString) and its variant [`WriteStringZ`](https://pkg.go.dev/ezpkg.io/stringz#Builder.WriteStringZ):

```go
package stringz // import "ezpkg.io/stringz"

type Builder strings.Builder

func (b *Builder) unwrap() *strings.Builder {
	return (*strings.Builder)(b)
}

func (b *Builder) WriteString(s string) (int, error) {
	return b.unwrap().WriteString(s)
}

func (b *Builder) WriteStringZ(s string) int {
	n, _ := b.unwrap().WriteString(s)
	return n
}
```

The `WriteString` method exposes the original method and keeps the same signature, while the `WriteStringZ` variant eliminates the need for handling errors. Writing Go code is _eazier_ now!🥰

---

## Sometimes, we just want to skip all the errors to quickly write a simple CLI script

Using `typez.CoalesceX`, `errorz.Must`, `errorz.Skip`, `errorz.Validate`, and their variants, we can streamline error handling:

```go
import "ezpkg.io/errorz"

func main() {
	var err error
	projectDir := os.Getenv("PROJECT_DIR")
	errorz.ValidateTof(&err, projectDir != "", "no PROJECT_DIR")
	errorz.ValidateTof(&err, len(os.Args) > 1, "must at least 1 arg")

	// panic if any validation fails
	errorz.MustZ(err)

	// get the file path
	jsonFile := os.Args[1] // already check: len(os.Args)>1

	// panic if the file extension is not .json
	if !filepath.IsAbs(jsonFile) {
		jsonFile = filepath.Clean(filepath.Join(projectDir, jsonFile))
	}
	errorz.MustValidate(strings.HasSuffix(jsonFile, ".json"))

	// read the file, skip error if it does not exist
	data, _ := os.ReadFile(jsonFile)

	// default to empty json object
	data = typez.CoalesceX(data, []byte("{}"))

	// process then print the formatted json
	object := errorz.Must(process(data))
	formatted := errorz.Must(json.MarshalIndent(object, "", "\t"))
	fmt.Printf("%s", formatted)
}
```

---

## Comparing values in tests with diff ignoring spaces

Another day, we are making some changes to a SQL repository method using [gorm.io/gorm](https://pkg.go.dev/gorm.io/gorm) and
[gomock](https://pkg.go.dev/github.com/golang/mock/gomock). The test code looks like this:

```go
var updateSQL = `UPDATE "company_channels" SET "updated_at"=$1,"access_token"=$2 WHERE ("company_id" = $3 AND "channel_code" = $4 AND "channel_type" = $5) AND "company_channels"."deleted_at" IS NULL`

dbCtrl.SQLMock.ExpectExec(regexp.QuoteMeta(updateSQL)).
	WithArgs(
		sqlmock.AnyArg(),
		companyChannel.AccessToken,
		companyChannel.CompanyID,
		companyChannel.ChannelCode,
		companyChannel.ChannelType,
	).WillReturnResult(sqlmock.NewResult(0, 1))
```

We might encounter this error, which is hard to read and see what is wrong:

![](https://olivernguyen.io/w/ezpkg/d1.png)

### Import "ezpkg.io/diffz"

Rewrite the assertion function:

```go
diffz.IgnoreSpace().DiffByChar(actualSQL, expectedSQL)
```

This provides cleaner output and quickly highlights differences:

![](https://olivernguyen.io/w/ezpkg/d2.png)

### Support for tests with random values

Using `diffz.Placeholder().AndIgnoreSpaces().DiffByLine()` or simply `diffz.ByLineZ()`:

```go
expect := `
color:
  id:   ████████-████-████-████-████████████
  name: red
  size: small
  code: #ff0000`

red := `
color:
  id:   d56d5f0d-f05d-4d46-9ce2-af6396d25c55
  name: red
  size: small
  code: #ff0000`

green := `
color:
  id:   5b01ec0b-0607-446e-8a25-aaef595902a9
  name: green
  size: small
  code: #00ff00`

fmt.Println("no diff")
fmt.Println(diffz.ByLineZ(red, expect))
fmt.Println("diff")
fmt.Println(diffz.ByLineZ(red, green))
```

The first `diffz.ByLineZ(red, expect)` will be considered equal, because of the use of the placeholder `█`. The second `diffz.ByLineZ(red, green)` will output:

![Image description](https://olivernguyen.io/w/ezpkg/d3.png)

---

## ezpkg.io is in its early stage

These packages are created to enhance the functionality of the standard library and other popular packages. They are intended to be used together with other packages rather than replacing them. The APIs are designed based on my experience working with Go, focusing on simplicity and ease of use.
I will try to follow best practices in Go, but not always. I also tend to choose a more performant implementation if possible.

### Versioning

All packages are released together with the same version number to simplify management, as they often call each other. When the API evolves, the version number is incremented for all packages.

### Why should you NOT use these packages?

- **More dependencies**: These packages will add more dependencies to your project.
- **Early development**: This project is in its early stages and will have API changes. There are other packages that are more mature and offer more features.
- **Customization**: Sometimes, writing your own code allows for better customization. You can also copy code from these packages and modify it to fit your specific needs.

### Why should you use these packages?

- You find yourself copying the same code over and over.
- You are starting a new project and want some simple and easy-to-use packages.
- You are learning Go and want to see how some common tasks are implemented.

### Stay Tuned

Most packages are usable but the API may change over time. There are a lot of missing utilities that I will add sooner or later. If you need something or want to share your thoughts, feel free to [open an issue](https://github.com/ezpkg/ezpkg/issues) or [start a discussion](https://github.com/ezpkg/ezpkg/discussions). I'm curious to see if these packages are making life _eazier_ for you!

👋 Happy coding! 🚀

---

## Author

_I'm [Oliver Nguyen](https://olivernguyen.io). A software maker working mostly in Go and JavaScript. I enjoy learning and seeing a better version of myself each day. Occasionally spin off new open source projects. Share knowledge and thoughts during my journey._
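The `must` and `fprint` helpers shown earlier are plain stdlib Go, so the pattern is easy to try on its own before reaching for the ezpkg packages. Here is a self-contained run of that same pattern; the function names mirror the article's examples, and this is my sketch rather than code from the ezpkg repo:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// must panics on a non-nil error and otherwise unwraps the value:
// the generic helper pattern the article describes copying between projects.
func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

// fprint wraps fmt.Fprint for writers (like strings.Builder)
// whose Write never returns an error.
func fprint(w io.Writer, v any) {
	must(fmt.Fprint(w, v))
}

// SliceToString4 joins the string forms of all elements,
// with no error plumbing in the loop body.
func SliceToString4[T any](slice []T) string {
	var b strings.Builder
	for _, v := range slice {
		fprint(&b, v)
	}
	return b.String()
}

func main() {
	fmt.Println(SliceToString4([]int{1, 2, 3})) // prints "123"
}
```

The trade-off is the usual one with `must`: it converts impossible-by-contract errors into panics, which is fine for writers like `strings.Builder` but should not be used on calls that can legitimately fail.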
olvrng
1,902,774
How Do You Choose the Best Transaction Follow-Up (Ta'qeeb) Office for Your Needs?
With so many transaction follow-up offices and the variety of their services, one may find it difficult to choose a follow-up office that meets...
0
2024-06-27T15:32:17
https://dev.to/ethar_arafat_df3feeefcb1a/kyf-tkhtr-fdl-mktb-tqyb-lkhdm-htyjtk-1e16
With the large number of transaction follow-up (ta'qeeb) offices and the variety of their services, an individual may find it difficult to choose a suitable [follow-up office](https://mo3aqeb.com/) that meets their needs. Here are some tips to help you choose the best follow-up office:

Identify your needs: What transactions do you want to complete through a follow-up office? Do you need a [family visa service for residents](https://mo3aqeb.com/services/show/3/%D9%85%D8%B9%D9%82%D8%A8_%D8%A7%D8%B3%D8%AA%D9%82%D8%AF%D8%A7%D9%85_%D8%B9%D8%A7%D8%A6%D9%84%D8%A9_%D9%85%D9%82%D9%8A%D9%85), or are you looking for an [office for obtaining a marriage permit](https://mo3aqeb.com/services/show/4/%D9%85%D9%83%D8%AA%D8%A8_%D8%A7%D8%B3%D8%AA%D8%AE%D8%B1%D8%A7%D8%AC_%D8%AA%D8%B5%D8%B1%D9%8A%D8%AD_%D8%B2%D9%88%D8%A7%D8%AC_%D8%B3%D8%B9%D9%88%D8%AF%D9%8A_%D9%85%D9%86_%D8%A3%D8%AC%D9%86%D8%A8%D9%8A%D8%A9)? What budget have you allocated for this service? How much time do you have available to complete these transactions?

Look for follow-up offices with a good reputation: Ask your friends and family about their experiences with different follow-up offices. Search for follow-up offices online and read customer reviews. Make sure the office is licensed by the competent authorities.

Contact different follow-up offices: Call different offices and ask about their services and prices. Make sure the office is familiar with the laws and procedures related to the transactions you want to complete. Ask specific questions about how the office works and how long transactions take to complete.

Compare prices and services: Do not accept the first offer a follow-up office makes you. Compare the prices and services offered by different offices before making your decision. Make sure the office gives you a quote that includes all the services you need.

Check the contract terms: Read the contract terms carefully before signing. Make sure the contract states all the services the office provides and how long they take. Make sure the contract specifies a dispute-resolution mechanism in case any problem occurs.

Communicate with the office regularly: Be sure to follow up with the office regularly to track the progress of your transactions. Do not hesitate to contact the office if you face any problem or have any questions.

Additional tips: Choose a follow-up office that specialises in the type of transactions you want to complete. Make sure the office has a qualified and competent team.
Make sure the office provides you with excellent customer service.

In the end, choosing a suitable follow-up office can save you a lot of time, effort, and money. Follow these tips to ensure you choose the best follow-up office for your needs. We also recommend dealing with [Abu Abdullah, the best transaction follow-up office](https://mo3aqeb.com/) in the Kingdom: an experienced office, distinguished by offering its services at market-competitive prices, with the highest efficiency and in the fastest possible time!
ethar_arafat_df3feeefcb1a
1,902,773
What's the most difficult and time-consuming part from development to production?
I am trying to figure out the pain points of developers throughout their journey from building the...
0
2024-06-27T15:32:06
https://dev.to/vamshi2506/whats-the-most-difficult-and-time-consuming-part-from-development-to-production-96n
webdev, javascript, programming, aws
I am trying to figure out the pain points of developers throughout their journey, from building the product to launching it to production.
vamshi2506
1,901,818
Rustify some puppeteer code(part I)
Why? Rust is pretty amazing but there are a few things that you might be wary about....
27,861
2024-06-27T15:30:00
https://artur.wtf/blog/rusty-puppets/
rust, scraping, webcrawling
## Why?

Rust is pretty amazing, but there are a few things that you might be wary about. There are [a few war stories](https://discord.com/blog/why-discord-is-switching-from-go-to-rust) of companies building their entire stack on `rust` and then living happily ever after. Software is an ever-evolving organism, so in the [darwinian sense the more adaptable the better](https://www.darwinproject.ac.uk/people/about-darwin/six-things-darwin-never-said/evolution-misquotation).

Enough of that though; I'm not here to advocate any particular language or framework. What I want is to share my experience with writing a scraper in `rust` equivalent to [my previous post](https://dev.to/adaschevici/gopherizing-some-puppeteer-code-29g4), where I used `golang` and `chromedp`. The experience using `go` with `chromedp` to automate Chrome was pretty good; it is not as powerful as what is available in `puppeteer`, so I figured I would have a look at what might be available in the `rust` landscape.

## What?

![Rust Puppeteering](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ueockjszks2z31802f0l.png)

In `rust` there are several libraries that deal with browser automation. A few I have had a look at are:

- [fantoccini](https://github.com/jonhoo/fantoccini) - A high-level API for programmatically interacting with web pages through WebDriver, but I want the Chrome DevTools Protocol instead.
- [rust-headless-chrome](https://github.com/rust-headless-chrome/rust-headless-chrome) - A Chrome DevTools Protocol client library in rust; not as active as the crate I wound up using.
- [chromiumoxide](https://github.com/mattsse/chromiumoxide) - This is the one that seems to be the most active in terms of development, so it looks like a good choice at the time of writing.
## TL;DR

As I was reading one of my older posts that focuses on quasi live coding, I realized it was boring as hell. If your attention span is that of a goldfish, like mine is, it probably makes sense to just drop in a link to the [repo](https://github.com/adaschevici/rustic-toy-chest/tree/main/rust-crawl-pupp) so that you can download the code and try it out yourself. The repo is a collection of rust prototypes that I have been building for fun and learning; I haven't yet had a compelling reason to use rust in production, unfortunately :cry:.

## How?

To my surprise, the code was closer in structure to the [`puppeteer`](https://pptr.dev/) version than it was to the [`chromedp`](https://github.com/chromedp/chromedp) one. The `chromedp` version uses nested context declarations to manage the browser and page runtimes; the `rust` version uses a more linear approach. You construct a browser instance and then you can interact with it as a user would. This points at the fact that the `chromiumoxide` API is higher level.

The way you can set things up to keep your use cases separate is by adding [`clap`](https://docs.rs/clap/latest/clap/) to your project and using command line flags to select the use case you want to run. You will see that I have covered most cases, but not everything is transferable from `puppeteer` or `chromedp` to the `chromiumoxide` version.

I will not go through the setup of `rustup`, the rust toolchain or `cargo`, as this is a basic and well-documented process; all you have to do is search for `getting started with rust` and you will find a bunch of resources.

## Show me the code

#### 1. Laying down the foundation

- set up my project root

```bash
cargo new rust-crawl-pupp
cd rust-crawl-pupp
cargo install cargo-edit # this is useful for adding and upgrading dependencies
```

- add dependencies via `cargo add`

```toml
[dependencies]
chromiumoxide = { version = "0.5.7", features = ["tokio", "tokio-runtime"] } # this is the main dependency
chromiumoxide_cdp = "0.5.2" # this is the devtools protocol
clap = { version = "4.5.7", features = ["derive", "cargo"] } # this is for command line parsing
futures = "0.3.30" # this is for async programming
tokio = { version = "1.38.0", features = ["full"] } # this is the async runtime
tracing = "0.1.40" # this is for logging
tracing-subscriber = { version = "0.3.18", features = ["registry", "env-filter"] } # this is for logging
```

- add `clap` command line parsing to the project so that each different use case can be called via a subcommand.

Define your imports:

```rust
use clap::{Parser, Subcommand};
```

Define your command structs for parsing the command line arguments. This will allow each use case to be called with its own subcommand, like so: `cargo run -- first-project`, `cargo run -- second-project`, and so on.

```rust
#[derive(Parser)]
#[command(
    name = "OxideCrawler",
    version = "0.1",
    author = "artur",
    about = "An example application using clap"
)]
struct Cli {
    #[command(subcommand)]
    command: Commands,
}

#[derive(Subcommand, Debug)]
enum Commands {
    FirstProject {},
    SecondProject {},
    ...
}
```

The way you can hook this into the main function is via a `match` statement that calls the appropriate function based on the subcommand that was passed in.

```rust
let args = Cli::parse();
...
match &args.command {
    Commands::FirstProject {} => {
        let user_agent = spoof_user_agent(&mut browser).await?;
        info!(user_agent, "User agent detected");
    }
    ...
}
```

#### 2. Starting the browser and browser cleanup

- use the `launch` method and its options to start the browser. If the viewport and window size are different, the browser will start in windowed mode, with the page size being smaller.

```rust
let (mut browser, mut handler) = Browser::launch(
    BrowserConfig::builder()
        .with_head() // this will start the browser with a visible window (headed, not headless)
        .no_sandbox() // this will disable the sandbox
        .viewport(None) // this will clear the default viewport emulation
        .window_size(1400, 1600) // this will set the window size
        .build()?,
)
.await?;

let handle = tokio::task::spawn(async move {
    loop {
        match handler.next().await {
            Some(h) => match h {
                Ok(_) => continue,
                Err(_) => break,
            },
            None => break,
        }
    }
});
```

- the browser cleanup needs to be done correctly, and there are two symptoms that you will see if you missed anything:
  - the browser will not close - it hangs at the end
  - you might get a warning like the following:

```bash
2024-06-26T08:40:01.418414Z WARN chromiumoxide::browser: Browser was not closed manually, it will be killed automatically in the background
```

To correctly clean up your browser instance, you will have to call these on the code paths that close the browser:

```rust
browser.close().await?;
browser.wait().await?;
handle.await?;
```

#### 3. Use cases

In the [repo](https://github.com/adaschevici/rustic-toy-chest/tree/main/rust-crawl-pupp) each use case lives in its own module most of the time. There are some cases where you might have two living in the same module when they are very closely related, like in Use Case `c.`.

__a. 
Spoof your user agent:__ The only way I have found to set your user agent was from the [`Page`](https://docs.rs/chromiumoxide/latest/chromiumoxide/page/struct.Page.html#) module via the `set_user_agent` method ```rust let page = browser.new_page("about:blank").await?; page.set_user_agent( "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) \ Chrome/58.0.3029.110 Safari/537.36", ) .await?; page.goto("https://www.whatismybrowser.com/detect/what-is-my-user-agent") .await?; ``` __b. Grabbing the full content of the page__ is pretty straightforward ```rust let page = browser .new_page("https://scrapingclub.com/exercise/list_infinite_scroll/") .await?; let content = page.content().await?; ``` __c. Grabbing elements via css selectors__, ```rust let elements_on_page = page.find_elements(".post").await?; let elements = stream::iter(elements_on_page) .then(|e| async move { let el_text = e.inner_text().await.ok(); match el_text { Some(text) => text, None => None, } }) .filter_map(|x| async { x }) .collect::<Vec<_>>() .await; ``` performing __relative selection__ from a specific node and mapping the content to `rust` types ```rust ... let product_name = e.find_element("h4").await?.inner_text().await?.unwrap(); let product_price = e.find_element("h5").await?.inner_text().await?.unwrap(); Ok(Product { name: product_name, price: product_price, }) ... ``` __d. When the page has infinite scroll__ you will have to scroll to the bottom of the page to be able to collect all the elements you are interested in. To achieve this you need to inject `javascript` into the page context and trigger a run of the function. The `chromiumoxide` api seems to have really decent support for this, I faced much less resistance than I did with `chromedp` and `go`. 
```rust let js_script = r#" async () => { await new Promise((resolve, reject) => { var totalHeight = 0; var distance = 300; // should be less than or equal to window.innerHeight var timer = setInterval(() => { var scrollHeight = document.body.scrollHeight; window.scrollBy(0, distance); totalHeight += distance; if (totalHeight >= scrollHeight) { clearInterval(timer); resolve(); } }, 500); }); } "#; let page = browser .new_page("https://scrapingclub.com/exercise/list_infinite_scroll/") .await?; page.evaluate_function(js_script).await?; ``` __e. When you need to wait for an element to load__, this was not exactly part of the `chromiumoxide` api so I had to hack it together. Given my limited rust expertise there probably a better way to do this but this is what I managed to come up with. If the async block runs over the timeout then the `element_result` will be an error, otherwise poll the dom for the element we are looking for. ```rust use tokio::time::{timeout, Duration}; ... let element_result = timeout(timeout_duration, async { loop { match page.find_element(selector).await { Ok(element) => return Ok(element), // Wait for a short interval before checking again Err(e) => tokio::time::sleep(Duration::from_millis(100)).await, } } }) .await; ``` #### 4. Fixtures to replicate various scenarios Some websites, actually most websites have some sort of delay for loading different parts of the page, in order to prevent blocking the entire page. To replicate this behavior fixtures can be used to inject nodes into the dom with a delay. For the more edge case scenarios I created fixtures to emulate edge behaviors while not actually having to remember a website that is live and behaves like that. 
The HTML is really basic: ```html <div id="container"> <!-- New node will be appended here --> </div> <script src="script.js"></script> ``` The `script.js` file is slightly more, but still fairly straightforward: ```javascript document.addEventListener('DOMContentLoaded', () => { // Function to create and append the new node function createDelayedNode() { // Create a new div element const newNode = document.createElement('div'); // Add some content to the new node newNode.textContent = 'This is a new node added after a delay.'; // Add some styles to the new node newNode.style.padding = '10px'; newNode.style.marginTop = '10px'; newNode.style.backgroundColor = '#f0f0f0'; newNode.style.border = '1px solid #ccc'; newNode.id = 'come-find-me'; // Append the new node to the container const container = document.getElementById('container'); container.appendChild(newNode); } // Set a delay (in milliseconds) const delay = 3000; // 3000ms = 3 seconds // Use setTimeout to create and append the node after the delay setTimeout(createDelayedNode, delay); }); ``` What it will do is create a new node with some text content and some styles, then append it to the container div after a delay of 3 seconds. ## Why to be continued? What I hate more than `to be continued` in a TV show where I don't have the next episode available is a blog post that has code that looks reasonable and that it might work, but doesn't. So going by the lesser of two evils principle I decided to make this a two parter which will give me the time to write and test the other use cases in order to make sure everything works as expected. 
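One pattern from use case e. is worth pulling out on its own before wrapping up: check, sleep a short interval, re-check, give up at a deadline. Stripped of chromiumoxide and the async runtime, the same idea fits in a few lines of std-only rust. This is a sketch for illustration; `wait_for` is a made-up helper name, not part of any crate:

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

/// Poll `find` until it yields a value or `timeout` elapses -- the same
/// loop as use case e., only synchronous and library-free.
/// `wait_for` is a hypothetical name used for this sketch.
fn wait_for<T, F>(mut find: F, timeout: Duration, interval: Duration) -> Result<T, &'static str>
where
    F: FnMut() -> Option<T>,
{
    // Compute the deadline once so the repeated sleeps can't push it back.
    let deadline = Instant::now() + timeout;
    loop {
        if let Some(value) = find() {
            return Ok(value);
        }
        if Instant::now() >= deadline {
            return Err("timed out waiting for element");
        }
        sleep(interval);
    }
}
```

The closure stands in for `page.find_element(...)`; anything that returns `Option<T>` can be polled this way.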
## Conclusions

- This is one of the few times I have stuck with `rust` through the pain, and I have to say it was a better experience than I had with `go` and `chromedp`
- writing the code was slightly faster since there was less boilerplate to write
- messing around with wrappers and `unwrap()` was challenging, but it probably gets easier in time
- the code in `rust` looks more like `puppeteer` than the `go` version did

#### In Part II

I will cover dealing with bot protection, handling frames, forms and more. Stay tuned!
adaschevici
1,901,678
40 Days Of Kubernetes (6/40)
Day 6/40 Kubernetes Multi Node Cluster Setup Step By Step - Kind Video...
0
2024-06-27T15:29:36
https://dev.to/sina14/40-days-of-kubernetes-640-4ign
kubernetes, 40daysofkubernetes
## Day 6/40

# Kubernetes Multi Node Cluster Setup Step By Step - Kind

[Video Link](https://www.youtube.com/watch?v=RORhczcOrWs) @piyushsachdeva
[Git Repository](https://github.com/piyushsachdeva/CKA-2024/)
[My Git Repo](https://github.com/sina14/40daysofkubernetes)

There are many ways to install `Kubernetes`, such as with `Minikube`, `MicroK8s`, `K3s` and `Kubeadm`, but in this section we're going to install it with a `Kind` cluster.

Read More: [Link1](https://spacelift.io/blog/install-kubernetes), [Link2](https://itnext.io/kubernetes-installation-methods-the-complete-guide-1036c860a2b3)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ovrwdu3wm7eaqk9p639y.png)

[kind](https://kind.sigs.k8s.io/) is a tool for running local Kubernetes clusters using Docker container “nodes”. kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.

## Installation Process

### 1. Prerequisite

Golang is needed at first:

```console
root@localhost:~# apt install golang -y
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
golang is already the newest version (2:1.18~0ubuntu2).
0 upgraded, 0 newly installed, 0 to remove and 22 not upgraded.
```

### 2. Download `kind` on Linux

[Installation Guide](https://kind.sigs.k8s.io/docs/user/quick-start/)

```console
root@localhost:~# [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.23.0/kind-linux-amd64
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    98  100    98    0     0   1080      0 --:--:-- --:--:-- --:--:--  1088
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 6381k  100 6381k    0     0  6188k      0  0:00:01  0:00:01 --:--:-- 6188k
root@localhost:~# chmod +x kind
root@localhost:~# mv kind /usr/local/bin/kind
root@localhost:~# kind --version
kind version 0.23.0
```

### 3. Installing `kubectl`

[Installation Guide](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/)

```console
root@localhost:~# curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   138  100   138    0     0    752      0 --:--:-- --:--:-- --:--:--   754
100 49.0M  100 49.0M    0     0  53.0M      0 --:--:-- --:--:-- --:--:-- 53.0M
root@localhost:~# curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   138  100   138    0     0    728      0 --:--:-- --:--:-- --:--:--   730
100    64  100    64    0     0    240      0 --:--:-- --:--:-- --:--:--   240
root@localhost:~# echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
kubectl: OK
root@localhost:~# sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
root@localhost:~# kubectl version
Client Version: v1.30.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.4
```

### 4. Creating a Cluster

Please read the [Release Notes](https://github.com/kubernetes-sigs/kind/releases)

```console
root@localhost:~# kind create cluster --image kindest/node:v1.29.4@sha256:3abb816a5b1061fb15c6e9e60856ec40d56b7b52bcea5f5f1350bc6e2320b6f8 --name jolly-jumper
Creating cluster "jolly-jumper" ...
 ✓ Ensuring node image (kindest/node:v1.29.4) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-jolly-jumper"
You can now use your cluster with:

kubectl cluster-info --context kind-jolly-jumper

Thanks for using kind! 😊
root@localhost:~# kubectl cluster-info --context kind-jolly-jumper
Kubernetes control plane is running at https://127.0.0.1:46167
CoreDNS is running at https://127.0.0.1:46167/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```

### 5. Interact with the cluster

```console
root@localhost:~# kind get clusters
jolly-jumper
root@localhost:~# kubectl get nodes
NAME                         STATUS   ROLES           AGE    VERSION
jolly-jumper-control-plane   Ready    control-plane   112s   v1.29.4
root@localhost:~# kubectl config get-contexts
CURRENT   NAME                CLUSTER             AUTHINFO            NAMESPACE
*         kind-jolly-jumper   kind-jolly-jumper   kind-jolly-jumper
```

### 6. Configuring a multi-node `kind` cluster

A YAML file with 3 nodes, for instance. [Find more](https://kind.sigs.k8s.io/docs/user/quick-start/#advanced)

```yaml
# three node (two workers) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```

```console
root@localhost:~# kind create cluster --name lucky-luke --config kind-lucky-luke.yaml
Creating cluster "lucky-luke" ...
 ✓ Ensuring node image (kindest/node:v1.30.0) 🖼
 ✓ Preparing nodes 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-lucky-luke"
You can now use your cluster with:

kubectl cluster-info --context kind-lucky-luke

Not sure what to do next? 😅  Check out https://kind.sigs.k8s.io/docs/user/quick-start/
root@localhost:~# kind get clusters
jolly-jumper
lucky-luke
```

---
**Note**
Important websites for the exam:
1. [Kubernetes Cheatsheet](https://kubernetes.io/vi/docs/reference/kubectl/cheatsheet/)
2. [Kubernetes Blog](https://kubernetes.io/blog/)
---

### 7. Switching between contexts

```console
root@localhost:~# kubectl config get-contexts
CURRENT   NAME                CLUSTER             AUTHINFO            NAMESPACE
          kind-jolly-jumper   kind-jolly-jumper   kind-jolly-jumper
*         kind-lucky-luke     kind-lucky-luke     kind-lucky-luke
root@localhost:~# kubectl config current-context
kind-lucky-luke
root@localhost:~# kubectl get nodes
NAME                       STATUS     ROLES           AGE     VERSION
lucky-luke-control-plane   Ready      control-plane   4m36s   v1.30.0
lucky-luke-worker          NotReady   <none>          3m40s   v1.30.0
lucky-luke-worker2         NotReady   <none>          3m26s   v1.30.0
root@localhost:~# kubectl config use kind-jolly-jumper
Switched to context "kind-jolly-jumper".
root@localhost:~# kubectl get nodes
NAME                         STATUS   ROLES           AGE   VERSION
jolly-jumper-control-plane   Ready    control-plane   10m   v1.29.4
```
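One addition to the multi-node config from step 6 that pairs well with the release-notes advice from step 4: kind also lets you pin the node image per node inside the config file itself, so `kind create cluster --config ...` no longer needs the `--image` flag. A sketch, reusing the v1.29.4 tag from step 4 (pinning by digest, as in step 4, is even more reproducible):

```yaml
# three node cluster config with a pinned node image per node
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.29.4
- role: worker
  image: kindest/node:v1.29.4
- role: worker
  image: kindest/node:v1.29.4
```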
sina14
1,902,771
Meet Updated Delphi DAC with RAD Studio 12, Patch 1, Cloud Providers Metadata Support, and more.
Devart is excited to announce the release of the updated Delphi DAC product line, featuring key...
0
2024-06-27T15:28:54
https://dev.to/devartteam/meet-updated-delphi-dac-with-rad-studio-12-patch-1-cloud-providers-metadata-support-and-more-4hjl
delphi, dac, devart
Devart is excited to announce the release of the updated Delphi DAC product line, featuring key enhancements such as support for Embarcadero RAD Studio 12, Patch 1, and Cloud Providers Metadata Support. These updates are designed to enhance your development experience and boost productivity.

Devart, a recognized vendor of world-class data connectivity solutions for various data connection technologies and frameworks, released new versions of [Delphi DAC](https://www.devart.com/dac.html). The main feature of these updates is support for Embarcadero RAD Studio 12 Athens, Release 1, which introduces enhanced VCL and FireMonkey libraries, Split Editor Views, updated C++ compilers and toolchain, support for Android API level 34, compatibility with 4K+ screens, new targeting capabilities for Windows 11, and more.

The significant updates in the Delphi DAC products:

- Metadata caching is now available for all cloud-based providers;
- SQLite now supports automatic detection of transaction states when controlled by an SQL statement. Additionally, Direct mode has been updated to version 3.45.2;
- The TOraSession in ODAC now includes the new functions GetAuditActionBanner and GetUnauthorizedAccessBanner;
- Microsoft Entra Service Principal authentication (auADServicePrincipal) in the prMSOLEDB provider is supported;
- The PostgreSQL provider works better with generated fields;
- Added voResetAutoInc in the VirtualTable component, which allows resetting AutoInc values on Clear in VirtualDAC.

To learn more about the recent release and download new products, visit: https://blog.devart.com/new-in-delphi-dac-support-for-rad-studio-12-patch-1-cloud-providers-metadata-and-more.html

**Delphi Data Access Components** allow the development of multi-platform applications in Embarcadero RAD Studio, Delphi, C++Builder, Lazarus, and Free Pascal on Windows, Linux, macOS, iOS, and Android for both 32-bit and 64-bit platforms. They are terrific tools that provide direct access to popular databases such as Oracle, Microsoft SQL Server, MySQL, InterBase, Firebird, PostgreSQL, SQLite, and clouds - Salesforce, FreshBooks, SugarCRM, and many others. In addition to these, we offer a mature ORM framework for Delphi, making it a comprehensive solution for all your database connectivity needs.

**About Devart**

Devart is one of the leading developers of database tools and administration software, ALM solutions, data providers for various database servers, data integration, and backup solutions. The company also implements Web and Mobile development projects. For additional information about Devart, visit https://www.devart.com/.
devartteam
1,878,467
Some thoughts on DevTalks Bucharest 2024
I'm an exceedingly lucky person, over the years I've had the opportunity to attend and speak at...
0
2024-06-27T15:26:15
https://dev.to/ukmadlz/some-thoughts-on-devtalks-bucharest-2024-3bab
conference, learning
I'm an exceedingly lucky person; over the years I've had the opportunity to attend and speak at (including _very_ last minute) multiple [DevTalks](https://www.devtalks.ro/) events. And I was exceedingly happy to be invited to both moderate and speak at the 11th edition in Bucharest between May 29th & 30th.

If you'd like a full run-down of all the speakers, talks, and activities they had, I'd check out the website https://www.devtalks.ro/ and the very active social media they run. I will admit that on day two I spent nearly every moment on the product world stage being a moderator, so all my thoughts on that are limited. But I can at least share about day one, and the expo booth. So here are my thoughts about the event.

## Expo

Let's get the expo out of the way first, as that covers both days. In the event space, it had a _*HUGE*_ expo between all the stages, with amazing brands that I recognised and huge booths from local brands I didn't. You could chill and work in a dedicated workspace area that came with quiet/meeting room pods, and you could play with or watch so many robots, including football and drawing… it was nuts.

And the refreshments just kept coming! Amazing food trucks outside, coffee from plenty of Nespresso Professional machines with baristas, beers from Heineken (I can't stand the beer, but at least it was available), and all-you-can-drink soft drinks from Pepsi.

## Day One

Unfortunately, after a very stressful travel day into Bucharest from sunny Birmingham I slept in a little, so I missed the opening speech and keynote (sorry, dear reader). But I did manage to catch some awesome talks between bumping into old friends.

The first talk I managed to stand at the back for was [Francesco Ciulla](https://dev.to/francescoxx) from [Daily.dev](https://daily.dev) on the Web Stage.
The talk was titled "Evolution of Web Development" and I enjoyed hearing about how the culmination of standards is being used by Daily.dev to produce an amazing experience that's scalable across browsers.

{% embed https://twitter.com/ukmadlz/status/1795729492985696427 %}

Next up, I saw some of [Vanessa Villa](https://x.com/vavillaiot) from [Pangea](https://pangea.cloud/) on the DevLead Stage. This talk was something a little different for me, as I'm not an IoT person beyond a little smart home tech; it was titled "IoT Evolution: Where is it now?". I'll admit she had my curiosity about the industrial applications of IoT right now, but I'm far removed from that, unfortunately.

{% embed https://twitter.com/ukmadlz/status/1795730884118884493 %}

Another talk from the DevLead stage had me watching my old friend [Alex Lakatos](https://dev.to/lakatos88) from [Interledger](https://interledger.org/). "Building a Developer-first Culture" was the title, and this one was way too close to home for me, having seen how Alex evolved this process, and how true it is. The biggest key takeaway is: with a small team, set the collaborative and open framework early so that everyone _wants_ to get involved with sharing how awesome the product they work on is!

{% embed https://twitter.com/ukmadlz/status/1795797219745894735 %}

The final talk of day one that I got to see was [Elad Shechter](https://dev.to/elad2412) from [Appwrite](https://appwrite.io/) on the Web Stage. His talk, titled "Update Your < Style >!", involved a lot of CSS and seemed like black magic. But it was interesting to see how he implemented a grid, as someone terrible at CSS.

## Day Two

Day two was a little different: I was moderating all day, so I sat on one stage and watched it all. I was also a little stupid and forgot to take photos and post them to social media for future reference.
I was moderating the Product World Stage, which was powered by [METRO.digital](https://metro.digital/) (the digital wing of the [METRO supermarkets](https://www.metroag.de/en)). So it's only fitting that day two was opened up by representatives from METRO.digital. First up, we had [Adrian Postelnicu](https://www.linkedin.com/in/adrian-postelnicu-83bba32/), who is the CPO for METRO.digital, setting the scene for the day.

Immediately after Adrian, we were introduced to [Aura Virgolici](https://www.linkedin.com/in/aura-mihaela-virgolici/) (Engineering Manager at [METRO.digital](https://metro.digital/)) & [Irina Poiana](https://www.linkedin.com/in/irina-poiana-19421121/) (Domain Owner at [METRO.digital](https://metro.digital/)). They gave a talk titled "Bridging the Gap: A Journey to High-Performing Teams", about the pain points of a digital transformation programme [METRO.digital](https://metro.digital/) went through.

The second full talk of the day was from [Nesrine Changuel](https://www.linkedin.com/in/nesrinechanguel/), giving a talk named "The Secret to Crafting Lovable Tech Products". Here she covered the idea of "delight", which was an interesting concept for features that don't necessarily bring direct ROI but do engage and make users love a product. You can even download the [Product Delight Map](https://www.nesrine-changuel.com/blogs_1/product-delight-map) from the talk.

Our first AI talk of the day was given by [Miro Alexandrov](https://www.linkedin.com/in/miroslavaleksandrov/) from [Ipsos](https://www.ipsos.com/en-ca); his talk was "Leveraging Generative AI and Product Management Frameworks: How Ipsos is transforming Market Research". I did enjoy how he described Ipsos's use of AI to classify and enhance the existing research they have, to improve and speed up projects.

The last talk before the lunch break was [Lucian Gruia](https://www.linkedin.com/in/luciangruiaro/) from [Ciklum](https://www.ciklum.com/).
The talk "Coding Privacy-Aware Enterprise AI with RAG Architecture" was an overview of RAG and how it's implemented, which for someone like me who knows little about the implementation was a great starter.

After lunch we had [Dan-Mihai Dinu](https://www.linkedin.com/in/danmdinu/) from [Bolt](https://bolt.eu/), who had the most dramatic and interesting start to his talk, with costumes and all. The talk itself was called "Bolt Food: The Secret Recipe" and was an interesting run-through of the speed of development and iteration that Bolt has gone through with projects.

The next talk, from [Stefan Tudor Murariu](https://www.linkedin.com/in/stefan-tudor-murariu/), was titled "The essential tools for early-stage product companies". His talk was about product development, giving some insight into getting over those first hurdles of finding product fit.

Following on, we had two speakers from [Swissquote](https://www.swissquote.com/): [Edwige Fiaclou](https://www.linkedin.com/in/edfiaclou/) (Head Software Engineer Tech Talent & Methodologies) and [Laetitia Aegerter-Cuello](https://www.linkedin.com/in/laetitia-aegerter-cuello-b107378/) (Senior Agile Coach). The session "Agility à la carte: Product and Delivery a tasty combination for innovation" was a fun (it had Toblerone!) walk-through of the Disciplined Agile process and its effects on product delivery at [Swissquote](https://www.swissquote.com/).

Our last AI talk of the day was from [George Dita](https://www.linkedin.com/in/georgedita/), with a talk titled "Empower Your Product Management with AI: Enhance, Don’t Replace Your Unique Skills". George gave a breakdown of a toolchain they use for decision-making and process automation around product management.
The final talk of the day (hurray, it's over) was from yours truly, with the help of my friend Mike Dolha (you can find him on [Instagram](https://www.instagram.com/mikeddol), [LinkedIn](https://www.linkedin.com/in/mikeddol/), or his [podcast](https://tangents.transistor.fm/)). My talk was "The ABC of DX", which was meant to be a practical guide to things organisations can do to improve internal and external Developer Experience, so people love what they're building on or with. Mike then drove a quite hilarious Q&A session at the end. If you'd like a write-up of this talk and my thoughts, please feel free to add a comment.

## The conclusion

[DevTalks](https://www.devtalks.ro/) is an event that will always hold a special place in my heart, but Ow My God is it so busy! You cannot convey everything you could learn in a simple post. If you have the opportunity to attend, you should, as Romania is a beautiful country with lovely cities (and food). And if you'd like to give a talk, the organisation behind it always runs a CFP. They also have a bunch of other events throughout the year.

{% embed https://www.linkedin.com/embed/feed/update/urn:li:share:7212023757987581953 %}
ukmadlz
1,902,770
Kubernetes Pod 101
With Kubernetes, our main goal is to run our application in a container. However, Kubernetes does not...
27,750
2024-06-27T15:26:14
https://psj.codes/kubernetes-pod-101
kubernetes, devops, opensource, tutorial
With Kubernetes, our main goal is to run our application in a container. However, Kubernetes does not run the container directly on the cluster. Instead, it creates a Pod that encapsulates the container. ![Pod](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/54p21ai5oqrszbpb633s.png) `A Pod is the smallest object that you create and manage in Kubernetes.` A Pod consists of one or more containers that share storage and network resources, all running within a shared context. This shared context includes Linux namespaces, cgroups, and other isolation mechanisms, similar to those used for individual containers. In a Kubernetes cluster, Pods use two models to run containers.: 1. ***One-container-per-Pod model***: This is the common use case in Kubernetes. A Pod acts as a wrapper for a container, with Kubernetes managing the Pod instead of the individual container. *Refer to diagram POD-A.* 2. ***Multi-containers Pod model:*** Pods can run multiple containers that work closely together. These Pods hold applications made up of several containers that need to share resources and work closely. These containers operate as a single unit within the Pod. *Refer to diagram POD-B.* ![pod](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q80pqoaxc6bm2icg2t41.png) ## Anatomy of a Pod ### Containers **Main Container** * **Primary Role:** This is the application's primary container. * **Example:** If you have a web application, the main container will run the web server that serves your application. **Sidecar Containers** * **Supporting Role:** These auxiliary containers support the main container, often used for logging, monitoring, or proxying tasks. * **Example:** For the same web application, a sidecar container might handle logging by collecting and storing log data generated by the web server. ### Storage (Volumes) Pods can include storage resources known as volumes, which enable data persistence across container restarts. 
Volumes in a Pod are shared among all containers in that Pod, allowing for data exchange between them. * **Types of Volumes:** * **emptyDir:** A temporary directory that is created when a Pod is assigned to a node and deleted when the Pod is removed. * **hostPath:** Maps a file or directory from the host node’s filesystem into a Pod. * **persistentVolumeClaim:** A request for storage by a user that binds to a PersistentVolume (PV) in the cluster. * **configMap:** Provides configuration data, command-line arguments, environment variables, or container files. * **secret:** Used to store sensitive data such as passwords, OAuth tokens, and SSH keys. ### Network Each Pod is assigned a `unique IP address`. Containers within the same Pod share the network namespace, which means they can communicate with each other using `localhost`. Pods can communicate with each other using their assigned IP addresses. * **Pod IP:** A unique IP address assigned to each Pod. * **DNS:** Kubernetes automatically assigns DNS names to Pods and services, facilitating network communication within the cluster. ### Pod Lifecycle Like individual application containers, Pods are considered to be relatively temporary (rather than permanent) entities. Understanding the lifecycle of a Pod is crucial for effective management and troubleshooting. Pods can be in one of several phases during their lifecycle. A Pod's `status` field is a `PodStatus` object, which has a `phase` field. The phase of a Pod is a simple, high-level summary of where the Pod is in its lifecycle. * **Pending:** The Pod has been accepted by the Kubernetes system but one or more container images have not been created. * **Running:** The Pod has been bound to a node, and all of the containers have been created. At least one container is still running or is in the process of starting or restarting. * **Succeeded:** All containers in the Pod have terminated successfully, and the Pod will not be restarted. 
* **Failed:** All containers in the Pod have terminated, and at least one container has terminated in failure. * **Unknown:** The state of the Pod cannot be obtained, typically due to an error in communicating with the node where the Pod resides. ### Pod Conditions Pods have a set of conditions that describe their current state. These conditions are used to diagnose and troubleshoot the status of Pods. * **PodScheduled:** Indicates whether the Pod has been scheduled to a node. * **Initialized:** All init containers have been completed successfully. * **Ready:** The Pod can serve requests and should be added to the load-balancer pools of all matching Services. * **ContainersReady:** All containers in the Pod are ready. * **PodReadyToStartContainers**: (beta feature; enabled by default) The Pod sandbox has been successfully created and networking configured. ## Pod creation A Pod can be created using two methods. The first method is by using the `kubectl run` command. ```bash kubectl run --image nginx nginx-pod ``` The second method is declarative. In this approach, you create a Pod configuration file in YAML and apply it using the `kubectl create` or `kubectl apply` command. This method is widely used because it allows you to manage multiple versions of an application easily. Create a configuration file named `nginx-pod.yaml` with the following content. ```yaml apiVersion: kind: metadata: name: nginx-pod labels: app: nginx spec: containers: - name: nginx-container image: nginx:latest ports: - containerPort: 80 ``` ```bash kubectl apply -f nginx-pod.yaml ``` You can list the pods using `kubectl get pods` command. ```bash ❯ kubectl get pods NAME READY STATUS RESTARTS AGE nginx-pod 1/1 Running 0 5s ``` ## Let's break down the definition of a Pod in Kubernetes. When writing any object in Kubernetes, you need to include certain required fields: `apiVersion`, `kind`, `metadata`, and `spec`. 
### apiVersion

This field specifies the version of the Kubernetes API that your object adheres to, ensuring compatibility with your Kubernetes cluster (e.g., `v1`).

### kind

This field defines the type of Kubernetes object being created. In our YAML file, it indicates that we are creating a Pod.

### metadata

This section provides essential information about the Pod:

* `name`: Uniquely identifies the Pod within its namespace (e.g., `nginx-pod`).
* `namespace`: Assigns a specific namespace to the Pod for resource isolation.
* `labels`: Key-value pairs used to organize and select resources (e.g., `app: nginx`).
* `annotations`: Key-value pairs that offer additional details about the Pod, useful for documentation, debugging, or monitoring.
* `ownerReferences`: Specifies the controller managing the Pod, establishing a relationship hierarchy among Kubernetes resources.

### spec

The `spec` section defines the desired state of the Pod, including its containers and their configurations:

* `containers`: A list defining each container within the Pod.
    * `name`: Identifies the container (e.g., `nginx-container`).
    * `image`: Specifies the Docker image to use (e.g., `nginx:latest`).
    * `ports`: Indicates which ports should be exposed by the container (e.g., port `80`).

**Additional Optional Fields**: For more advanced setups, you can include additional fields within the `spec` section:

* `resources`: Manages the Pod's resource requests and limits.
* `volumeMounts`: Specifies volumes to be mounted into the container's filesystem.
* `env`: Defines environment variables accessible to the container.
* `volumes`: Describes persistent storage volumes available to the Pod.

## Static pods

In Kubernetes, static Pods offer a way to manage Pods directly on a node without involving the Kubernetes control plane. Unlike regular Pods, which are managed by the Kubernetes API server, static Pods are managed directly by the kubelet daemon on a specific node.
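Before going deeper into static Pods, here is how the optional `spec` fields listed earlier can be combined in a single manifest. This is a sketch: the resource values, the environment variable, and the `emptyDir` cache volume are illustrative assumptions, not part of the earlier example.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx-container
      image: nginx:latest
      ports:
        - containerPort: 80
      # resources: requests are guaranteed, limits cap usage (illustrative values)
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 250m
          memory: 256Mi
      # env: variables visible inside the container (name chosen for illustration)
      env:
        - name: NGINX_ENTRYPOINT_QUIET_LOGS
          value: "1"
      # volumeMounts: where the volume below appears in the container filesystem
      volumeMounts:
        - name: cache
          mountPath: /var/cache/nginx
  # volumes: storage available to the Pod; emptyDir lives as long as the Pod does
  volumes:
    - name: cache
      emptyDir: {}
```

Applying it with `kubectl apply -f` works exactly as in the simpler manifest above.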
### How Static Pods Work

Static Pods are defined by creating Pod manifest files on the node itself. These manifest files are usually located in a directory monitored by the kubelet, such as `/etc/kubernetes/manifests`, or a directory specified in the kubelet's configuration (`kubelet.conf`).

### Key Characteristics of Static Pods

1. **Node-Specific Management**: Each node runs its own instance of the kubelet, which monitors a designated directory for Pod manifests. When a manifest file is detected, updated, or removed, the kubelet creates, updates, or deletes the corresponding Pod on that node.
2. **No Kubernetes API Interaction**: Unlike regular Pods, which are stored in etcd and managed via the API server, static Pods are not created through the Kubernetes API. (As described below, the kubelet does create a read-only mirror Pod on the API server for each static Pod, so they can still be observed with tools like `kubectl`.)
3. **Use Cases**: Static Pods are useful in scenarios where Pods need to run directly on a node, independent of the Kubernetes control plane. This can include bootstrapping components required for Kubernetes itself, or running critical system daemons that must be available even if the control plane is offline.

### Creating Static Pods

To create a static Pod:

* **Create a manifest file**: Write a Pod manifest YAML file specifying the Pod's metadata and spec, just as you would for a regular Pod.
* **Place it in the watched directory**: Save the manifest file in the directory monitored by the kubelet (`/etc/kubernetes/manifests` by default). This directory can be configured in the kubelet configuration file by setting `staticPodPath` to the Pod manifests path. Alternatively, it can be passed to the kubelet through the `--pod-manifest-path` flag, but this flag is deprecated.
* **If needed, restart the kubelet**:

```bash
systemctl restart kubelet
```

Static Pods in Kubernetes are managed directly by the kubelet and are automatically restarted if they fail.
The kubelet ensures that each static Pod's state aligns with its specified manifest file. Despite this direct management, the kubelet also creates a mirror Pod on the API server for each static Pod. This makes the static Pod visible to the API server; however, the API server cannot control it.

## Conclusion

Pods are the core units in Kubernetes, encapsulating containers with shared storage and network resources. They can run single or multiple containers, providing flexibility in application deployment. Understanding Pods' anatomy, lifecycle, and creation methods, including static Pods, is crucial for efficient and scalable application management in Kubernetes environments.

> Pods in Kubernetes are inherently ephemeral and can be terminated at any time. Kubernetes uses controllers to manage Pods effectively, ensuring their desired state is maintained. ReplicaSet controllers ensure a specified number of Pod replicas are running, while other controllers like Deployments, StatefulSets, and DaemonSets cater to different use cases.

***Thank you for reading this blog; your interest is greatly appreciated, and I hope it helps you on your Kubernetes journey. In the next blog, we'll explore the Kubernetes controllers that are used to manage Pods.***
pratikjagrut
1,902,769
Your organization is perfectly designed to produce the results it gets
Leaders should ensure dev teams maintain high cohesion and low coupling to improve quality and...
0
2024-06-27T15:24:03
https://dev.to/roikonen/your-organization-is-perfectly-designed-to-produce-the-results-it-gets-4pm
ddd, productivity, architecture, teamtopologies
![high cohesion and low coupling](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j4o4f1f7hzfvsy8ykue8.jpeg)

Leaders should ensure dev teams maintain **high cohesion and low coupling** to improve quality and increase overall productivity. Architects should ensure components maintain **high cohesion and low coupling** to enhance system maintainability, scalability and reliability.

WHY should you foster high cohesion and low coupling in teams?

🎯 **Better Focus:** Teams with clear, unified goals (high cohesion) are more productive and effective.

💪 **Robust Code:** Expertise and focus lead to higher quality, more reliable software, and easier testing.

🤝 **Fosters Shared Ownership:** Team members are familiar with the codebase and can seamlessly continue work if someone leaves.

🌐 **Adaptability:** Independent, low-coupled teams can respond quickly to new opportunities and challenges.

🔄 **Flexibility:** Reduced interdependencies allow for easy reconfiguration to meet new priorities.

💡 **Innovation:** Autonomy via low coupling fosters creativity and continuous improvement.

If you think about it, **your organization is perfectly designed to produce the results it gets**. So, could it be that by aligning teams and components, both having high cohesion and low coupling, you can achieve higher quality, better maintainability and faster development velocity? 🤔

**Leaders can't do this alone, architects can't do this alone.** Involving architects in organizational design helps to achieve this. This way, leaders can ensure that both the team structure and the software architecture align with the principles of high cohesion and low coupling, driving value creation, speed and agility. 🚀

What strategies, tools, or methods have you found effective in fostering high cohesion and low coupling in your teams? My two cents:

1️⃣ **Domain-Driven Design (DDD):** Utilizing DDD principles to structure teams around business domains. This ensures that teams have a deep understanding of the domain they are working in, which enhances cohesion and allows for more effective problem-solving and decision-making.

2️⃣ **Team Topologies:** Applying the Team Topologies framework to define clear team boundaries and interaction patterns. This helps in designing teams that are aligned with the software architecture and business needs, promoting low coupling and high cohesion.
roikonen
1,902,768
flash bitcoin apk
Hey there, fellow cryptocurrency enthusiasts! Are you looking for a new and exciting way to get...
0
2024-06-27T15:23:40
https://dev.to/shiloh_cubox_16173fe092ac/flash-bitcoin-apk-4km2
flashbtc, flashbitcoin, flashusdt, flashbitcoinsoftware
Hey there, fellow cryptocurrency enthusiasts! Are you looking for a new and exciting way to get involved in the world of digital currency? Look no further than Flash USDT, the innovative solution from MartelGold. As a valued member of the MartelGold community, I’m excited to share with you the incredible benefits of Flash USDT and how it can revolutionize your Tether experience. With Flash USDT, you can generate Tether transactions directly on the blockchain network, with fully confirmed transactions that can remain on the network for an impressive duration. What Makes Flash USDT So Special? So, what sets Flash USDT apart from other Tether forks? For starters, Flash USDT offers a range of features that make it a game-changer in the world of digital currency. With Flash USDT, you can: Generate and send up to 20,000 USDT daily with the basic license Send a staggering 50,000 USDT in a single transaction with the premium license Enjoy one-time payment with no hidden charges Send Tether to any wallet on the blockchain network Get access to Blockchain and Binance server files Enjoy 24/7 support How to Get Started with Flash USDT Ready to unlock the power of Flash USDT? Here’s how to get started: Choose Your License: Select from their basic or premium license options, depending on your needs. Download Flash USDT: Get instant access to their innovative software, similar to flash usdt software. Generate Tether Transactions: Use Flash USDT to generate fully confirmed Tether transactions, just like you would with flash usdt sender. Send Tether: Send Tether to any wallet on the blockchain network, with the ability to track the live transaction on bitcoin network explorer using TX ID/ Block/ Hash/ BTC address. MartelGold’s Flash USDT Products At MartelGold, they’re dedicated to providing you with the best Flash USDT solutions on the market. 
Check out their range of products, designed to meet your needs: FlashGen USDT Sender: Unlock the power of Flash USDT with their innovative sender software, allowing you to generate and send up to 20,000 USDT daily. Learn More $2000 Flash USDT for $200: Get instant access to $2000 worth of Flash USDT for just $200. Learn More Stay Connected with MartelGold Telegram: t.me/martelgold At MartelGold, they’re committed to providing you with the best Flash USDT solutions on the market. With their innovative software and exceptional customer support, you can trust them to help you unlock the full potential of Flash USDT. Ready to Get Started? Visit their website today and discover the power of Flash USDT with MartelGold. www.martelgold.com Join the Conversation t.me/martelgold Need Help? Contact them today for any questions or inquiries. Their dedicated support team is here to help. t.me/martelgold Visit MartelGold today and start generating Tether transactions like a cryptomania! www.martelgold.com Message them on telegram! t.me/martelgold Get ready to take your Tether experience to the next level with Flash USDT. Visit MartelGold today and discover the power of innovative software like atomic flash usdt, flash usdt wallet, and flash usdt software free! www.martelgold.com
shiloh_cubox_16173fe092ac
1,902,766
how to flash bitcoin
FlashGen offers several features, including the ability to send Bitcoin to any wallet on the...
0
2024-06-27T15:19:40
https://dev.to/jaydyjaygt/how-to-flash-bitcoin-4i4b
flashbtc, flashusdt, flashbitcoin, flashbitcointransaction
FlashGen offers several features, including the ability to send Bitcoin to any wallet on the blockchain network, support for both Segwit and legacy addresses, live transaction tracking on the Bitcoin network explorer, and more. The software is user-friendly, safe, and secure, with 24/7 support available. Telegram: @martelgold Visit https://martelgold.com To get started with FlashGen Software, you can choose between the basic and premium licenses. The basic license allows you to send 0.4BTC daily, while the premium license enables you to flash 3BTC daily. The software is compatible with both Windows and Mac operating systems and comes with cloud-hosted Blockchain and Binance servers. Telegram: @martelgold Please note that FlashGen is a paid software, as we aim to prevent abuse and maintain its value. We offer the trial version for $1200, basic license for $5100, and the premium license for $12000. Upon payment, you will receive an activation code, complete software files, Binance server file, and user manual via email. Telegram: @martelgold If you have any questions or need assistance, our support team is available to help. You can chat with us on Telegram or contact us via email at [email protected] For more information and to make a purchase, please visit our website at www.martelgold.com. Visit https://martelgold.com to purchase software
jaydyjaygt
1,902,765
flash bitcoin meaning
How to Know Flash Bitcoin: Unlock the Secrets with MartelGold Hey there, fellow Bitcoin enthusiasts!...
0
2024-06-27T15:18:58
https://dev.to/shiloh_cubox_16173fe092ac/flash-bitcoin-meaning-bb
flashbtc, flashbitcoin, flashbitcoinsoftware, flashusdt
How to Know Flash Bitcoin: Unlock the Secrets with MartelGold Hey there, fellow Bitcoin enthusiasts! Are you tired of feeling left behind in the world of cryptocurrency? Do you want to stay ahead of the curve and unlock the full potential of Bitcoin? Look no further than FlashGen (BTC Generator), the innovative software that’s taking the Bitcoin community by storm. As a valued member of the MartelGold community, I’m excited to share with you the incredible benefits of FlashGen and how it can revolutionize your Bitcoin experience. With FlashGen, they can generate Bitcoin transactions directly on the Bitcoin network, with fully confirmed transactions that can remain on the network for an impressive duration of up to 60 days with the basic license and a whopping 120 days with the premium license. What Makes FlashGen So Special? So, what sets FlashGen apart from other Bitcoin forks? For starters, FlashGen offers a range of features that make it a game-changer in the world of cryptocurrency. With FlashGen, they can: Generate and send up to 0.05 Bitcoin daily with the basic license Send a staggering 0.5 Bitcoin in a single transaction with the premium license Enjoy one-time payment with no hidden charges Send Bitcoin to any wallet on the blockchain network Get access to Blockchain and Binance server files Enjoy 24/7 support How to Get Started with FlashGen Ready to unlock the power of FlashGen? Here’s how to get started: Choose Your License: Select from their basic or premium license options, depending on your needs. Download FlashGen: Get instant access to their innovative software. Generate Bitcoin Transactions: Use FlashGen to generate fully confirmed Bitcoin transactions. Send Bitcoin: Send Bitcoin to any wallet on the blockchain network. MartelGold’s FlashGen Products Check out range of products, designed to meet your needs: Flashgen Bitcoin Software 7 Days Trial: Try before you buy with their 7-day trial offer. 
Learn More Flashgen Basic: Unlock the power of FlashGen with their basic license, allowing you to generate up to 0.05 Bitcoin daily. Learn More FlashGen Premium: Take your FlashGen experience to the next level with their premium license, enabling you to send up to 0.5 Bitcoin in a single transaction. Learn More $1500 Flash Bitcoin for $150: Get instant access to $1500 worth of Flash Bitcoin for just $150. Learn More $1500 Flash USDT for $150: Experience the power of Flash USDT with their limited-time offer. Learn More Stay Connected with MartelGold contact martelgold today! t.me/martelgold Ready to Get Started? Visit martelgold today and discover the power of FlashGen with MartelGold. www.martelgold.com Join the Conversation Follow martelgold on Telegram for the latest updates and promotions! t.me/martelgold Need Help? Contact martelgold today for any questions or inquiries. Their dedicated support team is here to help. t.me/martelgold
shiloh_cubox_16173fe092ac
1,902,494
I took the bull by its horns. Spoiler: I didn't die
As I prepare to embark on an exciting journey with the HNG 11 Internship, again 🥲, I’m reminded of a...
0
2024-06-27T14:47:59
https://dev.to/adedaramola/i-took-the-bull-by-its-horns-spoiler-i-didnt-die-2gah
backend, laravel
As I prepare to embark on an exciting journey with the HNG 11 Internship, again 🥲, I'm reminded of a particularly challenging SQL querying problem I recently encountered and successfully resolved. This experience really helped me see things in a different light than usual.

P.S.: This project is written in Laravel, so I might be very Laravel-specific. Yeah, PHP is not dead, yet 😂

## The Problem

The issue came up while I was building ride booking software to be used by transport businesses. I needed to write a query to filter available rides by the parameters sent in. Now you might think, ohh, this is straightforward. Yeah, it is, or I thought it was, if only the database schema were straightforward.

Picture this: I had `Ride`, `RideInstance`, and `RideDestinationLocation` Eloquent models. I had more, but these are the most important for the scope of this article. In essence, this was the optimal design (let me know if you think it can be better) for creating recurring rides for each day, where each ride could have multiple stop destinations. The complication is that users don't book a `Ride` (that's only a template) but a `RideInstance`, which is only aware of destination locations through the `Ride` model.

## The Solution

Talk is cheap, let's get into the code.
```php
return RideInstance::query()
    ->with($relationships)
    // Count unbooked seats so callers can see availability per instance.
    ->withCount([
        'seats as available_seats' => function (Builder $query) {
            $query->whereNull('booked_at');
        },
    ])
    // Filter by departure city through the parent Ride's departure location.
    ->when($departureCityId, function (Builder $query) use ($departureCityId) {
        $query->whereHas(
            'ride.departureLocation.city',
            function (Builder $query) use ($departureCityId) {
                $query->where('id', $departureCityId);
            },
        );
    })
    // Filter by destination city through the Ride's destination locations.
    ->when($destinationCityId, function (Builder $query) use ($destinationCityId) {
        $query->whereHas(
            'ride.rideDestinationLocations.city',
            function (Builder $query) use ($destinationCityId) {
                $query->where('id', $destinationCityId);
            },
        );
    })
    ->when($departureDate, function (Builder $query) use ($departureDate) {
        $query->whereDate('departure_date', $departureDate);
    })
    ->when($businessId, function (Builder $query) use ($businessId) {
        $query->whereHas('ride.business', function ($query) use ($businessId) {
            $query->where('id', $businessId);
        });
    })
    ->paginate($perPage);
```

## Could this be improved?

Definitely. I very much intend to improve this, maybe by implementing it with raw SQL queries instead of the Eloquent ORM (bro can be non-performant at times) to gain a bit of latency, but that's if and when I encounter actual issues.

## HNG 11 Journey

As I look forward to starting the HNG Internship, I'm excited about the opportunities to further hone my skills and tackle new challenges. The internship promises a dynamic environment where I can collaborate with other talented interns, learn from real-world projects, and contribute to innovative solutions. My passion for building solutions that solve real problems (and my fear of the trenches 😭) drives me to continuously improve and adapt. The [HNG 11 Internship](https://hng.tech/internship) is a perfect platform to achieve these goals, offering a blend of practical experience and mentorship. Hopefully I get past stage 5 this time though 😂. I'm eager to embark on this journey, ready to face new challenges, learn, and grow as a developer.
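As an aside on the raw-SQL direction mentioned above, here is a rough, hand-written SQL equivalent of the Eloquent query with all filters applied. The table and column names are inferred from Laravel's naming conventions for these models, so treat this as a sketch rather than the exact query the ORM generates:

```sql
-- Sketch only: table/column names assumed from Laravel conventions.
SELECT ride_instances.*,
       (SELECT COUNT(*)
          FROM seats
         WHERE seats.ride_instance_id = ride_instances.id
           AND seats.booked_at IS NULL) AS available_seats
  FROM ride_instances
  JOIN rides ON rides.id = ride_instances.ride_id
 WHERE EXISTS (SELECT 1
                 FROM locations
                WHERE locations.id = rides.departure_location_id
                  AND locations.city_id = :departure_city_id)
   AND EXISTS (SELECT 1
                 FROM ride_destination_locations rdl
                WHERE rdl.ride_id = rides.id
                  AND rdl.city_id = :destination_city_id)
   AND DATE(ride_instances.departure_date) = :departure_date
   AND rides.business_id = :business_id
 LIMIT :per_page OFFSET :offset;
```

Whether this actually beats Eloquent depends on the indexes and the dataset; measure before switching.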
Perhaps you're a recruiter seeing this; consider reaching out to me [here](https://linkedin.com/in/adetimehin). If you think I'm not a fit (which is probably not true, so reach out first!), you'll definitely find brilliant minds at [HNG Talents](https://hng.tech/hire).

Till I write again...
adedaramola
1,902,763
Orchestrating the Cloud: Building Robust Workflows with AWS Step Functions
Orchestrating the Cloud: Building Robust Workflows with AWS Step Functions In today's...
0
2024-06-27T15:17:36
https://dev.to/virajlakshitha/orchestrating-the-cloud-building-robust-workflows-with-aws-step-functions-3574
![usecase_content](https://cdn-images-1.medium.com/proxy/1*zqfBK-ivKOyE5TLv4mHkkA.png)

# Orchestrating the Cloud: Building Robust Workflows with AWS Step Functions

In today's dynamic digital landscape, applications need to be agile, responsive, and scalable. Event-driven architectures have emerged as a powerful paradigm to meet these demands, enabling systems to react to events and trigger actions in real time. AWS Step Functions plays a pivotal role in this space, providing a serverless orchestration service that simplifies the development and execution of complex workflows in the cloud.

### Introduction to AWS Step Functions

AWS Step Functions is a fully managed service that empowers developers to coordinate distributed applications and microservices using state machines. This visual workflow service allows you to define, execute, and monitor workflows as a series of discrete steps, each performing a specific task. At its core, Step Functions employs the concept of state machines, abstracting complex processes into a series of states and transitions. These state machines are defined using the JSON-based Amazon States Language, providing a standardized and human-readable format.

### Key Components of Step Functions

1. **States:** The building blocks of a state machine, each representing a single unit of work. Step Functions offers a rich variety of states, including:
    * **Task States:** Execute a single activity, such as invoking a Lambda function, starting an ECS task, or making an API call.
    * **Choice States:** Introduce branching logic based on input data, allowing for dynamic workflow execution.
    * **Parallel States:** Execute multiple branches concurrently, enhancing performance for independent tasks.
    * **Wait States:** Pause the workflow execution for a specified duration or until a designated time.
    * **Succeed/Fail States:** Define terminal states that mark the success or failure of the workflow.
2. **Transitions:** Define the flow of execution between states, determining the next step based on the outcome of the previous state.
3. **Input and Output:** Each state can receive input and generate output, facilitating data passing and manipulation throughout the workflow.
4. **Executions:** Represent a single run of a state machine. Step Functions provides detailed execution history, allowing for easy monitoring, troubleshooting, and auditing.

### Use Cases for AWS Step Functions

Step Functions' versatility makes it suitable for a wide range of use cases, including:

#### 1. Microservice Orchestration

Modern applications often rely on a network of interconnected microservices. Step Functions can orchestrate complex interactions between these services, ensuring seamless data flow and error handling.

**Example:** Imagine an e-commerce platform where an order placement triggers a series of actions: validating payment, updating inventory, sending notifications, and fulfilling shipment. Step Functions can manage these individual tasks as separate states, ensuring each step completes successfully before proceeding.

#### 2. Data Processing Pipelines

Step Functions excels at orchestrating data processing workflows, especially when dealing with large datasets or complex transformations. It integrates seamlessly with services like AWS Glue, AWS Lambda, and Amazon EMR, enabling efficient ETL processes.

**Example:** Consider a scenario where you need to process log files from various sources, transform the data into a structured format, and load it into a data warehouse. Step Functions can coordinate the data extraction, transformation using AWS Glue or Lambda functions, and loading into Amazon Redshift or S3.

#### 3. Serverless Application Backends

With its serverless nature, Step Functions is ideal for building scalable and cost-effective application backends. It integrates tightly with other serverless components like API Gateway, Lambda, and DynamoDB.
**Example:** A mobile game can leverage Step Functions to manage user authentication, process in-app purchases, update player profiles, and trigger server-side logic, all without managing servers.

#### 4. Automated IT Tasks

Step Functions can automate repetitive IT tasks, such as provisioning resources, running scheduled backups, or executing compliance checks.

**Example:** You can automate the process of creating and configuring new AWS accounts based on predefined templates, ensuring consistency and reducing manual effort.

#### 5. Human Approval Workflows

Incorporating human interaction within automated processes is often necessary. Step Functions provides features like task tokens that allow human intervention at specific workflow stages.

**Example:** A document approval workflow might involve automatic checks for formatting and content, followed by a manual review step by an editor. Step Functions can pause the workflow, generate a task for the editor, and resume execution based on the editor's decision.

### Alternatives to AWS Step Functions

While Step Functions provides a robust solution for workflow orchestration, alternative services exist, each with strengths in specific areas:

* **Azure Durable Functions:** Tightly integrated with Azure Functions, Durable Functions offers stateful function orchestration within the serverless compute environment.
* **Google Cloud Composer:** A managed Apache Airflow service, Composer excels at batch-oriented workflows and complex scheduling needs.
* **Azure Logic Apps:** Provides a low-code approach to workflow automation, emphasizing integrations with various SaaS and enterprise applications.

### Conclusion

AWS Step Functions empowers developers to build resilient, scalable, and auditable workflows for various use cases. By abstracting complex orchestration logic, it allows teams to focus on core business logic and accelerate application development.
As event-driven architectures continue to gain prominence, tools like Step Functions will become increasingly vital for managing the complexity of modern applications. Embracing this serverless orchestration service can significantly enhance productivity, agility, and cost-effectiveness for organizations operating in the cloud.

## Advanced Use Case: Real-Time Image Processing and Analysis Pipeline

**The Challenge:** Building a real-time image processing pipeline that efficiently handles high volumes of images uploaded by users, performs various analyses, and provides personalized insights.

**Solution:**

1. **Image Upload & Trigger:** Users upload images to an S3 bucket, triggering an S3 event notification.
2. **Step Functions Orchestration:** The event triggers a Step Functions state machine.
3. **Image Processing:**
    * A Lambda function extracts image metadata and initiates parallel processing tasks.
    * Amazon Rekognition analyzes images for object detection, facial analysis, and content moderation.
    * Custom machine learning models deployed on Amazon SageMaker endpoints perform specialized image classification or feature extraction.
4. **Data Aggregation and Enrichment:** Results from the various processing steps are aggregated and enriched with additional data from DynamoDB (user preferences, historical data) or external APIs.
5. **Personalized Insights:** Based on the aggregated data, a Lambda function generates personalized insights or recommendations for each user, delivered through notifications via Amazon SNS or stored in the user's database.
6. **Scalability & Monitoring:**
    * The entire pipeline automatically scales based on image upload frequency.
    * CloudWatch monitors each stage, providing real-time insights and alerting on potential issues.

This advanced use case showcases the power of Step Functions in orchestrating a sophisticated, real-time data processing pipeline, leveraging the breadth of AWS services to deliver a comprehensive solution.
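To make the state-machine idea concrete, here is a minimal Amazon States Language sketch of such a pipeline. The state names and the Lambda/Rekognition resource ARNs are illustrative placeholders, not part of the original article:

```json
{
  "Comment": "Illustrative sketch of an image-processing workflow (ARNs are placeholders)",
  "StartAt": "ExtractMetadata",
  "States": {
    "ExtractMetadata": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:extract-metadata",
      "Next": "AnalyzeInParallel"
    },
    "AnalyzeInParallel": {
      "Type": "Parallel",
      "Branches": [
        {
          "StartAt": "DetectLabels",
          "States": {
            "DetectLabels": {
              "Type": "Task",
              "Resource": "arn:aws:states:::aws-sdk:rekognition:detectLabels",
              "End": true
            }
          }
        },
        {
          "StartAt": "CustomClassifier",
          "States": {
            "CustomClassifier": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789012:function:classify-image",
              "End": true
            }
          }
        }
      ],
      "Next": "GenerateInsights"
    },
    "GenerateInsights": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:generate-insights",
      "End": true
    }
  }
}
```

The `Parallel` state corresponds to step 3 above: metadata extraction fans out into independent analysis branches whose combined output feeds the insights step.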
virajlakshitha
1,902,762
Advanced Logging in ASP.NET Core with Serilog
Introduction In today's blog post we'll learn how to do advanced logging in ASP.NET Core...
0
2024-06-27T15:16:42
https://dev.to/wirefuture/advanced-logging-in-aspnet-core-with-serilog-33bn
webdev, csharp, aspnet, aspnetcore
## Introduction

In today's blog post, we'll learn how to do advanced logging in ASP.NET Core with Serilog. Logging is a vital part of any business application, offering visibility into application behaviour, performance, and potential issues. Among the logging frameworks available for .NET, Serilog stands out for its flexible, structured logging and its many sink integrations. This article explains why you should use Serilog, how to configure it with different sinks and enrichers, and best practices for structured logging in enterprise applications.

> For those interested in learning more about .NET development, check out our [.NET Development](https://wirefuture.com/blog/dot-net-development) blogs. Stay updated with the latest insights and best practices!

## Why Use Serilog for Logging?

1. **Structured Logging:** Serilog's primary strength is its support for structured logging, which allows you to capture detailed, queryable log data.
2. **Flexibility:** Serilog is highly configurable and supports numerous sinks (outputs) and enrichers (additional data in logs).
3. **Ease of Use:** With a simple and fluent configuration API, setting up Serilog in your application is straightforward.
4. **Performance:** Serilog is designed for high-performance logging, ensuring minimal overhead on your application.

## Configuring Serilog for Various Sinks and Enrichers

Let's dive into configuring Serilog in an ASP.NET Core application, focusing on different sinks and enrichers.

### Step 1: Setting Up Serilog

First, add the necessary Serilog packages to your project.
You can do this via the NuGet Package Manager or by running the following commands in the Package Manager Console (note that the Datadog sink ships as `Serilog.Sinks.Datadog.Logs`):

```powershell
Install-Package Serilog.AspNetCore
Install-Package Serilog.Sinks.Console
Install-Package Serilog.Sinks.File
Install-Package Serilog.Sinks.MSSqlServer
Install-Package Serilog.Sinks.Datadog.Logs
Install-Package Serilog.Enrichers.Environment
Install-Package Serilog.Enrichers.Thread
```

### Step 2: Configuring Serilog in Program.cs

In the `Program.cs` file, configure Serilog as the logging provider for your ASP.NET Core application:

```csharp
using Serilog;

public class Program
{
    public static void Main(string[] args)
    {
        Log.Logger = new LoggerConfiguration()
            .MinimumLevel.Debug()
            .Enrich.WithEnvironmentName()
            .Enrich.WithThreadId()
            .WriteTo.Console()
            .WriteTo.File("logs/log.txt", rollingInterval: RollingInterval.Day)
            .WriteTo.MSSqlServer(
                connectionString: "YourConnectionString",
                sinkOptions: new Serilog.Sinks.MSSqlServer.MSSqlServerSinkOptions { TableName = "Logs" })
            .WriteTo.DatadogLogs(
                apiKey: "YourDatadogApiKey",
                source: "YourApplicationName")
            .CreateLogger();

        try
        {
            Log.Information("Starting up");
            CreateHostBuilder(args).Build().Run();
        }
        catch (Exception ex)
        {
            Log.Fatal(ex, "Application start-up failed");
            throw;
        }
        finally
        {
            Log.CloseAndFlush();
        }
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .UseSerilog()
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}
```

### Step 3: Enriching Logs

Enrichers add valuable context to your log entries. Here, we've used the environment and thread ID enrichers.
You can add more enrichers as needed: ``` .Enrich.WithEnvironmentName() .Enrich.WithThreadId() ``` ### Step 4: Detailed Configuration for Various Sinks #### Console Sink The console sink outputs logs to the console, which is useful during development: ``` .WriteTo.Console() ``` #### File Sink The file sink writes logs to a file, with options for rolling logs daily: ``` .WriteTo.File("logs/log.txt", rollingInterval: RollingInterval.Day) ``` #### SQL Server Sink The SQL Server sink stores logs in a database, allowing for robust querying and analysis: ``` .WriteTo.MSSqlServer( connectionString: "YourConnectionString", sinkOptions: new Serilog.Sinks.MSSqlServer.MSSqlServerSinkOptions { TableName = "Logs" }) ``` Ensure you have a table in your database to store the logs. You can create it using the following SQL script: ``` CREATE TABLE Logs ( Id INT IDENTITY(1,1) PRIMARY KEY, Message NVARCHAR(MAX), MessageTemplate NVARCHAR(MAX), Level NVARCHAR(128), TimeStamp DATETIMEOFFSET, Exception NVARCHAR(MAX), Properties NVARCHAR(MAX) ); ``` #### Datadog Sink The Datadog sink sends logs to Datadog, a popular monitoring and analytics platform: ``` .WriteTo.DatadogLogs( apiKey: "YourDatadogApiKey", source: "YourApplicationName") ``` Replace "YourDatadogApiKey" with your actual Datadog API key and set an appropriate source name for your application. ## Best Practices for Structured Logging in Enterprise Applications 1. **Use Structured Logging:** Capture logs in a structured format to enable advanced querying and analysis. 2. **Log at Appropriate Levels:** Use different log levels (e.g., Debug, Information, Warning, Error, Fatal) to capture the right amount of detail. 3. **Avoid Sensitive Information:** Ensure that sensitive data (e.g., passwords, personal information) is not logged. 4. **Use Enrichers:** Add contextual information to your logs using enrichers to make them more informative. 5. 
**Centralize Logging:** Store logs in a central location (e.g., SQL Server, Datadog) to facilitate monitoring and troubleshooting. 6. **Monitor Log Size and Performance:** Regularly monitor the size and performance impact of your logs, and configure log rotation or retention policies as needed. ## Comparison of Serilog, NLog, and log4net for ASP.NET Core Logging NLog and log4net are other popular logging libraries used in the .NET ecosystem. Both have a strong user base and offer extensive features for various logging needs. However, Serilog stands out with its advanced structured logging capabilities, flexible configuration, and a wide range of built-in enrichers and sinks. The table below provides a detailed comparison of these three logging frameworks, highlighting their strengths and differences to help you choose the best option for your ASP.NET Core application. | Feature | Serilog | NLog | log4net | |-----------------------------|-------------------------------------------------------|------------------------------------------------------|------------------------------------------------------| | **Structured Logging** | Yes | Limited | Limited | | **Performance** | High performance | High performance | Moderate performance | | **Enrichers** | Extensive support for enrichers | Basic support | Basic support | | **Sinks/Appenders** | Extensive (Console, File, SQL, Datadog, etc.) | Extensive (Console, File, Database, etc.) | Extensive (Console, File, Database, etc.) 
| **Asynchronous Logging** | Yes | Yes | Yes |
| **Community and Support** | Strong community and active development | Strong community and active development | Strong community but less active development |
| **Documentation** | Excellent documentation and examples | Good documentation | Good documentation |
| **Flexibility** | Highly flexible and easily extendable | Highly flexible and easily extendable | Highly flexible and easily extendable |
| **Built-in Enrichers** | Yes (e.g., Environment, Thread, Machine Name) | No built-in enrichers, custom development needed | No built-in enrichers, custom development needed |
| **Log Event Properties** | Structured properties with rich data types | Structured properties, but less emphasis on richness | Structured properties, but less emphasis on richness |
| **Library Size** | Lightweight | Lightweight | Lightweight |
| **Configuration Format** | Code-based, JSON, XML | XML, JSON, code-based | XML, code-based |
| **Support for .NET Core** | Excellent support for .NET Core | Excellent support for .NET Core | Excellent support for .NET Core |
| **Custom Sinks/Appenders** | Easy to create custom sinks | Easy to create custom targets | Easy to create custom appenders |

## Conclusion

Serilog is a powerful logging framework supporting structured logging in ASP.NET Core applications. By configuring Serilog with different sinks and enrichers, you can analyze your application's behavior and performance. Best practices for structured logging help you make your logs informative, manageable, and useful for diagnosing issues in your enterprise applications. By following the steps outlined in this article, you can set up a robust logging system that enhances your application's observability and helps you maintain high reliability and performance. Hope you find this blog post helpful. Happy coding and exploring with Serilog!
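As a final illustration of the first best practice above ("Use Structured Logging"), a structured log event uses a message template with named properties instead of a pre-formatted string. The order and customer values below are invented for the example:

```csharp
using System.Diagnostics;
using Serilog;

// Example values only; in a real handler these would come from the request.
var orderId = 1042;
var customerId = "C-7";
var stopwatch = Stopwatch.StartNew();
// ... do the actual work here ...
stopwatch.Stop();

// Named properties ({OrderId}, {CustomerId}, {Elapsed}) are captured as
// first-class fields by structured sinks such as SQL Server and Datadog,
// so you can query "all events where OrderId = 1042" later.
Log.Information("Processed order {OrderId} for {CustomerId} in {Elapsed} ms",
    orderId, customerId, stopwatch.ElapsedMilliseconds);

// Avoid string interpolation, which bakes the values into plain text and
// discards the structure:
// Log.Information($"Processed order {orderId} for {customerId}");
```

With the SQL Server sink configured earlier, these properties land in the `Properties` column of the `Logs` table; in Datadog they become attributes you can filter on.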
> For those interested in learning more about .NET development, check out our [.NET Development](https://wirefuture.com/blog/dot-net-development) blogs. Stay updated with the latest insights and best practices!
tapeshm
1,902,761
Dive into the Fascinating World of Cryptocurrency Engineering with MIT's Course! 🚀
Comprehensive exploration of the technical aspects of cryptocurrencies, including cryptography, consensus mechanisms, and blockchain technology. Hands-on assignments and industry insights.
27,844
2024-06-27T15:16:22
https://getvm.io/tutorials/cryptocurrency-engineering-and-design-spring-2018-mit
getvm, programming, freetutorial, universitycourses
As a tech enthusiast, I'm always on the lookout for opportunities to expand my knowledge and skills, and I recently stumbled upon an absolute gem – the "Cryptocurrency Engineering and Design" course offered by the prestigious Massachusetts Institute of Technology (MIT). ## Course Overview This comprehensive course provides an in-depth exploration of the technical aspects of cryptocurrencies, delving into the fundamental concepts, protocols, and techniques that power these revolutionary digital assets. From the intricacies of cryptography to the intricate mechanisms of consensus, this course promises to take you on a captivating journey through the heart of cryptocurrency engineering. ## Highlights What makes this course truly stand out is the hands-on approach it takes. 👨‍💻 Through a series of engaging assignments and projects, you'll have the opportunity to apply your newfound knowledge and gain practical experience in the field. Imagine the satisfaction of building your own cryptocurrency or contributing to cutting-edge research in this rapidly evolving domain! But that's not all – the course also features insights from industry experts and leading researchers, giving you a unique insider's perspective on the latest developments and trends in the cryptocurrency space. 🤓 ## Recommendation Whether you're a computer science enthusiast, a cryptography aficionado, or simply curious about the transformative potential of blockchain technology, this course is a must-try. 💎 With its comprehensive coverage and hands-on approach, it's the perfect gateway to dive deeper into the fascinating world of cryptocurrency engineering. So, what are you waiting for? Enroll now and embark on an unforgettable journey of discovery at the intersection of technology and finance! 
🌐 Course link: [https://ocw.mit.edu/courses/mas-s62-cryptocurrency-engineering-and-design-spring-2018/video_galleries/lecture-videos/](https://ocw.mit.edu/courses/mas-s62-cryptocurrency-engineering-and-design-spring-2018/video_galleries/lecture-videos/) ## Enhance Your Learning Experience with GetVM's Playground 🚀 To make the most of the "Cryptocurrency Engineering and Design" course from MIT, I highly recommend using the GetVM Playground. This powerful online coding environment allows you to put your newfound knowledge into practice, enabling you to experiment, test, and explore the concepts covered in the course. With GetVM's Playground, you'll have access to a fully-fledged virtual machine, complete with the necessary tools and libraries for cryptocurrency engineering. No more setting up complex development environments or worrying about compatibility issues – the Playground takes care of it all, allowing you to focus on the task at hand. 💻 The seamless integration between the course materials and the Playground makes learning even more engaging and effective. You can easily access the course content, follow along with the lectures, and then immediately dive into hands-on exercises and projects within the same platform. This synergy between theory and practice is a game-changer, ensuring that you truly grasp the technical intricacies of cryptocurrencies. So, why not take your learning experience to the next level? Dive into the "Cryptocurrency Engineering and Design" course, and leverage the power of GetVM's Playground to solidify your understanding and develop practical skills. Get ready to become a cryptocurrency engineering rockstar! 🚀 GetVM Playground link: [https://getvm.io/tutorials/cryptocurrency-engineering-and-design-spring-2018-mit](https://getvm.io/tutorials/cryptocurrency-engineering-and-design-spring-2018-mit) --- ## Practice Now! 
- 🔗 Visit [Cryptocurrency Engineering and Design | MIT Course](https://ocw.mit.edu/courses/mas-s62-cryptocurrency-engineering-and-design-spring-2018/video_galleries/lecture-videos/) original website - 🚀 Practice [Cryptocurrency Engineering and Design | MIT Course](https://getvm.io/tutorials/cryptocurrency-engineering-and-design-spring-2018-mit) on GetVM - 📖 Explore More [Free Resources on GetVM](https://getvm.io/explore) Join our [Discord](https://discord.gg/XxKAAFWVNu) or tweet us [@GetVM](https://x.com/getvmio) ! 😄
getvm
1,902,760
Top 5 Unit Test Problems That Haunt Software Developers
Well-written unit tests are among the most effective tools for ensuring product quality....
25,505
2024-06-27T15:13:40
https://www.growingdev.net/p/top-5-unit-test-problems-that-haunt
testing, programming, softwaredevelopment, career
Well-written unit tests are among the most effective tools for ensuring product quality. Unfortunately, not all unit tests are well written, and the ones that are not are often a source of frustration and lost productivity. Here are the most common unit test issues I encountered during my career.

## Flaky unit tests

Flaky tests pass most of the time, but not always. They may randomly fail even though no code has changed. The quickest and most common "fix" developers employ is to re-run them. With time, the number of flaky tests grows, and even multiple re-runs are insufficient.

![Re-run tests meme](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ila0sttqsso3oz4rmmtw.png)

Flaky tests are caused primarily by the following:

* shared state
* dependency on external systems

Shared state is the number one cause of test flakiness. Static variables are one example: if one test sets a static variable and another passes only if this variable is set, the second test will fail if the order of execution changes. Debugging flakiness caused by shared state is usually tricky because sharing state is rarely intentional. Tests that depend on external systems tend to be flaky because the systems they rely on are outside their control. Deployments, crashes, or throttling will cause test failures. The network, which is inherently unreliable, is yet another contributor. The best fix is to mock external dependencies. Multithreaded applications deserve special mention: race conditions in the product code can make tests for these applications flaky, and finding the root cause is often challenging.

## Slow tests

Slow tests are a productivity killer. If running tests for a code change takes more than a few seconds, developers will use it as an excuse to find a distraction. One of the most common reasons tests are slow is their dependency on external systems: network calls and the time to process the requests initiated by tests add up.
But tests that depend on external systems are also flaky, so slowness and flakiness go hand-in-hand. Again, mocking external dependencies is the best fix to make tests fast and reliable. If relying on external systems is intentional (e.g., end-to-end testing), it is worth separating end-to-end tests into a dedicated suite executed separately, for instance, as part of the nightly build. I was once on a team where running all the tests took more than two hours because most of them communicated with a database. These tests were also flaky, so merging more than one Pull Request a day was virtually impossible. ## Bugs in unit tests Tests are there to ensure the quality of the product, but nothing is there to ensure the quality of tests. As a result, tests may fail to do their job due to bugs. Unfortunately, identifying these bugs is not easy. Paying attention can help. For instance, if all tests continue to pass after changing the product code, it usually indicates either bugs in tests or missing test coverage. ## Hard to maintain tests Tying tests and implementation details closely usually causes numerous test failures after even simple product code changes. Keeping tests focused on functionality instead of on the implementation can significantly reduce the number of unnecessary test failures. ## Writing "tests" only to hit the code coverage number Test code written solely to meet code coverage goals is usually low quality. Assertions in such code are often missing because they don't contribute to the coverage goal but can cause failures. Test coverage reported by tools can make the manager look good, but this test code is useless as it can't prevent bugs. What's worse, the high coverage hides areas that do need attention. ## Unit tests that require a complex setup (Bonus) Unit tests that require tens of lines of setup are a nightmare. They are hard to understand, write, and update. Their complexity makes them fragile and leads to bugs. 
Such unit tests often indicate poorly designed code, e.g., god classes that have multiple responsibilities and, therefore, require many dependencies. This is my list of the top 5 + 1 unit test issues. What's yours? --- 💙 If you liked this article... I publish a weekly newsletter for software engineers who want to grow their careers. I share mistakes I’ve made and lessons I’ve learned over the past 20 years as a software engineer. Sign up here to get articles like this delivered to your inbox: https://www.growingdev.net/
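To make the number-one flakiness cause from the list above (shared state) concrete, here is a minimal, self-contained sketch in Python. All names are invented for the example; the module-level `cache` plays the role of a static variable shared by every test in the file:

```python
# Module-level state shared by every test in the file: the hidden coupling.
cache = {}

def get_user(user_id):
    # Returns a cached record, creating it on first access.
    # The cache is never reset between tests.
    return cache.setdefault(user_id, {"id": user_id, "visits": 0})

def test_new_user_has_no_visits():
    # Passes only if no other test has touched user 42 yet.
    assert get_user(42)["visits"] == 0

def test_visit_is_recorded():
    user = get_user(42)
    user["visits"] += 1
    assert user["visits"] == 1
```

Run in the order written, both tests pass; run `test_visit_is_recorded` first, and `test_new_user_has_no_visits` fails even though no code changed. A runner or plugin that shuffles or parallelizes tests turns this latent coupling into exactly the random failures described above.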
moozzyk
1,902,759
Node.js: Understanding the Difference Between Current and LTS Versions
Introduction Node.js is a powerful runtime environment that allows developers to execute JavaScript...
0
2024-06-27T15:12:56
https://dev.to/igahsamuel/nodejs-understanding-the-difference-between-current-and-lts-versions-2dek
Introduction Node.js is a powerful runtime environment that allows developers to execute JavaScript on the server side. As with any software, Node.js receives regular updates to improve performance, add new features, and enhance security. Understanding the different versions of Node.js, particularly the Current and Long-Term Support (LTS) versions, is crucial for making informed decisions about which version to use for your projects. **What is Node.js?** Node.js is an open-source, cross-platform runtime environment that allows developers to build server-side applications using JavaScript. It is built on the V8 JavaScript engine and is widely used for developing scalable network applications. V8 is Google’s open source high-performance JavaScript and WebAssembly engine, written in C++. It is used in Chrome and in Node.js, among others. It implements ECMAScript and WebAssembly, and runs on Windows, macOS, and Linux systems that use x64, IA-32, or ARM processors. V8 can be embedded into any C++ application. **The History of Node.js** Node.js was created by Ryan Dahl and first released in 2009, about 13 years after the introduction of the first server-side JavaScript environment, Netscape's LiveWire Pro Web. The initial release supported only Linux and Mac OS X. Its development and maintenance was led by Dahl and later sponsored by Joyent. Dahl's goal was to create a more efficient way to build scalable network programs, leveraging the asynchronous, event-driven programming model. Since its inception, Node.js has grown in popularity and become a cornerstone for many modern web development projects. **Node.js Versioning** Node.js follows a predictable release cycle with two main types of versions: the Current version and the Long-Term Support (LTS) version. 
**Available Node.js Versions**

These are the Node.js release lines currently scheduled through 2028:

| Release | Status | Code Name | Release Date | Maintenance End |
|---------|--------|-----------|--------------|-----------------|
| 0.10.x | End-of-Life | | 2013-03-11 | 2016-10-31 |
| 0.12.x | End-of-Life | | 2015-02-06 | 2016-12-31 |
| 4.x | End-of-Life | Argon | 2015-09-08 | 2018-04-30 |
| 5.x | End-of-Life | | 2015-10-29 | 2016-06-30 |
| 6.x | End-of-Life | Boron | 2016-04-26 | 2019-04-30 |
| 7.x | End-of-Life | | 2016-10-25 | 2017-06-30 |
| 8.x | End-of-Life | Carbon | 2017-05-30 | 2019-12-31 |
| 9.x | End-of-Life | | 2017-10-01 | 2018-06-30 |
| 10.x | End-of-Life | Dubnium | 2018-04-24 | 2021-04-30 |
| 11.x | End-of-Life | | 2018-10-23 | 2019-06-01 |
| 12.x | End-of-Life | Erbium | 2019-04-23 | 2022-04-30 |
| 13.x | End-of-Life | | 2019-10-22 | 2020-06-01 |
| 14.x | End-of-Life | Fermium | 2020-04-21 | 2023-04-30 |
| 15.x | End-of-Life | | 2020-10-20 | 2021-06-01 |
| 16.x | End-of-Life | Gallium | 2021-04-20 | 2023-09-11 |
| 17.x | End-of-Life | | 2021-10-19 | 2022-06-01 |
| 18.x | Maintenance LTS | Hydrogen | 2022-04-19 | 2025-04-30 |
| 19.x | End-of-Life | | 2022-10-18 | 2023-06-01 |
| 20.x | Active LTS | Iron | 2023-04-18 | 2026-04-30 |
| 21.x | Maintenance | | 2023-10-17 | 2024-06-01 |
| 22.x | Current | Jod | 2024-04-24 | 2027-04-30 |
| 23.x | Planned | | 2024-10-14 | 2025-06-01 |
| 24.x | Planned | Krypton | 2025-04-22 | 2028-04-30 |

In short: Node.js 22.x is the Current version, the latest release with new features and improvements. The LTS versions are Node.js 20.x (Active LTS) and Node.js 18.x (Maintenance LTS); Node.js 16.x has reached End-of-Life.

**Current Version of Node.js**

The Current version of Node.js is the latest release that includes the newest features, improvements, and updates. These versions are released approximately every six months, in April and October.
Initially, a version is designated as "Current" and receives active feature updates for six months. After this period, it either transitions to LTS status or is replaced by a new Current version.

**Advantages of the Current Version:**

1. Access to the latest features and improvements.
2. Ideal for development environments where early adoption of new features is important.

**When to Use the Current Version:**

1. When you want to leverage the latest advancements in Node.js.
2. For testing and development purposes, where stability is less critical.

**Long-Term Support (LTS) Version of Node.js**

The LTS version of Node.js is a stable release that is maintained and supported for a longer period, ensuring stability and reliability. A Current version transitions to LTS status about six months after its initial release (only even-numbered release lines become LTS). LTS versions have two phases: Active LTS and Maintenance LTS. Active LTS lasts for 12 months and includes regular updates, while Maintenance LTS lasts for roughly another 18 months and includes only critical fixes and security updates.

**Advantages of the LTS Version:**

1. Stability and long-term support.
2. Suitable for production environments where stability is crucial.

**When to Use the LTS Version:**

1. For production deployments to ensure stability and reliability.
2. When long-term support is needed for enterprise projects.

**Key Differences Between Current and LTS Versions**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pzsexx5x2d6r9ptorgzs.png)

| **Aspect** | **Current Version** | **LTS Version** |
|------------|---------------------|-----------------|
| **Stability vs. New Features** | Focuses on introducing new features and improvements, which may sometimes introduce breaking changes. | Prioritizes stability and long-term support, making it ideal for production use. |
| **Support Duration and Update Frequency** | Supported for six months, with frequent updates and improvements. | Supported for 30 months, with regular updates in the Active LTS phase and critical fixes during Maintenance LTS. |
| **Use Cases** | Suitable for development environments and early adopters. | Preferred for production environments and enterprises requiring stable and reliable software. |

**Practical Example: npx create-react-app vs. npm create vite@latest**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kkbzt87no0odt468uk5o.png)

When creating a new React application using `npx create-react-app`, you might encounter compatibility issues with the latest Node.js Current version. On the other hand, `npm create vite@latest`, the command for initializing a project with Vite, tends to be more up to date with the latest Node.js versions. Therefore, if you face issues with `create-react-app`, it might be safer to use the LTS version of Node.js, which ensures compatibility and stability.

**Conclusion**

Understanding the differences between the Current and LTS versions of Node.js helps you make informed decisions about which version to use for your projects. While the Current version offers the latest features, the LTS version provides stability and long-term support, making it ideal for production environments. If you are a newcomer to Node.js, I strongly recommend starting with the LTS version. This will give you a stable and reliable environment, allowing you to build a solid foundation. Once you become well-versed in Node.js, you can explore the Current version to take advantage of the latest features and improvements. Choose the version that best fits your project's needs and development stage. If this article was helpful, please share it and comment your thoughts on it.
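A quick runtime check can also tell you which release line you are on: Node exposes `process.release.lts`, which holds the LTS code name (for example, "Iron" on 20.x) on LTS builds and is undefined on Current builds. A minimal sketch:

```javascript
// Report whether the running Node.js binary is an LTS or a Current release.
// `process.release.lts` is the LTS code name on LTS builds, undefined otherwise.
const { version, release } = process;

if (release.lts) {
  console.log(`Node ${version} is an LTS release (code name: ${release.lts})`);
} else {
  console.log(`Node ${version} is a Current (non-LTS) release`);
}
```

Running this under 20.x should mention "Iron"; under a Current build such as 22.x (before it is promoted to LTS), it reports a non-LTS release.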
Reference from Wikipedia (https://en.wikipedia.org/wiki/Node.js#:~:text=since%20February%202024-,Node.,and%20later%20sponsored%20by%20Joyent.), Node documentation (https://nodejs.org/en/download/package-manager), V8 Dev(https://v8.dev/) and lastly my favourite AI(ChatGPT(https://chatgpt.com/))
igahsamuel
1,902,758
Will Flutter survive against native in Mobile Development?
I often wonder: will my work building applications with Flutter become limited, or maybe even come to an end...
0
2024-06-27T15:10:45
https://dev.to/abdalla5355/is-flutter-will-survive-against-native-in-mobile-development-4oja
flutter, reactnative, kotlin, mobile
I often wonder: will my work building applications with Flutter become limited, or maybe even come to an end, so that I have to learn another framework and go back to square one? What do you think? 🤔
abdalla5355
1,902,757
Managing Priorities with the Eisenhower Matrix
The Eisenhower Matrix is a straightforward and popular time management tool that I was first...
0
2024-06-27T15:10:25
https://eduklein.com.br/eisenhower-matrix/
management, delegation, productivity
The Eisenhower Matrix is a straightforward and popular time management tool that I was first introduced to in *[The 7 Habits of Highly Effective People](/book/the-7-habits-of-highly-effective-people)*[^1], written by Stephen Covey, many years ago. It can help you **get organized and execute around priorities**. When you categorize your activities into four quadrants, with the help of the matrix, you can quickly identify what you should prioritize first. Let's take a look at the matrix: ![The Time Management Matrix](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e9oscwm2veksjwm4s55k.png "The Time Management Matrix, by Stephen Covey.") As you can see, the two factors that define an activity are *urgent* and *important*. *Urgent* means that the activity requires immediate attention. They are usually visible and press on us. *Importance* has to do with results; if the activity is important to you, it contributes to your mission, values, and high-priority goals. **We react to urgent matters, but important matters require initiative and proactivity!** The quadrants can be read as follows: <ol type="I"> <li><b>Urgent and important</b>: This quadrant deals with significant results that require immediate attention;</li> <li><b>Important but not urgent</b>: This quadrant is the heart of effective personal management. It deals with important, but not urgent, activities;</li> <li><b>Urgent but not important</b>: This quadrant deals with urgent activities that are usually a priority for someone else, not for you;</li> <li><b>Not urgent and not important</b>: It is usually stuff that is pleasant to do but has no significance.</li> </ol> It is important to take into consideration that: - People who spend most of their time in quadrant I live by crisis. This quadrant dominates them. If you are there, you should consider how to shrink quadrant I and move to quadrant II, where you are more effective; - Effective people avoid quadrants III and IV. 
You should consider delegating quadrant III activities when possible. Avoid quadrant IV activities altogether; this should be your "do not do" list. [^1]: As far as I understand, the matrix was created by Stephen Covey, who refers to it as *The Time Management Matrix*. It is widely known as the Eisenhower Matrix, in honor of [Dwight D. Eisenhower](https://en.wikipedia.org/wiki/Dwight_D._Eisenhower), the 34th president of the USA, though I don't know who named it that, or when.
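Since the matrix is just a two-by-two decision rule, it can even be written down as a tiny function. A sketch in Python (the activity names and action labels below are my own, echoing the advice above):

```python
def quadrant(urgent, important):
    """Map an activity to its Eisenhower quadrant and a suggested action."""
    if urgent and important:
        return "I", "Do it now"
    if important:
        return "II", "Schedule it"  # the heart of effective personal management
    if urgent:
        return "III", "Delegate it"
    return "IV", "Don't do it"

# Hypothetical activities, classified as (urgent, important).
activities = {
    "Production outage": (True, True),
    "Planning next quarter": (False, True),
    "Someone else's urgent report": (True, False),
    "Mindless scrolling": (False, False),
}

for name, (urgent, important) in activities.items():
    label, action = quadrant(urgent, important)
    print(f"{name}: quadrant {label} -> {action}")
```

The point of the exercise is the same as the matrix itself: once urgency and importance are made explicit, the right action follows mechanically.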
epklein
1,902,732
Firebase Authentication With Jetpack Compose. Testing. Part 2
Nice to meet you here. This post is the second part of a series of Firebase Authentication with...
0
2024-06-27T15:09:35
https://dev.to/evgensuit/firebase-authentication-with-jetpack-compose-testing-part-2-1h5n
android, androiddev, testing, mobile
Nice to meet you here. This post is the second part of a series on Firebase Authentication with Jetpack Compose. Today we're going to implement UI and Unit testing with the help of Robolectric and MockK. Make sure to keep a tab open with [the first post](https://dev.to/evgensuit/firebase-authentication-with-jetpack-compose-part-1-3k82) --- ## What is Robolectric? Robolectric is a framework which enables testing Android applications without an emulator, directly on a local computer. It does so by emulating the Android environment and its components. Robolectric tests are primarily used in UI and integration testing, and they use the same syntax for verifying UI components as regular UI tests run on an emulator. --- ## What is MockK? MockK is a unit testing framework. It allows us to mock, or fake, our code. Mocking can make certain functions or properties return the result we want. ## Setup

```gradle
dependencies {
    val mockkVersion = "1.13.11"
    val robolectricVersion = "4.12.1"
    testImplementation("io.mockk:mockk-android:$mockkVersion")
    testImplementation("io.mockk:mockk-agent:$mockkVersion")
    testImplementation("org.robolectric:robolectric:$robolectricVersion")
}
```

Create an `auth` package inside the `test [unitTest]` folder, together with files for the code. For me it looks like this ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p30dio9uy03vbx1r9w5i.png) The `Common.kt` file should contain the shared authentication mocking function so that we can reuse it in both UI and Unit testing. --- Create a `Helpers.kt` file and insert the following code ```kotlin val userId = "id" class CorrectAuthData { companion object { const val USERNAME: String = "Evgen" const val EMAIL: String = "someemail@gmail.com" const val PASSWORD: String = "SomePassword123" } } class IncorrectAuthData { companion object { const val USERNAME: String = " " const val EMAIL: String = "incorrect" const val PASSWORD: String = " " } } inline fun <reified T> mockTask(result: T?
= null, exception: Exception? = null): Task<T> { val task = mockk<Task<T>>() every { task.result } returns result every { task.exception } returns exception every { task.isCanceled } returns false every { task.isComplete } returns true return task } ``` `mockTask` function is defined as `reified` so that it could take in any type of Firebase task (T) as a parameter, which is inserted automatically when the `Task` starts executing. E.g, Firestore `set` function is of the following type: `com.google.android.gms.tasks.Task<Void>`, while `signInWithEmailAndPassword` is of `com.google.android.gms.tasks.Task<com.google.firebase.auth.AuthResult>` type --- Inside of `Common.kt` file define the following function ```kotlin fun mockAuth(userProfileChangeRequest: CapturingSlot<UserProfileChangeRequest>? = null): FirebaseAuth { val user = mockk<FirebaseUser>{ every { uid } returns userId every { displayName } returns CorrectAuthData.USERNAME every { updateProfile(if (userProfileChangeRequest != null) capture(userProfileChangeRequest) else any()) } returns mockTask() } return mockk { every { currentUser } returns user every { signOut() } answers { every { currentUser } returns null } every { createUserWithEmailAndPassword(any(), any()) } answers { every { currentUser } returns user mockTask() } every { signInWithEmailAndPassword(any(), any()) } answers { every { currentUser } returns user mockTask() } } } ``` As already said at the beginning, mocking allows us to make certain functions or properties of a mocked object return the values we want them to return. In the code above we define a function which returns a mocked instance of Firebase auth. The mocking syntax is clear: for `every` call to `currentUser` return a `user` instance of `FirebaseUser`. `answers` keyword enables us to mock other objects on function call. `userProfileChangeRequest` is inserted as a slot to the `updateProfile` function call. 
With this we can verify that a call to `updateProfile` was made with a specific instance of `UserProfileChangeRequest`. We could also verify its `displayName` property. --- After that define a `BaseTestClass` which will hold basic properties and methods of every test class and which every test class will implement. ```kotlin open class BaseTestClass { val testScope = TestScope() val snackbarScope = TestScope() lateinit var auth: FirebaseAuth } ``` --- ## Unit testing code Inside of `AuthUnitTests` class define an `init` function which is going to initialize authentication and view model instances. ```kotlin @OptIn(ExperimentalCoroutinesApi::class) @RunWith(AndroidJUnit4::class) class AuthUnitTests: BaseTestClass() { private val userProfileChangeRequestSlot = slot<UserProfileChangeRequest>() private lateinit var viewModel: AuthViewModel @Before fun init() { auth = mockAuth(userProfileChangeRequestSlot) val repository = AuthRepository(auth, firestore) viewModel = AuthViewModel(repository, CoroutineScopeProvider(testScope)) } } ``` In my [previous article](https://dev.to/evgensuit/firebase-authentication-with-jetpack-compose-part-1-3k82) I've touched on a topic of inserting `CoroutineScope` instances. The code above demonstrates that. --- Next we're going to test how username, password, and email validators work together with `AuthViewModel`. First let's define a function inside of our test class which makes sure that validators react appropriately to the input of incorrect format. 
```kotlin @Test fun incorrectInput_error() { val context = ApplicationProvider.getApplicationContext<Context>() val resources = context.resources viewModel.apply { onUsername("") assertEquals(uiState.value.validationState.usernameValidationError.asString(context), resources.getString(R.string.username_not_long_enough)) onEmail("dfjdjfk") assertEquals(uiState.value.validationState.emailValidationError.asString(context), resources.getString(R.string.invalid_email_format)) onPassword("dfdf") assertEquals(uiState.value.validationState.passwordValidationError.asString(context), resources.getString(R.string.password_not_long_enough)) onPassword("eirhgejbrj") assertEquals(uiState.value.validationState.passwordValidationError.asString(context), resources.getString(R.string.password_not_enough_uppercase)) onPassword("Geirhgejbrj") assertEquals(uiState.value.validationState.passwordValidationError.asString(context), resources.getString(R.string.password_not_enough_digits)) } } ``` Here we're using dummy input values and assert that a validator's error is equal to the one defined in string resources. 
--- Next we're going to test how validators and the view model react to input of correct format ```kotlin @Test fun correctInput_success() { viewModel.apply { onUsername(CorrectAuthData.USERNAME) assertTrue(uiState.value.validationState.usernameValidationError == StringValue.Empty) onEmail(CorrectAuthData.EMAIL) assertTrue(uiState.value.validationState.emailValidationError == StringValue.Empty) onPassword(CorrectAuthData.PASSWORD) assertTrue(uiState.value.validationState.passwordValidationError == StringValue.Empty) } } ``` --- Next goes a function which tests the sign up behaviour ```kotlin @Test fun signUp_success() = testScope.runTest { viewModel.apply { changeAuthType() assertEquals(uiState.value.authType, AuthType.SIGN_UP) onUsername(CorrectAuthData.USERNAME) onEmail(CorrectAuthData.EMAIL) onPassword(CorrectAuthData.PASSWORD) onCustomAuth() } advanceUntilIdle() verify { auth.createUserWithEmailAndPassword(CorrectAuthData.EMAIL, CorrectAuthData.PASSWORD) } assertEquals(userProfileChangeRequestSlot.captured.displayName, CorrectAuthData.USERNAME) verify { auth.currentUser!!.updateProfile(userProfileChangeRequestSlot.captured) } } ``` `runTest` function allows us to execute `suspend` functions in the test code. It's important to call `runTest` specifically on the `testScope` instance, since this is the scope in which our view model is going to call auth functions. In the function above we're first changing auth type to sign up, then insert credentials and call the auth function. `advanceUntilIdle` function makes `suspend` functions return immediately. In the end we're verifying if `createUserWithEmailAndPassword` and `updateProfile` were called with correct parameters, and assert that `displayName` of `UserProfileChangeRequest` instance equals to the one we inserted. 
The sign-in test function looks similar

```kotlin
@Test
fun signIn_success() = testScope.runTest {
    viewModel.apply {
        assertEquals(uiState.value.authType, AuthType.SIGN_IN)

        onUsername(CorrectAuthData.USERNAME)
        onEmail(CorrectAuthData.EMAIL)
        onPassword(CorrectAuthData.PASSWORD)
        onCustomAuth()
    }
    advanceUntilIdle()

    verify { auth.signInWithEmailAndPassword(CorrectAuthData.EMAIL, CorrectAuthData.PASSWORD) }
}
```

---

## UI testing code

In the `AuthUITests` class define a setup method which, in addition to creating a view model and an auth mock, also sets the content

```kotlin
@OptIn(ExperimentalCoroutinesApi::class)
@RunWith(AndroidJUnit4::class)
class AuthUITests : BaseTestClass() {
    @get:Rule
    val composeRule = createComposeRule()

    private lateinit var viewModel: AuthViewModel

    @Before
    fun setup() {
        auth = mockAuth()
        createViewModel()
        composeRule.apply {
            setContentWithSnackbar(snackbarScope) {
                AuthScreen(onSignIn = { }, viewModel = viewModel)
            }
        }
    }

    private fun createViewModel() {
        val repository = AuthRepository(auth, firestore)
        viewModel = AuthViewModel(repository, CoroutineScopeProvider(testScope))
    }
}
```

Head over to the `Common.kt` file and define a `setContentWithSnackbar` extension function

```kotlin
@OptIn(ExperimentalMaterial3Api::class)
fun ComposeContentTestRule.setContentWithSnackbar(
    coroutineScope: CoroutineScope,
    content: @Composable () -> Unit
) {
    setContent {
        val context = ApplicationProvider.getApplicationContext<Context>()
        val snackbarHostState = remember { SnackbarHostState() }
        val snackbarController = SnackbarController(snackbarHostState, coroutineScope, context)
        CompositionLocalProvider(LocalSnackbarController provides snackbarController) {
            CustomErrorSnackbar(
                snackbarHostState = snackbarHostState,
                swipeToDismissBoxState = rememberSwipeToDismissBoxState()
            )
            content()
        }
    }
}
```

In the same file define 2 more helper functions

```kotlin
@OptIn(ExperimentalCoroutinesApi::class)
fun ComposeContentTestRule.assertSnackbarIsNotDisplayed(snackbarScope: TestScope) {
    waitForIdle()
    snackbarScope.advanceUntilIdle()
    onNodeWithTag(getString(R.string.error_snackbar)).assertIsNotDisplayed()
}

@OptIn(ExperimentalCoroutinesApi::class)
fun ComposeContentTestRule.assertSnackbarTextEquals(snackbarScope: TestScope, message: String) {
    waitForIdle()
    snackbarScope.advanceUntilIdle()
    onNodeWithTag(getString(R.string.error_snackbar)).assertTextEquals(message)
}
```

Before making assertions, we first have to make sure that all coroutines get executed, e.g. the `LaunchedEffect` in `AuthScreen`

```kotlin
LaunchedEffect(uiState.authResult) {
    snackbarController.showSnackbar(uiState.authResult)
}
```

And the one in `SnackbarController`

```kotlin
fun showSnackbar(result: CustomResult) {
    if (result is CustomResult.DynamicError || result is CustomResult.ResourceError) {
        coroutineScope.launch {
            snackbarHostState.currentSnackbarData?.dismiss()
            snackbarHostState.showSnackbar(result.error.asString(context))
        }
    }
}
```

That's exactly what `waitForIdle()` and `snackbarScope.advanceUntilIdle()` do.
---

Next let's write the code that verifies that the UI correctly responds to input

```kotlin
@Test
fun signIn_testIncorrectInput() {
    composeRule.apply {
        onNodeWithText(getString(R.string.dont_have_an_account)).assertIsDisplayed()
        onNodeWithTag(getString(R.string.email)).performTextReplacement(IncorrectAuthData.EMAIL)
        onNodeWithTag(getString(R.string.password)).performTextReplacement(IncorrectAuthData.PASSWORD)
        onNodeWithText(getString(R.string.sign_in)).assertIsNotEnabled()
    }
}

@Test
fun signUp_testIncorrectInput() {
    composeRule.apply {
        onNodeWithText(getString(R.string.go_to_signup)).performClick()
        onNodeWithText(getString(R.string.dont_have_an_account)).assertIsNotDisplayed()

        onNodeWithTag(getString(R.string.username)).performTextReplacement(IncorrectAuthData.USERNAME)
        onNodeWithText(getString(R.string.username_not_long_enough)).assertIsDisplayed()

        onNodeWithTag(getString(R.string.email)).performTextReplacement(IncorrectAuthData.EMAIL)
        onNodeWithText(getString(R.string.invalid_email_format)).assertIsDisplayed()

        onNodeWithTag(getString(R.string.password)).performTextReplacement(IncorrectAuthData.PASSWORD)
        onNodeWithText(getString(R.string.password_not_long_enough)).assertIsDisplayed()

        onNodeWithText(getString(R.string.sign_up)).assertIsNotEnabled()
    }
}

@Test
fun signIn_testCorrectInput() {
    composeRule.apply {
        onNodeWithText(getString(R.string.dont_have_an_account)).assertIsDisplayed()
        onNodeWithTag(getString(R.string.email)).performTextReplacement(CorrectAuthData.EMAIL)
        onNodeWithTag(getString(R.string.password)).performTextReplacement(CorrectAuthData.PASSWORD)
        onNodeWithText(getString(R.string.sign_in)).assertIsEnabled()
    }
}

@Test
fun signUp_testCorrectInput() {
    composeRule.apply {
        onNodeWithText(getString(R.string.go_to_signup)).performClick()
        onNodeWithText(getString(R.string.dont_have_an_account)).assertIsNotDisplayed()
        onNodeWithTag(getString(R.string.username)).performTextReplacement(CorrectAuthData.USERNAME)
        onNodeWithTag(getString(R.string.email)).performTextReplacement(CorrectAuthData.EMAIL)
        onNodeWithTag(getString(R.string.password)).performTextReplacement(CorrectAuthData.PASSWORD)
        onNodeWithText(getString(R.string.sign_up)).assertIsEnabled()
    }
}
```

---

Recall that sign-up must not be available if a username is not provided, while sign-in must stay available regardless of the username field. This is what we'll test next

```kotlin
@Test
fun signInCorrectInputTest_onGoToSignUpClick_isSignUpDisabled() {
    composeRule.apply {
        onNodeWithTag(getString(R.string.email)).performTextReplacement(CorrectAuthData.EMAIL)
        onNodeWithTag(getString(R.string.password)).performTextReplacement(CorrectAuthData.PASSWORD)

        onNodeWithText(getString(R.string.go_to_signup)).performClick()
        onNodeWithText(getString(R.string.sign_up)).assertIsNotEnabled()
    }
}

@Test
fun signUpCorrectInputTest_onGoToSignInClick_isSignInEnabled() {
    composeRule.apply {
        onNodeWithText(getString(R.string.go_to_signup)).performClick()
        onNodeWithTag(getString(R.string.username)).performTextReplacement(CorrectAuthData.USERNAME)
        onNodeWithTag(getString(R.string.email)).performTextReplacement(CorrectAuthData.EMAIL)
        onNodeWithTag(getString(R.string.password)).performTextReplacement(CorrectAuthData.PASSWORD)
        onNodeWithText(getString(R.string.sign_up)).assertIsEnabled()

        onNodeWithText(getString(R.string.go_to_signin)).performClick()
        onNodeWithText(getString(R.string.sign_in)).assertIsEnabled()
    }
}
```

---

And finally, let's test how the UI reacts to successful and unsuccessful authentication

```kotlin
@Test
fun signIn_onSuccess_snackbarNotShown() = testScope.runTest {
    composeRule.apply {
        onNodeWithTag(getString(R.string.email)).performTextReplacement(CorrectAuthData.EMAIL)
        onNodeWithTag(getString(R.string.password)).performTextReplacement(CorrectAuthData.PASSWORD)
        onNodeWithText(getString(R.string.sign_in)).performClick()

        onNodeWithTag(getString(R.string.email)).assertIsNotEnabled()
        onNodeWithTag(getString(R.string.password)).assertIsNotEnabled()
        onNodeWithText(getString(R.string.go_to_signup)).assertIsNotEnabled()
        onNodeWithText(getString(R.string.sign_in)).assertIsNotEnabled()

        advanceUntilIdle()
        assertSnackbarIsNotDisplayed(snackbarScope)
    }
}

@Test
fun signUp_onSuccess_snackbarNotShown() = testScope.runTest {
    composeRule.apply {
        onNodeWithText(getString(R.string.go_to_signup)).performClick()
        onNodeWithTag(getString(R.string.username)).performTextReplacement(CorrectAuthData.USERNAME)
        onNodeWithTag(getString(R.string.email)).performTextReplacement(CorrectAuthData.EMAIL)
        onNodeWithTag(getString(R.string.password)).performTextReplacement(CorrectAuthData.PASSWORD)
        onNodeWithText(getString(R.string.sign_up)).performClick()

        onNodeWithTag(getString(R.string.username)).assertIsNotEnabled()
        onNodeWithTag(getString(R.string.email)).assertIsNotEnabled()
        onNodeWithTag(getString(R.string.password)).assertIsNotEnabled()
        onNodeWithText(getString(R.string.go_to_signin)).assertIsNotEnabled()
        onNodeWithText(getString(R.string.sign_up)).assertIsNotEnabled()

        advanceUntilIdle()
        assertSnackbarIsNotDisplayed(snackbarScope)
    }
}

@Test
fun signIn_onError_snackbarShown() = testScope.runTest {
    val exception = Exception("exception")
    every {
        auth.signInWithEmailAndPassword(CorrectAuthData.EMAIL, CorrectAuthData.PASSWORD)
    } returns mockTask(exception = exception)

    composeRule.apply {
        onNodeWithTag(getString(R.string.email)).performTextReplacement(CorrectAuthData.EMAIL)
        onNodeWithTag(getString(R.string.password)).performTextReplacement(CorrectAuthData.PASSWORD)
        onNodeWithText(getString(R.string.sign_in)).performClick()

        onNodeWithTag(getString(R.string.email)).assertIsNotEnabled()
        onNodeWithTag(getString(R.string.password)).assertIsNotEnabled()
        onNodeWithText(getString(R.string.go_to_signup)).assertIsNotEnabled()
        onNodeWithText(getString(R.string.sign_in)).assertIsNotEnabled()

        advanceUntilIdle()
        assertSnackbarTextEquals(snackbarScope, exception.message!!)
    }
}

@Test
fun signUp_onError_snackbarShown() = testScope.runTest {
    val exception = Exception("exception")
    every {
        auth.createUserWithEmailAndPassword(CorrectAuthData.EMAIL, CorrectAuthData.PASSWORD)
    } returns mockTask(exception = exception)

    composeRule.apply {
        onNodeWithText(getString(R.string.go_to_signup)).performClick()
        onNodeWithTag(getString(R.string.username)).performTextReplacement(CorrectAuthData.USERNAME)
        onNodeWithTag(getString(R.string.email)).performTextReplacement(CorrectAuthData.EMAIL)
        onNodeWithTag(getString(R.string.password)).performTextReplacement(CorrectAuthData.PASSWORD)
        onNodeWithText(getString(R.string.sign_up)).performClick()

        onNodeWithTag(getString(R.string.username)).assertIsNotEnabled()
        onNodeWithTag(getString(R.string.email)).assertIsNotEnabled()
        onNodeWithTag(getString(R.string.password)).assertIsNotEnabled()
        onNodeWithText(getString(R.string.go_to_signin)).assertIsNotEnabled()
        onNodeWithText(getString(R.string.sign_up)).assertIsNotEnabled()

        advanceUntilIdle()
        assertSnackbarTextEquals(snackbarScope, exception.message!!)
    }
}
```

`onNodeWithTag(getString(R.string.email))` represents the email auth field.

The code above clearly demonstrates the advantage of injecting a `CoroutineScope` into view models. We could use the `viewModelScope` in Robolectric tests, but that would deprive us of the ability to verify the `InProgress` state, since the suspending code would automatically advance.
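The `CoroutineScopeProvider` passed to the view model earlier isn't shown in this section. A minimal sketch of the injection pattern it implies, as an assumption for illustration (the real class may differ):

```kotlin
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.CoroutineScope

// Hypothetical sketch: production code passes no scope and falls back to
// viewModelScope; tests inject a TestScope so coroutine execution is
// fully controllable from the test scheduler.
class CoroutineScopeProvider(val scope: CoroutineScope? = null)

class AuthViewModel(
    private val authRepository: AuthRepository, // from the article's project
    scopeProvider: CoroutineScopeProvider = CoroutineScopeProvider()
) : ViewModel() {
    private val scope: CoroutineScope = scopeProvider.scope ?: viewModelScope
}
```

With this shape, production call sites don't change at all, while tests pass `CoroutineScopeProvider(testScope)` as shown in the setup code.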
Also, before making assertions on the `InProgress` state, make sure that the view model sets that state before launching the coroutine, not inside the coroutine body, e.g.:

```kotlin
fun onCustomAuth() {
    val authType = _uiState.value.authType
    // InProgress is set before the coroutine is launched
    updateAuthResult(CustomResult.InProgress)
    scope.launch {
        try {
            if (authType == AuthType.SIGN_UP) {
                authRepository.signUp(_uiState.value.authState)
            }
            authRepository.signIn(_uiState.value.authState)
            updateAuthResult(CustomResult.Success)
        } catch (e: Exception) {
            updateAuthResult(CustomResult.DynamicError(e.toStringIfMessageIsNull()))
        }
    }
}
```

---

That's it! If you have any suggestions, feel free to leave them in the comments. Good luck!
evgensuit
1,903,006
Why Latin America is the Future of Software Development
Explore why Latin America is poised to become a global powerhouse for highly skilled software...
0
2024-06-28T15:43:57
https://dev.to/zak_e/why-latin-america-is-the-future-of-software-development-2pdm
developmenttrends
--- title: Why Latin America is the Future of Software Development published: true date: 2024-06-27 15:06:50 UTC tags: DevelopmentTrends canonical_url: --- Explore why Latin America is poised to become a global powerhouse for highly skilled software developers and development companies. The post [Why Latin America is the Future of Software Development](https://blog.nextideatech.com/why-latin-america-is-the-future-of-software-development/) appeared first on [Next Idea Tech Blog](https://blog.nextideatech.com).
zak_e
1,902,756
🎉 iPhone 15 Pro Max Giveaway! 🎉
Do you want to win the latest iPhone 15 Pro Max? Now's your chance! We’re giving away a brand new...
0
2024-06-27T15:06:13
https://dev.to/fardint83195/iphone-15-pro-max-giveaway-1o6i
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3o5ra2x3u2byu4gxqrj8.jpg) Do you want to win the latest iPhone 15 Pro Max? Now's your chance! We’re giving away a brand new [iPhone 15 Pro Max](https://sites.google.com/view/sellbazar4690/home) to one lucky winner! How to Enter: Follow Us: Make sure you’re following our account. Like this Post: Show us some love! Tag 3 Friends: In the comments below, tag three friends who’d love to win this too. Share to Your Story: For an extra entry, share this post to your story and tag us. Bonus Entry: Subscribe to our Newsletter: Sign up here to stay updated and get an additional entry. Giveaway Rules: Must be 18 years or older to enter. Open to [Country/Countries].[USA](https://sites.google.com/view/sellbazar4690/home) Contest ends on [Date].:[10/02/2024] Winner will be announced on [Date] via our Instagram story and contacted via DM.[11/03/2024] Disclaimer: This giveaway is not sponsored, endorsed, or administered by, or associated with Instagram or[ Apple.](https://sites.google.com/view/sellbazar4690/home) By participating, you confirm that you are 18+ years of age, release Instagram and Apple of responsibility, and agree to Instagram's terms of use. Good luck to everyone! 🍀 Visual Elements: Eye-catching image or video of the [iPhone 15 Pro Max.](https://sites.google.com/view/sellbazar4690/home) Clear branding: Your logo and consistent color scheme. Call-to-Action: Highlight how easy it is to enter. Tips for Success: Engage with participants: Like and reply to comments. Promote the giveaway: Use Instagram ads, collaborate with influencers, and share across your social media channels. Regular reminders: Post countdowns and reminders as the end date approaches. Would you like any specific customization or additional elements included in the template? [Win iphone 15 pro max](https://sites.google.com/view/sellbazar4690/home)
fardint83195
1,902,755
AWS SnapStart - Part 23 Measuring cold and warm starts with Java 17 using asynchronous HTTP clients
Introduction In the previous parts we've done many measurements with AWS Lambda using Java...
24,979
2024-06-27T15:05:46
https://dev.to/aws-builders/aws-snapstart-part-23-measuring-cold-and-warm-starts-with-java-17-using-asynchronous-http-clients-5hk4
aws, java, serverless, coldstart
## Introduction

In the previous parts we've done many measurements with AWS Lambda using the Java 17 runtime, with and without AWS SnapStart, and additionally using SnapStart with priming of the DynamoDB invocation:

- cold starts using [different deployment artifact sizes](https://dev.to/aws-builders/aws-snapstart-part-18-measuring-cold-starts-with-java-17-using-different-deployment-artifact-sizes-5092)
- cold starts and deployment time using [different Lambda memory settings](https://dev.to/aws-builders/aws-snapstart-part-19-measuring-cold-starts-and-deployment-time-with-java-17-using-different-lambda-memory-settings-30ml)
- warm starts [using different Lambda memory settings](https://dev.to/aws-builders/aws-snapstart-part-20-measuring-warm-starts-with-java-17-using-different-lambda-memory-settings-1p7j)
- cold and warm starts [using different compilation options](https://dev.to/aws-builders/aws-snapstart-part-21-measuring-cold-starts-and-deployment-time-with-java-17-using-different-compilation-options-o14)
- cold and warm starts [using different synchronous HTTP clients](https://dev.to/aws-builders/aws-snapstart-part-22-measuring-cold-and-warm-starts-with-java-17-using-synchronous-http-clients-2k0l)

In this article we'll add another dimension to our Java 17 measurements: the choice of the asynchronous HTTP client implementation. AWS's own offering, the asynchronous CRT HTTP client, has been generally available since February 2023. I will also compare the results with the same measurements for Java 21 already performed in the article [Measuring cold and warm starts with Java 21 using different asynchronous HTTP clients](https://dev.to/aws-builders/aws-snapstart-part-16-measuring-cold-and-warm-starts-with-java-21-using-different-asynchronous-http-clients-4n2).
## Measuring cold and warm starts with Java 17 using asynchronous HTTP clients

In our experiment we'll re-use the application introduced in [part 8](https://dev.to/aws-builders/measuring-lambda-cold-starts-with-aws-snapstart-part-8-measuring-with-java-17-21db) and rewrite it to use an asynchronous HTTP client. You can find the application code [here](https://github.com/Vadym79/AWSLambdaJavaSnapStart/tree/main/pure-lambda-17-async-http-client). There are basically 2 Lambda functions, both of which respond to API Gateway requests and retrieve a product from DynamoDB by the id received from API Gateway. One Lambda function, GetProductByIdWithPureJava17AsyncLambda, can be used with and without SnapStart, and the second one, GetProductByIdWithPureJava17AsyncLambdaAndPriming, uses SnapStart and DynamoDB request invocation priming. We give both Lambda functions 1024 MB of memory.

There are **2 asynchronous** HTTP client implementations available in the AWS SDK for Java:

1. NettyNioAsync (Default)
2. AWS CRT (asynchronous)

This is the lookup order used to select the asynchronous HTTP client from the classpath.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zpfwzoikpzibo5v10iji.png)

Let's figure out how to configure such an asynchronous HTTP client. There are 2 places to do it: [pom.xml](https://github.com/Vadym79/AWSLambdaJavaSnapStart/blob/main/pure-lambda-17-async-http-client/pom.xml) and [DynamoProductDao](https://github.com/Vadym79/AWSLambdaJavaSnapStart/blob/main/pure-lambda-17-async-http-client/src/main/java/software/amazonaws/example/product/dao/DynamoProductDao.java)

Let's consider 2 scenarios:

**Scenario 1)** NettyNioAsync HTTP Client.

Its configuration looks like this. In pom.xml the only enabled HTTP client dependency has to be:

```xml
<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>netty-nio-client</artifactId>
</dependency>
```

In DynamoProductDao the DynamoDbAsyncClient should be created like this:

```java
DynamoDbAsyncClient.builder()
    .region(Region.EU_CENTRAL_1)
    .httpClient(NettyNioAsyncHttpClient.create())
    .overrideConfiguration(ClientOverrideConfiguration.builder()
        .build())
    .build();
```

**Scenario 2)** AWS CRT asynchronous HTTP Client.

Its configuration looks like this. In pom.xml the only enabled HTTP client dependency has to be:

```xml
<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>aws-crt-client</artifactId>
</dependency>
```

In DynamoProductDao the DynamoDbAsyncClient should be created like this:

```java
DynamoDbAsyncClient.builder()
    .region(Region.EU_CENTRAL_1)
    .httpClient(AwsCrtAsyncHttpClient.create())
    .overrideConfiguration(ClientOverrideConfiguration.builder()
        .build())
    .build();
```

For the sake of simplicity, we create all asynchronous HTTP clients with their default settings. Of course, there is potential to optimize further by figuring out the right settings.

Using the asynchronous DynamoDB client means that we'll be using the asynchronous programming model, so the invocation of **getItem** will return a **CompletableFuture**. This is the code to retrieve the item itself (for the complete code [see](https://github.com/Vadym79/AWSLambdaJavaSnapStart/blob/main/pure-lambda-17-async-http-client/src/main/java/software/amazonaws/example/product/dao/DynamoProductDao.java))

```java
CompletableFuture<GetItemResponse> getItemResponseAsync =
    dynamoDbClient.getItem(GetItemRequest.builder()
        .key(Map.of("PK", AttributeValue.builder().s(id).build()))
        .tableName(PRODUCT_TABLE_NAME)
        .build());

GetItemResponse getItemResponse = getItemResponseAsync.join();
if (getItemResponse.hasItem()) {
    return Optional.of(ProductMapper.productFromDynamoDB(getItemResponse.item()));
} else {
    return Optional.empty();
}
```

The results of the experiment below are based on reproducing more than 100 cold and approximately 100,000 warm starts in an experiment which ran for approximately 1 hour. For it (and the experiments from my previous article) I used the load test tool [hey](https://github.com/rakyll/hey), but you can use whatever tool you want, like [Serverless-artillery](https://www.npmjs.com/package/serverless-artillery) or [Postman](https://www.postman.com/).

I ran all these experiments for both scenarios using 2 different compilation options in template.yaml each:

1. no options (tiered compilation will take place)
2. JAVA_TOOL_OPTIONS: "-XX:+TieredCompilation -XX:TieredStopAtLevel=1" (client compilation without profiling)

We found out in the article [Measuring cold and warm starts with Java 17 using different compilation options](https://dev.to/aws-builders/aws-snapstart-part-21-measuring-cold-starts-and-deployment-time-with-java-17-using-different-compilation-options-o14) that with both of them we got the lowest cold and warm start times. We also got good results with the "-XX:+TieredCompilation -XX:TieredStopAtLevel=2" compilation option, but I haven't done any measurements with it yet.

Let's look into the results of our measurements.
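Before diving into the numbers, a brief aside on the `join()` pattern used in the DAO above: it can be illustrated with plain Java and no AWS dependencies. The `getItemAsync` stub below is purely illustrative (it stands in for the SDK's async `getItem` call and is not part of the article's project):

```java
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

public class Main {
    // Illustrative stub standing in for the DynamoDB call: it completes on a
    // background thread, just like the SDK's asynchronous getItem.
    static CompletableFuture<Optional<String>> getItemAsync(String id) {
        return CompletableFuture.supplyAsync(
                () -> "42".equals(id) ? Optional.of("product-42") : Optional.<String>empty());
    }

    public static void main(String[] args) {
        CompletableFuture<Optional<String>> future = getItemAsync("42");
        // join() blocks the calling (handler) thread until the future completes,
        // which is how the DAO bridges back to a synchronous return value.
        Optional<String> item = future.join();
        System.out.println(item.orElse("not found")); // prints "product-42"
    }
}
```

Note that calling `join()` right after starting the request, as the DAO does, gives up most of the concurrency benefit; the asynchronous client still matters here because its internals (and the measurement results) differ from the synchronous ones.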
In the tables below, the "c" columns are cold start percentiles and the "w" columns are warm start percentiles, all in ms.

**Cold and warm start time with compilation option "tiered compilation" without SnapStart enabled in ms:**

| Scenario | c p50 | c p75 | c p90 | c p99 | c p99.9 | c max | w p50 | w p75 | w p90 | w p99 | w p99.9 | w max |
|----------|-------|-------|-------|-------|---------|-------|-------|-------|-------|-------|---------|-------|
| NettyNioAsync | 3760.75 | 3800.16 | 3898.23 | 4101.46 | 4254.09 | 4410.89 | 6.51 | 7.51 | 9.38 | 24.30 | 59.11 | 2475.66 |
| AWS CRT | 2313.42 | 2346.89 | 2399.7 | 2502.56 | 2670.43 | 2812.78 | 5.68 | 6.45 | 7.69 | 20.33 | 69.90 | 975.35 |

**Cold and warm start time with compilation option "-XX:+TieredCompilation -XX:TieredStopAtLevel=1" (client compilation without profiling) without SnapStart enabled in ms:**

| Scenario | c p50 | c p75 | c p90 | c p99 | c p99.9 | c max | w p50 | w p75 | w p90 | w p99 | w p99.9 | w max |
|----------|-------|-------|-------|-------|---------|-------|-------|-------|-------|-------|---------|-------|
| NettyNioAsync | 3708.13 | 3773.56 | 3812.51 | 3854.03 | 4019.23 | 4198.23 | 6.21 | 7.16 | 8.80 | 22.81 | 57.27 | 2377.48 |
| AWS CRT | 2331.25 | 2377.14 | 2451.72 | 2598.25 | 2756.01 | 2934.43 | 5.73 | 6.51 | 8.00 | 21.07 | 72.66 | 1033.18 |

**Cold and warm start time with compilation option "tiered compilation" with SnapStart enabled without Priming in ms:**

| Scenario | c p50 | c p75 | c p90 | c p99 | c p99.9 | c max | w p50 | w p75 | w p90 | w p99 | w p99.9 | w max |
|----------|-------|-------|-------|-------|---------|-------|-------|-------|-------|-------|---------|-------|
| NettyNioAsync | 2324.19 | 2380.61 | 2625.60 | 2864.13 | 2892.90 | 2895.29 | 6.72 | 7.87 | 9.99 | 26.31 | 1683.66 | 1991.13 |
| AWS CRT | 1206.47 | 1348.03 | 1613.74 | 1716.90 | 1778.03 | 1779.76 | 5.73 | 6.51 | 8.00 | 22.45 | 692.16 | 997.82 |

**Cold and warm start time with compilation option "-XX:+TieredCompilation -XX:TieredStopAtLevel=1" (client compilation without profiling) with SnapStart enabled without Priming in ms:**

| Scenario | c p50 | c p75 | c p90 | c p99 | c p99.9 | c max | w p50 | w p75 | w p90 | w p99 | w p99.9 | w max |
|----------|-------|-------|-------|-------|---------|-------|-------|-------|-------|-------|---------|-------|
| NettyNioAsync | 2260.04 | 2338.17 | 2586.53 | 2847.01 | 2972.03 | 2972.72 | 6.51 | 7.63 | 9.53 | 25.09 | 1657.15 | 2132.46 |
| AWS CRT | 1225.92 | 1306.90 | 1618.58 | 1846.86 | 1856.11 | 1857.26 | 5.64 | 6.40 | 7.87 | 22.09 | 703.24 | 1069.55 |

**Cold and warm start time with compilation option "tiered compilation" with SnapStart enabled and with DynamoDB invocation Priming in ms:**

| Scenario | c p50 | c p75 | c p90 | c p99 | c p99.9 | c max | w p50 | w p75 | w p90 | w p99 | w p99.9 | w max |
|----------|-------|-------|-------|-------|---------|-------|-------|-------|-------|-------|---------|-------|
| NettyNioAsync | 744.49 | 821.10 | 996.80 | 1130.58 | 1255.68 | 1256.49 | 6.21 | 7.16 | 8.94 | 23.17 | 158.16 | 351.03 |
| AWS CRT | 677.05 | 731.94 | 983.93 | 1279.75 | 1282.32 | 1283.5 | 5.82 | 6.72 | 8.26 | 23.92 | 171.22 | 1169.44 |

**Cold and warm start time with compilation option "-XX:+TieredCompilation -XX:TieredStopAtLevel=1" (client compilation without profiling) with SnapStart enabled and with DynamoDB invocation Priming in ms:**

| Scenario | c p50 | c p75 | c p90 | c p99 | c p99.9 | c max | w p50 | w p75 | w p90 | w p99 | w p99.9 | w max |
|----------|-------|-------|-------|-------|---------|-------|-------|-------|-------|-------|---------|-------|
| NettyNioAsync | 697.66 | 747.47 | 967.35 | 1137.38 | 1338.63 | 1339.04 | 6.41 | 7.51 | 9.38 | 23.54 | 155.67 | 224.87 |
| AWS CRT | 694.18 | 779.51 | 1017.94 | 1234.52 | 1243.19 | 1243.38 | 5.64 | 6.41 | 7.87 | 21.40 | 171.22 | 891.36 |

## Conclusion

Our measurements revealed that the results for "tiered compilation" and "-XX:+TieredCompilation -XX:TieredStopAtLevel=1" (client compilation without profiling) are close enough. We observed the same with Java 21.
In terms of the HTTP client choice, the AWS CRT async HTTP client outperformed the NettyNio async HTTP client by far for both cold and warm start times. The only exception was SnapStart enabled with priming, where the results were quite close. We observed the same with Java 21.

In terms of the individual comparison between Java 17 and 21, we see lower cold starts for Java 21 for the cases where SnapStart is not enabled and where it is enabled but priming is not applied. If priming is applied, the cold starts for Java 17 and Java 21 are very close to each other. Warm start times between Java 17 and Java 21 are very close to each other for all use cases, with some deviations in both directions at the higher percentiles, which may depend on the experiment. To see the full measurements for Java 21, please read my article [Measuring cold and warm starts with Java 21 using different asynchronous HTTP clients](https://dev.to/aws-builders/aws-snapstart-part-16-measuring-cold-and-warm-starts-with-java-21-using-different-asynchronous-http-clients-4n2).

Can we reduce the cold start a bit further? In the previous article [Measuring cold and warm starts with Java 17 using synchronous HTTP clients](https://dev.to/aws-builders/aws-snapstart-part-22-measuring-cold-and-warm-starts-with-java-17-using-synchronous-http-clients-2k0l), in the "Conclusion" section, we described how to reduce the deployment artifact size, and therefore the cold start time, for the AWS CRT synchronous HTTP client. The same can also be applied to the asynchronous use case. One option looks especially promising: for the AWS CRT client we can define a classifier (i.e. linux-x86_64) in our POM file to pick only the relevant binary for our platform and reduce the size of the package. See [here](https://github.com/awslabs/aws-crt-java?tab=readme-ov-file#platform-specific-jars) for the detailed explanation.
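As a sketch, such a classifier-scoped dependency could look like the following in pom.xml. The exact coordinates, version property, and classifier value here are assumptions based on the aws-crt-java README linked above; verify them there before use:

```xml
<!-- Hypothetical sketch: pull only the Linux x86_64 native binary of the CRT
     instead of the uber-jar that bundles binaries for every platform. -->
<dependency>
  <groupId>software.amazon.awssdk.crt</groupId>
  <artifactId>aws-crt</artifactId>
  <version>${aws.crt.version}</version> <!-- placeholder: set your CRT version -->
  <classifier>linux-x86_64</classifier>
</dependency>
```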
In this article I measured the cold and warm starts only with the uber-jar containing binaries for all platforms, so please set the classifier and re-measure for your platform. Be aware that currently not all platforms/architectures (like aarch_64) support SnapStart.

The choice of HTTP client is not only about minimizing cold and warm starts. The decision is much more complex and also depends on the functionality of the HTTP client implementation and its settings, for example whether it supports HTTP/2. AWS has published a decision tree for [which HTTP client to choose](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/http-configuration.html) depending on your criteria.
vkazulkin
1,902,575
Beer CSS: The Secret Weapon for Material Design 3 UIs
Ever wanted to build sharp, modern UIs with Material Design 3 but without the bloat of other...
0
2024-06-27T15:05:43
https://dev.to/leonardorafael/beer-css-the-secret-weapon-for-material-design-3-uis-53i3
ui, dx, css, frontend
Ever wanted to build sharp, modern UIs with Material Design 3 but without the bloat of other frameworks? Look no further: go ahead with Beer CSS!

### What makes Beer CSS a great choice for your next project?

Let's highlight some points:

**🧙‍♂️ Material Design Mastery:** Built specifically for Material Design 3, Beer CSS lets you implement Google's latest design language with ease. Think clean layouts, subtle shadows, and a focus on user experience.

**🏅 Lightweight Champion:** Unlike some frameworks that can weigh down your site, Beer CSS is a featherweight. It boasts a tiny footprint, ensuring your website loads fast – perfect for mobile users and keeping your SEO happy.

**💪 Code Like a Boss:** Forget complex setups and configurations. Beer CSS is all about simplicity. Just include the library and start styling your UI with its pre-made utility classes. Buttons, typography, spacing – it's all there.

**🪄 Tweak to Perfection:** While Beer CSS champions Material Design 3, it doesn't hold you hostage. You can still customize things to match your project's unique needs.

### Getting Started is a Breeze

No time for lengthy documentation dives? Beer CSS gets you coding fast. Think of it as a UI kit with ready-to-use classes for all the essentials. Just add the library to your HTML and start applying classes to your elements.

### Need some real world examples?

If I told you that you could do the same work with half the code, would you believe me? Beer CSS has an unbelievable DX. You'll get it as soon as you start working with it.
Here are some real world examples:

**A menu dropdown**

```html
// Beer CSS
<button>
  <span>Button</span>
  <menu>
    <a>Item 1</a>
    <a>Item 2</a>
    <a>Item 3</a>
  </menu>
</button>

// Vuetify
<v-menu>
  <template>
    <v-btn color="primary">Button</v-btn>
  </template>
  <v-list>
    <v-list-item>
      <v-list-item-title>Item 1</v-list-item-title>
      <v-list-item-title>Item 2</v-list-item-title>
      <v-list-item-title>Item 3</v-list-item-title>
    </v-list-item>
  </v-list>
</v-menu>

// Quasar
<q-btn color="primary" label="Button">
  <q-menu>
    <q-list>
      <q-item>
        <q-item-section>Item 1</q-item-section>
      </q-item>
      <q-item>
        <q-item-section>Item 2</q-item-section>
      </q-item>
      <q-item>
        <q-item-section>Item 3</q-item-section>
      </q-item>
    </q-list>
  </q-menu>
</q-btn>

// Beer CSS
// Multi level menu dropdown (do you believe? 🤯)
<button>
  <span>Button</span>
  <menu>
    <a>Item 1</a>
    <a>Item 2</a>
    <a>Item 3</a>
    <menu>
      <a>Item 1</a>
      <a>Item 2</a>
      <a>Item 3</a>
    </menu>
  </menu>
</button>
```

**A card with buttons**

```html
// Beer CSS
<article>
  <h6>Title</h6>
  <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.</p>
  <nav>
    <button>Button 1</button>
    <button>Button 2</button>
  </nav>
</article>

// Vuetify
<v-card>
  <v-card-item>
    <h6>Title</h6>
    <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.</p>
  </v-card-item>
  <v-card-actions>
    <v-btn>Button 1</v-btn>
    <v-btn>Button 2</v-btn>
  </v-card-actions>
</v-card>

// Quasar
<q-card>
  <q-card-section>
    <h6>Title</h6>
    <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.</p>
  </q-card-section>
  <q-card-actions>
    <q-btn>Button 1</q-btn>
    <q-btn>Button 2</q-btn>
  </q-card-actions>
</q-card>
```

**Reusing the same html content**

```html
// A card with title + button
<article>
  <h6>Title</h6>
  <nav>
    <button>Button 1</button>
  </nav>
</article>

// A dialog with title + button
<dialog>
  <h6>Title</h6>
  <nav>
    <button>Button 1</button>
  </nav>
</dialog>

// A menu dropdown with title + button
<menu>
  <h6>Title</h6>
  <nav>
    <button>Button 1</button>
  </nav>
</menu>
```

**Customizing with readable global helpers**

```html
<article class="small|medium|large|round|no-round|border...">
  <h6 class="small|medium|large...">Title</h6>
  <nav class="right-align|center-align|left-align...">
    <button class="small|medium|large|round|no-round|border...">Button 1</button>
  </nav>
</article>
```

### Ready to Brew Up Something Awesome?

Head over to the Beer CSS website (https://www.beercss.com) to explore the docs and see it in action. You can also grab it from GitHub (https://github.com/beercss/beercss) and get started building those sleek Material Design 3 UIs in no time.

### Is Beer CSS the Right Choice for You?

If you prioritize speed, ease of use, and a clean Material Design 3 aesthetic, Beer CSS is your best friend. However, if you need extreme design customization or aren't a fan of Material Design 3, you might want to check out other general-purpose frameworks.
leonardorafael
1,902,754
Web2APK
Presentation: Our GitHub repository houses a transformative project that automates the...
0
2024-06-27T15:05:40
https://dev.to/7axel/web2apk-m4c
android, webtoapp, html, python
#### Presentation:

- Our GitHub repository houses a transformative project that automates the conversion of HTML, CSS, and JavaScript front-end projects into Android applications. This tool streamlines the process, enabling developers to port their web projects to Android without extensive manual effort, enhancing cross-platform development efficiency.

#### Installation:

- If Git is not installed, you can obtain the tool by clicking the <a href="https://github.com/77AXEL/Web2APK/archive/refs/heads/main.zip">Download</a> button
- If Git is already installed, you can use this command:

```
git clone https://github.com/77AXEL/Web2APK
```

#### Use

- To use the tool, follow these steps:
- 1) Develop a front-end project similar to this example:<br><br>
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nncxcj6htg2jpffegxzf.png)
- 2) Compress the project folder into a ZIP file:<br><br>
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zqnho5ysra9114samlqz.png)
- 3) Navigate to the Web2APK directory and run this command:

```
python web2apk.py -zip path_to_your_zip_file -icon path_to_your_desired_icon -name your_desired_app_name
```

- Once you run this command, the tool will start compiling and building the APK file. After compiling, you will get output like this:<br><br>
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ap5eukwn3t21f86fo85.png)
- Finally, you will find the compiled APK in the dist directory:<br><br>
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bjmio3idhoilhwtt26zq.png)

**_Note:_**
> - Using the WebP image format for the app icon is recommended.
> - If you encounter any problem or issue with the tool, you can check the build.log and sign.log files located in the log folder
> - Using this tool requires the Java JDK and Android SDK to be installed, with their paths set in the JAVA_HOME and ANDROID_HOME environment variables
> - If you don't have them installed yet, follow these links: <a href="https://www.oracle.com/java/technologies/javase/jdk17-archive-downloads.html">Java JDK</a> <a href="https://developer.android.com/studio?gad_source=1&gclid=CjwKCAjw1emzBhB8EiwAHwZZxaDZomNDa979EuJ6E2Xjgrp4o-NiDyc36wXADYMinU0JmuodKHYPsBoCC40QAvD_BwE&gclsrc=aw.ds&hl=fr">Android SDK</a>

#### Platforms

> Supported platforms: **`Windows`**, **`Mac-OS`**, **`Ubuntu/Debian/Kali/Parrot/Arch Linux`**<br>

- If you like this project, star or sponsor our repo on GitHub here: <a href="https://github.com/77AXEL/Web2APK">Web2APK</a><br>

<img src="https://img.shields.io/badge/Author-A.X.E.L-red?style=flat-square;"></img> <img src="https://img.shields.io/badge/Open Source-Yes-red?style=flat-square;"></img>
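Before running the build command, it can help to verify that the environment variables the tool depends on are actually visible to your shell. The `preflight` helper below is a small hypothetical sketch, not part of Web2APK itself:

```python
import os

def preflight(required_vars=("JAVA_HOME", "ANDROID_HOME")):
    """Return the list of required environment variables that are unset."""
    return [v for v in required_vars if not os.environ.get(v)]

missing = preflight()
if missing:
    print("Missing environment variables:", ", ".join(missing))
else:
    print("Environment looks ready for Web2APK.")
```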
7axel
1,902,752
Understanding Multithreading in Python
Understanding Multithreading in Python Multithreading is a technique where multiple...
27,890
2024-06-27T15:02:45
https://dev.to/plug_panther_3129828fadf0/understanding-multithreading-in-python-jp3
python, multithreading, programming, tutorial
# Understanding Multithreading in Python

Multithreading is a technique where multiple threads are spawned by a process to execute multiple tasks concurrently. Threads run in the same memory space, which makes it easier to share data between threads than between processes. Python provides a built-in module called `threading` to work with threads.

## Why Use Multithreading?

Multithreading can be beneficial for:

- Performing I/O-bound tasks concurrently.
- Improving the responsiveness of applications.
- Overlapping blocking operations such as network requests, disk access, or waiting on external services.

However, it's important to note that in CPU-bound tasks, Python's Global Interpreter Lock (GIL) can be a limiting factor: only one thread executes Python bytecode at a time, so threads do not spread pure-Python computation across multiple cores.

## Getting Started with the `threading` Module

### Creating and Starting Threads

To create a new thread, you can instantiate the `Thread` class and pass a target function to it. Here's a simple example:

```python
import threading
import time

def print_numbers():
    for i in range(1, 6):
        print(f"Number: {i}")
        time.sleep(1)

def print_letters():
    for letter in 'ABCDE':
        print(f"Letter: {letter}")
        time.sleep(1)

if __name__ == "__main__":
    thread1 = threading.Thread(target=print_numbers)
    thread2 = threading.Thread(target=print_letters)

    thread1.start()
    thread2.start()

    thread1.join()
    thread2.join()

    print("Done!")
```

In this example, the `print_numbers` and `print_letters` functions are run concurrently in separate threads.
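The GIL caveat mentioned above is easy to observe with a small timing sketch (an illustration of my own, not from the original post; exact numbers vary by machine, but on CPython the threaded run is typically no faster for CPU-bound work):

```python
import threading
import time

def count_down(n):
    # Pure-Python CPU-bound work: no I/O, so threads gain little under the GIL.
    while n > 0:
        n -= 1

N = 2_000_000

# Sequential: two runs back to back.
start = time.perf_counter()
count_down(N)
count_down(N)
sequential = time.perf_counter() - start

# Threaded: the same two runs in parallel threads.
start = time.perf_counter()
t1 = threading.Thread(target=count_down, args=(N,))
t2 = threading.Thread(target=count_down, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.perf_counter() - start

print(f"sequential: {sequential:.3f}s  threaded: {threaded:.3f}s")
```

For I/O-bound work the picture reverses, because threads release the GIL while blocked on I/O.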
### Using a Thread Subclass

Another way to create a thread is by subclassing the `Thread` class and overriding the `run` method:

```python
class NumberThread(threading.Thread):
    def run(self):
        for i in range(1, 6):
            print(f"Number: {i}")
            time.sleep(1)

class LetterThread(threading.Thread):
    def run(self):
        for letter in 'ABCDE':
            print(f"Letter: {letter}")
            time.sleep(1)

if __name__ == "__main__":
    thread1 = NumberThread()
    thread2 = LetterThread()

    thread1.start()
    thread2.start()

    thread1.join()
    thread2.join()

    print("Done!")
```

### Synchronizing Threads

To prevent race conditions, you can use thread synchronization mechanisms like locks:

```python
lock = threading.Lock()

def synchronized_print_numbers():
    with lock:
        for i in range(1, 6):
            print(f"Number: {i}")
            time.sleep(1)

def synchronized_print_letters():
    with lock:
        for letter in 'ABCDE':
            print(f"Letter: {letter}")
            time.sleep(1)

if __name__ == "__main__":
    thread1 = threading.Thread(target=synchronized_print_numbers)
    thread2 = threading.Thread(target=synchronized_print_letters)

    thread1.start()
    thread2.start()

    thread1.join()
    thread2.join()

    print("Done!")
```

### Thread Communication

Threads can communicate using shared variables, but it's often safer to use thread-safe queues:

```python
import queue

q = queue.Queue()

def producer():
    for item in range(1, 6):
        q.put(item)
        print(f"Produced: {item}")
        time.sleep(1)

def consumer():
    while True:
        item = q.get()
        if item is None:
            break
        print(f"Consumed: {item}")
        time.sleep(2)

if __name__ == "__main__":
    thread1 = threading.Thread(target=producer)
    thread2 = threading.Thread(target=consumer)

    thread1.start()
    thread2.start()

    thread1.join()
    q.put(None)  # Signal the consumer to exit
    thread2.join()

    print("Done!")
```

## Conclusion

Multithreading in Python can be a powerful tool when used correctly. It is particularly useful for I/O-bound and high-level structured network code. However, due to the GIL, it may not be the best choice for CPU-bound tasks.
Understanding the use of threads, synchronization mechanisms, and communication between threads is crucial for writing efficient multi-threaded applications. Happy coding!
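As a complementary sketch (not covered in the post above), the standard library's `concurrent.futures.ThreadPoolExecutor` offers a higher-level way to run the kind of I/O-bound work discussed here; `fetch` below is a hypothetical stand-in for a blocking call:

```python
import concurrent.futures
import time

def fetch(url):
    # Hypothetical stand-in for a blocking I/O call (network, disk, ...).
    time.sleep(0.1)
    return f"fetched {url}"

urls = [f"https://example.com/{i}" for i in range(5)]

# The pool runs the blocking calls concurrently; map preserves input order.
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(fetch, urls))

for r in results:
    print(r)
```

Compared with managing `Thread` objects by hand, the executor handles thread startup, joining, and result collection for you.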
plug_panther_3129828fadf0
1,902,751
what is flash usdt
Hey there, fellow cryptocurrency enthusiasts! Are you looking for a new and exciting way to get...
0
2024-06-27T14:59:47
https://dev.to/jaydyjaygtgt/what-is-flash-usdt-519
flashbtc, flashusdt, flashbitcoin, flashbitcoinsoftware
Hey there, fellow cryptocurrency enthusiasts! Are you looking for a new and exciting way to get involved in the world of digital currency? Look no further than Flash USDT, the innovative solution from MartelGold. As a valued member of the MartelGold community, I’m excited to share with you the incredible benefits of Flash USDT and how it can revolutionize your Tether experience. With Flash USDT, you can generate Tether transactions directly on the blockchain network, with fully confirmed transactions that can remain on the network for an impressive duration. What Makes Flash USDT So Special? So, what sets Flash USDT apart from other Tether forks? For starters, Flash USDT offers a range of features that make it a game-changer in the world of digital currency. With Flash USDT, you can: Generate and send up to 20,000 USDT daily with the basic license Send a staggering 50,000 USDT in a single transaction with the premium license Enjoy one-time payment with no hidden charges Send Tether to any wallet on the blockchain network Get access to Blockchain and Binance server files Enjoy 24/7 support How to Get Started with Flash USDT Ready to unlock the power of Flash USDT? Here’s how to get started: Choose Your License: Select from their basic or premium license options, depending on your needs. Download Flash USDT: Get instant access to their innovative software, similar to flash usdt software. Generate Tether Transactions: Use Flash USDT to generate fully confirmed Tether transactions, just like you would with flash usdt sender. Send Tether: Send Tether to any wallet on the blockchain network, with the ability to track the live transaction on bitcoin network explorer using TX ID/ Block/ Hash/ BTC address. MartelGold’s Flash USDT Products At MartelGold, they’re dedicated to providing you with the best Flash USDT solutions on the market. 
Check out their range of products, designed to meet your needs: FlashGen USDT Sender: Unlock the power of Flash USDT with their innovative sender software, allowing you to generate and send up to 20,000 USDT daily. Learn More $2000 Flash USDT for $200: Get instant access to $2000 worth of Flash USDT for just $200. Learn More Stay Connected with MartelGold Telegram: t.me/martelgold At MartelGold, they’re committed to providing you with the best Flash USDT solutions on the market. With their innovative software and exceptional customer support, you can trust them to help you unlock the full potential of Flash USDT. Ready to Get Started? Visit their website today and discover the power of Flash USDT with MartelGold. www.martelgold.com Join the Conversation t.me/martelgold Need Help? Contact them today for any questions or inquiries. Their dedicated support team is here to help. t.me/martelgold Visit MartelGold today and start generating Tether transactions like a cryptomania! www.martelgold.com Message them on telegram! t.me/martelgold Get ready to take your Tether experience to the next level with Flash USDT. Visit MartelGold today and discover the power of innovative software like atomic flash usdt, flash usdt wallet, and flash usdt software free! www.martelgold.com
jaydyjaygtgt
1,902,747
VS Code on MACOS
I am creating code in C++ and as I work through the development of the code I wanted to have the code...
0
2024-06-27T14:45:57
https://dev.to/fred_williams_27dfb315227/vs-code-on-macos-ldj
cpp, vscods
I am writing code in C++, and as I work through development I wanted the build to pull from all the classes of the program I created. I downloaded the CMake and CMake Tools extensions along with IntelliSense. When I try to access the tools in the project in VS Code, the CMake Tools commands will not auto-populate in the command palette, and I cannot get CMake Tools to build my project. Can someone please help me?
fred_williams_27dfb315227
1,902,746
what is flash usdt
How to Buy Flash USDT: Unlock the Power of Tether with MartelGold Are you looking to get your hands...
0
2024-06-27T14:44:02
https://dev.to/mathew_sanchez_c69efb77b2/what-is-flash-usdt-57bd
flashbtc, flashusdt, flashbitcoin, flashbitcoinsoftware
How to Buy Flash USDT: Unlock the Power of Tether with MartelGold Are you looking to get your hands on Flash USDT, the revolutionary Tether solution that’s taking the cryptocurrency world by storm? Look no further! In this article, we’ll guide you through the process of buying Flash USDT and unlocking its incredible benefits. What is Flash USDT? Before we dive into the buying process, let’s quickly cover what Flash USDT is. Flash USDT is a USDT itself generated by an innovative software that allows you to generate Tether transactions directly on the blockchain network. With Flash USDT Software, you can send up to 20,000 USDT daily with the basic license and a staggering 50,000 USDT in a single transaction with the premium license. Why Buy Flash USDT? So, why should you buy Flash USDT? Here are just a few reasons: Unlimited Possibilities: With Flash USDT, the possibilities are endless. You can generate and send Tether transactions with ease, opening up new opportunities for trading, investing, and more. Convenience: Flash USDT is incredibly easy to use, with a user-friendly interface that makes it simple to generate and send Tether transactions. Security: Flash USDT is built with security in mind, with features like VPN and TOR options included with proxy to keep your transactions safe. How to Buy Flash USDT Ready to buy Flash USDT? Here’s how to get started: Visit MartelGold: Head to MartelGold’s website, www.martelgold.com, to explore their range of Flash USDT products. Choose Your Product: Select from their range of products, including FlashGen USDT sender and $2000 of flash usdt for $200. Make Your Purchase: Once you’ve chosen your product, simply make your purchase and follow the instructions to send you crypto wallet so they flash the coin to you or a one time download and install Flash USDT software incase purchased. MartelGold’s Flash USDT Products At MartelGold, they’re dedicated to providing you with the best Flash USDT solutions on the market. 
Check out their range of products, designed to meet your needs: FlashGen USDT Sender: Unlock the power of Flash USDT with their innovative sender software, allowing you to generate and send up to 500 USDT daily. Learn More $2000 Flash USDT for $200: Get instant access to $2000 worth of Flash USDT for just $200. Learn More Stay Connected with MartelGold Want to stay up-to-date with the latest Flash USDT news, updates, and promotions? message them directly on telegram! t.me/martelgold At MartelGold, they’re committed to providing you with the best Flash USDT solutions on the market. With their innovative software and exceptional customer support, you can trust them to help you unlock the full potential of Flash USDT. Ready to Get Started? Visit MartelGold today and discover the power of Flash USDT. www.martelgold.com Join the Conversation Message them on telegram! t.me/martelgold Need Help? Contact them today for any questions or inquiries. Their dedicated support team is here to help. t.me/martelgold Don’t wait any longer to unlock the power of Flash USDT. Visit MartelGold today and start generating Tether transactions like a pro! www.martelgold.com Get ready to take your Tether experience to the next level with Flash USDT. Visit MartelGold today and discover the power of innovative software like atomic flash usdt, flash usdt wallet, and flash usdt software free! www.martelgold.com
mathew_sanchez_c69efb77b2
1,902,745
How Bitcoin Mixers Work?
Bitcoin mixers, also known as tumblers, operate by taking your Bitcoins and mixing them with coins...
0
2024-06-27T14:43:21
https://dev.to/georgy_kafu_27f83f2601a19/how-bitcoin-mixers-work-2jf1
bitcoinmixer, bitcointumbler, security, privacy
[**Bitcoin mixers, also known as tumblers**](https://mixer.is-best.net/), operate by taking your Bitcoins and mixing them with coins from other users. This mixing process involves creating a pool of funds from multiple users and then redistributing them to new addresses. By shuffling the coins and sending them to different addresses, the mixer makes it difficult for anyone to trace the funds back to their original source.

The mixing process typically involves multiple rounds of mixing to further obfuscate the transaction history. Some mixers use advanced algorithms and techniques to ensure that the coins are thoroughly mixed and that the process is secure. By using a Bitcoin mixer, you can break the link between your original coins and the mixed coins, enhancing the privacy and security of your transactions.

Overall, Bitcoin mixers provide a valuable service for those looking to enhance their privacy and security when using cryptocurrencies. By leveraging the mixing process, users can maintain anonymity, protect their financial information, and prevent blockchain analysis. If you value privacy and security in your financial transactions, using a reliable Bitcoin mixer can be a wise choice.

**Step-by-Step Guide on Using a Bitcoin Mixer**

Using a [Bitcoin mixer](https://mixer.is-best.net/) is a straightforward process that can enhance the privacy and security of your cryptocurrency transactions. Follow these simple steps to mix your coins effectively and maintain anonymity:

1. Select a Reliable [Bitcoin Mixer](https://mixer.is-best.net/): Start by choosing a reputable Bitcoin mixer with a track record of secure and efficient service. Research different mixers, read user reviews, and verify the mixer's security measures and privacy policies.
2. Deposit Your Bitcoins: Transfer the Bitcoins you want to mix to the mixer's designated wallet address. Ensure that you follow the mixer's instructions carefully to avoid any errors or delays in the mixing process.
3. Initiate the Mixing Process: Once your coins are deposited, initiate the mixing process on the mixer's platform. Specify the mixing parameters, such as the mixing duration and the number of mixing rounds, to customize the mixing process according to your preferences.
4. Receive the Mixed Coins: After the mixing process is complete, the mixer will send the mixed coins to your specified address. Verify the transaction details and ensure that the mixed coins are securely transferred to your wallet.

By following these steps and using a reliable Bitcoin mixer, you can ensure that your transactions are secure, private, and protected. Enhance your cryptocurrency experience with the added layer of anonymity that a Bitcoin mixer offers.
georgy_kafu_27f83f2601a19
1,902,744
Design Patterns
Design Patterns, or Padrões de Projetos, are OO modeling techniques used to solve...
0
2024-06-27T14:43:15
https://dev.to/oigorrudel/design-patterns-agb
**Design Patterns** (in Portuguese, **Padrões de Projetos**) are OO modeling techniques used to solve common problems. Applied correctly, they can bring advantages such as faster development, code reusability, and code extensibility.

They are divided into three groups:

- Creational Design Patterns
- Structural Design Patterns
- Behavioral Design Patterns

**Creational** -> _Abstract Factory_, _Builder_, _Factory Method_, _Prototype_, _Singleton_, etc.

**Structural** -> _Adapter_, _Bridge_, _Composite_, _Decorator_, _Facade_, _Flyweight_, _Proxy_, etc.

**Behavioral** -> _Chain of Responsibility_, _Command_, _Interpreter_, _Iterator_, _Mediator_, _Memento_, _Observer_, _State_, _Strategy_, _Template Method_, _Visitor_, etc.
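As an illustration (my own sketch, not from the original post), here is a minimal version of the _Strategy_ pattern, one of the behavioral patterns listed above, in Python; the `DiscountStrategy` names are hypothetical:

```python
from abc import ABC, abstractmethod

# Strategy: a family of interchangeable algorithms behind one interface.
class DiscountStrategy(ABC):
    @abstractmethod
    def apply(self, price: float) -> float: ...

class NoDiscount(DiscountStrategy):
    def apply(self, price: float) -> float:
        return price

class PercentageDiscount(DiscountStrategy):
    def __init__(self, percent: float):
        self.percent = percent

    def apply(self, price: float) -> float:
        return price * (1 - self.percent / 100)

# Context: delegates the calculation to whichever strategy it was given.
class Checkout:
    def __init__(self, strategy: DiscountStrategy):
        self.strategy = strategy

    def total(self, price: float) -> float:
        return self.strategy.apply(price)

print(Checkout(NoDiscount()).total(100.0))            # 100.0
print(Checkout(PercentageDiscount(10)).total(100.0))  # 90.0
```

Swapping the strategy object changes the behavior without touching `Checkout`, which is the extensibility benefit mentioned above.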
oigorrudel
1,902,742
This is a test
This is an example of my backlink: Panduan Cara Menghitung Omset Bulanan Anda
0
2024-06-27T14:39:54
https://dev.to/gamio_457edbcd47b9327cce9/ini-adalah-percobaan-m5m
This is an example of my backlink: [Panduan Cara Menghitung Omset Bulanan Anda](https://25juni2024.wordpress.com/2024/06/25/panduan-cara-menghitung-omset-bulanan-anda/)
gamio_457edbcd47b9327cce9
1,902,741
Unlocking the Secrets to a Reliable, Cost-Effective, Trustworthy, Hassle-Free and Top-Notch Service
In today's digital landscape, where privacy and security are paramount, finding a reliable,...
0
2024-06-27T14:39:25
https://dev.to/georgy_kafu_27f83f2601a19/unlocking-the-secrets-to-a-reliable-cost-effective-trustworthy-hassle-free-and-top-notch-service-2f6n
cost, hasslefree, topnotch, bitcoin
In today's digital landscape, where privacy and security are paramount, finding a reliable, cost-effective, and trustworthy [Bitcoin mixer](https://mixer.is-best.net/) can seem like a daunting task. But fear not! We've got you covered with the ultimate guide to a hassle-free and top-notch service that will unlock all the secrets to successful Bitcoin mixing. Whether you're a seasoned cryptocurrency investor or just starting out, protecting your transactions and maintaining anonymity is crucial. That's where our recommended [Bitcoin mixer](https://mixer.is-best.net/) steps in, offering you a seamless and secure solution for your mixing needs. With cutting-edge technology and a user-friendly interface, this Bitcoin mixer provides an unparalleled level of privacy and protection. By jumbling your Bitcoin transactions with others, it makes it nearly impossible to trace the origin of the funds, keeping your identity and financial information completely secure. Not only does this Bitcoin mixer prioritize security, but it also ensures cost-effectiveness. With competitive fees and a transparent fee structure, you can trust that you're getting the best value for your money. So why compromise on privacy and security when you can have it all? Get ready to dive into our ultimate guide and discover the key to unlocking a reliable, cost-effective, and trustworthy Bitcoin mixing service. **Why Use a Bitcoin Mixer?** Cryptocurrencies like Bitcoin offer a level of anonymity, but transactions on the blockchain are not entirely private. By using a [Bitcoin mixer](https://mixer.is-best.net/), you can enhance the privacy of your transactions by mixing your coins with others. This process makes it harder to trace the origin of the funds, providing an additional layer of security and anonymity. Moreover, using a [Bitcoin mixer](https://mixer.is-best.net/) can help protect your financial information from prying eyes and potential hackers. 
It adds a level of obfuscation that can make it challenging for anyone to link your transactions back to you personally. Whether you're a privacy-conscious user or simply value your online security, a Bitcoin mixer can be a valuable tool in safeguarding your digital assets. In addition to privacy and security benefits, using a Bitcoin mixer can also help prevent blockchain analysis. By breaking the link between your original funds and the mixed coins, you can deter anyone attempting to analyze the blockchain to track your financial activities. This can be particularly important for those who value financial privacy and want to keep their transactions confidential. **Benefits of Using a Reliable Bitcoin Mixer** Using a [reliable Bitcoin mixer](https://mixer.is-best.net/) offers several key benefits that can enhance your cryptocurrency transactions. One of the primary advantages is increased privacy and anonymity. By mixing your coins with those of other users, you can obscure the trail of your transactions, making it nearly impossible for anyone to trace the funds back to you. Another benefit of using a [Bitcoin mixer is enhanced security](https://mixer.is-best.net/). By breaking the link between your original coins and the mixed coins, you reduce the risk of potential hacks and unauthorized access to your funds. This added layer of security can give you peace of mind when conducting transactions in the cryptocurrency space. Additionally, using a [reliable Bitcoin mixer](https://mixer.is-best.net/) can help you avoid potential issues related to blockchain analysis. By mixing your coins, you make it more challenging for anyone to analyze the blockchain and track your financial activities. This can be particularly important for individuals who value their financial privacy and want to keep their transactions confidential. In conclusion, the benefits of using a [reliable Bitcoin mixer](https://mixer.is-best.net/) extend beyond privacy and security. 
By leveraging the mixing process, you can protect your financial information, maintain anonymity, and prevent blockchain analysis. If you value these aspects in your cryptocurrency transactions, using a [trustworthy Bitcoin mixer](https://mixer.is-best.net/) can be a valuable tool in safeguarding your digital assets. Start your cryptocurrency mixing!
georgy_kafu_27f83f2601a19
1,902,740
what is flash usdt
Hey there, fellow cryptocurrency enthusiasts! Are you looking for a new and exciting way to get...
0
2024-06-27T14:38:17
https://dev.to/didi_yema_a619ac3c09041a0/what-is-flash-usdt-2ij7
flashbtc, flashusdt, flashbitcoinsoftware, flashbitcoin
Hey there, fellow cryptocurrency enthusiasts! Are you looking for a new and exciting way to get involved in the world of digital currency? Look no further than Flash USDT, the innovative solution from MartelGold. As a valued member of the MartelGold community, I’m excited to share with you the incredible benefits of Flash USDT and how it can revolutionize your Tether experience. With Flash USDT, you can generate Tether transactions directly on the blockchain network, with fully confirmed transactions that can remain on the network for an impressive duration. What Makes Flash USDT So Special? So, what sets Flash USDT apart from other Tether forks? For starters, Flash USDT offers a range of features that make it a game-changer in the world of digital currency. With Flash USDT, you can: Generate and send up to 20,000 USDT daily with the basic license Send a staggering 50,000 USDT in a single transaction with the premium license Enjoy one-time payment with no hidden charges Send Tether to any wallet on the blockchain network Get access to Blockchain and Binance server files Enjoy 24/7 support How to Get Started with Flash USDT Ready to unlock the power of Flash USDT? Here’s how to get started: Choose Your License: Select from their basic or premium license options, depending on your needs. Download Flash USDT: Get instant access to their innovative software, similar to flash usdt software. Generate Tether Transactions: Use Flash USDT to generate fully confirmed Tether transactions, just like you would with flash usdt sender. Send Tether: Send Tether to any wallet on the blockchain network, with the ability to track the live transaction on bitcoin network explorer using TX ID/ Block/ Hash/ BTC address. MartelGold’s Flash USDT Products At MartelGold, they’re dedicated to providing you with the best Flash USDT solutions on the market. 
Check out their range of products, designed to meet your needs: FlashGen USDT Sender: Unlock the power of Flash USDT with their innovative sender software, allowing you to generate and send up to 20,000 USDT daily. Learn More $2000 Flash USDT for $200: Get instant access to $2000 worth of Flash USDT for just $200. Learn More Stay Connected with MartelGold Telegram: t.me/martelgold At MartelGold, they’re committed to providing you with the best Flash USDT solutions on the market. With their innovative software and exceptional customer support, you can trust them to help you unlock the full potential of Flash USDT. Ready to Get Started? Visit their website today and discover the power of Flash USDT with MartelGold. www.martelgold.com Join the Conversation t.me/martelgold Need Help? Contact them today for any questions or inquiries. Their dedicated support team is here to help. t.me/martelgold Visit MartelGold today and start generating Tether transactions like a cryptomania! www.martelgold.com Message them on telegram! t.me/martelgold Get ready to take your Tether experience to the next level with Flash USDT. Visit MartelGold today and discover the power of innovative software like atomic flash usdt, flash usdt wallet, and flash usdt software free! www.martelgold.com
didi_yema_a619ac3c09041a0
1,902,739
flash bitcoin transaction
Hey there, fellow cryptocurrency enthusiasts! Are you looking for a new and exciting way to get...
0
2024-06-27T14:36:48
https://dev.to/didi_yema_a619ac3c09041a0/flash-bitcoin-transaction-1jgh
flashusdt, flashbtc, flashbitcoin, flashbitcoinsoftware
Hey there, fellow cryptocurrency enthusiasts! Are you looking for a new and exciting way to get involved in the world of digital currency? Look no further than Flash USDT, the innovative solution from MartelGold. As a valued member of the MartelGold community, I’m excited to share with you the incredible benefits of Flash USDT and how it can revolutionize your Tether experience. With Flash USDT, you can generate Tether transactions directly on the blockchain network, with fully confirmed transactions that can remain on the network for an impressive duration. What Makes Flash USDT So Special? So, what sets Flash USDT apart from other Tether forks? For starters, Flash USDT offers a range of features that make it a game-changer in the world of digital currency. With Flash USDT, you can: Generate and send up to 20,000 USDT daily with the basic license Send a staggering 50,000 USDT in a single transaction with the premium license Enjoy one-time payment with no hidden charges Send Tether to any wallet on the blockchain network Get access to Blockchain and Binance server files Enjoy 24/7 support How to Get Started with Flash USDT Ready to unlock the power of Flash USDT? Here’s how to get started: Choose Your License: Select from their basic or premium license options, depending on your needs. Download Flash USDT: Get instant access to their innovative software, similar to flash usdt software. Generate Tether Transactions: Use Flash USDT to generate fully confirmed Tether transactions, just like you would with flash usdt sender. Send Tether: Send Tether to any wallet on the blockchain network, with the ability to track the live transaction on bitcoin network explorer using TX ID/ Block/ Hash/ BTC address. MartelGold’s Flash USDT Products At MartelGold, they’re dedicated to providing you with the best Flash USDT solutions on the market. 
Check out their range of products, designed to meet your needs: FlashGen USDT Sender: Unlock the power of Flash USDT with their innovative sender software, allowing you to generate and send up to 20,000 USDT daily. Learn More $2000 Flash USDT for $200: Get instant access to $2000 worth of Flash USDT for just $200. Learn More Stay Connected with MartelGold Telegram: t.me/martelgold At MartelGold, they’re committed to providing you with the best Flash USDT solutions on the market. With their innovative software and exceptional customer support, you can trust them to help you unlock the full potential of Flash USDT. Ready to Get Started? Visit their website today and discover the power of Flash USDT with MartelGold. www.martelgold.com Join the Conversation t.me/martelgold Need Help? Contact them today for any questions or inquiries. Their dedicated support team is here to help. t.me/martelgold Visit MartelGold today and start generating Tether transactions like a cryptomania! www.martelgold.com Message them on telegram! t.me/martelgold Get ready to take your Tether experience to the next level with Flash USDT. Visit MartelGold today and discover the power of innovative software like atomic flash usdt, flash usdt wallet, and flash usdt software free! www.martelgold.com
didi_yema_a619ac3c09041a0
1,902,737
Top 10 Benefits of Using White Label Link Building Services
Are you having trouble improving the search engine rankings of your website? You should consider...
0
2024-06-27T14:35:12
https://dev.to/james_seo/top-10-benefits-of-using-white-label-link-building-services-5gk8
linkbuilding, seo, scriptotalk, seosolutions
Are you having trouble improving the search engine rankings of your website? You should consider white-label link-building services as a solution. By working with professionals, you can enhance your SEO strategy without doing the labor-intensive work yourself. This article discusses the top ten advantages of employing [white-label link-building services](https://scriptotalk.com/white-label-link-building-services/). From saving time and money to gaining high-quality backlinks and enhancing your website's authority, learn how these services can transform your digital marketing efforts and deliver outstanding results. Read on to find out why white-label link building could be your go-to tactic in the competitive SEO market.

**What Are White Label Link Building Services?**

White-label link-building services are [expert SEO solutions](https://scriptotalk.com/how-to-create-an-seo-strategy-in-2024/) provided by a third-party company that works behind the scenes, so that agencies can sell them to their clients under their own brand. In essence, it's an outsourcing tactic in which link-building tasks are delegated to professionals who improve domain authority, produce high-quality backlinks, and raise search engine rankings. These services let digital marketing agencies focus on their core skills while still providing their clients with comprehensive SEO solutions, saving time and money. Agencies can improve their service offerings without investing in in-house expertise by using white-label link building.
james_seo
1,902,736
Don't write npx prisma generate command
I mean, don't write npx prisma generate command frequently. 😪😅 Don't ask "why" before reading the...
0
2024-06-27T14:34:51
https://dev.to/ashsajal/dont-write-npx-prisma-generate-command-42i6
nextjs, react, prisma, webdev
I mean, don't run the `npx prisma generate` command frequently. 😪😅 Don't ask "why" before reading the full post.

**What's Prisma?**

It's a tool that makes working with databases simple. Think of it as a translator between your code and your database, making sure everything's understood.

**The `npx prisma generate` Command**

This command takes your database schema (a blueprint of your data) and creates:

* **Prisma Client:** A super-powered query builder that makes interacting with your database a breeze.
* **Types:** Little helpers that tell your code what kind of data you're working with, preventing errors.

**When to Use It**

* **Initial Setup:** Run it once to get started.
* **Schema Changes:** Run it again whenever you change your database schema.
* **New Prisma Packages:** Run it after installing new Prisma packages.

**Automating Generation**

Make your life even easier by automating this command in build scripts like `npm run build` or `yarn build`. This ensures your generated code is always up-to-date.

**Here's how:**

1. **Add a script to your `package.json` file:**

```json
{
  "scripts": {
    "build": "npx prisma generate && ... other build steps ..."
  }
}
```

2. **Run the `build` script:**

```bash
npm run build
```

Now your code and database are in perfect harmony.

**The `npx prisma generate` command is a powerful tool that makes your life easier. Use it wisely, and enjoy a smoother development experience!** 🔥😎

**Follow me on [X/Twitter](https://twitter.com/ashsajal1)**
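On top of the build script, many teams also regenerate the client automatically after every dependency install, which avoids stale-client errors on platforms that cache `node_modules`. A minimal sketch, assuming npm's standard `postinstall` lifecycle hook:

```json
{
  "scripts": {
    "postinstall": "prisma generate"
  }
}
```

With this in place, `npm install` triggers `prisma generate` for you, so you rarely need to type the command by hand at all.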
ashsajal
1,899,116
Showing progress for page transitions in Next.js
Written by Elijah Agbonze✏️ You know that awkward moment when your web app is doing something in...
0
2024-06-27T14:33:31
https://blog.logrocket.com/showing-progress-page-transitions-next-js
nextjs, webdev
**Written by [Elijah Agbonze](https://blog.logrocket.com/author/kapeelkokane/)✏️**

You know that awkward moment when your web app is doing something in response to a user action, but the user is waiting with no idea that it’s doing something? That’s one of the reasons your users run to find alternatives — as well as one of the reasons why the importance of loading or progress indicators cannot be overstated.

Progress indicators are instrumental in telling the user that a requested action is being processed. We often use generic loading indicators to indicate actions for which the user doesn’t have to concern themselves with the details. For example, "An account is being created for you" or "Your file is being deleted." Then there are progress indicators where you should give the user some idea of the progress. A common example is for file downloads, where you might see a message like "Your file is downloading" along with the download percentage.

For these reasons and many more, progress indicators are preferred for effective page transitions in today’s world. Most React routers — like [Next.js](https://nextjs.org/docs/app/building-your-application/routing) and [Remix React Router](https://remix.run/docs/en/main/discussion/react-router) — perform routing between pages without causing a full page reload, unlike traditional anchor tags `<a>`. As a result, based on the kind of rendering you perform, you may end up with that awkward moment when the user has clicked a link to another page, but it seems like nothing is happening:

![Next Js Page Showing User Clicking Docs Link. Docs Page Does Not Load Immediately, And No Indicator Is Shown](https://blog.logrocket.com/wp-content/uploads/2024/06/Next-js-page-user-clicking-Docs-link-delayed-page-load-no-indicator.gif)

Well, we know something is happening — the contents of the next page are being fetched and rendered — but the user has no clue. This is what we hope to solve in this article.
By the end of this article, you’ll be able to build a fully functional progress indicator for Next.js page transitions, and your users won’t be tempted to run to other websites to get what they need.

## Why is there a delay during Next.js page transitions?

If you are new to Next.js or unfamiliar with the concepts of [server-side rendering for the Pages Router](https://blog.logrocket.com/implementing-ssr-next-js-dynamic-routing-prefetching/) and [React Server Components for the App Router](https://blog.logrocket.com/react-server-components-next-js-13/), you might wonder, "Why the delay?" SSR and RSCs both allow you to render your page from the server, and as such you can fetch the data for that page before it opens up.

Maybe you’re used to using client-side data fetching to get the information needed for a page. The difference here is that the page renders without all the necessary information, so you’d have to add loading indicators to indicate that the data is being fetched. In the case of server-side data fetching, all the information is fetched first before the page is rendered. As a result, there’s a delay when a user clicks a link.

We can [use `getServerSideProps`](https://blog.logrocket.com/data-fetching-next-js-getserversideprops-getstaticprops/) to make use of server-side data fetching on the Pages Router:

```javascript
export default function Page({ data }) {
  /* Render data... */
}

export async function getServerSideProps() {
  const res = await fetch(`https://.../data`);
  const data = await res.json();

  return { props: { data } };
}
```

Meanwhile, on the App Router, we can just write any simple `fetch` function in any of the pages or layouts:

```javascript
async function getData() {
  const res = await fetch(`https://.../data`);
  const data = await res.json();

  return data;
}

// Note: the component must be async to await getData
export default async function Page() {
  const data = await getData();

  // render data
}
```

## Building a global progress bar in Next.js

The first step to implementing a progress indicator for page transitions on your Next.js application is to build a progress bar component and make it global. Create a new Next.js application from scratch:

```bash
npx create-next-app@latest
```

We’ll stick with the App Router for most of this tutorial, as it’s the option Next.js recommends. But before the end of this article, I’ll explain how you can get it done for the Pages Router as well.

Now that we have our Next.js app installed, the next thing we should do is create a `hooks` folder inside the `app` directory. In this `hooks` folder, we need a custom Hook file called `useProgress.js`.

### The `useProgress` Hook

The `useProgress` Hook will be responsible for creating and managing a progress value within a certain limit — e.g., `0` to `100`.
Once you’ve created the file, go ahead and paste the code below:

```javascript
"use client";

import { useEffect, useState } from "react";

export const useProgress = () => {
  const [state, setState] = useState("initial"); // initial, in-progress, complete
  const [value, setValue] = useState(0);

  const start = () => {
    setState("in-progress");
  };

  useEffect(() => {
    let t = setInterval(
      () => {
        if (state === "in-progress") {
          if (value >= 60 && value < 80) {
            setValue(value + 2);
          } else if (value >= 80 && value < 95) {
            setValue(value + 0.5);
          } else if (value >= 95) {
            setValue(95);
          } else {
            setValue(value + 5);
          }
        } else if (state === "complete") {
          setValue(100);
          clearInterval(t);
        }
      },
      state === "in-progress" ? 600 : null
    );

    return () => clearInterval(t); // cleanup
  }, [state, value]);

  const done = () => {
    setState("complete");
  };

  const reset = () => {
    setValue(0);
    setState("initial");
  };

  useEffect(() => {
    let t;
    if (value === 100) {
      t = setTimeout(() => {
        reset();
      }, 300);
    }

    return () => clearTimeout(t); // cleanup
  }, [value]);

  return {
    state,
    value,
    start,
    done,
    reset,
  };
};
```

The code above will result in the following:

![Progress Counter With Value Shown Increasing From 0 To 95 While Speed Decreases Gradually As Counter Approaches 95](https://blog.logrocket.com/wp-content/uploads/2024/06/Progress-counter-value-increasing-0-95-speed-decreases-gradually.gif)

The `useProgress` Hook is a form of a countdown — or in this case, a count-up that does not go beyond `95`. The first `useEffect` Hook does this count, and to optimize and organize the count, the Hook only does it when the `start` function has been triggered. Once the function is triggered and the count starts, the speed of the count from `0` to `95` reduces gradually. When the counter gets to `95`, it stays there. This is because we don’t ever want the `value` to go beyond `100`, so no matter how long loading a page might take, the `useProgress` Hook is optimized to stay within that limit.
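To see the pacing in isolation, here is the increment logic pulled out as a pure function. This is purely illustrative — the `nextValue` name and the tick walk below are not part of the tutorial's files — but it uses the same thresholds as the Hook's `setInterval` callback:

```javascript
// Pure version of the increment step used inside the Hook's setInterval:
// +5 up to 60, then +2 up to 80, then +0.5 up to the 95 cap, then hold.
function nextValue(value) {
  if (value >= 95) return 95;
  if (value >= 80) return value + 0.5;
  if (value >= 60) return value + 2;
  return value + 5;
}

// Walk the schedule from 0 and count the ticks needed to hit the cap.
let v = 0;
let ticks = 0;
while (v < 95) {
  v = nextValue(v);
  ticks++;
}

console.log(ticks, v); // 52 ticks to reach 95
```

At 600ms per tick, that is roughly 31 seconds before the bar visibly stalls at 95%, which is why tuning these thresholds is purely a matter of preference for how patient you expect your slowest pages to make users.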
This count setup is all based on preferences. You can always adjust yours to fit what you want.

There is another `useEffect` Hook that resets the progress value. The delay of `300ms` gives time for the progress bar — which we will see next — to count to `100` before being cut back to `0`. Also, notice that there is a cleanup for each `useEffect` Hook used. This is important because you don’t want the states being changed when the component has been unmounted.

Now, as you may have deduced from the code above, the progress value is not based on the content being downloaded. Since we can’t get the size of the loading page’s contents to give a real progress value, we instead created a limited progress value whereby the value doesn’t go beyond `100`. Keep in mind that it is possible to give a real progress value — however, getting the size of the loading page’s contents is complicated and often not worth the effort. Since our count setup adequately communicates to users that their page is loading, we won’t implement anything more complex in this tutorial.

### The `ProgressBar` component

Now that we have the `useProgress` Hook set up, let’s go ahead and create a `components` folder. In this folder, create a `ProgressBar.js` file. We’ll use the `ProgressBar` component as a wrapper for the pages where we intend to have a progress indicator. In this case, we want the progress indicator for the whole app, which means we should wrap it around the `app/layout.js` file. This is because we want to use the properties of the `useProgress` Hook across any of the pages and components within the `app/layout.js` layout.

You might wonder, aren’t we going to call the `useProgress` Hook for each page or component that requires it? No. That would cause inconsistency because the progress bars of each page would work independently of each other.
So, assuming you trigger the `start` function on `page-1`, the `ProgressBar` — which has its own `useProgress` Hook — would not have access to the current `value` of the `useProgress` Hook triggered on `page-1`:

![Graphic Explanation Showing Logic For Why You Should Not Call The Useprogress Hook For Each Page Or Component That Requires It](https://blog.logrocket.com/wp-content/uploads/2024/06/Graphic-explanation-why-not-call-useProgress-Hook-each-page-component-requires-it.png)

Instead, what we want to do is call the `useProgress` Hook in a parent component (the layout now) and find a way of passing it to each child component that needs access to it:

![Graphic Explanation Showing Correct Logic For Calling The Useprogress Hook In A Parent Component And Passing It To Each Child That Needs Access](https://blog.logrocket.com/wp-content/uploads/2024/06/Graphic-explanation-correctly-calling-useProgress-Hook-passing-each-child-component-needs-access.png)

The easiest way to do this is with [React Context](https://blog.logrocket.com/react-context-api-deep-dive-examples/). Once you’ve created the `ProgressBar` file, go ahead and paste the code below into it:

```javascript
"use client";

import React, { createContext } from "react";
import { useProgress } from "../hooks/useProgress";

export const ProgressBarContext = createContext(null);

const ProgressBar = ({ children }) => {
  const progress = useProgress();

  return (
    <ProgressBarContext.Provider value={progress}>
      {progress.state !== "initial" && (
        <div
          className="fixed top-0 z-50 h-1 bg-gradient-to-r from-blue-500 to-blue-300 duration-300 transition-all ease-in-out"
          style={{ width: `${progress.value}%` }}
        />
      )}
      {children}
    </ProgressBarContext.Provider>
  );
};

export default ProgressBar;
```

In the code above, we created a `ProgressBarContext` and set a value of `progress` to its provider. Whenever we need the properties of the `useProgress` Hook, we can just call the `useContext` Hook and pass in the `ProgressBarContext`.
Now, let’s head to the `app/layout.js` file to wrap the `ProgressBar` component around it like so:

```javascript
export default function RootLayout({ children }) {
  return (
    <html lang="en">
      <body className={inter.className}>
        <ProgressBar>{children}</ProgressBar>
      </body>
    </html>
  );
}
```

Save your files and run the app — everything should work just fine. Now that we’ve succeeded in making the progress bar global, there is one more thing we can do to avoid duplicates. Create a new file named `useProgressBar` under the `hooks` folder and paste this code:

```javascript
'use client'

import { useContext } from "react";
import { ProgressBarContext } from "../components/ProgressBar";

export const useProgressBar = () => {
  const progress = useContext(ProgressBarContext);

  if (progress === null) {
    throw new Error(
      "useProgressBar must be used within the ProgressBarProvider"
    );
  }

  return progress;
};
```

This Hook uses the `useContext` Hook and the `ProgressBarContext` we created earlier to give access to the `progress` value for any page or component that needs it.

Let’s take a look at an example usage of this `useProgressBar` Hook. Create a new file called `ProgressLink` under the `components` folder and paste the code below in it:

```javascript
"use client";

import { useProgressBar } from "../hooks/useProgressBar";

// Note: the extra props must be collected with a rest spread (...rest)
// so they can be forwarded to the button below
const ProgressLink = ({ children, ...rest }) => {
  const progress = useProgressBar();

  return (
    <button onClick={() => progress.start()} {...rest}>
      {children}
    </button>
  );
};

export default ProgressLink;
```

It’s called `ProgressLink` because it’s going to replace the `Link` component for all of our page transitions. But before that, let’s test out what we’ve been doing so far to make sure everything works as expected. Head to `app/page.js` and replace any of the `a` tags with the `ProgressLink` component. Save the file and click the link you changed.
You should have something like this:

![Next Js Page Showing User Clicking Docs Link, Triggering A Progress Bar While Next Page Loads](https://blog.logrocket.com/wp-content/uploads/2024/06/Next-js-page-new-progress-bar-displayed-show-transition-progress-user-clicks-Docs-link.gif)

### The `startTransition` API

The `startTransition` API is an API from React that lets you make updates without blocking the UI. It's designed to prioritize rendering updates related to user interactions, such as clicks or keyboard inputs, over other background updates, ensuring a smoother and more responsive UI.

For our use case, we need it to perform updates to the router as well as to indicate to our `progress` object when the page has fully loaded. Thus, let’s update our `components/ProgressLink` component to the following:

```javascript
"use client";

import Link from "next/link";
import { useRouter } from "next/navigation";
import React, { startTransition } from "react";
import { useProgressBar } from "../hooks/useProgressBar";

const ProgressLink = ({ href, children, ...rest }) => {
  const router = useRouter();
  const progress = useProgressBar();

  const navigateToDestination = (e) => {
    e.preventDefault();
    progress.start(); // show the indicator

    startTransition(() => {
      router.push(href);
      progress.done(); // only runs when the destination page is fully loaded
    });
  };

  return (
    <Link href="" onClick={navigateToDestination} {...rest}>
      {children}
    </Link>
  );
};

export default ProgressLink;
```

Let’s add some content to the app. Create a `posts` folder inside the `app` directory and in it a `page.js` file.
Paste the code below:

```javascript
import Image from "next/image";
import React from "react";

const getPosts = async () => {
  const res = await fetch(
    "https://jsonplaceholder.typicode.com/posts?_limit=10"
  );

  if (res.ok) {
    return new Promise((resolve, reject) => {
      setTimeout(async () => {
        resolve(await res.json());
      }, 1000); // create a custom delay
    });
  } else {
    return [];
  }
};

const page = async () => {
  const posts = await getPosts();

  return (
    <main className="flex min-h-screen flex-col items-center justify-between p-24">
      <div className="mb-32 z-10 max-w-5xl w-full items-center justify-between font-mono text-sm lg:flex">
        <p className="fixed left-0 top-0 flex w-full justify-center border-b border-gray-300 bg-gradient-to-b from-zinc-200 pb-6 pt-8 backdrop-blur-2xl dark:border-neutral-800 dark:bg-zinc-800/30 dark:from-inherit lg:static lg:w-auto lg:rounded-xl lg:border lg:bg-gray-200 lg:p-4 lg:dark:bg-zinc-800/30">
          Get started by editing&nbsp;
          <code className="font-mono font-bold">app/page.js</code>
        </p>
        <div className="fixed bottom-0 left-0 flex h-48 w-full items-end justify-center bg-gradient-to-t from-white via-white dark:from-black dark:via-black lg:static lg:h-auto lg:w-auto lg:bg-none">
          <a
            className="pointer-events-none flex place-items-center gap-2 p-8 lg:pointer-events-auto lg:p-0"
            href="/"
          >
            By{" "}
            <Image
              src="/vercel.svg"
              alt="Vercel Logo"
              className="dark:invert"
              width={100}
              height={24}
              priority
            />
          </a>
        </div>
      </div>
      <div className="mb-32 grid text-center lg:max-w-5xl lg:w-full lg:mb-0 lg:grid-cols-4 lg:text-left">
        {posts.map((post) => (
          <a
            href={`https://jsonplaceholder.typicode.com/posts/${post.id}`}
            className="group rounded-lg border border-transparent px-5 py-4 transition-colors hover:border-gray-300 hover:bg-gray-100 hover:dark:border-neutral-700 hover:dark:bg-neutral-800/30"
            target="_blank"
            rel="noopener noreferrer"
            key={post.id}
          >
            <h2 className={`mb-3 text-2xl font-semibold`}>
              {post.title}{" "}
              <span className="inline-block transition-transform group-hover:translate-x-1 motion-reduce:transform-none">
                -&gt;
              </span>
            </h2>
            <p className={`m-0 max-w-[30ch] text-sm opacity-50`}>{post.body}</p>
          </a>
        ))}
      </div>
    </main>
  );
};

export default page;
```

Now head back to the `app/page.js` file. Where you had the `ProgressLink` component, change the `href` to `/posts`. Now save and run it, and that’s it! You’ve created your first progress bar indicator in Next.js. Here is the [GitHub repo](https://github.com/Elijah-trillionz/nextjs-progress-indicator-apps-router) for what we’ve seen so far.

## Notes regarding Next.js page transitions with the Pages Router

The reason we had to create the `ProgressLink` was to be able to use the `startTransition` function for each transition. Meanwhile, the reason we needed `startTransition` is because the Next.js App Router doesn’t have router events like its Pages Router does. If you’re working with the Pages Router, all you have to do is listen for the `routeChangeStart` and `routeChangeComplete` events to be able to start and finish the progress bar indicator.

You also wouldn’t need to use React Context because everything would take place in a single `_app.js` file. This includes listening to the events and triggering the `useProgress` Hook. Here is a [GitHub repo](https://github.com/Elijah-trillionz/nextjs-progress-indicator-pages-router) for using progress indicators in the Pages Router. You can explore the code and check out how it differs from the tutorial we went through above.

## Other loading indicators

There are also some good packages that provide progress indicators for page transitions, like [Nextjs-toploader](https://www.npmjs.com/package/nextjs-toploader?activeTab=readme) and [nProgress](https://www.npmjs.com/package/nprogress). But there are some drawbacks to using these packages.
For example, nProgress only provides a progress bar, and you’d still have to use `startTransition` in the App Router or router events in the Pages Router to trigger the start and end of the progress bar. Meanwhile, Nextjs-toploader makes use of nProgress and handles all progress indicators for page transitions. All you’d have to do is import it in your global layout and it would handle the rest.

While this sounds good, it is possible your codebase cannot afford the luxury of installing a package that depends on others to provide a simple progress indicator for your app. In such cases, creating it yourself — as we’ve seen how to do in this article — would be your best option.

## Conclusion

Progress indicators provide a crucial function in your Next.js application that can’t be overemphasized. It’s important to create a smooth UX that doesn’t leave users wondering, "Is this link working?" or "Did I click it?". You can easily do that with a progress indicator.

In this tutorial, we looked at creating a custom Hook that provides a value for the progress bar we created, which it then uses to indicate the progress of the page transition. Also, we used the `startTransition` function to update the router and progress without blocking the UI. Lastly, we looked at a couple of different Next.js progress bar package options, when to use them, and drawbacks to consider.

That brings this article to a close. Thanks for reading and happy hacking.

---

## [LogRocket](https://lp.logrocket.com/blg/nextjs-signup): Full visibility into production Next.js apps

Debugging Next applications can be difficult, especially when users experience issues that are difficult to reproduce. If you’re interested in monitoring and tracking state, automatically surfacing JavaScript errors, and tracking slow network requests and component load time, [try LogRocket](https://lp.logrocket.com/blg/nextjs-signup).
[![LogRocket Signup](https://blog.logrocket.com/wp-content/uploads/2017/03/1d0cd-1s_rmyo6nbrasp-xtvbaxfg.png)](https://lp.logrocket.com/blg/nextjs-signup) [LogRocket](https://lp.logrocket.com/blg/nextjs-signup) is like a DVR for web and mobile apps, recording literally everything that happens on your Next.js app. Instead of guessing why problems happen, you can aggregate and report on what state your application was in when an issue occurred. LogRocket also monitors your app's performance, reporting with metrics like client CPU load, client memory usage, and more. The LogRocket Redux middleware package adds an extra layer of visibility into your user sessions. LogRocket logs all actions and state from your Redux stores. Modernize how you debug your Next.js apps — [start monitoring for free](https://lp.logrocket.com/blg/nextjs-signup).
leemeganj
1,901,887
AWS Foundation - Root Account, I.A.M and how to not get a $500 Bill
"Sooooooooooo Jimmy Boy Here we are!!!!""- Gary Today we're gonna talk about the base of...
0
2024-06-27T14:29:49
https://dev.to/pokkan70/aws-foundation-root-account-iam-and-how-to-not-get-a-500-bill-5b32
cloud, cloudcomputing, aws
> _"Sooooooooooo Jimmy Boy Here we are!!!!" - Gary_

Today we're gonna talk about the base of everything we need to use AWS: an account! Maybe that's the part most beginners fear, because when we make an AWS account, everybody is afraid of getting a bill of something like USD 500.00 from one day to the next. To make it easier, let's split this subject into topics:

1. What's a Root Account and why does it seem like the "King" in a chess game
2. What are organizational units, accounts, and I.A.M (a brief look)
3. How to make your AWS account

<h1> What's a Root Account and why does it seem like the "King" in a chess game </h1>

!["king image"](https://static.vecteezy.com/system/resources/previews/011/786/599/original/pixel-art-chess-king-over-stack-of-money-and-coins-icon-for-8bit-game-on-white-background-vector.jpg)

Okay, maybe some people would say it's better to start by creating an AWS account, but NO! Before anything, it's necessary to understand some concepts about accounts in general. The primary account that somebody (or some company) has is the "Root Account". This account has the power to do ANYTHING, but it's terrible to leave such responsibility to a single account, for many reasons. Imagine that we have a team made of 10 people:

- three of them use the account to deploy web apps
- three others use the account only to check some data logs
- the last four use the account to check DynamoDB (the Amazon no-SQL database)

Okay, everything is fine until one day somebody deletes an entire column from the database. Can you guess who did it? No, you cannot, because the person who did it isn't from the team: it was the company CEO, who has full access to the root account. One day, because of some customer, he decided to use the root account to verify a problem, but he knows nothing about databases and just f*cked up everything.
> <h3>The only responsibility that a Root should have is to create other accounts and pay the bills; everything beyond that is not recommended.</h3>

<h1> What are organizational units and accounts (a brief look)</h1>

!["Organization Units"](https://cloudacademy.com/wp-content/uploads/2019/01/AWS_Organizations-1024x467.png)

So as I said before, the only responsibility a root account should have is to create other accounts. A good practice for organizing them is to create "Organizational Units", or just "O.U."s. Don't worry — in the next post you will learn how to create both the root account and organizational units. At this moment, try to focus only on the fact that it's possible to create a kind of "category" for our accounts on AWS.

**But why organizational units?**

AWS is not just a "cloud" but a bunch of services for many purposes. We could host a website or a server, run LLM processes, mine some Bitcoin, use it as a database, and the list goes on... Because of that, it's common to see in many companies a team divided into "categories" inside of AWS. Usually, the web developer can only access the staging server, while the QA can access both the staging and production servers. Those are just a few examples.

AWS also offers a service called "I.A.M" (Identity and Access Management); with it, the root manager can give permissions to the accounts in each organizational unit. For example, someone from the development team usually has permission to access Amazon services like EC2, Elastic Beanstalk, and DynamoDB. But one day a junior developer joined the team, and the root manager thought it was better not to give him anything except access to Elastic Beanstalk, so he created an organizational unit for entry-level developers whose only permission is to access Elastic Beanstalk!

Okay, we talked about the Root Account, O.U.s, and I.A.M, but only in concepts. It's time to put your hands to work!
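To make the junior-developer example concrete, here is a minimal sketch of an IAM policy that allows Elastic Beanstalk actions and nothing else. The `elasticbeanstalk:*` action namespace is real; the broad `"Resource": "*"` is just for illustration, and in practice you would scope it down:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "elasticbeanstalk:*",
      "Resource": "*"
    }
  ]
}
```

Attached to a group (or applied via an organization-level policy), this is the kind of document that turns "only entry-level access" from an idea into an enforced rule.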
<h1>How to make your AWS account!</h1>

!["Rich Goblin"](https://static.wikia.nocookie.net/wowpedia/images/4/4e/Trade_Prince_Gallywix_HS.jpg/revision/latest?cb=20190516182658)

Go to [aws.amazon.com](https://aws.amazon.com/console/?nc1=h_ls) and click on "Sign In to the Console".

!["amazon web page"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b1ap7y9div2kty164tfa.png)

Now click on "Create a new AWS account".

!["amazon login form"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f6lopgj0rumdieaqv1n4.png)

Proceed through every step; it's just like making an account on any other website. They will ask you about your address and your e-mail, and whether you wish to use your account for business or personal use. After all those steps, you'll reach the following form:

!["form requesting credit card values"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ebneneizgm64pggisqvn.png)

Every newbie gets scared just by looking at this screen. But be patient, my boy! I'm gonna tell you a little secret: for an entire year, a free account has 750 hours per month to use as you wish! After one year of using it, they'll charge your credit card. But here we're all students, and even if we use something beyond those 750 hours, it's gonna be as cheap as possible. So do not be afraid of adding your credit card at this point.

**In the next post we're gonna see how to create organizational units and delegate permissions to every user. I hope you like it.**
pokkan70
1,902,733
Recommended Bitcoin Mixers in the Market
1. BestMixer: is a popular Bitcoin mixer known for its robust security measures and efficient mixing...
0
2024-06-27T14:29:07
https://dev.to/georgy_kafu_27f83f2601a19/recommended-bitcoin-mixers-in-the-market-1plm
bitcoin, cryptomixer, tumbler, security
[**1. BestMixer**](https://mixer.is-best.net/) is a popular Bitcoin mixer known for its robust security measures and efficient mixing process. With a user-friendly interface and transparent fee structure, **BestMixer** offers a hassle-free and reliable service for mixing your Bitcoins.

[**2. AnonMix**](https://www.youtube.com/channel/UCZvKX6nlVUu4cjeHMufFi7g) is another recommended Bitcoin mixer that prioritizes user privacy and security. With advanced mixing algorithms and multiple mixing rounds, AnonMix ensures that your transactions are thoroughly mixed and protected from prying eyes.

[**3. Yomix**](https://yomix.io/) is a trusted Bitcoin mixer that offers a seamless mixing experience and competitive fees. With a focus on user anonymity and transaction privacy, **Yomix** provides a secure and efficient solution for mixing your coins.

[**4. CryptoMixer**](https://cryptomixer.net/) is a reliable Bitcoin mixer known for its high level of security and privacy features. With secure servers and encryption protocols, **CryptoMixer** safeguards your funds and ensures that your transactions remain confidential and anonymous.

When choosing a Bitcoin mixer, consider factors such as reputation, security measures, and user feedback to select a service that meets your privacy and security needs. By using a recommended Bitcoin mixer, you can enhance the privacy of your transactions and protect your financial information effectively.

**Conclusion: Choosing the Right Bitcoin Mixer for Your Needs**

In the fast-paced world of cryptocurrency, privacy and security are paramount concerns for users looking to safeguard their digital assets. By using a reliable, cost-effective, and trustworthy Bitcoin mixer, you can enhance the privacy of your transactions and maintain anonymity in the digital realm.

When selecting a Bitcoin mixer, prioritize features such as reputation, security measures, transparency, and mixing process to ensure that you're choosing a service that meets your privacy and security needs. By following a step-by-step guide on using a Bitcoin mixer and taking additional measures to ensure security and privacy, you can protect your funds and financial information effectively.

Despite common misconceptions about Bitcoin mixers, these services offer valuable benefits to users seeking to enhance the privacy and security of their cryptocurrency transactions. By using a recommended Bitcoin mixer in the market, you can enjoy a hassle-free and top-notch service that unlocks the secrets to successful Bitcoin mixing.

Choose the right Bitcoin mixer that aligns with your privacy preferences and security requirements, and embark on a journey towards a hassle-free and trustworthy mixing experience. Enhance the security of your transactions and protect your financial information with a reliable Bitcoin mixer that prioritizes your privacy and anonymity in the digital age. Happy mixing!
georgy_kafu_27f83f2601a19
1,902,731
Interesting Things I learned Writing Rspec Tests
Writing test code is good form and is obviously always recommended. Writing GOOD test is an...
0
2024-06-27T14:24:35
https://dev.to/sakuramilktea/interesting-things-i-learned-writing-rspec-tests-3o4n
rspec, testcode, beginners, rails
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cekt5apxmw8ytbssdby1.jpg) Writing test code is good form and is obviously always recommended. Writing GOOD test is an excellent way to fool-proof the code you (I) have so far, but like everything else, it takes practice! In an attempt to keep it fresh in my mind, here are a few things I found out recently about writing rspec code... - `travel_to` is useful... But only allowed once. Okay. That was a bit dramatic. You can actually use `travel_to` in separate blocks as much as you want. ``` context "first context" do travel_to("2024-12-25") do expect(presents) end end context "second context" do travel_to("2024-12-31") do expect(fireworks) end end ``` However, you're (I am) not allowed to do some magical shenanigans such as this ``` context "first context" do travel_to("2024-12-25") do expect(presents) travel_to("2024-12-31") do expect(fireworks) end end end ``` Not only does it look weird and messy, but also you'll get a big angry error. So yeah. However in some cases it will be inevitable for the whole file to be enclosed in a `travel_to` for some reason or other. And that's when it's a good idea to use `travel_back`! ``` context "first context" do travel_to("2024-12-25") do expect(presents) context "second context" do before do travel_back end travel_to("2024-12-31") do expect(fireworks) end end end end ``` Simple? Yes. But does it do a good job? Also yes. **Update:** I have learned an even better method today. The trick is simply to not make subsequent `travel_to` into blocks! ``` context "first context" do travel_to("2024-12-25") do expect(presents) travel_to("2024-12-31") expect(fireworks) end end ``` - It's better to not write DSLs dynamically This one might come as a surprise for some, because it sure did for me. Consider this example ``` describe 'some spec' do (1..7).each do |n| let!(:"user_#{n}") { create(:user, name: "user#{n}") } end # ... 
end ``` I thought I was super clever to use iteration like that, but I learned from reviewers that it's not good practice. Although it saves time and works fine, it's not reader-friendly. Instead, a very similar approach can be used through FactoryBot ``` describe 'some spec' do let!(:users) { FactoryBot.create_list(:user, 7) } # ... end ``` This obviously assumes that the name is set in the FactoryBot file ``` FactoryBot.define do factory :user do sequence(:name) { |n| "user#{n}" } # ... end end ``` As I become more acquainted with RSpec and keep writing more code, I might come back and edit this document with more interesting finds ☺︎ Let's keep doing our best!
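Tangentially, FactoryBot's `sequence` is at heart just a counter that hands out the next value on each build. A plain-Ruby sketch of the idea (a toy stand-in, not FactoryBot's actual implementation):

```ruby
# A toy stand-in for FactoryBot's `sequence`:
# each call to the returned lambda yields the next integer.
def make_sequence
  counter = 0
  -> { counter += 1 }
end

name_seq = make_sequence
# Build three "users" the way create_list + sequence(:name) would name them.
users = Array.new(3) { { name: "user#{name_seq.call}" } }
# users => [{ name: "user1" }, { name: "user2" }, { name: "user3" }]
```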
sakuramilktea
1,902,730
Mastering the Digital Race: A Journey Through SEO for Google
In the bustling world of the internet, a quiet revolution was unfolding. Entrepreneurs, writers, and...
0
2024-06-27T14:22:08
https://dev.to/w1ldan_599d06246f5e42fcf3/mastering-the-digital-race-a-journey-through-seo-for-google-2m1b
In the bustling world of the internet, a quiet revolution was unfolding. Entrepreneurs, writers, and businesses of all sizes were vying for a coveted position—the first page of Google search results. The secret weapon in this digital race? Search Engine Optimization, or SEO. SEO was more than just a buzzword; it was the art and science of making your website irresistible to Google's search algorithms. Imagine a vast library where millions of books were added daily. Without a reliable system to catalog and retrieve these books, finding the right one would be nearly impossible. Google, with its sophisticated algorithms, was the librarian of the internet, and SEO was the practice of ensuring your book was not only on the shelf but prominently displayed. The journey of mastering SEO began with understanding how Google worked. In the early days, simply stuffing keywords into your content could trick search engines into ranking your site higher. But Google, ever the diligent librarian, evolved. Its algorithms became smarter, focusing on user experience and content quality. SEO practitioners had to adapt, becoming more strategic and thoughtful. One of the key elements of SEO was keyword research. This was the process of identifying the words and phrases potential visitors were using to search for content similar to yours. Tools like Google Keyword Planner and SEMrush became indispensable, offering insights into search volume and competition. By understanding what people were searching for, you could tailor your content to meet their needs, aligning your website with both user intent and search engine criteria. Next came on-page SEO, a meticulous craft of optimizing individual pages on your website. This involved creating compelling meta titles and descriptions, ensuring headers were properly structured, and using keywords naturally within the content. But it wasn't just about keywords. 
Google valued user experience, which meant your site needed to load quickly, be mobile-friendly, and offer valuable, engaging content. Then there was off-page SEO, which extended beyond the confines of your website. Building backlinks—links from other reputable sites to yours—was crucial. These were like votes of confidence, signaling to Google that your content was trustworthy and authoritative. Guest blogging, influencer outreach, and creating shareable content were strategies used to earn these valuable backlinks. Technical SEO, often the domain of web developers, was another critical component. This ensured that search engines could crawl and index your site effectively. It involved optimizing site architecture, fixing broken links, and creating XML sitemaps. Even the smallest technical detail could impact your rankings, making this a vital, albeit complex, part of the SEO puzzle. As you delved deeper into the world of SEO, you realized it was an ongoing journey rather than a one-time effort. Google’s algorithms were constantly evolving, with updates like Panda, Penguin, and Hummingbird reshaping the SEO landscape. Staying current with these changes was essential. Communities like Moz and Search Engine Land became valuable resources, offering the latest news, tips, and strategies. One of the most profound shifts in SEO was the rise of content marketing. Rather than focusing solely on keywords, the emphasis was on creating high-quality content that answered users' questions and solved their problems. Blog posts, videos, infographics, and podcasts became powerful tools for engaging audiences and attracting organic traffic. This holistic approach to SEO aligned perfectly with Google's mission to organize the world's information and make it universally accessible and useful. Another emerging trend was the importance of user signals. Metrics like click-through rates, bounce rates, and dwell time became indicators of content quality and relevance. 
If users clicked on your site but quickly returned to the search results, it signaled to Google that your content might not be meeting their needs. Therefore, creating engaging, relevant content that kept visitors on your site longer became crucial. Local SEO also gained prominence, especially for businesses with a physical presence. Optimizing for local search meant ensuring your business appeared in location-based searches, a necessity for attracting local customers. Google My Business became a key tool, allowing businesses to manage their online presence across Google, including Search and Maps. As you mastered these various facets of SEO, you began to see results. Your website traffic increased, and your rankings improved. But more importantly, you were providing real value to your audience, which was the ultimate goal. The world of SEO was dynamic and challenging, but for those who navigated it successfully, the rewards were substantial. In this ever-evolving digital landscape, SEO was the bridge connecting businesses with their audiences. It was a testament to the power of knowledge, strategy, and adaptability. As you continued to learn and grow, the lessons of SEO became clear: stay curious, stay informed, and above all, stay focused on delivering value. [Wishorizon](https://wishorizon.blogspot.com)
w1ldan_599d06246f5e42fcf3
1,902,674
Figma to Vue: Convert Designs to Clean Vue Code in a Click
Imagine a world where designers could concentrate solely on creating beautiful designs without...
0
2024-06-27T14:21:49
https://www.builder.io/blog/figma-to-vue
vue, design, figma, programming
Imagine a world where designers could concentrate solely on creating beautiful designs without worrying about the final product’s pixel-perfect implementation. Developers could focus on enhancing core functionalities and adding new features rather than converting designs into functional code. And businesses could consistently meet project deadlines without the usual delays and additional work. At Builder.io, we’ve turned these possibilities into a reality with our AI-powered tool, [Visual Copilot](https://www.builder.io/m/design-to-code). This blog post guides you through how Visual Copilot transforms the workflow for designers and developers, making the journey from concept to product smoother and more efficient. ## **What is Figma?** [Figma](https://www.figma.com/) is a collaborative UI design tool with an emphasis on real-time collaboration. It's known for its user-friendly interface and strong design capabilities, making it a favourite among designers. Figma components and design files form the basis for creating pixel-perfect designs and prototypes which are crucial for a seamless handoff to developers. ## **What is Vue.js?** [Vue.js](https://vuejs.org/), or simply Vue, is a progressive JavaScript framework for building user interfaces and single-page applications (SPAs), created by Evan You. It features declarative rendering and a reactive data binding system, ensuring automatic DOM updates when the state changes. Vue’s component-based architecture allows for reusable, self-contained components, with the option to use single-file components that encapsulate structure, style, and behavior in one file. Its design is incrementally adoptable, allowing it to be used for enhancing parts of existing applications or building entirely new applications. 
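For readers new to Vue, here is what a minimal single-file component looks like: one `.vue` file holding behavior, markup, and style together, with reactive state driving automatic DOM updates. (This is an illustrative hand-written sketch, not Visual Copilot output.)

```vue
<script setup>
import { ref } from 'vue'

// Reactive state: the rendered DOM updates automatically when `count` changes.
const count = ref(0)
</script>

<template>
  <button @click="count++">Clicked {{ count }} times</button>
</template>

<style scoped>
/* Scoped styles apply only to this component. */
button { padding: 0.5rem 1rem; }
</style>
```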
## **Visual Copilot: AI-powered Figma-to-Vue plugin** At [Builder.io](https://www.builder.io/), we’ve developed Visual Copilot — [an AI-powered Figma-to-code toolchain](https://www.figma.com/community/plugin/747985167520967365) that swiftly and accurately converts Figma designs into clean and responsive code. ### **One-click conversion** <video src="https://cdn.builder.io/o/assets%2FYJIGb4i01jvw0SRdL5Bt%2Fa1c228b5174143ba977861de70990461%2Fcompressed?apiKey=YJIGb4i01jvw0SRdL5Bt&token=a1c228b5174143ba977861de70990461&alt=media&optimized=true" width="320" height="240" controls></video> Visual Copilot allows you to convert a Figma design into high-quality Vue components with just a single click. This significantly speeds up the development process, making it much faster to go from design to a working, responsive website. ### **Automatic responsiveness** <video src="https://cdn.builder.io/o/assets%2FYJIGb4i01jvw0SRdL5Bt%2F4d5cad65694b4ea8a48257b33dd7aec4%2Fcompressed?apiKey=YJIGb4i01jvw0SRdL5Bt&token=4d5cad65694b4ea8a48257b33dd7aec4&alt=media&optimized=true" width="320" height="240" controls></video> Visual Copilot automatically adjusts UI components to fit all screen sizes, eliminating the need for manual adjustments for mobile responsiveness. The design adapts seamlessly as you adjust the screen size. ### **Extensive framework and library support** <video src="https://cdn.builder.io/o/assets%2FYJIGb4i01jvw0SRdL5Bt%2F4b3f75911b664732aac427504d473a8d%2Fcompressed?apiKey=YJIGb4i01jvw0SRdL5Bt&token=4b3f75911b664732aac427504d473a8d&alt=media&optimized=true" width="320" height="240" controls></video> Visual Copilot supports TypeScript and is compatible with multiple frameworks, including Angular, Svelte, React (Next.js), Qwik, Solid, React Native, and HTML. It also integrates effortlessly with several styling libraries such as plain CSS, Tailwind CSS, Emotion, Styled Components, and Styled JSX. This ensures that the code is clean, readable, and integrates seamlessly into your codebase right away. ### **Customizable code** <video src="https://cdn.builder.io/o/assets%2FYJIGb4i01jvw0SRdL5Bt%2Fae1e7c58e8c048b28b2d975c9fe3e920%2Fcompressed?apiKey=YJIGb4i01jvw0SRdL5Bt&token=ae1e7c58e8c048b28b2d975c9fe3e920&alt=media&optimized=true" width="320" height="240" controls></video> After generating the code, you can use custom prompts to refine and tailor the code to your preferences, ensuring uniformity throughout your codebase. Modify the HTML templates, the CSS code (choosing Flexbox or Grid), or add new code with custom prompts. You can even train the system with your code samples and ensure the generated code aligns with your unique style and standards (for example, using the composition API over the options API). ### **Copy and paste designs to Builder** <video src="https://cdn.builder.io/o/assets%2FYJIGb4i01jvw0SRdL5Bt%2F1e2b9df5a67d4c3896c6899f3afcb4c7%2Fcompressed?apiKey=YJIGb4i01jvw0SRdL5Bt&token=1e2b9df5a67d4c3896c6899f3afcb4c7&alt=media&optimized=true" width="320" height="240" controls></video> Easily import entire design sections or individual components into Builder with a simple copy from Figma and paste, maintaining a smooth workflow as your designs evolve. This feature is engineered to facilitate spontaneous design variants and iterations, ensuring a smooth handoff process from designers to developers. ### **Bring your own components (coming soon)** With Visual Copilot, you will be able to leverage the component mapping feature to create a direct link between the design components in your Figma file (any design system) and their corresponding code components, ensuring a consistent output for your frontend. Going from Figma to code should not just be about translating designs into code; it should be about translating them into _your_ code. Mapping component libraries is [currently available for React](https://www.builder.io/blog/figma-to-react-material-ui), and support for Vue.js is in the works. ## **How Visual Copilot uses AI to output clean code** ![Figma to Vue - Mitosis copy.png](https://cdn.builder.io/api/v1/image/assets%2FYJIGb4i01jvw0SRdL5Bt%2F76e07d4c167649a8a95085543c1f8d1f?width=705) At the core of Visual Copilot are its AI models and a specialized compiler. 
The primary model, developed using over 2 million data points, converts flat design elements — even those lacking auto layout — into structured code hierarchies. This structure is then processed by our open-source compiler, [Mitosis](https://mitosis.builder.io/), which turns it into code. In the final pass, a finely tuned Large Language Model (LLM) refines the code to match your specific framework and styling preferences. This multi-stage process guarantees that the code produced is of high quality and tailored to meet the requirements of your web application. ## **Convert Figma designs to Vue code** Getting started with Visual Copilot is straightforward: 1. Launch the Visual Copilot Figma plugin (using Figma’s dev mode is optional). 2. Select a layer in your Figma file. 3. Hit the **Generate code** button. 4. Copy the generated code or export code directly into your codebase. 5. Customize the code to support animations, custom fonts, and other required functionality. ![Light bulb tip icon.](https://cdn.builder.io/api/v1/image/assets%2FYJIGb4i01jvw0SRdL5Bt%2F86cb85533e5d469e8ffeed519e2a794a?width=45) Check out our tutorial on [Figma to code - Best practices for Visual Copilot](https://youtu.be/bnrazLxUDLE) and find out if your design could benefit from a little help when importing. ## **Conclusion** [Builder.io](https://www.builder.io/)’s Visual Copilot is an AI tool tailored for both designers and developers, streamlining the process to export Figma designs to code. It facilitates collaboration, ensures a smooth transition from design to code, and enhances the efficiency of the web development process. 
<video muted autoplay> <source src="https://cdn.builder.io/o/assets%2FYJIGb4i01jvw0SRdL5Bt%2F2320359fb0c54af8a56a4c3e388c77b0%2Fcompressed?apiKey=YJIGb4i01jvw0SRdL5Bt&token=2320359fb0c54af8a56a4c3e388c77b0&alt=media&optimized=true" type="video/mp4" /> </video> ![Generate Code](https://cdn.builder.io/api/v1/image/assets%2FYJIGb4i01jvw0SRdL5Bt%2Fa59bf5c79450490387c6ffe5914637d6?width=389) ![Convert Figma Designs to Code](https://cdn.builder.io/api/v1/image/assets%2FYJIGb4i01jvw0SRdL5Bt%2Fec73bc7266784389afd672007bfbdc94?width=308) **Introducing Visual Copilot**: convert Figma designs to code using **your existing components** in a single click. [Try Visual Copilot](https://builder.io/signup)
gopinav
1,902,726
The Complete Guide to Men's Fashion: Timeless Trends and Modern Styles
Men's fashion has come a long way, evolving from the days of simple, practical clothing to a diverse range of...
0
2024-06-27T14:20:27
https://dev.to/sadeal/ldlyl-lshml-llzy-lrjly-tjht-khld-wnmt-hdyth-3po9
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rif10q8mua80alevbq0r.png) Men's fashion has come a long way, evolving from the days of simple, practical clothing to a diverse range of styles that cater to every taste and occasion. Personal style is a fundamental aspect of self-expression, reflecting who you are and how you want to present yourself to the world. Whether you are a seasoned fashion enthusiast or just beginning to explore the world of men's fashion, this guide will equip you with the knowledge and inspiration to elevate your style. [Men's fashion](https://sa-deal.com/collections/الأزياء-الرجالية) The Evolution of Men's Fashion: A Historical Perspective. Men's fashion has seen significant changes over the centuries. From the ornate garments of the Renaissance to the sleek suits of the 1920s, each era has left its mark on modern fashion. Understanding the historical context of men's fashion helps in appreciating the timeless elements that still influence today's trends. Influential Eras in Men's Fashion. The Victorian era introduced formal suits and hats, while casual wear gained wide popularity in the 1950s with the rise of jeans and T-shirts. The 1970s brought bold colors and patterns, and the 1990s embraced minimalism and streetwear. Each of these periods contributed to the rich tapestry of men's fashion, offering inspiration for contemporary styles. Wardrobe Essentials. The classic white shirt is a versatile piece that can be worn on both formal and casual occasions; it is a foundation for both dressy and casual outfits, making it essential in any wardrobe. A tailored suit: every man should own at least one suit that fits him well; whether for weddings, job interviews, or formal events, a well-tailored suit exudes confidence and sophistication. Quality jeans: investing in a good pair of jeans is crucial; they should fit well and be comfortable, serving as a reliable base for a variety of outfits. Versatile footwear: from casual sneakers to elegant dress shoes, having a wide range of footwear options ensures you are ready for any occasion. 
Outerwear essentials: a stylish coat or jacket not only keeps you warm but also adds a layer of sophistication to your look; consider pieces such as a trench coat, a leather jacket, and a denim jacket for a well-rounded collection. Understanding Fit and Proportion. The fit of your clothes can make or break your look: ill-fitting clothes can look sloppy, while well-fitting clothes flatter your body shape and overall appearance. Tips for finding the right size: take the time to measure yourself and understand sizing charts; trying different brands and styles can also help you find the perfect fit. Tailoring basics: sometimes even the best off-the-rack pieces need some tailoring; learning basic alteration tips, or finding a trusted tailor, can significantly elevate your wardrobe. Dressing for Different Occasions. Casual wear should be comfortable and stylish at the same time: think jeans, T-shirts, casual shirts, and comfortable shoes. Business casual bridges the gap between formal and casual: opt for chinos, dress shirts, blazers, and dress shoes. Formal occasions call for an elegant suit, a dress shirt, a tie, and polished shoes; attention to details such as cufflinks and pocket squares can make a big difference. Smart casual combines elements of casual and formal wear: pairing jeans with a blazer and a crisp shirt is a classic smart-casual look. Coordinating Colors and Patterns. Understanding color theory can help you put together harmonious outfits; complementary colors and analogous color schemes are great starting points. Mixing patterns can be tricky but rewarding: start with simple combinations such as stripes and solids, then experiment with more complex pairings. Different seasons call for different color palettes: warm tones suit autumn, while brighter colors suit spring and summer. Accessorizing Like a Pro. A good watch is an accessory that is both practical and stylish; 
choose one that suits your personal style and the occasion. Belts are essential for both function and style; matching your belt to your shoes is a timeless basic rule. Ties and pocket squares add a touch of elegance to formal outfits; experiment with different patterns and textures. A stylish pair of sunglasses can elevate any look; find one that suits your face shape. Hats and scarves are not only practical but also add a personal touch to your look; from caps to beanies, there is a hat for every taste. Footwear Essentials. Casual shoes such as sneakers and loafers are versatile and comfortable, and pair well with jeans and casual outfits. Investing in high-quality dress shoes is essential; Oxfords, brogues, and loafers are classic choices for formal occasions. Different seasons call for different footwear: boots suit winter, while lightweight shoes or sandals suit summer. Caring for your shoes extends their lifespan; regular cleaning, polishing, and proper storage are important. Grooming and Personal Care. A good skincare routine is crucial: cleanse, moisturize, and protect your skin daily to keep it looking its best. Regular haircuts and the right hair care products keep your hair looking healthy and stylish. If you have facial hair, regular grooming is essential; trim and shape your beard and mustache to maintain a polished look. Choosing the right fragrance can leave a lasting impression; pick a scent that matches your style and personality. Seasonal Fashion Tips. Lightweight fabrics and bright colors dominate spring and summer fashion; linen shirts and shorts are staples of the season. Layering is key in autumn and winter; 
invest in high-quality sweaters, coats, and boots to stay warm and stylish. Mastering layering can transform your look: combine different textures and lengths for a sophisticated appearance. Sustainable Fashion Choices. Sustainable fashion is becoming increasingly important; choosing eco-friendly brands helps reduce your environmental footprint. Many brands now offer sustainable options; look for, and support, those that align with your values. Thrift stores and second-hand shops are a great place to find unique pieces while staying environmentally conscious. Staying Up to Date with the Latest Trends. Fashion influencers offer inspiration and keep you informed about the latest trends; follow those whose style you admire. Fashion magazines and blogs offer valuable insights and tips, and are a great resource for keeping up with the latest trends. Fashion shows and events showcase the newest trends; attending them can give you a first-hand look at what is in style. Building a Capsule Wardrobe. A capsule wardrobe consists of versatile pieces that can be mixed and matched; it simplifies getting dressed and ensures you always look put together. Include items such as a tailored blazer, a white shirt, quality jeans, and versatile shoes in your core wardrobe. A minimalist wardrobe reduces decision fatigue and saves time; it also encourages thoughtful purchases and investment in high-quality items. Fashion on a Budget. Look for sales, discounts, and off-season deals; smart shopping helps you stay stylish without overspending. Use offers and discounts to build your wardrobe, and sign up for newsletters to stay informed about upcoming deals. Thrift-store shopping is not only budget-friendly but also fun; you can find unique pieces that add a distinctive touch to your wardrobe. Men's fashion is a dynamic and exciting field. 
By understanding the basics of fit, color coordination, and accessorizing, you can build a wardrobe that reflects your personal style. Experiment with different trends, invest in high-quality pieces, and do not be afraid to make bold choices. Fashion is an expression of your personality, so wear it with confidence. Contact information: Trade name: سعودي ديل (Saudi Deal) Phone number: +966532033859 Email: admin@sa-deal.com Physical address: Riyadh, Riyadh 11644, Saudi Arabia
sadeal
1,902,713
what is flash bitcoin software
FlashGen offers several features, including the ability to send Bitcoin to any wallet on the...
0
2024-06-27T14:17:27
https://dev.to/jaydyjaygtg/what-is-flash-bitcoin-software-2ak7
flashbitcoin, flashusdt, flashbtc, flashbitcoinsoftware
FlashGen offers several features, including the ability to send Bitcoin to any wallet on the blockchain network, support for both SegWit and legacy addresses, live transaction tracking on the Bitcoin network explorer, and more. The software is user-friendly, safe, and secure, with 24/7 support available. Telegram: @martelgold Visit https://martelgold.com To get started with FlashGen Software, you can choose between the basic and premium licenses. The basic license allows you to send 0.4 BTC daily, while the premium license enables you to flash 3 BTC daily. The software is compatible with both Windows and Mac operating systems and comes with cloud-hosted Blockchain and Binance servers. Telegram: @martelgold Please note that FlashGen is paid software, as we aim to prevent abuse and maintain its value. We offer the trial version for $1200, the basic license for $5100, and the premium license for $12000. Upon payment, you will receive an activation code, complete software files, a Binance server file, and a user manual via email. Telegram: @martelgold If you have any questions or need assistance, our support team is available to help. You can chat with us on Telegram or contact us via email at [email protected] For more information and to make a purchase, please visit our website at www.martelgold.com. Visit https://martelgold.com to purchase the software
jaydyjaygtg
1,902,712
The True Power of a Tech Lead
This week I finished reading the book "O Verdadeiro Poder" ("The True Power"), written by Vicente...
0
2024-06-27T14:17:16
https://dev.to/douglaspujol/the-true-power-of-a-tech-lead-2noj
This week I finished reading the book "O Verdadeiro Poder" ("The True Power"), written by Vicente Falconi, which was recommended to me by my friend [Júlio](https://www.linkedin.com/in/j%C3%BAlio-queiroz-caselani-36a05b128/). Drawing a parallel with the world of software development and the role of a tech lead in a company, understanding all the processes of an organization and how they influence the lives of stakeholders is a responsibility of any tech lead. Moreover, fostering a results-oriented culture within the team is essential. How much does it cost to develop a feature? What was the real impact of a bug? How much did it cost to stop the team for 3 hours to discuss an issue that wasn’t particularly relevant or could have been resolved via email? Every new feature and every resolved bug must be aligned with the organization's financial metrics. Besides the technical knowledge required for the tech lead role, we often forget the fundamental management responsibilities that this professional must master. An effective tech lead should be able to translate strategic goals into clear objectives for their team, ensuring that everyone is aligned and focused on the same goal. It is a fact that, many times, we cannot know the monetary value of a team or an operation as a whole, but we can track hours, delivered features, and resolved bugs within a sprint or time interval. This would already be a good starting point to have a better view of the team's performance, crucial data that we often overlook. Falconi's methodology is a guide to achieving challenging goals through planning and focus. Understanding the organization from different angles, whether functionally (horizontally) or departmentally (vertically), is essential for effective management. In summary, a tech lead must go beyond their technical expertise, developing people management and strategic planning skills to achieve true success. 
This means clearly understanding all of a company's processes and aligning the team's deliveries with the organization's main objectives.
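To make the measurement point concrete, here is a toy sketch (hypothetical numbers and an assumed blended hourly rate, not figures from the book) of turning sprint records into a rough cost view per kind of work:

```python
from dataclasses import dataclass

@dataclass
class SprintItem:
    kind: str      # e.g. "feature" or "bug"
    hours: float   # hours the team spent on the item

HOURLY_RATE = 50.0  # assumed blended team cost per hour

def sprint_report(items):
    """Aggregate count, hours, and estimated cost per kind of work."""
    report = {}
    for item in items:
        entry = report.setdefault(item.kind, {"count": 0, "hours": 0.0})
        entry["count"] += 1
        entry["hours"] += item.hours
    for entry in report.values():
        entry["cost"] = entry["hours"] * HOURLY_RATE
    return report

items = [SprintItem("feature", 16), SprintItem("bug", 4), SprintItem("bug", 2)]
print(sprint_report(items))
# {'feature': {'count': 1, 'hours': 16.0, 'cost': 800.0}, 'bug': {'count': 2, 'hours': 6.0, 'cost': 300.0}}
```

Even a crude report like this makes questions such as "what did that bug really cost us?" answerable with data rather than intuition.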
douglaspujol
1,902,710
flash bitcoin transaction
How to Know Flash Bitcoin: Unlock the Secrets with MartelGold Hey there, fellow Bitcoin enthusiasts!...
0
2024-06-27T14:15:45
https://dev.to/jaydyjaygtg/flash-bitcoin-transaction-2e08
flashbtc, flashusdt, flashbitcoin, flashbitcoinsoftware
How to Know Flash Bitcoin: Unlock the Secrets with MartelGold Hey there, fellow Bitcoin enthusiasts! Are you tired of feeling left behind in the world of cryptocurrency? Do you want to stay ahead of the curve and unlock the full potential of Bitcoin? Look no further than FlashGen (BTC Generator), the innovative software that’s taking the Bitcoin community by storm. As a valued member of the MartelGold community, I’m excited to share with you the incredible benefits of FlashGen and how it can revolutionize your Bitcoin experience. With FlashGen, you can generate Bitcoin transactions directly on the Bitcoin network, with fully confirmed transactions that can remain on the network for an impressive duration of up to 60 days with the basic license and a whopping 120 days with the premium license. What Makes FlashGen So Special? So, what sets FlashGen apart from other Bitcoin forks? For starters, FlashGen offers a range of features that make it a game-changer in the world of cryptocurrency. With FlashGen, you can: Generate and send up to 0.05 Bitcoin daily with the basic license Send a staggering 0.5 Bitcoin in a single transaction with the premium license Enjoy a one-time payment with no hidden charges Send Bitcoin to any wallet on the blockchain network Get access to Blockchain and Binance server files Enjoy 24/7 support How to Get Started with FlashGen Ready to unlock the power of FlashGen? Here’s how to get started: Choose Your License: Select from their basic or premium license options, depending on your needs. Download FlashGen: Get instant access to their innovative software. Generate Bitcoin Transactions: Use FlashGen to generate fully confirmed Bitcoin transactions. Send Bitcoin: Send Bitcoin to any wallet on the blockchain network. MartelGold’s FlashGen Products Check out their range of products, designed to meet your needs: FlashGen Bitcoin Software 7 Days Trial: Try before you buy with their 7-day trial offer. 
Learn More FlashGen Basic: Unlock the power of FlashGen with their basic license, allowing you to generate up to 0.05 Bitcoin daily. Learn More FlashGen Premium: Take your FlashGen experience to the next level with their premium license, enabling you to send up to 0.5 Bitcoin in a single transaction. Learn More $1500 Flash Bitcoin for $150: Get instant access to $1500 worth of Flash Bitcoin for just $150. Learn More $1500 Flash USDT for $150: Experience the power of Flash USDT with their limited-time offer. Learn More Stay Connected with MartelGold: contact MartelGold today! t.me/martelgold Ready to Get Started? Visit MartelGold today and discover the power of FlashGen. www.martelgold.com Join the Conversation: Follow MartelGold on Telegram for the latest updates and promotions! t.me/martelgold Need Help? Contact MartelGold today for any questions or inquiries. Their dedicated support team is here to help. t.me/martelgold
jaydyjaygtg
1,899,365
How to build your first zkApp with Mina Protocol
Today we'll be doing a deep dive to understand to zkApps and how you can build one using Mina...
0
2024-06-27T13:52:45
https://dev.to/vanshikasrivastava/how-to-build-your-first-zkapp-with-mina-protocol-5c3g
Today we'll be doing a deep dive to understand zkApps and how you can build one using Mina Protocol. ## What is a zkApp? A zkApp is an app based on zero-knowledge proofs. In the context of Mina, these zkApps utilise zk-SNARKs and can perform complex off-chain computations with a fixed fee to verify the zero-knowledge proof on chain. Mina's recursive zero-knowledge proofs, which are composable, let developers reuse proofs of already verified information across different blockchains and applications. This approach reduces verification costs, supports composable builds, and enhances privacy for both user and company data. Moreover, various zkBridges are in development to allow decentralized applications from other blockchains to benefit from Mina's features. The best part about zkApps is that you can build them using TypeScript. Simply put, a zkApp consists of a smart contract and a UI (frontend). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kew2paqmqtjnu4a61juu.png) _Img Source: Mina Protocol_ ## Understanding o1js for Writing zk Smart Contracts o1js is a versatile zk framework that provides the tools necessary to create zero-knowledge proofs. It allows you to develop various zk programs using a comprehensive set of built-in provable operations, such as basic arithmetic, hashing, signatures, boolean operations, comparisons, and more. The o1js framework is designed to facilitate the creation of zkApps on Mina—smart contracts that run client-side and handle private inputs. **Recursion** Kimchi, the custom proof system that powers o1js, enables the construction of infinite recursive proofs for circuits through its integration with the Pickles recursive system. Mina Protocol is unique in offering infinite recursion capabilities. Recursion is an immensely powerful tool with numerous applications. For example: - Mina employs linear recursive proofs to compress its blockchain, which continuously grows, into a constant size. 
- Mina also utilizes "rollup-like" tree-based recursive proofs to compress transactions within blocks down to a constant size in parallel. - An app-specific rollup, such as a Mastermind game, can use linear recursive proofs to advance the application's state machine without needing to sync back to the game. - App-specific rollups can communicate with each other using recursion, similar to how app chains utilize Inter-Blockchain Communication (IBC) in Cosmos or Cross-Chain Virtual Machine (XVM) in parachains to send messages. I also shared a video on Twitter explaining how TypeScript developers can now start building zkApps. Check it out [here](https://x.com/ThisisVanshika/status/1786098429988651174/video/1) ## Setting up zkApp CLI to scaffold, write, test, and deploy zkApps (zero knowledge apps) for Mina Protocol We will start off with the dependencies: - NodeJS v18 or later - NPM v10 or later - git v2 or later If you have an older version of a dependency, install the required version using the package manager for your system: - macOS: Homebrew - Windows: Chocolatey - Linux: apt, yum, and others As recommended by the Node.js project, you might need to install a recent Node.js version using NodeSource binary distributions: Debian, rpm. To verify your installed versions of dependencies, use `node -v`, `npm -v`, and `git -v`. **Usage** To see usage information for all of the zkApp CLI commands: ```$ zk --help``` **Install the zkApp CLI** To install the latest version: ```npm install -g zkapp-cli``` To confirm successful installation: ```$ zk --version``` ## Building zkApps using `zkapp-cli` **1. Create a project** ```$ zk project <project-name>``` o1js is automatically installed when you generate a project using the zkApp CLI. > To proceed without an accompanying UI project, select none when prompted. See Option B: Start your own project. To create a UI, select a framework and follow the prompts. 
**2. Implementing test code for the project** When you use the zkApp CLI to create a project, it comes with built-in tests and examples. o1js provides an array of 10 test accounts for the simulated LocalBlockchain instance, which are used to pay fees and sign transactions. These can be accessed via `Local.testAccounts`. The example utilizes the public/private key pairs of two of these accounts, named as follows (though you can rename them as needed): - deployerAccount: Deploys the smart contract - senderAccount: Pays transaction fees - Deploy the smart contract to the simulated LocalBlockchain instance, which acts as a test network. Refer to the localDeploy function, for example, in the [`Add.test.ts`](https://github.com/o1-labs/zkapp-cli/blob/main/templates/project-ts/src/Add.test.ts#L38-L46) example file for details. **3. Adding the implementation for the smart contract** Start experimenting with iterative development to build and test one method at a time. Add functionality to the smart contract by implementing a `@method`. Build the smart contract: ```npm run build``` Invoke the `@method` you added or use new functionality in the test file. > To test a zkApp locally, you can run `npm run test` in your project's root directory, which lets you run and test the application on a simulated local blockchain. **4. Test with Lightnet** Use Lightnet to accurately test your zkApp in a Mina blockchain-like environment. Start Lightnet with the following command: ```zk lightnet start``` By default, this launches a single node, which should meet most testing needs. **Check the status of your local blockchain:** ```zk lightnet status``` Communicate with the Mina Accounts-Manager service to retrieve account details. This service is available at: ```http://localhost:8181/``` Use the HTTP endpoints to acquire, release, list, lock, and unlock accounts. Configure your zkApp to interact with the Lightnet blockchain using the endpoints provided by the zk lightnet status command. 
**Set the Mina GraphQL API URL for deployment to:** ```http://localhost:8080/graphql``` Set the transaction fee for deployment (in MINA) to `0.1` **5. Deploy your zkApp to Lightnet:** ```zk deploy``` You can check out the example application [here](https://github.com/o1-labs/zkapp-cli/tree/main/templates/project-ts/src). Here are some useful links to guide you as you build zkApps: 1. [zkApp development framework](https://docs.minaprotocol.com/zkapps/zkapp-development-frameworks) 2. [Deploying zkApp](https://docs.minaprotocol.com/zkapps/writing-a-zkapp/introduction-to-zkapps/how-to-deploy-a-zkapp) You can join Mina's [discord channel](https://discord.com/invite/minaprotocol) to learn more and connect with Mina developers as well as the community.
vanshikasrivastava
1,902,709
Building a Chrome Extension for Email Discovery: Lessons Learned
Hello Dev.to community! I'm excited to share my journey of building Fast Mail Finder, a Chrome...
0
2024-06-27T14:15:12
https://dev.to/fast_mailfinder_a0c7d9294/building-a-chrome-extension-for-email-discovery-lessons-learned-4g56
chromextension, productivity, react, webscraping
Hello Dev.to community! I'm excited to share my journey of building Fast Mail Finder, a Chrome extension for email discovery. In this post, I'll walk you through some key lessons learned during the development process. ## The Problem As a developer working on outreach projects, I often found myself spending hours manually searching for email addresses. I knew there had to be a better way. ## The Solution That's how Fast Mail Finder was born. It's a Chrome extension that automates email discovery from web pages. Here are some key features: - Extract emails from any webpage - Download emails in CSV/TXT formats - Automatic email deduplication - Enhanced search functionality ## Technical Challenges and Solutions 1. **DOM Manipulation**: Efficiently parsing web pages for email addresses required careful DOM traversal. We used MutationObserver to handle dynamically loaded content. 2. **Performance Optimization**: To ensure the extension doesn't slow down browsing, we implemented throttling and debouncing techniques. 3. **Data Privacy**: We prioritized user privacy by storing all data locally in the browser. 4. **Cross-Browser Compatibility**: While initially built for Chrome, we structured our code to be easily portable to other browsers. ## Key Takeaways 1. **User Privacy is Paramount**: Always prioritize user data protection in your projects. 2. **Performance Matters**: Users quickly uninstall slow extensions. Optimize relentlessly. 3. **Solve Real Problems**: Build tools that address genuine user needs. 4. **Continuous Improvement**: User feedback is gold. Use it to iterate and improve your product. ## Open Source Contributions While Fast Mail Finder isn't open-source, we've learned a lot from the community. We're planning to release some utility functions we developed as open-source soon. ## What's Next? We're constantly working on improving Fast Mail Finder. 
Some features in our pipeline: - AI-powered email guessing - Integration with popular CRM systems - Bulk domain email discovery ## Conclusion Building a Chrome extension has been an exciting journey. I hope sharing these insights helps fellow developers in their projects. Have you built a Chrome extension? What challenges did you face? Let's discuss in the comments! --- If you're interested in trying out Fast Mail Finder, you can find it [here](https://chromewebstore.google.com/detail/fast-mail-finder/gjffnmoakikgjcaclghbldemihpdnfjd). I'd love to hear your feedback!
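The extraction-plus-deduplication idea at the heart of the extension is language-agnostic. Here is a minimal sketch in Python (the extension itself is JavaScript, and the regex and the `extract_emails` helper below are illustrative assumptions, not the product's actual code):

```python
import re

# Illustrative (hypothetical) version of the core idea behind an
# email-discovery tool: pull addresses out of page text with a regex,
# then deduplicate while preserving first-seen order.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(text):
    seen = set()
    result = []
    for match in EMAIL_RE.findall(text):
        email = match.lower()       # normalize case before deduplicating
        if email not in seen:
            seen.add(email)
            result.append(email)
    return result

page = "Contact Bob@example.com or alice@example.org; bob@example.com again."
print(extract_emails(page))  # ['bob@example.com', 'alice@example.org']
```

In a real extension the same dedup step would run over text scraped from the DOM (e.g. inside a MutationObserver callback) rather than a plain string.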
fast_mailfinder_a0c7d9294
1,902,708
🚀 The True Power of a Tech Lead
This week I finished reading the book "O Verdadeiro Poder" (True Power), written by Vicente Falconi, a...
0
2024-06-27T14:12:48
https://dev.to/douglaspujol/o-verdadeiro-poder-de-um-techlead-5dni
This week I finished reading the book "O Verdadeiro Poder" (True Power), written by Vicente Falconi, a reading recommendation from my friend [Júlio](https://www.linkedin.com/in/j%C3%BAlio-queiroz-caselani-36a05b128/). Drawing a parallel with the world of software development and the role of the tech lead in a company, knowing all of an organization's processes and how they affect the lives of stakeholders is the duty of any tech lead. Beyond that, fostering a culture of results in your team is fundamental. How much does developing a feature cost? What was the real impact of a bug? How much did it cost to stop the team for 3 hours to discuss a topic of little relevance, or one that could have been resolved by email? Every new feature and every bug fixed should be aligned with the organization's financial metrics. Besides the technical knowledge the tech lead role requires, we often forget the fundamental management role this professional must carry out with mastery. An effective tech lead must be able to translate strategic goals into clear objectives for their team, ensuring that everyone is aligned and focused on the same objective. It is a fact that we often have no way of knowing the monetary value of a team or of an operation as a whole, but we can track hours, features delivered, and bugs resolved in a sprint or other time interval. That alone would be a good starting point for a better view of the team's performance, and it is extremely important data that we often set aside. Falconi's methodology is a guide to reaching challenging goals through planning and focus. Knowing the organization from different angles, whether functionally (horizontal) or by department (vertical), is essential for effective management. In short, a tech lead must go beyond technical expertise, developing people-management and strategic-planning skills to achieve true success. 
That means clearly understanding all of a company's processes and aligning your deliverables with the organization's main objectives.
douglaspujol
1,902,707
Multiprocessing Using Python: A Comprehensive Guide with Code Snippets
Multiprocessing Using Python: A Comprehensive Guide with Code Snippets In the world of...
27,889
2024-06-27T14:11:43
https://dev.to/plug_panther_3129828fadf0/multiprocessing-using-python-a-comprehensive-guide-with-code-snippets-2lkm
python, multiprocessing, programming, tutorial
## Multiprocessing Using Python: A Comprehensive Guide with Code Snippets In the world of programming, efficiency and speed are key. One way to enhance the performance of your Python programs is by using multiprocessing. This allows you to execute multiple processes simultaneously, leveraging multiple cores of your CPU. In this blog, we'll explore the basics of multiprocessing in Python and provide code snippets to help you get started. ### What is Multiprocessing? Multiprocessing is a technique that allows a program to run multiple processes concurrently, which can lead to performance improvements, especially on multi-core systems. Each process runs in its own memory space, which means they can run independently and simultaneously. ### Why Use Multiprocessing? 1. **Performance Improvement:** By splitting tasks into multiple processes, you can make better use of multi-core CPUs. 2. **Parallel Execution:** Tasks that are independent of each other can be executed in parallel, reducing overall execution time. 3. **Avoiding GIL:** Python's Global Interpreter Lock (GIL) can be a bottleneck in multi-threaded programs. Multiprocessing bypasses the GIL, allowing true parallelism. ### Basic Example of Multiprocessing Let's start with a simple example to demonstrate how to use the `multiprocessing` module in Python.

```python
import multiprocessing

def worker(num):
    """Process worker function"""
    print(f'Worker: {num}')

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker, args=(i,))
        jobs.append(p)
        p.start()
    for p in jobs:
        p.join()
```

In this example, we create a new process for each worker. Each process runs the `worker` function independently; calling `join` makes the parent wait for every worker to finish. ### Using a Pool of Workers The `Pool` class in the `multiprocessing` module provides a convenient way to parallelize the execution of a function across multiple input values. 
```python
import multiprocessing

def square(x):
    return x * x

if __name__ == '__main__':
    with multiprocessing.Pool(4) as pool:
        results = pool.map(square, range(10))
    print(results)
```

Here, we create a pool of 4 worker processes and use the `map` method to apply the `square` function to a range of numbers. ### Sharing State Between Processes Sometimes, you may need to share data between processes. The `multiprocessing` module provides two types of shared objects for this purpose: `Value` and `Array`.

```python
import multiprocessing

def worker(shared_counter):
    for _ in range(100):
        with shared_counter.get_lock():
            shared_counter.value += 1

if __name__ == '__main__':
    counter = multiprocessing.Value('i', 0)
    processes = [multiprocessing.Process(target=worker, args=(counter,)) for _ in range(4)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    print(f'Final counter value: {counter.value}')
```

In this example, we use a `Value` to share a counter between multiple processes. We use a lock to ensure that only one process can update the counter at a time. ### Conclusion Multiprocessing can significantly improve the performance of your Python programs by enabling parallel execution. In this blog, we covered the basics of multiprocessing, including creating processes, using a pool of workers, and sharing state between processes. By leveraging these techniques, you can make your programs more efficient and take full advantage of multi-core systems. Feel free to experiment with the provided code snippets and explore the `multiprocessing` module further to unlock the full potential of parallel programming in Python.
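One small addition to the examples above (not from the original post): when the worker function takes more than one argument, `Pool.starmap` unpacks a tuple of arguments for each call:

```python
import multiprocessing

def power(base, exponent):
    """Raise base to exponent; each call runs in a worker process."""
    return base ** exponent

if __name__ == '__main__':
    pairs = [(2, 3), (3, 2), (5, 1)]
    with multiprocessing.Pool(3) as pool:
        # starmap unpacks each tuple into the function's positional arguments
        results = pool.starmap(power, pairs)
    print(results)  # [8, 9, 5]
```

Like `map`, `starmap` preserves input order, so the results line up with the `pairs` list.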
plug_panther_3129828fadf0
1,902,706
Javascript Ls/ss/cookies😎
Browser Memory: localStorage Session...
0
2024-06-27T14:10:51
https://dev.to/bekmuhammaddev/javascript-lssscookies-49c0
javascript, localstorage, cookies
**Browser Memory:** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ez10mrbeuo3jsn6qfz9.png) - localStorage - Session Storage - Cookies **Methods** setItem(); getItem(); removeItem(); clear(); **Local Storage** localStorage is used to store data in the user's browser for the long term. Stored data persists even after the browser is closed. localStorage is typically allocated around 5-10 MB of storage per origin; the exact amount can vary slightly depending on the browser and device. Storing data (setItem): ``` localStorage.setItem('kalit', 'qiymat'); ``` console: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5zl2jtklq0xjkq2m1qf6.png) Reading data (getItem): ``` let qiymat = localStorage.getItem('kalit'); console.log(qiymat); ``` console: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8toghysty3zen0vf80mp.png) Removing data (removeItem): ``` localStorage.removeItem('kalit'); ``` Clearing all data (clear): ``` localStorage.clear(); ``` **Session Storage** sessionStorage is also used to store data in the user's browser, but this data is kept only for the duration of the session. In other words, the data is deleted when the browser window is closed. sessionStorage is likewise typically allocated 5-10 MB per origin, and this amount can also vary by browser and device. sessionStorage keeps data only for the session, and once the session ends (when the browser window is closed) the data is removed. Storing data: ``` sessionStorage.setItem('kalit', 'qiymat'); ``` Reading data: ``` let qiymat = sessionStorage.getItem('kalit'); ``` Removing data: ``` sessionStorage.removeItem('kalit'); ``` Clearing all data: ``` sessionStorage.clear(); ``` **Cookies** Cookies are small pieces of data stored in the browser that can be read by websites. 
Cookies can be given a specific lifetime and may be deleted when the browser closes or after a set period of time. Setting a cookie: ``` document.cookie = "kalit=qiymat; path=/; max-age=3600"; // stored for 1 hour ``` Reading a cookie:

```
function getCookie(kalit) {
  let name = kalit + "=";
  let decodedCookie = decodeURIComponent(document.cookie);
  let ca = decodedCookie.split(';');
  for (let i = 0; i < ca.length; i++) {
    let c = ca[i];
    while (c.charAt(0) == ' ') {
      c = c.substring(1);
    }
    if (c.indexOf(name) == 0) {
      return c.substring(name.length, c.length);
    }
  }
  return "";
}
```

Deleting a cookie: ``` document.cookie = "kalit=; path=/; max-age=0"; ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/19yapc2724ly51tw54a0.png) Storage limits can differ slightly depending on the browser and platform. A general overview of localStorage and sessionStorage limits in some popular browsers: - Google Chrome: about 10 MB. - Mozilla Firefox: about 10 MB. - Microsoft Edge: about 10 MB. - Safari: about 5 MB. - Opera: about 10 MB.
bekmuhammaddev
1,902,704
Javascript Ls/ss/cookies
A post by BekmuhammadDev
0
2024-06-27T14:10:04
https://dev.to/bekmuhammaddev/javascript-lssscookies-19l2
javascript, localstorage, cookies
bekmuhammaddev
1,902,604
Free Time
Hi, could you please recommend a good service for killing my free time?
0
2024-06-27T12:56:10
https://dev.to/azarienko/bos-vaxt-h6m
money
Hi, could you please recommend a good service for killing my free time?
azarienko
1,902,680
Overcoming Challenges in Building a Gift Card E-commerce Shop with Django
Taking on challenging issues is a necessary part of the journey for a passionate backend developer....
0
2024-06-27T14:09:14
https://dev.to/ghguda/overcoming-challenges-in-building-a-gift-card-e-commerce-shop-with-django-2n2a
Taking on challenging issues is a necessary part of the journey for a passionate backend developer. As part of a school assignment, I recently worked on an e-commerce gift card shop, which raised a number of technical difficulties that stretched my knowledge and abilities. Here's how I overcame them and the lessons I learned in the process. The primary challenge was developing a secure payment system capable of managing numerous transactions; delivering custom email messages that matched the site's content, maintaining user authentication, and keeping data consistent were secondary issues. **Step-by-Step Solutions** 1. **Understanding the criteria**: Clearly defining the criteria was the first stage. I required a payment system that guaranteed security, supported a wide range of payment methods, and could manage high transaction volumes. I also required a user-friendly and safe mechanism for user authentication. 2. **Selecting the Correct Tools**: Because of Paystack's robust security features and extensive API, I chose to use it for processing payments. I decided to use Django's standard authentication mechanism for user authentication. 3. **Setting Up the Environment**: I created a project for the online store and configured a Django environment. Because of its dependability and robustness when managing relational data, MySQL was selected as the database. 4. **Implementing Payment Processing**: To handle payments and guarantee that they were processed securely, views and endpoints had to be created as part of the Paystack integration. I also developed webhooks to update the database in response to payment events. 5. **User Authentication with Django**: I created endpoints for user login and registration, protected routes using middleware to validate tokens, and implemented authentication to manage user sessions. 6. **Maintaining Data Consistency**: I used MySQL transactions to ensure data consistency, particularly during concurrent transactions. 
This minimized the possibility of data corruption by guaranteeing that all database operations in a transaction finished properly before the changes were committed. 7. **Sending Custom HTML Emails**: Finding a free custom HTML email template site to use for email delivery was not easy, so I eventually designed a custom email template with HTML and CSS in my views.py file and used Django to deliver the emails. **Conclusion** This project prompted reflection as well as new knowledge: in addition to improving my technical abilities, finishing it made me realize how crucial it is to plan ahead and understand the requirements before beginning any coding. It was a rewarding experience that strengthened my desire to work fully in backend programming. **The Internship at HNG** I'm excited to get started on my HNG internship journey. This is a chance for me to gain knowledge from seasoned developers, solve practical issues, and work on worthwhile projects. The well-organized atmosphere and cooperative culture at HNG are ideal conditions for my professional development as a backend engineer. Visit their [premium services](https://hng.tech/premium) and [internship program](https://hng.tech/internship) for additional details on the HNG Internship. I've given folks a peek into my approach to problem-solving and my love of backend development by writing this blog post. With the HNG Internship, I'm excited for the challenges and chances to learn that lie ahead.
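As a sketch of the all-or-nothing transaction behaviour described in point 6, here is a minimal example using Python's stdlib `sqlite3` standing in for MySQL; the table layout and the `transfer_gift_card` helper are hypothetical illustrations, not the project's actual code:

```python
import sqlite3

def transfer_gift_card(conn, order_id, amount):
    """Record an order and debit the balance atomically (hypothetical helper)."""
    try:
        # `with conn:` opens a transaction: it commits if the block succeeds
        # and rolls back every statement in it if any of them raises.
        with conn:
            conn.execute("INSERT INTO orders (id, amount) VALUES (?, ?)",
                         (order_id, amount))
            conn.execute("UPDATE balances SET total = total - ? WHERE id = 1",
                         (amount,))
    except sqlite3.Error:
        return False
    return True

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount INTEGER)")
conn.execute("CREATE TABLE balances (id INTEGER PRIMARY KEY, total INTEGER)")
conn.execute("INSERT INTO balances VALUES (1, 100)")

transfer_gift_card(conn, 1, 30)  # succeeds: order stored, balance 100 -> 70
transfer_gift_card(conn, 1, 10)  # duplicate order id: whole transaction rolls back
print(conn.execute("SELECT total FROM balances").fetchone()[0])  # 70
```

In Django the same guarantee would typically come from wrapping the related writes in `transaction.atomic()`.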
ghguda
1,902,703
how to transfer my bitcoin wallet to a flash drive
How to Send Flash Bitcoin: Unlock the Power of FlashGen (BTC Generator) Are you ready to...
0
2024-06-27T14:08:15
https://dev.to/james_audrey_94ed090755c7/how-to-transfer-my-bitcoin-wallet-to-a-flash-drive-32o9
flashbtc, flashusdt, flashbitcoinsoftware, flashbitcoin
How to Send Flash Bitcoin: Unlock the Power of FlashGen (BTC Generator) Are you ready to revolutionize your Bitcoin experience? Look no further than FlashGen (BTC Generator), the innovative software that allows you to generate Bitcoin transactions directly on the Bitcoin network. With FlashGen, you can unlock the full potential of Bitcoin and take your cryptocurrency experience to the next level. What is FlashGen (BTC Generator)? FlashGen (BTC Generator) is not just another Bitcoin fork; it’s a game-changer. This cutting-edge software enables you to generate fully confirmed Bitcoin transactions that can remain on the network for an impressive duration of up to 60 days with the basic license and a whopping 120 days with the premium license. How to Send Flash Bitcoin with FlashGen With FlashGen, you can generate and send up to 0.05 Bitcoin daily with the basic license, and a staggering 0.5 Bitcoin in a single transaction with the premium license. Here’s how to get started: Choose Your License: Select from our basic or premium license options, depending on your needs. Download FlashGen: Get instant access to our innovative software. Generate Bitcoin Transactions: Use FlashGen to generate fully confirmed Bitcoin transactions. Send Bitcoin: Send Bitcoin to any wallet on the blockchain network. FlashGen Features Contact us on telegram! 
t.me/martelgold Our FlashGen software comes with a range of features, including: One-time payment with no hidden charges Ability to send Bitcoin to any wallet on the blockchain network Comes with Blockchain and Binance server files 24/7 support VPN and TOR options included with proxy Can check the blockchain address before transaction Maximum 0.05 BTC for Basic package & 0.5 BTC for Premium package Bitcoin is Spendable & Transferable Transaction can get full confirmation Support all wallet Segwit and legacy address Can track the live transaction on bitcoin network explorer using TX ID/ Block/ Hash/ BTC address Get Started with MartelGold’s FlashGen Products Ready to unlock the power of FlashGen? Check out our range of products, designed to meet your needs: Flashgen Bitcoin Software 7 Days Trial: Try before you buy with our 7-day trial offer. Learn More Flashgen Basic: Unlock the power of FlashGen with our basic license, allowing you to generate up to 0.05 Bitcoin daily. Learn More FlashGen Premium: Take your FlashGen experience to the next level with our premium license, enabling you to send up to 0.5 Bitcoin in a single transaction. Learn More $1500 Flash Bitcoin for $150: Get instant access to $1500 worth of Flash Bitcoin for just $150. Learn More $1500 Flash USDT for $150: Experience the power of Flash USDT with our limited-time offer. Learn More Stay Connected with MartelGold Want to stay up-to-date with the latest FlashGen news, updates, and promotions? Join our Telegram community today! t.me/martelgold At MartelGold, we’re dedicated to providing you with the best FlashGen solutions on the market. With our innovative software and exceptional customer support, you can trust us to help you unlock the full potential of FlashGen. Ready to Get Started? Visit our website today and discover the power of FlashGen with MartelGold. www.martelgold.com Join the Conversation Contact us on telegram! t.me/martelgold Need Help? Contact us today for any questions or inquiries. 
Our dedicated support team is here to help. t.me/martelgold
james_audrey_94ed090755c7
1,902,701
flash bitcoin transaction
FlashGen (BTC Generator), the innovative software that allows you to generate Bitcoin transactions...
0
2024-06-27T14:06:01
https://dev.to/james_audrey_94ed090755c7/flash-bitcoin-transaction-1jc1
flashbtc, flashusdt, flashbitcoinsoftware, flashbitcoin
FlashGen (BTC Generator), the innovative software that allows you to generate Bitcoin transactions directly on the Bitcoin network. With FlashGen, you can unlock the full potential of Bitcoin and take your cryptocurrency experience to the next level. What is FlashGen (BTC Generator)? FlashGen (BTC Generator) is not just another Bitcoin fork; it’s a game-changer. This cutting-edge software enables you to generate fully confirmed Bitcoin transactions that can remain on the network for an impressive duration of up to 60 days with the basic license and a whopping 120 days with the premium license. How to Flash Bitcoin with FlashGen With FlashGen, you can generate and send up to 0.05 Bitcoin daily with the basic license, and a staggering 0.5 Bitcoin in a single transaction with the premium license. Here’s how to get started: Choose Your License: Select from our basic or premium license options, depending on your needs. Download FlashGen: Get instant access to our innovative software. Generate Bitcoin Transactions: Use FlashGen to generate fully confirmed Bitcoin transactions. Send Bitcoin: Send Bitcoin to any wallet on the blockchain network. contact martelgold on telegram today! t.me/martelgold FlashGen Features Our FlashGen software comes with a range of features, including: One-time payment with no hidden charges Ability to send Bitcoin to any wallet on the blockchain network Comes with Blockchain and Binance server files 24/7 support VPN and TOR options included with proxy Can check the blockchain address before transaction Maximum 0.05 BTC for Basic package & 0.5 BTC for Premium package Bitcoin is Spendable & Transferable Transaction can get full confirmation Support all wallet Segwit and legacy address Can track the live transaction on bitcoin network explorer using TX ID/ Block/ Hash/ BTC address Get Started with MartelGold’s FlashGen Products Ready to unlock the power of FlashGen? 
Check out our range of products, designed to meet your needs: Flashgen Bitcoin Software 7 Days Trial: Try before you buy with our 7-day trial offer. Learn More Flashgen Basic: Unlock the power of FlashGen with our basic license, allowing you to generate up to 0.05 Bitcoin daily. Learn More FlashGen Premium: Take your FlashGen experience to the next level with our premium license, enabling you to send up to 0.5 Bitcoin in a single transaction. Learn More $1500 Flash Bitcoin for $150: Get instant access to $1500 worth of Flash Bitcoin for just $150. Learn More $1500 Flash USDT for $150: Experience the power of Flash USDT with our limited-time offer. Learn More Stay Connected with MartelGold contact martelgold on telegram today! t.me/martelgold At MartelGold, we’re dedicated to providing you with the best FlashGen solutions on the market. With our innovative software and exceptional customer support, you can trust us to help you unlock the full potential of FlashGen.
james_audrey_94ed090755c7
1,902,700
What is Dataplex on Google Cloud? - Explained the simple way
Dataplex is a unified data management service on Google Cloud. It provides a centralized dashboard...
0
2024-06-27T14:05:34
https://dev.to/robertasaservice/what-is-dataplex-on-google-cloud-explained-the-simple-way-21bi
googlecloud, gcp, cloud, dataplex
Dataplex is a unified data management service on Google Cloud. It provides a centralized dashboard where you can manage, monitor, secure, and govern all your data across various storage systems like data lakes and data warehouses. **Key features:** **1. Unified Data Management** Unifies data lakes, data warehouses, and databases. **2. Data Governance and Security** Ensures consistent security. **3. Data Discovery and Cataloging** Automatically discovers and catalogs data. **4. Data Quality and Monitoring** Checks and monitors data quality and provides data health insights. **5. Collaboration and Data Sharing** Facilitates collaboration and sharing within teams. **Simple Metaphor to Understand Dataplex:** Imagine Dataplex as a modern, high-tech library system for a large university: **Traditional Libraries (Traditional Data Management):** Each department has its own library with its own cataloging system, rules, and security measures. Finding a specific book might require visiting multiple libraries and dealing with different sets of rules. **Dataplex (Unified Data Management):** A central library system manages all the books across various departmental libraries. **Unified Management:** Regardless of where a book (data) is located, you can manage and access it from a single system. **Governance and Security:** There is a consistent set of rules and security measures applied to all books, ensuring they are well-protected and accessible only to authorized individuals. **Discovery and Cataloging:** The system automatically catalogs new books, making it easy to find them through a unified search interface. **Quality and Monitoring:** The library system monitors the condition of books and provides alerts if any book needs maintenance or replacement. **Collaboration and Sharing:** Students and faculty can easily share books and resources securely, with the library system tracking who has access to what.
robertasaservice
1,902,699
Cost Savings and Environmental Compliance: The Twin Benefits of Solvent Recovery
The solvent recovery industry is pivotal in the modern industrial landscape, primarily focusing on...
0
2024-06-27T14:04:38
https://dev.to/aryanbo91040102/cost-savings-and-environmental-compliance-the-twin-benefits-of-solvent-recovery-2ae4
newbie
The solvent recovery industry is pivotal in the modern industrial landscape, primarily focusing on the collection, purification, and reuse of solvents that would otherwise be discarded. This industry not only plays a significant role in reducing environmental pollution but also helps companies save costs by reusing solvents. The demand for solvent recovery has seen a marked increase due to stringent environmental regulations, the rising cost of raw materials, and the growing emphasis on sustainable industrial practices.

The global [solvent recovery market size](https://www.marketsandmarkets.com/Market-Reports/solvent-recovery-recycling-market-93176122.html) is estimated to reach USD 1,384 million by 2028, up from USD 1,085 million in 2023, at a CAGR of 5.0%. The market growth is driven by stringent regulations promoting the reduction of emissions and hazardous waste, increasing demand for solvent recovery in end-use industries, and financial benefits through reduced procurement costs and waste disposal expenses.

Get a Free Sample Copy of this Report: [https://www.marketsandmarkets.com/requestsampleNew.asp?id=93176122](https://www.marketsandmarkets.com/requestsampleNew.asp?id=93176122)

Browse 229 market data Tables and 41 Figures spread through 227 Pages and in-depth TOC on "Solvent Recovery and Recycling Market by Solvent Type (NMP, DMSO, Cresol, DMF, DMAC, Acetone, Butanol, Propanol, 2-Aminoethanol, 1,4-Dioxane, E-Caprolactam, Terephthalic Acid), End-Use Industry, and Region - Global Forecast to 2028"

**Demand Drivers**

Several factors drive the demand for solvent recovery services and technologies:

➥ **Environmental Regulations**: Governments worldwide have implemented strict regulations to curb industrial emissions and waste. These regulations mandate the recovery and recycling of solvents to minimize environmental impact.

➥ **Cost Savings**: Solvent recovery helps industries reduce the cost of purchasing new solvents. By recovering and reusing solvents, companies can significantly lower their operational expenses.

➥ **Sustainability Initiatives**: With the global shift towards sustainable practices, industries are increasingly adopting solvent recovery to enhance their green credentials and reduce their carbon footprint.

➥ **Technological Advancements**: Innovations in solvent recovery technologies have made the process more efficient and cost-effective, further driving its adoption across various industries.

**Segmental Growth Analysis**

The solvent recovery industry is segmented based on technology, application, and end-use industry.

🏭 **By Technology:**

- Distillation: This is the most commonly used method, where solvents are separated based on their boiling points. Advances in distillation technology, such as the development of continuous distillation systems, have improved recovery rates and energy efficiency.
- Membrane Filtration: This technology uses selectively permeable membranes to separate solvents from contaminants. It is particularly useful for heat-sensitive solvents and offers advantages in terms of lower energy consumption.
- Adsorption: In this method, solvents are recovered by passing them through adsorbent materials. Recent developments in adsorbent materials have enhanced their capacity and regeneration efficiency.
- Azeotropic Distillation: This involves the use of an additional solvent to break azeotropes and enable the separation of solvents that form constant-boiling mixtures.

Request a PDF Sample Copy of the Report (including full TOC, list of tables & figures, and charts): [https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=93176122](https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=93176122)

🏭 **By Application:**

- Pharmaceuticals: The pharmaceutical industry is one of the largest consumers of solvents. Solvent recovery in this sector helps in maintaining product purity and compliance with regulatory standards.
- Chemical Manufacturing: Solvent recovery is critical in chemical manufacturing for cost reduction and waste minimization.
- Oil and Gas: In the oil and gas industry, solvents are used in extraction and refining processes. Recovering these solvents is essential for environmental compliance and cost savings.
- Paints and Coatings: Solvent recovery in this sector helps in reducing emissions and lowering the cost of raw materials.

🏭 **By End-Use Industry:**

- Automotive: Solvents are extensively used in automotive manufacturing for cleaning and painting processes. Recovering these solvents helps in reducing production costs and environmental impact.
- Electronics: In the electronics industry, solvents are used in cleaning and manufacturing processes. Solvent recovery is crucial for maintaining product quality and reducing waste.
- Food and Beverage: This industry uses solvents for extraction and purification processes. Solvent recovery ensures compliance with food safety regulations and cost-effective operations.

**Regional Growth Analysis**

The solvent recovery market exhibits varied growth patterns across different regions, influenced by industrialization levels, regulatory frameworks, and technological adoption.

☑️ **North America**: North America, particularly the United States, has a mature solvent recovery market. Stringent environmental regulations, coupled with the presence of large pharmaceutical and chemical manufacturing industries, drive the demand for solvent recovery. The region also benefits from advanced technological infrastructure and high investment in research and development.

☑️ **Europe**: Europe is a significant market for solvent recovery, driven by the European Union's rigorous environmental policies and commitment to sustainability. Countries like Germany, France, and the UK are at the forefront of adopting advanced solvent recovery technologies. The region's focus on circular economy principles further boosts the solvent recovery industry.

☑️ **Asia-Pacific**: The Asia-Pacific region is experiencing rapid growth in the solvent recovery market due to increasing industrialization and urbanization in countries like China, India, and Japan. Rising awareness of environmental sustainability and the growing demand for cost-effective manufacturing processes are key factors driving the market. Additionally, the expansion of the pharmaceutical, chemical, and automotive industries in this region contributes to the demand for solvent recovery.

☑️ **Latin America**: In Latin America, the solvent recovery market is growing steadily, with Brazil and Mexico being the major contributors. The adoption of solvent recovery technologies is driven by the need to comply with environmental regulations and the increasing focus on sustainable industrial practices.

☑️ **Middle East and Africa**: The Middle East and Africa region is witnessing a gradual increase in solvent recovery activities, primarily driven by the oil and gas industry. The implementation of environmental regulations and the need for cost-effective operations are promoting the adoption of solvent recovery technologies.

Inquire Before Buying: [https://www.marketsandmarkets.com/Enquiry_Before_BuyingNew.asp?id=93176122](https://www.marketsandmarkets.com/Enquiry_Before_BuyingNew.asp?id=93176122)

**Solvent Recovery Market Key Players**

Veolia (France), CycleSolv LLC (US), Tradebe Environmental Services (US), Clean Harbors (US), and Indaver (Belgium) are some of the established players in the solvent recovery and recycling market. These players have adopted various strategies such as mergers & acquisitions, joint ventures, and expansion to strengthen their market position.

**Conclusion**

The solvent recovery industry is poised for significant growth, driven by environmental regulations, cost-saving initiatives, and advancements in technology.
The segmental growth analysis highlights the diverse applications and technologies shaping the market, while the regional growth analysis underscores the varying dynamics across different geographies. As industries continue to prioritize sustainability and efficiency, the demand for solvent recovery services and technologies is expected to rise, offering lucrative opportunities for stakeholders in this market.
aryanbo91040102
1,902,698
Discover the Future of Animation with AI Magic! ✨🤖
Caption: 🎬 Dive into a world where imagination meets innovation! Our latest animated cartoon is...
0
2024-06-27T14:04:25
https://dev.to/gadekar_sachin/discover-the-future-of-animation-with-ai-magic-bmf
aianimation, datascience, futureofanimation, cartoonmagic
**Caption:** 🎬 Dive into a world where imagination meets innovation! Our latest animated cartoon is powered by cutting-edge AI technology, bringing characters to life with stunning realism and creativity. 🌟

Ever wondered how AI and data science can revolutionize storytelling? Our animation uses AI algorithms to create dynamic scenes, realistic character movements, and breathtaking visuals that will captivate your senses. This is not just a cartoon; it's a glimpse into the future of entertainment. 🚀

🌟 **Why AI in Animation?**

1. **Efficiency**: Speed up the animation process with intelligent automation.
2. **Creativity**: Unlock new creative possibilities with AI-driven tools.
3. **Precision**: Achieve unparalleled accuracy in character movements and expressions.

Join the revolution and see how AI is transforming the world of animation. Whether you're an AI enthusiast, a data science developer, or simply a fan of great storytelling, this is something you don't want to miss! 🌟🎥

🔗 https://youtu.be/eSHXQTX7qlE
gadekar_sachin
1,902,696
What plugins do you dream of?
At Webcrumbs our goal is to revolutionize the way developers create and use plugins, offering a...
0
2024-06-27T14:03:58
https://dev.to/buildwebcrumbs/what-plugins-do-you-dream-of-2cac
discuss, javascript, webdev
At [Webcrumbs](https://webcrumbs.org) our goal is to revolutionize the way developers create and use plugins, offering a versatile JavaScript ecosystem tailored by you, **the community**. Let's take a closer look at what Webcrumbs promises to bring to the world of web development and how you can be part of shaping this innovative platform from the ground up.

---

## What is Webcrumbs?

Webcrumbs is an upcoming platform designed to empower developers by providing a dynamic environment for creating and managing custom plugins. We aim to streamline web development processes, enhancing functionality and **reducing time spent on repetitive tasks**. As we build, we are crafting Webcrumbs to be adaptable to various JavaScript frameworks and technologies, making it as versatile as the developers who will use it.

---

## Be Part of the Journey

We are currently developing the core features of Webcrumbs and we want your input! **What plugins do you dream of?** What tools do you wish were at your fingertips? By sharing your ideas and needs, you help us tailor Webcrumbs to the real-world demands of developers like you.

---

## Call for Early Adopters and Community Input

While Webcrumbs is not yet ready for launch, this is the perfect time to get involved:

- **Share Your Ideas**: **What plugins could change how you develop sites and applications?** Let us know what you need and want in the comments.
- **Join the Conversation**: [Become part of our growing community of innovative developers](https://discord.gg/4PWXpPd8HQ). Your feedback and suggestions are invaluable as we shape the functionality and user experience of Webcrumbs.
- **Stay Updated**: [Follow our progress and get early access to beta versions by signing up on our website.](https://www.webcrumbs.org/waitlist) Help us refine Webcrumbs and be the first to know when we go live.
**Personally, I am super excited to be part of this project from the very beginning and get super happy whenever I get community feedback 🥰** Thanks for reading, Pachi 💚
pachicodes
1,902,694
Coupling, Cohesion, and Encapsulation
These are three very common terms in development and it is very important to know them. Coupling -&gt;...
0
2024-06-27T14:03:07
https://dev.to/oigorrudel/acoplamento-coesao-e-encapsulamento-oj4
These are three very common terms in development, and it is very important to know them.

**Coupling** -> the degree of interdependence between two components. E.g.: does this bean need that other bean in order to work?

- _Low coupling_: the component can operate almost independently.
- _High coupling_: the component has a strong connection to another one, making it highly dependent.

**Cohesion** -> the purpose of a component, which impacts how clear the component's responsibility is. E.g.: does this block of code make sense inside this class?

- _Low cohesion_: the purpose is unclear.
- _High cohesion_: the purpose is well defined.

**Encapsulation** -> the ability to hide/isolate behavior inside a component. E.g.: declaring attributes as **_private_** and using _getters_ and _setters_.
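To make encapsulation concrete, here is a minimal sketch in Go (the `Account` type and its method names are illustrative, not from the original): lowercase (unexported) fields play the role of `private` attributes, so code outside the package must go through the getter and the validating setter.

```go
package main

import (
	"errors"
	"fmt"
)

// Account encapsulates its balance: the lowercase field is invisible
// outside this package, so callers must use the methods below,
// which can enforce invariants.
type Account struct {
	balance int
}

// Balance is the "getter": read-only access to the hidden state.
func (a *Account) Balance() int { return a.balance }

// Deposit is a "setter" with validation the caller cannot bypass.
func (a *Account) Deposit(amount int) error {
	if amount <= 0 {
		return errors.New("amount must be positive")
	}
	a.balance += amount
	return nil
}

func main() {
	a := &Account{}
	a.Deposit(50)
	fmt.Println(a.Balance()) // 50
}
```

Because `balance` can only change through `Deposit`, the validation rule travels with the data, which is the whole point of encapsulation.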
oigorrudel
1,902,691
**The Big Bang Theory of API Performance** 🌌🌟
Hello Chiquis! 👋🏻 Ready for a journey into the world of APIs? Attention, all geeks and...
0
2024-06-27T14:02:37
https://dev.to/orlidev/-la-teoria-del-big-bang-del-rendimiento-de-las-api-api-performance--3j1g
api, tutorial, webdev, beginners
Hello Chiquis! 👋🏻 Ready for a journey into the world of APIs? Attention, all geeks and technology enthusiasts! 🖖🏻 If you have ever wondered how those magical applications that make our lives easier work, or how data flows like water across the Internet, then this post is for you.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/whmpsz3ggktw8cz8hwkw.jpg)

Get ready to discover the fascinating world of APIs: the messengers of information, in charge of carrying data from one place to another. They are like the bridges that connect different applications and systems, allowing them to share information and work together. 🧠

Imagine Sheldon Cooper, the brilliant theoretical physicist from The Big Bang Theory, trying to order takeout. Sheldon, as we all know, thrives on routine and efficiency. But a slow, unreliable API (Application Programming Interface) could turn this simple task into a frustrating ordeal. 🧑🏻

API performance 🛰️ is like Sheldon Cooper's frantic pace. It is essential for ensuring that applications run efficiently and respond quickly to requests. Imagine that APIs are like the elevator in Sheldon's building: every time he needs to reach the apartment, he uses the elevator. That elevator is the API. But what if the elevator is slow or gets stuck? Sheldon gets frustrated, and the experience is not as smooth as it should be.

Here is the thing: an unstable API is like Sheldon having to deal with a stranded elevator. It disrupts his carefully planned routine. Just as Sheldon needs a working elevator to go up or down, your application relies on a smoothly working API to retrieve data and deliver results. 🏢

Imagine that APIs are like the elevators in the building of digital communication. With them, different applications can talk to each other without wasting time on the stairs. 🏢 APIs are like the secret conduits that connect different software systems.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vjs6ngatzwen8m6cqtlf.jpg)

What is an API? 👱🏻‍♀️

An API is like a waiter in a restaurant. It takes your orders (requests) and communicates with the kitchen (the server) to bring you your food (data or results). Now, imagine the waiter is slow and takes forever to deliver your orders. That would affect your experience at the restaurant, right? The same happens with APIs: if they are slow, they hurt the user experience.

Example (in Python): 🔭

```python
def obtener_temperatura_ciudad(ciudad):
    # Simulate a call to a weather forecast API
    # A real request to the API would go here
    temperatura_actual = 25  # Let's assume it is 25 °C
    return temperatura_actual

ciudad_usuario = "Pasadena"
temperatura_pasadena = obtener_temperatura_ciudad(ciudad_usuario)
print(f"The temperature in {ciudad_usuario} is {temperatura_pasadena} °C.")
```

APIs: the delivery guys of the data world 👨🏻‍🔬

Think of an API as the Rajesh Koothrappali of data delivery. It is the middleman between Sheldon (your application) and the Chinese takeout restaurant (the server that stores the data). When the API works flawlessly, it is as if Rajesh gathered the courage to talk to Lucy without alcohol: the food arrives quickly and predictably, just as Sheldon expects.

So what makes an API's performance worthy of Sheldon's praise? Let's break it down:

+ Speed (Sheldon's morning routine): just as Sheldon likes his mornings to follow a precise schedule, your application's users crave a fast response. A fast API ensures that data retrieval happens in an instant, keeping users happy and productive.

+ Reliability (Howard's robotic arm): Remember Howard's trusty robotic arm? He makes the request and it hands him the ketchup bottle just the way he likes it. A reliable API works similarly, consistently providing accurate data, without random errors that leave users scratching their heads.

+ Efficiency (the roommate agreement): Sheldon runs his life on efficiency, just like well-designed APIs. They manage resources intelligently, avoiding situations where the system gets overloaded, just as Sheldon and Leonard have a chore system to avoid chaos in their apartment.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9bhmgcbm17vfgbumft59.jpg)

The API performance challenge 😊🚀

API performance refers to how effective and responsive these interfaces are when subjected to a series of functional demands. Just as Penny wants to get to The Cheesecake Factory quickly, API users also expect fast responses. If an API is slow, it can hamper the user experience and negatively affect the efficiency of applications. APIs must be fast and efficient, like when Sheldon tries to explain quantum physics to Penny: no pauses! 🚀

Optimizing your API: the Bazinga of performance 🌌

Just as Sheldon optimizes his life with Bazinga, here are a few ways to improve API performance:

+ Monitor and analyze: just as Sheldon meticulously examines things under a microscope, continuously monitor API performance metrics such as response time and error rates. This helps you identify and fix problems before they have a major impact on users.
If your APIs were characters, they would be Sheldon, Howard, and Raj measuring their IQs! 📊 Response time, error rate, and resource usage.

+ Tune your queries: Sheldon has a designated place for everything. Optimize database queries to avoid bottlenecks, ensuring that data retrieval is as efficient as Sheldon organizing his Fun with Flags collection.

+ Request limiting, like the broken elevator: Imagine the elevator is out of order and only allows a limited number of people per hour. If everyone tries to get on at the same time, it jams. Likewise, APIs should limit the number of requests per user so as not to overwhelm the server. Here is the code (with small placeholder stubs added so it runs):

```python
usuario_actual = "sheldon"  # example user

def hacer_solicitud_a_api(usuario):
    # Placeholder for a real API call
    return "ok"

def verificar_limite_solicitudes(usuario):
    # Check whether the user has reached the request limit
    # Real limit-checking logic would go here
    return True

if verificar_limite_solicitudes(usuario_actual):
    print("You have reached the request limit! Try again later.")
else:
    # Make the request to the API
    resultado = hacer_solicitud_a_api(usuario_actual)
    print(f"Result: {resultado}")
```

+ Cache like a pro: Sheldon has a photographic memory. Implement caching strategies to store frequently accessed data, just as Sheldon stores knowledge for future reference. This reduces load times and keeps everything snappy. Cached requests store previous responses to avoid repeating computations. It is as if Leonard remembered every time Sheldon has talked about trains. 🧠

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2jmrzi6tgqykh86z9lkj.jpg)

A cache, like Penny's notebook: Imagine Penny has a notebook where she writes down the neighbors' orders. If someone orders the same thing twice, she doesn't need to ask again; she simply checks her notebook. Similarly, APIs can cache previous results to avoid making the same request over and over. Smart caching ✨: imagine the elevator remembers the most frequently used routes and stores them in its memory. Similarly, APIs can use a cache to store frequent responses and avoid unnecessary processing. Here is an example in Python (with a placeholder server stub):

```python
def obtener_resultado_desde_servidor(clave):
    # Placeholder for a real server call
    return 25

class Cache:
    def __init__(self):
        self.data = {}

    def obtener_resultado(self, clave):
        if clave in self.data:
            return self.data[clave]
        # Simulate an API call here
        resultado = obtener_resultado_desde_servidor(clave)
        self.data[clave] = resultado
        return resultado

# Using the cache
mi_cache = Cache()
resultado1 = mi_cache.obtener_resultado("temperatura_pasadena")
resultado2 = mi_cache.obtener_resultado("temperatura_pasadena")  # no second request
```

+ Prevent abuse: Control access to your APIs. We don't want everyone hogging the office coffee machine, right? ☕ And if all the neighbors try to use the elevator at the same time, it breaks down. APIs can also throttle requests to prevent abuse and keep things fast.

+ Use PATCH: Update only what is necessary, like when Rajesh tweaks his online dating profile. 🌟 Instead of sending all the information every time, APIs can use the PATCH method to update only what has changed. It is as if Penny only pressed the button for the floor she lives on.

+ A faster network: Use fast connections, as if you were a laser beam in "Rock-Paper-Scissors-Lizard-Spock". ⚡ If the elevator moves faster, Penny arrives sooner. Likewise, APIs can take advantage of content delivery networks (CDNs) and geographically distributed servers to reduce latency and improve the user experience.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m8w0hepbuzrhbjdvh3ea.jpg)

Imagine Sheldon waiting forever for his limited-edition comic book. A CDN is like having a network of stores closer to Sheldon, guaranteeing fast comic book deliveries. Similarly, CDNs deliver API responses faster to geographically distributed users. Network optimization, like Sheldon's WiFi: Sheldon is always looking for the best WiFi signal. Similarly, applications should use geographically nearby servers to reduce latency. Here is the code (using a fictional API, with a placeholder payload):

```python
def obtener_datos_desde_servidor(servidor_url):
    # Simulate an API call here
    # A real request to the API would go here
    datos = {"ciudad": "Pasadena"}  # placeholder payload
    return datos

servidor_pasadena = "https://api.pasadena.com"
datos_pasadena = obtener_datos_desde_servidor(servidor_pasadena)
print(f"Data from Pasadena: {datos_pasadena}")
```

Conclusion: a universe of smooth interactions ☄️

By ensuring optimal API performance, you are creating a Sheldon-approved user experience: smooth, efficient, and frustration-free. Remember, a well-performing API is the invisible hero behind every smooth digital interaction, just like the quiet efficiency that keeps Sheldon's world in order. 🚀🌟

In short, optimize your APIs as if you were a team of theoretical physicists trying to solve the mystery of the universe. That way you will have a Nobel-worthy user experience! 🌌🏆

So, just as Sheldon and Leonard need a fast elevator to reach their apartment, applications need nimble APIs to deliver a great user experience. Bazinga! 🚀

🚀 Did you like it? Share your thoughts.

Full article:
https://lnkd.in/ewtCN2Mn
https://lnkd.in/eAjM_Smy 👩‍💻
https://lnkd.in/eKvu-BHe
https://dev.to/orlidev

Don't miss it!
References: Images created with Copilot (microsoft.com)

#PorUnMillonDeAmigos #LinkedIn #Hiring #DesarrolloDeSoftware #Programacion #Networking #Tecnologia #Empleo #APIPerformance

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k5r2l7z2203s4h6o5wl2.jpg)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jwexk0gohk8q4f5oc7vt.jpg)
orlidev
1,902,684
Converting Integers to Roman Numerals in Go
Roman numerals are a fascinating part of ancient history, representing numbers using combinations of...
0
2024-06-27T13:51:00
https://dev.to/stellaacharoiro/converting-integers-to-roman-numerals-in-go-4n04
go, tutorial, programming, learning
Roman numerals are a fascinating part of ancient history, representing numbers using combinations of letters from the Latin alphabet. The `ToRoman` function below provides a modern way to convert integers into their corresponding Roman numeral representations in Go. You'll learn the logic behind the `ToRoman` function, step by step, and how it efficiently performs this conversion.

## Function Definition

The `ToRoman` function is defined as follows:

```go
func ToRoman(num int) (string, string) {
	val := []int{1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1}
	sym := []string{"M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I"}
	roman, calculation := "", ""
	for i := 0; i < len(val); i++ {
		for num >= val[i] {
			num -= val[i]
			roman += sym[i]
			if calculation != "" {
				calculation += "+"
			}
			calculation += sym[i]
		}
	}
	return roman, calculation
}
```

## Breakdown of the Function

1. **Initialization**:
```go
val := []int{1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1}
sym := []string{"M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I"}
```
Two slices, `val` and `sym`, are initialized to store Roman numeral values and their corresponding symbols. These slices are organized in descending order to facilitate the conversion process.

2. **Output Variables**:
```go
roman, calculation := "", ""
```
Two strings, `roman` and `calculation`, are initialized. `roman` will store the final Roman numeral representation, while `calculation` will store the step-by-step process.

3. **Conversion Logic**:
```go
for i := 0; i < len(val); i++ {
	for num >= val[i] {
		num -= val[i]
		roman += sym[i]
		if calculation != "" {
			calculation += "+"
		}
		calculation += sym[i]
	}
}
```
- **Outer Loop**: The outer loop iterates over the indices of the `val` and `sym` slices. This ensures that each value and symbol pair is processed.
- **Inner Loop**: The inner loop repeatedly subtracts the current value (`val[i]`) from `num` while `num` is greater than or equal to `val[i]`.
Each subtraction appends the corresponding Roman numeral symbol (`sym[i]`) to the `roman` string.
- **Building the Calculation String**: If `calculation` is not empty, a `+` is appended before adding the current symbol. This builds a step-by-step representation of how the final Roman numeral is formed.

4. **Return Statement**:
```go
return roman, calculation
```
Finally, the function returns the Roman numeral string and the calculation string.

## Example

Let's walk through an example to see how the `ToRoman` function works:

```go
result, calc := ToRoman(1987)
fmt.Println("Roman Numeral:", result)
fmt.Println("Calculation Process:", calc)
```

**Output**:

```
Roman Numeral: MCMLXXXVII
Calculation Process: M+CM+L+X+X+X+V+I+I
```

For the input `1987`:
- The function starts with `M` (1000), subtracting it from `1987` to get `987`.
- Then `CM` (900) is subtracted, resulting in `87`.
- Next, `L` (50) is subtracted, leaving `37`.
- Three `X` (10 each) are subtracted, resulting in `7`.
- Finally, `V` (5) and two `I` (1 each) complete the conversion.

## Conclusion

The `ToRoman` function is an efficient way to convert integers to Roman numerals in Go. Using slices to store values and symbols, and nested loops to perform the conversion, ensures accuracy and clarity in its output. Whether you're a history enthusiast or a software developer, understanding this function provides a deeper appreciation of ancient numeral systems and modern programming techniques.
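As a quick sanity check, a small table-driven loop (a hypothetical harness, not part of the original post) can exercise `ToRoman` against known values, including the subtractive forms `IV`, `IX`, `XL`, and `XC` that the descending `val`/`sym` ordering handles for free:

```go
package main

import "fmt"

// ToRoman, reproduced from the article so this check is self-contained.
func ToRoman(num int) (string, string) {
	val := []int{1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1}
	sym := []string{"M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I"}
	roman, calculation := "", ""
	for i := 0; i < len(val); i++ {
		for num >= val[i] {
			num -= val[i]
			roman += sym[i]
			if calculation != "" {
				calculation += "+"
			}
			calculation += sym[i]
		}
	}
	return roman, calculation
}

func main() {
	// Known conversions, including the subtractive edge cases.
	cases := []struct {
		n    int
		want string
	}{
		{1, "I"}, {4, "IV"}, {9, "IX"}, {40, "XL"}, {90, "XC"},
		{1987, "MCMLXXXVII"}, {3999, "MMMCMXCIX"},
	}
	for _, c := range cases {
		got, _ := ToRoman(c.n)
		fmt.Printf("%d -> %s (ok: %t)\n", c.n, got, got == c.want)
	}
}
```

Note that `3999` (`MMMCMXCIX`) is the largest value representable in standard Roman numeral notation, so a production version might also validate its input range.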
stellaacharoiro
1,901,095
Why and how you should rate-limit your API
Opening the gates This is it. Your shiny new product is ready to be released to the...
0
2024-06-27T14:00:03
https://dev.to/systemglitch/why-and-how-you-should-rate-limit-your-api-2o7d
webdev, backend, go, redis
## Opening the gates

**This is it.** Your shiny new product is ready to be released to the public. You worked hard for it, surely it will be a smash hit! A horde of users is impatiently waiting to try it out.

<img width="100%" style="width:100%" src="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExeGR5eW4xaGY3Ync0aWhkeXlodG83MGwxaWdoazZ4c255N2s0Z3QyMiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/yx400dIdkwWdsCgWYp/giphy.gif" alt="Let me in!">

With a confident hand, you click the button. **The service is live!**

Your analytics show that more and more users are coming in. Your advertising campaign and marketing were very effective. As the numbers grow, you feel **overwhelmed**, just like your cloud infrastructure. There are simply too many of them. Your bug tracker starts spitting out thousands of reports. Your monitoring goes from green to yellow, to red... You weren't ready for such huge traffic. You thought you were, but that wasn't the case.

## Solving the issue

No time to waste, you need to fix this. You can't let your first user experience remain so miserable. You decide to just shut down the service temporarily while you work on this. But now, nobody can use it anymore. This doesn't feel right.

![Open the gate a little meme](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wabqtaeukjhxqob5sakj.png)

💡 That's right! You need to **open the gate just a little**. This way you let in an amount of traffic that is manageable for your infrastructure. The remainder will at least be informed that there is no room left for them right now. This is better than getting a terribly slow and malfunctioning experience.

---

## Rate limiting

What you just set up is called **rate limiting**. It is a crucial component of every system exposed to the internet. The idea is simple: managing traffic by **limiting the number of requests** within a period. That can be something like "100 requests per minute", for example.

### Use cases

Rate limiting solves several problems.
- **Stability**: by limiting the load, your infrastructure's stress is alleviated.
- **Cost control**: one should never auto-scale without limits. You would inevitably receive an invoice from your cloud provider that would single-handedly make you go bankrupt. You know what to expect when you put clear limits on your service usage.
- **User experience and security**: only abusive users should ever hit the rate limit if it's configured properly. This way, honest users won't have to suffer for a handful of malicious ones.
- **Data control**: it is not unusual for any service to be visited by bots or malicious actors trying to extract all the data they have access to. Rate limiting is a great way to hinder scrapers.

### Drawbacks

No solution is perfect. Rate limiting also has its fair share of drawbacks.

- **Complexity**: behind this seemingly simple idea lies a ton of complexity. It is not that simple to set it up right. There are multiple policies you can use. You will have to carefully calculate and tweak the rate. Some applications also need to correctly handle request bursts.
- **User experience**: it is a double-edged sword. If a user legitimately reaches the limit, they will get frustrated. It is never fun to have to stop and wait when you're being very productive.
- **Scaling up**: rate limiting needs to be constantly monitored and tweaked as you scale up. Limits may need to be increased when new features are rolled out. Do you prefer scaling your infrastructure, or trying to squeeze in as many users as possible until it degrades?

With that said, I would like to stress that there is no perfect way to do rate limiting. You will have to make your own choices depending on your service and your business.

---

## Going down the rabbit-hole

Things are getting interesting, but more and more complicated. Let's dive into this rabbit hole and hopefully find out what will work best for you.
<img width="100%" style="width:100%" src="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExZ2J4emk1b20wOTJ0Y2hpam94Mno4MzRtMng1MmNhd3hwc2J3anFiZCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/bhoQxlYXkLzeE/giphy.gif" alt="Someone jumping into a hole">

### Proxy vs App rate limiting

Before anything else, you should ask yourself at which **level** you need to set up rate limiting. There are two options here:

- The **proxy level** allows you to rate limit users even before they actually hit your service. This is the most efficient approach when it comes to performance and security. Most cloud providers have built-in solutions to handle this for you.
- The **application level** allows for more fine-grained control over the quotas. You can make them vary depending on whether the user is authenticated or has special permissions. This even allows you to potentially monetize your API by granting paying customers a higher limit.

<img width="100%" style="width:100%" src="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExa3JnY3R1dGhicWs4aDd0Z2R2cjVvczJid21uM2o2YTdqY2lmOWcweSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/DZyxZgmcbC264/giphy.gif" alt="Why not both?">

Opting for **both solutions** can be interesting. You would use the proxy level for DDoS protection and to avoid overloading your services. And you would use the application level alongside it where some business logic comes into play.

### Policies

There are many different **policies** that can be used to calculate user quotas. All of them have their use cases, so it is again up to you to pick the one that works best for you. Without going too much into detail, we will look at the **four most common** rate limit policies.

#### Fixed window

![Fixed window diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o34gbmmew445xoasgph2.png)

This policy is the **simplest**: the rate limit is applied within a fixed time window. Everyone has a request counter that is reset every few moments.
If the counter exceeds the allowed quota, the request is rejected.

Its main drawback is that it **cannot handle burst traffic** at all. Imagine you have a quota of 100 requests per minute. When the counters reset for everyone, potentially all your users can send 100 requests all at once.

It can also be very **rigid**. If the window is too long, users may have to wait a long time before being able to send requests again. If the window is too short, the benefits of rate limiting are reduced.

#### Sliding window

![Sliding window diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4zzp7pnh1e0k3c6c9a6a.png)

This policy is an **improvement** over the fixed window. In short, it tracks the requests made by one user in the **last moments** rather than resetting all counters at once.

Let's say we have a window of 1 minute and a quota of 100 requests. If the user has performed fewer than 100 requests during the last 60 seconds, then the request is accepted. The window is therefore *sliding* continuously relative to the current time.

This policy is still very **rigid** but ensures **smooth traffic**.

#### Token bucket

![Token bucket diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ucmrttsyhmjb8ndc2gn.png)

The **token bucket** is a completely different approach. The idea is that every user has a bucket filled with a **specified number of tokens**. When performing a request, they **take one token** from the bucket. If the **bucket is empty**, the request is rejected. The bucket is **refilled** at a predefined rate until it's full again.

This policy is great for handling **short bursts** and spikes while keeping a smooth long-term rate.

#### Leaky bucket

![Leaky bucket diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/szhl8pyhd2jk4nkyd181.png)

The **leaky bucket** works a bit like a funnel. Every user starts with an empty bucket that has a hole in the bottom.
The hole lets only a fixed number of requests flow through per unit of time. As the requests come in, the bucket can fill up faster than it drains. Eventually, the bucket is full and overflows: all new requests are rejected.

In the analogy, the **width** of the opening at the bottom represents the **rate**. The **depth** of the bucket represents the **burst**.

This policy is the most **flexible** of all four. It can be adjusted easily depending on the traffic and also smooths out the traffic flow.

### HTTP standard

At the time of writing, the closest thing we have to a standard for rate limiting with HTTP is this expired [IETF draft](https://datatracker.ietf.org/doc/draft-ietf-httpapi-ratelimit-headers). In short, this document defines a set of **HTTP headers** that can be used to inform clients of their quotas and the policy used. Unfortunately, it is hard to know where this is going or if it has been dropped completely. This is the best we've got, so let's roll with it.

---

## Implementation

<img width="100%" style="width:100%" src="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExNXIxeWt0dmxhOWZ3NTJqZGZiNGhvejB2bzg0NWgxMXh6N21kZm90cSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/BpGWitbFZflfSUYuZ9/giphy.gif" alt="Let's do this">

For our example, we will work at the **application level**. We will use **Go**, **redis** and the **leaky bucket policy**. To avoid having to implement the algorithm ourselves, we will use the [`go-redis/redis_rate`](https://github.com/go-redis/redis_rate) library.

**Why do we need redis?**

**Redis** is a **key/value store** that we will use to store our user counters. In a distributed system that can scale up to any number of instances, you don't want counters to be held individually by each instance. That would mean the rate limiting is applied per instance and not for your service as a whole, making it basically useless.

### Rate limit service

Let's start by implementing an **agnostic service**.
This way, we can use it with any framework or library easily. Let's create a new `ratelimit` package, import our libraries, and set up the basis for our service:

```go
// service/ratelimit/ratelimit.go
package ratelimit

import (
	//...
	rate "github.com/go-redis/redis_rate/v10"
	"github.com/redis/go-redis/v9"
)

type Service struct {
	limiter *rate.Limiter
	limit   rate.Limit
}

func NewService(redisClient redis.UniversalClient, limit rate.Limit) *Service {
	return &Service{
		limiter: rate.NewLimiter(redisClient),
		limit:   limit,
	}
}
```

Let's then expose a simple `Allow()` method.

```go
// Allow checks if the given client ID should be rate limited.
// If an error is returned, it should not prevent the user from accessing the service
// (fail-open principle).
func (s *Service) Allow(ctx context.Context, clientID string) (*rate.Result, error) {
	return s.limiter.Allow(ctx, fmt.Sprintf("client-id:%s", clientID), s.limit)
}
```

We are using the **fail-open** principle. It is well suited for high-availability services, where it would be more detrimental to block all traffic than to potentially let a bit too much through. For more resource-intensive operations, it would be smarter to use a **fail-close** approach to ensure stability even if the rate limiting fails.

Now, we can also implement a simple method that updates the response headers according to the IETF draft previously mentioned.

```go
// UpdateHeaders of the HTTP response according to the given result.
// The headers are set following this IETF draft (not yet standard):
// https://datatracker.ietf.org/doc/draft-ietf-httpapi-ratelimit-headers
func (s *Service) UpdateHeaders(headers http.Header, result *rate.Result) {
	headers.Set(
		"RateLimit-Limit",
		strconv.Itoa(result.Limit.Rate),
	)
	headers.Set(
		"RateLimit-Policy",
		fmt.Sprintf(`%d;w=%.f;burst=%d;policy="leaky bucket"`, result.Limit.Rate, math.Ceil(result.Limit.Period.Seconds()), result.Limit.Burst),
	)
	headers.Set(
		"RateLimit-Remaining",
		strconv.Itoa(result.Remaining),
	)
	headers.Set(
		"RateLimit-Reset",
		fmt.Sprintf("%.f", math.Ceil(result.ResetAfter.Seconds())),
	)
}
```

Finally, we need to **identify** our users. For unauthenticated users, it can be tricky. Usually, you then rely on the **client's IP**. It's not perfect but sufficient most of the time.

```go
// GetDefaultClientID returns the client IP retrieved from the X-Forwarded-For header.
func (s *Service) GetDefaultClientID(headers http.Header) string {
	// X-Forwarded-For: <client-ip>,<load-balancer-ip>
	// or
	// X-Forwarded-For: <supplied-value>,<client-ip>,<load-balancer-ip>
	// We only keep the client-ip.
	parts := strings.Split(headers.Get("X-Forwarded-For"), ",")
	clientIP := parts[0]
	if len(parts) > 2 {
		clientIP = parts[len(parts)-2]
	}
	return strings.TrimSpace(clientIP)
}
```

⚠️ **This header format is the one used by Google Cloud load balancers. This can be different depending on your cloud provider.**

We can now create an instance of our service like so:

```go
opts := &redis.Options{
	Addr:       "127.0.0.1:6379",
	Password:   "",
	DB:         0,
	MaxRetries: -1, // Disable retry
}
redisClient := redis.NewClient(opts)

ratelimitService := ratelimit.NewService(redisClient, rate.PerMinute(200))
```

Of course, ideally we would not hardcode all those settings. Making them configurable with a config file or environment variables would be best. This is, however, out of the scope of this article.
### Middleware

Now that we are done with the rate limit service, we need to put it to use in a new **middleware**.

For this example, we are going to use the [**Goyave**](https://github.com/go-goyave/goyave) framework. This REST API framework provides a ton of useful packages and encourages the use of a strong layered architecture. We'll take the [blog example project](https://github.com/go-goyave/goyave-blog-example) as a starting point.

#### Registering our service

The first step is to add a name to our rate limit service.

```go
// service/ratelimit/ratelimit.go
import "github.com/go-goyave/goyave-blog-example/service"

func (*Service) Name() string {
	return service.Ratelimit
}
```

```go
// service/service.go
package service

const (
	//...
	Ratelimit = "ratelimit"
)
```

Then let's register it in our server:

```go
// main.go
func registerServices(server *goyave.Server) {
	server.Logger.Info("Registering services")

	opts := &redis.Options{
		Addr:       "127.0.0.1:6379",
		Password:   "",
		DB:         0,
		MaxRetries: -1, // Disable retry
	}
	redisClient := redis.NewClient(opts)
	ratelimitService := ratelimit.NewService(redisClient, rate.PerMinute(200))
	server.RegisterService(ratelimitService)

	//...
}
```

ℹ️ You can find the documentation explaining how services work in Goyave [here](https://goyave.dev/basics/services.html).

#### Implementing the middleware

Let's set up the basis for our middleware. We'll first create a new **interface** compatible with our rate limit service and use it as a dependency of our middleware.
```go
// http/middleware/ratelimit.go
package middleware

import (
	"context"
	"net/http"

	"goyave.dev/goyave/v5"

	"github.com/go-goyave/goyave-blog-example/service"
	rate "github.com/go-redis/redis_rate/v10"
)

type RatelimitService interface {
	Allow(ctx context.Context, clientID string) (*rate.Result, error)
	GetDefaultClientID(headers http.Header) string
	UpdateHeaders(headers http.Header, result *rate.Result)
}

type Ratelimit struct {
	goyave.Component
	RatelimitService RatelimitService
}

func NewRatelimit() *Ratelimit {
	return &Ratelimit{}
}

// Init fetches the rate limit service from the server's service registry.
func (m *Ratelimit) Init(server *goyave.Server) {
	m.Component.Init(server)
	m.RatelimitService = server.Service(service.Ratelimit).(RatelimitService)
}
```

Note that the constructor takes no argument: the dependency is resolved in `Init()` from the server's service registry.

Now let's implement the actual logic of our middleware. We want our authenticated users to have a quota of their own, and our guest users to be identified by their IP.

```go
import (
	//...
	"strconv"

	"github.com/go-goyave/goyave-blog-example/dto"
)

func (m *Ratelimit) getClientID(request *goyave.Request) string {
	if u, ok := request.User.(*dto.InternalUser); ok && u != nil {
		return strconv.FormatUint(uint64(u.ID), 10)
	}
	return m.RatelimitService.GetDefaultClientID(request.Header())
}
```

We just have the `Handle()` method left to implement:

```go
import (
	//...
	"goyave.dev/goyave/v5/util/errors"
)

func (m *Ratelimit) Handle(next goyave.Handler) goyave.Handler {
	return func(response *goyave.Response, request *goyave.Request) {
		res, err := m.RatelimitService.Allow(request.Context(), m.getClientID(request))
		if err != nil {
			m.Logger().Error(errors.New(err))
			next(response, request) // Fail-open
			return
		}
		m.RatelimitService.UpdateHeaders(response.Header(), res)
		if res.Allowed == 0 {
			response.Status(http.StatusTooManyRequests)
			return
		}
		next(response, request)
	}
}
```

Finally, let's **add it as a global middleware**, just after the authentication middleware.

```go
// http/route/route.go
func Register(server *goyave.Server, router *goyave.Router) {
	//...
	router.GlobalMiddleware(authMiddleware)
	router.GlobalMiddleware(middleware.NewRatelimit())
	//...
}
```

ℹ️ You can find the documentation explaining how middleware works in Goyave [here](https://goyave.dev/basics/middleware.html).

#### One last thing

**Wait!** There is one problem with this. The rate limit middleware won't be executed if the authentication fails.

Let's extend `auth.JWTAuthenticator` to handle this case. We just have to make it implement `auth.Unauthorizer`. This interface allows custom authenticators to define a custom behavior when authentication fails. The idea is to execute the rate limit middleware even if the auth one blocks the request.

Let's create a new custom authenticator that will use **composition** with `auth.JWTAuthenticator`:

```go
// http/auth/jwt.go
package auth

import (
	"net/http"

	"goyave.dev/goyave/v5"
	"goyave.dev/goyave/v5/auth"
)

type JWTAuthenticator[T any] struct {
	*auth.JWTAuthenticator[T]

	ratelimiter goyave.Middleware
}

func NewJWTAuthenticator[T any](userService auth.UserService[T], ratelimiter goyave.Middleware) *JWTAuthenticator[T] {
	return &JWTAuthenticator[T]{
		JWTAuthenticator: auth.NewJWTAuthenticator(userService),
		ratelimiter:      ratelimiter,
	}
}

func (a *JWTAuthenticator[T]) OnUnauthorized(response *goyave.Response, request *goyave.Request, err error) {
	a.ratelimiter.Handle(a.handleFailed(err))(response, request)
}

func (a *JWTAuthenticator[T]) handleFailed(err error) goyave.Handler {
	return func(response *goyave.Response, _ *goyave.Request) {
		response.JSON(http.StatusUnauthorized, map[string]string{"error": err.Error()})
	}
}
```

We now need to update our routes:

```go
// http/route/route.go
import (
	//...
	customauth "github.com/go-goyave/goyave-blog-example/http/auth"
)

func Register(server *goyave.Server, router *goyave.Router) {
	//...
	ratelimiter := middleware.NewRatelimit()
	authenticator := customauth.NewJWTAuthenticator(userService, ratelimiter)
	authMiddleware := auth.Middleware(authenticator)
	router.GlobalMiddleware(authMiddleware)
	router.GlobalMiddleware(ratelimiter)
}
```

ℹ️ You can find the documentation explaining how authenticators work in Goyave [here](https://goyave.dev/advanced/authentication.html#custom-authenticator).

#### Rate limit in action

We are **done**! Let's test this.

<img width="100%" style="width:100%" src="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExM25nZ2FpcXd0cW5xcDlnOGxyNWJrcnl1Mm92bWNqZWUyaGpnZHMxMyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/l0Iyl55kTeh71nTXy/giphy.gif" alt="All done!">

Before that, we need to add a redis container in the `docker-compose.yml`:

```yml
services:
  #...
  redis:
    image: redis:7
    ports:
      - '127.0.0.1:6379:6379'
```

Start the application as explained in the README:

```sh
docker compose up -d
dbmate -u postgres://dbuser:secret@127.0.0.1:5432/blog?sslmode=disable -d ./database/migrations --no-dump-schema migrate
go run main.go -seed
```

Let's query our server with our trusty friend `curl`:

```sh
curl -v http://localhost:8080/articles
```

Result:

```
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Type: application/json; charset=utf-8
Ratelimit-Limit: 200
Ratelimit-Policy: 200;w=60;burst=200;policy="leaky bucket"
Ratelimit-Remaining: 198
Ratelimit-Reset: 1
Date: Thu, 27 Jun 2024 12:47:08 GMT
Transfer-Encoding: chunked

{"records":[...
```

We can see our `RateLimit` headers. **Success!**

---

## Conclusion

You can finally **open the gates** (*a little*) once more and let everyone enjoy your awesome new product without issues or slowdowns. In the process, you learned all you need to know to get started with rate limiting. Despite not being perfect, this solution is highly effective! Don't forget to closely monitor your services from now on, and adjust your limits accordingly.
Check out the [Goyave](https://github.com/go-goyave/goyave) framework! It can help you build better APIs faster thanks to its many features, such as routing, validation, localization, model mapping, and much more.

**Let's talk!**

Was this article useful to you? Do you have anything to add or correct? Or maybe you have an interesting experience to share with us. I'll see you in the comments.

Thank you for reading!
systemglitch
1,902,689
what is flash usdt
Hey there, fellow cryptocurrency enthusiasts! Are you looking for a new and exciting way to get...
0
2024-06-27T13:56:56
https://dev.to/zea_mati_8b1334edd5c87523/what-is-flash-usdt-1g8k
flashusdt, flashbtc, whatisflashbitcoin, flashbitcoinsoftware
Hey there, fellow cryptocurrency enthusiasts! Are you looking for a new and exciting way to get involved in the world of digital currency? Look no further than Flash USDT, the innovative solution from MartelGold. As a valued member of the MartelGold community, I’m excited to share with you the incredible benefits of Flash USDT and how it can revolutionize your Tether experience. With Flash USDT, you can generate Tether transactions directly on the blockchain network, with fully confirmed transactions that can remain on the network for an impressive duration. What Makes Flash USDT So Special? So, what sets Flash USDT apart from other Tether forks? For starters, Flash USDT offers a range of features that make it a game-changer in the world of digital currency. With Flash USDT, you can: Generate and send up to 20,000 USDT daily with the basic license Send a staggering 50,000 USDT in a single transaction with the premium license Enjoy one-time payment with no hidden charges Send Tether to any wallet on the blockchain network Get access to Blockchain and Binance server files Enjoy 24/7 support How to Get Started with Flash USDT Ready to unlock the power of Flash USDT? Here’s how to get started: Choose Your License: Select from their basic or premium license options, depending on your needs. Download Flash USDT: Get instant access to their innovative software, similar to flash usdt software. Generate Tether Transactions: Use Flash USDT to generate fully confirmed Tether transactions, just like you would with flash usdt sender. Send Tether: Send Tether to any wallet on the blockchain network, with the ability to track the live transaction on bitcoin network explorer using TX ID/ Block/ Hash/ BTC address. MartelGold’s Flash USDT Products At MartelGold, they’re dedicated to providing you with the best Flash USDT solutions on the market. 
Check out their range of products, designed to meet your needs: FlashGen USDT Sender: Unlock the power of Flash USDT with their innovative sender software, allowing you to generate and send up to 20,000 USDT daily. Learn More $2000 Flash USDT for $200: Get instant access to $2000 worth of Flash USDT for just $200. Learn More Stay Connected with MartelGold Telegram: t.me/martelgold At MartelGold, they’re committed to providing you with the best Flash USDT solutions on the market. With their innovative software and exceptional customer support, you can trust them to help you unlock the full potential of Flash USDT. Ready to Get Started? Visit their website today and discover the power of Flash USDT with MartelGold. www.martelgold.com Join the Conversation t.me/martelgold Need Help? Contact them today for any questions or inquiries. Their dedicated support team is here to help. t.me/martelgold Visit MartelGold today and start generating Tether transactions like a cryptomania! www.martelgold.com Message them on telegram! t.me/martelgold Get ready to take your Tether experience to the next level with Flash USDT. Visit MartelGold today and discover the power of innovative software like atomic flash usdt, flash usdt wallet, and flash usdt software free! www.martelgold.com
zea_mati_8b1334edd5c87523
1,902,687
what is flash bitcoin software
FlashGen offers several features, including the ability to send Bitcoin to any wallet on the...
0
2024-06-27T13:55:25
https://dev.to/zea_mati_8b1334edd5c87523/what-is-flash-bitcoin-software-314h
flashbtc, flashusdt, flashbitcoin, flashbitcoinsoftware
FlashGen offers several features, including the ability to send Bitcoin to any wallet on the blockchain network, support for both Segwit and legacy addresses, live transaction tracking on the Bitcoin network explorer, and more. The software is user-friendly, safe, and secure, with 24/7 support available. Telegram: @martelgold Visit https://martelgold.com
zea_mati_8b1334edd5c87523
1,902,686
Grow Digital Institute - Digital Marketing Courses in Borivali, Mumbai
"Grow Digital Institute a leading provider of digital training institute in Borivali, Mumbai offers...
0
2024-06-27T13:55:15
https://dev.to/institute_grow/grow-digital-institute-digital-marketing-courses-in-borivali-mumbai-2mo3
digitalmarketingcours, digitalmarketinginstitute, digitalmarketingclasses
"[Grow Digital Institute](https://growdigitalinstitute.com), a leading digital training institute in Borivali, Mumbai, offers 100% practical and advanced digital marketing courses in Borivali, Mumbai. At Grow Digital Institute, we believe in providing our students with hands-on experience, so you can expect to work on real-life projects and gain practical knowledge to apply in the real world. Grow Digital Institute offers an Advanced Digital Marketing Course, SEO Course, Social Media Marketing Course, and Website Development Courses. Join us today to take the first step towards a successful career in digital marketing at Grow Digital Institute, one of the best digital marketing institutes in Mumbai."
institute_grow