1,907,252
Build a Responsive Modern Website with Next.js 14, TypeScript, and Tailwind CSS
This project is a responsive modern website built with Next.js, TypeScript, and Tailwind CSS. ...
0
2024-07-01T06:41:07
https://dev.to/sudhanshuambastha/build-a-responsive-modern-website-with-nextjs14-typescript-and-tailwind-css-10pn
nextjs, tailwindcss, typescript, react
This project is a responsive modern website built with Next.js, TypeScript, and Tailwind CSS.

## Project Overview

This project is designed to provide a robust and flexible template for building responsive and modern websites. It leverages the power of Next.js for server-side rendering, TypeScript for type safety, and Tailwind CSS for utility-first styling. The primary goal is to offer a seamless development experience while ensuring high performance and scalability.

- GitHub repository: [Responsive Modern Website](https://github.com/Sudhanshu-Ambastha/Responsive-Modern-Website)

## Technologies Used

The project leverages the following technologies for seamless development and styling.

[![My Skills](https://skillicons.dev/icons?i=nodejs,npm,react,ts,next,tailwind)](https://skillicons.dev)

## Components

The project includes several reusable components:

- Button: A reusable button component with optional icons and variants.
- Camp: A component to display camping site information.
- Features: A component to list the features of the application.
- Footer: The footer of the website containing links and contact information.
- GetApp: A section to promote downloading the app.
- Guide: A component to guide users.
- Hero: The hero section of the homepage.
- Navbar: The navigation bar of the website.

By leveraging these technologies and structures, this project offers an engaging user experience while effectively showcasing your content. Create an impact with React, Tailwind CSS, and Next.js for your next web development endeavor!

I made this app with the help of a YouTube tutorial to learn about Next.js and Tailwind CSS functionality with React.js. This repository has received _2 stars_, _6 clones_, and _124 views_. Experience the seamless integration of Tailwind CSS into my web projects by exploring this project! Dive into the rich functionalities crafted with React, Tailwind CSS, and Next.js.
While many have cloned my projects, only a few have shown their interest by giving them a star. **Plagiarism is bad**; if you do copy the project, please at least consider giving it a star. Share your feedback and questions in the comments section as you explore this repository.
sudhanshuambastha
1,907,250
GitOps: Streamlining Kubernetes Application Deployment with GitLab CI/CD, Helm Charts, and ArgoCD
In the realm of modern software development and deployment practices, GitOps has emerged as a robust...
0
2024-07-01T06:38:29
https://dev.to/pankaj892/gitops-streamlining-kubernetes-application-deployment-with-gitlab-cicd-helm-charts-and-argocd-685
In the realm of modern software development and deployment practices, GitOps has emerged as a robust methodology for managing Kubernetes applications efficiently. This approach leverages Git as the single source of truth for declarative infrastructure and application code, ensuring consistency, traceability, and collaboration across development teams. In this blog post, we'll delve into the core concepts of GitOps and explore how GitLab CI/CD, Helm Charts, and ArgoCD synergistically enable streamlined application deployment on Kubernetes.

#### Understanding GitOps

GitOps represents a paradigm shift towards managing infrastructure and applications through version-controlled repositories, typically using Git. The key principles of GitOps include:

1. **Declarative Configuration**: Infrastructure and application state are described declaratively and stored as code in Git repositories.
2. **Version Control**: Git provides a versioned history of changes, enabling rollbacks, audits, and collaboration among team members.
3. **Automation**: Continuous Integration/Continuous Deployment (CI/CD) pipelines automate the deployment process, triggered by Git repository events.
4. **Observability and Monitoring**: GitOps encourages observability by integrating monitoring and alerting tools with CI/CD pipelines to ensure reliability and performance.

#### GitLab CI/CD: Automating Builds and Deployments

GitLab CI/CD plays a pivotal role in the GitOps workflow by automating build, test, and deployment processes directly from GitLab repositories. Here's how it works:

- **Pipeline Configuration**: Developers define CI/CD pipelines using `.gitlab-ci.yml` files, specifying stages such as build, test, and deploy.
- **Triggering Deployments**: Changes pushed to specific branches or tags trigger pipeline executions, ensuring that deployments are automatically synchronized with code changes.
- **Integration with Kubernetes**: GitLab integrates seamlessly with Kubernetes clusters, enabling deployment of Helm Charts and other Kubernetes resources directly from CI/CD pipelines.

#### Helm Charts: Packaging Kubernetes Applications

Helm is a package manager for Kubernetes that simplifies the deployment and management of applications. Helm Charts encapsulate Kubernetes manifests, making it easier to define, version, and share complex application configurations.

- **Chart Repositories**: Helm Charts are stored in repositories and referenced in CI/CD pipelines for consistent deployment across environments.
- **Parameterization**: Helm Charts support templating and parameterization, allowing customization of configurations for different environments or deployment scenarios.

#### ArgoCD: Continuous Deployment and GitOps

ArgoCD is a GitOps continuous delivery tool for Kubernetes that ensures applications are deployed and maintained consistently across clusters.

- **Declarative GitOps Workflows**: ArgoCD continuously monitors Git repositories for changes and reconciles them with the desired state defined in Helm Charts or Kubernetes manifests.
- **Automatic Synchronization**: Any changes to the Git repository trigger automatic synchronization and deployment updates to Kubernetes clusters, ensuring consistency and reliability.
- **Rollback and Versioning**: ArgoCD provides rollbacks to previous versions and maintains an audit trail of deployments, enhancing traceability and resilience.

### Conclusion

GitOps, powered by GitLab CI/CD, Helm Charts, and ArgoCD, represents a transformative approach to Kubernetes application deployment. By centralizing configuration management, automating deployment workflows, and enhancing observability, organizations can achieve greater efficiency, reliability, and collaboration in their DevOps practices.
Embracing GitOps not only streamlines deployment processes but also fosters a culture of continuous improvement and innovation in modern software development teams. In summary, GitOps isn't just a methodology but a fundamental shift towards more efficient and reliable Kubernetes operations, leveraging the power of Git and modern CI/CD tools to drive application deployment and management forward.
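To make the moving parts above concrete, here is a minimal, hypothetical ArgoCD `Application` manifest that points ArgoCD at a Git repository containing a Helm chart. The repository URL, chart path, and namespaces are placeholders, not from any real setup; it is a sketch, not a production configuration.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp                  # placeholder application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/team/myapp-config.git  # placeholder config repo
    targetRevision: main
    path: chart                # directory holding the Helm chart
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:                 # sync automatically when Git changes
      prune: true              # delete resources removed from Git
      selfHeal: true           # revert manual drift in the cluster
```

With `syncPolicy.automated` set, every merge to `main` in the config repository is reconciled into the cluster without a manual sync step, which is exactly the Git-as-single-source-of-truth loop described above.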
pankaj892
1,907,249
Share File Across Devices without Internet or USB
A post by IamSh
0
2024-07-01T06:38:25
https://dev.to/banmyaccount/share-file-across-devices-without-internet-or-usb-562j
{% youtube https://www.youtube.com/watch?v=oe-907slDIg&t=30s&ab_channel=ShadeTech %}
banmyaccount
1,907,245
Go or Python? Which language is used more in production software?
I think that Go is better than Python. And that's my own opinion. But I was wondering if people are...
0
2024-07-01T06:36:04
https://abanoubhanna.com/posts/go-vs-python-production-software/
go, python
I think that Go is better than Python. And that's my own opinion. But I was wondering if people are using Go or Python for their production software in the real world. I want to get statistics of production software to compare and understand the real world.

## Source of statistics

In a [previous post](https://abanoubhanna.com/posts/go-vs-rust-use-production/), I used Homebrew as the source of statistics about apps written in each programming language. Homebrew is a package manager for macOS and Linux distros. Homebrew provides a [JSON file](https://formulae.brew.sh/api/formula.json) with the index of all packages and apps in Homebrew Core Formulae. So I created a [simple CLI application](https://github.com/abanoubha/gobrew) written in Go that uses that updated JSON file to count the apps using a specific programming language.

## Statistics by gobrew

| language | June 2 | July 1 |
|:---------|:-------|:-------|
| Go       | 988    | 998    |
| Python   | 777    | 783    |
| Cython   | 11     | 11     |

According to these statistics, Go is used for more production software than Python.

## Why is Go used more than Python?

| comparison             | Go          | Python / Cython                         |
|:-----------------------|:------------|:----------------------------------------|
| compilation speed      | very fast   | Python: not compiled, Cython: very slow |
| runtime performance    | very fast   | Python: slow, Cython: good              |
| concurrency support    | great       | bad                                     |
| libraries & frameworks | good enough | huge                                    |
| readability            | good        | good                                    |

As you can see, Go scores well on every factor listed, while Python is great at some things and weak at others. Programming languages are just tools, so, overall, I personally choose Go every time as the better tool to get the job done. Most software developers choose Go over Python, as you can conclude from the statistics above.

I hope you enjoyed reading this post as much as I enjoyed writing it. If you know a person who can benefit from this information, send them a link to this post.
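The counting step can be sketched roughly as follows. This is a simplified Python stand-in for the gobrew tool, and the `language` key is a hypothetical simplification: the real formula.json does not label languages directly, it encodes them indirectly (for example via build dependencies), which is what a real counter has to inspect.

```python
from collections import Counter

def count_by_language(formulae):
    # formulae: list of dicts, one per package entry.
    # "language" is an assumed, simplified field for illustration only.
    return Counter(f.get("language", "unknown") for f in formulae)

sample = [
    {"name": "app-a", "language": "go"},
    {"name": "app-b", "language": "python"},
    {"name": "app-c", "language": "go"},
]
counts = count_by_language(sample)
print(counts["go"], counts["python"])  # 2 1
```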
If you want to get notified about new posts, follow me on [YouTube](https://www.youtube.com/@AbanoubHA?sub_confirmation=1), [Twitter (x)](https://x.com/abanoubha), [LinkedIn](https://linkedin.com/in/abanoub-hanna/), and [GitHub](https://github.com/abanoubha).
abanoubha
1,907,247
Eventify
A sea of information awaits the attendees! Organize your events in the event management platform...
0
2024-07-01T06:35:52
https://dev.to/lisa04/eventify-6fa
event, app, ticketing
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wwu9ciw345pjlc2fek39.png)

A sea of information awaits the attendees! Organize your events in the [event management platform](https://eventify.io/) like a pro with Eventify’s exclusive Event Guide feature. Learn more at: https://eventify.io/event-guide
lisa04
1,907,246
Why is Solana devnet not working?
why solana devnet is not working getting errors while creating a transaction or verifying...
0
2024-07-01T06:35:37
https://dev.to/nathan_tran_03d39eb518141/why-solana-devnet-is-not-working-1fpj
Why is the Solana devnet not working? I am getting errors while creating a transaction or verifying transactions. What is the issue, and when will it be fixed? The Solana devnet cluster is not working.
nathan_tran_03d39eb518141
1,907,243
How to Get Started with Open Source Contributions
Contributing to open source projects is a rewarding way to improve your skills, collaborate with...
0
2024-07-01T06:34:15
https://raajaryan.tech/how-to-get-started-with-open-source-contributions
opensource, beginners, tutorial, github
Contributing to open source projects is a rewarding way to improve your skills, collaborate with others, and give back to the community. Whether you’re a seasoned developer or just starting, open source contributions can enhance your professional profile and broaden your horizons. This guide will walk you through the steps of contributing to open source projects, from finding the right project to making your first contribution.

## 1. Understanding Open Source

### What is Open Source?

Open source software is software with source code that anyone can inspect, modify, and enhance. It’s built collaboratively by a community of developers, and its open nature means anyone can contribute.

### Why Contribute to Open Source?

- **Skill Development**: Enhance your coding, problem-solving, and project management skills.
- **Networking**: Connect with other developers and industry experts.
- **Portfolio Building**: Showcase your contributions and experience.
- **Community Impact**: Help improve software used by people worldwide.

## 2. Finding the Right Project

### Assess Your Interests and Skills

- **Interests**: Choose projects that align with your passions or interests.
- **Skills**: Consider your current skill set and areas where you want to grow.

### Explore Open Source Platforms

- **GitHub**: The largest platform for open source projects.
- **GitLab**: Similar to GitHub with a focus on DevOps and CI/CD.
- **Bitbucket**: Atlassian’s Git repository hosting service.

### Search for Projects

- Use keywords related to your interests and skills.
- Look for tags like `good first issue` or `beginner-friendly`.

### Evaluate Project Health

- **Activity**: Active projects with regular commits and issue responses.
- **Community**: Engaged community with helpful maintainers and contributors.
- **Documentation**: Comprehensive documentation for easy onboarding.

## 3. Getting Started with a Project

### Fork and Clone the Repository

1. **Fork** the repository to create your copy.
2. **Clone** the forked repository to your local machine:

```bash
git clone https://github.com/your-username/project-name.git
```

### Set Up the Project

- Follow the project’s setup instructions.
- Install dependencies and configure your development environment.

### Explore the Codebase

- Understand the project structure.
- Identify the key components and their interactions.

## 4. Making Your First Contribution

### Find an Issue to Work On

- Start with issues labeled `good first issue` or `beginner-friendly`.
- Read the issue description and comments to understand the problem.

### Communicate with Maintainers

- Comment on the issue to express your interest.
- Ask for clarification if needed.

### Create a Branch

- Create a new branch for your work:

```bash
git checkout -b issue-123-description
```

### Make and Test Your Changes

- Write clean, readable, and well-documented code.
- Test your changes thoroughly.

### Commit Your Changes

- Write clear and descriptive commit messages:

```bash
git commit -m "Fix issue #123: Add feature X"
```

### Push Your Changes

- Push your branch to your forked repository:

```bash
git push origin issue-123-description
```

### Create a Pull Request

- Navigate to the original repository on GitHub.
- Click on `Compare & pull request`.
- Provide a detailed description of your changes.
- Submit the pull request for review.

## 5. Participating in the Community

### Follow Contribution Guidelines

- Adhere to the project’s coding standards and contribution guidelines.
- Respect the project’s code of conduct.

### Engage with Other Contributors

- Review and comment on other pull requests.
- Participate in discussions and meetings.

### Be Patient and Open to Feedback

- Maintain a positive attitude.
- Be open to feedback and ready to make revisions.

## 6. Tips for Successful Contributions

### Start Small

- Begin with documentation updates, bug fixes, or small features.
- Gradually take on more complex tasks.
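Before your first small contribution, it can help to rehearse the branch-and-commit flow from section 4 against a throwaway local repository, where nothing can go wrong. The repository and branch names below are just examples:

```shell
# Rehearse the branch -> commit flow in a disposable local repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email "you@example.com"
git config user.name "Your Name"
echo "# demo" > README.md
git add README.md
git commit -qm "Initial commit"
git checkout -qb issue-123-description   # topic branch for the issue
echo "fix" >> README.md
git commit -qam "Fix issue #123: add feature X"
git branch --show-current                # prints: issue-123-description
```

Once the flow feels natural here, the only new steps on a real project are the fork at the start and the pull request at the end.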
### Write Good Documentation

- Improve existing documentation or add new sections.
- Create tutorials, examples, or guides.

### Improve Test Coverage

- Write unit tests for untested parts of the codebase.
- Ensure new features are thoroughly tested.

### Maintain a Consistent Workflow

- Keep your fork and branches up to date with the upstream repository:

```bash
git fetch upstream
git rebase upstream/main
```

### Learn from Code Reviews

- Analyze feedback and incorporate it into your future contributions.
- Review other contributors’ code to learn new techniques and best practices.

## 7. Advanced Contributions

### Propose New Features

- Discuss your ideas with maintainers before implementation.
- Create detailed proposals or design documents.

### Refactor Code

- Identify areas for improvement or optimization.
- Ensure refactoring doesn’t introduce bugs.

### Become a Maintainer

- Consistently contribute high-quality code.
- Take on more responsibilities, such as reviewing pull requests and triaging issues.

## 8. Resources and Tools

### Learning Platforms

- **freeCodeCamp**: Offers free coding tutorials and exercises.
- **Coursera**: Provides courses on various programming topics.

### Communication Tools

- **Slack**: Popular for project-specific communication.
- **Discord**: Used by many open source communities for real-time chat.

### Documentation Tools

- **Markdown**: Standard format for writing documentation.
- **Jekyll**: Static site generator for project documentation.

### Code Quality Tools

- **ESLint**: Linter for JavaScript.
- **Prettier**: Code formatter.

## 9. Overcoming Challenges

### Imposter Syndrome

- Understand that everyone starts somewhere.
- Celebrate your progress and contributions.

### Time Management

- Balance open source contributions with personal and professional responsibilities.
- Set realistic goals and prioritize tasks.

### Technical Challenges

- Break down complex problems into smaller tasks.
- Seek help from the community when stuck.
## 10. Conclusion

Contributing to open source is a fulfilling journey that offers numerous benefits, from skill development to community engagement. By following this guide, you can confidently navigate the process of finding, contributing to, and thriving in open source projects. Remember, every contribution counts, no matter how small, and your efforts can make a significant impact on the software and the community.

If you are interested in contributing to my project, check out [ULTIMATE-JAVASCRIPT-PROJECT](https://github.com/deepakkumar55/ULTIMATE-JAVASCRIPT-PROJECT).

## 💰 You Can Help Me by Donating

[![BuyMeACoffee](https://img.shields.io/badge/Buy%20Me%20a%20Coffee-ffdd00?style=for-the-badge&logo=buy-me-a-coffee&logoColor=black)](https://buymeacoffee.com/dk119819)

Start your open source journey today and become a part of a global movement that promotes collaboration, innovation, and continuous learning. Happy contributing!
raajaryan
1,907,237
Generative AI Dataset Generator App with Streamlit and Lyzr
In today’s data-driven world, generating realistic datasets is essential for testing, training...
0
2024-07-01T06:26:24
https://dev.to/harshitlyzr/generative-ai-dataset-generator-app-with-streamlit-and-lyzr-6no
In today’s data-driven world, generating realistic datasets is essential for testing, training machine learning models, and conducting meaningful analysis. To streamline this process, we present a Streamlit app that leverages the power of Lyzr Automata, a framework that simplifies building and managing AI-driven workflows. This blog will guide you through creating a Dataset Generator app using Lyzr Automata, OpenAI’s GPT models, and Streamlit.

**Problem Statement**

Creating datasets manually can be time-consuming and prone to errors, especially when the data needs to be diverse and realistic. Automating dataset generation ensures consistency, saves time, and allows data engineers to focus on more complex tasks. This app aims to solve the problem of manual dataset creation by providing an easy-to-use interface where users can specify the format, fields, and number of entries for the dataset.

**Solution**

Our Streamlit-based Dataset Generator app leverages Lyzr Automata to automate the creation of datasets. Users can input their dataset format (CSV or Table), define the fields they need, and specify the number of entries. The app then generates a dataset that meets these criteria using an AI model.

**Why Lyzr Automata?**

Lyzr Automata is used for its advanced capabilities in creating and managing AI agents and workflows, particularly in the context of Generative AI. Here are some key reasons why Lyzr Automata is beneficial:

- **Ease of Integration**: Lyzr Automata can be easily integrated into existing systems and workflows, making it convenient to implement AI-driven solutions without a complete overhaul of current processes.
- **Automation**: It helps automate repetitive tasks, reducing the manual effort required and increasing efficiency. This is particularly useful in tasks such as data preprocessing, content generation, and workflow management.
- **Customization**: Lyzr Automata offers a high degree of customization, allowing users to tailor AI agents to specific needs and requirements. This flexibility ensures that the solutions are aligned with business goals and objectives.
- **Scalability**: The platform is designed to scale seamlessly, accommodating increasing workloads and expanding as the business grows. This makes it suitable for both small-scale projects and large enterprise applications.
- **Performance Optimization**: Lyzr Automata includes tools for monitoring and optimizing the performance of AI agents, ensuring they operate efficiently and effectively.
- **Support for Generative AI**: It is particularly strong in supporting generative AI applications, such as creating text, images, and other content types, making it a valuable tool for businesses looking to leverage generative AI capabilities.

**How the App Works**

- **User Interface**: The app uses Streamlit for its user-friendly interface. Users can enter their OpenAI API key, specify the format (CSV or Table), define the fields, and set the number of entries for the dataset.
- **Lyzr Automata Workflow**: The app defines a workflow using Lyzr Automata, where an agent powered by OpenAI’s GPT-4 generates the dataset based on user inputs.
- **Dataset Generation**: The specified format, fields, and number of entries are used to create a realistic and diverse dataset. The generated dataset is displayed within the app.
**Setting Up the Environment**

**Imports:**

Install and import the necessary libraries: streamlit and the lyzr_automata modules.

```
pip install lyzr_automata streamlit
```

```
import streamlit as st
from lyzr_automata.ai_models.openai import OpenAIModel
from lyzr_automata import Agent, Task
from lyzr_automata.tasks.task_literals import InputType, OutputType  # needed for the Task below
from lyzr_automata.pipelines.linear_sync_pipeline import LinearSyncPipeline
from PIL import Image
```

**Sidebar Configuration**

```
api = st.sidebar.text_input("Enter your OPENAI API KEY here", type="password")

if api:
    openai_model = OpenAIModel(
        api_key=api,
        parameters={
            "model": "gpt-4-turbo-preview",
            "temperature": 0.2,
            "max_tokens": 1500,
        },
    )
else:
    st.sidebar.error("Please enter your OPENAI API KEY")
```

- `if api:`: Checks if an API key is entered.
- `openai_model = OpenAIModel()`: If a key is entered, creates an OpenAIModel object with the provided API key and model parameters (gpt-4-turbo-preview, temperature, max_tokens).
- `else`: If no key is entered, displays an error message in the sidebar.

**dataset_generation Function:**

```
def dataset_generation(format, fields, entries):
    dataset_agent = Agent(
        prompt_persona="You are a Data Engineer with over 10 years of experience. You care about data integrity and believe in the importance of realistic datasets for meaningful analysis.",
        role="Data Engineer",
    )

    dataset = Task(
        name="Dataset generation",
        output_type=OutputType.TEXT,
        input_type=InputType.TEXT,
        model=openai_model,
        agent=dataset_agent,
        log_output=True,
        instructions=f"""
        Please generate a dataset in {format} format with the following fields: {fields}
        The dataset should contain {entries} entries. Each entry should be unique and provide a diverse representation across all fields.
        Ensure the entries are realistic and diverse.
        Accuracy is important, so ensure that {fields} are plausible and realistic.
        If using fictional data, maintain consistency and coherence within the dataset.
        Please provide the generated dataset in the specified format.
        [!Important] Only generate the dataset, nothing apart from it.
        """,
    )

    output = LinearSyncPipeline(
        name="Dataset Generation",
        completion_message="Dataset Generated!",
        tasks=[dataset],
    ).run()

    return output[0]['task_output']
```

- `def dataset_generation(format, fields, entries):`: Defines a function named dataset_generation that takes three arguments: format (CSV or Table), fields (comma-separated list of dataset fields), and entries (number of entries to generate).
- `dataset_agent = Agent()`: Creates an Agent object defining the prompt persona and role ("Data Engineer").
- `dataset = Task()`: Creates a Task object specifying details about the dataset generation task:
  - `name`: Sets the task name to "Dataset generation".
  - `output_type`: Sets the expected output type as text.
  - `input_type`: Sets the input type for the task as text.
  - `model`: Assigns the openai_model object (if an API key is provided).
  - `agent`: Assigns the dataset_agent object.
  - `log_output`: Sets logging for the task output to True.
  - `instructions`: A multi-line string containing instructions for the AI model, specifying the desired format, fields, number of entries, data characteristics (unique, diverse, realistic), and output format.
- `output = LinearSyncPipeline()`: Creates a LinearSyncPipeline object named "Dataset Generation" with a completion message and assigns the dataset task to it.
- `return output[0]['task_output']`: Runs the pipeline, retrieves the task output from the first element (index 0) of the results, and returns it.

**User Input:**

```
specify_format = st.selectbox("Enter format", ["CSV", "Table"], placeholder="CSV")
specify_fields = st.text_area("Enter Fields", placeholder="Name: Customer Name, Age: Customer Age", height=300)
no_entries = st.number_input("Enter number of entries", placeholder="10")
```

- `specify_format = st.selectbox()`: Creates a dropdown menu named "Enter format" with options "CSV" and "Table" for users to select the desired dataset format.
- `specify_fields = st.text_area()`: Creates a multi-line text area named "Enter Fields" where users can input a comma-separated list of dataset fields (e.g., Name: Customer Name, Age: Customer Age).
- `no_entries = st.number_input()`: Creates a number input field named "Enter number of entries" where users can specify the desired number of entries for the generated dataset.

**Generate Button and Output Display:**

```
if st.button("Generate"):
    solution = dataset_generation(specify_format, specify_fields, no_entries)
    st.markdown(solution)
```

- `if st.button("Generate"):`: Creates a button labeled "Generate". If the button is clicked, the following code block executes.
- `solution = dataset_generation()`: Calls the dataset_generation function with the user-selected format, entered fields, and number of entries.
- `st.markdown(solution)`: Displays the generated dataset output as markdown-formatted text in the app.

**Running the App**

Finally, run the app using the following command in your terminal:

```
streamlit run app.py
```

Try it now: https://github.com/harshit-lyzr/dataset_generator

For more information, explore the website: [Lyzr](https://www.lyzr.ai/)

Contribute to our project: https://github.com/LyzrCore/lyzr-automata
harshitlyzr
1,907,242
Looking for Pre-Trained ML/AI Model for Automatic Hotspot Placement in 360-Degree House Images
Hi Everyone, I’m working on creating virtual tours from 360-degree house images and need a...
0
2024-07-01T06:30:11
https://dev.to/aayush_singla_bbda9441ea0/looking-for-pre-trained-mlai-model-for-automatic-hotspot-placement-in-360-degree-house-images-1660
machinelearning, ai
Hi Everyone, I’m working on creating virtual tours from 360-degree house images and need a pre-trained machine learning or AI model that can automatically detect and place navigation hotspots. If anyone knows of any pre-trained models that can perform this task, please let me know. Thanks
aayush_singla_bbda9441ea0
1,906,757
Adding Payment to Django app
So, you've decided to build your eCommerce or SaaS platform. Congratulations! But now comes the big...
0
2024-07-01T06:29:36
https://dev.to/paul_freeman/adding-payment-to-django-app-4cc9
django, stripe, payment, webdev
So, you've decided to build your eCommerce or SaaS platform. Congratulations! But now comes the big question: how will you collect payments from your customers? Having a solid and secure payment system is a must for any online business. This is where adding a [payment gateway](https://templates.foxcraft.tech/blog/b/what-are-payment-gateways) to your Django app comes in.

Many payment gateways, such as [Stripe](https://stripe.com) and [PayPal](https://paypal.com), have made it easier to collect payments. All you need to do is integrate their API into your application, and they handle the rest—securely collecting payments, ensuring compliance, and more.

In this post, we'll walk you through how to set up a payment system in your Django application using Stripe, making sure your transactions are smooth and secure for your customers, and straightforward for you.

First, make sure to create a [Stripe account](https://stripe.com); we'll only need a test account for this tutorial.

> **NOTE**
> Use only a Stripe test account for testing purposes. Don't send money to yourself in production, as this violates Stripe policy.

## Overview of Stripe and Django

1. First, we make a call to the Stripe API and redirect the customer to a Stripe secure form to collect payment.
2. If the charge was successful, the Stripe form will redirect to a success page on our website, otherwise to a failure page.
3. We listen to webhook events to confirm the transaction.
## Installing stripe in Django

We'll focus on building a small SaaS payment subscription; even if you are building something else, the steps below remain the same.

You can check the implementation in the [Django Saas Boilerplate](https://github.com/PaulleDemon/Django-SAAS-Boilerplate)

Install the dependency:

```
pip install stripe
```

Obtain the Stripe test secret key from the dashboard by clicking on Developers:

![Stripe key](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wxomt37n7wbwawnvfpxk.png)

Now, under Webhooks, click on "Test in a local environment"; you'll find the Stripe webhook key as well:

![Webhook key](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o0oytalrelejqwpqxsic.png)

Now add the keys to settings.py:

```py
INSTALLED_APPS = [
    ...
]

STRIPE_API_KEY = "sk_test_"      # your test secret key
STRIPE_WEBHOOK_KEY = "whsec_"    # your webhook signing secret
```

Let's create an app and call it transaction, where everything related to the payments is added:

```
python manage.py startapp transaction
```

Add a model called Plan (the subscription plan) inside transaction/models.py:

```py
from django.db import models


class Plan(models.Model):
    name = models.CharField(max_length=100)  # max_length is required; 100 is an arbitrary choice
    description = models.CharField(max_length=150)  # small description of the plan
    price = models.DecimalField(max_digits=9, decimal_places=2, default="0.0")
    datetime = models.DateTimeField(auto_now=True)  # created datetime

    def get_total_cents(self):
        # converts dollars to cents, e.g. 19.99 -> 1999
        integer = int(self.price)
        decimal = int((self.price % 1) * 100)
        return (integer * 100) + decimal
```

Now lets add a `Transaction` model that records all the transactions initiated, their status, and more.
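Before moving on, a quick sanity check of the dollars-to-cents conversion above: the same arithmetic works on plain `Decimal` values, independent of Django (the function name here is ours, a standalone sketch of `Plan.get_total_cents`):

```python
from decimal import Decimal

def dollars_to_cents(price: Decimal) -> int:
    # Same arithmetic as Plan.get_total_cents: split the price into
    # whole dollars and fractional cents, then combine into an integer
    # cent amount, which is what Stripe expects for USD.
    integer = int(price)
    decimal = int((price % 1) * 100)
    return (integer * 100) + decimal

print(dollars_to_cents(Decimal("19.99")))  # 1999
print(dollars_to_cents(Decimal("5.00")))   # 500
```

Using `Decimal` (not `float`) keeps the cents exact; with floats, `19.99 % 1` would already carry rounding error.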
```py class SUBSCRIPTION_STATUS(models.IntegerChoices): INACTIVE = (0, 'inactive') ACTIVE = (1, 'active') CANCELLED = (2, 'cancelled') class PAYMENT_STATUS(models.IntegerChoices): UNPAID = (0, 'unpaid') PAID = (1, 'paid') class Transaction(BasePayment): user = models.ForeignKey(User, null=True, blank=True, on_delete=models.SET_NULL) # foreign key to user model plan = models.ForeignKey(Plan, null=True, blank=True, on_delete=models.SET_NULL) # foreign key to subscription plan total = models.DecimalField(max_digits=9, decimal_places=2, default="0.0") status = models.PositiveSmallIntegerField(choices=PAYMENT_STATUS.choices, default=PAYMENT_STATUS.UNPAID) created = models.DateTimeField(auto_now_add=True) modified = models.DateTimeField(auto_now=True) transaction_id = models.CharField(max_length=255, blank=True) subscription_id = models.CharField(max_length=255, null=True, blank=True) # creating stripe subscription customer_id = models.CharField(max_length=255, null=True, blank=True) # for creating stripe subscription subscription_status = models.PositiveSmallIntegerField(choices=SUBSCRIPTION_STATUS.choices, default=SUBSCRIPTION_STATUS.INACTIVE) ``` Now go to view and lets start by listing out plan/product ```py def pricing(request): plans = Plan.objects.all() return render(request, "payment/pricing.html", { 'plans': plans }) def payment_success(request): return render(request, "payment/success.html") def payment_failed(request): return render(request, "payment/failure.html") ``` Now the payment/pricing.html ```html {% extends "base.html" %} {% block title %}Pricing{% endblock title %} {% block description %}Pricing for the SAAS{% endblock description %} {% block content %} <div> <h1>Plans and pricing</h1> <div> This is a sample pricing, the purchase won't be made. 
    </div>
</div>

<section>
    {% for plan in plans %}
        <form action="{% url 'create-payment' %}" method="POST">
            {% csrf_token %}
            <div>
                <h2>{{ plan.name }}</h2>
                <h3>$ {{ plan.price|stringformat:'d' }}</h3>
                <input type="hidden" name="plan" value="{{ plan.id }}">
                <button type="submit">
                    Get started
                </button>
            </div>
        </form>
    {% endfor %}
</section>
{% endblock content %}
```

Now add a success page and a failure page, so that a successful transaction can redirect to the success page.

success.html:

```html
{% extends "base.html" %}
{% load static %}

{% block content %}
<div class="">
    <div class="">
        <i class="bi bi-check-circle tw-text-9xl tw-text-green-600"></i>
        <div class="">Success</div>
    </div>
</div>
{% endblock content %}
```

Similarly, the failure.html page:

```html
{% extends "base.html" %}
{% load static %}

{% block content %}
<div class="">
    <div class="">
        <i class="bi bi-x-circle tw-text-9xl tw-text-red-600"></i>
        <div class="tw-text-3xl">Payment failed</div>
    </div>
</div>
{% endblock content %}
```

Now let's create the checkout view. Go back to views.py and add the following view.
```py
import stripe

from django.conf import settings
from django.shortcuts import redirect, render
from django.urls import reverse
from django.contrib.auth.decorators import login_required
from django.views.decorators.http import require_http_methods

from .models import Plan, Transaction

stripe.api_key = settings.STRIPE_API_KEY


@login_required
@require_http_methods(['POST'])
def create_payment(request):

    plan = request.POST.get("plan")

    try:
        plan = Plan.objects.get(id=int(plan))

    except (Plan.DoesNotExist, ValueError):
        return render(request, "404.html", status=404)

    amount = plan.price

    payment = Transaction.objects.create(
                    total=amount,
                    user=request.user,
                    plan=plan
                )

    pay_data = {
        'price_data': {
            'product_data': {
                'name': plan.name,
                'description': plan.description or '',
            },
            'unit_amount': plan.get_total_cents(),  # the amount in the smallest currency unit
            'currency': 'usd',  # set this to your currency
            'recurring': {'interval': 'month'}  # refer: https://docs.stripe.com/api/checkout/sessions/create?lang=cli#create_checkout_session-line_items-price_data-recurring
        },
        'quantity': 1
    }

    checkout_session = stripe.checkout.Session.create(
                            line_items=[pay_data],
                            mode='subscription',
                            success_url=request.build_absolute_uri(reverse('payment-success')),
                            cancel_url=request.build_absolute_uri(reverse('payment-failed')),
                            customer=None,
                            client_reference_id=request.user.id,
                            customer_email=request.user.email,
                            metadata={
                                'customer': request.user.id,
                                'payment_id': payment.id
                            }
                        )

    payment.transaction_id = checkout_session.id
    payment.save()

    return redirect(checkout_session.url)  # redirect the user to stripe's secure checkout form
```

Now, a customer might take time to fill in their details and submit the form to Stripe. Once it's submitted, Stripe sends events via a [webhook](https://www.redhat.com/en/topics/automation/what-is-a-webhook). A webhook is a simple view that accepts a POST request and returns a 200 OK status.
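For context, here is what the signature check on those incoming events involves. When the webhook view calls `stripe.Webhook.construct_event`, the library verifies the `Stripe-Signature` header using Stripe's documented v1 scheme: an HMAC-SHA256 over `"<timestamp>.<raw body>"` keyed with the webhook secret. The sketch below is my own illustration of that scheme (with a placeholder secret), not code from the stripe library:

```python
import hashlib
import hmac

def sign_payload(payload: bytes, secret: str, timestamp: int) -> str:
    # v1 signature: HMAC-SHA256 over "<timestamp>.<payload>" keyed with the secret
    signed = f"{timestamp}.".encode() + payload
    return hmac.new(secret.encode(), signed, hashlib.sha256).hexdigest()

def verify(payload: bytes, sig_header: str, secret: str) -> bool:
    # The header looks like "t=<unix timestamp>,v1=<hex signature>"
    parts = dict(p.split("=", 1) for p in sig_header.split(","))
    expected = sign_payload(payload, secret, int(parts["t"]))
    return hmac.compare_digest(expected, parts["v1"])

secret = "whsec_example"  # placeholder, not a real key
body = b'{"type": "checkout.session.completed"}'
header = f"t=1700000000,v1={sign_payload(body, secret, 1700000000)}"
print(verify(body, header, secret))  # True
```

In production, let the stripe library do this for you; it additionally rejects stale timestamps to guard against replay attacks.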
So let's add the webhook to views.py:

```py
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_POST

from .models import PAYMENT_STATUS, SUBSCRIPTION_STATUS


@require_POST
@csrf_exempt
def stripe_webhook(request):

    payload = request.body
    sig_header = request.META['HTTP_STRIPE_SIGNATURE']
    event = None

    try:
        event = stripe.Webhook.construct_event(
                    payload, sig_header, settings.STRIPE_WEBHOOK_KEY
                )

    except ValueError as e:
        # Invalid payload
        return JsonResponse({'error': str(e)}, status=400)

    except stripe.error.SignatureVerificationError as e:
        # Invalid signature
        return JsonResponse({'error': str(e)}, status=400)

    # print("Event: ", event)
    data = event['data']['object']

    # Handle the event
    if event['type'] == 'checkout.session.completed':
        subscription = Transaction.objects.get(transaction_id=data['id'])
        subscription.status = PAYMENT_STATUS.PAID
        subscription.subscription_status = SUBSCRIPTION_STATUS.ACTIVE
        subscription.subscription_id = data['subscription']
        subscription.customer_id = data['customer']
        subscription.save()

    elif event['type'] == 'checkout.session.expired':
        subscription = Transaction.objects.get(transaction_id=data['id'])
        subscription.status = PAYMENT_STATUS.UNPAID
        subscription.save()

    elif event['type'] == 'customer.subscription.deleted':
        # Subscription deleted; here the event object is the subscription itself
        subscription = Transaction.objects.get(subscription_id=data['id'])
        subscription.subscription_status = SUBSCRIPTION_STATUS.CANCELLED
        subscription.save()

    elif event['type'] == "charge.failed":
        pass

    elif event['type'] == 'invoice.payment_succeeded':
        # Payment succeeded
        pass

    elif event['type'] == 'invoice.payment_failed':
        # Payment failed
        pass

    elif event['type'] == 'customer.subscription.trial_will_end':
        # print('Subscription trial will end')
        pass

    elif event['type'] == 'customer.subscription.created':
        # print('Subscription created %s', event.id)
        pass

    elif event['type'] == 'customer.subscription.updated':
        # print('Subscription updated %s', event.id)
        pass

    return JsonResponse({'status': 'success'}, status=200)
```

You can read more about webhook events on Stripe's page: [Stripe events](https://docs.stripe.com/api/events)

Now add your paths to your urls.py:

```py
from django.urls import path

from .views import (create_payment, pricing, stripe_webhook,
                    payment_failed, payment_success)

urlpatterns = [
    path('pricing/', pricing, name='pricing'),
    path('create-payment/', create_payment, name='create-payment'),
    path('payment/failed/', payment_failed, name='payment-failed'),
    path('payment/success/', payment_success, name='payment-success'),
    path('stripe/webhook/', stripe_webhook, name='webhook'),
]
```

## Testing stripe webhook locally

To test the stripe webhook locally, you'll need to install the [stripe cli](https://docs.stripe.com/stripe-cli)

Once installed, log in via:

```cmd
stripe login
```

Now forward the events to localhost:

```
stripe listen --forward-to localhost:8000/stripe/webhook/
```

That's it! Now you can listen to webhook events locally.

You can check out the source code at: https://github.com/PaulleDemon/Django-SAAS-Boilerplate

If you have questions, drop a comment. Found it helpful? Share this article.
paul_freeman
1,907,241
Building an HTML to ReactJS Converter with Streamlit and Lyzr Automata
ReactJS has revolutionized front-end development with its component-based architecture and efficient...
0
2024-07-01T06:28:48
https://dev.to/harshitlyzr/building-an-html-to-reactjs-converter-with-streamlit-and-lyzr-automata-134e
ReactJS has revolutionized front-end development with its component-based architecture and efficient state management. However, converting existing HTML, CSS, and JavaScript code into React components can be a daunting task. This blog post will guide you through building an HTML to ReactJS converter using Streamlit and Lyzr Automata. By leveraging the power of generative AI models, this application streamlines the conversion process, making it more efficient and user-friendly.

**Problem:**

Developers often face the challenge of migrating legacy web projects from HTML, CSS, and JavaScript to a modern ReactJS framework. This manual conversion process involves breaking down the HTML structure into reusable React components, managing state effectively, and ensuring the overall maintainability and performance of the codebase. The complexity of this task can lead to increased development time, potential for bugs, and a steep learning curve for developers not proficient in ReactJS.

**Objective:**

To address this challenge, we propose developing an AI-powered HTML to ReactJS converter application. This tool will leverage the capabilities of generative AI models to automate the conversion process, providing developers with a quick, efficient, and accurate way to transform their HTML, CSS, and JavaScript code into ReactJS components.

**Scope:**

The application will be built using Streamlit for the user interface and Lyzr Automata for integrating AI models. Users will input their HTML, CSS, and JavaScript code, and the application will output well-structured, maintainable ReactJS code. The tool will ensure the following:

- Component Structure: The HTML design will be broken down into reusable React components with a clear hierarchy.
- State Management: Identify components that require state management and implement appropriate solutions using React's built-in hooks or external libraries.
- Props and Data Flow: Clearly define data flow between components, specifying the necessary props and their types.

**Setting Up the Environment**

**Imports:**

Import the necessary libraries: streamlit and the required classes from lyzr_automata.

```
pip install lyzr_automata streamlit
```

```
import streamlit as st
from lyzr_automata.ai_models.openai import OpenAIModel
from lyzr_automata import Agent, Task
from lyzr_automata.tasks.task_literals import InputType, OutputType  # provides the OutputType/InputType enums used below
from lyzr_automata.pipelines.linear_sync_pipeline import LinearSyncPipeline
```

**Sidebar Configuration**

```
api = st.sidebar.text_input("Enter your OPENAI API KEY Here", type="password")

if api:
    openai_model = OpenAIModel(
        api_key=api,
        parameters={
            "model": "gpt-4-turbo-preview",
            "temperature": 0.2,
            "max_tokens": 1500,
        },
    )
else:
    st.sidebar.error("Please Enter Your OPENAI API KEY")
```

- if api:: Checks if an API key is entered.
- openai_model = OpenAIModel(): If a key is entered, creates an OpenAIModel object with the provided API key and model parameters (gpt-4-turbo-preview, temperature, max_tokens).
- else: If no key is entered, displays an error message in the sidebar.

**reactjs_conversion Function:**

```
def reactjs_conversion(html, css, javascript):
    react_agent = Agent(
        prompt_persona="You are a Frontend Engineer with over 10 years of experience.",
        role="Frontend Engineer",
    )

    react_task = Task(
        name="Dataset generation",
        output_type=OutputType.TEXT,
        input_type=InputType.TEXT,
        model=openai_model,
        agent=react_agent,
        log_output=True,
        instructions=f"""
        We need to convert an existing HTML design with css and js into a ReactJS application.
        The conversion should result in a well-structured, maintainable, and performant React codebase.
        Follow Below Instructions:
        **Component Structure**: Break down the HTML design into reusable React components.
        Define a clear component hierarchy, ensuring components are logically organized and nested.
        **State Management**: Identify which components will need to manage state.
        Decide whether to use React's built-in state management (useState, useReducer) or an external library (Redux, MobX).
        **Props and Data Flow**: Determine how data will flow between components.
        Clearly define the props each component will require and their types.
        Only give ReactJS Code nothing apart from it.
        HTML: {html}
        CSS: {css}
        JAVASCRIPT: {javascript}
        """,
    )

    output = LinearSyncPipeline(
        name="Dataset Generation",
        completion_message="Dataset Generated!",
        tasks=[
            react_task
        ],
    ).run()

    return output[0]['task_output']
```

Defines a function reactjs_conversion that takes HTML, CSS, and JavaScript code as input and returns the converted ReactJS code.

- Creates a react_agent object defining the prompt persona as a Frontend Engineer for better task understanding.
- Creates a react_task object specifying:
  - Task name: "Dataset generation"
  - Output type: Text (the generated ReactJS code)
  - Input type: Text (the provided HTML, CSS, and JS code)
  - Model: the openai_model object
  - Agent: the react_agent object
  - Instructions: a detailed prompt explaining the conversion task, emphasizing component structure, state management, props and data flow, and requesting only ReactJS code as output.
- Creates a LinearSyncPipeline object to execute the task in a linear sequence.
- Runs the pipeline and retrieves the task output, which is the generated ReactJS code.
- Returns the retrieved output.

**User Code Input:**

```
col1, col2, col3 = st.columns(3)
with col1:
    html5 = st.text_area("Enter HTML code", height=300)
with col2:
    css3 = st.text_area("Enter CSS Code", height=300)
with col3:
    js = st.text_area("Enter JS code", height=300)
```

Creates three columns using st.columns(), each with a text area (st.text_area()) for HTML, CSS, and JavaScript code input.

**Generate Button and Output Display:**

```
if st.button("Convert"):
    solution = reactjs_conversion(html5, css3, js)
    st.markdown(solution)
```

Creates a button labeled "Convert" using st.button().
On clicking the button:

- Calls the reactjs_conversion function with the entered HTML, CSS, and JS code.
- Displays the converted ReactJS code using st.markdown().

**Running the App**

Finally, run the app using the following command in your terminal:

```
streamlit run app.py
```

try it now: https://github.com/harshit-lyzr/reactjs_convertor

For more information explore the website: [Lyzr](https://www.lyzr.ai/)

Contribute to Our Project: https://github.com/LyzrCore/lyzr-automata
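One practical note on displaying the result: even though the prompt asks for "Only give ReactJS Code", models often wrap their answer in markdown fences. A small hypothetical post-processing helper (not part of the app above) can strip them before rendering, for example with `st.code` instead of `st.markdown`:

```python
FENCE = "`" * 3  # avoids a literal triple-backtick inside this snippet

def strip_fences(text: str) -> str:
    # Remove a leading fence line (e.g. a "jsx"-tagged fence) and a trailing closing fence.
    lines = text.strip().splitlines()
    if lines and lines[0].startswith(FENCE):
        lines = lines[1:]
    if lines and lines[-1].startswith(FENCE):
        lines = lines[:-1]
    return "\n".join(lines)

raw = FENCE + "jsx\nconst App = () => <div>Hello</div>;\n" + FENCE
print(strip_fences(raw))  # const App = () => <div>Hello</div>;
```

Inputs without fences pass through unchanged, so it is safe to call unconditionally on the model's output.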
harshitlyzr
1,907,239
JSON Escape and Unescape
Now-a-days the data needs to be shared across different systems and platforms. One of the most...
0
2024-07-01T06:26:52
https://keploy.io/blog/community/json-escape-and-unescape
json, webdev, opensource, news
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0kteor63xoc5xcf337ea.png)

Nowadays, data needs to be shared across different systems and platforms. One of the most common formats for this data exchange is JSON (JavaScript Object Notation). Understanding how to properly handle special characters in JSON is crucial for ensuring data integrity. In this blog, we'll explore JSON escape and unescape and explain their importance. So let's get started!

**What is JSON?**

JSON (JavaScript Object Notation) is a lightweight data-interchange format that is easy for humans to read and write and easy for machines to parse and generate. JSON is often used to send data between servers and web applications.

Here's a simple example of JSON:

```
{
  "name": "John Doe",
  "age": 25,
  "isStudent": false,
  "hobbies": ["reading", "sports", "music"]
}
```

This JSON object describes a person with a name, age, student status, and a list of hobbies.

**What is JSON Escape?**

JSON escape refers to the process of converting certain characters in a JSON string to their escaped representations. This ensures the JSON data remains valid and can be safely transmitted and interpreted by different systems.

**Why Do We Need to Escape Characters?**

Some characters have special meanings in JSON. For example, double quotes are used to define string values. If you need to include a double quote within a string, you must escape it to prevent it from being interpreted as the end of the string.

**Common Escape Sequences in JSON**

Here are some common escape sequences in JSON:

- \" : Escaped double quote
- \\ : Escaped backslash
- \/ : Escaped forward slash (optional but often used)
- \b : Backspace
- \f : Form feed
- \n : Newline
- \r : Carriage return
- \t : Horizontal tab
- \uXXXX : Unicode escape sequence for special characters

**Example of JSON Escape**

Imagine you have the following string that you want to include in a JSON object:

He said, "Hello, World!" and then left.
To include this in a JSON object, you need to escape the double quotes inside the string:

```
{
  "message": "He said, \"Hello, World!\" and then left."
}
```

**How to Escape JSON**

Most programming languages provide libraries to handle JSON escaping. Here's how you can do it in Python and JavaScript.

Python

In Python, the json module handles the escaping for you:

```
import json

data = {
    "message": 'He said, "Hello, World!" and then left.'
}

escaped_json = json.dumps(data)
print(escaped_json)
# Output: {"message": "He said, \"Hello, World!\" and then left."}
```

JavaScript

In JavaScript, the JSON.stringify method can be used:

```
let data = {
    message: 'He said, "Hello, World!" and then left.'
};

let escapedJson = JSON.stringify(data);
console.log(escapedJson);
// Output: {"message":"He said, \"Hello, World!\" and then left."}
```

**What is JSON Unescape?**

JSON unescape refers to the process of converting escaped characters in a JSON string back to their original form. This is important for correctly interpreting and displaying the data.

**Example of JSON Unescape**

Consider the following escaped JSON string:

```
{
  "message": "Hello, \"world\"!\nThis is a backslash: \\"
}
```

After unescaping, the value of "message" is the following two-line string:

```
Hello, "world"!
This is a backslash: \
```

**How to Unescape JSON**

Just like escaping, most programming languages provide functions to unescape JSON. Here's how you can do it in Python and JavaScript.
Python

In Python, you can use the json module to unescape characters:

```
import json

escaped_json = '{"message": "Hello, \\"world\\"!\\nThis is a backslash: \\\\"}'
unescaped_json = json.loads(escaped_json)
print(unescaped_json)
```

JavaScript

In JavaScript, you can use the JSON.parse method:

```
let escapedJson = '{"message": "Hello, \\"world\\"!\\nThis is a backslash: \\\\"}';
let unescapedJson = JSON.parse(escapedJson);
console.log(unescapedJson);
```

**Conclusion**

JSON escape and unescape are processes that ensure JSON data is correctly interpreted and transmitted. Escaping converts special characters into their escaped forms to prevent parsing errors, while unescaping converts them back to their original forms for accurate data representation.

**FAQ's**

**What is JSON?**

JSON (JavaScript Object Notation) is a lightweight data-interchange format that is easy for humans to read and write and easy for machines to parse and generate. It is often used to send data between servers and web applications.

**What does JSON escape mean?**

JSON escape refers to the process of converting certain characters in a JSON string to their escaped representations. This ensures that the JSON data remains valid and can be safely transmitted and interpreted by different systems.

**Why do we need to escape characters in JSON?**

Some characters have special meanings in JSON. For instance, double quotes are used to define string values. If you need to include a double quote within a string, you must escape it to prevent it from being interpreted as the end of the string.

**What does JSON unescape mean?**

JSON unescape refers to the process of converting escaped characters in a JSON string back to their original form. This is important for correctly interpreting and displaying the data.

**Why is understanding JSON escape and unescape important?**

Understanding JSON escape and unescape is essential for working with JSON data.
Escaping ensures that special characters do not cause parsing errors, while unescaping ensures that data is correctly interpreted and displayed. This is crucial for maintaining data integrity in web development and other applications where JSON is used.
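To tie the two halves together: escaping and unescaping are inverse operations, so a dumps/loads round trip should always give back the original data. A quick self-contained check in Python:

```python
import json

# json.dumps escapes, json.loads unescapes; the round trip is lossless.
original = {"message": 'He said, "Hi!"\nTab:\there', "city": "Zürich"}
escaped = json.dumps(original)   # non-ASCII becomes \uXXXX escapes by default
restored = json.loads(escaped)

print("\\u00fc" in escaped)  # True: the ü was escaped as \u00fc
print(restored == original)  # True
```

Note that `json.dumps` uses `ensure_ascii=True` by default, which is why the non-ASCII character comes out as a `\uXXXX` sequence; pass `ensure_ascii=False` if you want the character kept literally.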
keploy
1,907,238
System Integration Testing: A Complete Guide with Challenges and Best Practices
System Integration Testing (SIT) is a crucial aspect of the software testing life cycle, where the...
0
2024-07-01T06:26:45
https://www.cioinsiderindia.com/news/system-integration-testing-a-complete-guide-with-challenges-and-best-practices-nwid-6007.html
system, integration, testing
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/59hgtfu17c4khtnnoea6.jpg)

System Integration Testing (SIT) is a crucial aspect of the software testing life cycle, where the overall system is tested to ensure seamless interaction and functionality among disparate parts. The development team conducts SIT to ensure that the different components are integrated and work together correctly as a system. SIT can involve the testing of software, systems, or networks.

One simple example is a NetSuite-Shopify integration, where the backend is Oracle NetSuite and the front end is Shopify. These systems are integrated so that NetSuite keeps track of inventory and Shopify displays the exact inventory to customers online.

This comprehensive guide offers an overview of integrated system testing, including its purpose, process, challenges and best practices.

**What is System Integration Testing**?

System integration testing (SIT) is a type of software testing carried out to perform the overall testing of a system consisting of various integrated components. SIT is a QA (Quality Assurance) process ensuring the compatibility of two or more systems. It helps developers ensure that the integrated systems are working together correctly and that interactions between them are appropriate and safe. It involves testing of software, systems, or networks to check the system's performance, integrity, and compatibility.

**What are Different System Integration Testing Techniques**?

System integration testing, also referred to as incremental testing, is a process that uses dummy programs and drivers to test the interactions between components of a system. Here are some different system integration testing techniques:

**Bottom-Up Integration Testing**: This testing starts at the lowest-level modules in the tech stack's architecture, with control flowing towards the top of the hierarchy.
This testing is done during the early phase of development, making it possible to fix bugs straight away with minimal identification and troubleshooting time.

**Top Down Integration Testing**: In top-down SIT, the testing starts from the top module, with control flowing from the top to the bottom of the architecture.

**Sandwich Testing**: Sandwich testing, also known as hybrid integration testing, combines both top-down and bottom-up SIT. In this approach, testing is done in both directions: towards higher-level modules (upwards) and lower-level modules (downwards). The downside of sandwich testing is its complexity, since the process begins at the middle layer and combines two different integration testing approaches.

**Big Bang Integration Testing**: This is a non-incremental type of integrated system testing, performed when all the modules are assembled into a complete system. All the modules are integrated together and tested as a single unit, which suits smaller systems.

**Who Performs System Integration Testing**?

Some of the key stakeholders involved in performing SIT are as follows:

**Test Manager / Test Lead**

As part of the development team, they outline the scope, objectives, approach, and schedule for SIT, deciding who performs system integration testing based on their roles.

**Integration Testers**

A tester develops a detailed case study highlighting the progress of system integration testing and verifies whether the integrated components are functioning correctly as a software system. They also identify and log defect reports for developers, ensuring timely resolution.

**System Architects and Developers**

They collaborate with the testing team to understand various integration requirements and designs. They also provide the necessary support to set up the integrated testing environment.

**Business Analysts**

They also collaborate with the testers, ensuring the integrated system meets their business requirements.
They participate in various testing processes, reviewing and validating the system integration tests.

**Common Challenges in SIT**

System integration testing (SIT) involves testing a system that consists of multiple subsystem components, such as hardware, software, or hardware with embedded software. Some challenges of SIT include:

● **Managing Diverse Components**: Integrated system testing often involves a mix of new and legacy systems, custom code, and third-party applications. Ensuring compatibility and smooth data flow across these diverse elements can be a challenge.

● **Ensuring Comprehensive Test Coverage**: Creating test cases that cover all possible interactions and edge cases between integrated systems can be time-consuming and complex.

● **Handling Dependencies**: Integration testing often involves dealing with complex dependencies between systems. Delays or bugs in one system can cause cascading issues throughout the integration.

● **Replicating Real-World Scenarios**: Simulating real-world usage patterns and data volumes during testing can be difficult, potentially leading to integration issues that only surface in production.

● **Coordinating Testing Schedules**: Scheduling and coordinating testing efforts across multiple teams working on different systems can be a logistical challenge.

**Conclusion**

In conclusion, System Integration Testing is a necessary phase of software development. It helps the team ensure that all components of the system work together seamlessly and efficiently. Careful study and documentation of the tests ensure that the software works as intended, ultimately contributing to the success of the software development effort and end-user satisfaction. Opkey is a tool that can be used for seamless integrated system testing, supporting automation for web, mobile, desktop, and API tests all in one place.
rohitbhandari102
1,907,236
Streamline Your Code Documentation with Lyzr Code Comment Generator
In the world of software development, clear and concise code documentation is crucial. It helps in...
0
2024-07-01T06:23:59
https://dev.to/harshitlyzr/streamline-your-code-documentation-with-lyzr-code-comment-generator-2cbk
In the world of software development, clear and concise code documentation is crucial. It helps in maintaining code, onboarding new team members, and ensuring that codebases remain understandable over time. Introducing the Lyzr Code Comment Generator, an innovative application designed to leverage the power of Lyzr Automata and Streamlit to automatically generate informative comments for your code.

**What is Lyzr Code Comment Generator?**

The Lyzr Code Comment Generator is an advanced tool that uses AI to analyze your code and generate detailed comments. This app is perfect for developers who want to improve their code readability and maintainability without spending hours on documentation.

**Key Features**

- Automated Code Commenting: Automatically generate clear, concise, and informative comments for your code.
- Insightful Explanations: Provides insights into the functionality and purpose of each section of code.
- Best Practices: Promotes good coding practices by highlighting important features and techniques used in the code.
- User-Friendly Interface: Built using Streamlit, the app offers an intuitive interface for easy use.

**How It Works**

- Secure API Integration: Enter your OpenAI API key in the sidebar to access the GPT-4 Turbo model.
- Input Code: Paste your code snippet into the provided text area.
- Generate Comments: Click the 'Convert' button to generate comments for your code using Lyzr Automata.

**Setting Up the Environment**

**Imports:**

Import the necessary libraries: streamlit and the required classes from lyzr_automata.

```
pip install lyzr_automata streamlit
```

```
import streamlit as st
from lyzr_automata.ai_models.openai import OpenAIModel
from lyzr_automata import Agent, Task
from lyzr_automata.tasks.task_literals import InputType, OutputType  # provides the OutputType/InputType enums used below
from lyzr_automata.pipelines.linear_sync_pipeline import LinearSyncPipeline
```

**Sidebar Configuration**

We create a sidebar for user inputs, including an API key input for accessing the OpenAI GPT-4 model. This ensures that the API key remains secure.
```
api = st.sidebar.text_input("Enter your OPENAI API KEY Here", type="password")

if api:
    openai_model = OpenAIModel(
        api_key=api,
        parameters={
            "model": "gpt-4-turbo-preview",
            "temperature": 0.2,
            "max_tokens": 1500,
        },
    )
else:
    st.sidebar.error("Please Enter Your OPENAI API KEY")
```

**code_commenter Function:**

```
def code_commenter(code_snippet):
    code_comment_agent = Agent(
        prompt_persona="you are a seasoned software engineer with a wealth of experience in writing, reviewing, and improving code",
        role="Software Engineer",
    )

    code_comment_task = Task(
        name="Code Commenting Task",
        output_type=OutputType.TEXT,
        input_type=InputType.TEXT,
        model=openai_model,
        agent=code_comment_agent,
        log_output=True,
        instructions=f"""You are tasked with generating comments for a given piece of code.
        Your comments should be clear, concise, and informative, providing insight into the functionality and purpose of each section of code.
        You should strive to explain the logic behind the code, highlight any important features or techniques used, and offer suggestions for improvement if applicable.
        Your goal is to help readers understand the code more easily and to promote good coding practices through your comments.

        Code: {code_snippet}
        """,
    )

    output = LinearSyncPipeline(
        name="Generate Comment",
        completion_message="Comment Generated!",
        tasks=[
            code_comment_task
        ],
    ).run()

    return output[0]['task_output']
```

def code_commenter(code_snippet):: Defines a function named code_commenter that takes user-provided code as input.

code_comment_agent = Agent(...): Creates an Agent object defining the prompt persona and role for the AI model. Here, the persona is a "seasoned software engineer" with expertise in code review and improvement.

code_comment_task = Task(...): Creates a Task object specifying the code commenting task.
This includes details like:

- Task name: "Code Commenting Task"
- Output and Input types (text)
- The AI model to be used (openai_model)
- The defined code_comment_agent
- Instructions for the model:
  - Generate clear, concise, and informative comments for the code.
  - Explain the logic, highlight important features, and suggest improvements.
  - Promote good coding practices through comments.
  - The instructions also specify that the code will be provided as input (Code: {code_snippet}).

output = LinearSyncPipeline(...): Creates a LinearSyncPipeline object specifying:

- Pipeline name: "Generate Comment"
- Completion message: "Comment Generated!"
- List of tasks to be executed: only the code_comment_task in this case.

output.run(): Executes the pipeline, triggering the code commenting task using the defined model and instructions.

return output[0]['task_output']: Retrieves the output of the first task (the code commenting task) from the output list and returns it. This contains the generated code comments.

**User Code Input:**

```
code = st.text_area("Enter Code", height=300)
```

st.text_area creates a text area for users to enter their code snippet, with the height set to 300 pixels.

**Generate Button and Output Display:**

```
if st.button("Convert"):
    solution = code_commenter(code)
    st.markdown(solution)
```

Defines a button labeled "Convert". Clicking the button calls the code_commenter function with the user-provided code and displays the returned comments using markdown formatting.
harshitlyzr
1,907,235
Enhance Your Review Management with AI Review Aggregator and Summarizer
In today’s digital marketplace, customer reviews play a pivotal role in shaping consumer decisions...
0
2024-07-01T06:21:55
https://dev.to/harshitlyzr/enhance-your-review-management-with-ai-review-aggregator-and-summarizer-bo9
In today's digital marketplace, customer reviews play a pivotal role in shaping consumer decisions and brand reputation. Managing and summarizing these reviews effectively can provide invaluable insights for businesses. Introducing the AI Review Aggregator and Summarizer, an innovative application designed to harness the power of Lyzr Automata and Streamlit, making review analysis and summarization more efficient and insightful.

**What is AI Review Aggregator and Summarizer?**

The AI Review Aggregator and Summarizer is a cutting-edge app that uses advanced AI models to perform sentiment analysis, aggregate reviews, and provide comprehensive summaries. This tool is essential for businesses seeking to streamline their review management process and gain actionable insights from customer feedback.

**Key Features**

- Accurate Sentiment Analysis: Classify reviews into positive, negative, or neutral categories to understand customer sentiment.
- Thematic Aggregation: Identify common themes and key points from multiple reviews to provide a consolidated overview.
- Coherent Summaries: Generate concise summaries that capture the essence of individual feedback using both extractive and abstractive text summarization techniques.
- User-Friendly Display: Present reviews in an easy-to-read format, including key insights, pros and cons, star ratings, sentiment graphs, and keyword clouds.

**How It Works**

- Simple Setup: Enter your OpenAI API key in the sidebar for secure access to the GPT-4 Turbo model.
- Input Reviews: Paste your reviews into the provided text area.
- Analyze and Summarize: Click the 'Convert' button to perform analysis and generate summaries.
**Setting Up the Environment**

**Imports:**

Import the necessary libraries: streamlit and the required classes from lyzr_automata.

```
pip install lyzr_automata streamlit
```

```
import streamlit as st
from lyzr_automata.ai_models.openai import OpenAIModel
from lyzr_automata import Agent, Task
from lyzr_automata.tasks.task_literals import InputType, OutputType
from lyzr_automata.pipelines.linear_sync_pipeline import LinearSyncPipeline
from PIL import Image
```

Note that `InputType` and `OutputType` are imported explicitly because the task definition below uses them.

**Sidebar Configuration**

We create a sidebar for user inputs, including an API key input for accessing the OpenAI GPT-4 model. This ensures that the API key remains secure.

```
api = st.sidebar.text_input("Enter your OPENAI API KEY Here", type="password")

if api:
    openai_model = OpenAIModel(
        api_key=api,
        parameters={
            "model": "gpt-4-turbo-preview",
            "temperature": 0.2,
            "max_tokens": 1500,
        },
    )
else:
    st.sidebar.error("Please Enter Your OPENAI API KEY")
```

**review_analyst Function:**

```
def review_analyst(reviews):
    review_agent = Agent(
        prompt_persona="You are an Expert Review Aggregator and Summarizer",
        role="Review Aggregator and Summarizer",
    )

    review_analysis_task = Task(
        name="Review Analysis Task",
        output_type=OutputType.TEXT,
        input_type=InputType.TEXT,
        model=openai_model,
        agent=review_agent,
        log_output=True,
        instructions=f"""Perform sentiment analysis to classify reviews into positive, negative, or neutral categories. Aggregate reviews based on common themes, sentiments, and key points. Summarize multiple reviews into a single coherent review that captures the essence of individual feedback. Use techniques like text summarization (extractive and abstractive) to create concise summaries. Display consolidated reviews in a user-friendly format, highlighting key insights, pros and cons, and overall sentiment. Provide visual aids like star ratings, sentiment graphs, and keyword clouds to enhance readability.
Reviews: {reviews} ##Output Requirements: ##Movie Name: ##Overview: ##Summarized Reviews: ##Key Insights: ###Pros: ###Cons: ##Overall Sentiment: ###Star Ratings: ⭐(use this emoji for rating️) ###Sentiment Graph: ###Keyword Cloud: """, ) output = LinearSyncPipeline( name="review Analysis", completion_message="Review Analysis Done!", tasks=[ review_analysis_task ], ).run() return output[0]['task_output'] ``` def review_analyst(reviews):: Defines a function named review_analyst that takes user-provided reviews as input. review_agent = Agent(...): Creates an Agent object defining the prompt persona and role for the AI model. Here, the persona is an "Expert Review Aggregator and Summarizer". review_analysis_task = Task(...): Creates a Task object specifying the review analysis task. This includes details like: Task name: “Review Analysis Task” Output and Input types (text) The AI model to be used (openai_model) The defined review_agent Instructions for the model: Perform sentiment analysis (positive, negative, neutral) Aggregate reviews based on themes, sentiments, and key points. Summarize reviews into a single coherent summary capturing individual feedback. Use summarization techniques (extractive and abstractive) for concise summaries. Display consolidated reviews in a user-friendly format with key insights, pros, cons, and overall sentiment. Include visual aids like star ratings, sentiment graphs, and keyword clouds. The instructions also specify the desired output format with sections for movie name, overview, summarized reviews, key insights (pros and cons), overall sentiment (including star ratings, sentiment graph, and keyword cloud). output = LinearSyncPipeline(...): Creates a LinearSyncPipeline object specifying: Pipeline name: “review Analysis” Completion message: “Review Analysis Done!” List of tasks to be executed: only the review_analysis_task in this case. 
output.run(): Executes the pipeline, triggering the review analysis task using the defined model and instructions.

return output[0]['task_output']: Retrieves the output of the first task (the review analysis task) from the output list and returns it. This likely contains the analyzed and summarized reviews.

**User Input:**

```
review = st.text_area("Enter Your Reviews", height=300)
```

`review = st.text_area` creates a text area for users to enter their reviews. It sets the height to 300 pixels.

**Generate Button and Output Display:**

```
if st.button("Convert"):
    solution = review_analyst(review)
    st.markdown(solution)
```

Defines a button labeled "Convert". Clicking the button calls the review_analyst function with the user-provided reviews and displays the returned analysis (potentially including summarized reviews, key insights, and visualizations) using markdown formatting.

**Running the App**

Finally, run the app using the following command in your terminal:

```
streamlit run app.py
```

try it now: https://lyzr-review-analyst.streamlit.app/

For more information explore the website: [Lyzr](https://www.lyzr.ai/)
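The `output[0]['task_output']` indexing used above reflects the shape of the pipeline's result: a list with one entry per task, each carrying that task's output. A plain-Python sketch of that shape (no Lyzr dependency; `run_pipeline` and the dict keys are illustrative stand-ins):

```python
def run_pipeline(tasks):
    # Stand-in: run each "task" (a name/function pair) and collect one
    # result dict per task, mirroring the list shape described above.
    return [{"task_name": name, "task_output": fn()} for name, fn in tasks]

def review_analysis_task():
    return "##Overall Sentiment: positive"

output = run_pipeline([("Review Analysis Task", review_analysis_task)])
print(output[0]["task_output"])  # the single task's output
```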
harshitlyzr
1,906,521
All The Javascript Concepts You Need To Know Before Learning React (Part 1)
Newbie here don't bash :D Recently, I was sparked with the inspiration to improve my web...
0
2024-07-01T06:21:47
https://dev.to/up_min_sparcs/all-the-javascript-concepts-you-need-to-know-before-learning-react-part-1-3if5
webdev, javascript, beginners, react
> Newbie here don't bash :D

Recently, I was sparked with the inspiration to improve my web development skills by learning new technologies, including React. **React** is a popular open-source JavaScript library used for building user interfaces.

When exploring new technologies, adequate preparation will ensure smooth sailing and make life much easier. With this, I would like to share JavaScript concepts that will help you in your journey towards learning React. You should at least understand the fundamentals of JavaScript before reading this article :>

## 1. Arrow Functions

When creating functions in JavaScript, the keyword `function` is used, combined with the function name and, optionally, its arguments.

```js
function DoSomething(args) {
  // your code here
}
```

However, there is another way to create functions without using `function`.

```js
const DoSomething = (args) => {
  // your code here
}
```

> It is like creating a variable and assigning it to a function. You can choose `let` instead of `const` if you want to reassign your variable, but `const` is more standard.

At first, you may think that you are writing much longer code compared to the default way of creating a function. That was my first thought too, but arrow functions are actually helpful when dealing with callbacks, which are common in React. There are some instances where code is actually much more concise with an arrow function.

```js
// Using "function" keyword
const add = function(a, b) {
  return a + b;
};
```

```js
// Using arrow function
const add = (a, b) => a + b;
```

When using the arrow function without the curly braces `{}`, the function implicitly returns the expression directly following the arrow `=>`.

### Exporting Using Arrow Function

Another perk of using the arrow function is that it can export a function in a "concise" way.
```js
// Using "function" keyword
function add(a, b) {
  return a + b;
}

export default add;
```

You need the `export default` keyword to export the function so it can be used in another file.

```js
// Using arrow function
export const add = (a, b) => a + b;
```

You can drop `default` and use a named export instead; you would then import it with curly braces, e.g. `import { add } from "./math.js";`.

> Make sure you add `"type": "module"` in your `package.json`.

Exporting is important, especially in React, as you will export your components or functions to improve code readability.

```jsx
export const MyComponent = () => {
  return <div></div>;
};
```

> In case you are not aware, **React** uses **JSX** as its markup syntax, which lets you write HTML elements within JavaScript. Component names are capitalized by convention.

Above is an example of how to export a component in React.

### Anonymous Function

Arrow functions are helpful, especially when using anonymous functions.

```jsx
<button onClick={() => {
  console.log("Hello World");
}}>
</button>
```

In plain JavaScript, you would typically pass a named function to the click handler. In React, however, inline anonymous functions like the one above are commonly used. Anonymous functions, or lambda functions, are functions without function names.

## 2. Ternary Operator

You have probably already learned about the ternary operator `?`, but for those who don't know, the ternary operator is just a concise version of `if-else`. It is useful in React for conditionally rendering components and making your code more compact.

```js
let age = 25;
let message = age >= 18 ? "You are an adult" : "You are not yet an adult";
// Output: You are an adult
```

The `?` serves as a conditional trigger: if the condition before `?` evaluates to true, the expression before `:` is used; otherwise, the expression after `:` is used instead.

```jsx
const App = () => {
  let isLoggedIn = true;
  return isLoggedIn ?
    <p>User is logged in</p> :
    <p>User is not logged in</p>
  // The function returns the first p tag because isLoggedIn is true
}
```

In this React example, the function `App` will conditionally render a `p` tag depending on the value of `isLoggedIn`. The ternary operator `?` provides explicit control for both true and false conditions.

## 3. Short Circuit `&&` and `||`

`&&` and `||` are logical operators that play important roles in conditionally rendering components based on certain conditions in React.

```js
let age = 25;
let message = age >= 18 && "You are an adult";
// Output: You are an adult
```

Here, when `age >= 18` is `true`, `"You are an adult"` is assigned to the `message` variable. Otherwise, `message` will be `false` (a falsy value, so nothing would be rendered in JSX).

> Basically, the code after `&&` will only run if the condition before `&&` is `true`: do this if the condition is satisfied.

```jsx
const App = () => {
  let isLoggedIn = true;
  return isLoggedIn && <p>User is logged in</p>
  // The function returns the p tag because isLoggedIn is true
}
```

In this React example, if `isLoggedIn` is `true`, the `p` element with the text `"User is logged in"` will be rendered. If `isLoggedIn` is `false`, nothing will be rendered.

```jsx
const App = () => {
  let message = null;
  return <p>{message || "No new messages"}</p>
  // The function renders "No new messages" since message is null
}
```

```jsx
const App = () => {
  let message = "Message received";
  return <p>{message || "No new messages"}</p>
  // The function renders "Message received" since message has a truthy value
}
```

`||` has the unique quality of rendering a fallback or default component or element. If `message` is `null` (or any falsy value), the text `"No new messages"` will be rendered. If `message` has a truthy value, that value will be displayed instead.

> Basically: if there is no value, use the fallback; if there is a value, use that value.

Here is a vanilla JavaScript example.
```js
let userInput = ""; // User did not provide any input

// Use the logical || operator to provide a default value
let message = userInput || "Default message";
```

The `&&` (logical AND) and `||` (logical OR) operators in JavaScript are called "short-circuit" operators because they evaluate expressions from left to right and stop (short-circuit) as soon as the outcome is determined.

## 4. Objects

Understanding objects is essential not only in React but in other libraries and frameworks as well. Objects can be used to store and group data flexibly, and this knowledge will be helpful when dealing with APIs, as they rely heavily on objects for configuration, event handling, and data management. Objects follow a key-value pair structure, making it easy to access, update, and manage data efficiently.

### Destructuring Objects

```js
const person = {
  name: "Makku",
  age: 19,
  isStudent: true,
};

// Accessing object properties without destructuring
const name = person.name;
const age = person.age;
const isStudent = person.isStudent;
```

Here, we have a `person` object with keys as the properties and values as their corresponding data. To access the properties of the `person` object without destructuring, you would need to reference each property by its key individually, which is tiresome.

```js
const person = {
  name: "Makku",
  age: 19,
  isStudent: true,
};

// Destructuring an object
const { name, age, isStudent } = person;
```

Destructuring assigns the variables `name`, `age`, and `isStudent` the corresponding values from the `person` object. You can then use these variables in your program. Destructuring objects is useful, especially when you're working with props in React.

### Defining Objects

```js
const name = "Makku";
const age = 19;
const isStudent = true;

const person = {
  name,
  age,
  isStudent,
};
```

There are many ways to create objects in JavaScript, and the code snippet above is an example of one.
You can create objects by putting defined variables directly inside the curly braces, creating an object whose keys are the variable names and whose values are the data of the referenced variables (this is called shorthand property syntax).

```js
const person = {
  name: "Makku",
  age: 19,
  isStudent: true,
};

const person2 = { ...person, name: "Kuma" };
```

The code block above creates another object with the same properties and values as `person`, except that the `name` property is replaced with `"Kuma"`. This is done using the spread operator `...`, which copies the properties of an existing object (or the elements of an array) into a new one.

## 5. `.map` and `.filter` Functions

```js
let names = ["Yldevier", "Precious", "Francis", "Lance", "Adriel", "Karl"];
```

Let's say we have an array of names, and we want to add `er` at the end of each name. We can use a `for` loop or `forEach` to do this, but we can also utilize `.map`, which makes this concise and easy. The `.map` function creates a new array by applying a provided callback function to each element of the original array.

```js
let names = ["Yldevier", "Precious", "Francis", "Lance", "Adriel", "Karl"];

let namesList = names.map((name) => {
  return name + "er";
});

/* Output
[
  "Yldevierer",
  "Preciouser",
  "Franciser",
  "Lanceer",
  "Adrieler",
  "Karler"
]
*/
```

In this code snippet, the `.map` function is used on the `names` array. It iterates over each element (`name`) in the array and applies the transformation function `name => name + "er"`.

```jsx
const App = () => {
  let names = ["Yldevier", "Precious", "Francis", "Lance", "Adriel", "Karl"];

  return (
    <div>
      {names.map((name) => (
        <h1 key={name}>{name}</h1>
      ))}
    </div>
  );
};
```

In this React example, the `.map` function is used to iterate over the `names` array and dynamically generate `<h1>` elements for each name (the `key` prop helps React identify each item in the list).
This approach leverages JavaScript's array methods within JSX to render a list of names as header (`<h1>`) elements inside a React functional component.

On the other hand, the `.filter` function creates a new array containing all elements of the original array that pass a specified test implemented by a provided callback function.

```js
let names = ["Yldevier", "Precious", "Francis", "Lance", "Adriel", "Karl"];

let nameList = names.filter((name) => {
  return name.length > 5;
});

// Output: [ "Yldevier", "Precious", "Francis", "Adriel" ]
```

In this code snippet, we filter the `names` array and create a new array containing only the elements with lengths greater than 5.

The functions `.map` and `.filter` are fundamental in React for managing dynamic content and efficiently rendering lists based on specific conditions or transformations of data.

> This concludes the first part of the discussion, as I want to grasp the next concepts more fully. The second article will be posted as soon as I am confident about it. I hope you learn something from this article! Happy Coding!

---

> 7/4/2024 - Hello, here is the next part of the article! Happy reading!

{% embed https://dev.to/makkukuma/all-the-javascript-concepts-you-need-to-know-before-learning-react-part-2-1de8 %}

---

References: https://www.youtube.com/watch?v=m55PTVUrlnA
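As a short recap, here is a vanilla JavaScript snippet (with illustrative data) combining several of the concepts covered above: destructuring, arrow functions, `.filter`, `.map`, and short-circuiting:

```javascript
const people = [
  { name: "Yldevier", age: 19 },
  { name: "Karl", age: 17 },
  { name: "Precious", age: 21 },
];

// Keep only adults (.filter), then build greeting strings (.map),
// destructuring each object directly in the arrow function's parameter.
const greetings = people
  .filter(({ age }) => age >= 18)
  .map(({ name }) => `Hello, ${name}!`);

// Short-circuit fallback in case the list is empty.
const first = greetings[0] || "No greetings";

console.log(greetings); // [ 'Hello, Yldevier!', 'Hello, Precious!' ]
console.log(first);     // Hello, Yldevier!
```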
makkukuma
1,907,203
Revolutionize Your Movie Script Translation with AI
In the digital age, creating multilingual content is essential to reach a broader audience. However,...
0
2024-07-01T06:18:43
https://dev.to/harshitlyzr/revolutionize-your-movie-script-translation-with-ai-27gm
In the digital age, creating multilingual content is essential to reach a broader audience. However, translating movie scripts while preserving their tone, style, and cultural context is a significant challenge. Our new AI Movie Script Autodubbing app, powered by Lyzr Automata and Streamlit, aims to simplify this process, making it seamless and efficient. **What is AI Movie Script Autodubbing?** AI Movie Script Autodubbing is an advanced application designed to translate movie scripts from one language to another while maintaining the original essence of the content. It leverages the powerful capabilities of the OpenAI Model and Lyzr Automata’s sophisticated pipelines to deliver accurate and culturally relevant translations. **How Does It Work?** User-Friendly Interface: Built using Streamlit, the app offers an intuitive interface where users can easily input their script and select the source and target languages. Secure API Integration: Users can securely enter their OpenAI API key to access the GPT-4 Turbo model, ensuring privacy and data protection. Advanced Translation Pipeline: Utilizing Lyzr Automata’s LinearSyncPipeline, the app follows a structured process to translate scripts accurately, maintaining the tone and style of the original content. **Key Features** Accurate Translations: The app ensures that translations are not only accurate but also convey the original meaning and emotions of the dialogues and descriptions. Cultural Adaptation: It adapts cultural references appropriately, making sense to the target language audience. Consistency: The app maintains the characters’ personalities and voices consistent with the original script. Formatting Preservation: It preserves the formatting of the script, including scene headings, action lines, and dialogues. Why Choose AI Movie Script Autodubbing? AI Content Detector: Our app uses advanced AI technology similar to content detectors to ensure high-quality translations. 
AI Content Generator: Leveraging capabilities akin to AI content generators, the app produces natural and fluent translations.

**Setting Up the Environment**

**Imports:**

Import the necessary libraries: streamlit and the required classes from lyzr_automata.

```
pip install lyzr_automata streamlit
```

```
import streamlit as st
from lyzr_automata.ai_models.openai import OpenAIModel
from lyzr_automata import Agent, Task
from lyzr_automata.tasks.task_literals import InputType, OutputType
from lyzr_automata.pipelines.linear_sync_pipeline import LinearSyncPipeline
from PIL import Image
```

**Sidebar Configuration**

We create a sidebar for user inputs, including an API key input for accessing the OpenAI GPT-4 model. This ensures that the API key remains secure.

```
api = st.sidebar.text_input("Enter your OPENAI API KEY Here", type="password")

if api:
    openai_model = OpenAIModel(
        api_key=api,
        parameters={
            "model": "gpt-4-turbo-preview",
            "temperature": 0.2,
            "max_tokens": 1500,
        },
    )
else:
    st.sidebar.error("Please Enter Your OPENAI API KEY")
```

**script_translator Function:**

```
def script_translator(lang1, lang2, script):
    translator_agent = Agent(
        prompt_persona=f"You are a script translator with over 10 years of experience in the film industry. You have a deep understanding of both {lang1} and {lang2} and are well-versed in the nuances of movie scripts.",
        role="Script Translation",
    )

    translation_task = Task(
        name="Script Translation Task",
        output_type=OutputType.TEXT,
        input_type=InputType.TEXT,
        model=openai_model,
        agent=translator_agent,
        log_output=True,
        instructions=f"""Translate the provided movie script from {lang1} to {lang2} while maintaining the original tone, style, and cultural context.
Follow the instructions below:
- Ensure that the translation is accurate and conveys the original meaning and emotions of the dialogues and descriptions.
- Adapt cultural references appropriately to make sense to a {lang2}-speaking audience.
- Maintain the natural flow of conversations and descriptions, ensuring that the translated text sounds natural to native {lang2} speakers. - Keep the characters' personalities and voices consistent with the original script. - Preserve the formatting of the script, including scene headings, action lines, and dialogues. Script: {script} """, ) output = LinearSyncPipeline( name="Script Translation", completion_message="Script Translation Done!", tasks=[ translation_task ], ).run() return output[0]['task_output'] ``` def script_translator(lang1, lang2, script):: Defines a function named script_translator that takes three arguments: lang1: The source language of the script. lang2: The target language for translation. script: The script content to be translated. translator_agent = Agent(...): Creates an Agent object defining the prompt persona and role for the AI model. The persona describes the agent as a script translator with expertise in the source and target languages. translation_task = Task(...): Creates a Task object defining the translation task. This includes the task name, output and input types, the AI model to be used, the agent persona, logging configuration, and instructions for the model. The instructions specify the translation goals, cultural adaptation considerations, maintaining natural flow and character consistency, and preserving script formatting. output = LinearSyncPipeline(...): Creates a LinearSyncPipeline object specifying the pipeline name, completion message ("Script Translation Done!"), and the list of tasks to be executed (in this case, only the translation_task). output.run(): This line executes the LinearSyncPipeline object. The run method likely triggers the translation task using the defined OpenAI model and agent. return output[0]['task_output']: After running the pipeline, the code retrieves the output of the first task (the translation task) from the output list. 
The specific index ([0]) is used because there's only one task in this pipeline. The output likely contains the translated script text. **User Code Input:** ``` language1 = st.text_input("Enter Your Script Language", placeholder="English") language2 = st.text_input("Enter Translating language", placeholder="Hindi") scripts = st.text_area("Enter Your Script", height=300) ``` language1 = st.text_input(...): Creates a text input field in the app where users can enter the source language of their script. language2 = st.text_input(...): Creates another text input field for users to specify the desired target language for translation. scripts = st.text_area(...): Creates a text area where users can paste their script content. **Generate Button and Output Display:** ``` if st.button("Translate"): solution = script_translator(language1, language2, scripts) st.markdown(solution) ``` if st.button("Translate"):: Checks if the user clicks a button labeled "Translate". solution = script_translator(...): If the button is clicked, calls the script_translator function with the user-provided languages and script content. The function presumably returns the translated script text. st.markdown(solution): Displays the translated script text (stored in the solution variable) using markdown formatting. **Running the App** Finally, run the app using the following command in your terminal: ``` streamlit run app.py ``` try it now: https://lyzr-script-translation.streamlit.app/ For more information explore the website: [Lyzr](https://www.lyzr.ai/)
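One practical note on the API key step above: rather than typing the key on every run, a common pattern is to fall back to an environment variable when the sidebar field is left empty (the variable name `OPENAI_API_KEY` here is a conventional choice, not something this app requires):

```python
import os

def get_api_key(sidebar_value):
    # Prefer the key typed into the sidebar; otherwise fall back to the
    # OPENAI_API_KEY environment variable (None if neither is set).
    return sidebar_value or os.environ.get("OPENAI_API_KEY")

os.environ["OPENAI_API_KEY"] = "sk-from-env"   # demo value only
print(get_api_key("sk-from-sidebar"))  # the typed key wins
print(get_api_key(""))                 # falls back to the environment
```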
harshitlyzr
1,907,202
Best Pest Control Services in Hyderabad
Our company has been in the market for years and we have gained immense experience in the field of...
0
2024-07-01T06:18:02
https://dev.to/ajlpest_controlservices/best-pest-control-services-in-hyderabad-4pa0
pestcontrol, ajlpestcontrolservices, pestcontrolservices, termitecontrol
Our company has been in the market for years, and we have gained immense experience in the field of pest control. Our employees are highly trained and skilled at eliminating the unwanted guests that make a base in your home in the form of rats, rodents, ants, and even termites. The team of experts also provides the **_Best Pest Control Services in Hyderabad_**, including shifting objects to another place or room while treating your home. We at AJL Pest Control Services are quick to respond to your queries, sending you updates on the order placed and the service being carried out.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/szb2nlk3kmlain5ybfwf.jpg)

Our pest control company employs a sizeable team to cover the whole city and cater to demand. We offer special packages to customers based on the severity of the situation. In case bugs don't vanish soon, we double-check to ensure the insects are killed and prevention measures are in place. The pest control team is divided into units, each specialized in dealing with specific pest control issues. We offer services to both residential and commercial places. Be it a single-bedroom flat or a multi-story building, our services will reach your doorstep.
ajlpest_controlservices
1,907,201
5 Tips to Improve Your Flutter Performance
When developing mobile applications using Flutter, performance is crucial. A smoothly running...
0
2024-07-01T06:15:18
https://dev.to/tentanganak/5-tips-to-improve-your-flutter-performance-2279
flutter, dart, mobile
When developing mobile applications using Flutter, performance is crucial. A smoothly running application provides a better user experience, allowing users to explore the app without feeling annoyed or frustrated by slow startup times, crashes, or jank.

Optimizing application performance covers various aspects, such as app start times and efficient memory management. By minimizing the workload during initial launch, using efficient state management, and properly disposing of resources, developers can ensure a smoother user experience. Here are some ways to improve the performance of Flutter applications.

**1. Avoid Unnecessary Initialization in the Main App**

The `main()` function is the entry point of a Flutter application. Keeping it clean and avoiding unnecessary initializations here is vital. Heavy operations should be deferred until they are needed, preferably in the appropriate widgets or services. Also minimize asynchronous work in `main()` to ensure the first render happens as fast as possible. This helps to speed up the app's startup time, providing users with a quicker load.

```
void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: HomePage(),
    );
  }
}
```

Additionally, we can speed up initialization by using Future.wait to perform multiple asynchronous operations concurrently. This technique allows us to initiate several tasks simultaneously and wait for all of them to complete, optimizing the overall initialization time and improving the app's performance right from the start.

```
void main() async {
  // Required before doing async work (e.g. plugin calls) ahead of runApp.
  WidgetsFlutterBinding.ensureInitialized();
  await Future.wait([initFirebase(), initDatabase()]);
  runApp(const MyApp());
}
```

And consider using a splash screen as the initial page while initialization completes.
```
void main() async {
  runApp(const MyApp());
}

class MyApp extends StatefulWidget {
  const MyApp({Key? key}) : super(key: key);

  @override
  State<MyApp> createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  bool isSplashShow = true;

  @override
  void initState() {
    super.initState();
    _init();
  }

  void _init() async {
    await Future.wait([_initFirebase(), _initDatabase()]);
    _checkIsLoggedIn();
    // Update the flag inside setState so the UI rebuilds
    // and the splash screen is replaced by the home page.
    setState(() {
      isSplashShow = false;
    });
  }

  Future<void> _initFirebase() async {}

  Future<void> _initDatabase() async {}

  void _checkIsLoggedIn() {}

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
        useMaterial3: true,
      ),
      home: isSplashShow
          ? const SplashPage()
          : const MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}
```

**2. Prefer Using `ListView` or `CustomScrollView` Over `SingleChildScrollView`**

When dealing with scrollable content, using `ListView` or `CustomScrollView` is more performance-efficient compared to `SingleChildScrollView`. `ListView` and `CustomScrollView` are optimized for scrolling performance and memory usage, especially with large datasets, as they lazily build and dispose of widgets as they come into and out of the viewport.

For example, an application that uses the SingleChildScrollView widget to display a list of 9,999 text items uses approximately 30-40 MB of memory.

```
class MyHomePage extends StatefulWidget {
  const MyHomePage({Key?
key}) : super(key: key);

  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  final List<String> items = [];

  @override
  void initState() {
    super.initState();
    _init();
  }

  void _init() {
    for (int i = 0; i < 9999; i++) {
      items.add(i.toString());
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text("Title"),
      ),
      body: SingleChildScrollView(
        child: Column(
          children: [
            for (final item in items)
              SizedBox(
                width: double.infinity,
                child: Text(item),
              ),
          ],
        ),
      ),
    );
  }
}
```

![the memory usage is around 30-40 MB](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y1u3gt2m8075dad2qcq6.png)

However, if the ListView widget is used to display the same 9,999 text items, the memory usage is around 7-10 MB, and frame rendering time also improves.

```
class MyHomePage extends StatefulWidget {
  const MyHomePage({Key? key}) : super(key: key);

  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  final List<String> items = [];

  @override
  void initState() {
    super.initState();
    _init();
  }

  void _init() {
    for (int i = 0; i < 9999; i++) {
      items.add(i.toString());
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text("Title"),
      ),
      body: ListView.builder(
        itemCount: items.length,
        itemBuilder: (context, index) {
          return Text(items[index]);
        },
      ),
    );
  }
}
```

![the memory usage is around 7-10 MB](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y4gnktdnfg661vl7cy01.png)

**3. Use `cacheWidth` and `cacheHeight` in `Image`**

Displaying images in a Flutter application is a basic and easy task. However, many of us are unaware that decoding an image at a size much larger than what the widget actually displays causes our application to use unnecessary memory. Flutter provides a way to detect oversized images by setting debugInvertOversizedImages = true.
This can alert developers if the displayed images are larger than desired. ``` void main() { debugInvertOversizedImages = true; return runApp(const MyApp()); } class MyHomePage extends StatelessWidget { const MyHomePage({Key? key}) : super(key: key); @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: const Text("counter"), ), body: Column( children: [ Image.network( "https://images.unsplash.com/photo-1715196372160-31ba56b1a2f9", ), ], ), ); } } ``` If the image dimensions exceed what is suitable for the widget, Flutter will generate an error when debugInvertOversizedImages is set to true. ![warning error memory](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aq5pu5uaoyiq9u8k3f7m.png) For this issue, we can use the cacheWidth and cacheHeight parameters in the Image widget. These parameters allow us to control memory usage by resizing the displayed image. Let's see how we can utilize them: ``` Image.network( "https://images.unsplash.com/photo-1690906379371-9513895a2615", height: 300, width: 200, cacheHeight: 300, cacheWidth: 200, ), ``` By setting cacheWidth and cacheHeight to appropriate values, we can ensure that the image is displayed with the desired dimensions without unnecessarily consuming memory. However, this solution is not perfect because each device has a different pixel ratio. Errors may not appear on our debug device but might on other devices with different pixel ratios. Therefore, we can calculate cacheHeight and cacheWidth by multiplying with MediaQuery.of(context).devicePixelRatio. For example: ``` extension ImageExtension on num { int cacheSize(BuildContext context) { return (this * MediaQuery.of(context).devicePixelRatio).round(); } } ``` That extension allows us to easily calculate the cache size for our images and make the optimization process smoother: ``` class MyHomePage extends StatelessWidget { const MyHomePage({Key? 
key}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    debugInvertOversizedImages = false;
    return Scaffold(
      appBar: AppBar(
        title: const Text("counter"),
      ),
      body: Column(
        children: [
          Image.network(
            "https://images.unsplash.com/photo-1690906379371-9513895a2615",
            height: 300,
            width: 200,
            cacheHeight: 300.cacheSize(context),
            cacheWidth: 200.cacheSize(context),
          ),
        ],
      ),
    );
  }
}
```

**4. Dispose Unused Streams and Controllers**

When streams and controllers are no longer needed, failing to dispose of them can cause memory leaks. Memory leaks occur when memory that is no longer needed is not released; over time this can consume a significant portion of available memory, leading to poor performance and even application crashes (out of memory). To prevent memory leaks, it is important to dispose of streams and controllers when they are no longer needed. This is typically done in the `dispose` method of a `StatefulWidget`.

```
late StreamSubscription _subscription;
late TextEditingController _textEditingController;
late ScrollController _scrollController;

@override
void initState() {
  super.initState();
  _subscription = counterBloc.counterStream.listen((data) {
    // handle data
  });
  _textEditingController = TextEditingController();
  _scrollController = ScrollController();
  _textEditingController.addListener(() {
    // handle data
  });
  _scrollController.addListener(() {
    // handle data
  });
}

@override
void dispose() {
  _subscription.cancel();
  _textEditingController.dispose();
  _scrollController.dispose();
  super.dispose();
}
```

**5. Use `const` and Prefer State Management Solutions**

Using `const` constructors for widgets whenever possible helps Flutter optimize the build process by reusing widgets rather than recreating them. Additionally, employing state management solutions like BLoC, Riverpod, or Provider can lead to better-organized code and more efficient state handling, ultimately improving performance.
```
final counterBloc = CounterBloc();

final counterProvider =
    StateNotifierProvider.autoDispose<CounterController, int>(
  (ref) => CounterController(),
);

class CounterController extends StateNotifier<int> {
  CounterController() : super(0);

  void increment() {
    state = state + 1;
  }
}

class MyHomePage extends StatelessWidget {
  const MyHomePage({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text("counter"),
      ),
      body: Column(
        children: const [
          IncrementWidget(),
          CounterWidget(),
        ],
      ),
    );
  }
}

class IncrementWidget extends ConsumerWidget {
  const IncrementWidget({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context, ref) {
    return ElevatedButton(
      onPressed: () {
        ref.read(counterProvider.notifier).increment();
      },
      child: const Text("counter"),
    );
  }
}

class CounterWidget extends ConsumerWidget {
  const CounterWidget({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context, ref) {
    final count = ref.watch(counterProvider);
    return Text('$count');
  }
}
```
edolubis21
1,907,199
Unlocking the Power of R: Essential Libraries for Data Science in 2024
Introduction R has long been a favourite programming language for data scientists, thanks...
0
2024-07-01T06:15:07
https://dev.to/sejal_4218d5cae5da24da188/unlocking-the-power-of-r-essential-libraries-for-data-science-in-2024-em3
datascience, rlibraries, dataanalysis
## Introduction

R has long been a favourite programming language for data scientists, thanks to its powerful capabilities for statistical computing and data visualization. As the field of data science evolves, so too do the tools and libraries that [data professionals](https://www.pangaeax.com/) rely on. In 2024, certain R libraries stand out for their robust functionalities and ability to streamline complex data tasks. This blog highlights some of these essential R libraries that every data scientist should be familiar with.

## 1. Tidyverse: A Comprehensive Suite for Data Manipulation and Visualization

The Tidyverse is a collection of R packages designed for [data science](https://www.pangaeax.com/2024/01/11/data-science-trends-in-2024/). It includes:

- **ggplot2:** For creating elegant data visualizations.
- **dplyr:** For data manipulation and transformation.
- **tidyr:** For tidying data and making it easier to work with.
- **readr:** For fast and friendly data import.

These packages work seamlessly together, offering a cohesive and powerful toolkit for managing and visualizing data.

## 2. caret: Simplifying Machine Learning

The caret package (Classification And Regression Training) is indispensable for building and evaluating predictive models. It streamlines the process of:

- **Data Preprocessing:** Including normalization and feature selection.
- **Model Training:** With a unified interface for various machine learning algorithms.
- **Model Evaluation:** Using cross-validation and performance metrics.

caret's comprehensive functionality makes it easier to implement and compare different machine learning models.

## 3. shiny: Bringing Data to Life with Interactive Dashboards

Shiny allows data scientists to create interactive web applications directly from R. With Shiny, you can:

- **Build Dashboards:** That visualize data in real time.
- **Share Insights:** With interactive features that engage stakeholders.
- **Integrate with Other Tools:** Such as databases and web services.

Shiny is particularly useful for developing prototypes and showcasing data findings in a dynamic format.

## 4. data.table: High-Performance Data Processing

The data.table package is renowned for its speed and efficiency in handling large datasets. Key features include:

- **Fast Data Manipulation:** With concise and expressive syntax.
- **Efficient Memory Usage:** Optimized for performance with large data.
- **Robust Data Aggregation:** Simplifying complex data operations.

data.table is essential for data scientists dealing with big data and needing quick processing times.

## 5. sf: Advanced Spatial Data Analysis

For data scientists working with geographic data, the sf (simple features) package provides a powerful framework for spatial [data analysis](https://www.pangaeax.com/2022/06/06/data-analytics-solve-business-problems/). It supports:

- **Reading and Writing Spatial Data:** From various file formats.
- **Geometric Operations:** Such as intersections and unions.
- **Spatial Visualization:** Integrated with ggplot2 for mapping.

The sf package is crucial for tasks involving geospatial data and spatial statistics.

## 6. text: Text Mining and Natural Language Processing

The text package is designed for text mining and [NLP (Natural Language Processing)](https://www.pangaeax.com/2023/05/24/demystifying-natural-language-processing-nlp/) tasks. It facilitates:

- **Text Pre-processing:** Including tokenization and stemming.
- **Text Analysis:** With tools for sentiment analysis and topic modelling.
- **Visualization:** Of textual data insights.

As the importance of unstructured text data grows, text mining skills and tools become increasingly vital.

## Conclusion

Staying up to date with the latest R libraries can significantly enhance your data science projects, making them more efficient, accurate, and insightful.
Whether you are manipulating data, building models, or creating interactive visualizations, these libraries offer the tools you need to excel in 2024. For a more detailed insight into the top R libraries for data science in 2024, read our comprehensive blog on [Pangaea X](https://www.pangaeax.com/2024/05/06/top-r-libraries-for-data-science-in-2024/). Explore these powerful tools and take your data science skills to the next level!
sejal_4218d5cae5da24da188
1,907,198
Is the Roadrunner Email Still Active
In the realm of email services, there are numerous options available, each with its unique features...
0
2024-07-01T06:14:33
https://dev.to/siyaram_choahan_94b56d4e2/is-the-roadrunner-email-still-active-362c
In the realm of email services, there are numerous options available, each with its unique features and benefits. One such service that has been around for years is [Roadrunner Email](https://roadrunnermailsupport.com/recover-roadrunner-email-password/). However, with the rise of newer email providers, many users are left wondering, "Is the Roadrunner email still active?" This blog post aims to address this question in detail, exploring the current status of Roadrunner email, its features, and how it compares to other email services available today.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j2qqdi6t875tk4f6t92p.jpg)

**Introduction**

Roadrunner email, also known as RR email, has been a reliable service for many users over the years. Originally provided by Time Warner Cable (TWC), it has undergone several changes and rebranding, particularly after the acquisition of TWC by Charter Communications. Today, it is known as Spectrum email. Despite these changes, many long-time users still refer to it as Roadrunner email. This post will delve into the current status of Roadrunner email, its features, and whether it remains a viable option for email users today.

**The Evolution of Roadrunner Email**

Roadrunner email has a rich history, starting as a service offered by Time Warner Cable. It gained popularity for its reliability and user-friendly interface. However, with the acquisition of TWC by Charter Communications in 2016, several changes took place, including the rebranding of Roadrunner email to Spectrum email.

**The Transition to Spectrum Email**

The transition from Roadrunner email to Spectrum email was a significant change for users. While the core email service remained the same, there were updates to the interface and integration with other Spectrum services.
Users were assured that their existing email addresses and data would remain intact, but they would now be accessing their emails through Spectrum's platform.

**Is the Roadrunner Email Still Active?**

This transition raised questions about the continuity and support for Roadrunner email. Users wondered, "Is the Roadrunner email still active?" The answer is yes, but with some nuances. While the Roadrunner brand name is no longer in use, the email service itself continues to function under the Spectrum email umbrella. Users can still access their emails using their Roadrunner email addresses, but they do so through Spectrum's email portal.

**Features and Benefits of Roadrunner (Spectrum) Email**

Despite the rebranding, the core features and benefits of Roadrunner email remain largely unchanged under Spectrum email. Here are some of the key features that users can still enjoy:

**1. Reliable Email Service**

Roadrunner email has always been known for its reliability. This continues under Spectrum email, with robust servers ensuring minimal downtime and consistent performance.

**2. User-Friendly Interface**

The interface has been updated, but it remains user-friendly. Users who have been with Roadrunner email for years will find the transition to Spectrum email seamless and intuitive.

**3. Integration with Other Spectrum Services**

One of the advantages of the rebranding is the integration with other Spectrum services. Users can now manage their internet, cable, and phone services alongside their email, all from a single platform.

**4. Security Features**

Spectrum email comes with enhanced security features, including spam filters, antivirus protection, and options for two-factor authentication. This ensures that users' emails and personal information remain secure.

**Challenges and Limitations**

While there are many benefits to using Spectrum email, there are also some challenges and limitations that users should be aware of:

**1. Limited Storage Space**

Compared to some newer email providers, Spectrum email offers limited storage space. Users may need to manage their inboxes regularly to avoid running out of space.

**2. Lack of Advanced Features**

Spectrum email, while reliable, lacks some of the advanced features found in other email services like Gmail or Outlook. For users who require features like advanced search, integrated calendars, and task management, Spectrum email may fall short.

**3. Customer Support**

Some users have reported issues with customer support. While Spectrum provides support for their email service, the response time and effectiveness can vary.

**Comparing Roadrunner (Spectrum) Email to Other Services**

To answer the question, "Is the Roadrunner email still active?" we must also compare it to other popular email services to see how it stacks up:

**1. Gmail**

Gmail, offered by Google, is one of the most popular email services in the world. It offers a plethora of features, including advanced search, an integrated calendar, and vast storage space. Compared to Spectrum email, Gmail is more feature-rich and user-friendly.

**2. Outlook**

Outlook, offered by Microsoft, is another robust email service. It integrates well with other Microsoft services and offers features like calendar integration, task management, and advanced search options. While Spectrum email is reliable, Outlook provides a more comprehensive email experience.

**3. Yahoo Mail**

Yahoo Mail is known for its large storage space and user-friendly interface. It also offers features like news integration and customizable themes. Spectrum email, while reliable, lacks some of these customization options.

**4. ProtonMail**

For users concerned about privacy, ProtonMail offers end-to-end encryption and a focus on security. Spectrum email does offer security features, but ProtonMail's focus on privacy is unparalleled.
**Conclusion**

So, is the [Roadrunner email](https://roadrunner-support-mail.blogspot.com/2024/06/is-roadrunner-good-email-service.html) still active? The answer is yes, albeit under a new name and branding. Spectrum email continues to provide the core features that Roadrunner email users have come to rely on, with the added benefits of integration with other Spectrum services and enhanced security features. While it may lack some of the advanced features and vast storage space offered by newer email services like Gmail and Outlook, it remains a reliable option for many users.
siyaram_choahan_94b56d4e2
1,907,197
Crouch End to Heathrow Airport
Heathrow Airport, a bustling hub for international travelers, serves as the gateway to countless...
0
2024-07-01T06:11:34
https://dev.to/rana_nayab_3e9fb133c75796/crouch-end-to-heathrow-airport-57bf
webdev, beginners, programming
<p><span style="font-size:11pt;">Heathrow Airport, a bustling hub for international travelers, serves as the gateway to countless destinations across the United Kingdom. Whether you&apos;re jetting off on a business trip, embarking on a vacation, or returning home after an adventure, navigating ground transportation to and from Heathrow is an essential part of your journey. Among the myriad of transportation options available, opting for a taxi service offers convenience, comfort, and peace of mind, ensuring a seamless transition from the airport to your final destination.</span></p> <p><span style="font-size:11pt;">Here&apos;s a comprehensive guide to booking a taxi from Heathrow Airport to various popular destinations across the UK:</span></p> <h2><strong><span style="font-size:16pt;">Cardiff to Heathrow Airport:</span></strong></h2> <ol> <li style="list-style-type:decimal;font-size:11pt;"> <p><span style="font-size:11pt;">Cardiff, the capital city of Wales, is approximately 150 miles from Heathrow Airport. When booking a taxi from</span><a href="https://albionairportcars.co.uk/transfer/taxi-from-cardiff-cf10--to-heathrow-airport"><strong><u><span style="color:#1155cc;font-size:11pt;">&nbsp;</span></u></strong><strong><u><span style="color:#1155cc;font-size:10pt;">Cardiff To Heathrow Airport</span></u></strong></a><span style="font-size:11pt;">, prioritize reputable taxi services that offer experienced drivers and comfortable vehicles for the long journey.</span></p> </li> </ol> <h2><strong><span style="font-size:16pt;">Heathrow Airport to Cricklewood:</span></strong></h2> <ol start="2"> <li style="list-style-type:decimal;font-size:11pt;"> <p><span style="font-size:11pt;">Cricklewood, a vibrant district in northwest London, is roughly 18 miles from Heathrow Airport. 
Look for&nbsp;</span><a href="https://albionairportcars.co.uk/transfer/taxi-from-heathrow-airport-to-cricklewood-nw2"><strong><u><span style="color:#1155cc;font-size:10pt;">Heathrow Airport To Cricklewood</span></u></strong></a><span style="font-size:11pt;">&nbsp;services that provide efficient and reliable transfers, ensuring you reach Cricklewood safely and on time.</span></p> </li> </ol> <h2><strong><span style="font-size:16pt;">Crouch End to Heathrow Airport:</span></strong></h2> <ol start="3"> <li style="list-style-type:decimal;font-size:11pt;"> <p><span style="font-size:11pt;">Crouch End, a leafy suburb in&nbsp;</span><a href="https://albionairportcars.co.uk/transfer/taxi-from-crouch-end-n8-to-heathrow-airport"><strong><u><span style="color:#1155cc;font-size:10pt;">Crouch End To Heathrow Airport</span></u></strong></a><span style="font-size:11pt;">, is approximately 22 miles from Heathrow Airport. Choose taxi services that offer flexible pickup options and competitive rates, allowing you to start your journey stress-free.</span></p> </li> </ol> <h2><strong><span style="font-size:16pt;">Highgate to Heathrow Airport:</span></strong></h2> <ol start="4"> <li style="list-style-type:decimal;font-size:11pt;"> <p><span style="font-size:11pt;">Highgate, known for its picturesque streets and historic landmarks, is around 20 miles from&nbsp;</span><a href="https://albionairportcars.co.uk/transfer/taxi-from-highgate-n6-to-heathrow-airport"><strong><u><span style="color:#1155cc;font-size:10pt;">Highgate To Heathrow Airport</span></u></strong></a><span style="font-size:11pt;">. 
Opt for taxi services with a track record of punctuality and professionalism, guaranteeing a smooth ride to the airport.</span></p> </li> </ol> <h2><strong><span style="font-size:16pt;">Finchley to Heathrow Airport:</span></strong></h2> <ol start="5"> <li style="list-style-type:decimal;font-size:11pt;"> <p><a href="https://albionairportcars.co.uk/transfer/taxi-from-finchley-n3-to-heathrow-airport"><strong><u><span style="color:#1155cc;font-size:10pt;">Finchley to Heathrow Airport</span></u></strong></a><span style="font-size:11pt;">, is approximately 20 miles from Heathrow Airport. Prioritize taxi services that prioritize customer satisfaction and safety, ensuring a comfortable and enjoyable journey to the airport.</span></p> </li> </ol> <h2><strong><span style="font-size:16pt;">Heathrow to Southampton Port Taxi:</span></strong></h2> <ol start="6"> <li style="list-style-type:decimal;font-size:11pt;"> <p><a href="https://albionairportcars.co.uk/transfer/taxi-from-heathrow-to-southampton-cruise-port"><strong><u><span style="color:#1155cc;font-size:10pt;">heathrow to southampton port taxi</span></u></strong></a><span style="font-size:11pt;">, a major cruise port on the south coast of England, is approximately 65 miles from Heathrow Airport. Select taxi services specializing in port transfers, offering convenient pickup and drop-off options to accommodate your travel plans.</span></p> </li> </ol> <p><span style="font-size:11pt;">Booking a taxi from Heathrow Airport to your desired destination is a straightforward process, thanks to the numerous reputable taxi services available. 
To ensure a stress-free journey, consider the following tips:</span></p> <ul> <li style="list-style-type:disc;font-size:11pt;"> <p><span style="font-size:11pt;">Book in Advance: Reserve your taxi ahead of time to secure your preferred pickup time and vehicle type, especially during peak travel periods.</span></p> </li> <li style="list-style-type:disc;font-size:11pt;"> <p><span style="font-size:11pt;">Research Providers: Explore reviews and recommendations to identify reputable taxi services with a track record of reliability and customer satisfaction.</span></p> </li> <li style="list-style-type:disc;font-size:11pt;"> <p><span style="font-size:11pt;">Communicate Clearly: Provide accurate details about your pickup location, destination, and any special requirements to ensure a smooth and efficient transfer.</span></p> </li> </ul> <p><span style="font-size:11pt;">Whether you&apos;re traveling for business or pleasure, choosing the right taxi service from Heathrow Airport can make all the difference in your travel experience. Sit back, relax, and enjoy the journey as you embark on your next adventure across the UK.</span></p>
rana_nayab_3e9fb133c75796
1,907,196
I'm Under DDoS Attack
Since the moment I started building my website, I have always considered the possibility of it being...
0
2024-07-01T06:10:18
https://2coffee.dev/en/articles/im-under-ddos-attack
ddos, security
Since the moment I started building my website, I have always considered the possibility of it being targeted for destruction. There are various forms of attacks such as DDoS, spam, or attacks on certain security vulnerabilities... Do you think I have made any enemies that I should be worried about? Actually, no, I have never had any conflicts with anyone, but I can't escape the "watchful eyes" of these malicious actors on the internet. This is not the first website I have built, so paying attention to these unfriendly behaviors is not new to me. Recently, my blog has been experiencing a higher frequency of DDoS attacks. In its nearly 3 years of existence, I have lost count of the number of attacks. The lighter ones would cause the server to "freeze," resulting in slow response times. The heavier ones would cause the server to completely crash, making it inaccessible. So far, there have been no significant damages, but it is always a hassle to deal with this mess. It's not like we always have a computer and internet access nearby. DDoS attacks are not new, but their destructive power is extremely high. Whatever the reason may be for an attacker to decide to DDoS a website, they must find great satisfaction in seeing the website crippled. In the past, I hosted everything on DigitalOcean (DO) with a modest server configuration that was stable enough for the current user load. Occasionally, I would encounter a minor DDoS attack that would bring the server down. Initially, DO would issue warnings that the server was using more than 70% or even 90% of the CPU without knowing the cause. The [UptimeRobot](https://uptimerobot.com/) tool would send alerts that my website was inaccessible. Later, upon checking the access logs, I discovered that I was being DDoSed. Most of the attack durations are very short, and during those times, I could only grit my teeth and wait until it ended, sometimes even occurring in the middle of the night. 
When I woke up the next morning and rebooted the server, everything would go back to normal. People might think, why not take measures to protect against DDoS? My feeling about that would be :|, because I don't know how to effectively defend against it. Everyone thinks the first step would be to set up rate limits, and let me tell you, I did that. Each IP address is limited to only 10 requests per second. So why not upgrade the server then? Where do I find the money to do that? $6 a month may not be much, but it serves as a financial safety net and covers the normal traffic if there are no malicious actors. Then, should I install more monitoring tools? Do you think I can install anything on a shared-CPU server with 1GB of RAM and a 20GB SSD?... Actually, I've thought a lot about this issue, but I just can't effectively block DDoS attacks. Recall the [OSI 7-layer model](https://www.cloudflare.com/learning/ddos/what-is-layer-7/) that most technology students are taught: Layer 7 - the topmost layer - is the application layer. The rate limits that I applied were at Layer 7, and the software installed also operated at Layer 7... Layer 7 is the layer that directly interacts with your application or server. In simple terms, if an attacker can send an HTTP request to your server, they are attacking at Layer 7. And as you understand, your server will have to process their requests; even simply rejecting a request still requires some CPU time. A single request might be simple, but what about thousands, hundreds of thousands of requests at once? As you can see, my blog crashes. About 3 weeks ago, I wrote an article [migrating from DO to Cloudflare](https://2coffee.dev/en/articles/migrating-from-digitalocean-to-cloudflare-pages). Everything was carefully planned, starting with migrating the frontend, then the API, and eventually everything to avoid maintaining a centralized server like DO.
Perhaps this article "ticked off" some individuals and they decided to see if the "new home" could withstand DDoS attacks. The answer is yes, but only partially. After discovering that the main blog was no longer SSR, attackers quickly realized that the API calls were still going to DO, and they continued attacking the API, with the consequence being the same - it crashed. Finally, I couldn't bear it anymore. Even with limited time, I had to perform a large migration to Cloudflare to put an end to these attacks. ## Solution Cloudflare (CF) - a name not unfamiliar to many. Years ago, many people knew it for its free proxy servers, free SSL, CDN, and caches... there were many interesting things there. In the beginning, I used Cloudflare but didn't fully understand its functionalities, or even after setting up my website, I found that the access was slower or the caching features were quite annoying. However, recently Cloudflare seems to have transformed, with clearer operations, more documentation, and a stronger community. [Cloudflare stops Layer 3, 4 DDoS attacks](https://www.cloudflare.com/learning/ddos/layer-3-ddos-attacks/). This means it can block suspicious requests before they reach Layer 7, your application. CF acts as a shield for you. All you need to do is configure it to block all suspicious queries. To be honest, CF has many features that I haven't fully understood. But first, let's focus on defending against DDoS attacks, we can explore the other features later. The simplest way to defend against DDoS is by transferring your primary domain to Cloudflare, enabling the proxy and setting up rate limits. Rate limiting is a way to limit the number of accesses within a specified period of time. This is one of the simple yet effective methods to counter DDoS attacks. 
Imagine an IP suddenly sending continuous requests to your address, whether with a motive to cause destruction or "unintentionally" running some "loop" command that its owner laughs off when asked: "Sorry, I accidentally triggered a bug"... Well, this is quite a "bug," and if they can't "fix" it, let me help. You might object: weren't we talking about setting up rate limits on the server earlier, and didn't that fail? Well, remember that Cloudflare acts as an intermediary for all requests before they reach your actual server, meaning it operates at Layer 7, and according to them, CF has the technology and capability to counter DDoS at layers 3 and 4. In summary, setting up rate limits at CF will block a considerable number of requests before they reach your actual server. Now, I will guide readers on how to configure rate limits to minimize DDoS attacks through Cloudflare. Please note that this is one of the methods I have used, and there are many other methods, so if you have a better approach, please leave a comment below the article. First, of course, you need to register a Cloudflare account and set up the domain you want to protect against DDoS. Don't worry, Cloudflare will guide you through the process after successful registration. Once the domain is activated, go to "Security" > "WAF" on the left navigation bar. WAF, in simple terms, is Cloudflare's firewall, which allows you to configure, block, and modify user requests before they reach your actual server. Look to the right side: to reach your server, a request has to pass through all of CF's security layers. ![WAF](https://static-img.2coffee.dev/toi-dang-bi-ddos_rate-limit-ruletoi-dang-bi-ddos_waf.webp) Right there on the screen, switch to the "Rate limiting rules" tab and click the "Create rule" button at the bottom.
![Rate limiting rules](https://static-img.2coffee.dev/toi-dang-bi-ddos_rate-limit-rule.webp) Here, give a name to the "Rule" and select the path for which Cloudflare will set the limit. For example, here I choose "/", meaning that all paths of the main domain will be protected. ![Rule](https://static-img.2coffee.dev/toi-dang-bi-ddos_rule.webp) Next, set the limit by filling in the "Requests" and "Period" fields - the number of requests within a specified period, combining to form the rate limit. For example, here I choose 10 requests within 10 seconds for each IP address. If the limit is exceeded, users will receive an HTTP Status 429 error. ![Requests](https://static-img.2coffee.dev/toi-dang-bi-ddos_request.webp) Finally, save the configuration and try spamming your website to test if it's working properly. If you're wondering how to determine the number of requests, I suggest you "experiment". Keep trying until you find the right number, as it depends on the number of requests your website receives. For example, if a page loads 20 requests in total, the limit should definitely be higher than 20. Not to mention 1 second, then 2 seconds... and then it continues to make "n" additional requests, or users continuously navigate your website, generating "m" requests within a certain period of time... In conclusion, this number depends heavily on your website, so try and find a reasonable number. Finally, all the statistics about requests exceeding the limit will be displayed here. You can click on that "0" to view the details. Here, you can see that I haven't had any requests exceeding the limit in the past 24 hours. ![Stats](https://static-img.2coffee.dev/toi-dang-bi-ddos_stats.webp)
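Conceptually, the per-IP limit configured above (10 requests per 10 seconds) behaves like a fixed-window counter. The sketch below is only an illustration of that idea in Python — it is not how Cloudflare implements rate limiting, and the class and parameter names are invented for the example:

```python
import time
from collections import defaultdict


class FixedWindowRateLimiter:
    """Allow at most `limit` requests per `window` seconds for each client IP."""

    def __init__(self, limit=10, window=10.0, clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock  # injectable clock, handy for testing
        # ip -> [window_start_time, request_count_in_window]
        self.counters = defaultdict(lambda: [0.0, 0])

    def allow(self, ip):
        now = self.clock()
        window_start, count = self.counters[ip]
        if now - window_start >= self.window:
            # the previous window has expired: start a fresh one
            self.counters[ip] = [now, 1]
            return True
        if count < self.limit:
            self.counters[ip][1] = count + 1
            return True
        # over the limit: the caller should answer with HTTP 429
        return False
```

A server-side gateway would call `allow(client_ip)` on each request and return status 429 when it comes back `False` — the same response users see when they exceed the rule configured in the Cloudflare dashboard.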
hoaitx
1,905,563
Q3 Is Pivotal
from : jlabsdigital Hey guys I’m on the road this week so apologies for keeping this one briefer than...
0
2024-06-29T11:31:02
https://dev.to/rohitelyts/q3-is-pivotal-4eo8
jlabsdigital, bitcoin
from: [jlabsdigital](https://jlabsdigital.com/)

Hey guys, I’m on the road this week so apologies for keeping this one briefer than usual. Rather than our usual intro, let’s dive directly into the charts today and get straight to the point of what you need to know as we head into what will be a pivotal Q3…

And if there is hope to be found for a better performance than we witnessed in Q2, it seems to rest solely on the shoulders of ETH and its upcoming spot ETF listing, which could potentially go live as soon as July 4th. So without further ado, let’s take a look at how the market is positioning itself.

BTC Overhang

Though this week we found out we’re potentially only days away from the listing of ETH’s spot ETF, that headline was not the primary driver of this week’s price action. Unlike in May, when the sudden change of stance by the SEC sparked a rally back towards year-to-date highs, the market-moving news of this week was the seemingly endless supply overhang currently facing BTC. With the perfect storm of miner capitulation, US and German governments selling hundreds of millions worth of Bitcoin, and the upcoming unlock of previously dormant Mt. Gox supply, it’s hard to envision the market regaining the momentum it lost in June. We can see this in the data. The BVIV 30-day implied volatility index has continued its descent back towards January levels. We can thank the overhang of spot supply shattering the market’s expectation of a rally back above $70k in the near future.
rohitelyts
1,907,195
How to Scrape Amazon: A Comprehensive Guide
Amazon, a behemoth in the e-commerce industry, is a goldmine of data for businesses, researchers, and...
0
2024-07-01T06:10:11
https://dev.to/ionegarza/how-to-scrape-amazon-a-comprehensive-guide-502a
amazon, webscraping, scraping, python
Amazon, a behemoth in the e-commerce industry, is a goldmine of data for businesses, researchers, and enthusiasts. Scraping this data-rich platform can unveil invaluable insights, from price trends to customer reviews and product popularity. However, scraping Amazon is no small feat. This guide will walk you through the process, highlighting the tools, techniques, and challenges you'll face. ## Understanding the Basics Before diving into the technical aspects, it's essential to grasp the fundamental principles of web scraping and Amazon's structure. ## Web Scraping 101 [Web scraping](https://rentry.co/Tips-for-Web-Scraping) involves extracting data from websites and transforming it into a structured format, such as a [CSV or JSON file](https://coresignal.com/blog/json-vs-csv/). This process typically includes: 1. **Sending an HTTP Request**: [Accessing the webpage's HTML content](https://www.w3schools.com/tags/ref_httpmethods.asp). 2. **Parsing the HTML**: Identifying and extracting the relevant data. 3. **Storing the Data**: Saving the extracted information in a usable format. ## Amazon's Structure Amazon's web pages are dynamically generated and highly structured, making them both a challenge and an opportunity for web scraping. Key elements to target include: - **Product Listings**: Title, price, rating, reviews, and specifications. - **Customer Reviews**: Text, rating, date, and reviewer information. - **Seller Information**: Name, rating, and product listings. ## Tools of the Trade Selecting the right tools is crucial for effective web scraping. Here are some popular choices: ### Python Libraries - **BeautifulSoup**: Excellent for parsing HTML and XML documents. - **Requests**: Simplifies sending HTTP requests. - **Selenium**: Automates web browsers, useful for dynamic content. - **Scrapy**: A powerful and flexible web scraping framework. ### Proxies Amazon employs sophisticated anti-scraping measures, including IP blocking. 
To circumvent these, proxies are indispensable. Types include: - **Residential Proxies**: IP addresses from real devices, less likely to be blocked. - **Datacenter Proxies**: Cheaper but more prone to detection. - **Rotating Proxies**: Change IP addresses periodically, enhancing anonymity. ### Browser Automation Tools like [Selenium](https://www.selenium.dev/downloads/) can automate interactions with web pages, simulating human behavior to access dynamically loaded content. ## Step-by-Step Guide to Scraping Amazon Let's break down the process into manageable steps. ### Step 1: Setting Up Your Environment First, ensure you have Python installed. Then, install the necessary [libraries](https://docs.python.org/3/library/index.html): ``` pip install requests pip install beautifulsoup4 pip install selenium pip install scrapy ``` ### Step 2: Sending HTTP Requests Begin by sending a request to an Amazon page. Use the Requests library for this purpose: ``` import requests url = "https://www.amazon.com/s?k=laptops" headers = { "User-Agent": "Your User-Agent" } response = requests.get(url, headers=headers) html_content = response.content ``` ### Step 3: Parsing HTML with BeautifulSoup With the HTML content in hand, use BeautifulSoup to parse and extract the desired data: ``` from bs4 import BeautifulSoup soup = BeautifulSoup(html_content, "html.parser") products = soup.find_all("div", {"data-component-type": "s-search-result"}) for product in products: title = product.h2.text.strip() price = product.find("span", "a-price-whole") if price: price = price.text.strip() rating = product.find("span", "a-icon-alt") if rating: rating = rating.text.strip() print(f"Title: {title}, Price: {price}, Rating: {rating}") ``` ### Step 4: Handling Dynamic Content with Selenium Amazon often loads content dynamically. 
Use Selenium to handle such cases: ``` from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.chrome.service import Service from webdriver_manager.chrome import ChromeDriverManager driver = webdriver.Chrome(service=Service(ChromeDriverManager().install())) driver.get("https://www.amazon.com/s?k=laptops") products = driver.find_elements(By.CSS_SELECTOR, "div.s-search-result") for product in products: title = product.find_element(By.TAG_NAME, "h2").text price = product.find_element(By.CSS_SELECTOR, "span.a-price-whole") if price: price = price.text rating = product.find_element(By.CSS_SELECTOR, "span.a-icon-alt") if rating: rating = rating.text print(f"Title: {title}, Price: {price}, Rating: {rating}") driver.quit() ``` ### Step 5: Managing Proxies To avoid getting blocked, integrate proxies into your requests. Services like Spaw.co, Bright Data, and Smartproxy are reliable options. Here's how to use them: ``` proxies = { "http": "http://your_proxy:your_port", "https": "https://your_proxy:your_port" } response = requests.get(url, headers=headers, proxies=proxies) ``` ### Step 6: Extracting Customer Reviews To get customer reviews, navigate to the product page and parse the review section: ``` product_url = "https://www.amazon.com/dp/B08N5WRWNW" response = requests.get(product_url, headers=headers) soup = BeautifulSoup(response.content, "html.parser") reviews = soup.find_all("div", {"data-hook": "review"}) for review in reviews: review_text = review.find("span", {"data-hook": "review-body"}).text.strip() review_rating = review.find("i", {"data-hook": "review-star-rating"}).text.strip() review_date = review.find("span", {"data-hook": "review-date"}).text.strip() reviewer_name = review.find("span", {"class": "a-profile-name"}).text.strip() print(f"Reviewer: {reviewer_name}, Rating: {review_rating}, Date: {review_date}, Review: {review_text}") ``` ### Step 7: Dealing with Captchas Amazon employs captchas to thwart automated 
scraping. Implementing a [captcha-solving service](https://2captcha.com/2captcha-api) can help: ``` import time from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys driver.get(product_url) time.sleep(2) # Allow time for captcha to load if present # Check for captcha if "Enter the characters you see below" in driver.page_source: captcha_input = driver.find_element(By.ID, "captchacharacters") captcha_input.send_keys("solved_captcha_value") # Use a captcha-solving service here captcha_input.send_keys(Keys.RETURN) ``` ### Step 8: Storing Data Finally, save the extracted data into a structured format. Use Pandas for ease: ``` import pandas as pd data = [] for product in products: title = product.h2.text.strip() price = product.find("span", "a-price-whole") if price: price = price.text.strip() rating = product.find("span", "a-icon-alt") if rating: rating = rating.text.strip() data.append({"Title": title, "Price": price, "Rating": rating}) df = pd.DataFrame(data) df.to_csv("amazon_products.csv", index=False) ``` ## Challenges and Solutions ### Anti-Scraping Mechanisms Amazon's anti-scraping measures include IP blocking, captchas, and [dynamic content loading](https://techkluster.com/javascript/dynamic-content-loading/). Mitigate these by using rotating proxies, integrating captcha-solving services, and employing browser automation. ### Legal Consideration Scraping Amazon's data may violate their terms of service. Always check the legal implications and consider using Amazon's official APIs for data access. ### Data Accuracy Dynamic pricing and frequent content updates can lead to data inconsistency. Regularly update your scraping scripts and validate the data to maintain accuracy. ### Efficiency Scraping large volumes of data can be resource-intensive. Optimize your code for efficiency, use asynchronous requests where possible, and consider distributed scraping to handle large-scale tasks. 
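On the "asynchronous requests" point above, the pattern looks like this. A hedged sketch using only the standard library: `fetch_page` is a stand-in for a real async HTTP call (in practice you would use an async client such as aiohttp), and the semaphore bounds how many requests are in flight so the scraper is fast without hammering the site:

```python
import asyncio

# Sketch only: fetch_page simulates a network call so the example is
# self-contained; swap in a real async HTTP client for actual scraping.
async def fetch_page(url):
    await asyncio.sleep(0.01)  # placeholder for network latency
    return f"<html>{url}</html>"

async def scrape_all(urls, max_concurrency=5):
    sem = asyncio.Semaphore(max_concurrency)

    async def bounded_fetch(url):
        async with sem:  # at most max_concurrency fetches in flight
            return await fetch_page(url)

    return await asyncio.gather(*(bounded_fetch(u) for u in urls))

urls = [f"https://www.amazon.com/s?k=laptops&page={i}" for i in range(1, 4)]
pages = asyncio.run(scrape_all(urls))
```

The same bounded-concurrency idea extends to distributed scraping: each worker gets a slice of the URL list and its own concurrency cap.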
## Conclusion Scraping Amazon requires a blend of technical prowess, strategic planning, and ethical consideration. By understanding the platform's structure, using the right tools, and addressing potential challenges, you can extract valuable data while navigating the complexities of Amazon's anti-scraping measures. Always stay informed about legal implications and strive for responsible scraping practices.
ionegarza
1,907,194
Low-Carb Frozen Meals for Diabetics
In the quest for holistic wellness, understanding the intricate links between various health aspects...
0
2024-07-01T06:10:04
https://dev.to/rana_nayab_3e9fb133c75796/low-carb-frozen-meals-for-diabetics-17pn
webdev, programming, tutorial
<p><span style="color:#0d0d0d;font-size:12pt;">In the quest for holistic wellness, understanding the intricate links between various health aspects is vital. From addressing urinary tract infections (UTIs) to managing chronic diseases and embracing weight loss, Prime Health Services plays a pivotal role in guiding individuals toward healthier lifestyles. Let&apos;s delve into each area to grasp its significance and explore their interconnectedness in the journey to well-being.</span></p> <h3><strong><span style="color:#0d0d0d;font-size:16.5pt;">Decoding UTIs and Headaches</span></strong></h3> <p><span style="color:#0d0d0d;font-size:12pt;">Urinary tract infections (see&nbsp;</span><a href="https://primehealthofnj.com/can-uti-cause-headache/"><u><span style="color:#1155cc;font-size:10.5pt;">can a UTI cause a headache</span></u></a><span style="color:#0d0d0d;font-size:12pt;">) are common bacterial infections affecting millions globally. While their symptoms typically manifest in the urinary system, including frequent urination and a burning sensation, UTIs can also lead to unexpected side effects like headaches. But how does this occur?</span></p> <p><span style="color:#0d0d0d;font-size:12pt;">UTIs trigger an inflammatory response in the body, releasing chemicals that may affect the nervous system, potentially causing headaches. Additionally, the discomfort from UTIs can induce stress, a known headache trigger. 
Therefore, addressing UTIs promptly not only relieves urinary symptoms but also helps alleviate associated complications like headaches.</span></p> <h3><strong><span style="color:#0d0d0d;font-size:16.5pt;">Guiding Weight Loss Management for Overall Wellness</span></strong></h3> <p><span style="color:#0d0d0d;font-size:12pt;">Weight management is a multifaceted journey involving dietary habits, physical activity, mindset, and sometimes medical interventions.&nbsp;</span><a href="https://primehealthofnj.com/"><u><span style="color:#1155cc;font-size:10.5pt;">prime health services</span></u></a><span style="color:#0d0d0d;font-size:12pt;">&nbsp;recognizes that sustainable weight loss isn&apos;t just about shedding pounds but fostering overall well-being. By offering personalized strategies tailored to individual needs, Prime Health Services empowers individuals to embark on a transformative path toward a healthier weight.</span></p> <p><span style="color:#0d0d0d;font-size:12pt;">Moreover, weight loss isn&apos;t solely about appearance; it profoundly impacts health, especially in reducing the risk of chronic diseases like diabetes and cardiovascular issues. By incorporating evidence-based practices, Prime Health Services guides individuals to achieve and maintain a healthy weight, laying the foundation for a vibrant life.</span></p> <h3><strong><span style="color:#0d0d0d;font-size:16.5pt;">Comprehensive Chronic Disease Management</span></strong></h3> <p><span style="color:#0d0d0d;font-size:12pt;">Chronic diseases, with their long duration and slow progression, require proactive management strategies. 
From&nbsp;</span><a href="https://primehealthofnj.com/services/chronic-disease-management/"><u><span style="color:#1155cc;font-size:10.5pt;">Chronic Disease Management</span></u></a><span style="color:#0d0d0d;font-size:12pt;">&nbsp;to autoimmune disorders, Prime Health Services adopts a comprehensive approach to chronic disease management.</span></p> <p><span style="color:#0d0d0d;font-size:12pt;">Through regular monitoring, lifestyle adjustments, medication adherence, and education, Prime Health Services aims to optimize disease control and enhance the quality of life for those living with chronic conditions. By fostering a collaborative partnership between healthcare providers and patients, Prime Health Services empowers individuals to take control of their health journey and thrive despite chronic health challenges.</span></p> <h3><strong><span style="color:#0d0d0d;font-size:16.5pt;">The Synergy of Prime Health Services in Holistic Well-being</span></strong></h3> <p><span style="color:#0d0d0d;font-size:12pt;">While distinct, UTI management,&nbsp;</span><a href="https://primehealthofnj.com/services/weight-loss-management-clinic/"><u><span style="color:#1155cc;font-size:10.5pt;">Weight Loss Management</span></u></a><span style="color:#0d0d0d;font-size:12pt;">, and chronic disease management intersect within Prime Health Services&apos; holistic framework. 
By promptly addressing UTIs, the service not only relieves immediate symptoms but also helps mitigate complications like headaches, promoting overall well-being.</span></p> <p><br></p> <p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8b84termnsxl1thm6b3q.jpeg" alt="Image description"></p> <p><span style="color:#0d0d0d;font-size:12pt;">In essence,&nbsp;</span><a href="https://primehealthofnj.com/"><u><span style="color:#1155cc;font-size:10.5pt;">prime health services</span></u></a><span style="color:#0d0d0d;font-size:12pt;">&nbsp;serves as a guide and support system in the pursuit of optimal health and well-being. By addressing UTIs promptly, navigating weight loss management, and managing chronic diseases comprehensively, Prime Health Services empowers individuals to thrive and embrace a life of vitality and fulfillment.</span></p> <p><br></p> <p><span style="color:#0d0d0d;font-size:12pt;">In the journey towards holistic wellness, Prime Health Services stands as a beacon of guidance, offering comprehensive support in three critical areas: UTI management, weight loss, and chronic disease care. Let&apos;s delve into each aspect to understand its importance and how they intertwine within the framework of Prime Health Services.</span></p> <h3><strong><span style="color:#0d0d0d;font-size:16.5pt;">Understanding UTIs and Associated Symptoms</span></strong></h3> <p><span style="color:#0d0d0d;font-size:12pt;">Urinary Tract Infections (UTIs) are common bacterial infections that affect millions worldwide. While their primary symptoms typically revolve around the urinary system, such as frequent urination and discomfort, UTIs (see&nbsp;</span><a href="https://primehealthofnj.com/can-uti-cause-headache/"><u><span style="color:#1155cc;font-size:10.5pt;">can a UTI cause a headache</span></u></a><span style="color:#0d0d0d;font-size:12pt;">) can also lead to secondary effects like headaches. 
This occurs due to the inflammatory response triggered by UTIs, potentially impacting the nervous system and causing headaches. Addressing UTIs promptly not only relieves immediate symptoms but also helps prevent complications like headaches, emphasizing the importance of swift management.</span></p> <h3><strong><span style="color:#0d0d0d;font-size:16.5pt;">Guiding Sustainable Weight Loss for Overall Health</span></strong></h3> <p><span style="color:#0d0d0d;font-size:12pt;">Weight management is a multifaceted journey encompassing dietary habits, physical activity, and mental well-being. Prime Health Services understands that sustainable weight loss goes beyond mere numbers on a scale; it&apos;s about fostering overall health and vitality. By offering personalized strategies tailored to individual needs, Prime Health Services empowers individuals to embark on a transformative path towards a healthier weight. Moreover, achieving and maintaining a healthy&nbsp;</span><a href="https://primehealthofnj.com/services/weight-loss-management-clinic/"><u><span style="color:#1155cc;font-size:10.5pt;">Weight Loss Management</span></u></a><span style="color:#0d0d0d;font-size:12pt;">&nbsp;significantly reduces the risk of chronic diseases, highlighting the importance of weight management in long-term health.</span></p> <h3><strong><span style="color:#0d0d0d;font-size:16.5pt;">The Synergy of Prime Health Services: A Path to Wellness</span></strong></h3> <p><span style="color:#0d0d0d;font-size:12pt;">While distinct, the services offered by Prime Health Services intersect seamlessly, contributing to a holistic approach to wellness. Addressing UTIs promptly not only relieves immediate discomfort but also prevents complications like headaches, promoting overall well-being. 
Simultaneously, guiding individuals in sustainable weight loss not only improves physical health but also enhances mental and emotional well-being, reducing the risk of&nbsp;</span><a href="https://primehealthofnj.com/services/chronic-disease-management/"><u><span style="color:#1155cc;font-size:10.5pt;">Chronic Disease Management</span></u></a><span style="color:#0d0d0d;font-size:12pt;">&nbsp;in the long run. Furthermore, by providing comprehensive care for chronic conditions, Prime Health Services empowers individuals to take control of their health journey and live life to the fullest.</span></p> <h3><strong><span style="color:#0d0d0d;font-size:16.5pt;">Conclusion: Empowering Health and Well-being</span></strong></h3> <p><span style="color:#0d0d0d;font-size:12pt;">In conclusion, Prime Health Services plays a pivotal role in promoting holistic wellness by addressing UTIs, guiding sustainable weight loss, and providing comprehensive care for chronic conditions. By emphasizing proactive management, personalized care, and patient empowerment, Prime Health Services ensures that individuals receive the support they need to achieve optimal health and well-being. With Prime Health Services as a trusted partner, individuals can embark on a journey towards a healthier, happier life.</span></p> <p><br></p> <p><br></p> <p><br></p>
rana_nayab_3e9fb133c75796
1,907,193
can anyone help me to implement cloud data protection for 365
A post by sanjay kumar
0
2024-07-01T06:10:02
https://dev.to/sanjay2000/can-anyone-help-me-to-implement-cloud-data-protection-for-365-1c51
outlook365, help
sanjay2000
1,905,504
5 engineering interview hints
Today, I want to share a few quick hints to help you frame your job search preparation and better...
0
2024-07-01T06:06:23
https://dev.to/titovmx/5-engineering-interview-hints-3051
softwareengineering, interview, career
Today, I want to share a few quick hints to help you frame your job search preparation and better understand interviewers' expectations. Recently, I participated in a [podcast (in Russian)](https://www.youtube.com/live/NT3bAtdBcGg?si=pvC32P1aeQAmUCUq) devoted to frontend interviews and preparation, where we discussed each type of interview in detail. Despite their differences, these interviews share many common elements. I have over 10 years of engineering experience. I've interviewed candidates at various levels and been through the interview process myself. Last year, I navigated the job market and understand how challenging it is, even for senior engineers, especially if you are located outside the US or EU and looking for relocation and remote options. So let’s go to the most interesting part. ## 1. Minify feedback loop I often see engineers preparing for a job search by trying to fill every gap in their knowledge. They read computer science books, take courses, and experiment with unfamiliar frameworks and libraries. While continuous learning is great, it’s not the most effective approach for job searching. You don’t need to know everything to pass an interview. Interviews have their own rules, and it is beneficial to practice these specific skills as early as possible. Mock interviews are invaluable for practicing and getting feedback on your answers, communication, and the actual knowledge gaps you need to address. You can ask friends, ex-colleagues, or people in your network to conduct different types of interviews for you. Another option is using online services like [pramp.com](http://pramp.com/), where people interview each other. Eventually, you can also take on the role of the interviewer to gain insight into what signals are expected from candidates. I also recommend investigating the job market from day one and creating two lists. The first list should include top-priority companies where you really want to work. 
The second list should consist of less interesting options. Start with the second list to get a feel for the process, validate your CV, and refine your interview answers. This way, you can gather valuable information about potential questions without being too disappointed if things don't go as planned. So get quick feedback on your readiness and start real interviews early, so you won't be frustrated, after half a year of technical preparation, to find that interviews differ from what you expected. ## 2. Focus on communication As I mentioned earlier, interviews have their own rules. The process is not perfect, and interviewers have very limited time to assess your skills and level. This is why practicing communication is crucial. Interviewers look for signals about the tasks you've completed, the scope of your work, whether you just wrote code or organized large projects, or resolved company-level issues. It is important to focus on the value you have delivered. A helpful framework for describing your experience is STAR: describe the project **s**ituation, the **t**ask you needed to complete, the **a**ctions you performed, and the final **r**esult. For technical interviews, interviewers expect not only specific knowledge but also problem-solving skills, the ability to refine requirements, recognize task constraints, clearly deliver solutions, and test them. You will need to articulate your experience and approach interview tasks as real problems you would solve in your usual responsibilities. I strongly recommend speaking out loud during your tech interviews, whether it is live coding challenges, algorithmic sections, or system design interviews. Demonstrate your thought process and clearly explain your solutions. You may not always have enough time to complete the tasks, but discussing your approach helps the interviewer understand that you know how to solve it. Remember, the interview is a dialogue, not a monologue. 
Always start by clarifying the requirements to ensure you understand them correctly. For coding and algorithmic tasks, explain your approach or write the algorithm in pseudocode. For system design, begin with a high-level overview before diving into details. If you go off track or encounter difficulties, the interviewer can assist you if they understand your current approach. The best way to gauge how well you communicate your knowledge and skills is again to get feedback from mock and real interviews. ## 3. Prepare for specific environment The specific challenge of interviews is that you will need to solve problems in an unfamiliar environment. Previously, you would write solutions on paper or a whiteboard, but now most interviews are remote. The coding and algorithmic sections are usually conducted using online editors with limited syntax highlighting and code suggestions. You must be comfortable writing code in your chosen language, stay fluent, and be able to debug and test your solutions. Practice on platforms like [leetcode](https://leetcode.com/), [CodePen](https://codepen.io/) or [CodeSandbox](https://codesandbox.io/) to solve algorithmic tasks and code challenges. System design interviews typically require you to write down requirements, draw diagrams, and create data models. Practice on such platforms to avoid wasting valuable time figuring out how to draw your solution. I recommend Excalidraw, a minimalistic and quick-to-go tool. ## 4. Show your interest Interviews usually include 5-10 minutes at the end for you to ask questions about the company. Remember, the interview is bidirectional - you're also evaluating them as your potential workplace. Research the company, understand what they do, and show your interest by asking insightful questions. For example, identify their competitors, analyze the company's unique selling points, and ask how they achieve these and what challenges they face. Your questions can also signal your level of expertise. 
Ask technical questions about their engineering processes and solutions, as well as their product strategies. It's best to prepare these questions in advance. ## 5. Stay positive The interview process can be tough and feel like a full-time job. It’s easy to get frustrated by questions and challenges you don't like, but staying positive is crucial. Everyone involved in hiring is trying to find the best candidate, and they have limited time to assess each one. Your best strategy is to remain positive, empathetic, and friendly, share well-prepared stories about your experience, and send the right signals during technical interviews. Do not forget to rest properly and avoid burnout during this process. --- Hope you find it useful! Share in the comments what you would recommend to others from your experience. Good luck with your upcoming interviews!
titovmx
1,901,318
How to - Process CSV in Power Automate
Its still crazy to me that Microsoft did not create an out of the box action to process a csv (or...
22,764
2024-07-01T06:01:14
https://dev.to/wyattdave/how-to-process-csv-in-power-automate-535f
powerautomate, powerplatform, lowcode, rpa
It's still crazy to me that Microsoft did not create an out-of-the-box action to process a CSV (or .xls files too 😣 ). Fortunately there are a few ways to do it, 4 in fact, some cool, some easy, some crazy, but all interesting (well I think so 😎) so I want to show them: - Plumsail - Flow - Office Script - DataFlow ## Plumsail (Expensive way) Plumsail offers a paid connector to process any CSV [plumsail process csv](https://plumsail.com/docs/actions/v1.x/flow/how-tos/sharepoint/actions-read-a-csv-file-and-bulk-generate-documents.html), but obviously it has a cost and that's just boring, so quickly moving on. ## Flow (Crazy way) So a CSV is simply one long string with delimiters for rows and columns. The rows are identified by \r\n (for most, though some are just \n) and the columns by a comma (,) (comma-separated values, it's in the name 😎 ). So our flow is going to split into rows and then into columns, but we have a few issues to fix: - Getting Column Headers - Dealing with commas within values The best way to show is to walk through the flow: First we declare a 'few' variables ![variables](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/85sjh50dtwbhjmbz5kfz.png) - aHeaders = this is the array we will fill with our column headers - rIndex = records the current row we are working on - cIndex = records the current column we are working on - aRows = array that saves the transformed row data - oRow = object we use to build the row before adding it to aRows After the variables we need to grab the CSV and split the rows. We use the below expression: ``` split(outputs('Get_file_content')?['body'],decodeUriComponent('%0D%0A')) ``` _%0D%0A is the encoded version of \r\n_ ![flow split and loop](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gg9u6yk0ryqhtua569bn.png) We use a Do until that checks against the rIndex to know when all rows are complete (there is always a blank row at the end so we subtract 1 row). 
Also this could easily be an Apply_to_each, I just have a soft spot for Do_Untils. We use the iteration index to check if it's the first row; if it is, we split by a comma and then add to the aHeaders array. ![create header array](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o1z83fgxbiyc46dhm9tq.png) For the main rows it is a little more complex. We are going to split the row again and then loop over the new column array. For each column we use the addProperty expression, using the column index to find the column name (from aHeaders) and the column value. ``` addProperty(variables('oRow'), variables('aHeaders')[iterationIndexes('Do_until_columns')] , outputs('SplitColumns')[iterationIndexes('Do_until_columns')] ) ``` When we addProperty we actually create a copy of the original object and add the property. So we need to now update the oRow variable with the value from the compose. ![loop over rows](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m558sd3rkjhbfzcqp2l5.png) Finally we append the oRow object to the aRows array. **Input** ![csv in excel](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dy7lw4snmci72rw1d6nb.png) **Output** ![csv converted to json](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wu7um1tq5utbdiapnd6d.png) But did you spot the deliberate mistake.... currently there is no way I know of to process CSVs that contain commas inside a value. Back in the day Power Automate used to return the CSV like this: `""David","1","TRUE""` so we could split on '",' but now it returns `"David,1,TRUE"`. So there is no way to distinguish a comma in a value from a comma separator. 
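For completeness: when the raw file itself keeps the standard quoting convention (`"David, Jr.",1,TRUE`), a regex can tell value commas from separator commas. This standalone sketch (`splitCsvRow` is a hypothetical helper, not part of the flow) uses the same regex as the Office Script in the next section:

```javascript
// Sketch only: splits one CSV row while respecting quoted fields.
const csvRegex = /(?:,|\n|^)("(?:(?:"")*[^"]*)*"|[^",\n]*|(?:\n|$))/g;

function splitCsvRow(row) {
  return (row.match(csvRegex) || []).map((cell) => {
    // each match drags along its leading separator comma: strip it
    let v = cell.charAt(0) === "," ? cell.slice(1) : cell;
    // unwrap quoted fields and un-double escaped quotes
    if (v.startsWith('"') && v.endsWith('"')) {
      v = v.slice(1, -1).replace(/""/g, '"');
    }
    return v;
  });
}
```

So the limitation is really in the flow expression language, not in CSV itself: `split(..., ',')` can't respect quotes, but a regex can.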
## Office Script (Cool way) We love a bit of pro-code, and good news: Microsoft has made it for us already ([https://learn.microsoft.com/en-us/office/dev/scripts/resources/samples/convert-csv](https://learn.microsoft.com/en-us/office/dev/scripts/resources/samples/convert-csv)), but that only converts it to Excel; what if you want to convert it to a JSON to use in your flow? I created a script a while back in this blog [5 Scripts every Power Automate Developer Should Know](https://dev.to/wyattdave/5-scripts-every-power-automate-developer-should-know-nep), but that requires you to hardcode in the columns; what if you want them to be dynamic (i.e. work for every CSV)? The problem is Office Scripts are based on TypeScript (so every object's structure has to be declared), and they have banned `any` (so we can't even use TypeScript's own workaround). Fortunately there is a way, and that's to build our own JSON array as a string and get Power Automate to convert it back to a JSON. ``` function main(workbook: ExcelScript.Workbook, csv: string) { let sJson: string = "["; let aHeaders: string[] = [] csv = csv.replace(/\r/g, ""); let rows = csv.split("\n"); const csvRegex = /(?:,|\n|^)("(?:(?:"")*[^"]*)*"|[^",\n]*|(?:\n|$))/g rows.forEach((value, index) => { let rIndex=index; if (value.length > 0) { let row = value.match(csvRegex); if (row[0].charAt(0) === ',') { row.unshift(""); } if (index != 0) { sJson += "{" } row.forEach((cell, index) => { row[index] = cell.indexOf(",") === 0 ? cell.substr(1) : cell; if (rIndex == 0) { aHeaders.push(row[index] .toString()) } else { if (Number(row[index])){ sJson += '"' + aHeaders[index] + '":' + row[index] + ',' } else if (row[index] == "TRUE" || row[index] == "FALSE"){ sJson += '"' + aHeaders[index] + '":' + row[index].toLowerCase() + ',' }else{ sJson += '"' + aHeaders[index] + '":"' + row[index].trim() + '",' } } }); if (index != 0) { sJson = sJson.substring(0, sJson.length - 1); sJson += "}," } } }); sJson = sJson.substring(0, sJson.length - 1); sJson += "]"; return (sJson); } ``` The split is based on Microsoft's, but we change a few things: First we declare a string variable called sJson and set it to '[', opening our array. If it's the first row (i.e. headers) we add them to a separate array called aHeaders. On the next row we open our object `if (index != 0) { sJson += "{" }` and then loop over each column, adding the corresponding value from the aHeaders array: ` sJson += '"' + aHeaders[index] + '":"' + row[index].trim() + '",'`. To make sure we convert any numbers/booleans from strings we add a little logic, so all together for each row we end up with: ``` if (rIndex == 0) { aHeaders.push(row[index] .toString()) } else { if (Number(row[index])){ sJson += '"' + aHeaders[index] + '":' + row[index] + ',' } else if (row[index] == "TRUE" || row[index] == "FALSE"){ sJson += '"' + aHeaders[index] + '":' + row[index].toLowerCase() + ',' }else{ sJson += '"' + aHeaders[index] + '":"' + row[index].trim() + '",' } } ``` Finally we do the closing: we remove the last comma ` sJson = sJson.substring(0, sJson.length - 1);`, and then close the object `sJson += "},"`. 
After all the rows have been processed we repeat to close the array, but swap out '}' for ']': `sJson += "]";`

![office script convert js](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tfc224o05fa1mv1nyxii.png)

_Quick note: the Office Script returns the array as a string, so we use the json() expression to convert it back to a JSON_

![convert to json](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iepn1u3s98grlcae1lns.png)

## Dataflows (Easy way)

Dataflows are cool and still underused (it didn't help that for a long time they were not solution aware). I have done a full blog a while back [here](https://dev.to/wyattdave/dataflows-hidden-gem-for-power-automate-k0g), but in a nutshell it's Power Query (the exact same you see in Excel and Power BI). So we can use a lovely UI to convert our text file (as that's what CSVs really are) into proper data. To create one you head over to make.powerapps.com, select Dataflows from the left menu, then create.

![dataflows](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sn71iy1l0s08kk3kdlhr.png)

Select the text/csv file type (see, I told you so).

![file type](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0y3kx7g87gwz9u7gbhs3.png)

Then create some connections and select the file. As it's a CSV, Dataflows automatically transforms the data into rows/columns and sets types.

![auto extra csv](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ddvqdqdm8hi8ckgqe8ge.png)

You can then add filters and calculated columns if you like (see my previous blog for how), but as we are just grabbing the CSV we can leave it as is and hit next.

![transform data](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a6f16a4vt7rozfz1q01s.png)

Next we have to decide where we store the data, and this is the big drawback: we have to save to Dataverse, so it's premium functionality.
![save to dataverse](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w751ang8nyisyjvun1a4.png)

We can use an existing table or create a new table, and then we can download and use the data as needed. Dataflows give us a few options: we can run on a Power Automate trigger (when the file gets updated), or we can schedule the Dataflow to update, and then when it finishes use that to trigger a flow.

![dataflow flow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u4veeh4i2py09uy9te5j.png)

_This example waits with a timer for the Dataflow to finish and lists the table. It's not the way to go as it's a hard-coded wait, but you get the idea 😎_

---

Hopefully one of the solutions will work for you (I'm also looking at creating a low-code plugin, and will update if I do). All the flows can be found [here](https://github.com/wyattdave/Power-Platform/tree/main/Power%20Automate%20Artifacts) and the script [here](https://github.com/wyattdave/Power-Platform/tree/main/Office%20Scripts) to download and look at.
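If you want to sanity-check the Office Script's behaviour outside of Power Automate, the same header-mapping and number/boolean coercion idea can be sketched in plain Python using only the standard library (a standalone illustration, not part of the flow itself):

```python
import csv
import io
import json

def csv_to_json(text: str) -> str:
    """Convert a CSV string into a JSON array string, mirroring the Office Script:
    the first row becomes the keys, numbers stay numeric, TRUE/FALSE become booleans."""
    rows = list(csv.reader(io.StringIO(text)))
    headers, records = rows[0], []
    for row in rows[1:]:
        record = {}
        for header, cell in zip(headers, row):
            cell = cell.strip()
            if cell in ("TRUE", "FALSE"):
                record[header] = (cell == "TRUE")
            else:
                try:  # keep numbers numeric, like the Number() check in the script
                    record[header] = float(cell) if "." in cell else int(cell)
                except ValueError:
                    record[header] = cell
        records.append(record)
    return json.dumps(records)

print(csv_to_json("name,age,active\nDave,41,TRUE"))
# [{"name": "Dave", "age": 41, "active": true}]
```

Python's `csv.reader` also handles quoted fields with embedded commas, which is the same edge case the regex in the Office Script is working around.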
wyattdave
1,907,190
Work Cover Treatment Sydney | Comprehensive Care at The Foot and Ankle Clinic of Australia
In Sydney, workplace injuries can disrupt not only your job but also your daily life. When it comes...
0
2024-07-01T06:00:31
https://dev.to/thefootankleclinic/work-cover-treatment-sydney-comprehensive-care-at-the-foot-and-ankle-clinic-of-australia-9md
In Sydney, workplace injuries can disrupt not only your job but also your daily life. When it comes to injuries affecting your feet or ankles, seeking specialized care is crucial for recovery and returning to work. At [The Foot and Ankle Clinic of Australia](https://thefaca.com.au/), we understand the importance of timely and effective treatment under Work Cover arrangements, ensuring you receive the best possible care to regain mobility and functionality.

**Understanding [Work Cover Treatment](https://thefaca.com.au/workcover/)**

Work Cover is designed to support individuals who have sustained injuries in the workplace, providing access to medical treatment, rehabilitation, and financial assistance during recovery. At our clinic in Sydney, we specialize in treating a wide range of work-related foot and ankle injuries, including:

- Sprains and Strains: Common injuries that can result from slips, falls, or overexertion in the workplace.
- Fractures: Impact injuries or accidents that lead to fractures in the feet or ankles.
- Tendonitis and Bursitis: Inflammation of the tendons or bursae due to repetitive motions or strain.
- Plantar Fasciitis: Pain and inflammation in the heel caused by overuse or improper footwear.

**Our Approach to Work Cover Treatment**

When you visit The Foot and Ankle Clinic of Australia for [Work Cover treatment](https://thefaca.com.au/workcover/), you can expect personalized care tailored to your specific injury and recovery needs. Our experienced podiatrists and foot specialists work closely with you to develop a comprehensive treatment plan, which may include:

- Initial Assessment: Thorough evaluation of your injury through clinical examination and diagnostic tests to determine the extent of damage.
- Treatment Modalities: Utilizing advanced treatments such as immobilization, physical therapy, orthotic devices, and in severe cases, surgical intervention to ensure optimal recovery.
- Rehabilitation: Designing rehabilitation programs focused on restoring strength, flexibility, and function to the affected foot or ankle, with the goal of facilitating a safe return to work.
- Educational Support: Providing guidance on injury prevention strategies and ergonomic practices to reduce the risk of future workplace injuries.

**Why Choose The Foot and Ankle Clinic of Australia?**

At our Sydney clinic, we combine expertise with compassion, ensuring that every patient receives the highest standard of care throughout their Work Cover treatment journey. Here’s what sets us apart:

- Specialized Expertise: Our podiatrists specialize in treating foot and ankle injuries, backed by years of experience in Work Cover cases.
- Patient-Centered Approach: We prioritize your well-being, ensuring open communication, personalized attention, and comprehensive support at every step.
- Collaborative Care: Working closely with Work Cover providers and occupational health teams to streamline the claims process and facilitate seamless treatment.

**Book Your Appointment Today**

If you’ve been injured at work and require specialized foot or ankle treatment under Work Cover, don’t hesitate to contact [The Foot and Ankle Clinic of Australia](https://thefaca.com.au/) in Sydney. Our dedicated team is here to help you recover effectively and regain your quality of life. Schedule your appointment today to take the first step towards healing and returning to work with confidence.
thefootankleclinic
1,907,189
Best Practices for Using Middleware in ASP.NET Core Web API for Exception Handling, Authentication, and Error Logging
QuestionForGroup Hi everyone, I'm working on an ASP.NET Core Web API project and I am...
0
2024-07-01T05:59:57
https://dev.to/abdullah_sameer/best-practices-for-using-middleware-in-aspnet-core-web-api-for-exception-handling-authentication-and-error-logging-1d5a
#QuestionForGroup

Hi everyone, I'm working on an ASP.NET Core Web API project and I am trying to implement some global functionalities using middleware. Specifically, I want to handle the following:

- Global Exception Handling: Catching unhandled exceptions and returning standardized error responses.
- Authentication: Ensuring all requests are authenticated before processing.
- Global Error Logging: Logging all errors that occur during the request processing pipeline.

My questions are:

1. Is it a recommended practice to use middleware for global exception handling in ASP.NET Core Web API? If so, are there any specific patterns or libraries that are commonly used?
2. Is middleware the best approach for implementing authentication, or should this be handled differently?
3. For global error logging, is middleware a suitable solution, or are there other preferred methods or tools for this purpose?

I would appreciate any insights or recommendations on the best practices for these tasks, particularly regarding the use of middleware vs other methods. Thanks in advance! ❤️
abdullah_sameer
1,907,186
Exploring Google’s Gemma-2 Model: The Future of Machine Learning and Application Integration
In recent developments, Google has unveiled the Gemma-2 model, a significant step forward in the...
0
2024-07-01T05:56:59
https://dev.to/trinhcamminh/exploring-googles-gemma-2-model-the-future-of-machine-learning-and-application-integration-4pmj
machinelearning, model, ai, gemma
In recent developments, Google has unveiled the Gemma-2 model, a significant step forward in the field of machine learning. This blog post will define what Gemma is, distinguish it from Google’s previous Gemini model, and explore the practical applications of Gemma in real-world tasks.

![Gemma-Google](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hucbcxi6nscbdzqgo9bu.png)

## 👉 What is the Gemma Model?

The Gemma-2 model is the latest innovation in Google’s suite of machine learning tools. Designed to enhance natural language understanding and generation, Gemma-2 utilizes advanced neural network architectures to deliver highly accurate and contextually relevant outputs. It is built on the principles of deep learning and leverages vast amounts of data to continually improve its performance.

Gemma is currently available in two sizes: 9B and 27B (parameter sizes), and each model has two variants, base (pre-trained) and instruction-tuned. Google has filtered out personal information and other sensitive data from training sets to make the pre-trained models safe and reliable.

![Gemma evaluate](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0gwnlsmsjdsdnktycvj4.png)

## 👉 Built for developers and researchers

Getting started with Gemma is straightforward due to its integration with popular tools like Hugging Face Transformers, Kaggle, NVIDIA NeMo, and MaxText. Deployment on Google Cloud is also simple through Vertex AI and Google Kubernetes Engine (GKE). Additionally, Gemma is optimized for AI hardware platforms, including NVIDIA GPUs and Google Cloud TPUs.

## 👉 Gemma vs. Gemini

Gemini is available to end customers through Web, Android, and iOS apps, while Gemma models are for developers. Developers can access Gemini via APIs or Vertex AI, making it a closed model. Gemma, being open-source, is accessible to developers, researchers, and businesses for experimentation and integration (through HuggingFace, Kaggle, …).
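For developers picking up the open weights directly, one practical detail worth knowing is that the instruction-tuned Gemma variants expect a turn-based prompt format. The control tokens below follow the format commonly documented for Gemma; treat them as an assumption and verify against the current model card. A tiny Python helper makes the structure explicit:

```python
def build_gemma_prompt(user_message: str) -> str:
    """Wrap a single user message in Gemma's turn-based chat template
    (token names assumed from Google's published format)."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(build_gemma_prompt("Summarize Gemma 2 in one sentence."))
```

In practice, Hugging Face Transformers' `tokenizer.apply_chat_template` builds this string for you, so the helper is mainly useful for understanding what the tokenizer produces.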
## 👉 Additional Information

The company also plans to release more variants in the future as they expand the Gemma family, such as CodeGemma, RecurrentGemma and PaliGemma — each offering unique capabilities for different AI tasks and easily accessible through integrations with partners like Hugging Face, NVIDIA and Ollama.

## 👉 How to use and fine-tune Gemma with your own applications

### 1️⃣ Setup

Select the Colab runtime. To complete this tutorial, you’ll need a Colab runtime with sufficient resources to run the Gemma model. In this case, you can use a T4 GPU:

1. In the upper-right of the Colab window, select ▾ (Additional connection options).
2. Select Change runtime type.
3. Under Hardware accelerator, select T4 GPU.

### 2️⃣ Gemma setup

Before we dive into the tutorial, let’s get you set up with Gemma:

- Hugging Face Account: If you don’t already have one, you can create a free Hugging Face account by clicking here.
- Gemma Model Access: Head over to the Gemma model page and accept the usage conditions.
- Colab with Gemma Power: For this tutorial, you’ll need a Colab runtime with enough resources to handle the Gemma 2B model. Choose an appropriate runtime when starting your Colab session.
- Hugging Face Token: Generate a Hugging Face access token (preferably with write permission) by clicking here. You'll need this token later in the tutorial.

### 3️⃣ Configure your HF token

Add your Hugging Face token to the Colab Secrets manager to securely store it.

### 4️⃣ Instantiate and fine-tune the model

The code for instantiating and fine-tuning the model is extensive and is therefore included in this [notebook](https://colab.research.google.com/drive/1-gL7j2mORaKRlYnX3zGgTmfFhEDzzb7O?usp=sharing). Please refer to it for detailed instructions and reference.

## 👉 Conclusion

The full [notebook](https://colab.research.google.com/drive/1-gL7j2mORaKRlYnX3zGgTmfFhEDzzb7O?usp=sharing) for this article is available here.
If you want to find more interesting content like this from me, please don’t hesitate to visit my [Portfolio Website](https://minhct.netlify.app/) and [GitHub](https://github.com/TrinhCamMinh). Lastly, if this post helped you stay up-to-date with technology or was useful in anyway, please leave me a 👏. It means a lot to me 🥰. Feel free to connect with me on [LinkedIn](https://www.linkedin.com/in/tr%E1%BB%8Bnh-c%E1%BA%A9m-minh-34b369274/) for more updates and content! ![TrinhCamMinh's logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2vwwu8wqxvt6yhc2w3k0.png)
trinhcamminh
1,907,185
Where Can You Find an Crypto Market Making Bot Development Company?
The world of cryptocurrency trading can be exciting and profitable, but also complex and fast-paced....
0
2024-07-01T05:56:09
https://dev.to/kala12/where-can-you-find-an-crypto-market-making-bot-development-company-1jh0
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9vci0naehq4l1tqrgtk3.jpg)

The world of cryptocurrency trading can be exciting and profitable, but also complex and fast-paced. One way to effectively navigate this environment is to use a crypto market making bot. These automated trading bots help you buy and sell cryptocurrencies efficiently and ensure you always stay ahead of the market. Here is a 10-point guide to developing your own crypto market making bot.

**Understanding Market Making**
Market making involves issuing both buy and sell orders on a crypto exchange to increase liquidity. Market makers profit from the difference between the bid and ask price, called the spread. A crypto trading bot automates this process, making it faster and more efficient.

**Choosing the right exchange**
The first step in developing a market making bot is to choose a crypto exchange that supports API trading. Popular exchanges like Binance, Kraken and Coinbase Pro offer robust APIs that allow your bot to seamlessly interact with their trading platforms.

**Set up API access**
After choosing an exchange, you need to set up API access. This requires generating an API key and secret on the exchange platform. These keys allow your bot to place orders, check balances and securely access market data.

**Choice of programming language**
The choice of programming language depends on your technical knowledge and the requirements of the bot. Python is a popular choice due to its extensive libraries and ease of use. Other languages such as JavaScript, Java and C++ are also suitable.

**Creating Basic Bot Functions**
Basic functions of your market making bot include sending buy and sell orders, tracking market prices and portfolio management. Start coding the basic functions that handle these tasks. Make sure your bot can place orders at the current market price and change those orders when prices change.
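To make the basic-functions step concrete, here is a toy sketch of the quoting logic at the heart of a market maker: place a bid just below and an ask just above the mid price, so the spread is the profit target. The numbers and function names are purely illustrative, not exchange-ready code or trading advice:

```python
def make_quotes(mid_price: float, spread_pct: float, order_size: float) -> dict:
    """Quote both sides of the book around the mid price."""
    half_spread = mid_price * spread_pct / 2
    return {
        "bid": round(mid_price - half_spread, 2),  # buy order below mid
        "ask": round(mid_price + half_spread, 2),  # sell order above mid
        "size": order_size,
    }

# Quote around a 30,000 mid price with a 0.2% spread
print(make_quotes(30_000.0, 0.002, 0.01))
# {'bid': 29970.0, 'ask': 30030.0, 'size': 0.01}
```

In a real bot these prices would feed the exchange's order-placement API, and the quotes would be re-centred whenever the mid price moves.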
**Introduction to Risk Management**
Risk management is essential in trading. To minimize potential losses, your bot should include features such as stop orders. Additionally, setting a maximum amount for each trade and diversifying your portfolio can help spread risk.

**Backtesting the bot**
Before deploying the bot live, it is important to test its performance using historical data. Backtesting involves running your bot's trading strategies against past market data to see how it would perform. This step helps identify potential problems and refine your strategy.

**Bot Deployment**
After thorough testing, you are ready to deploy your bot. Check its performance carefully in the early stages to make sure it works as expected. Make necessary adjustments based on real-time performance and market conditions.

**Constant monitoring and updates**
The cryptocurrency market is dynamic and prices fluctuate rapidly. Constant monitoring of your bot's performance and general market conditions is essential. Regular updates to your bot's algorithm can help it adapt to changing market conditions and stay profitable.

**Ensuring security**
Security is paramount when conducting financial transactions. Use strong encryption and follow cybersecurity best practices to keep your bot and API keys safe. Update your software regularly to protect against new vulnerabilities.

**Conclusion**
Developing a crypto market making bot requires a good understanding of both trading principles and programming. By following these ten steps, you can build a bot that will help you navigate the complex world of cryptocurrency trading efficiently and profitably. Remember that constant learning and adaptation is the key to success in the ever-evolving crypto market.

Visit>>> https://blocksentinels.com/crypto-market-making-bot-development-company

Reach our experts:
Phone +91 8148147362
Email sales@blocksentinels.com
kala12
1,907,184
Why Do You Need Property Management Software for Multifamily Buildings?
Living harmoniously with several families, each with their preferences, likes, and dislikes, is...
0
2024-07-01T05:52:13
https://dev.to/jeya_c64151260df99a02a0d2/why-do-you-need-property-management-software-for-multifamily-buildings-3bmg
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3se8uddafme6tezggujv.png)

Living harmoniously with several families, each with their preferences, likes, and dislikes, is tough. Frequent disputes, misunderstandings, miscommunications, and non-agreements make the lives of the residents a living hell. Property management software for multifamily buildings is an advanced tool that has completely reshaped this landscape, giving residents and property managers peace of mind. In this blog post, we will discuss Property Automate’s cloud-based property management software for multifamily, one of the market-leading solutions, and explore why it’s the preferred choice for small and medium enterprises. So, let’s go.

## What is Property Management Software for Multifamily?

Multifamily property management software is a specialized tool designed to assist owners or property managers in efficiently managing communities, apartments, or multifamily buildings. This software helps in maintaining tenant records, rent roll data, submitting maintenance requests, scheduling inspections and preventive maintenance services, and generating reports on occupancy/vacancy rates, accounting, and financials. Additionally, it streamlines other day-to-day operations.

Property Automate’s PMS for multifamily automates these operations, providing a complete and transparent view of every aspect of your property. The software enhances communication with tenants through portals and notifications, ensuring timely responses to maintenance requests and updates. It also integrates with other essential tools, such as accounting software and CRM systems, to centralize all property management tasks in one platform. By using property management software, you can reduce manual workload, minimize errors, and improve overall property management efficiency, leading to higher tenant satisfaction and better financial performance.

## Why Do You Need Property Automate’s Multifamily Building Management Software?

Dealing with the continuous workflow involving extensive paperwork and digital documents can be overwhelming. Property management software, an integrated platform, can reduce the workload for your team and solve various issues that arise regularly. Let’s explore how Property Automate’s multifamily management software solves these challenges and benefits your business:

**Operational Inefficiency**
Administering property management with manual legacy systems consumes considerable time and effort. Our multifamily PMS automates these tasks, enhancing operational efficiency, reducing costs, improving time management, and letting you focus more on providing an excellent resident experience. It efficiently handles rent collection, maintenance, amenities booking, visitor and parking management, and communication.

**Communication Issues**
The lack of a proper communication channel among property managers and tenants can lead to misunderstandings and disputes. With features like collaboration and discussion forums, our software facilitates clear communication of maintenance updates, policy changes, and event invitations, leading to improved communication.

**Scattered Data**
Missing vital information can result in serious repercussions, hence storing and maintaining all the crucial data in a single place is necessary for the smooth running of the business. The resident directory tool of our software centralizes and maintains an updated repository of residents' data that allows quick access and efficient collaboration.

**Financial Invisibility**
Keeping track of the income and expenses for a multifamily apartment can drive you crazy. Property Automate’s integrated financial and accounting functions simplify budgeting, expense tracking, rent collection, and financial reporting. The software supports various payment methods, ensuring timely and hassle-free rent collection, and provides complete visibility of the revenue flow.

**Delayed Maintenance**
Addressing maintenance issues often involves a million phone calls and paperwork, leading to frustration and wasted time. With our property management app, the residents can raise maintenance requests with just a click, which can be promptly addressed by the maintenance team. You can also schedule regular inspections and preventive maintenance of properties, extending their lifespan, reducing unplanned downtime, and improving resident satisfaction.

**Disputes over Amenity Booking**
Communities with common amenities often face disputes due to the lack of proper booking facilities. With our software, the residents can view the booked slots easily and make their booking based on the available slots. This prevents conflicts among the residents and encourages cooperation and harmony.

**Security Issues**
Security is a significant concern in a multifamily building considering the number of residents and visitors. Our software’s visitor and parking module tracks the foot traffic, ensuring the entry of only authorized individuals. It manages parking spaces by assigning spots to residents, issuing guest permits, and monitoring parking availability.

Managing residential buildings can become smooth and easy with a little help from the right tool. Property Automate’s multifamily [property management software](https://propertyautomate.com/) effectively empowers property managers and owners to handle the time- and energy-consuming tasks, providing them peace of mind and allowing a smooth living experience for the residents. Investing in good software is a choice that guarantees peace of mind and long-term success. So, go for it today!
jeya_c64151260df99a02a0d2
1,907,183
Build Your Own RAG App: A Step-by-Step Guide to Setup LLM locally using Ollama, Python, and ChromaDB
In an era where data privacy is paramount, setting up your own local language model (LLM) provides a...
0
2024-07-01T05:50:03
https://dev.to/nassermaronie/build-your-own-rag-app-a-step-by-step-guide-to-setup-llm-locally-using-ollama-python-and-chromadb-b12
ollama, llm, python, rag
In an era where data privacy is paramount, setting up your own [local language model (LLM)](https://www.cloudflare.com/learning/ai/what-is-large-language-model/) provides a crucial solution for companies and individuals alike. This tutorial is designed to guide you through the process of creating a custom chatbot using [Ollama](https://ollama.com/), [Python 3](https://www.python.org/), and [ChromaDB](https://www.trychroma.com/), all hosted locally on your system. Here are the key reasons why you need this tutorial:

- Full Customization: Hosting your own Retrieval-Augmented Generation (RAG) application locally means you have complete control over the setup and customization. You can fine-tune the model to fit your specific needs without relying on external services.
- Enhanced Privacy: By setting up your LLM model locally, you avoid the risks associated with sending sensitive data over the internet. This is especially important for companies that handle confidential information. Training your model with private data locally ensures that your data stays within your control.
- Data Security: Using third-party LLM models can expose your data to potential breaches and misuse. Local deployment mitigates these risks by keeping your training data, such as PDF documents, within your secure environment.
- Control Over Data Processing: When you host your own LLM, you have the ability to manage and process your data exactly how you want. This includes embedding your private data into your ChromaDB vector store, ensuring that your data processing meets your standards and requirements.
- Independence from Internet Connectivity: Running your chatbot locally means you are not dependent on an internet connection. This guarantees uninterrupted service and access to your chatbot, even in offline scenarios.

This tutorial will empower you to build a robust and secure local chatbot, tailored to your needs, without compromising on privacy or control.
![Fine tuning model](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zp0xm1wsxvuhpucppgnr.jpg)

---

### Retrieval-Augmented Generation (RAG)

[Retrieval-Augmented Generation (RAG)](https://aws.amazon.com/what-is/retrieval-augmented-generation/) is an advanced technique that combines the strengths of information retrieval and text generation to create more accurate and contextually relevant responses. Here's a breakdown of how RAG works and why it's beneficial:

#### What is RAG?

RAG is a hybrid model that enhances the capabilities of language models by incorporating an external knowledge base or document store. The process involves two main components:

- Retrieval: In this phase, the model retrieves relevant documents or pieces of information from an external source, such as a database or a vector store, based on the input query.
- Generation: The retrieved information is then used by a generative language model to produce a coherent and contextually appropriate response.

#### How Does RAG Work?

- Query Input: The user inputs a query or question.
- Document Retrieval: The system uses the query to search an external knowledge base, retrieving the most relevant documents or snippets of information.
- Response Generation: The generative model processes the retrieved information, integrating it with its own knowledge to generate a detailed and accurate response.
- Output: The final response, enriched with specific and relevant details from the knowledge base, is presented to the user.

#### Benefits of RAG

- Enhanced Accuracy: By leveraging external data, RAG models can provide more precise and detailed answers, especially for domain-specific queries.
- Contextual Relevance: The retrieval component ensures that the generated response is grounded in relevant and up-to-date information, improving the overall quality of the response.
- Scalability: RAG systems can be easily scaled to incorporate vast amounts of data, enabling them to handle a wide range of queries and topics.
- Flexibility: These models can be adapted to various domains by simply updating or expanding the external knowledge base, making them highly versatile.

#### Why Use RAG Locally?

- Privacy and Security: Running a RAG model locally ensures that sensitive data remains secure and private, as it does not need to be sent to external servers.
- Customization: You can tailor the retrieval and generation processes to suit your specific needs, including integrating proprietary data sources.
- Independence: A local setup ensures that your system remains operational even without internet connectivity, providing consistent and reliable service.

By setting up a local RAG application with tools like Ollama, Python, and ChromaDB, you can enjoy the benefits of advanced language models while maintaining control over your data and customization options.

![RAG app](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cfdgjmiavkyte6kt5v3b.jpg)

---

### GPU

Running large language models (LLMs) like the ones used in Retrieval-Augmented Generation (RAG) requires significant computational power. One of the key components that enable efficient processing and embedding of data in these models is the Graphics Processing Unit (GPU). Here's why GPUs are essential for this task and how they impact the performance of your local LLM setup:

#### What is a GPU?

A GPU is a specialized processor designed to accelerate the rendering of images and videos. Unlike Central Processing Units (CPUs), which are optimized for sequential processing tasks, GPUs excel at parallel processing. This makes them particularly well-suited for the complex mathematical computations required by machine learning and deep learning models.
#### Why GPUs Matter for LLMs

- Parallel Processing Power: GPUs can handle thousands of operations simultaneously, significantly speeding up tasks such as training and inference in LLMs. This parallelism is crucial for the heavy computational loads associated with processing large datasets and generating responses in real-time.
- Efficiency in Handling Large Models: LLMs like those used in RAG require substantial memory and computational resources. GPUs are equipped with high-bandwidth memory (HBM) and multiple cores, making them capable of managing the large-scale matrix multiplications and tensor operations needed by these models.
- Faster Data Embedding and Retrieval: In a local RAG setup, embedding data into a vector store like ChromaDB and retrieving relevant documents quickly is essential for performance. High-performance GPUs can accelerate these processes, ensuring that your chatbot responds promptly and accurately.
- Improved Training Times: Training an LLM involves adjusting millions (or even billions) of parameters. GPUs can drastically reduce the time required for this training phase compared to CPUs, enabling more frequent updates and refinements to your model.

#### Choosing the Right GPU

When setting up a local LLM, the choice of GPU can significantly impact performance. Here are some factors to consider:

- Memory Capacity: Larger models require more GPU memory. Look for GPUs with higher VRAM (video RAM) to accommodate extensive datasets and model parameters.
- Compute Capability: The more CUDA cores a GPU has, the better it can handle parallel processing tasks. GPUs with higher compute capabilities are more efficient for deep learning tasks.
- Bandwidth: Higher memory bandwidth allows for faster data transfer between the GPU and its memory, improving overall processing speed.

#### Examples of High-Performance GPUs for LLMs

- NVIDIA RTX 3090: Known for its high VRAM (24 GB) and powerful CUDA cores, it's a popular choice for deep learning tasks.
- NVIDIA A100: Designed specifically for AI and machine learning, it offers exceptional performance with large memory capacity and high compute power. - AMD Radeon Pro VII: Another strong contender, with high memory bandwidth and efficient processing capabilities. Investing in a high-performance GPU is crucial for running LLM models locally. It ensures faster data processing, efficient model training, and quick response generation, making your local RAG application more robust and reliable. By leveraging the power of GPUs, you can fully realize the benefits of hosting your own custom chatbot, tailored to your specific needs and data privacy requirements. --- ### Prerequisites Before diving into the setup, ensure you have the following prerequisites in place: - Python 3: Python is a versatile programming language that you'll use to write the code for your RAG app. - ChromaDB: A vector database that will store and manage the embeddings of our data. - Ollama: To download and serve custom LLMs in our local machine. #### Step 1: Install Python 3 and setup your environment To install and setup our Python 3 environment, follow these steps: [Download and setup Python 3](https://www.python.org/downloads/) on your machine. 
Then make sure Python 3 is installed and runs successfully: ```bash $ python3 --version # Python 3.11.7 ``` Create a folder for your project, for example, `local-rag`: ```bash $ mkdir local-rag $ cd local-rag ``` Create a virtual environment named `venv`: ```bash $ python3 -m venv venv ``` Activate the virtual environment: ```bash $ source venv/bin/activate # Windows # venv\Scripts\activate ``` #### Step 2: Install ChromaDB and other dependencies Install ChromaDB using pip: ```bash $ pip install --q chromadb ``` Install Langchain tools to work seamlessly with your model: ```bash $ pip install --q unstructured langchain langchain-text-splitters $ pip install --q "unstructured[all-docs]" ``` Install Flask to serve your app as an HTTP service: ```bash $ pip install --q flask ``` #### Step 3: Install Ollama To install Ollama, follow these steps: Head to the [Ollama download page](https://ollama.com/download), and download the installer for your operating system. Verify your Ollama installation by running: ```bash $ ollama --version # ollama version is 0.1.47 ``` Pull the LLM model you need. For example, to use the Mistral model: ```bash $ ollama pull mistral ``` Pull the text embedding model. For instance, to use the Nomic Embed Text model: ```bash $ ollama pull nomic-embed-text ``` Then run your Ollama models: ```bash $ ollama serve ``` --- ### Build the RAG app Now that you've set up your environment with Python, Ollama, ChromaDB, and other dependencies, it's time to build your custom local RAG app. In this section, we'll walk through the hands-on Python code and provide an overview of how to structure your application. #### `app.py` This is the main Flask application file. It defines routes for embedding files into the vector database and for retrieving responses from the model.
```python import os from dotenv import load_dotenv load_dotenv() from flask import Flask, request, jsonify from embed import embed from query import query from get_vector_db import get_vector_db TEMP_FOLDER = os.getenv('TEMP_FOLDER', './_temp') os.makedirs(TEMP_FOLDER, exist_ok=True) app = Flask(__name__) @app.route('/embed', methods=['POST']) def route_embed(): if 'file' not in request.files: return jsonify({"error": "No file part"}), 400 file = request.files['file'] if file.filename == '': return jsonify({"error": "No selected file"}), 400 embedded = embed(file) if embedded: return jsonify({"message": "File embedded successfully"}), 200 return jsonify({"error": "File embedded unsuccessfully"}), 400 @app.route('/query', methods=['POST']) def route_query(): data = request.get_json() response = query(data.get('query')) if response: return jsonify({"message": response}), 200 return jsonify({"error": "Something went wrong"}), 400 if __name__ == '__main__': app.run(host="0.0.0.0", port=8080, debug=True) ``` #### `embed.py` This module handles the embedding process, including saving uploaded files, loading and splitting data, and adding documents to the vector database. ```python import os from datetime import datetime from werkzeug.utils import secure_filename from langchain_community.document_loaders import UnstructuredPDFLoader from langchain_text_splitters import RecursiveCharacterTextSplitter from get_vector_db import get_vector_db TEMP_FOLDER = os.getenv('TEMP_FOLDER', './_temp') # Function to check if the uploaded file is allowed (only PDF files) def allowed_file(filename): return '.' 
in filename and filename.rsplit('.', 1)[1].lower() in {'pdf'} # Function to save the uploaded file to the temporary folder def save_file(file): # Save the uploaded file with a secure filename and return the file path ct = datetime.now() ts = ct.timestamp() filename = str(ts) + "_" + secure_filename(file.filename) file_path = os.path.join(TEMP_FOLDER, filename) file.save(file_path) return file_path # Function to load and split the data from the PDF file def load_and_split_data(file_path): # Load the PDF file and split the data into chunks loader = UnstructuredPDFLoader(file_path=file_path) data = loader.load() text_splitter = RecursiveCharacterTextSplitter(chunk_size=7500, chunk_overlap=100) chunks = text_splitter.split_documents(data) return chunks # Main function to handle the embedding process def embed(file): # Check if the file is valid, save it, load and split the data, add to the database, and remove the temporary file if file.filename != '' and file and allowed_file(file.filename): file_path = save_file(file) chunks = load_and_split_data(file_path) db = get_vector_db() db.add_documents(chunks) db.persist() os.remove(file_path) return True return False ``` #### `query.py` This module processes user queries by generating multiple versions of the query, retrieving relevant documents, and providing answers based on the context. ```python import os from langchain_community.chat_models import ChatOllama from langchain.prompts import ChatPromptTemplate, PromptTemplate from langchain_core.output_parsers import StrOutputParser from langchain_core.runnables import RunnablePassthrough from langchain.retrievers.multi_query import MultiQueryRetriever from get_vector_db import get_vector_db LLM_MODEL = os.getenv('LLM_MODEL', 'mistral') # Function to get the prompt templates for generating alternative questions and answering based on context def get_prompt(): QUERY_PROMPT = PromptTemplate( input_variables=["question"], template="""You are an AI language model assistant. 
Your task is to generate five different versions of the given user question to retrieve relevant documents from a vector database. By generating multiple perspectives on the user question, your goal is to help the user overcome some of the limitations of the distance-based similarity search. Provide these alternative questions separated by newlines. Original question: {question}""", ) template = """Answer the question based ONLY on the following context: {context} Question: {question} """ prompt = ChatPromptTemplate.from_template(template) return QUERY_PROMPT, prompt # Main function to handle the query process def query(input): if input: # Initialize the language model with the specified model name llm = ChatOllama(model=LLM_MODEL) # Get the vector database instance db = get_vector_db() # Get the prompt templates QUERY_PROMPT, prompt = get_prompt() # Set up the retriever to generate multiple queries using the language model and the query prompt retriever = MultiQueryRetriever.from_llm( db.as_retriever(), llm, prompt=QUERY_PROMPT ) # Define the processing chain to retrieve context, generate the answer, and parse the output chain = ( {"context": retriever, "question": RunnablePassthrough()} | prompt | llm | StrOutputParser() ) response = chain.invoke(input) return response return None ``` #### `get_vector_db.py` This module initializes and returns the vector database instance used for storing and retrieving document embeddings. 
```python import os from langchain_community.embeddings import OllamaEmbeddings from langchain_community.vectorstores.chroma import Chroma CHROMA_PATH = os.getenv('CHROMA_PATH', 'chroma') COLLECTION_NAME = os.getenv('COLLECTION_NAME', 'local-rag') TEXT_EMBEDDING_MODEL = os.getenv('TEXT_EMBEDDING_MODEL', 'nomic-embed-text') def get_vector_db(): embedding = OllamaEmbeddings(model=TEXT_EMBEDDING_MODEL,show_progress=True) db = Chroma( collection_name=COLLECTION_NAME, persist_directory=CHROMA_PATH, embedding_function=embedding ) return db ``` --- ### Run your app! Create `.env` file to store your environment variables: ```bash TEMP_FOLDER = './_temp' CHROMA_PATH = 'chroma' COLLECTION_NAME = 'local-rag' LLM_MODEL = 'mistral' TEXT_EMBEDDING_MODEL = 'nomic-embed-text' ``` Run the `app.py` file to start your app server: ```bash $ python3 app.py ``` Once the server is running, you can start making requests to the following endpoints: - Example command to embed a PDF file (e.g., resume.pdf): ```bash $ curl --request POST \ --url http://localhost:8080/embed \ --header 'Content-Type: multipart/form-data' \ --form file=@/Users/nassermaronie/Documents/Nasser-resume.pdf # Response { "message": "File embedded successfully" } ``` - Example command to ask a question to your model: ```bash $ curl --request POST \ --url http://localhost:8080/query \ --header 'Content-Type: application/json' \ --data '{ "query": "Who is Nasser?" }' # Response { "message": "Nasser Maronie is a Full Stack Developer with experience in web and mobile app development. He has worked as a Lead Full Stack Engineer at Ulventech, a Senior Full Stack Engineer at Speedoc, a Senior Frontend Engineer at Irvins, and a Software Engineer at Tokopedia. His tech stacks include Typescript, ReactJS, VueJS, React Native, NodeJS, PHP, Golang, Python, MySQL, PostgresQL, MongoDB, Redis, AWS, Firebase, and Supabase. He has a Bachelor's degree in Information System from Universitas Amikom Yogyakarta." 
} ``` --- ### Conclusion By following these instructions, you can effectively run and interact with your custom local RAG app using Python, Ollama, and ChromaDB, tailored to your needs. Adjust and expand the functionality as necessary to enhance the capabilities of your application. By harnessing the capabilities of local deployment, you not only safeguard sensitive information but also optimize performance and responsiveness. Whether you're enhancing customer interactions or streamlining internal processes, a locally deployed RAG application offers flexibility and robustness to adapt and grow with your requirements. #### Check the source code in this repo: [https://github.com/firstpersoncode/local-rag](https://github.com/firstpersoncode/local-rag) Happy coding!
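If you prefer Python over curl, the `/query` endpoint can also be exercised with a tiny client script. This is an illustrative sketch, not part of the app itself; it assumes the Flask server above is running on `localhost:8080`:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # assumes the Flask server above is running


def build_query_payload(question: str) -> bytes:
    """Serialize a question into the JSON body the /query route expects."""
    return json.dumps({"query": question}).encode("utf-8")


def ask(question: str) -> dict:
    """POST a question to /query and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{BASE_URL}/query",
        data=build_query_payload(question),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example (requires the server to be running):
# print(ask("Who is Nasser?"))
```

The `/embed` endpoint takes multipart form data, so for file uploads the curl command shown above (or a dedicated HTTP library) is more convenient.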
nassermaronie
1,907,182
High-Quality Fiber Pigtails and LWL Patchkabel from GBIC Shop
GBIC Shop offers top-notch fiber pigtails and LWL patchkabel, designed for optimal performance and...
0
2024-07-01T05:44:52
https://dev.to/gbicshop/high-quality-fiber-pigtails-and-lwl-patchkabel-from-gbic-shop-48hn
pigtails, lwlpatchkabel, fiberpigtails
GBIC Shop offers top-notch fiber pigtails and LWL patchkabel, designed for optimal performance and reliability. These essential components are crucial for seamless fiber optic network connections. Fiber **[pigtails](https://www.gbic-shop.de/12-faser-lwl-pigtails)** from GBIC Shop ensure low insertion loss and high return loss, making them perfect for fusion splicing. Their **[lwl patchkabel](https://www.gbic-shop.de/lwl-patchkabel_1)** are robust and durable, featuring high-quality connectors for superior signal integrity. GBIC Shop's commitment to excellence ensures that every product meets stringent quality standards, providing customers with the best fiber optic technology. Whether for data centers, telecommunications, or enterprise networks, trust GBIC Shop to deliver reliable and efficient connectivity solutions with their advanced fiber pigtails and LWL patchkabel. **Pigtails** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kl2ouapfokpqkq8b88rn.jpg) **LWL Patchkabel** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jxas45ldcrxyhtndd0ka.jpg)
gbicshop
1,842,585
Instruction Fine-Tuning: Dataset and Library Landscape
Large Language Models have fascinating abilities to understand and output natural language texts....
0
2024-07-01T05:44:34
https://dev.to/admantium/instruction-fine-tuning-dataset-and-library-landscape-5dk
llm
Large Language Models have fascinating abilities to understand and output natural language texts. From knowledge databases to assistants and live chatbots, many applications can be built with an LLM as a component. The capability of an LLM to follow instructions is essential for these use cases. While closed-source LLMs handle instructions very well, pretrained open-source models may not have the skill to follow instructions rigorously. To alleviate this, instruction fine-tuning can be utilized. This article explores the landscape of fine-tuning. It starts with a high-level description of fine-tuning, then lists instruction datasets and fine-tuning libraries, and ends with evaluation methods and concrete projects. With this overview, you can define and select a concrete combination of applicable tools to jump-start your own fine-tuning project. _This article originally appeared at my blog [admantium.com](https://admantium.com/blog/llm18_instruction_finetuning_landscape/)_. ## The Origin of Instruction Fine-Tuning A pre-trained LLM typically consumed billions of tokens with the usual goal of predicting a masked word or (autoregressively) the next word in a continuous stream. How these models behave when asked to perform a concrete task depends on the quality and number of task-like texts that were used for training. In the early days of LLMs, typical NLP benchmarks were used to determine a model's capabilities, and even earlier, LLMs needed to be fine-tuned for just those specific tasks. During the continued evolution of LLMs, which I covered in earlier blog posts about [Gen1 LLMs](https://admantium.com/blog/llm02_gen1_overview/) and [Gen2/Gen3 LLMs](https://admantium.com/blog/llm03_gen2_gen3_overview_part1/), several observations were made. Two general ones are that a) increasing model complexity and the number of consumed pre-training tokens and b) including high-quality data containing several types of tasks both improve LLM capabilities significantly.
And a very specific one was shown in the Google research paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683v3). In this paper, researchers re-formulate classical NLP tasks like textual entailment, sentence similarity, or translation as text-to-text mappings. By training a model with this data, task performance reached new levels. This first example of instruction fine-tuning for better task generalization became a foundation for all future LLMs as well. Fine-tuning LLMs is an active and vibrant subject tackled by practitioners, open-source developers, and researchers alike. From datasets to training and evaluation, several hubs and libraries exist. In order to understand the available options and make an educated choice for a practical approach, the following sections highlight essential findings and will support you in architecting your own fine-tuning project. ## Datasets Researchers and open-source communities have published several freely available instruction fine-tuning datasets. A comprehensive and complete overview is given in the GitHub repositories [awesome-instruction-datasets](https://github.com/jianzhnie/awesome-instruction-datasets) and [Awesome-instruction-tuning](https://github.com/zhilizju/Awesome-instruction-tuning). Considering datasets used in academic research, the following picture shows the dataset evolution. ![](https://admantium.com/images/blog/llm18_instruction_fine_tuning_timeline.png) _Source: The Flan Collection: Designing Data and Methods for Effective Instruction Tuning, <https://arxiv.org/abs/2301.13688>_ The most important academic research datasets are these: - [Natural Instructions](https://instructions.apps.allenai.org/): This dataset contains 1500 tasks, including question answering, summarization, fact generation, answer and question checking, and direct language translations for several language pairs.
- [P3](https://huggingface.co/datasets/bigscience/P3): This public dataset was used to evaluate prompt formatting and to train the T0 evaluation models. It contains a rich set of tasks, mostly classical NLP tasks like question answering, classification, and inference, drawn from sources like Wikipedia and Yelp. - [FLAN](https://github.com/google-research/FLAN/tree/main/flan/v2): A master dataset including and combining different other sets. It contains 1826 tasks in a standard prompt format, including instructions that invoke chain-of-thought reasoning in the target model. While these sets are manually curated, the current trend is to use powerful LLMs, like GPT-3.5 and GPT-4, to generate instruction fine-tuning prompts and answers automatically. In this category, the following sets are recommended: - [Alpaca Stanford](https://github.com/tatsu-lab/stanford_alpaca): A 52K dataset created with GPT-3.5, containing prompts formatted according to [OpenAI prompt engineering methods](https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api). The tasks span areas such as summarizing, editing, and rewriting text, as well as giving explanations or performing calculations. - [Guanaco](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset): A multilingual dataset that extends the Alpaca data with tasks for grammar analysis, language understanding, and self-awareness. - [Self Instruct](https://github.com/yizhongw/self-instruct): A large synthetic dataset that was created with GPT; it includes 52k instructions and 82k input/output mappings. Furthermore, the research paper shows a methodology that allows an LLM to self-improve by using the model's output for data generation. ## Training and Fine-Tuning Libraries Fine-tuning an LLM modifies the model weights and/or adds new layers to it.
For instruction fine-tuning, the essential steps remain: convert input text to tokens, generate next-word output probabilities, convert the tokens to an output string, and then compute a metric to determine the backward-propagation gradient. These steps are repeated until the given metric does not improve anymore. While any generic fine-tuning library can be used, the importance and observed LLM capability improvements of instruction fine-tuning created its own library ecosystem. However, one big obstacle needs to be overcome. A full re-training of an LLM with 7B parameters or more requires substantial computing resources; several GPUs with 40 GB of memory or more are required. The answer to this challenge is quantization: methods that reduce the amount of memory and computational cost by compressing and transforming an LLM. These methods are summarized under the term Parameter-Efficient Fine-Tuning. The two most prominent ones are: - LoRA: In the Low-Rank Adaptation approach, the update to an LLM's weight matrix is decomposed into two smaller matrices. The smaller matrices are projections; their multiplication represents the complete update. These smaller matrices are exposed for fine-tuning, leading to delta-updates of the original weights. For a very readable explanation, check [How to fine-tune a Transformer (pt. 2, LoRA)](https://radekosmulski.com/how-to-fine-tune-a-tranformer-pt-2/) - Adapters: A technique in which an LLM's layers are frozen, but additional small layers are inserted and then trained. By training the adapters only, the fine-tuning process becomes computationally efficient; furthermore, adapters can be combined, merging individually fine-tuned models into a common one. A full explanation and comparison of available methods is given in the paper [Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning](https://arxiv.org/abs/2303.15647).
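To make the LoRA idea concrete, here is a dependency-free toy sketch (all dimensions are invented for illustration) showing why training the two small projection matrices is so much cheaper than updating the full weight matrix:

```python
# Toy illustration of the LoRA idea: instead of updating a full d_out x d_in
# weight matrix W, train two low-rank factors B (d_out x r) and A (r x d_in)
# and use W + B @ A at inference time.

def lora_param_counts(d_out: int, d_in: int, r: int):
    """Return (full fine-tune params, LoRA params) for one weight matrix."""
    full = d_out * d_in          # every entry of W is trainable
    lora = r * (d_out + d_in)    # only the two small factors are trainable
    return full, lora


def apply_lora(W, B, A):
    """Compute W + B @ A with plain nested lists (no numpy needed)."""
    d_out, d_in, r = len(W), len(W[0]), len(A)
    return [
        [W[i][j] + sum(B[i][k] * A[k][j] for k in range(r)) for j in range(d_in)]
        for i in range(d_out)
    ]


full, lora = lora_param_counts(d_out=4096, d_in=4096, r=8)
print(f"full: {full:,} params, LoRA: {lora:,} params "
      f"({100 * lora / full:.2f}% of full)")
# → full: 16,777,216 params, LoRA: 65,536 params (0.39% of full)
```

At rank r = 8, fewer than half a percent of the parameters of a single 4096x4096 matrix need to be trained, which is why LoRA-style methods fit on consumer-grade GPUs.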
The following picture gives an indication of the available scope: ![](https://admantium.com/images/blog/llm18_peft_methods.png) The combination of interest in fine-tuning and the availability of quantization libraries makes effective fine-tuning on consumer-grade hardware possible. The following libraries can be used: - [transformer trainer](https://huggingface.co/docs/transformers/en/index): With the quintessential transformers library, loading LLMs and their tokenizers takes only a few lines of Python code. On top of this, the `trainer` object facilitates the definition of all training hyperparameters, and then runs distributed training on several nodes and GPUs. - [bitsandbytes](https://huggingface.co/docs/bitsandbytes/main/en/index): A wrapper for a PyTorch-compatible LLM such as a transformers model. It provides 8-bit and 4-bit quantization, reducing the required RAM or GPU RAM tremendously. - [peft](https://huggingface.co/docs/peft/main/en/index): An umbrella library that implements many state-of-the-art quantization methods from scientific research. Concrete methods are explained in the documentation, most notably [soft prompts](https://huggingface.co/docs/peft/en/conceptual_guides/prompting) and [infused adapters](https://huggingface.co/docs/peft/en/conceptual_guides/ia3). - [trl](https://huggingface.co/docs/trl/index): An acronym for Transformer Reinforcement Learning, it provides essential abstractions for incorporating reinforcement-learning techniques. Specifically, the following steps are applied: a) supervised fine-tuning, which uses datasets containing the expected labels; b) reward modeling, in which training data is separated into accepted and non-accepted answers; c) proximal policy optimization, from the [same-named research paper](https://arxiv.org/abs/1707.06347), an algorithm that applies the reward model to the generations of the model. ## Evaluation Libraries LLMs generate sophisticated texts.
Since their inception, text-generation capabilities have been measured: initially with typical NLP benchmarks like SQuAD for question answering or QNLI for natural language inference, recently with knowledge-domain-spanning tests, like questions about economics, biology, and history, up to complete high-school admission tests, and also measuring language toxicity. To make these measurements transparent and reproducible, several libraries were created. - [instruct-eval](https://github.com/declare-lab/instruct-eval): Model evaluation and comparison on held-out instruction datasets. This library strongly integrates with transformers to load causal LM (autoregressive decoder-only) and sequence-to-sequence (encoder-decoder) LM checkpoints. It also covers aspects like LLM security (jail-breaking LLM prompts) and the writing capability of LLMs (informative, professional, argumentative, and creative). - [evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness): Framework for testing LLMs on 60 different tasks. Integrates with transformers, the quantization library `peft`, and all models that can be accessed via the OpenAI API (which opens the door to using ollama.ai and similar local hosting libraries too). - [HELM](https://github.com/stanford-crfm/helm): This framework combines several accepted evaluation datasets from scientific papers, such as natural questions, openbook question answering, and massive multi-task language understanding. Additionally, other language metrics like efficiency, bias, and toxicity can be measured. The library can be used to evaluate a continuously growing list of both closed and open-source models, and it integrates with transformer models that expose causal language modeling features. ## Projects and Notebooks To complete this research article, the following list shows very concrete examples of how to fine-tune Gen2 and Gen3 LLMs.
- [LLaMA](https://colab.research.google.com/drive/1vIjBtePIZwUaHWfjfNHzBjwuXOyU_ugD): Fine-tuning with chat data using quantized LoRA and supervised fine-tuning training. - [LLaMA2](https://www.datacamp.com/tutorial/fine-tuning-llama-2): This notebook shows how to fine-tune a 7B LLaMA 2 model with a 16GB GPU. The specific libraries revolve around the Huggingface ecosystem: `transformers`, `accelerate`, `peft`, `trl`, and `bitsandbytes`. - [MPT](https://colab.research.google.com/drive/1HCpQkLL7UXW8xJUJJ29X7QAeNJKO0frZ?usp=sharing): The MPT model from Mosaic is a fully open-source model with performance comparable to LLaMA. This notebook shows how to apply LoRA adapters for fine-tuning with a chat-interaction dataset. - [OPT](https://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing): In this notebook, Meta's OPT model is used, and LoRA adapters are trained on an example dataset. The notebook shows how to export only these trained adapters, and how to load them to instantiate a quantized model for evaluation. ## Conclusion In this article, you learned about the landscape of instruction fine-tuning of LLMs. You learned about datasets, quantization methods, fine-tuning libraries, and concrete projects. Effective fine-tuning can be achieved by loading a quantized model (which reduces RAM usage) and applying parameter-efficient fine-tuning methods (re-adjusting only partial weights or a weight representation). This quantized model can then be evaluated on a broad, task-agnostic benchmark, yielding relative performance scores on a wide array of different tasks. With this landscape uncovered, a specific combination of datasets and libraries can be chosen. The next article shows a practical example of fine-tuning a LLaMA 2 model.
admantium
1,907,181
Rails: Using find_each for Batch Processing
"Rails Tip: Efficiently handle large datasets with find_each. Instead of loading all records at once,...
0
2024-07-01T05:42:38
https://dev.to/m_hussain/rails-using-findeach-for-batch-processing-3hgb
"Rails Tip: Efficiently handle large datasets with `find_each`. Instead of loading all records at once, `find_each` processes records in batches, reducing memory usage and improving performance. 🚀 Example:

```ruby
User.find_each(batch_size: 1000) do |user|
  # Process each user
end
```

This processes users in batches of 1000, making it ideal for background jobs and large data operations. Give it a try! 💡 #RubyOnRails #RailsTips #WebDevelopment"
m_hussain
1,907,180
Synchronizing Raw Data in GBase 8s Databases via ER
Introduction Once you've defined and set up the replication server, executing cdr start...
0
2024-07-01T05:42:01
https://dev.to/congcong/synchronizing-raw-data-in-gbase-8s-databases-via-er-36md
database
## Introduction Once you've defined and set up the replication server, executing `cdr start replicate` will synchronize the initial data from the source database to the target server. This method is straightforward but has the drawback of being relatively slow, making it suitable for environments with small data volumes. The relevant parameter for this operation is `CDR_QUEUEMEM`. ## Synchronization Process The synchronization process involves adding rows from the source server that do not exist on the target server and modifying rows that exist on both servers but are inconsistent. The strategy for handling rows that exist on the target server but not on the source server is as follows: ### Options and Descriptions - **delete**: Deletes rows and their dependent rows from the target server based on referential integrity constraints. - **keep**: Retains the rows on the target server. - **merge**: Keeps the rows on the target server and replicates them back to the source server. ## Testing Environment In the testing environment, a database named `testdb` is created simultaneously on two instances. 
Tables `t1` through `t5` are also created, each with primary key constraints, and initial data is inserted into the tables as follows: | table | group1 | group2 | |------------|--------------|--------------| |t1|> select * from t1; <br><br> a b <br><br><br> 1 1 <br><br> 2 2 <br><br> 4 4 <br><br> 5 5|> select * from t1; <br><br> a b <br><br><br> 1 1 <br><br> 2 1 <br><br> 3 3 <br><br> 5 5| |t2|> select * from t2; <br><br> a b <br><br><br> 1 1 <br><br> 5 5 <br><br> 2 2 <br><br> 4 4|> select * from t2; <br><br> a b <br><br><br> 1 1 <br><br> 2 1 <br><br> 3 3 <br><br> 5 5| |t3|> select * from t3; <br><br> a b <br><br><br> 1 1 <br><br> 2 2 <br><br> 4 4 <br><br> 5 5|> select * from t3; <br><br> a b <br><br><br> 1 1 <br><br> 2 1 <br><br> 3 3 <br><br> 5 5| |t4|> select * from t4; <br><br> a b <br><br><br> 1 1 <br><br> 2 2 <br><br> 4 4 <br><br> 5 5 <br><br> 6 6|> select * from t4; <br><br> a b <br><br><br> 1 1 <br><br> 2 1 <br><br> 3 3 <br><br> 5 5 <br><br> 6 6| |t5|> select * from t5; <br><br> a b <br><br><br> 1 1 <br><br> 2 2 <br><br> 4 4 <br><br> 5 5|> select * from t5; <br><br> a b <br><br><br> 1 1 <br><br> 2 1 <br><br> 3 3 <br><br> 5 5| ## Define Replicate ```sql cdr define replicate --conflict=ignore rep_testdb_t1 "testdb@group1:gbasedbt.t1" "select * from t1" "testdb@group2:gbasedbt.t1" "select * from t1" cdr define replicate --conflict=ignore rep_testdb_t2 "testdb@group1:gbasedbt.t2" "select * from t2" "testdb@group2:gbasedbt.t2" "select * from t2" cdr define replicate --conflict=ignore rep_testdb_t3 "testdb@group1:gbasedbt.t3" "select * from t3" "testdb@group2:gbasedbt.t3" "select * from t3" cdr define replicate --conflict=ignore rep_testdb_t4 "testdb@group1:gbasedbt.t4" "select * from t4" "testdb@group2:gbasedbt.t4" "select * from t4" cdr define replicate --conflict=ignore rep_testdb_t5 "testdb@group1:gbasedbt.t5" "select * from t5" "testdb@group2:gbasedbt.t5" "select * from t5" ``` **Now, the replication status is Inactive** ```sql [gbasedbt@gbase42 ~]$ cdr list 
repl rep_testdb_t1 rep_testdb_t2 rep_testdb_t3 rep_testdb_t4 rep_testdb_t5 ``` ```sql DEFINED REPLICATES ATTRIBUTES ------------------------------ REPLICATE: rep_testdb_t1 STATE: Inactive ON:group2 CONFLICT: Ignore FREQUENCY: immediate QUEUE SIZE: 0 PARTICIPANT: testdb:gbasedbt.t1 OPTIONS: transaction,fullrow REPLID: 131079 / 0x20007 REPLMODE: PRIMARY ON:group2 APPLY-AS: GBASEDBT ON:group2 REPLTYPE: Master REPLICATE: rep_testdb_t2 STATE: Inactive ON:group2 CONFLICT: Ignore FREQUENCY: immediate QUEUE SIZE: 0 PARTICIPANT: testdb:gbasedbt.t2 OPTIONS: transaction,fullrow REPLID: 131080 / 0x20008 REPLMODE: PRIMARY ON:group2 APPLY-AS: GBASEDBT ON:group2 REPLTYPE: Master REPLICATE: rep_testdb_t3 STATE: Inactive ON:group2 CONFLICT: Ignore FREQUENCY: immediate QUEUE SIZE: 0 PARTICIPANT: testdb:gbasedbt.t3 OPTIONS: transaction,fullrow REPLID: 131081 / 0x20009 REPLMODE: PRIMARY ON:group2 APPLY-AS: GBASEDBT ON:group2 REPLTYPE: Master REPLICATE: rep_testdb_t4 STATE: Inactive ON:group2 CONFLICT: Ignore FREQUENCY: immediate QUEUE SIZE: 0 PARTICIPANT: testdb:gbasedbt.t4 OPTIONS: transaction,fullrow REPLID: 131082 / 0x2000a REPLMODE: PRIMARY ON:group2 APPLY-AS: GBASEDBT ON:group2 REPLTYPE: Master REPLICATE: rep_testdb_t5 STATE: Inactive ON:group1 CONFLICT: Ignore FREQUENCY: immediate QUEUE SIZE: 0 PARTICIPANT: testdb:gbasedbt.t5 OPTIONS: transaction,fullrow REPLID: 131083 / 0x2000b REPLMODE: PRIMARY ON:group1 APPLY-AS: GBASEDBT ON:group1 REPLTYPE: Master ``` ### Start Replicate ```sql cdr start repl rep_testdb_t1 cdr start repl rep_testdb_t2 --syncdatasource=group1 --extratargetrows=delete cdr start repl rep_testdb_t3 --syncdatasource=group1 --extratargetrows=keep cdr start repl rep_testdb_t4 --syncdatasource=group1 --extratargetrows=merge cdr start repl rep_testdb_t5 --syncdatasource=group1 ``` **Raw Data Comparison:** |table| Before Replication<br>Group1 | Before Replication<br>Group2 | After Replication<br>Group1 | After Replication<br>Group2 | 
|----------------|---------------------------------------------------|---------------------------------------------------|---------------------------------------------------|---------------------------------------------------|
| t1 | > select * from t1; <br><br> a b <br><br><br> 1 1 <br><br> 2 2 <br><br> 4 4 <br><br> 5 5 | > select * from t1; <br><br> a b <br><br><br> 1 1 <br><br> 2 1 <br><br> 3 3 <br><br> 5 5 | > select * from t1; <br><br> a b <br><br><br> 1 1 <br><br> 2 2 <br><br> 4 4 <br><br> 5 5 | > select * from t1; <br><br> a b <br><br><br> 1 1 <br><br> 2 1 <br><br> 3 3 <br><br> 5 5 |
| t2 | > select * from t2; <br><br> a b <br><br><br> 1 1 <br><br> 5 5 <br><br> 2 2 <br><br> 4 4 | > select * from t2; <br><br> a b <br><br><br> 1 1 <br><br> 2 1 <br><br> 3 3 <br><br> 5 5 | > select * from t2; <br><br> a b <br><br><br> 1 1 <br><br> 5 5 <br><br> 2 2 <br><br> 4 4 | > select * from t2; <br><br> a b <br><br><br> 1 1 <br><br> 2 2 <br><br> 5 5 <br><br> 4 4 |
| t3 | > select * from t3; <br><br> a b <br><br><br> 1 1 <br><br> 2 2 <br><br> 4 4 <br><br> 5 5 | > select * from t3; <br><br> a b <br><br><br> 1 1 <br><br> 2 1 <br><br> 3 3 <br><br> 5 5 | > select * from t3; <br><br> a b <br><br><br> 1 1 <br><br> 2 2 <br><br> 4 4 <br><br> 5 5 | > select * from t3; <br><br> a b <br><br><br> 1 1 <br><br> 2 2 <br><br> 3 3 <br><br> 5 5 <br><br> 4 4 |
| t4 | > select * from t4; <br><br> a b <br><br><br> 1 1 <br><br> 2 2 <br><br> 4 4 <br><br> 5 5 <br><br> 6 6 | > select * from t4; <br><br> a b <br><br><br> 1 1 <br><br> 2 1 <br><br> 3 3 <br><br> 5 5 <br><br> 6 6 | > select * from t4; <br><br> a b <br><br><br> 1 1 <br><br> 2 2 <br><br> 4 4 <br><br> 5 5 <br><br> 6 6 <br><br> 3 3 | > select * from t4; <br><br> a b <br><br><br> 1 1 <br><br> 2 2 <br><br> 3 3 <br><br> 5 5 <br><br> 6 6 <br><br> 4 4 |
| t5 | > select * from t5; <br><br> a b <br><br><br> 1 1 <br><br> 2 2 <br><br> 4 4 <br><br> 5 5 | > select * from t5; <br><br> a b <br><br><br> 1 1 <br><br> 2 1 <br><br> 3 3 <br><br> 5 5 | > select * from t5; <br><br> a b <br><br><br> 1 1 <br><br> 2 2 <br><br> 4 4 <br><br> 5 5 | > select * from t5; <br><br> a b <br><br><br> 1 1 <br><br> 2 2 <br><br> 5 5 <br><br> 4 4 |

## Conclusion:

- Without specifying the data source with the `-S` parameter, initial synchronization is not enabled, and data present before the replication start is not processed.
- When using the `-S` parameter to specify the data source and deleting extra rows on the target side (based on the primary key), the target data will be consistent with the source data.
- When using the `-S` parameter to specify the data source and retaining extra rows on the target side, the target data will be the source data plus the extra rows on the target side.
- When using the `-S` parameter to specify the data source and merging extra rows on the target side, the target and source data will be consistent, both containing the source data plus the extra rows on the target side.
- When using the `-S` parameter to specify the data source, the default behavior for extra rows on the target side is deletion.

The replication technology of GBase 8s database not only improves data availability but also meets diverse business needs through flexible conflict-handling strategies. Mastering these technologies will help in building more robust and flexible data architectures.
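The conflict-handling behaviors summarized above can be modeled with a small sketch. This is plain Python, not GBase commands — the dict-based tables and the `sync` helper are invented purely to illustrate the three outcomes for rows that exist only on the target before replication starts:

```python
# Plain-Python illustration (not GBase syntax) of how target-only rows are
# treated. Tables are modeled as dicts keyed by primary key.

def sync(source, target, mode="delete"):
    if mode == "delete":              # default: target-only rows are removed
        return dict(source)
    if mode in ("retain", "merge"):   # target-only rows survive; source wins conflicts
        result = dict(target)
        result.update(source)
        return result
    raise ValueError(f"unknown mode: {mode}")

source = {1: "1", 2: "2", 3: "3"}          # rows on the source side
target = {1: "1", 2: "old", 4: "extra"}    # row 4 exists only on the target

print(sync(source, target, "delete"))   # target becomes identical to source
print(sync(source, target, "retain"))   # source rows plus target-only row 4
```

With `delete` the target ends identical to the source; with `retain` or `merge` it ends as the source data plus the target-only row, matching the conclusions above.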
congcong
1,907,179
Is Salesforce an ERP?
Salesforce has a prominent role in the business software world and is mainly known for its...
0
2024-07-01T05:41:30
https://dev.to/devops_den/is-salesforce-an-erp-1496
salesforce, webdev, productivity, devops
Salesforce has a prominent role in the business software world and is mainly known for its proficiency in the customer relationship management (CRM) field. However, with its several features, a basic question arises: is Salesforce an ERP system? To be precise, the answer is no, as it's not a full-fledged ERP system. To understand why in detail, here's a deeper look at Salesforce's capabilities compared with ERPs.

## CRM vs. ERP: Understanding the Divide

Before heading towards Salesforce, let's first compare ERPs and CRMs.

**- Customer Relationship Management (CRM):** A CRM system aims at the front end of a business, managing all data and interactions linked to customers. It helps immensely with sales pipeline management, customer service, developing strong relationships, and marketing campaigns.

**- Enterprise Resource Planning (ERP):** An ERP system is similar to the central nervous system of a business, integrating a variety of back-office functions. It primarily manages inventory management, supply chain operations, production, and finance.

In short, a CRM performs well in managing customer-centric activities, whereas an ERP deals more with streamlining internal operations.

## Salesforce's Strengths: The CRM Champion

The main strength of Salesforce resides in its CRM, which provides a powerful platform for the following:

**- Sales Management:** Tracking leads and opportunities, automating workflows, and managing pipelines to boost sales effectiveness.

**- Marketing Automation:** Create targeted campaigns, personalize customer interactions, and properly measure overall marketing performance.

**- Customer Service:** Offer remarkable customer support via ticketing systems, live chat, and knowledge bases.

**- Data Analytics and Reporting:** Form detailed reports and dashboards to get valuable customer data and make sound, data-driven decisions.
So, Salesforce performs well in these areas, providing a user-friendly interface, scalability, and a vast arena of integrations and applications.

## Where Salesforce Falls Short in the ERP Realm

While Salesforce delivers many functionalities that might coincide with ERPs, it doesn't have the detail and depth needed for a full-fledged ERP. The following are some key ERP capabilities that are not available in Salesforce:

**- Financial Management:** Detailed accounting functionalities, financial reporting features, general ledger, and budgeting.

**- Inventory Management:** Warehouse management functions, real-time inventory tracking, and order completion.

**- Human Resource Management:** Employee performance management tools, payroll processing, and benefits administration.

**- Production Planning and Control:** Quality control features, production scheduling, and allocation of resources.

All these functionalities are highly important for businesses dealing with complex supply chains, higher financial requirements, or production processes.

## The Power of Integration: Salesforce and ERPs Working Together

While Salesforce isn't a substitute for an ERP, it can be a valuable complement. Several enterprises integrate Salesforce with their prevailing ERP systems to link customer-related tasks with internal operations. This integration enables:

**- Synchronized Data:** Guarantee that customer data, such as buying history, flows seamlessly between systems, offering a complete customer view.

**- Improved Sales and Marketing Efficiency:** Use customer data from the CRM to personalize marketing campaigns and target sales efforts in an effective manner.

**- Enhanced Customer Service:** Give a more detailed customer experience by properly integrating support tickets with order history and other relevant data.

Many third-party solutions and native Salesforce applications support this integration, forming a unified business space.
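As a toy illustration of the synchronized-data idea, the sketch below joins CRM account records with ERP order history on a shared customer key to produce a unified customer view. It is plain Python with invented field names and records, not actual Salesforce or ERP schemas:

```python
# Hypothetical CRM and ERP records; all field names are invented for illustration.
crm_accounts = [
    {"customer_id": "C001", "name": "Acme Corp", "open_opportunities": 2},
    {"customer_id": "C002", "name": "Globex", "open_opportunities": 0},
]
erp_orders = [
    {"customer_id": "C001", "order_total": 1200.0},
    {"customer_id": "C001", "order_total": 300.0},
]

def customer_360(accounts, orders):
    """Attach each account's total ERP order value to its CRM record."""
    totals = {}
    for order in orders:
        key = order["customer_id"]
        totals[key] = totals.get(key, 0.0) + order["order_total"]
    return [
        {**acct, "lifetime_order_value": totals.get(acct["customer_id"], 0.0)}
        for acct in accounts
    ]

for row in customer_360(crm_accounts, erp_orders):
    print(row)
```

In a real integration this join would run through connectors or middleware rather than in application code, but the principle — a shared key linking front-office and back-office records — is the same.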
## The Rise of Cloud-Based ERP Solutions Built on Salesforce

The overall landscape is evolving with the growth of cloud-based ERP solutions developed on the Salesforce platform. These solutions provide the main functionalities of conventional ERPs while using the power of Salesforce's CRM platform. This creates a smooth user experience where sales, customer service, marketing, and core business operations are all easily accessible on a single platform. These solutions are mainly suitable for businesses already invested in the Salesforce ecosystem, providing a familiar interface and simplified integration. However, it's important to check whether their functionalities properly align with your particular ERP needs.

## Choosing the Right Solution: Evaluating Your Needs

So, how do you choose between Salesforce, an ERP system, or a blend of both? Consider the following:

**- Business Size and Complexity:** Small-scale businesses might find Salesforce's CRM power sufficient, while huge businesses with complex operations may need a full-fledged ERP.

**- Industry-Specific Needs:** Some industries might have specialized ERP requirements that demand industry-specific solutions.

**- Integration Capabilities:** Check the ease of integrating current systems with Salesforce or a potential ERP solution.

**- Budget:** Both Salesforce and ERP implementations can be expensive, so factor in ongoing maintenance costs, licensing fees, and customization needs.

Properly analyzing your business needs will guide you towards the most suitable solution.

## Leveraging the Integration Advantage

A properly integrated Salesforce-ERP blend provides a plethora of perks:

**- Eliminate Data Silos:** Break down hurdles between customer-facing and internal data, which permits a 360-degree view of customers and operations. For example, sales teams get real-time access to inventory data, improving order accuracy and customer satisfaction.
**- Streamlined Workflows:** Automate tasks such as order processing, creating invoices based on CRM data, and instantly forming customer support tickets based on order issues. All of this helps reduce manual work and enhances efficiency across departments.

**- Enhanced Sales and Marketing Effectiveness:** Use customer data from Salesforce to personalize marketing campaigns and target sales efforts more effectively. Sales representatives can access customer buying history and preferences, which results in more relevant interactions.

**- Improved Customer Service:** Empower customer service agents with a holistic view of customer interactions, past support tickets, and order history. All this permits them to resolve issues faster and offer a more personalized service experience.

These integrations need careful planning and may require third-party solutions or custom development. Yet, the potential perks can enhance overall business efficiency and customer satisfaction.

## Cloud-Based ERPs on Salesforce: A Promising Future

The emergence of cloud-based ERPs built on the Salesforce platform presents an exciting opportunity. These solutions offer the basic functionalities of conventional ERPs while providing:

**- Seamless Integration:** Uses native Salesforce functions and features, removing the need for difficult integrations.

**- Familiar User Interface:** Businesses already invested in Salesforce benefit from a consistent user experience across ERP and CRM functionalities.

**- Scalability and Flexibility:** Cloud-based solutions offer scalability for growing businesses and the flexibility to adapt to changing conditions.

However, it's essential to keep in mind that these solutions might not be a one-size-fits-all approach. Here's a critical evaluation approach:

**- Functional Depth:** Evaluate whether the given functionalities meet your industry needs.
While these solutions offer basic ERP features, they might fall short of the depth of specialized industry-specific ERP solutions.

**- Customization Capabilities:** Consider the level of customization provided by the cloud-based ERP solution. Can it be tailored to your particular workflows and business processes?

**- Total Cost of Ownership:** Think not just about the licensing fees but also the implementation costs, ongoing maintenance, and any additional customizations needed.

## Security Considerations: Keeping Your Data Safe

While Salesforce and ERPs provide different functionality, data security remains a top priority for both. Let's understand how they guarantee data protection and compliance:

### Salesforce Security:

**- Multi-Factor Authentication (MFA):** Enforces extra login verification steps beyond passwords, mainly reducing unauthorized access risks.

**- Data Encryption:** Salesforce encrypts data at rest and in transit, protecting sensitive business and customer information.

**- User Access Controls:** Granular controls define what data users can access, preventing unauthorized modifications or potential breaches.

**- Regular Security Audits:** Salesforce undergoes strict security assessments and adheres to industry-standard security practices.

### ERP Security:

**- Access Controls:** As with Salesforce, ERPs provide user access controls to restrict access to particular modules and features depending on job roles.

**- Data Encryption:** Sensitive financial, HR, and production data are encrypted to guarantee confidentiality in case of a security breach.

**- Audit Trails:** ERPs maintain in-depth logs of user activity, permitting identification of suspicious behavior and supporting compliance with regulations.

**- Disaster Recovery Plans:** Robust disaster recovery plans ensure data availability and organizational continuity in case of unforeseen events.
Read About [the Salesforce Video Platform](https://devopsden.io/article/salesforce-video-platform)

### Securing Salesforce-ERP Integrations:

Integrating Salesforce and an ERP forms a robust ecosystem, yet it also introduces new security considerations. Here's why securing these integrations is essential:

**- Single Point of Failure:** A security breach in one system can potentially expose data in the other if the integration is not secure.

**- Data Visibility:** Integrations might give users wider data access, so careful configuration of access controls is crucial.

Strategies for securing Salesforce-ERP integrations include:

**- Utilizing Secure APIs:** Use secure application programming interfaces (APIs) for data exchange between platforms.

**- Encrypting Data in Transit:** Encrypt data transmissions between Salesforce and the ERP to prevent unauthorized interception.

**- Regular Penetration Testing:** Conduct regular penetration testing to identify and address any potential vulnerabilities in the integration.

By implementing these security measures, organizations can properly employ the power of ERPs and Salesforce, knowing their data stays secure and compliant with relevant regulations.

## Conclusion

All in all, Salesforce is a powerful CRM platform, but it's not a replacement for a detailed ERP system. However, by knowing their strengths, limitations, and integration potential, businesses can use them to create a unified ecosystem that empowers sales, customer service, marketing, and back-office operations, ultimately driving success. Remember, the key is to select the solution that best matches your unique business goals and requirements.

Read More

https://devopsden.io/article/devops-managed-services

https://dev.to/devops_den/cloudformation-vs-terraform-choosing-the-right-iac-tool-for-your-needs-mc3

Thank You
devops_den
1,907,178
Why Choose Custom Mobile App Development in Dubai for Your Business
In today's digital age, having a strong mobile presence is crucial for businesses looking to thrive...
0
2024-07-01T05:39:51
https://dev.to/toxsltechnologies/why-choose-custom-mobile-app-development-in-dubai-for-your-business-59c5
mobile, development, appdevelopment
In today's digital age, having a strong mobile presence is crucial for businesses looking to thrive in the competitive marketplace. As the UAE continues to emerge as a global hub for innovation and technology, custom mobile app development in Dubai has become an increasingly attractive option for companies seeking to elevate their digital strategies. This blog will explore the benefits of choosing [custom mobile app development in Dubai](https://toxsl.ae/mobile-app-development-company-dubai) and why it could be the perfect solution for your business needs.

**The Rise of Mobile App Development in UAE**

The United Arab Emirates, particularly Dubai, has experienced rapid technological growth in recent years. This surge has led to an increased demand for mobile development in the UAE, with businesses recognizing the potential of custom apps to reach and engage their target audience. As smartphone usage continues to rise in the region, having a well-designed mobile app has become essential for businesses looking to stay ahead of the curve.

## Why Choose Custom Mobile App Development?

### 1. Tailored to Your Specific Needs

Off-the-shelf solutions may seem appealing due to their lower upfront costs, but they often fall short in meeting the unique requirements of your business. Custom mobile app development in Dubai allows you to [create an application](https://toxsl.ae/) that aligns perfectly with your brand identity, target audience, and specific business objectives. This tailored approach ensures that every feature and functionality serves a purpose in enhancing your user experience and achieving your goals.

### 2. Scalability and Flexibility

As your business grows and evolves, your mobile app should be able to adapt and scale accordingly. Custom mobile app development provides the flexibility to modify and expand your application as needed.
This scalability ensures that your app remains relevant and effective in meeting the changing needs of your business and customers over time.

### 3. Enhanced Security

With cyber threats becoming increasingly sophisticated, security is a top priority for businesses operating in the digital space. Custom mobile app development allows for the implementation of robust security measures tailored to your specific requirements. This level of protection is particularly crucial for businesses handling sensitive user data or financial transactions.

### 4. Seamless Integration with Existing Systems

Custom mobile apps can be designed to integrate seamlessly with your existing business systems and processes. This integration streamlines operations, improves efficiency, and provides a more cohesive user experience for both your employees and customers.

### 5. Competitive Advantage

In a crowded marketplace, standing out from the competition is essential. A custom mobile app developed specifically for your business can give you a significant edge over competitors relying on generic solutions. It allows you to offer unique features and functionalities that set your brand apart and cater to the specific needs of your target audience.

## Conclusion

Custom mobile app development in Dubai offers numerous benefits for businesses looking to enhance their digital presence and engage with their target audience more effectively. The city's thriving tech ecosystem, strategic location, and supportive business environment make it an ideal choice for companies seeking top-tier mobile app development services. By partnering with a reputable mobile app company in Dubai like ToXSL Technologies, businesses can leverage expert knowledge and cutting-edge technologies to create custom mobile applications that drive growth, improve user engagement, and provide a competitive edge in the digital marketplace.
As the mobile landscape continues to evolve, investing in custom mobile app development in Dubai can be a game-changing decision for your business. Whether you're looking to streamline internal processes, enhance customer experiences, or tap into new revenue streams, a well-designed custom mobile app can be the key to unlocking your business's full potential in the digital age.
toxsltechnologies
1,907,177
How does a Generative AI development company help your business grow?
A Generative AI development company can significantly contribute to the growth of a business by...
0
2024-07-01T05:39:23
https://dev.to/nextbraintechnologies/how-does-a-generative-ai-development-company-help-your-business-grow-36nf
generativeaidevelopmentcompany, generativeaiservices, genaidevelopmentcompany
A Generative AI development company can significantly contribute to the growth of a business by leveraging advanced AI technologies to enhance various aspects of operations, innovation, and customer engagement. ## **Here’s an in-depth look at how such a company can help your business grow:** **Enhanced Customer Experience** Generative AI can create more personalized and interactive customer experiences. By analyzing vast amounts of data, AI can generate personalized content, recommendations, and even customer service interactions. For instance, AI-driven chatbots can handle customer queries in real-time, providing instant support and improving customer satisfaction. This can lead to higher customer retention rates and increased sales. **Product and Service Innovation** Generative AI enables businesses to innovate by creating new products and services. AI can assist in the design and development process by generating novel ideas and prototypes. For example, in industries like fashion, AI can design unique clothing patterns, while in tech, it can help develop new software features. This accelerates the innovation cycle and helps companies stay competitive. **Marketing and Sales Optimization** Generative AI can transform marketing strategies through targeted and personalized campaigns. AI analyzes customer behavior and preferences, enabling businesses to create content that resonates with their audience. This personalized approach can significantly improve the effectiveness of marketing campaigns, leading to higher conversion rates and increased revenue. **Operational Efficiency** AI can streamline various operational processes, reducing costs and increasing efficiency. Generative AI can automate repetitive tasks, such as data entry and report generation, freeing up human resources for more strategic tasks. Moreover, AI can optimize supply chain management by predicting demand and optimizing inventory levels, reducing waste, and ensuring timely delivery. 
**Data-Driven Decision Making** A [**generative AI development company**](https://www.nextbraintech.com/generative-ai-development-services) can help your business harness the power of data. AI can analyze complex datasets to uncover patterns and insights that are not immediately apparent to humans. This data-driven approach allows for more informed decision-making, enabling businesses to react swiftly to market changes and identify new opportunities for growth. **Creative Content Generation** In industries like media, entertainment, and advertising, generative AI can create high-quality content, such as articles, advertisements, videos, and graphics. This not only speeds up content production but also ensures that the content is tailored to the target audience, enhancing engagement and effectiveness. **Competitive Advantage** By integrating generative AI into your business, you can gain a competitive edge. AI-driven solutions can improve various aspects of your business faster and more accurately than traditional methods. This allows you to stay ahead of competitors who may be slower to adopt these technologies. **Cost Reduction** Generative AI can significantly reduce costs across various business functions. Automation of tasks reduces the need for extensive human labor, while predictive analytics can minimize waste and inefficiencies. These cost savings can be reinvested into the business for further growth and development. **Scalability** AI technologies can help businesses scale their operations more efficiently. As your business grows, AI systems can handle increased workloads without a proportional increase in costs. This scalability is crucial for sustaining long-term growth. **Conclusion** Partnering with a generative AI development company provides businesses with cutting-edge tools and expertise to drive growth. 
From enhancing customer experience to optimizing operations and driving innovation, AI offers numerous benefits that can transform your business landscape. By embracing these technologies, businesses can stay ahead of the curve, achieving sustainable growth and long-term success.
nextbraintechnologies
1,907,176
Benefits of Outsourcing Android Development
Outsourcing Android development can provide companies with access to a global talent pool of skilled...
0
2024-07-01T05:38:50
https://dev.to/michaeljason_eb570f1a51d6/benefits-of-outsourcing-android-development-3hmc
webdev, beginners, programming, devops
Outsourcing Android development can provide companies with access to a global talent pool of skilled professionals. This can lead to faster project delivery times and higher-quality results, as companies can leverage the expertise of experienced developers from around the world. Additionally, outsourcing Android development can help businesses save on costs associated with hiring and training in-house developers, as well as overhead expenses. Furthermore, outsourcing Android development allows companies to focus on their core competencies and strategic goals, while leaving the technical aspects of app development to external experts. This can result in increased efficiency and productivity, as well as the ability to scale resources up or down based on project needs. Overall, outsourcing Android development can be a strategic decision that enables companies to stay competitive in the rapidly evolving mobile app market. [Hire an offshore Android engineer](https://www.appsierra.com/blog/offshore-dotnet-development-company) for better results.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9qeolwrl2fmoxjks701t.jpg)

## Understanding Offshore Development Teams

Offshore development teams refer to a group of professionals located in a different country or timezone than the parent company. Companies often choose to work with offshore teams to leverage cost-effective resources and access talent from a global pool. By working with offshore developers, businesses can benefit from around-the-clock productivity, as teams in different locations can collaborate on projects continuously. Understanding cultural differences is crucial when working with offshore development teams. Communication styles, work ethics, and even holidays may vary across different countries. It is essential for companies to establish clear communication channels and set expectations early on to ensure seamless collaboration with offshore teams.
By embracing cultural diversity and fostering inclusive practices, businesses can fully harness the potential of their offshore development teams.

## Key Factors to Consider When Hiring an Android Engineer

When hiring an Android engineer, one of the key factors to consider is the candidate's technical expertise. It is important to assess their skills in Java, Kotlin, Android Studio, and other relevant technologies. Look for candidates who have experience in developing mobile applications and have a strong understanding of the Android platform. In addition to technical skills, it is crucial to evaluate the candidate's problem-solving abilities and creativity. An effective Android engineer should be able to think critically, troubleshoot issues efficiently, and come up with innovative solutions to complex problems. Consider asking them about their previous projects and the challenges they faced to gauge their problem-solving skills.

## Cost Savings Associated with Offshore Android Engineers

Outsourcing Android development to offshore teams can result in significant cost savings for businesses. By hiring skilled Android engineers from countries with lower labor costs, companies can reduce their overall expenses while still maintaining quality in their projects. These cost savings can be especially beneficial for startups and small businesses looking to develop mobile applications on a limited budget. In addition to lower labor costs, offshore Android engineers often offer competitive rates due to currency exchange rates and lower living costs in their respective countries. This can allow companies to stretch their budgets further and take on more ambitious projects without breaking the bank. By leveraging the cost advantages of offshore development teams, businesses can allocate resources more efficiently and achieve their Android development goals within budget.
## Best Practices for Managing an Offshore Android Team

One essential best practice for managing an offshore Android team is to establish clear communication channels from the outset. Regularly scheduled meetings using video conferencing tools can help bridge the physical distance and foster a sense of teamwork. It's crucial to ensure that all team members are aligned on project goals, timelines, and expectations to avoid misunderstandings and bottlenecks in the development process. In addition to communication, it is vital to set up robust project management and collaboration tools to streamline workflow and facilitate real-time collaboration. Utilizing tools like project management software, version control systems, and messaging platforms can help keep everyone on the same page and enhance productivity. By leveraging these technologies effectively, managers can monitor progress, allocate tasks efficiently, and address any issues promptly to ensure the success of the offshore Android development team.

## Challenges of Working with Offshore Android Engineers

While outsourcing Android development can offer many benefits, there are some challenges that come with working with offshore Android engineers. One main challenge is the potential for miscommunication due to language barriers and cultural differences. This can lead to misunderstandings, delays in project timelines, and a lack of cohesive teamwork. Additionally, working with offshore Android engineers can present challenges in terms of time zone differences. Coordinating meetings and collaboration between team members located in different parts of the world can be difficult, leading to potential delays in communication and project progress. It is crucial for teams to establish clear communication channels and protocols to minimize the impact of these challenges.

**How can I ensure effective communication with offshore Android engineers?**
To ensure effective communication, it is important to establish clear communication channels, schedule regular meetings, utilize project management tools, and provide detailed project briefs and documentation.

**What are some common challenges when working with offshore Android engineers?**

Some common challenges include differences in time zones, cultural differences, language barriers, and potential issues with quality control. It is important to address these challenges proactively to ensure successful collaboration.
michaeljason_eb570f1a51d6
1,907,174
Use Cases and Key Capabilities of Microsoft Sentinel
Microsoft Sentinel is a powerful security platform that helps organizations protect their digital...
0
2024-07-01T05:35:34
https://dev.to/shivamchamoli18/use-cases-and-key-capabilities-of-microsoft-sentinel-260j
microsoft, microsoftsentinel, cloudsecurity, infosectrain
Microsoft Sentinel is a powerful security platform that helps organizations protect their digital assets from advanced threats and respond to security incidents. With its wide range of use cases and key capabilities, Sentinel enables security teams to detect and investigate potential threats in real time, streamline incident response, and enhance overall security posture. By leveraging advanced analytics, automation, and integration with other security tools, Microsoft Sentinel provides a comprehensive solution for organizations to strengthen their security defenses and safeguard their data.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zvxrfh7x00i2m3582vqk.jpg)

## **Use Cases of Microsoft Sentinel**

1. **Threat Detection and Response:** Microsoft Sentinel offers real-time threat detection and response by gathering and analyzing security events from diverse sources, including logs, network traffic, and endpoints. It leverages advanced analytics, machine learning, and behavioral analysis to identify potential threats and anomalies, helping security teams proactively respond to security incidents.
2. **Security Incident Investigation:** Sentinel enables efficient and effective investigation of security incidents by aggregating and correlating data from multiple sources into a single interface. It provides Security Analysts with comprehensive visibility into security events, contextual information, and threat intelligence, facilitating faster and more accurate incident responses.
3. **Cloud Security Monitoring:** As the adoption of cloud services continues to grow, organizations require robust cloud security monitoring solutions. Microsoft Sentinel integrates seamlessly with Azure services, enabling organizations to monitor their Azure infrastructure, applications, and services for potential security risks. It helps identify misconfigurations, suspicious activities, and unauthorized access attempts within the cloud environment.
Additionally, Microsoft Sentinel supports AWS, GCP, and Oracle, providing comprehensive security coverage across multiple cloud platforms. 4. **Insider Threat Detection:** Sentinel can detect and mitigate insider threats by monitoring user activities, access privileges, and behavioral patterns. It analyzes user behavior and applies machine learning algorithms to identify anomalies, unauthorized activities, and data exfiltration attempts by insiders. ## **Key Capabilities of Microsoft Sentinel** 1. **Data Collection and Integration:** Microsoft Sentinel supports data collection from various sources, including cloud platforms, on-premises infrastructure, security devices, and third-party solutions. It provides built-in connectors and APIs to collect and normalize data, ensuring comprehensive coverage of security events. 2. **Advanced Analytics and Threat Intelligence:** Sentinel leverages advanced analytics and threat intelligence to accurately detect and prioritize security incidents. It applies machine learning algorithms, behavioral analysis, and correlation rules to identify known and unknown threats. Integration with threat intelligence feeds the analysis with up-to-date threat information. 3. **Automated Incident Response:** The platform enables automated incident response by providing playbooks and workflows that help orchestrate response actions. Sentinel integrates with other security tools and services, allowing automated mitigation steps and reducing manual effort in incident response. 4. **Customization and Extensibility:** Microsoft Sentinel offers customization and extensibility options to tailor the platform to specific organizational needs. It provides a query language and visualization tools to create custom dashboards, reports, and alerts. Additionally, organizations can integrate their own threat intelligence feeds and develop custom connectors or playbooks. 5. 
**Collaboration and Integration:** Sentinel enhances collaboration among security teams by offering a unified view of security events and incidents. It allows teams to collaborate within the platform, share notes, and assign tasks. Integration with other Microsoft security products, services, and third-party solutions enhances the overall security ecosystem.

## **Microsoft Sentinel with InfosecTrain**

[InfosecTrain](https://www.infosectrain.com/) is a recognized provider of cybersecurity training and consulting services. We offer expertise in implementing and optimizing [Microsoft Sentinel Training](https://www.infosectrain.com/courses/azure-sentinel-training/), a powerful Security Information and Event Management (SIEM) solution. InfosecTrain can assist organizations by providing implementation guidance, specialized training programs, support in SOC development, integration of threat intelligence, and continuous monitoring and tuning. Collaborating with InfosecTrain enables organizations to effectively leverage Microsoft Sentinel, enhance their security capabilities, and strengthen their overall security posture.
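The correlation-rule idea described under "Advanced Analytics and Threat Intelligence" can be illustrated with a toy sketch. Note this is a plain-Python illustration of the concept only — Sentinel itself expresses such rules in KQL over ingested logs — and the event fields (`timestamp`, `user`, `outcome`) are invented for the example:

```python
from collections import defaultdict

# Toy correlation rule: flag any account with `threshold` or more
# failed sign-ins inside a sliding time window. This mirrors the idea
# behind a Sentinel analytics rule, not its actual implementation.
def failed_login_alerts(events, threshold=3, window_seconds=300):
    """events: iterable of (timestamp_seconds, user, outcome) tuples."""
    failures = defaultdict(list)
    alerts = set()
    for ts, user, outcome in sorted(events):
        if outcome != "failure":
            continue
        bucket = failures[user]
        bucket.append(ts)
        # Drop failures that fell out of the sliding window.
        while bucket and ts - bucket[0] > window_seconds:
            bucket.pop(0)
        if len(bucket) >= threshold:
            alerts.add(user)
    return alerts
```

In a real deployment the equivalent rule would run continuously over collected sign-in logs and feed the incident queue rather than return a set.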
shivamchamoli18
1,907,173
Learning Python as a JavaScript Developer: A Comprehensive Guide
As a JavaScript and Python full-stack developer, I understand the value of continuously expanding...
0
2024-07-01T05:34:47
https://dev.to/ahmadnyc/learning-python-as-a-javascript-developer-a-comprehensive-guide-4239
As a JavaScript and Python full-stack developer, I understand the value of continuously expanding one's coding toolkit. Like many developers, my journey began with mastering HTML, CSS, and JavaScript—the foundational trio of web development. Over time, I decided to venture into Python and quickly realized its versatility and practicality, particularly in handling backend data and automating tasks like web scraping.

If you're considering learning Python but feeling hesitant, rest assured—it shares a similar learning curve with JavaScript, making it accessible for developers at any level. In fact, many find Python easier to learn due to its straightforward syntax and emphasis on readability. For those transitioning from JavaScript to Python, the shift is often smoother, leveraging existing programming concepts while exploring new functionalities and applications.

## Why Learn Python?

- Versatility: Python is widely used in web development, data analysis, and machine learning.
- Ease of Use: Its simple syntax and readability reduce complexity and promote code clarity.
- Popularity: Python ranks among the top programming languages, supported by a robust community and extensive library ecosystem.

## Essential Syntax in Python and JavaScript

## Data Types and Variables

JavaScript Example:

```javascript
// Data types and variables in JavaScript
let message = "Hello, JavaScript!";
let num = 42;
let pi = 3.14;
let isValid = true;
```

Python Example:

```python
# Data types and variables in Python
message = "Hello, Python!"
num = 42
pi = 3.14
is_valid = True
```

## Code Blocks and Functions

JavaScript Example:

```javascript
// Code blocks and functions in JavaScript
function greet(name) {
  console.log(`Hello, ${name}!`);
}

greet("World");
```

Python Example:

```python
# Code blocks and functions in Python
def greet(name):
    print(f"Hello, {name}!")

greet("World")
```

## Conditionals

JavaScript Example:

```javascript
// Conditionals in JavaScript
if (num > 0) {
  console.log("Positive number");
} else if (num === 0) {
  console.log("Zero");
} else {
  console.log("Negative number");
}
```

Python Example:

```python
# Conditionals in Python
if num > 0:
    print("Positive number")
elif num == 0:
    print("Zero")
else:
    print("Negative number")
```

## Lists (Arrays) and Dictionaries (Objects)

JavaScript Example:

```javascript
// Arrays and objects in JavaScript
let fruits = ["apple", "banana", "cherry"];
let person = {name: "Alice", age: 30, city: "New York"};
```

Python Example:

```python
# Lists and dictionaries in Python
fruits = ["apple", "banana", "cherry"]
person = {"name": "Alice", "age": 30, "city": "New York"}
```

## Iteration

JavaScript Example:

```javascript
// Iteration in JavaScript
for (let i = 0; i < fruits.length; i++) {
  console.log(fruits[i]);
}

while (num > 0) {
  console.log(num);
  num--;
}
```

Python Example:

```python
# Iteration in Python
for fruit in fruits:
    print(fruit)

while num > 0:
    print(num)
    num -= 1
```

## Differences and Commonalities

- Syntax: Python uses indentation for code blocks; JavaScript uses curly braces {}.
- Typing: Python is dynamically typed; JavaScript is loosely typed.
- Function Definitions: Python uses def; JavaScript uses function or arrow functions.
- Data Structures: Lists ([]) and dictionaries ({}) in Python vs. arrays ([]) and objects ({}) in JavaScript.

## Tips for Learning Python

- Master Fundamentals: Start with basic syntax, data structures, and control flow.
- LeetCode: Rebuild solutions you have already written in your more familiar language using Python to reinforce the syntax.
- Build Projects: Apply concepts through practical projects.
- Resources: Use the official Python documentation and the [Codecademy Learn Python 3](https://www.codecademy.com/learn/learn-python-3) course.

## Finale

Python's simplicity, versatility, and strong community support make it an invaluable addition for JavaScript developers. Whether you're diving into machine learning, data science, or web development, Python will allow you to tackle diverse challenges effectively and efficiently.

Thanks for reading and good luck!
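As a small worked example tying several of the syntax sections above together (functions, dictionaries, and iteration), here is a self-contained sketch of the kind of routine worth porting from JavaScript to Python — the function name and data are made up for illustration:

```python
# Count how often each fruit appears in a list -- the sort of small
# routine a JavaScript developer might first reach for reduce() to write.
def count_fruits(fruits):
    counts = {}
    for fruit in fruits:  # Python's for-in iterates values, not indices
        counts[fruit] = counts.get(fruit, 0) + 1
    return counts

# dict.items() is the Python counterpart of Object.entries() in JavaScript.
for name, n in count_fruits(["apple", "banana", "apple"]).items():
    print(f"{name}: {n}")
```

Rewriting a handful of such snippets is a quick way to internalize the indentation-based blocks and `dict` idioms discussed above.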
ahmadnyc
1,907,154
Building a Dynamic Blog with Flask and HTMX
Creating a dynamic blog using Flask and HTMX can be both fun and rewarding. This guide will take you...
0
2024-07-01T05:33:14
https://devtoys.io/2024/06/30/building-a-dynamic-blog-with-flask-and-htmx/
htmx, python, webdev, devtoys
---
canonical_url: https://devtoys.io/2024/06/30/building-a-dynamic-blog-with-flask-and-htmx/
---

Creating a dynamic blog using Flask and HTMX can be both fun and rewarding. This guide will take you through the entire process, focusing on making your blog interactive without the need for a complex single-page application (SPA) framework. By the end, you'll have a fully functional blog where users can create, read, update, and delete posts seamlessly.

---

## What You'll Need

- Basic knowledge of HTML, CSS, and JavaScript
- Basic understanding of Python and Flask (or your preferred backend framework)
- Python and pip installed on your machine

---

## Step 1: Setting Up Your Environment

**1.1 Install Flask**

First things first, let's set up our Flask environment. Open your terminal and create a virtual environment, then install Flask:

```bash
python -m venv venv
source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
pip install Flask Flask-SQLAlchemy
```

**1.2 Create the Project Structure**

Organize your project directory as follows:

```bash
blog_app/
├── static/
│   ├── css/
│   │   └── styles.css
│   └── js/
│       └── scripts.js
├── templates/
│   ├── base.html
│   ├── index.html
│   ├── post.html
│   ├── edit_post.html
│   └── post_snippet.html
├── app.py
└── models.py
```

---

## Step 2: Create the Flask Backend

**2.1 Define Models**

In models.py, define a simple data model for blog posts using SQLAlchemy:

```python
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class Post(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(100), nullable=False)
    content = db.Column(db.Text, nullable=False)
```

**2.2 Set Up Flask Application**

Next, set up your Flask application in app.py:

*Note: SQLite is included with Python as a built-in library, which means you don't need to install it separately. SQLite is a lightweight, disk-based database that doesn't require a separate server process and allows access to the database using a nonstandard variant of the SQL query language.*

```python
from flask import Flask, render_template, request, redirect, url_for
from models import db, Post

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///blog.db'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False

db.init_app(app)

with app.app_context():
    db.create_all()  # Create database tables

@app.before_request
def method_override():
    if request.method == 'POST' and '_method' in request.form:
        method = request.form['_method'].upper()
        if method in ['PUT', 'DELETE', 'PATCH']:
            request.environ['REQUEST_METHOD'] = method

@app.route('/')
def index():
    posts = Post.query.all()
    return render_template('index.html', posts=posts)

@app.route('/post/<int:post_id>')
def post(post_id):
    post = Post.query.get_or_404(post_id)
    return render_template('post.html', post=post)

@app.route('/create', methods=['POST'])
def create():
    try:
        title = request.form['title']
        content = request.form['content']
        if not title or not content:
            raise ValueError("Title and content cannot be empty")
        new_post = Post(title=title, content=content)
        db.session.add(new_post)
        db.session.commit()
        # Render the new post as HTML
        return render_template('post_snippet.html', post=new_post)
    except Exception as e:
        print(f"Error occurred: {e}")
        db.session.rollback()
        return '', 500  # Return an error response

@app.route('/edit/<int:post_id>', methods=['GET', 'POST'])
def edit(post_id):
    post = Post.query.get_or_404(post_id)
    if request.method == 'POST':
        post.title = request.form['title']
        post.content = request.form['content']
        db.session.commit()
        return redirect(url_for('post', post_id=post.id))
    return render_template('edit_post.html', post=post)

@app.route('/delete/<int:post_id>', methods=['POST', 'DELETE'])
def delete(post_id):
    post = Post.query.get_or_404(post_id)
    db.session.delete(post)
    db.session.commit()
    # Return an empty div to swap out the deleted post
    return '<div id="post-{}"></div>'.format(post_id)

if __name__ == '__main__':
    app.run(debug=True)
```

---

## Step 3: Create HTML Templates

**3.1 Base Template**

In templates/base.html, define the base HTML structure:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Blog App</title>
    <link rel="stylesheet" href="{{ url_for('static', filename='css/styles.css') }}">
    <script src="https://unpkg.com/htmx.org@2.0.0"></script>
    <script src="{{ url_for('static', filename='js/scripts.js') }}" defer></script>
</head>
<body>
    <nav class="navbar">
        <a href="{{ url_for('index') }}">Home</a>
    </nav>
    <div class="container">
        {% block content %}{% endblock %}
    </div>
</body>
</html>
```

**3.2 Index Template**

In templates/index.html, create the index page to list all posts:

```html
{% extends "base.html" %}
{% block content %}
<h1>Blog Posts</h1>
<form hx-post="{{ url_for('create') }}" hx-target="#posts" hx-swap="beforeend" method="post">
    <input type="text" name="title" placeholder="Title" required>
    <textarea name="content" placeholder="Content" required></textarea>
    <button type="submit" class="btn btn-primary">Create</button>
</form>
<div id="posts">
    {% for post in posts %}
        {% include 'post_snippet.html' %}
    {% endfor %}
</div>
{% endblock %}
```

**3.3 Post Template**

In templates/post.html, create the template for displaying a single post:

```html
{% extends "base.html" %}
{% block content %}
<div class="post">
    <h1>{{ post.title }}</h1>
    <p>{{ post.content }}</p>
    <div class="post-buttons">
        <a href="{{ url_for('edit', post_id=post.id) }}" class="btn btn-primary">Edit</a>
    </div>
</div>
{% endblock %}
```

**3.4 Post Snippet Template**

In templates/post_snippet.html, create a snippet for individual posts to be used for dynamic updates:

```html
<div class="post" id="post-{{ post.id }}">
    <h2><a href="{{ url_for('post', post_id=post.id) }}">{{ post.title }}</a></h2>
    <p>{{ post.content }}</p>
    <div class="post-buttons">
        <form action="{{ url_for('delete', post_id=post.id) }}"
              hx-delete="{{ url_for('delete', post_id=post.id) }}"
              hx-target="#post-{{ post.id }}"
              hx-swap="outerHTML"
              method="post"
              class="delete-form">
            <a href="{{ url_for('edit', post_id=post.id) }}" class="btn btn-primary">Edit</a>
            <input type="hidden" name="_method" value="DELETE">
            <button type="submit" class="btn btn-danger">Delete</button>
        </form>
    </div>
</div>
```

**3.5 Edit Post Template**

In templates/edit_post.html, create the template for editing a post:

```html
{% extends "base.html" %}
{% block content %}
<h1>Edit Post</h1>
<form method="post">
    <input type="text" name="title" value="{{ post.title }}" required>
    <textarea name="content" required>{{ post.content }}</textarea>
    <button type="submit" class="btn btn-primary">Save</button>
</form>
{% endblock %}
```

🔥 Fired up to learn HTMX in more depth? This is a MUST read for leveling up. 🆙

[![Hypermedia Systems Kindle Edition](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qqcb9sa06357ri79yxw0.jpg)](https://amzn.to/4eCUL3M)

## [Hypermedia Systems Kindle Edition](https://amzn.to/4eCUL3M)

---

## Step 4: Styling the Application

Create a simple CSS file (styles.css) to style your blog:

```css
/* General Styles */
body {
    font-family: Arial, sans-serif;
    margin: 0;
    padding: 0;
    background-color: #f4f4f4;
}

.container {
    width: 80%;
    margin: 0 auto;
    padding: 20px;
    background-color: #fff;
    box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
}

.navbar {
    background-color: #343a40;
    color: #fff;
    padding: 10px;
    text-align: center;
}

.navbar a {
    color: #fff;
    margin-right: 10px;
    text-decoration: none;
}

h1 {
    color: #343a40;
    text-align: center;
    margin-bottom: 20px;
}

/* Form Styles */
form {
    margin-bottom: 20px;
    padding: 20px;
    background: #f9f9f9;
    border-radius: 5px;
    box-shadow: 0 0 5px rgba(0, 0, 0, 0.1);
}

form input,
form textarea {
    width: 100%;
    padding: 10px;
    margin: 10px 0;
    border: 1px solid #ccc;
    border-radius: 5px;
}

form button {
    font-size: 0.8rem;
    background-color: #007bff;
    color: #fff;
    border: none;
    padding: 10px 20px;
    cursor: pointer;
    border-radius: 5px;
}

form button:hover {
    background-color: #0056b3;
}

/* Post Styles */
.post {
    padding: 20px;
    background: #fff;
    margin-bottom: 20px;
    border-radius: 5px;
    box-shadow: 0 0 5px rgba(0, 0, 0, 0.1);
    transition: transform 0.2s;
}

.post:hover {
    transform: scale(1.02);
}

.post h2 {
    margin-top: 0;
    color: #6c757d;
}

.post p {
    margin: 10px 0;
    color: #6c757d;
}

/* Post Buttons Styles */
.post-buttons {
    display: flex;
    gap: 10px;
    margin-top: 10px;
}

.post-buttons .btn {
    padding: 8px 16px;
    border-radius: 5px;
    font-size: 0.8rem;
    border: none;
    cursor: pointer;
    text-align: center;
    transition: background-color 0.3s, color 0.3s;
    display: flex;
    align-items: center;
    justify-content: center;
    text-decoration: none; /* Remove underline for anchor tags */
}

.post-buttons .edit-btn,
.post-buttons .delete-btn {
    display: inline-flex;
    align-items: center;
    justify-content: center;
}

.post-buttons .btn-primary {
    background-color: #007bff;
    color: #fff;
}

.post-buttons .btn-primary:hover {
    background-color: #0056b3;
}

.post-buttons .btn-danger {
    background-color: #dc3545;
    color: #fff;
}

.post-buttons .btn-danger:hover {
    background-color: #c82333;
}

.delete-form {
    display: flex;
    align-items: center;
    gap: 10px; /* Ensure space between the buttons within the form */
}
```

---

## Step 5: Add Enhanced Debugging for HTMX

Create a simple JavaScript file (scripts.js) to handle HTMX events for better debugging:

```javascript
/* static/js/scripts.js */
document.addEventListener('htmx:afterRequest', (event) => {
    console.log('HTMX request completed:', event.detail);
});

document.addEventListener('htmx:error', (event) => {
    console.error('HTMX request error:', event.detail);
});
```

## Step 6: Testing Your Application

Now that you have set up the backend, created the HTML templates, and added HTMX for interactivity, it's time to test your application. Make sure your Flask server is running by using the command:

```bash
flask --debug run
```

*Open your web browser and navigate to http://127.0.0.1:5000/. You should see your blog's home page, where you can create, view, edit, and delete blog posts.*

---

**Create a Post**

Enter a title and content in the form at the top of the page. Click the "Create" button. The new post should appear instantly on the page without a full page reload.

---

**View a Post**

Click on the title of a post to view its full content on a separate page.

---

**Edit a Post**

Click the "Edit" link next to a post. Modify the title or content and click "Save". You should be redirected to the updated post's page. Click "Home" at the top to go back to the home page.

---

**Delete a Post**

Click the "Delete" button next to a post. The post should be removed instantly without a full page reload.

---

## Conclusion

In this comprehensive tutorial, you have learned how to create a dynamic blog application using Flask and HTMX. Here's a quick recap of what we've covered:

- Setting up a Flask environment and project structure
- Creating and configuring a Flask application
- Defining models with SQLAlchemy
- Creating HTML templates for your blog
- Adding HTMX attributes for dynamic form submission and deletion
- Styling your application with CSS

By following these steps, you can build modern web applications with enhanced interactivity without the need for complex single-page application frameworks. HTMX allows you to keep your workflow simple and productive while providing a smooth user experience.
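One piece of the backend that is easy to sanity-check in isolation is the `_method` override hook from Step 2.2 — the trick that lets a plain HTML form (which can only send GET or POST) trigger the DELETE route. The sketch below is a plain-Python restatement of that decision logic, assuming the same `_method` form field as in app.py:

```python
# Plain-Python restatement of the method_override hook from app.py:
# a hidden `_method` form field is promoted to the real HTTP verb
# when it names one of the allowed overrides.
ALLOWED_OVERRIDES = {'PUT', 'DELETE', 'PATCH'}

def resolve_method(request_method, form):
    """Return the effective HTTP method for a request."""
    if request_method == 'POST' and '_method' in form:
        override = form['_method'].upper()
        if override in ALLOWED_OVERRIDES:
            return override
    return request_method
```

If the delete button ever stops working, checking this logic (and that the hidden input is actually submitted) is a good first debugging step.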
## Further Reading and Resources

To deepen your understanding and keep up with the latest trends and best practices in web development, here are some resources you might find helpful:

- [HTMX Documentation](https://htmx.org/)
- [Flask Documentation](https://flask.palletsprojects.com/en/3.0.x/)
- [SQLAlchemy Documentation](https://docs.sqlalchemy.org/en/20/)

By leveraging these resources, you can continue to enhance your skills. Happy coding!

This guide provides a complete walkthrough for creating a dynamic blog using Flask and HTMX, focusing on interactivity and simplicity. By following these steps, you'll have a modern, interactive blog application that can easily be expanded and customized to meet your needs.

## 🔥 If you enjoyed this article come visit our hacker community to connect and find more!

[DevToys.io](https://devtoys.io) 👽
3a5abi
1,888,205
Jenkins on docker
Welcome to the comprehensive guide on installing Jenkins on Docker. Jenkins is a open source...
0
2024-07-01T05:32:05
https://dev.to/prateektom/jenkins-on-docker-23n8
devops, jenkins, docker
Welcome to the comprehensive guide on installing Jenkins on Docker. Jenkins is an open-source automation server that helps in building, deploying, and automating projects. Docker simplifies the process by providing a consistent environment across multiple systems. Let's dive into the process of setting up Jenkins on Docker.

**Prerequisites**

Before we begin, make sure you have the following:

1. **Docker Installed:** Verify Docker is installed by running this command in your shell:

```
docker --version
Docker version 25.0.4, build 1a576c5
```

2. **Basic Command Line Knowledge:** Familiarity with command-line operations will be beneficial.

**Step 1: Pull the Docker dind Image**

```
docker pull docker:dind
```

**Step 2: Create a Jenkins Bridge Network**

```
docker network create -d bridge jenkins
```

**Step 3: Run the docker:dind Image**

```
docker run \
  --name jenkins-docker \
  --rm \
  --detach \
  --privileged \
  --network jenkins \
  --network-alias docker \
  --env DOCKER_TLS_CERTDIR=/certs \
  --volume jenkins-docker-certs:/certs/client \
  --volume jenkins-data:/var/jenkins_home \
  --publish 2376:2376 \
  docker:dind \
  --storage-driver overlay2
```

**Step 4: Customize the official Jenkins Docker image by executing the following two steps:**

a. Create a Dockerfile with the following content:

```
FROM jenkins/jenkins:2.452.2-jdk17
USER root
RUN apt-get update && apt-get install -y lsb-release
RUN curl -fsSLo /usr/share/keyrings/docker-archive-keyring.asc \
  https://download.docker.com/linux/debian/gpg
RUN echo "deb [arch=$(dpkg --print-architecture) \
  signed-by=/usr/share/keyrings/docker-archive-keyring.asc] \
  https://download.docker.com/linux/debian \
  $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
RUN apt-get update && apt-get install -y docker-ce-cli
USER jenkins
RUN jenkins-plugin-cli --plugins "blueocean docker-workflow"
```

b. Build a new Docker image from this Dockerfile, and assign the image a meaningful name, such as "myjenkins-blueocean:2.452.2-1":

```
docker build -t myjenkins-blueocean:2.452.2-1 .
```

**Step 5: Run your own myjenkins-blueocean:2.452.2-1 image as a container in Docker using the following docker run command:**

```
docker run \
  --name jenkins-blueocean \
  --restart=on-failure \
  --detach \
  --network jenkins \
  --env DOCKER_HOST=tcp://docker:2376 \
  --env DOCKER_CERT_PATH=/certs/client \
  --env DOCKER_TLS_VERIFY=1 \
  --publish 8080:8080 \
  --publish 50000:50000 \
  --volume jenkins-data:/var/jenkins_home \
  --volume jenkins-docker-certs:/certs/client:ro \
  myjenkins-blueocean:2.452.2-1
```

**Step 6: Access your Jenkins controller.**

- Open a web browser.
- Navigate to http://localhost:8080.
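Once the container is up, a quick scripted check can confirm the controller is answering before you open the browser. This is an optional sketch (not part of the original guide); the host and port mirror the `--publish 8080:8080` mapping above:

```python
import urllib.request
import urllib.error

def controller_url(host="localhost", port=8080, path="/login"):
    """Build the URL for the Jenkins controller published by `docker run`."""
    return f"http://{host}:{port}{path}"

def is_jenkins_up(url, timeout=5):
    """Return True if the controller answers with any HTTP response."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return True   # an HTTP error still means Jenkins is listening
    except (urllib.error.URLError, OSError):
        return False  # connection refused: container likely not ready yet

if __name__ == "__main__":
    url = controller_url()
    print(f"{url} up: {is_jenkins_up(url)}")
```

Note that Jenkins can take a minute to start inside the container, so a `False` right after `docker run` usually just means it is still booting.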
prateektom
1,907,170
Star Sailors V2 is publicly available...sort of
(and so begins my long-await return to Dev.to) Hey. Welcome. Bonjour. My name's Liam, and we've just...
27,920
2024-07-01T05:30:17
https://scrooby.micro.blog/2024/06/30/224434.html
starsailors, citizenscience, gamedev, flask
(and so begins my long-awaited return to Dev.to)

Hey. Welcome. Bonjour. My name's Liam, and we've just published the first pre-release version of #Star-Sailors Version 2. So I'm going to be talking a little bit about that here.

What is Star Sailors? Well, the simplest way to put it is that it's a series of protocols and practices for meaningful citizen science games. Right now, its main form is a web application that allows you to visit different planets and collect all sorts of resources and data. We've used the highly appropriate domain name [starsailors.space](https://starsailors.space), which I am still surprised was available when I registered it in September 2023.

I've been working on Star Sailors for a number of years, and last year I came out with the first version of...well, anything. Before that, it was a few years of code snippets and not much else. Version 1 of Star Sailors was a website that showed you lightcurve data from exoplanet candidates and allowed users to classify them. It had a simple interface and some basic gamification and community mechanics. One day I will go into more detail about what I was doing in the years prior, but right now I want to focus on the big announcement: Version 2 is available for basic testing.

### What is version 2?

- It's still a web application
- Users still get given data to classify

But...

- It's more like a game (now)
- There's resource management and more stuff like:
  1. More citizen science modules (mars rover data, martian cloud spectroscopy, wildlife tagging & conservation, etc)
  2. Collaboration between users
  3. Missions to complete
  4. Vehicles to travel in
  5. Bases & structures to build

### What does "pre-release" mean?

Essentially, we have something working that I want to share, but it doesn't meet the goals/features I want for Version 2 (all the features described above, and a few more surprises).

Right now, Star Sailors is going through the fifth season (our third overall) of Buildspace's Nights & Weekends incubator. As part of this incubator, we have to get users to test out what we're building. Additionally, we don't want to put in the huge amount of work to get to V2, release it, and find that it has a bunch of problems or users find it confusing.

So...

- We're doing pre-releases
- We publish a new version weekly
- With new content, items, things to explore, and hopefully some fixes/improvements from the week before
- This will culminate in a full release in about 4 weeks (from time of publishing - 01/07/24)

But you can start playing it now. So, please do. And let us know what you think. And come back every now and then to see how we're getting along. And, feel free to get in touch: liam@talonova.space
gizmotronn
1,907,169
Stay ahead in web development: latest news, tools, and insights #39
weeklyfoo #39 is here: your weekly digest of all webdev news you need to know! This time you'll find 52 valuable links in 6 categories! Enjoy!
0
2024-07-01T05:29:31
https://weeklyfoo.com/foos/foo-039/
webdev, weeklyfoo, javascript, node
weeklyfoo #39 is here: your weekly digest of all webdev news you need to know! This time you'll find 52 valuable links in 6 categories! Enjoy!

## 🚀 Read it!

- [A Rant about Front-end Development](https://blog.frankmtaylor.com/2024/06/20/a-rant-about-front-end-development/?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo): Such an entertaining read! / *engineering* / 42 min read
- [Unsafe Pricing at Any Scale](https://melkat.blog/p/unsafe-pricing?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo): Especially the part about blocking AI bots is a mandatory read! / *serverless* / 4 min read
- [htmx sucks](https://htmx.org/essays/htmx-sucks/?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo): Such a great read! / *htmx* / 10 min read

---

## 📰 Good to know

- [Exposition of Frontend Build Systems](https://sunsetglow.net/posts/frontend-build-systems.html?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo): Overview of modern tooling around frontend builds. / *ci* / 15 min read
- [Getting 100% code coverage doesn't eliminate bugs](https://blog.codepipes.com/testing/code-coverage.html?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo): TL;DR - Getting 100% coverage on a project doesn't mean you have zero bugs. Here is an extreme example to prove it. / *tests* / 5 min read
- [How my weekend project turned into a 3 years journey](https://blog.znote.io/2024/side-hustle-journey/?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo): Nice read about an idea that started to rise. / *znote*, *startups* / 7 min read
- [New JavaScript Set methods](https://developer.mozilla.org/en-US/blog/javascript-set-methods/?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo): New JavaScript Set methods are arriving! Since Firefox 127, these methods are available in most major browser engines, which means you won't need a polyfill to make them work everywhere. / *javascript* / 14 min read
- [Local, first, forever](https://tonsky.me/blog/crdt-filesync/?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo): Thinking about a persistence service that stays longer than the average service. / *local-first* / 6 min read
- [Resilient Sync for Local First](https://holtwick.de/en/blog/localfirst-resilient-sync?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo): Syncing data in a local first context is not trivial but manageable. / *local-first* / 8 min read
- [Microfeatures I Love in Blogs and Personal Websites](https://danilafe.com/blog/blog_microfeatures/?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo): Sidenotes, TOC, reading progress - lots of things I never considered. But it's worth thinking about integrating some of them. / *blogs* / 19 min read
- [Useful and Overlooked Skills](https://collabfund.com/blog/useful-and-overlooked-skills/?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo): On his way to be sworn in as the most powerful man in the world, Franklin Delano Roosevelt had to be lifted out of his car and carried up the stairs. / *career* / 6 min read
- [Node.js is Here to Stay](https://blog.platformatic.dev/nodejs-is-here-to-stay?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo): A deep dive into the metrics / *node* / 15 min read
- [Catching Compromised Cookies](https://slack.engineering/catching-compromised-cookies/?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo): How we automatically detect stolen session cookies / *slack*, *sessions* / 14 min read
- [My spiciest take on tech hiring](https://www.haskellforall.com/2024/06/my-spiciest-take-on-tech-hiring.html?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo): ...is that you only need to administer one technical interview and one non-technical interview (each no more than an hour long). / *career*, *hiring* / 10 min read
- [Polyfill supply chain attack hits 100K+ sites](https://sansec.io/research/polyfill-supply-chain-attack?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo): The new Chinese owner of the popular Polyfill JS project injects malware into more than 100 thousand sites. / *security* / 6 min read
- [Exploring Randomness In JavaScript](https://www.bennadel.com/blog/4669-exploring-randomness-in-javascript.htm?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo): Math.random() and Crypto.getRandomValues() compared. / *random* / 1 min read
- [gRPC - The Bad Parts](https://kmcd.dev/posts/grpc-the-bad-parts/?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo): Downsides of gRPC / *grpc* / 9 min read
- [Maintaining dotfiles](https://knowler.dev/blog/maintaining-dotfiles?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo): Will definitely try out this bare repo method. / *git*, *dotfiles* / 4 min read

---

## 🧰 Tools

- <a href="https://www.notionavatarmaker.com/?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener"
ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vd3d3Lm5vdGlvbmF2YXRhcm1ha2VyLmNvbS8iLCJwcm9qZWN0Ijoid2Vla2x5Zm9vIiwiaW5kZXgiOjM5LCJzZWN0aW9uIjoidG9vbHMiLCJzb3VyY2UiOiJ3ZWIifX0%3D">Notion Avatar Maker</a>: A Notion Avatar is a personalized avatar that aligns with the design style of Notion, a widely-used note-taking and organization tool platform.<small> / </small><small>*avatars*</small> - <a href="https://github.com/TomaszRewak/js-spread-grid?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ2l0aHViLmNvbS9Ub21hc3pSZXdhay9qcy1zcHJlYWQtZ3JpZCIsInByb2plY3QiOiJ3ZWVrbHlmb28iLCJpbmRleCI6MzksInNlY3Rpb24iOiJ0b29scyIsInNvdXJjZSI6IndlYiJ9fQ%3D%3D">SpreadGrid</a>: JS library for creating high-performance grid-based applications<small> / </small><small>*grids*</small> - <a href="https://github.com/anthropics/anthropic-sdk-typescript?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ2l0aHViLmNvbS9hbnRocm9waWNzL2FudGhyb3BpYy1zZGstdHlwZXNjcmlwdCIsInByb2plY3QiOiJ3ZWVrbHlmb28iLCJpbmRleCI6MzksInNlY3Rpb24iOiJ0b29scyIsInNvdXJjZSI6IndlYiJ9fQ%3D%3D">Anthropic TypeScript API Library</a>: Access to Anthropic's safety-first language model APIs<small> / </small><small>*sdk*, *anthropic*</small> - <a href="https://maneken.app/?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vbWFuZWtlbi5hcHAvIiwicHJvamVjdCI6IndlZWtseWZvbyIsImluZGV4IjozOSwic2VjdGlvbiI6InRvb2xzIiwic291cmNlIjoid2ViIn19">Maneken</a>: The browser powered mockup editor<small> / 
</small><small>*mockups*, *images*</small> - <a href="https://github.com/sxyazi/yazi?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ2l0aHViLmNvbS9zeHlhemkveWF6aSIsInByb2plY3QiOiJ3ZWVrbHlmb28iLCJpbmRleCI6MzksInNlY3Rpb24iOiJ0b29scyIsInNvdXJjZSI6IndlYiJ9fQ%3D%3D">Yazi</a>: Blazing Fast Terminal File Manager<small> / </small><small>*cli*</small> - <a href="https://github.com/ekzhang/rushlight?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ2l0aHViLmNvbS9la3poYW5nL3J1c2hsaWdodCIsInByb2plY3QiOiJ3ZWVrbHlmb28iLCJpbmRleCI6MzksInNlY3Rpb24iOiJ0b29scyIsInNvdXJjZSI6IndlYiJ9fQ%3D%3D">Rushlight</a>: Real-time collaborative code editing on your own infrastructure<small> / </small><small>*editors*</small> - <a href="https://github.com/alexcambose/custom-cache-decorator?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ2l0aHViLmNvbS9hbGV4Y2FtYm9zZS9jdXN0b20tY2FjaGUtZGVjb3JhdG9yIiwicHJvamVjdCI6IndlZWtseWZvbyIsImluZGV4IjozOSwic2VjdGlvbiI6InRvb2xzIiwic291cmNlIjoid2ViIn19">Cache Decorator</a>: A TypeScript library providing a customizable cache decorator for methods. 
This library allows you to easily cache method results with configurable caching mechanisms.<small> / </small><small>*cache*</small> - <a href="https://github.com/dotenvx/dotenvx?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ2l0aHViLmNvbS9kb3RlbnZ4L2RvdGVudngiLCJwcm9qZWN0Ijoid2Vla2x5Zm9vIiwiaW5kZXgiOjM5LCJzZWN0aW9uIjoidG9vbHMiLCJzb3VyY2UiOiJ3ZWIifX0%3D">dotenvx</a>: a better dotenv–from the creator of dotenv<small> / </small><small>*dotenv*</small> - <a href="https://github.com/glasskube/glasskube?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ2l0aHViLmNvbS9nbGFzc2t1YmUvZ2xhc3NrdWJlIiwicHJvamVjdCI6IndlZWtseWZvbyIsImluZGV4IjozOSwic2VjdGlvbiI6InRvb2xzIiwic291cmNlIjoid2ViIn19">Glasskube</a>: The next generation Package Manager for Kubernetes<small> / </small><small>*k8s*, *kubernetes*</small> - <a href="https://logoipsum.com/?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vbG9nb2lwc3VtLmNvbS8iLCJwcm9qZWN0Ijoid2Vla2x5Zm9vIiwiaW5kZXgiOjM5LCJzZWN0aW9uIjoidG9vbHMiLCJzb3VyY2UiOiJ3ZWIifX0%3D">Logoipsum</a>: 100 free placeholder logos<small> / </small><small>*images*</small> - <a href="https://kaplayjs.com/?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8va2FwbGF5anMuY29tLyIsInByb2plY3QiOiJ3ZWVrbHlmb28iLCJpbmRleCI6MzksInNlY3Rpb24iOiJ0b29scyIsInNvdXJjZSI6IndlYiJ9fQ%3D%3D">Kaplay</a>: KAPLAY is a JavaScript 
library that helps you make games fast and fun!<small> / </small><small>*games*, *javascript*</small> - <a href="https://github.com/junkdog/tachyonfx?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ2l0aHViLmNvbS9qdW5rZG9nL3RhY2h5b25meCIsInByb2plY3QiOiJ3ZWVrbHlmb28iLCJpbmRleCI6MzksInNlY3Rpb24iOiJ0b29scyIsInNvdXJjZSI6IndlYiJ9fQ%3D%3D">Tachyonfx</a>: shader-like effects library for ratatui applications<small> / </small><small>*cli*</small> - <a href="https://github.com/pdfslick/pdfslick?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ2l0aHViLmNvbS9wZGZzbGljay9wZGZzbGljayIsInByb2plY3QiOiJ3ZWVrbHlmb28iLCJpbmRleCI6MzksInNlY3Rpb24iOiJ0b29scyIsInNvdXJjZSI6IndlYiJ9fQ%3D%3D">pdfslick</a>: View and Interact with PDFs in React SolidJS, Svelte and JavaScript apps<small> / </small><small>*pdf*</small> - <a href="https://github.com/zheksoon/snapdrag?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ2l0aHViLmNvbS96aGVrc29vbi9zbmFwZHJhZyIsInByb2plY3QiOiJ3ZWVrbHlmb28iLCJpbmRleCI6MzksInNlY3Rpb24iOiJ0b29scyIsInNvdXJjZSI6IndlYiJ9fQ%3D%3D">Snapdrag</a>: A simple, lightweight, and performant drag and drop library for React and vanilla JS<small> / </small><small>*dnd*</small> - <a href="https://github.com/i365dev/LetterDrop?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" 
ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ2l0aHViLmNvbS9pMzY1ZGV2L0xldHRlckRyb3AiLCJwcm9qZWN0Ijoid2Vla2x5Zm9vIiwiaW5kZXgiOjM5LCJzZWN0aW9uIjoidG9vbHMiLCJzb3VyY2UiOiJ3ZWIifX0%3D">LetterDrop</a>: LetterDrop is a secure and efficient newsletter management service powered by Cloudflare Workers, enabling easy creation, distribution, and subscription management of newsletters.<small> / </small><small>*newsletters*</small> - <a href="https://scroll.pub/blog/stamp.html?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vc2Nyb2xsLnB1Yi9ibG9nL3N0YW1wLmh0bWwiLCJwcm9qZWN0Ijoid2Vla2x5Zm9vIiwiaW5kZXgiOjM5LCJzZWN0aW9uIjoidG9vbHMiLCJzb3VyY2UiOiJ3ZWIifX0%3D">Stamp</a>: a mini-language for project templates<small> / </small><small>*cli*</small> - <a href="https://refero.design/?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vcmVmZXJvLmRlc2lnbi8iLCJwcm9qZWN0Ijoid2Vla2x5Zm9vIiwiaW5kZXgiOjM5LCJzZWN0aW9uIjoidG9vbHMiLCJzb3VyY2UiOiJ3ZWIifX0%3D">Refero</a>: Explore real-world designs from the best products<small> / </small><small>*ux*</small> - <a href="https://github.com/kciter/ascii-3d-renderer.js?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ2l0aHViLmNvbS9rY2l0ZXIvYXNjaWktM2QtcmVuZGVyZXIuanMiLCJwcm9qZWN0Ijoid2Vla2x5Zm9vIiwiaW5kZXgiOjM5LCJzZWN0aW9uIjoidG9vbHMiLCJzb3VyY2UiOiJ3ZWIifX0%3D">ascii-3d-renderer.js</a>: 3D Renderer using ASCII.<small> / </small><small>*ascii*, *3d*</small> - <a 
href="https://marmelab.com/blog/2024/06/20/react-admin-v5.html?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vbWFybWVsYWIuY29tL2Jsb2cvMjAyNC8wNi8yMC9yZWFjdC1hZG1pbi12NS5odG1sIiwicHJvamVjdCI6IndlZWtseWZvbyIsImluZGV4IjozOSwic2VjdGlvbiI6InRvb2xzIiwic291cmNlIjoid2ViIn19">Introducing React-Admin V5</a>: React boilerplate app under MIT license.<small> / </small><small>*react*</small> - <a href="https://github.com/mapbox/pixelmatch?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ2l0aHViLmNvbS9tYXBib3gvcGl4ZWxtYXRjaCIsInByb2plY3QiOiJ3ZWVrbHlmb28iLCJpbmRleCI6MzksInNlY3Rpb24iOiJ0b29scyIsInNvdXJjZSI6IndlYiJ9fQ%3D%3D">pixelmatch</a>: The smallest, simplest and fastest JavaScript pixel-level image comparison library<small> / </small><small>*images*</small> - <a href="https://brm.io/matter-js/?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vYnJtLmlvL21hdHRlci1qcy8iLCJwcm9qZWN0Ijoid2Vla2x5Zm9vIiwiaW5kZXgiOjM5LCJzZWN0aW9uIjoidG9vbHMiLCJzb3VyY2UiOiJ3ZWIifX0%3D">matter.js</a>: Matter.js is a 2D physics engine for the web<small> / </small><small>*physics*</small> - <a href="https://github.com/dorklyorg/dorkly/wiki?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ2l0aHViLmNvbS9kb3JrbHlvcmcvZG9ya2x5L3dpa2kiLCJwcm9qZWN0Ijoid2Vla2x5Zm9vIiwiaW5kZXgiOjM5LCJzZWN0aW9uIjoidG9vbHMiLCJzb3VyY2UiOiJ3ZWIifX0%3D">Dorkly</a>: Free Open 
Source Feature Flag system. Dorkly is a git-based open source feature flag backend for LaunchDarkly's open source SDKs.<small> / </small><small>*featureflags*</small> - <a href="https://github.com/facebook/memlab?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ2l0aHViLmNvbS9mYWNlYm9vay9tZW1sYWIiLCJwcm9qZWN0Ijoid2Vla2x5Zm9vIiwiaW5kZXgiOjM5LCJzZWN0aW9uIjoidG9vbHMiLCJzb3VyY2UiOiJ3ZWIifX0%3D">MemLab</a>: A framework for finding JavaScript memory leaks and analyzing heap snapshots<small> / </small><small>*javascript*</small> - <a href="https://github.com/danvergara/dblab?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ2l0aHViLmNvbS9kYW52ZXJnYXJhL2RibGFiIiwicHJvamVjdCI6IndlZWtseWZvbyIsImluZGV4IjozOSwic2VjdGlvbiI6InRvb2xzIiwic291cmNlIjoid2ViIn19">dblab</a>: The database client every command line junkie deserves.<small> / </small><small>*db*, *cli*</small> - <a href="https://github.com/projectdiscovery/katana?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ2l0aHViLmNvbS9wcm9qZWN0ZGlzY292ZXJ5L2thdGFuYSIsInByb2plY3QiOiJ3ZWVrbHlmb28iLCJpbmRleCI6MzksInNlY3Rpb24iOiJ0b29scyIsInNvdXJjZSI6IndlYiJ9fQ%3D%3D">Katana</a>: A next-generation crawling and spidering framework.<small> / </small><small>*crawling*</small> - <a href="https://github.com/openstatusHQ/openstatus?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" 
ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ2l0aHViLmNvbS9vcGVuc3RhdHVzSFEvb3BlbnN0YXR1cyIsInByb2plY3QiOiJ3ZWVrbHlmb28iLCJpbmRleCI6MzksInNlY3Rpb24iOiJ0b29scyIsInNvdXJjZSI6IndlYiJ9fQ%3D%3D">OpenStatus</a>: The open-source synthetic & real user monitoring platform<small> / </small><small>*monitoring*</small> <Hr /> ## 🎨 Design - <a href="https://goodpractices.design/articles/colour-contrast?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZ29vZHByYWN0aWNlcy5kZXNpZ24vYXJ0aWNsZXMvY29sb3VyLWNvbnRyYXN0IiwicHJvamVjdCI6IndlZWtseWZvbyIsImluZGV4IjozOSwic2VjdGlvbiI6ImRlc2lnbiIsInNvdXJjZSI6IndlYiJ9fQ%3D%3D">Intro to colour contrast</a>: Colour contrast makes a big part of the user experience for all users. Accessibility guidelines however, are not always easy to follow. In this article we will see how to meet the requirements with practical examples.<small> / </small><small>*colors*</small><small> / </small><small>18 min read</small> - <a href="https://bootcamp.uxdesign.cc/designing-profile-account-and-setting-pages-for-better-ux-345ef4ca1490?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vYm9vdGNhbXAudXhkZXNpZ24uY2MvZGVzaWduaW5nLXByb2ZpbGUtYWNjb3VudC1hbmQtc2V0dGluZy1wYWdlcy1mb3ItYmV0dGVyLXV4LTM0NWVmNGNhMTQ5MCIsInByb2plY3QiOiJ3ZWVrbHlmb28iLCJpbmRleCI6MzksInNlY3Rpb24iOiJkZXNpZ24iLCJzb3VyY2UiOiJ3ZWIifX0%3D">Designing profile, account, and setting pages for better UX</a>: Account vs Profile, and what content should be included.<small> / </small><small>*ux*</small><small> / </small><small>11 min read</small> <Hr /> ## 📚 Tutorials - <a 
href="https://developer.mozilla.org/en-US/blog/securing-apis-express-rate-limit-and-slow-down/?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZGV2ZWxvcGVyLm1vemlsbGEub3JnL2VuLVVTL2Jsb2cvc2VjdXJpbmctYXBpcy1leHByZXNzLXJhdGUtbGltaXQtYW5kLXNsb3ctZG93bi8iLCJwcm9qZWN0Ijoid2Vla2x5Zm9vIiwiaW5kZXgiOjM5LCJzZWN0aW9uIjoidHV0Iiwic291cmNlIjoid2ViIn19">Securing APIs - Express rate limit and slow down</a>: As soon as an API gets used more frequently, definitely something to consider.<small> / </small><small>*express*</small><small> / </small><small>11 min read</small> - <a href="https://frontendmasters.com/blog/pure-css-circular-text-without-requiring-a-monospace-font/?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZnJvbnRlbmRtYXN0ZXJzLmNvbS9ibG9nL3B1cmUtY3NzLWNpcmN1bGFyLXRleHQtd2l0aG91dC1yZXF1aXJpbmctYS1tb25vc3BhY2UtZm9udC8iLCJwcm9qZWN0Ijoid2Vla2x5Zm9vIiwiaW5kZXgiOjM5LCJzZWN0aW9uIjoidHV0Iiwic291cmNlIjoid2ViIn19">Pure CSS Circular Text (without Requiring a Monospace Font)</a>: There is no simple and obvious way to set text on a circle in CSS. Good news though! You can create a beautiful, colorful, and even rotating circular text with pure CSS. 
It just takes a bit of work and we’ll go over that here.<small> / </small><small>*css*</small><small> / </small><small>11 min read</small> - <a href="https://frontendmasters.com/blog/popovers-work-pretty-nicely-as-slide-out-drawers/?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vZnJvbnRlbmRtYXN0ZXJzLmNvbS9ibG9nL3BvcG92ZXJzLXdvcmstcHJldHR5LW5pY2VseS1hcy1zbGlkZS1vdXQtZHJhd2Vycy8iLCJwcm9qZWN0Ijoid2Vla2x5Zm9vIiwiaW5kZXgiOjM5LCJzZWN0aW9uIjoidHV0Iiwic291cmNlIjoid2ViIn19">Popovers Work Pretty Nicely as Slide-Out Drawers</a>: Pretty nice and all with built-in APIs<small> / </small><small>*css*</small><small> / </small><small>6 min read</small> - <a href="https://johnnyreilly.com/web-workers-comlink-vite-tanstack-query?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vam9obm55cmVpbGx5LmNvbS93ZWItd29ya2Vycy1jb21saW5rLXZpdGUtdGFuc3RhY2stcXVlcnkiLCJwcm9qZWN0Ijoid2Vla2x5Zm9vIiwiaW5kZXgiOjM5LCJzZWN0aW9uIjoidHV0Iiwic291cmNlIjoid2ViIn19">Web Workers, Comlink, Vite and TanStack Query</a>: Use case to compute super expensive task.<small> / </small><small>*workers*</small><small> / </small><small>8 min read</small> <Hr /> ## 📺 Videos - <a href="https://www.youtube.com/watch?v=su6WA0kUUJE?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vd3d3LnlvdXR1YmUuY29tL3dhdGNoP3Y9c3U2V0Ewa1VVSkUiLCJwcm9qZWN0Ijoid2Vla2x5Zm9vIiwiaW5kZXgiOjM5LCJzZWN0aW9uIjoidmlkcyIsInNvdXJjZSI6IndlYiJ9fQ%3D%3D">Web Design Engineering With the New CSS</a>: CSS Day 2024<small> / </small><small>*css*</small> - <a 
href="https://www.youtube.com/watch?v=nzbV0YgSBuo?utm_source=weeklyfoo&utm_medium=web&utm_campaign=weeklyfoo-39&ref=weeklyfoo" target="_blank" rel="noopener" ping="https://api.weeklyfoo.com/api/foo/bar?foo=eyJldmVudCI6InVybC1jbGljayIsInByb3BzIjp7InVybCI6Imh0dHBzOi8vd3d3LnlvdXR1YmUuY29tL3dhdGNoP3Y9bnpiVjBZZ1NCdW8iLCJwcm9qZWN0Ijoid2Vla2x5Zm9vIiwiaW5kZXgiOjM5LCJzZWN0aW9uIjoidmlkcyIsInNvdXJjZSI6IndlYiJ9fQ%3D%3D">Facing Frontend's Existential Crisis</a>: React Summit 2024<small> / </small><small>*frontend*</small> Want to read more? Check out the full article [here](https://weeklyfoo.com/foos/foo-039/). To sign up for the weekly newsletter, visit [weeklyfoo.com](https://weeklyfoo.com).
urbanisierung
1,907,168
Alfama: Fine grained reactive UI library with explicit subscriptions
Hey all I would like to introduce Alfama, a fine grained reactive UI library with explicit...
0
2024-07-01T05:27:13
https://dev.to/abhishiv/alfama-fine-grained-reactive-ui-library-with-explicit-subscriptions-1m0f
javascript, reactive, vue, react
Hey all, I would like to introduce Alfama, a fine-grained reactive UI library with explicit subscriptions. https://github.com/abhishiv/alfama Features: - **Small.** Fully featured at ~9kB gzip. - **Truly reactive and fine grained.** Unlike VDOM libraries, which use diffing to compute changes, it uses fine-grained updates to target only the DOM that needs to update. - **No magic.** Explicit subscriptions obviate the need for sample/untrack methods found in other fine-grained reactive libraries like Solid/Sinuous. Many feel this also makes your code easier to reason about. - **Signals and Stores.** Signals for primitives and Stores for deeply nested objects/arrays. - **First-class HMR.** Preserves Signals/Stores across HMR reloads for a truly stable HMR experience. - **DevEx.** No compile step needed if you don't want one: choose your view syntax, `h` for plain JavaScript or `<JSX/>` for Babel/TypeScript. - **Rich and complete.** From SVG support to popular patterns like `dangerouslySetInnerHTML`, `ref`, `Fragment`, and `Portal`: Alfama has you covered. There's also a react-router-like router for Alfama called [alfama-router](https://github.com/abhishiv/alfama-router). Would love to hear what people think.
abhishiv
1,881,507
Performance and Scalability for Database-Backed Applications
Lessons learned about improving performance of database-backed applications
0
2024-07-01T05:26:27
https://dev.to/nestedsoftware/performance-and-scalability-for-database-backed-applications-pca
database, sql, performance, scalability
--- title: Performance and Scalability for Database-Backed Applications published: true description: Lessons learned about improving performance of database-backed applications tags: database, sql, performance, scalability cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cdp0f109wcggtuacvs9t.png --- The following are some techniques that I've found to be both useful and practical over the years for scaling data-intensive applications. # Split Large Jobs into Smaller Parallel Jobs When processing large amounts of data, a very useful technique to improve performance is to break up a given job into smaller jobs that can run in parallel. Once all of the jobs have completed, the partial results can be integrated together. Keep in mind that when the parallel jobs are hitting the same database, locking can become a bottleneck. Also, this approach requires that you be wary of over-taxing the database, since all of these sessions will be running concurrently. For very high scalability, this type of idea can be implemented with tools like [MapReduce](https://www.talend.com/resources/what-is-mapreduce). # Pre-fetch Data Before Processing I/O latency is a common cause of performance obstacles. Replacing multiple calls to the database with a single call is often helpful. Here you would pre-load data from the database and cache it in memory. That way the data can be used/reused without requiring separate round trips to the database. It's important to keep in mind the possibility that the cached data may be updated during processing, which may or may not have ramifications for the given use case. Storing large amounts of data in RAM also increases the resource usage of the application, so it's important to consider the tradeoffs between performance and memory usage. # Batch Multiple SQL Executions into a Single Call Consider a batch data import job. The job may repeatedly execute SQL statements to persist data in a loop.
You can instead collect a certain amount of data within the application, and issue a single SQL call to the database. This again reduces the amount of I/O required. One issue with this approach is that a single failure will cause the entire transaction to rollback. When a batch fails, you can re-run each item in that batch again one at a time so that the rest of the data can still be persisted, and only the failing records will produce an error. > Note: If you're sending individual SQL statements in a loop, you can also set the database commit frequency so as to commit in batches rather than for each individual row. # Optimize SQL Queries Working with relational databases can be somewhat of an art form. When queries perform poorly, it can be helpful to deeply understand the execution plan used by the database engine and to improve the SQL based on that information. Rewriting inefficient SQL queries as well as reviewing indexes associated with the tables in the query can help to improve performance. In Oracle, one can add database hints to help improve queries, though personally I prefer to avoid that as much as possible. # Use Separate Databases/Schemas Having a single large database can be convenient, but it also can introduce performance problems when there are huge numbers of rows in important tables. For example, let's say a b2b enterprise application is used by many different companies. Having a separate database or schema for each company can significantly improve performance. Such partitioning also makes it easier to maintain security so that a company's data won't be accidentally accessed by the wrong users. > When data is broken up across multiple schemas, it may make sense to aggregate it into a single database that can be used for management and analytics - in the example above this database would have information about all of the companies in the system. 
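As a concrete illustration of the earlier point about batching SQL executions (and re-running a failed batch row by row), here is a minimal sketch using Python's built-in `sqlite3`; the table, columns, and data are invented for the example:

```python
import sqlite3

def batch_insert(conn, rows, batch_size=500):
    """Insert rows in batches (one round trip and one transaction per
    batch); if a batch fails, retry its rows one at a time so that
    only the offending rows are rejected."""
    sql = "INSERT INTO events (id, payload) VALUES (?, ?)"
    failed = []
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        try:
            with conn:  # commits on success, rolls back the whole batch on error
                conn.executemany(sql, batch)
        except sqlite3.IntegrityError:
            # Fall back to individual inserts so the good rows still land
            for row in batch:
                try:
                    with conn:
                        conn.execute(sql, row)
                except sqlite3.IntegrityError:
                    failed.append(row)
    return failed

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
rows = [(1, "a"), (2, "b"), (2, "duplicate"), (3, "c")]  # (2, "duplicate") violates the primary key
failed = batch_insert(conn, rows)
```

The same shape works with any DB-API driver; with a client/server database the win is larger, since each `executemany` call replaces many network round trips.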
# Refactor Database Structure In some cases, the structure of the database tables can reduce performance significantly. Sometimes breaking up a single table into multiple tables can help (this is known as normalizing the tables), as the original table structure may have a large number of nullable columns. In other cases, it may be helpful to go the other way and de-normalize tables (combine data from multiple tables into a single table). This allows data to be retrieved all at once, without requiring joins. Instead of fully denormalizing the data, it may be preferable to use a materialized view. Working with the indexes available on database tables can also be helpful. In general we want to avoid using indexes too much when reading large amounts of data. We also want to keep in mind that indexes increase the cost for updates to the database even as they improve reads. If we occasionally read data but frequently update that data, improving the performance of the former at the expense of the latter may be a bad idea. # Organize Transactions into Sagas Database transactions can have a significant impact on performance, so keeping transactions small is a good idea. It may be possible to break up long-running transactions into multiple transactions. What was once a single transaction becomes known as a saga. For example, let’s say you’re building an application that handles purchases. You can save an order in an unapproved state, and then move the order through to completion in multiple steps where each step is a separate transaction. With sagas, it's important to understand that the database will now have data that may later be deemed invalid - e.g. a pending order may end up not being finalized. In some cases, data that has been persisted may need to be undone at the application level rather than relying on the transaction rollback - this is known as backward recovery. 
Alternatively, it may be possible to fix the problems that caused the initial failure and to keep the saga going - this is called forward recovery (see [Saga Patterns](https://docs.aws.amazon.com/prescriptive-guidance/latest/cloud-design-patterns/saga.html)). # Separate Transactional Processing from Reporting and Analytics There is a fundamental tradeoff in database optimization when managing small transactions vs. running large reports (see [OLTP vs. OLAP](https://aws.amazon.com/compare/the-difference-between-olap-and-oltp/)). When running large and complex reports, it can be helpful to maintain a reporting database that can be used just for executing reports (this can be generalized to a data warehouse). In the meantime, a transactional database can continue to be used separately by the main application logic. A variation on this idea is to implement [CQRS](https://learn.microsoft.com/en-us/azure/architecture/patterns/cqrs), a pattern where we use one model for write operations and another one for read operations. Usually there are separate databases for reads and writes. In both cases, the distributed nature of the databases means that changes that occur on the write side as part of a transaction won't be visible immediately on the read side - this is known as eventual consistency (see [Eventual Consistency](https://en.wikipedia.org/wiki/Eventual_consistency)). # Split Monolith into (Micro)services We can take the previously mentioned idea of partitioning the database further by breaking up an application into multiple applications, each with its own database. In this case each application will communicate with the others via something like [REST](https://blog.postman.com/rest-api-examples), RPC (e.g. [gRPC](https://grpc.io)), or a message queue (e.g. [Redis](https://redis.io), [Kafka](https://kafka.apache.org/intro), or [RabbitMQ](https://www.rabbitmq.com)). 
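To make the saga idea from the earlier section concrete, here is a minimal sketch in which an order progresses through separate, independently committed transactions, with a compensating update for backward recovery; the order/payment domain and the status values are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")

def create_order(order_id):
    # Saga step 1: its own short transaction, committed immediately
    with conn:
        conn.execute("INSERT INTO orders VALUES (?, 'PENDING')", (order_id,))

def settle_order(order_id, payment_ok):
    if payment_ok:
        # Saga step 2: move the order forward in a second transaction
        with conn:
            conn.execute("UPDATE orders SET status = 'APPROVED' WHERE id = ?",
                         (order_id,))
    else:
        # Backward recovery: step 1 is already committed, so it cannot be
        # rolled back; a compensating transaction undoes it at the
        # application level instead
        with conn:
            conn.execute("UPDATE orders SET status = 'CANCELLED' WHERE id = ?",
                         (order_id,))

def status(order_id):
    return conn.execute("SELECT status FROM orders WHERE id = ?",
                        (order_id,)).fetchone()[0]

create_order(1)
settle_order(1, payment_ok=True)
create_order(2)
settle_order(2, payment_ok=False)  # payment failed: compensate
```

Note that between the two steps, other readers can observe the `PENDING` order; that transient state is exactly the tradeoff sagas make in exchange for short transactions.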
This approach offers advantages, such as more flexible development and deployment (you can develop and deploy each microservice separately). It also offers scaling benefits, since services can be orchestrated to run in different geographies, and instances of running services can be added and removed dynamically based on usage (e.g. using orchestration tools like [Docker Swarm](https://docs.docker.com/engine/swarm/key-concepts/) and [Kubernetes](https://kubernetes.io/)). The data for a given service can be managed more efficiently - both in terms of the amount of data and the way it is structured - since it is specific to that service.

Of course, services also present many challenges. Modifying a service may cause bugs in other services that depend on it. It can also be difficult to understand the overall behaviour of the system when a workflow crosses many service boundaries. Even something that sounds as simple as local testing can become more complex, as a given workflow may require deploying a variety of different services. There can be surprising bottlenecks as well. I find this video about Netflix's migration to microservices still very relevant:

{%embed https://www.youtube.com/watch?v=CZ3wIuvmHeM %}

With separate databases for each service, we can no longer guarantee the same type of consistency that we get with single transactions against a relational database.

All in all, my advice is to be aware of the difficulties that services present and to take a realistic and clear-eyed view of the various tradeoffs involved. If you'd like to learn more about microservices and service-oriented architecture, I recommend reading [Monolith to Microservices](https://www.oreilly.com/library/view/monolith-to-microservices/9781492047834/), by Sam Newman.

# References

* [MapReduce](https://www.talend.com/resources/what-is-mapreduce)
* [Saga Patterns](https://docs.aws.amazon.com/prescriptive-guidance/latest/cloud-design-patterns/saga.html)
* [OLTP vs. OLAP](https://aws.amazon.com/compare/the-difference-between-olap-and-oltp)
* [CQRS](https://learn.microsoft.com/en-us/azure/architecture/patterns/cqrs)
* [Eventual Consistency](https://en.wikipedia.org/wiki/Eventual_consistency)
* [REST](https://blog.postman.com/rest-api-examples)
* [gRPC](https://grpc.io)
* [Redis](https://redis.io)
* [Kafka](https://kafka.apache.org/intro)
* [RabbitMQ](https://www.rabbitmq.com)
* [Docker Swarm](https://docs.docker.com/engine/swarm/key-concepts)
* [Kubernetes](https://kubernetes.io)
* [Mastering Chaos - A Netflix Guide to Microservices](https://www.youtube.com/watch?v=CZ3wIuvmHeM)
* [Monolith to Microservices](https://www.oreilly.com/library/view/monolith-to-microservices/9781492047834)
nestedsoftware
1,907,167
Understanding Python: Interpreted vs. Compiled with a Practical Example
When learning about programming languages, one of the fundamental concepts to grasp is the difference...
0
2024-07-01T05:24:14
https://dev.to/kishoranushka/understanding-python-interpreted-vs-compiled-with-a-practical-example-55ki
python, beginners, development, softwaredevelopment
When learning about programming languages, one of the fundamental concepts to grasp is the difference between interpreted and compiled languages. Let's take a practical journey with a Python file, from creation to execution, to understand why Python is classified as an interpreted language despite involving some compilation steps.

### Step 1: Writing the Python Script

Imagine you write a simple Python script named `hello.py` containing a single line: `print("Hello, World!")`. This is your source code written in Python, a high-level programming language.

### Step 2: Running the Python Script

When you run this script using the command `python hello.py`, here's what happens behind the scenes:

#### 1. Compilation to Bytecode:

**Hidden Compilation**: The Python interpreter first translates your source code into an intermediate form known as bytecode. This step is automatic and hidden from you, the programmer. You don't need to run a separate command for this; it happens when you execute your script.

**Bytecode File**: For efficiency, Python might store the compiled bytecode in a `.pyc` file within the `__pycache__` directory. For instance, running `hello.py` might create a file like `__pycache__/hello.cpython-312.pyc` (the name can vary depending on your Python version). This file is a compiled version of your script, but in a form that is still not machine code.

#### 2. Interpreting Bytecode:

**Execution**: The Python virtual machine (PVM) reads and executes the bytecode. This means the PVM interprets the bytecode instructions one by one, converting them into machine operations on the fly.

**Dynamic Execution**: Python can execute scripts interactively, allowing you to run and test code snippets immediately. This feature is common in interpreted languages.

#### Compiled Languages vs. Interpreted Languages

To better understand the distinction, let's contrast this with how a compiled language like C works:

##### 1. C Language Workflow:

**Source Code**: You write C code in a file, say `hello.c`.

**Compilation**: You run a compiler (e.g., `gcc hello.c`), which converts the source code into machine code (a binary executable). This step is explicit and separate from running the code.

**Execution**: You run the resulting executable file (e.g., `./a.out`), which executes directly on the machine's hardware.

##### 2. Key Differences:

**Visibility of Compilation**: In compiled languages, the compilation step is explicit and produces a separate executable file. In Python, the compilation to bytecode is implicit and handled by the interpreter.

**Execution**: Compiled code runs directly on the hardware, offering potential performance benefits. Interpreted code runs within an interpreter, adding a layer between your code and the hardware.

#### Why Python is Called an Interpreted Language

Despite the compilation step to bytecode, Python is termed an interpreted language for several reasons:

**1. Implicit Compilation**: The bytecode compilation is an internal process that happens automatically. As a programmer, you interact with Python as an interpreted language, writing and executing code directly without a separate compilation step.

**2. Interactivity**: Python supports interactive execution, allowing you to enter and run code in a REPL (Read-Eval-Print Loop) environment. This interactivity is a hallmark of interpreted languages.

**3. Execution by PVM**: The final execution of Python code is done by the Python virtual machine, which interprets the bytecode instructions. This layer of interpretation is what fundamentally defines Python's behavior.

#### Conclusion

In summary, Python combines elements of both interpreted and compiled languages but is predominantly considered an interpreted language. The journey from writing `hello.py` to seeing "Hello, World!" on your screen involves an unseen compilation to bytecode followed by interpretation by the Python virtual machine.
This unique blend allows Python to be both powerful and user-friendly, making it a favorite among developers for scripting, web development, data analysis, and more.
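You can actually watch the normally hidden compilation step happen, using only the standard library: `compile()` produces a code object (the in-memory form of the bytecode), and the `dis` module disassembles it into readable instructions before the interpreter executes it.

```python
import dis

source = 'print("Hello, World!")'

# Step 1 (normally hidden): compile source code into a bytecode code object.
code = compile(source, filename="hello.py", mode="exec")
print(type(code))      # -> <class 'code'>
print(code.co_consts)  # constants embedded in the bytecode

# Inspect the raw bytecode and its human-readable disassembly.
print(len(code.co_code), "bytes of bytecode")
dis.dis(code)          # shows instructions such as LOAD_NAME 'print'

# Step 2: the interpreter (PVM) executes the bytecode.
exec(code)             # -> Hello, World!
```

(The exact instruction names printed by `dis` vary between Python versions, which is also why cached `.pyc` files carry a version tag like `cpython-312`.)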
kishoranushka
1,907,144
Why TypeScript is Transforming Modern Web Development
🚀 TypeScript: A Game-Changer for Modern Web Development 🚀 As web development evolves, so do the...
0
2024-07-01T05:10:48
https://dev.to/cristain/why-typescript-is-transforming-modern-web-development-372
webdev, typescript, coding, javascript
🚀 TypeScript: A Game-Changer for Modern Web Development 🚀

As web development evolves, so do the tools and languages we use. One of the most impactful advancements in recent years has been TypeScript. Whether you're a seasoned developer or just starting out, TypeScript is revolutionizing how we write, maintain, and scale code. Here's why you should consider embracing TypeScript:

🔹 **Type Safety:** TypeScript introduces static typing to JavaScript, helping you catch errors at compile time rather than at runtime. This means fewer bugs and a more robust codebase.

- **Error Detection:** Identify potential issues early in the development process.
- **Code Quality:** Maintain a cleaner and more reliable codebase.

🔹 **Enhanced Developer Experience:** TypeScript's powerful type system improves the development workflow by providing better tools and features.

- **IntelliSense:** Enjoy smarter code completions, navigation, and refactoring.
- **Documentation:** Benefit from automatic documentation generation and better IDE support.

🔹 **Scalable Codebase:** As your project grows, TypeScript helps manage complexity with its advanced type system.

- **Modular Code:** Write modular, reusable, and maintainable code.
- **Refactoring:** Safely refactor your codebase with confidence.

🔹 **Seamless Integration:** TypeScript is designed to integrate smoothly with existing JavaScript projects and libraries.

- **Gradual Adoption:** Migrate from JavaScript to TypeScript incrementally.
- **Compatibility:** Works seamlessly with popular frameworks and libraries like React, Angular, and Node.js.

🔹 **Community and Ecosystem:** TypeScript has a vibrant and growing community, which means continuous improvements and a rich ecosystem of tools and libraries.

- **Support:** Access extensive resources, tutorials, and community support.
- **Innovation:** Stay ahead with the latest features and updates.

🔹 **Real-World Impact:**

- **Improved Code Quality:** Teams experience fewer bugs and easier debugging.
- **Enhanced Collaboration:** Clearer code structure improves team collaboration and onboarding.

TypeScript isn't just a trend; it's a powerful tool that enhances productivity, code quality, and maintainability. By leveraging TypeScript, we can build more reliable, scalable, and efficient applications.
cristain
1,907,151
The Enigmatic Mawarliga: Guardian Spirit of Southeast Asia's Forests
The Enigmatic Mawarliga: Guardian Spirit of Southeast Asia's Forests In the heart of...
0
2024-07-01T05:19:59
https://dev.to/jack_campbell_7001f7a0f61/the-enigmatic-mawarliga-guardian-spirit-of-southeast-asias-forests-497c
mawarliga, slotgacorhariini, linkaltmawarliga, mawar
### The Enigmatic Mawarliga: Guardian Spirit of Southeast Asia's Forests

In the heart of Southeast Asia's ancient forests, there exists a creature of legend and mystery known as the [Mawarliga](https://mawarliga.live). Revered for its elusive nature and spiritual significance, the Mawarliga symbolizes the deep connection between humanity and the natural world, embodying themes of ecological balance and cultural heritage.

#### Origins and Ethereal Presence

The origins of the Mawarliga are deeply rooted in Southeast Asian folklore, passed down through generations via oral traditions. Often described as a graceful, deer-like creature with a coat that shimmers under the moonlight, the Mawarliga exudes an ethereal beauty. Its eyes are believed to hold the ancient wisdom of the forest, reflecting the serenity and secrets of the natural world.

#### The Silent Guardian

A central aspect of the Mawarliga's legend is its extraordinary ability to move silently and unseen through the forest. This supernatural stealth allows it to act as a guardian spirit, protecting the delicate balance of the ecosystem. The Mawarliga is believed to intervene in times of ecological distress, guiding lost travelers and safeguarding the forest's inhabitants. Rare sightings of the Mawarliga are considered auspicious, symbolizing harmony and the forest's blessing.

#### Symbolism and Cultural Significance

In Southeast Asian culture, the Mawarliga represents the harmonious relationship between humans and nature. It emphasizes the interconnectedness of all living beings and highlights the importance of preserving natural environments. Rituals and traditions often honor the Mawarliga, reflecting a deep-seated belief that ecological balance brings prosperity and peace to the community.

#### Unique Folkloric Elements

The Mawarliga stands out in folklore due to its unique blend of characteristics from various Southeast Asian cultures. In some tales, it is depicted with antlers made of intertwined vines and flowers, symbolizing its role as a nurturer of the forest. In other stories, it has the ability to vanish into a cloud of mist, representing the elusive and ephemeral nature of its existence. These varied depictions add rich layers to its legend, making the Mawarliga a truly unique mythical creature.

#### Contemporary Reflections

In today's rapidly changing world, the message of the Mawarliga is more relevant than ever. As global communities face unprecedented environmental challenges, the legend of the Mawarliga serves as a poignant reminder of the need for sustainable practices and a respectful relationship with the natural world. Its story inspires a renewed commitment to protecting biodiversity and fostering a harmonious coexistence with nature.

#### Conclusion: Embracing the Legacy

The Mawarliga invites us to explore the intersection of myth and reality, offering timeless lessons in ecological wisdom and cultural heritage. By honoring the spirit of the Mawarliga, we celebrate a legacy that transcends time, inspiring us to protect and cherish the natural wonders that sustain life. In embracing the legacy of the Mawarliga, we acknowledge our role as custodians of the Earth, dedicated to preserving its biodiversity and fostering a harmonious relationship with the environment. Through its enchanting legend, we are reminded of the beauty and mystery within Southeast Asia's forests, urging us to cherish and safeguard these treasures for future generations.

---
jack_campbell_7001f7a0f61
1,907,150
The Secret to Finding Perfectly Sized Square Area Rugs for Any Room
Have you ever walked into a room and felt like something was missing? It could be the absence of...
0
2024-07-01T05:18:48
https://dev.to/rugsbysize/the-secret-to-finding-perfectly-sized-square-area-rugs-for-any-room-1lj8
productivity, rugs
Have you ever walked into a room and felt like something was missing? It could be the absence of warmth or that final touch. Many times, the solution is simpler than one might imagine. The right rug can change everything about your space.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0eh7n641o9520u4x3h1t.jpg)

However, how do you know which **[square area rug](https://www.rugsbysize.com/browse-rugs/square-rugs)** will fit well in your room? Let's look at this mystery.

## Know Your Space

Take a good look around the room first. Consider its size, shape, and function. Are you dealing with a large living room or a small, cozy bedroom? You must measure your space. A tape measure should be your best friend right now; grab it and record all dimensions (length x width), because most people guesstimate, which leads them to rugs that are either too small or too big for an area. Accurate measurements guarantee the best match.

## Style Selection

Now that you have the figures, what style would suit you best? Do you want something modern or more traditional looking? Modern area rugs usually come with smooth designs coupled with neutral colors, while their traditional counterparts may feature elaborate patterns in vibrant shades. Your choice should also blend in seamlessly with the other decorations and furniture present in the space. For instance, can you imagine a shaggy carpet in an uptown setup? It adds both depth and extra coziness.

## Color and Pattern

The shade and design of your mat have an immense influence on the overall outlook of a room. Light shades create an illusion of larger space in smaller rooms, whereas dark ones bring warmth to big areas. If there are many different designs used on fabrics or wallpapers, then solid-colored rugs or shag rugs can help calm down such busy settings; conversely, if everything else in the zone is quite plain and simple, then an intricately patterned rug could be used to liven things up visually. But always remember that whatever decision you make regarding these two aspects must harmonize well with all the other elements in the given area.

## Contemplate Functionality

Your carpet is not just for beautification purposes; it should also serve its intent. You should know the number of people who will be walking in and out of the room where this carpet will lie. Additionally, the room in which a particular rug is placed determines its type. For instance, high-traffic areas such as living rooms or hallways need stronger rugs, unlike rooms used infrequently such as dining rooms or bedrooms. Nevertheless, when considering the material, make sure it has enough strength but still feels gentle beneath one's feet, so that comfort does not become an issue even if someone spends most of the day in such spaces without shoes.

## Perfect Placement

When it comes to the perfect placement of a square rug, size matters: small rugs can make a room feel disjointed, so go for the right size. Your rug should anchor the furniture, so in living rooms all major pieces should sit on it. In case this is not possible, ensure that at least the front legs of your seating are on top of the carpet. In dining areas, the rug should be big enough to accommodate your table and chairs when they are pulled out.

## Explore Options

Don't rush this big choice. Think carefully. Look into different kinds. Wool, cotton, and synthetic fibers each have pros and cons. Some resist stains better than others, but some cost more. For example, wool lasts long, as it doesn't stain easily, but it's pricey. If you're on a budget, try synthetic fibers. They clean up easily - great for families with kids who play on hallway carpets near bedroom and bathroom entrances, where visitors with muddy shoes come in.

## Take Advantage of Sales

Watch for deals on an **[8x10 area rugs sale](https://www.rugsbysize.com/browse-rugs/8x10)**. Stores sometimes sell them cheaper. Because of the sale, you can find quality rugs at low prices that fit your budget perfectly.

## Final Words

Finding the right area rug can be easy. First, look at your room. What size rug do you need? Next, pick a style you like. Think about how the rug will be used. A nice rug can make a room look great. Are you ready to change your space? Measure your room first. Then look at different rugs. The right rug can change how a room looks and feels. Have fun choosing a new rug!
rugsbysize
1,907,148
먹튀
Toto sites have established themselves as an important option for those who enjoy online betting and gaming. Among them, "Meoktwi Outlook India Plugin Play Muktu Royal (묵투로얄)" is drawing great attention as the best No. 1 toto site...
0
2024-07-01T05:15:55
https://dev.to/totosite06/meogtwi-1adi
Toto sites have established themselves as an important option for those who enjoy online betting and gaming. Among them, "Meoktwi Outlook India Plugin Play Muktu Royal (묵투로얄)" is drawing great attention as the best No. 1 toto site community of 2024. The site is known for offering a wide variety of betting options and a top-tier user experience, built on safety and reliability.

Safety and Reliability

The biggest advantage of Meoktwi Outlook India Plugin Play Muktu Royal is its safety. One of the biggest concerns with online betting sites is "meoktwi" (the practice of illegally operated sites absconding with users' funds). However, this site minimizes such risks through a thorough verification system and security framework. Users' personal information and funds are safely protected, and all transactions are carried out transparently.

**_[먹튀](https://www.outlookindia.com/plugin-play/%EB%A8%B9%ED%8A%80%EB%A1%9C%EC%96%84-2024-%EB%85%84-best-no1-%ED%86%A0%ED%86%A0%EC%82%AC%EC%9D%B4%ED%8A%B8-%EC%BB%A4%EB%AE%A4%EB%8B%88%ED%8B%B0)_**

Diverse Betting Options

The site offers a wide range of betting options, from sports betting to casino games. In addition to betting on major sporting events such as football, basketball, and baseball, users can enjoy live casino, slot machines, poker, and more. This variety broadens users' choices and helps each person find games that suit their taste.

Top-Tier User Experience

User experience (UX) is one of the core elements of Meoktwi Outlook India Plugin Play Muktu Royal. An intuitive interface and fast loading speeds make it easy for users to navigate the site and enjoy betting. In addition, 24-hour customer support is available, so users can get help quickly whenever questions or problems arise.

The Power of Community

Muktu Royal has established itself as more than just a betting site - it is a community. Through forums and chat, users can share information, exchange tips, and discuss games with one another. This provides a space where users can communicate and have fun together, beyond simply betting.

The Best No. 1 Toto Site of 2024

Thanks to all these advantages, Meoktwi Outlook India Plugin Play Muktu Royal is rated as the best No. 1 toto site of 2024. The combination of safety and reliability, diverse betting options, a top-tier user experience, and a strong community puts this site a step above its competitors.

Conclusion

For those who enjoy online betting, finding a safe and trustworthy site is very important. Meoktwi Outlook India Plugin Play Muktu Royal fully meets these needs and has established itself as the best No. 1 toto site of 2024. With its wide range of games and betting options, excellent user experience, and strong community, the site will be the best choice for anyone who enjoys betting. Going forward, Muktu Royal will continue to provide users with the best service through continuous development and innovation.
totosite06
1,907,147
GenAI Knowledge Management Strategy
Knowledge management is strategically important to ensure that organizations can innovate by...
0
2024-07-01T05:13:47
https://dev.to/ragavi_document360/genai-knowledge-management-strategy-4if2
Knowledge management is strategically important to ensure that organizations can innovate by utilizing institutional memory and preserving customer loyalty. About 80% of organizational knowledge is unstructured and available in text, videos, images, and so on. Organizations that fail to mobilize their knowledge assets are heading toward business closure! Given the proliferation of GenAI technology, it is prime time for many organizations to put forward a stronger culture of knowledge creation and sharing to stimulate business growth.

In this blog, we shall cover motivations for a new knowledge management strategy, a playbook to create a modern knowledge management strategy along with business use cases, and how to execute it in line with the strong cultural ethos of the organization.

## Motivations for a New Knowledge Management Strategy

GenAI technology has changed the way new knowledge is created, how it is shared, and, more importantly, how it is consumed for strategic and tactical business advantage. GenAI technology is helping many internal stakeholders to create, organize, and share knowledge seamlessly within their organizations. GenAI technology can be viewed as a strategic investment in helping organizations mobilize internal knowledge spanning a multitude of business systems in various formats! More importantly, any new knowledge that is created is accompanied by metadata, which helps stakeholders understand the importance and nuances of using that specific knowledge for their use cases.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ljiyudhbbgbsldj9l19h.png)

## Business Use Cases of GenAI Knowledge Management

There are several business use cases where GenAI technology can be applied effectively for knowledge creation, organizing knowledge, and knowledge dissemination. GenAI capabilities can be tapped for organization-wide knowledge searches that index and source knowledge across different knowledge repositories.

### Use case #1 – Meetings & Email

Meetings help organizations get clarity on issues, make decisions, talk about risks, and so on. These meetings can be transcribed into textual form using GenAI technology and curated to form baseline knowledge on topics of interest. The typical use case where organizations see value is through automatically curated Minutes of Meetings (MoM) and action items. GenAI technology can also assign action items to the right set of stakeholders based on additional data such as contact lists, meeting invitees, and so on. Moreover, learnings from each strategic program and project can be made available to GenAI capabilities so that internal stakeholders can query and consume the knowledge without friction and avoid making wrong decisions.

To continue reading about GenAI knowledge management strategy, [click here](https://document360.com/blog/genai-knowledge-management-strategy/).
ragavi_document360
1,907,146
먹튀
As 2024 gets underway, Muktu Royal (묵투로얄) has emerged as the most trusted platform in the toto site community and enjoys great popularity among users under the nickname "Meoktwi Outlook." The site in particular...
0
2024-07-01T05:12:02
https://dev.to/totosite06/meogtwi-5c97
As 2024 gets underway, Muktu Royal (묵투로얄) has emerged as the most trusted platform in the toto site community and is enjoying great popularity among users under the nickname "Meoktwi Outlook." The site has raised the user experience to a new level, particularly by introducing innovative features such as India Plugin Play.

Features of Muktu Royal

A safe environment: Muktu Royal has a strict security system that safely protects users' information and funds. It earned the nickname "Meoktwi Outlook" precisely because it can be used so safely.

**_[먹튀](https://www.outlookindia.com/plugin-play/%EB%A8%B9%ED%8A%80%EB%A1%9C%EC%96%84-2024-%EB%85%84-best-no1-%ED%86%A0%ED%86%A0%EC%82%AC%EC%9D%B4%ED%8A%B8-%EC%BB%A4%EB%AE%A4%EB%8B%88%ED%8B%B0)_**

India Plugin Play: This feature helps users participate in games more conveniently. Fast, easy access and an intuitive interface make it easy even for beginners to adapt.

A variety of games: Muktu Royal offers a wide selection of games, from sports betting to casino games and various esports titles. This lets users choose games that suit their tastes.

Customer support: 24/7 customer support is provided so that users can get help at any time. This allows problems that arise during play to be resolved quickly.

Community features: Muktu Royal is not just a place to play games; it also provides a community space where users can communicate and share information with one another. Through it, users can share better betting strategies and make new friends.

Why It Was Selected as the Best No. 1 Toto Site Community of 2024

There are several reasons Muktu Royal was selected as the best No. 1 toto site community of 2024. First, user satisfaction is very high thanks to its user-friendly interface and innovative features. Second, it has built a safe and reliable system so that users can enjoy games with peace of mind. Finally, it offers a variety of games and rich content so that users can keep using the site without getting bored.

Outlook for the Future

Muktu Royal is expected to continue growing. In particular, it is working to provide better service by actively incorporating user feedback. It also plans to keep the gaming environment up to date through updates that reflect new technologies and trends.

In conclusion, Muktu Royal is firmly holding its place as the best No. 1 toto site community of 2024. A safe and reliable environment, innovative features, diverse game choices, and excellent customer support are the elements that set Muktu Royal apart from other toto sites. Together, these elements provide users with the best gaming experience, and the site will continue to develop going forward.
totosite06
1,907,143
Countdown Timer with Link Interaction and Final Message Display
Here is the JavaScript code to create a download button with a countdown feature as per your request....
0
2024-07-01T05:10:38
https://dev.to/mrs_hao/countdown-timer-with-link-interaction-and-final-message-display-27cj
javascript, beginners, tutorial, html
Here is the JavaScript code to create a download button with a countdown feature. When the user clicks the button for the first time, a 20-second countdown starts. After the countdown ends, a message prompts the user to click a link. When the user clicks the link, a second 10-second countdown begins, and finally the content is displayed.

**1. The HTML part displays the content.**

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Countdown Download Button</title>
</head>
<body>
  <button id="downloadBtn">Start Countdown</button>
  <div id="message"></div>
  <div id="vnet">content here</div>
</body>
</html>
```

**2. Process CSS for beauty**

```css
#downloadBtn {
  padding: 10px 20px;
  font-size: 16px;
}

#message, #vnet {
  margin-top: 20px;
}

#vnet {
  display: none;
}
```

**3. JavaScript handles the download button event.**

```javascript
let downloadBtn = document.getElementById('downloadBtn');
let message = document.getElementById('message');
let vnet = document.getElementById('vnet');

downloadBtn.addEventListener('click', function() {
  startCountdown(20, function() {
    message.innerHTML = 'Countdown finished. Please click <a href="#" id="anyLink">this link</a> to continue.';
    document.getElementById('anyLink').addEventListener('click', function(event) {
      event.preventDefault();
      startCountdown(10, function() {
        vnet.style.display = 'block';
      });
    });
  });
});

function startCountdown(seconds, callback) {
  let counter = seconds;
  message.innerHTML = `Countdown: ${counter} seconds`;
  let interval = setInterval(function() {
    counter--;
    if (counter >= 0) {
      message.innerHTML = `Countdown: ${counter} seconds`;
    }
    if (counter === 0) {
      clearInterval(interval);
      callback();
    }
  }, 1000);
}
```

**Here's how the code works:**

- When the user clicks the "Start Countdown" button, a 20-second countdown begins.
- After the 20-second countdown ends, a message appears asking the user to click a link.
- When the user clicks the link, a second 10-second countdown starts.
- After the 10-second countdown ends, the content "content here" is displayed.
mrs_hao
1,907,142
The Global Mobile Phone Accessories Market with Future Trends
The global mobile phone accessories market is a thriving industry, with a staggering market size of**...
0
2024-07-01T05:08:13
https://dev.to/harshita_09/the-global-mobile-phone-accessories-market-with-future-trends-55dj
mobile, phone, market
The global mobile phone accessories market is a thriving industry, with a staggering market size of **$225.3 billion in 2022**. This massive market is projected to grow at a Compound Annual Growth Rate (CAGR) of **8.7%** from 2023 to 2030, driven by the increasing adoption of smartphones and the desire for personalization and enhanced functionality.

**[If you want to explore more of my blogs!](https://hackmd.io/@7EBPvZCvQbCgsrPyimR2gQ/rkVk9hISR)**

## Growth Factors of Mobile Phone Accessories Market

- Rising Smartphone Penetration: The widespread adoption of smartphones has fueled the demand for accessories that enhance the user experience, such as protective cases, screen protectors, and wireless earbuds.
- Technological Advancements: Innovations in mobile technology, including wireless charging, virtual reality (VR), and augmented reality (AR), have paved the way for new and advanced accessories.
- Changing Consumer Preferences: Consumers are increasingly seeking accessories that reflect their personal style and preferences, driving the demand for customized and trendy products.
- Affordability and Accessibility: With the availability of affordable accessories from various brands, consumers have a wide range of options to choose from, further propelling market growth.

## Mobile Phone Accessories Market Segmentation

The global **[mobile phone accessories market](https://www.kenresearch.com/mobile-phone-and-accessories-market?utm_source=seo&utm_medium=seo&utm_campaign=Harshita)** can be segmented based on product type, distribution channel, and geography.

- Product Type: This segment includes protective cases, headphones/earphones, power banks, chargers, memory cards, and other accessories.
- Distribution Channel: The market is divided into online and offline channels, with e-commerce platforms and retail stores playing crucial roles in sales.
- Geography: The market is further segmented into regions such as North America, Europe, Asia-Pacific, Latin America, and the Middle East & Africa, each with its unique market dynamics.

## Target Audience

The global mobile phone accessories market caters to a diverse range of consumers, including:

- Tech-savvy Millennials: This demographic is often the early adopters of new technologies and seeks accessories that enhance their mobile experience.
- Professionals and Business Users: Individuals in this segment prioritize productivity and efficiency, leading to a demand for accessories like power banks, wireless chargers, and hands-free devices.
- Outdoor Enthusiasts: Adventure seekers and outdoor enthusiasts require durable and rugged accessories, such as waterproof cases and portable battery packs.
- Fashion-conscious Consumers: This segment is driven by the desire for stylish and trendy accessories that complement their personal style and mobile devices.

## Future Trends of Mobile Phone Accessories Industry

The global mobile phone accessories market is poised to witness several exciting trends in the coming years:

- Sustainable and Eco-friendly Accessories: With increasing environmental awareness, consumers are seeking accessories made from sustainable and recycled materials, driving manufacturers to adopt eco-friendly practices.
- Wireless Charging and Power Solutions: The demand for wireless charging solutions and high-capacity power banks is expected to rise as consumers seek more convenient and efficient ways to keep their devices charged.
- Integration of Artificial Intelligence (AI) and Internet of Things (IoT): AI-powered virtual assistants and IoT-enabled accessories, such as smart home devices, are likely to gain traction, enhancing the overall user experience.
- Mobile Gaming Accessories: The growing popularity of mobile gaming has created a niche market for gaming accessories, including controllers, headsets, and cooling devices.

## Conclusion

The global **[mobile phone accessories industry](https://www.kenresearch.com/blog/2020/08/global-mobile-phone-accessories-market/?utm_source=seo&utm_medium=seo&utm_campaign=Harshita)** is witnessing remarkable growth, driven by factors such as technological advancements, changing consumer preferences, and the increasing adoption of smartphones. With a diverse range of products catering to various consumer segments, this market offers ample opportunities for businesses to innovate and capitalize on emerging trends. As the world becomes more mobile-centric, the demand for accessories that enhance functionality, style, and convenience will continue to rise, making this market a lucrative and dynamic space to explore.
harshita_09
1,901,435
9 Best Restaurant Advertising Ideas to Elevate Your Business and Profits
Standing out as a restaurant amidst the sea of dining options has become a formidable challenge. With...
0
2024-07-01T05:07:00
https://www.kopatech.com/blog/9-best-restaurant-advertising-ideas-to-elevate-your-business-and-profits
foodorderingsystem, kopatech
Standing out as a restaurant amidst the sea of dining options has become a formidable challenge. With thousands of eateries lining the streets of every city, the competition is fierce, demanding innovative restaurant advertising ideas and precision in advertising strategies. In this dynamic landscape, success hinges on serving delectable dishes and crafting a unique identity that resonates with a niche customer base. This is where the art of restaurant advertising comes into play, as establishments seek to fine-tune their approaches to capture attention and drive foot traffic. It's a delicate balance of creativity and strategy, where the ultimate goal isn't just profitability but also creating memorable, enjoyable experiences for patrons. ## The 9 Restaurant Advertising Ideas to Increase Profits In this pursuit, restaurants must navigate the terrain of digital marketing, social media engagement, and experiential promotions to carve out their slice of the market. Join us as we delve into the intricacies of crafting compelling advertising strategies and optimizing the [online food ordering system](https://www.kopatech.com/online-food-ordering-system) option to attract customers and infuse more excitement and delight into the dining experience. ### 1. An Attractive Website is the Key to Attracting Top-Notch Customers Since the internet serves as the primary gateway to discovering new dining experiences, the importance of an attractive website cannot be overstated. It's not merely a virtual storefront but a crucial tool in shaping perceptions, enticing potential patrons, and ultimately, driving foot traffic to your restaurant. Here's why investing in a visually appealing and user-friendly website is considered one of the greatest restaurant advertising ideas anyone can think of. A seamless browsing experience is fundamental to retaining the attention of visitors and guiding them toward making a reservation or visiting your establishment. 
With intuitive navigation, customers can effortlessly explore your menu, locate essential information such as location and hours of operation, and even place orders online if applicable. Streamlined navigation enhances user satisfaction and encourages repeat visits. ### 2. Make Your Advertising Initiatives Image-Rich A picture is worth a thousand words, and nowhere is this truer than in the realm of restaurant websites. High-quality, realistic images of your signature dishes tantalize the taste buds of potential customers, igniting their desire to experience your culinary offerings firsthand. Showcasing visually appetizing photos that accurately represent the portion sizes and presentation is one of the top restaurant advertising ideas you can stage for a memorable dining experience and build a customer base. ### 3. Showcase the Restaurant’s Dining Space Beyond the food itself, prospective diners are also keen to envision the ambiance and atmosphere of your restaurant. Incorporating images of your dining areas, whether it's the cozy interior decor, alfresco seating with picturesque views, or bustling bar area, provides a glimpse into the unique dining environment you offer. Authentic visuals create a connection with your audience, fostering a sense of familiarity and comfort that can sway their decision to choose your establishment over competitors. ### 4. Well-crafted Menu That Attracts Eyeballs A well-curated menu is not merely a list of dishes but one of the worthy restaurant advertising ideas that caters to diverse customer preferences and dining habits. Whether customers dine in person or order through a [multi vendor food ordering system](https://www.kopatech.com/multi-restaurant-delivery-software), the key to success lies in creating an all-inclusive menu that resonates with various customer groups while offering a seamless browsing experience. Here's how restaurants can achieve this feat and delight diners of all ages and preferences. 
Understanding the primary customer segments frequenting your establishment is the first step towards crafting an inclusive menu. Whether it's families with young children, health-conscious individuals, or adventurous foodies, tailoring menu offerings to cater to these groups' preferences ensures that no customer feels left out. By conducting market research and analyzing customer feedback, restaurants can identify the most popular dishes and adjust their menu accordingly to meet demand. ### 5. Adding Variety is Another of the Leading Restaurant Advertising Ideas Variety is the spice of life, and this holds true when it comes to menu offerings. From vegetarian and vegan options to gluten-free and keto-friendly dishes, providing a diverse selection ensures that every customer finds something to suit their dietary preferences and restrictions. By incorporating a range of cuisines, flavors, and cooking styles, restaurants can appeal to a broad spectrum of tastes and attract a wider customer base. ### 6. Target Advertisements at Specific Age Groups Different age groups often have distinct dining preferences and appetites. For example, millennials may gravitate towards trendy, Instagram-worthy dishes, while older adults may prefer classic comfort foods with a modern twist. By segmenting the menu to cater to various age demographics, restaurants can create a tailored dining experience that resonates with each group. Additionally, considering portion sizes and pricing strategies based on age demographics can further enhance customer satisfaction and loyalty. ### 7. Create Flipbooks for Easy Offline Advertising In the era of digital dining, convenience is paramount. Creating digital flipbooks or interactive menus that customers can easily browse when ordering food through multi-vendor platforms streamlines the ordering process and enhances the overall user experience. 
Incorporating vibrant images, detailed descriptions, and user-friendly navigation ensures that customers can quickly find what they're looking for and make informed decisions, whether they're dining in or ordering for delivery. Flipbooks are unique, interactive viewable materials that create a captivating animation when their pages are flipped quickly. They’re often used as promotional tools due to their novelty and engaging nature. In the digital age, creating flipbooks is a wonderful restaurant advertising idea. They can be easily distributed through delivery apps and websites. Here’s how: **Digital Flipbooks:** Create a digital version of your flipbook. This can be an animated GIF or a short video clip that can be embedded on your website or shared on social media platforms. **Delivery Apps:** Partner with delivery apps to include your digital flipbook in their user interface. It could be featured in the app’s loading screen, as part of the order confirmation process, or even as a special promotion within the app. **Website Advertising:** Embed the digital flipbook on your website. It can serve as an interactive banner ad or be included in the website’s content to engage visitors. **Email Marketing:** Include the digital flipbook in your email newsletters. This can help increase engagement rates and drive traffic to your website or app. Remember, the key to successful advertising is to make it engaging and relevant to your audience. With their unique charm and interactive nature, flipbooks can be a great tool to achieve this. ### 8. Restaurant Advertisement Ideas for Loyalty Building Building customer loyalty for your restaurant involves creating a memorable dining experience and rewarding customers for their patronage. Here are six restaurant advertising ideas to foster loyalty: **Rewards and Coupons:** Launch a loyalty program that offers rewards points for every purchase. These points can be redeemed for discounts, free meals, or special treats. 
Advertise this program prominently on your menus, website, and social media platforms. **Free Packages:** Offer a ‘buy X get Y free’ deal. For example, buy five meals and get the sixth one free. This encourages repeat visits and gives customers a sense of getting a bargain. **Referral Rewards:** Implement a referral program where customers get a reward, such as a discount or a free meal, for recommending your restaurant to their friends and family. This not only retains existing customers but also attracts new ones. **Corporate Loyalty Programs:** Target corporate customers and businesses by offering special packages for corporate events or business lunches. You could also offer a corporate loyalty card that provides discounts for regular bookings. **Parties and Special Events:** Host special events like theme nights, cooking classes, or tasting events. Offering exclusive invitations to these events to your loyal customers can make them feel valued and increase their loyalty. **Exclusive Memberships:** Offer an exclusive membership program with perks like priority reservations, members-only events, and sneak peeks at new menu items. This can create a sense of exclusivity and belonging, encouraging customers to return. ### 9. Partnering with Other Businesses for Mutual Benefit In the competitive world of the restaurant industry, innovative advertising strategies are key to standing out. One such strategy is partnering with unrelated businesses. This might seem counterintuitive at first, but it can be a game-changer for restaurants looking to expand their customer base and increase brand visibility. The concept is simple: by partnering with businesses that don’t directly compete with you, both parties can benefit from shared marketing efforts, cross-promotion, and expanded reach. This strategy is about finding common ground where both businesses can benefit. ## Two Examples of Restaurant Advertising Ideas by Partnering 1. 
For instance, a restaurant could partner with a local gym. The gym’s clientele, who are likely health-conscious, could be interested in the restaurant’s healthy menu options. The restaurant could offer special discounts to gym members, and the gym could provide promotional offers to the restaurant’s customers. This creates a win-win situation where both businesses can attract new customers. 2. Another example could be a partnership between a restaurant and a bookstore. They could collaborate on events like book signings, author meet-and-greets, or book club meetings, with the restaurant providing food and beverages. This not only increases foot traffic for both businesses but also enhances the customer experience by offering unique, value-added events. Partnering with unrelated businesses also opens up opportunities for creative marketing campaigns. A restaurant could collaborate with a local fashion retailer for a “Dine and Design” event, where customers get a special discount at the restaurant after attending a fashion show at the retailer. Partnerships can also extend to digital platforms. Restaurants can collaborate with unrelated businesses on social media campaigns, online contests, or joint email marketing efforts. This can significantly increase online visibility and engagement for both businesses. However, it’s crucial to choose the right partner. The businesses should share similar values and cater to a similar demographic. The partnership should feel natural and beneficial to the customers of both businesses. _This post originally appeared on [kopatech.com](https://www.kopatech.com/blog/9-best-restaurant-advertising-ideas-to-elevate-your-business-and-profits)_
kopatech2000
1,899,355
Setting up AWS IAM Identity Center as an identity provider for Confluence
AWS IAM Identity Center is a great tool for managing access to multiple AWS accounts in one...
0
2024-07-01T05:05:41
https://dev.to/aws-builders/setting-up-aws-iam-identity-center-as-an-identity-provider-for-confluence-2l8
aws, iamidentitycenter, saml, confluence
[AWS IAM Identity Center](https://aws.amazon.com/iam/identity-center/) is a great tool for managing access to multiple AWS accounts in one centralized location. Users can assume roles in the AWS accounts they have access to and work in the AWS console or CLI. It also supports single sign-on (SSO) capabilities to log in to some AWS services or third-party applications. One example of these applications is Confluence, which is widely used in enterprises. This blog post shows how to set up SSO in Confluence using AWS IAM Identity Center. In what situations would you find this blog post useful? 1. You want to use Identity Center's SSO capabilities in Confluence. 2. You want to configure a source application for Amazon Q to test Q's security features (Q only uses information when a specific user has access to it) 3. You want to learn more about SSO and SAML. In my case, I want to set up the Confluence integration to get familiar with Amazon Q. I chose the Confluence data sources because Confluence supports page-specific permissions, which should also be handled in Q. I'm using the trial version of Confluence for my tests and this blog post. ## Getting started: AWS IAM Identity Center I assume that AWS IAM Identity Center is already configured. To integrate Confluence as an application, open AWS IAM Identity Center in the AWS console and select "Applications" in the navigation. Then, add a new application. ![AWS IAM Identity Center: Add a new application](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mptvcmwmxb78krq7er78.png) AWS offers a catalog of out-of-the-box integrations for more than 300 applications. Confluence is one of them, so choose it. 
![Select application from the catalog](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i36uinec9r817tofetf3.png) Search for Confluence and select the result: ![Search for Confluence](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/clzrnba3mr4rg0j6m7y1.png) The next page provides a button to open step-by-step instructions for additional configuration assistance. Select this button to view these instructions. There are also options to change the name or description used in Identity Center. ![Configure a new application in AWS IAM Identity Center](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6wpdac9hkyhwt1agwjt3.png) In the step-by-step instructions, you will find application-specific instructions to configure the integration. For example, it shows which values have to be configured in Confluence and in Identity Center. Download the certificate and copy the URL (both URLs are the same). You will need this information again later when setting up the identity provider in Confluence. ![Step-by-step instructions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/88cy6z3783ta7f3svfi5.png) Now proceed with the necessary steps in Confluence. Later, we will need to perform some additional configurations in AWS IAM Identity Center. ## Confluence: Adding a domain Open the Atlassian Admin by navigating to the URL https://admin.atlassian.com/. In Atlassian Admin, you can manage the settings required for Identity Center integration. First, add the domain used by your users. Accounts in your domain can become managed accounts, which means you can use the SSO capabilities of Identity Center. ![Confluence: Verify your company domain](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4c3op468o4ja8i5s0dxt.png) Enter the domain name. ![Enter the domain name](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qkx35irekecmer1sd3gh.png) Before the domain can be associated, the ownership must be verified. 
I used DNS verification - feel free to use any of the other methods. For DNS validation, create the TXT record in the DNS management for your domain. ![Domain DNS verification](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/73ut8uuop1116fgpyr5f.png) Ensure that the status is set to verified. If it is not verified, check the domain verification again. Next, select "Claim accounts" to automatically claim new accounts under this domain. ![Domains: Overview](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/031nre67rg8snvgs7jfj.png) Use the recommended "Automatically claim" option to claim new accounts from your domain. ![Claim settings](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qwpv2cisb47f88do6r6n.png) Now "claim setting" is set to "Automatically". ![Domain: claim setting automatically](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qldsgdiita0912l1q6lk.png) ## Confluence: Create an identity provider In the next step, we will create an identity provider and link it to AWS IAM Identity Center. In the security settings, select "Identity providers" and use "Other provider" as there is no specific integration for Identity Center. ![Create identity provider type other identity provider](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w13i2mrg3lh3wd8hxtd6.png) Confirm to start the free trial for Atlassian Guard to enable enterprise-grade features. ![Subscribe to Atlassian Guard](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5hs9r37c5pokfyc3niv0.png) Enter a name for the identity provider, e.g. "AWS IAM Identity Center". ![Identity provider name](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/st59p1hhq6l17fjd9mj2.png) Proceed with the SAML single sign-on integration. ![SAML single sign-on integration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zoaf6tvf8nlpgzje5yxl.png) Read the notes, then continue to the next step. 
![Notes](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1kwnij5djyrmtzfez431.png) Now paste the certificate and URL you copied during the application configuration in AWS IAM Identity Center. ![Identity provider settings](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j6ayqrvsf4dmwhsw90b9.png) In case you didn't copy the values, you can display them again. ![AWS IAM Identity Center values](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5mpai29wsh0gnzom9yem.png) After you have configured the values in Confluence, the Confluence wizard will display two URLs that need to be copied to AWS IAM Identity Center. ![Confluence identity provider URLS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8i2wua29hu52l4ldxtns.png) In the Identity Center configuration, enter the URL of your Confluence instance and paste the two URLs you copied earlier. ![Maintain configuration in AWS IAM Identity Center](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t4p7e1qhojkq9ps6ea19.png) Now continue in Confluence again. Select the previously created domain. ![Select the domain created before](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qs7d1mlv5tdtohymra3f.png) Stop the configuration wizard and save the SAML settings. ![Save SAML settings](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qbxr59yixqfdtqz73r8v.png) ## Confluence: Update authentication policy As the final configuration step in Confluence, open the authentication policies and edit the newly created configuration (in my case, it is named "AWS IAM Identity Center"). ![Update authentication policy](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bl8vfvdhob570xmgc3kj.png) Select the option "Enforce single sign-on". 
![Enforce single sign-on](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f68ism0o27xfg3usuar6.png) ## AWS Identity Center: Assign users or groups In Identity Center, add the users or groups that will be allowed to use the new application. I created a group and assigned all users to it. ![Identity Center: Users and groups](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m9v99mvm362o6g7raqiq.png) ## Testing the integration Log in to the AWS IAM Identity Center with one of the users. You should see the newly created Confluence application. Open the application. ![Identity Center: Confluence application](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hxcvgh8tvkr6y4501pck.png) Your browser will open a new window and you will be automatically signed in to Confluence. ![Confluence: Welcome](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vmvkmzhijey9piiibmkw.png) ## Summary If everything is set up correctly, the Confluence integration with AWS IAM Identity Center works well. The step-by-step instructions are useful, but read the documentation provided by Atlassian if you encounter any issues. Be careful when copying configuration values/URLs. Don't mix up the different URLs - this can cause errors and SSO won't work. The integration of Confluence with AWS IAM Identity Center is just one example - many other applications can be integrated as well. AWS IAM Identity Center can also be used if you need a free SAML or OAuth 2.0 identity provider for software development or any other use case.
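Since the summary warns that a bad copy of the certificate or URLs silently breaks SSO, a quick structural check of the pasted PEM block can catch copy/paste damage before you save the SAML settings. This is an illustrative stdlib-Python sketch, not part of the official AWS or Atlassian tooling, and the sample block below is a made-up placeholder rather than a real certificate:

```python
import base64
import re

def looks_like_valid_pem_cert(pem_text: str) -> bool:
    """Rough structural check of a pasted PEM certificate block."""
    match = re.search(
        r"-----BEGIN CERTIFICATE-----\s*(.+?)\s*-----END CERTIFICATE-----",
        pem_text,
        re.DOTALL,
    )
    if not match:
        return False  # header or footer line is missing or mangled
    body = "".join(match.group(1).split())  # drop line breaks inside the block
    try:
        der = base64.b64decode(body, validate=True)
    except ValueError:  # binascii.Error is a ValueError subclass
        return False    # not clean base64, likely a bad copy/paste
    # DER-encoded certificates start with an ASN.1 SEQUENCE tag (0x30)
    return len(der) > 0 and der[0] == 0x30

# Dummy block for illustration only (not a real certificate):
sample = "-----BEGIN CERTIFICATE-----\nMAA=\n-----END CERTIFICATE-----"
print(looks_like_valid_pem_cert(sample))                              # True
print(looks_like_valid_pem_cert("-----BEGIN CERTIFICATE-----\nMAA="))  # False: footer missing
```

This only checks structure (header/footer lines, clean base64, a SEQUENCE start byte); it does not validate the certificate chain or expiry, which Confluence verifies when the SAML settings are saved.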
jumic
1,907,139
Top 5 Pillars of Cloud Security
As our world becomes more digitized, many organizations turn to the cloud to harness its flexibility,...
0
2024-07-01T05:04:26
https://dev.to/shivamchamoli18/top-5-pillars-of-cloud-security-295o
azure, azurecloud, cloudcomputing, infosectrain
As our world becomes more digitized, many organizations turn to the cloud to harness its flexibility, scalability, and cost-effectiveness. However, this migration comes with complex security challenges that require a comprehensive and strategic approach to overcome. Cloud security pillars provide a comprehensive framework of principles, strategies, and best practices that can enhance the security of cloud environments and protect them against various threats and vulnerabilities. These pillars establish a strong foundation for building, deploying, and managing secure cloud infrastructures, regardless of the cloud service model used, such as IaaS, PaaS, or SaaS. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5h2mbr5fyrs71g5po5us.jpg) By adopting Cloud security pillars, organizations can systematically address various security aspects. Each pillar represents a specific area of concern and expertise, working together to create a complete security posture that aligns with an organization's risk tolerance and compliance requirements. ## **5 Pillars of Cloud Security** Cloud security pillars are foundational principles guiding secure cloud computing. **1. Data Protection:** Data protection focuses on securing sensitive data using encryption, access controls, and proper data classification. It ensures data remains confidential and protected from unauthorized access or breaches. **2. Identity and Access Management (IAM):** IAM is vital in ensuring the security of the cloud, as it manages user authentication, authorization, and access control. Implementing robust authentication measures, Role-based Access Control (RBAC), and least privilege principles ensures that only authorized users can access valuable resources and sensitive data. **3. Network Security:** Securing the network infrastructure is essential to prevent any unauthorized network access and protect against potential cyber threats. 
It involves configuring firewalls, Intrusion Detection and Prevention Systems (IDPS), and implementing Virtual Private Networks (VPNs) to establish secure communication channels. **4. Threat Detection and Incident Response:** Threat detection and incident response are critical pillars of cloud security as they focus on implementing advanced monitoring and detection mechanisms to identify, analyze, and address security breaches, vulnerabilities, and cyberattacks. It minimizes potential damage and downtime and ensures quick recovery. **5. Zero-trust Cloud Network Security Controls:** The zero-trust cloud network security controls approach treats all network traffic as potentially untrusted and requires continuous verification for access. It is accomplished through stringent identity verification protocols, least privilege access measures, and micro-segmentation tactics that minimize attack surfaces and boost overall security. ## **How can InfosecTrain Help?** At [InfosecTrain](https://www.infosectrain.com/), we present diverse cloud security courses, some focusing on cloud security pillars. We provide [Cloud Security Practitioner](https://www.infosectrain.com/courses/cloud-security-practitioner-training/), [CCSK Foundation](https://www.infosectrain.com/courses/ccsk-certificate-cloud-security-knowledge-training/), [CCSE](https://www.infosectrain.com/courses/certified-cloud-security-engineer-training-course/), [CCSP](https://www.infosectrain.com/courses/ccsp-certification-training/), and [AWS Certified Security Specialty](https://www.infosectrain.com/courses/aws-certified-security-specialty-training/), as well as other online training and certification programs. These comprehensive training programs cover IAM, data and network security, compliance, threat detection, incident response, and more. We provide a focused learning environment with a structured curriculum, expert-led instruction, hands-on labs, and networking opportunities. 
This equips learners with practical skills and industry insights, making them proficient in cloud security and more employable in today's cloud-oriented world.
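The RBAC and least-privilege ideas under pillar 2 can be made concrete with a small sketch. The roles and permission strings below are hypothetical examples for illustration, not any cloud provider's actual API:

```python
# Minimal role-based access control (RBAC) sketch illustrating least privilege.
# Role names and permission strings are made-up examples, not a real cloud API.
ROLE_PERMISSIONS = {
    "viewer": {"storage:read"},
    "operator": {"storage:read", "compute:restart"},
    "admin": {"storage:read", "storage:write", "compute:restart", "iam:manage"},
}

def is_allowed(role: str, action: str) -> bool:
    """Grant an action only if the role explicitly includes it (deny by default)."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Least privilege: assign each principal the smallest role that covers its needs.
print(is_allowed("viewer", "storage:read"))   # True
print(is_allowed("viewer", "storage:write"))  # False: not in the role's grants
print(is_allowed("intruder", "iam:manage"))   # False: unknown roles get nothing
```

The deny-by-default lookup is the key design choice: an unknown role or an unlisted action is rejected rather than allowed, mirroring the zero-trust posture described in pillar 5.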
shivamchamoli18
1,907,138
Tackling Complex Backend Challenges: My Journey with HNG Internship
Introduction Hello, tech enthusiasts! My name is Rafael John, and I am a full-stack...
0
2024-07-01T05:03:34
https://dev.to/rafaeljohn9/tackling-complex-backend-challenges-my-journey-with-hng-internship-3fg
## Introduction Hello, tech enthusiasts! My name is Rafael John, and I am a full-stack developer with a passion for backend development. Recently, I faced a particularly challenging backend problem that tested my skills and determination. In this blog post, I’ll walk you through the problem, how I approached it, and the solution I implemented. Additionally, I'll share my excitement about starting my journey with the HNG Internship and why this opportunity is so important to me. ## The Challenge The problem I encountered involved optimizing a complex data processing pipeline for a client project. The system was built using Python and Flask, with a MySQL database for storing processed data. The challenge was to reduce the processing time and improve the overall efficiency of the system. The existing solution was slow, and the client needed a faster, more reliable system. ## Breaking Down the Solution - **Analyzing the Bottlenecks:** The first step was to identify the main bottlenecks in the existing system. I used profiling tools to monitor the performance of each component in the pipeline. This helped me pinpoint areas where the system was slowing down. - **Database Optimization:** I noticed that the database queries were a major bottleneck. To address this, I optimized the queries by adding appropriate indexes and restructuring the database schema. This significantly improved the query performance. - **Efficient Data Processing:** The data processing logic was another area that needed improvement. I refactored the code to use more efficient algorithms and data structures. Additionally, I implemented batch processing to handle large datasets more effectively. - **Asynchronous Processing:** To further enhance performance, I integrated asynchronous processing using Celery and Redis. This allowed the system to handle multiple tasks concurrently, reducing the overall processing time. - **Caching Frequently Accessed Data:** I implemented caching using Redis to store frequently accessed data. 
This reduced the load on the database and improved the system's response time. - **Testing and Deployment:** After implementing the optimizations, I thoroughly tested the system to ensure it met the client's requirements. Finally, I deployed the updated system using Docker and Nginx, ensuring it was scalable and easy to maintain. ## Results The optimizations resulted in a significant reduction in processing time, from several hours to just under 30 minutes. The client was thrilled with the improved performance and reliability of the system. ## My Journey with the HNG Internship I am incredibly excited to start my journey with the HNG Internship. This program is a fantastic opportunity to learn from industry experts and collaborate with other talented developers. The hands-on experience and mentorship provided by the HNG Internship will help me grow as a backend developer and tackle even more complex challenges in the future. The HNG Internship is not just about coding; it's about building a network, gaining real-world experience, and becoming a part of a vibrant community. If you're interested in learning more about the program, check out the HNG Internship website and HNG Premium. ## Conclusion Solving complex backend challenges requires a combination of analytical thinking, technical skills, and perseverance. I am proud of the solution I developed for my client and excited about the journey ahead with the HNG Internship. This experience has reinforced my passion for backend development and my commitment to continuous learning and improvement. Thank you for reading, and I hope my story inspires you to tackle your own challenges with determination and creativity. Let's continue to learn and grow together!
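The Redis caching step described above follows the classic cache-aside pattern: check the cache first, fall back to the database on a miss, then populate the cache. As a hedged illustration (an in-process dict stands in for Redis, and `fetch_from_db` is a made-up placeholder for the real MySQL query in the post):

```python
import time

# Cache-aside sketch: a plain dict stands in for Redis, and fetch_from_db
# is a placeholder for the real (slow) database query.
CACHE = {}
TTL_SECONDS = 60

def fetch_from_db(key):
    """Pretend this is an expensive MySQL query."""
    return f"row-for-{key}"

def get_with_cache(key):
    entry = CACHE.get(key)
    if entry is not None:
        value, stored_at = entry
        if time.monotonic() - stored_at < TTL_SECONDS:
            return value            # cache hit: skip the database entirely
    value = fetch_from_db(key)      # cache miss (or expired): go to the source
    CACHE[key] = (value, time.monotonic())
    return value

print(get_with_cache("user:42"))  # first call misses and hits the "database"
print(get_with_cache("user:42"))  # second call is served from the cache
```

With real Redis, the dict lookups become `GET`/`SETEX` calls and the TTL is enforced server-side, but the control flow is the same.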
rafaeljohn9
1,907,137
먹튀 (Meoktwi)
As sports betting and Toto sites grow increasingly popular on the internet, finding safe and trustworthy sites has become very important. "Meoktwi Outlook" was created to meet this need...
0
2024-07-01T05:00:31
https://dev.to/totosite06/meogtwi-ife
As sports betting and Toto sites grow increasingly popular on the internet, finding safe and trustworthy sites has become very important. "Meoktwi Outlook" is a community created to meet this need, providing users with reliable information and reviews. Centered on India Plugin Play and Muktu Royal in particular, it has established itself as the best No. 1 Toto site community of 2024. India Plugin Play India Plugin Play is a Toto site known for its latest technology and user-friendly interface. The site offers opportunities to bet on a variety of sports matches and events, and it is designed so that users can access and use it easily. It also has a strong security system in place that safely protects users' information and assets. **_[먹튀](https://www.outlookindia.com/plugin-play/%EB%A8%B9%ED%8A%80%EB%A1%9C%EC%96%84-2024-%EB%85%84-best-no1-%ED%86%A0%ED%86%A0%EC%82%AC%EC%9D%B4%ED%8A%B8-%EC%BB%A4%EB%AE%A4%EB%8B%88%ED%8B%B0)_** Key features Diverse betting options: You can bet on a variety of sports matches and events, such as soccer, basketball, and baseball. User-friendly interface: It offers a simple interface that anyone can use with ease. Strong security system: It employs the latest security technology to thoroughly protect user information. Muktu Royal Muktu Royal is the No. 1 Toto site of 2024, providing users with a top-tier betting experience. The site is known for its trustworthy operation and fast payment system, and it offers a variety of promotions and bonuses to increase user satisfaction. Key features Trustworthy operation: Run on the basis of long experience and a high level of trust. Fast payment system: Equipped with a system that lets users pay quickly and conveniently. Diverse promotions and bonuses: Provides users with a variety of benefits to raise satisfaction. The Best No. 1 Toto Site Community of 2024 "Meoktwi Outlook" is growing into the best Toto site community of 2024, centered on India Plugin Play and Muktu Royal. The community's goal is to help users find safe and trustworthy sites. Through a range of reviews and information, users can choose the site that suits them. The community's role Providing site reviews: It offers a variety of reviews so that users can choose trustworthy sites. Up-to-date information: It promptly delivers the latest information and news related to Toto sites. Sharing user experiences: Users can share their own experiences and learn from those of other users. Conclusion In 2024, Meoktwi Outlook has established itself as the best No. 1 Toto site community, centered on India Plugin Play and Muktu Royal. The community contributes to creating a safe betting environment by providing users with reliable information and reviews. Through India Plugin Play, with its diverse betting options and strong security system, and Muktu Royal, which boasts trustworthy operation and a fast payment system, users can enjoy the best possible betting experience. Meoktwi Outlook will continue striving to provide users with the best service.
totosite06
1,838,301
Design: The Ultimate Developer Hack
In the world of coding, developers are always on the hunt for hacks – clever shortcuts and...
27,357
2024-07-01T05:00:00
https://dev.to/shieldstring/design-the-ultimate-developer-hack-19e9
webdev, design, ui, beginners
In the world of coding, developers are always on the hunt for hacks – clever shortcuts and workarounds to streamline their workflow and boost efficiency. But what if the ultimate developer hack wasn't a line of code or a hidden setting? What if it was design? **Design: Beyond Aesthetics** Traditionally, design has been seen as the domain of visual artists, crafting the look and feel of a product. But good design goes far beyond aesthetics. It's about understanding user needs, creating intuitive interfaces, and anticipating user behavior. **How Design Hacks the Development Process** Here's how embracing design principles can be a developer's secret weapon: * **Reduced Development Time:** Well-designed interfaces with clear user flows require less code. Imagine spending less time wrestling with complex logic and more time focusing on core functionalities. * **Fewer Bugs and Errors:** A well-designed user experience (UX) leads to fewer user errors and unexpected interactions. This translates to less debugging and a smoother development process. * **Increased Maintainability:** A clean and modular design translates to cleaner code. This makes the codebase easier to maintain, update, and scale in the future, saving developers time and headaches down the line. * **Improved Communication:** Design acts as a common language between developers, designers, and product managers. Clear design documentation and prototypes facilitate communication and prevent misunderstandings. * **Happier Users, Happier Developers:** When users can easily navigate and interact with an application, everyone wins. Design that prioritizes usability leads to a more positive user experience, reducing developer support requests and frustration. **Design Hacks for Developers:** * **Learn the Basics of Design:** A fundamental understanding of design principles like user experience (UX), user interface (UI), and visual hierarchy can significantly improve your development process. 
* **Embrace Design Tools:** Many design tools offer features like prototyping and design handoff, allowing developers to get a feel for the user journey and identify potential issues early on.
* **Collaborate with Designers:** Don't see designers as separate entities. Build strong working relationships with designers and involve them throughout the development process.
* **Focus on Usability Testing:** Actively participate in usability testing sessions. Observe how users interact with the design and use this feedback to inform your development decisions.

**Design: An Investment in Efficiency**

Investing time in understanding and applying design principles is not a detour; it's a shortcut. By embracing design as the ultimate developer hack, you can streamline your workflow, create more user-friendly applications, and ultimately, become a more well-rounded and valuable developer.

**So, the next time you're looking for a way to boost your development efficiency, don't just reach for another code snippet. Consider the power of design. It might just be the ultimate developer hack you've been overlooking.**
shieldstring
1,883,571
Four Lessons I Wish I Knew Before Becoming a Software Engineer
I originally posted this post on my blog a long time ago in a galaxy far, far away. It has been more...
27,567
2024-07-01T05:00:00
https://canro91.github.io/2022/12/12/ThingsToKnowBeforeBeingSoftwareEngineer/
career, careerdevelopment, beginners, softwareengineering
_I originally posted this post on [my blog](https://canro91.github.io/2022/12/12/ThingsToKnowBeforeBeingSoftwareEngineer/) a long time ago in a galaxy far, far away._

It has been more than 10 years since I started working as a Software Engineer. I began designing reports by hand using iTextSharp. And by hand, I mean drawing lines and pixels on a blank canvas. Arrggg! I used Visual Studio 2010 and learned about LINQ for the first time those days. Then I moved to some sort of full-stack role writing DotNetNuke modules with Bootstrap and Knockout.js. In more recent years, I switched to work as a backend engineer. I got tired of getting feedback on colors, alignment, and other styling issues. That's not the work I enjoy doing.

If I could start all over again, these are the four lessons I wish I had known before becoming a Software Engineer.

## 1. Find a Way To Stand Out

Learning a second language is a perfect way to stand out. I'm a bit biased since language learning is one of my hobbies. For most of us, standing out means learning English as a second language. A second language opens doors to new markets, professional relationships, and job opportunities. And, you can brag about a second language on your CV. After an interview, you can be remembered for the languages you speak. "Ah! The guy who speaks languages."

## 2. Never Stop Learning

Let's be honest. University will teach you lots of subjects. Probably, you won't need most of them, and the ones you do need, you'll have to study on your own. You will have to study books, watch online conferences, and read blog posts. Never stop learning! That will keep you in the game in the long run. But, it can be daunting if you try to learn everything about everything. "Learn something about everything, and everything about something," says popular wisdom. Libraries and frameworks come and go. Stick to the principles.

## 3. Have an Escape Plan

There is no safe place to work. Period! Full stop!
Companies lay off employees without any further notice or apparent reason. You can get seriously injured or sick. You won't be able to work forever. If you're reading this from the future, ask your parents or grandparents about the year 2020. Lots of people lost their jobs or got their salaries cut in half in a few days. And there was nothing they could do about it.

Have an escape plan. A side income, your own business, a hobby you can turn into a profitable idea. You name it! Apart from an escape plan, have an emergency fund. The book "The Simple Path to Wealth" calls emergency funds "F-you" money. Keep enough savings in your account to avoid worrying about when to leave a job or when the choice isn't yours.

## 4. Have an Active Online Presence

If I could do something different, I would have built an active online presence way earlier. Be active online. Have a blog, a LinkedIn profile, or a professional profile on any other social network. Use social networks to your advantage. In the beginning, you might think you don't know enough to start writing. But you can share what you learn, the resources you use to learn, and your sources of inspiration. You can learn in public and show your work.

Voilà! These are the four lessons I wish I had known before starting a software engineering career. Remember, every journey is different and we're all figuring out life. In any case,

> "Your career is your responsibility, not your employer's"

I learned that from The Clean Coder.

***

_[Join my free 7-day email course to refactor your software engineering career now.](https://imcsarag.gumroad.com/l/careerlessonsfromthetrenches)_

_Happy coding!_
canro91
1,907,135
Measuring Community Health: The Metrics That Actually Matter for Startup DevTools
So, you've built an awesome developer tool and _started a community. _That's amazing! But now comes...
0
2024-07-01T04:56:06
https://dev.to/swati1267/measuring-community-health-the-metrics-that-actually-matter-for-startup-devtools-529k
community, marketing, contentwriting, devrel
_So, you've built an awesome developer tool and [started a community](https://www.doc-e.ai/post/the-lean-startups-guide-to-developer-engagement-how-to-build-a-thriving-community-with-limited-resources). That's amazing! But now comes the real challenge: how do you know if your community is thriving?_

_As a startup, it's easy to get caught up in vanity metrics like the number of members or social media followers. But those numbers don't always tell the whole story. What you really need are insights that help you understand if your community is healthy, engaged, and helping you grow. That's where community health metrics come in. By tracking the right metrics, you can get a clear picture of how your community is doing and identify areas for improvement. This helps you make smart decisions about how to invest your time and resources, even on a tight budget._

**Why Community Health Matters (More Than Just Bragging Rights)**

A healthy developer community isn't just about having a lot of members. It's about creating a space where developers feel valued, supported, and empowered to learn and grow together. This kind of community can do wonders for your startup:

**Turbocharge Product Adoption**: Engaged developers are more likely to become loyal users and spread the word to their network.

**Unleash a Feedback Goldmine**: Active communities are full of insights that can help you refine your product and build features that developers actually want.

**Build a Loyal Army of Advocates**: Happy developers become your biggest fans, singing your praises and attracting new users.

**Level Up Your Support Game**: A strong community helps answer questions and troubleshoot issues, freeing up your team to focus on other priorities.

**The Metrics That Matter Most (And Why They're Your Secret Weapon)**

Forget about vanity metrics.
Here's what you should be tracking to get a real pulse on your community's health:

1. **Active Users**: This is the number of developers who are actively participating in your community. Are they asking questions, sharing ideas, and helping others out? A high number of active users is a good sign your community is buzzing with energy.
2. **New Member Growth**: Is your community attracting new people? This is a key indicator of your overall reach and appeal. A healthy community should be growing steadily over time.
3. **Time to First Response (TTFR)**: How long does it take for someone to answer a question or respond to a post? Quick responses show that your community is supportive and engaged.
4. **Sentiment**: Are people generally positive or negative about your product and community? You can get a feel for this by reading comments and conversations, but tools like Doc-E.ai can automatically analyze sentiment for you, giving you a clear picture of how people feel.
5. **Contributor Ratio**: How many members are actively contributing to the community? Are a few people doing all the heavy lifting, or is there a good mix of voices? A healthy community has a balanced mix of contributors and consumers.

**Bonus Tip**: Doc-E.ai can help you track all of these metrics and more, giving you valuable insights to make data-driven decisions about your community.

**Taking Action: How to Improve Your Community Health Score**

Once you start tracking these metrics, you'll be able to identify areas where your community might be struggling. Here are a few tips to boost engagement and create a healthier, happier space:

**Create High-Quality Content**: Offer tutorials, guides, and resources that address your developers' specific needs and pain points.

**Spark Conversations**: Ask thought-provoking questions, run polls, and host AMAs (Ask Me Anything) sessions.

**Recognize and Reward Contributors**: Highlight awesome community members and give them shout-outs, badges, or other rewards.
**Make it Easy to Participate**: Make sure your community platform is easy to use and navigate. Consider adding features like gamification to make participation more fun.

**Be Responsive and Supportive**: Answer questions quickly, offer help when needed, and create a welcoming environment where everyone feels valued.

**Ready to take your developer community to the next level?** Try Doc-E.ai for free and see how it can help you track metrics, gather insights, and [create a thriving community](https://www.doc-e.ai/post/the-ultimate-guide-to-developer-engagement-and-community-building-unlocking-the-power-of-developer-centric-growth) that drives your business forward.
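As a concrete illustration of the TTFR metric described above: once you have a question's timestamp and the timestamp of its first reply, the metric is simple arithmetic. The sketch below assumes a hypothetical data shape (`askedAt` / `firstReplyAt` ISO timestamps); it is not tied to any particular community platform's API.

```javascript
// Median time-to-first-response (TTFR) in minutes, over a list of threads.
// `askedAt` and `firstReplyAt` are hypothetical field names for this example.
function medianTTFRMinutes(threads) {
  const deltas = threads
    .filter((t) => t.firstReplyAt) // ignore still-unanswered threads
    .map((t) => (new Date(t.firstReplyAt) - new Date(t.askedAt)) / 60000)
    .sort((a, b) => a - b);
  if (deltas.length === 0) return null;
  const mid = Math.floor(deltas.length / 2);
  // Even count: average the two middle values; odd count: take the middle one.
  return deltas.length % 2 ? deltas[mid] : (deltas[mid - 1] + deltas[mid]) / 2;
}

const threads = [
  { askedAt: "2024-07-01T10:00:00Z", firstReplyAt: "2024-07-01T10:30:00Z" }, // 30 min
  { askedAt: "2024-07-01T11:00:00Z", firstReplyAt: "2024-07-01T11:10:00Z" }, // 10 min
  { askedAt: "2024-07-01T12:00:00Z", firstReplyAt: null }, // unanswered
];
console.log(medianTTFRMinutes(threads)); // 20
```

Using the median rather than the mean keeps one very slow reply from distorting the picture of how responsive the community usually is.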
swati1267
1,907,134
What Is Staff Augmentation In Consulting?
Staff augmentation in consulting refers to a flexible outsourcing strategy where a company hires...
0
2024-07-01T04:55:49
https://dev.to/bytesfarms/what-is-staff-augmentation-in-consulting-1heb
staff, augmentation, webdev, javascript
Staff augmentation in consulting refers to a flexible outsourcing strategy where a company hires external consultants, specialists, or temporary workers to fill specific roles or skill gaps within their organization. This approach allows businesses to scale their teams quickly, meet project demands, and access specialized expertise without the long-term commitment and costs associated with hiring full-time employees.

## Benefits of Staff Augmentation

**Scalability:** Businesses can easily scale their workforce up or down based on project demands. This flexibility allows for efficient resource management without the constraints of long-term employment contracts.

**Access to Specialized Skills:** Many projects require niche expertise that may not be available internally. Staff augmentation allows companies to hire experts with specific skills or industry knowledge for the duration of the project.

**Cost Efficiency:** By hiring temporary consultants, companies avoid the costs associated with recruiting, training, and providing benefits to full-time employees. This can result in significant cost savings, especially for short-term or specialized projects.

**Reduced Time to Market:** Bringing in external experts can speed up project timelines, as these professionals often come with the necessary skills and experience to hit the ground running. This is particularly valuable in fast-paced industries where time to market is crucial.

**Flexibility and Control:** Unlike traditional outsourcing, staff augmentation gives companies more control over their projects. The augmented staff work directly under the company's management, ensuring that the organization's standards and processes are maintained.

## Use Cases for Staff Augmentation

**IT and Software Development:** Companies often need additional developers, analysts, or IT specialists for specific projects such as system upgrades, software development, or cybersecurity initiatives.
**Healthcare:** Hospitals and healthcare providers may require temporary medical staff, IT experts for electronic health record (EHR) implementations, or consultants for regulatory compliance.

**Finance and Accounting:** Financial institutions might need auditors, accountants, or compliance experts during peak periods such as tax season or during financial audits.

**Marketing and Sales:** Marketing agencies might bring in specialized talent like digital marketing experts, content creators, or market analysts to support specific campaigns or projects.

**Manufacturing:** Companies in the manufacturing sector might require engineers, project managers, or quality assurance experts for new product launches or process improvements.

## How Staff Augmentation Works

**Assessment of Needs:** The first step is identifying the specific skills and roles needed for the project. This involves assessing the current team's capabilities and determining where gaps exist.

**Sourcing Talent:** Companies can source augmented staff through staffing agencies, consulting firms, or freelance platforms. The selection process typically includes interviews, skill assessments, and background checks to ensure a good fit.

**Onboarding:** Once the consultants are selected, they are onboarded into the company's processes and systems. This step ensures that they understand the project's goals, timelines, and any specific company policies or procedures.

**Integration and Management:** The augmented staff work alongside the existing team, reporting to the company's managers and following the company's workflows. This integration is crucial for maintaining cohesion and ensuring that project objectives are met.

**Project Execution:** With the augmented team in place, the project is executed according to the defined plan. The flexibility of staff augmentation allows for adjustments as needed, ensuring that the project stays on track.
**Evaluation and Offboarding:** Upon project completion, the performance of the augmented staff is evaluated. Successful projects often lead to ongoing relationships, with consultants being rehired for future needs. The offboarding process includes knowledge transfer and ensuring that all project documentation is handed over.

## Challenges of Staff Augmentation

While staff augmentation offers many benefits, it also comes with challenges:

**Integration:** Ensuring that temporary staff integrate smoothly with the existing team can be challenging. Clear communication and defined roles are essential.

**Cultural Fit:** Temporary staff may not always align with the company culture, which can affect team dynamics and productivity.

**Dependence on External Talent:** Over-reliance on augmented staff can lead to a lack of skill development within the permanent team. It's important to balance the use of external and internal resources.

**Quality Control:** Maintaining consistent quality of work can be difficult when managing a diverse team of permanent and temporary staff.

## Key aspects of staff augmentation in consulting include:

**Flexibility:** Companies can adjust the size of their workforce based on project needs, scaling up or down as necessary.

**Access to Expertise:** Organizations can bring in consultants with specialized skills or industry knowledge that may not be available in-house.

**Cost-Effectiveness:** By hiring temporary staff, companies can reduce overhead costs related to recruitment, training, and benefits for full-time employees.

**Speed:** Staff augmentation allows for quick onboarding of professionals, enabling companies to respond promptly to market demands or project deadlines.

**Focus:** Internal teams can concentrate on core business activities while augmented staff handle specific tasks or projects.
Staff augmentation in consulting provides businesses with a strategic way to enhance their capabilities, improve efficiency, and maintain competitiveness in a dynamic market environment.

**Conclusion**

Staff augmentation in consulting is a powerful strategy for businesses looking to enhance their capabilities, manage costs, and maintain flexibility in a dynamic market. By leveraging external expertise, companies can address skill gaps, accelerate project timelines, and achieve their strategic objectives. However, careful planning and management are crucial to ensure successful integration and project execution.

Read More: [What Is Staff Augmentation In Consulting?](https://bytesfarms.com/what-is-staff-augmentation-in-consulting/)
bytesfarms
1,906,619
1/30 Days of Data Structure and Algorithm
Day 1 Title: Finding indices that sum to a target in JavaScript (Four...
0
2024-07-01T04:54:07
https://dev.to/rajusaha/130-days-of-data-structure-and-algorithm-2nho
javascript, algorithms, datastructures, learning
## Day 1

**Title: Finding indices that sum to a target in JavaScript (Four Methods)**

**Introduction**

This post explains four approaches to the classic algorithm problem: finding two indices in an array whose values add up to a given target. We'll cover methods for both unsorted and sorted arrays:

- Brute Force (Unsorted)
- Hash Map (Unsorted)
- Binary Search (Sorted)
- Two Pointers (Sorted)

We'll analyze their time complexities and discuss their suitability for different scenarios.

**The Problem:**

Given an array of numbers (`numberList`) and a target sum (`target`), write a function that returns the indices of two numbers within the array that add up to `target`. There should be only one such pair, and the first index (`index1`) must be less than the second index (`index2`).

**Method 1: Brute Force (Unsorted - O(n^2) Time Complexity)**

The brute force approach iterates through each element in the array and compares it with every subsequent element. If the sum matches the target, we return the corresponding indices.

```javascript
function SumOfTwo(numberList, target) {
  for (let i = 0; i < numberList.length - 1; i++) {
    for (let j = i + 1; j < numberList.length; j++) {
      if (numberList[i] + numberList[j] === target) {
        return [i, j];
      }
    }
  }
  return null; // No pair found
}
```

Time Complexity: **O(n^2)**. The nested loop results in quadratic time complexity. This approach can be slow for larger datasets.

**Method 2: Hash Map (Unsorted - O(n) Time Complexity)**

The hash map approach leverages a Map object to store seen numbers and their indices efficiently. We iterate through the array, calculating the complement (`x`) for each element (the number needed to reach the target). If the complement exists in the map (meaning we've seen its pair already), we return the indices of that pair. Otherwise, we add the current element and its index to the map.
```javascript
function SumOfTwoMap(numberList, target) {
  const mapObj = new Map();
  for (let i = 0; i < numberList.length; i++) {
    const x = target - numberList[i];
    if (mapObj.has(x)) {
      return [mapObj.get(x), i];
    }
    mapObj.set(numberList[i], i);
  }
  return null; // No pair found
}
```

Time Complexity: **O(n)**. The loop iterates through the array once, and accessing/updating the map is typically constant time with a good implementation. This makes the hash map approach significantly faster for larger arrays.

**Method 3: Binary Search (Sorted - O(n log n) Time Complexity)**

This approach leverages binary search to efficiently find the complement (`x`) of the current element that would add up to the target. However, it requires the array to be sorted beforehand. Note that sorting rearranges the elements, so the returned indices refer to positions in the sorted array, not the original input. Here's the implementation:

```javascript
function binarySearch(arr, target, startIndex) {
  let left = startIndex;
  let right = arr.length - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    if (arr[mid] === target) {
      return mid;
    } else if (arr[mid] < target) {
      left = mid + 1;
    } else {
      right = mid - 1;
    }
  }
  return -1;
}

function twoSumBinarySearch(numberList, target) {
  numberList.sort((a, b) => a - b); // Sort the array (indices now refer to the sorted order)
  for (let i = 0; i < numberList.length; i++) {
    const num2 = target - numberList[i];
    // Start searching after index i to avoid reusing the same element
    const complementIndex = binarySearch(numberList, num2, i + 1);
    if (complementIndex !== -1) {
      return [i, complementIndex];
    }
  }
  return null; // No pair found
}
```

Time Complexity: **O(n log n)**, from both the initial sort and the per-element binary searches. This is efficient when the array is already sorted; otherwise, the sort itself adds to the cost.

**Method 4: Two Pointers (Sorted - O(n) Time Complexity)**

This method exploits the sorted nature of the array by using two pointers, one at the beginning (`i`) and another at the end (`j`). We iterate while `i` is less than `j`:

- If `numberList[i] + numberList[j]` is equal to `target`, we return the indices.
- If the sum is less than `target`, we move the left pointer (`i`) forward to increase the sum.
- If the sum is greater than `target`, we move the right pointer (`j`) backward to decrease the sum.

```javascript
function twoSumPointers(numberList, target) {
  let i = 0;
  let j = numberList.length - 1;
  while (i < j) {
    const sum = numberList[i] + numberList[j];
    if (sum === target) {
      return [i, j];
    } else if (sum < target) {
      i++;
    } else {
      j--;
    }
  }
  return null; // No pair found
}
```

Time Complexity: **O(n)**. This approach leverages the sorted nature of the array and has a linear time complexity, making it very efficient for sorted arrays.

Example:

```javascript
console.log(SumOfTwo([3, 9, 10, 12], 12)); // Output: [0, 1]
console.log(SumOfTwoMap([2, 11, 31, 50], 42)); // Output: [1, 2]
console.log(twoSumBinarySearch([2, 7, 11, 15], 22)); // Output: [1, 3]
console.log(twoSumPointers([2, 7, 11, 15], 22)); // Output: [1, 3]
```

**Additional Considerations:**

- Error handling: You can add checks to handle cases where no pair is found or the input is invalid.
- Optimization: For specific use cases, you might consider more advanced data structures or algorithms.
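As a small illustration of the error-handling point above, here is a sketch of a validation wrapper around the hash-map version. The name `safeSumOfTwo` is invented for this example and is not part of the original post:

```javascript
// Hash-map two-sum, as in Method 2 above.
function SumOfTwoMap(numberList, target) {
  const mapObj = new Map();
  for (let i = 0; i < numberList.length; i++) {
    const x = target - numberList[i];
    if (mapObj.has(x)) return [mapObj.get(x), i];
    mapObj.set(numberList[i], i);
  }
  return null; // No pair found
}

// Hypothetical wrapper: validates input before delegating.
function safeSumOfTwo(numberList, target) {
  if (!Array.isArray(numberList) || numberList.some((n) => typeof n !== "number")) {
    throw new TypeError("numberList must be an array of numbers");
  }
  if (typeof target !== "number") {
    throw new TypeError("target must be a number");
  }
  return SumOfTwoMap(numberList, target);
}

console.log(safeSumOfTwo([3, 9, 10, 12], 12)); // [0, 1]
```

Failing fast with a `TypeError` keeps the core algorithm simple, while callers get a clear signal for bad input instead of a silent `null`.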
rajusaha
1,864,087
"🧠Amazon Bedrock's Foundation Models: The Backbone of Gen-AI⚡"
Hello There!!! Called Sarvar, I am an Enterprise Architect, Currently working at Deloitte. With years...
0
2024-07-01T04:49:25
https://dev.to/aws-builders/aws-foundation-models-the-backbone-of-gen-ai-2g8c
Hello There!!! Called Sarvar, I am an Enterprise Architect, currently working at Deloitte. With years of experience working on cutting-edge technologies, I have honed my expertise in Cloud Operations (Azure and AWS), Data Operations, Data Analytics, and DevOps. Throughout my career, I've worked with clients from all around the world, delivering excellent results, and going above and beyond expectations. I am passionate about learning the latest and trending technologies.

Today, we'll look at the foundation models of Amazon Bedrock, one of the most prominent services offered by Amazon Web Services. The foundation model is the core of Gen-AI. We'll take a quick look at what a foundation model is and then see all of the models that the Amazon Bedrock service has to offer. The list of foundation models in Amazon Bedrock is the primary topic of this post, and we'll look at each foundation model's details. Let's begin by discussing the Amazon Bedrock foundation models.

## **What is a Foundation Model:**

In machine learning, foundation models are large, pre-trained models that serve as the starting point for developing more specialized or task-specific models. These models are trained on extensive and diverse datasets to uncover broad patterns and information representations, capturing a comprehensive understanding of words, images, or other types of data. The primary goal of foundation models is to fine-tune them for specific tasks or domains by leveraging the knowledge gained during their general pre-training. This approach is particularly prevalent in fields like computer vision and natural language processing (NLP). Amazon Bedrock leverages these foundation models to revolutionize the creation of generative AI applications, offering a rich selection of FMs from leading AI innovators such as AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Mistral AI, and incorporating Retrieval-Augmented Generation for relevant, fine-tuned responses.
## **Let's see all the available Foundation Models in Amazon Bedrock:**

Amazon Bedrock offers access to a variety of powerful foundation models, each tailored for specific tasks and applications. Here are the main models available:

### AI21 Labs Jurassic

AI21 Labs' Jurassic models are designed for generating high-quality text in multiple languages. They are useful for content creation, summarization, and answering questions in various languages.

1. **Jurassic-2 Ultra**
   - **Primary Use**: Generates advanced and detailed text in multiple languages (English, Spanish, French, German, Portuguese, Italian, Dutch).
   - **Real-Time Example**: Ideal for creating complex reports, summarizing long documents, or generating drafts for finance, legal, and research fields.
   - **Pricing**: $0.0188 for every 1,000 words.
2. **Jurassic-2 Mid**
   - **Primary Use**: Produces high-quality text in various languages for a wide range of applications.
   - **Real-Time Example**: Great for writing blog posts, answering questions, and extracting important information from texts.
   - **Pricing**: $0.0125 for every 1,000 words.

### Anthropic Claude

Anthropic's Claude models are powerful text generators capable of handling large volumes of text and providing comprehensive analysis. They support multiple languages and are used for creative content, coding, and document comparison.

1. **Claude 2.1**
   - **Primary Use**: Can handle large volumes of text and understand multiple languages, making it useful for detailed analysis.
   - **Real-Time Example**: Used for analyzing trends, comparing lengthy documents, and creating comprehensive reports.
   - **Pricing**: $0.00800 for input, $0.02400 for output per 1,000 words.
2. **Claude 2.0**
   - **Primary Use**: Supports creative writing and coding, helping with content creation and technical support.
   - **Real-Time Example**: Useful for developing software applications, creating educational materials, or generating creative content.
   - **Pricing**: $0.00800 for input, $0.02400 for output per 1,000 words.
3. **Claude 1.3**
   - **Primary Use**: Assists in writing and providing advice, making it suitable for editing and coding support.
   - **Real-Time Example**: Helps in editing documents, writing code, and offering general advice.
   - **Pricing**: $0.00800 for input, $0.02400 for output per 1,000 words.
4. **Claude Instant**
   - **Primary Use**: Quickly generates responses, making it ideal for real-time applications.
   - **Real-Time Example**: Perfect for customer support, generating quick summaries, and creating content on the fly.
   - **Pricing**: $0.00163 for input, $0.00551 for output per 1,000 words.

### Cohere Command & Embed

Cohere's Command & Embed models specialize in text and chat generation. They are used for customer support, content creation, and semantic search. The models are efficient and adaptable for various business applications.

1. **Command**
   - **Primary Use**: Advanced text and chat generation in English, suitable for creating dynamic content.
   - **Real-Time Example**: Ideal for producing engaging marketing content, customer support chats, and media content.
   - **Pricing**: $0.0015 for input, $0.0020 for output per 1,000 words.
2. **Command Light**
   - **Primary Use**: Efficiently generates text in English, good for smaller tasks.
   - **Real-Time Example**: Useful for simple customer interactions, business communication, and small content tasks.
   - **Pricing**: $0.0003 for input, $0.0006 for output per 1,000 words.
3. **Embed – English**
   - **Primary Use**: Helps find and classify text, useful for organizing information.
   - **Real-Time Example**: Great for searching through large documents and organizing data.
   - **Pricing**: $0.0001 per 1,000 words.
4. **Embed – Multilingual**
   - **Primary Use**: Similar to Embed – English but supports many languages, making it versatile for global applications.
   - **Real-Time Example**: Useful for searching and organizing text in different languages for international use.
   - **Pricing**: $0.0001 per 1,000 words.

### Meta Llama 2

Meta's Llama 2 models are optimized for dialogue and language tasks. They are used for language translation, text classification, and creating detailed conversations for customer service and other applications.

1. **Llama-2-13b-chat**
   - **Primary Use**: Optimized for conversations and small-scale tasks like translation and classification.
   - **Real-Time Example**: Helps in translating languages, classifying text, and having detailed conversations.
   - **Pricing**: $0.00075 for input, $0.00100 for output per 1,000 words.
2. **Llama-2-70b-chat**
   - **Primary Use**: Enhanced for large-scale text generation, suitable for more detailed tasks.
   - **Real-Time Example**: Ideal for creating detailed customer service dialogues and comprehensive creative content.
   - **Pricing**: $0.00195 for input, $0.00256 for output per 1,000 words.

### Stable Diffusion

Stable Diffusion models are designed for generating high-quality images from text prompts. They are ideal for advertising, gaming, and media production, excelling in photorealism and artistic style creation.

1. **SDXL 1.0**
   - **Primary Use**: Creates high-quality, photorealistic images from text prompts.
   - **Real-Time Example**: Perfect for designing advertisements, creating game graphics, and producing media content.
   - **Pricing**: $0.04 per 1024×1024 image (standard), $0.08 per 1024×1024 image (premium).
2. **SDXL 0.8**
   - **Primary Use**: Converts text into images, suitable for creative asset development.
   - **Real-Time Example**: Useful for marketing, media production, and developing artistic visuals.
   - **Pricing**: $0.018 per 512×512 image (standard), $0.036 per 512×512 image (premium).

### Amazon Titan

Amazon's Titan models support text generation and text-to-numerical data conversion.
They are used for content creation, text classification, and combining text and images for e-commerce and digital media applications.

1. **Titan Text Express**
   - **Primary Use**: High-performance text generation in over 100 languages, suitable for various tasks.
   - **Real-Time Example**: Ideal for content creation in education and marketing, writing detailed reports.
   - **Pricing**: $0.0008 for input, $0.0016 for output per 1,000 words.
2. **Titan Text Lite**
   - **Primary Use**: Cost-effective text generation in English, efficient for simpler tasks.
   - **Real-Time Example**: Great for writing summaries, creating marketing copy, and general business communication.
   - **Pricing**: $0.0003 for input, $0.0004 for output per 1,000 words.
3. **Titan Text Embeddings**
   - **Primary Use**: Converts text into numerical data for easier analysis and similarity comparison.
   - **Real-Time Example**: Useful for analyzing and comparing large sets of data.
   - **Pricing**: $0.0001 per 1,000 words.
4. **Titan Multimodal Embeddings**
   - **Primary Use**: Combines text and images for more accurate searches and recommendations.
   - **Real-Time Example**: Ideal for e-commerce product searches and digital media recommendations.
   - **Pricing**: $0.0008 per 1,000 words; $0.00006 per image.
5. **Titan Image Generator**
   - **Primary Use**: Creates high-quality images from text prompts, useful in various creative industries.
   - **Real-Time Example**: Generates images for advertising, e-commerce, and entertainment.
   - **Pricing**: $0.008 per 512×512 image; $0.01 per 1024×1024 image.

### Mistral AI Models

Mistral AI offers a range of advanced language models designed for various text generation and processing tasks. Here are the key models available:

1. **Mistral Large**
   - **Primary Use**: Generates advanced and detailed text with top-tier reasoning capabilities.
   - **Real-Time Example**: Ideal for precise instruction following, text summarization, translation, and complex multilingual reasoning tasks.
Also suitable for math and coding tasks, including code generation.
   - **Pricing**: $0.004 for input, $0.012 for output per 1,000 words.
2. **Mistral Small**
   - **Primary Use**: Generates efficient text for high-volume, low-latency tasks.
   - **Real-Time Example**: Great for bulk tasks like classification, customer support, and text generation.
   - **Pricing**: $0.001 for input, $0.003 for output per 1,000 words.
3. **Mistral 8X7B Instruct**
   - **Primary Use**: Generates text for summarization, structuring, question answering, and code completion.
   - **Real-Time Example**: Useful for summarizing large documents, structuring information, answering complex questions, and completing code.
   - **Pricing**: $0.00045 for input, $0.0007 for output per 1,000 words.
4. **Mistral 7B Instruct**
   - **Primary Use**: Generates text for summarization, structuring, question answering, and code completion.
   - **Real-Time Example**: Effective for summarizing documents, organizing information, answering questions, and generating code snippets.
   - **Pricing**: $0.00015 for input, $0.0002 for output per 1,000 words.

> _**Conclusion: Amazon Bedrock offers several powerful foundation models. AI21 Labs' Jurassic models generate high-quality, multilingual text for detailed reports and content creation. Anthropic's Claude models handle large text volumes, suitable for creative content and coding. Cohere's Command & Embed models are efficient for customer support and semantic search. Meta's Llama 2 models excel in dialogue and language tasks, like translation and classification. Stable Diffusion models create high-quality images from text, ideal for advertising and media. Amazon's Titan models provide versatile text generation and multimodal search capabilities for e-commerce and digital media.**_

— — — — — — — —

_**Here is the End!**_

**Thank you for taking the time to read my article. I hope you found this article informative and helpful.
As I continue to explore the latest developments in technology, I look forward to sharing my insights with you. Stay tuned for more articles like this one that break down complex concepts and make them easier to understand.**

_Remember, learning is a lifelong journey, and it’s important to keep up with the latest trends and developments to stay ahead of the curve. Thank you again for reading, and I hope to see you in the next article!_

**Happy Learning!**
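As a rough illustration of how the per-1,000-words rates listed above translate into request costs, here is a small back-of-the-envelope sketch. The helper function and the word counts are my own and purely illustrative; the rates are the Llama-2-13b-chat figures quoted earlier:

```python
# Hypothetical helper: estimate a request's cost from the per-1,000-words
# rates quoted above (e.g. Llama-2-13b-chat: $0.00075 in, $0.00100 out).
def estimate_cost(input_words, output_words, in_rate, out_rate):
    return input_words / 1000 * in_rate + output_words / 1000 * out_rate

# Example: 2,000 words in, 1,000 words out on Llama-2-13b-chat.
cost = estimate_cost(2000, 1000, 0.00075, 0.00100)
print(round(cost, 5))  # 0.0025
```

The same arithmetic applies to any of the text models above; only the two rates change.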
sarvar_04
1,907,131
Hoppscotch v2024.6.0: Collection Runner on CLI, Team Invite Links, Client Certificates, and more
Hoppscotch is a super simple API client. You can easily get started with Hoppscotch using our web...
0
2024-07-01T04:44:50
https://dev.to/hoppscotch/hoppscotch-v202460-collection-runner-on-cli-team-invite-links-client-certificates-and-more-1k4n
javascript, webdev, opensource, api
Hoppscotch is a super simple API client. You can easily get started with Hoppscotch using our web application or [desktop app](https://hoppscotch.com/download), and collaborate with your team via our cloud! Now, if you prefer hosting on your own, check out Hoppscotch [Self-Host Editions](https://hoppscotch.com/pricing) and deploy Hoppscotch on your servers.

With our latest release v2024.6.0, we’re taking a step forward in building out our Hoppscotch ecosystem of products as well as bringing some cool features across the cloud API client and our self-hosted products!

## Connect your CLI with the API client

![CLI collections runs on Hoppscotch](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ue7rqt5yjlqus8x4iibt.png)

Hoppscotch CLI can now connect with your API client and run collections that are present in shared workspaces. To run the collections present in your API client, you need to create a personal access token! Once the personal access token is created, you can right-click any collection, select Run Collection, copy the command you see, and run it on the CLI!

To run a collection present in the Hoppscotch cloud, the command looks like the one below:

```bash
hopp test -e <environment_id> -d <delay_in_ms> <hoppscotch_collection_id> --token <access_token>
```

You can also run a collection present on your self-hosted instance by passing your server URL as an argument:

```bash
hopp test -e <environment_id> -d <delay_in_ms> <hoppscotch_collection_id> --token <access_token> --server <server_url>
```

[Click here to read more about running a collection](https://docs.hoppscotch.io/documentation/clients/cli/overview#running-collections-present-on-the-api-client)

## Client Certificates on the Hoppscotch App

We’ve brought the ability to add client certificates to authenticate requests on the Hoppscotch desktop app! You can now use a **`.pem`** or **`.pfx/.pkcs12`** certificate to authenticate your requests to a configured domain.
Once set up, these certificates are applied whenever you make HTTP requests to your specified domains.

[Read more about setting up client certificates on the Hoppscotch Documentation](https://docs.hoppscotch.io/documentation/features/client-certificate)

![Client Certificates on Hoppscotch](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/emfejp04b9fkn5ibwz4e.png)

## Custom Banners

Custom banners allow the self-host admin to share important announcements, such as a scheduled maintenance or an instance upgrade, with the rest of your team! Banners are exclusive to Hoppscotch self-host enterprise.

![Custom banners on Hoppscotch](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dt65q5feys1p8kwt7oac.png)

## Active users

Self-host admins can now see the last active time of the users in your organization via the user management page on the Hoppscotch Admin dashboard, making it easy for you to manage inactive users on the app.

---

That’s it for the June release. A big thanks to all our contributors and supporters, and thank you so much for reading! If you’ve any feedback, contact us at **hello@hoppscotch.io**; we’d love to hear from you!
thetronjohnson
1,907,130
Complete frequency
Weekly Challenge 276 Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance...
0
2024-07-01T04:44:42
https://dev.to/simongreennet/complete-frequency-2fke
perl, python, theweeklychallenge
## Weekly Challenge 276

Each week Mohammad S. Anwar sends out [The Weekly Challenge](https://theweeklychallenge.org/), a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. It's a great way for us all to practice some coding.

[Challenge](https://theweeklychallenge.org/blog/perl-weekly-challenge-276/), [My solutions](https://github.com/manwar/perlweeklychallenge-club/tree/master/challenge-276/sgreen)

## Task 1: Complete Day

### Task

You are given an array of integers, `@hours`.

Write a script to return the number of pairs that form a complete day. A complete day is defined as a time duration that is an exact multiple of 24 hours.

### My solution

For this task, I could have used the [combinations](https://docs.python.org/3/library/itertools.html#itertools.combinations) generator to create the pairs, but that seems like overkill for this task. Instead I use a double loop. The outer loop - with the variable `i` - runs from 0 to two less than the number of items in the list. The inner loop - with the variable `j` - runs from one more than `i` to one less than the number of items in the list. This method ensures that we consider every possible pair.

I have a variable called `count` which records the number of pairs where the combination of hours is a multiple of 24.

```python
def complete_day(hours: list) -> int:
    count = 0
    items = len(hours)

    for i in range(items - 1):
        for j in range(i + 1, items):
            if (hours[i] + hours[j]) % 24 == 0:
                count += 1

    return count
```

### Examples

```bash
$ ./ch-1.py 12 12 30 24 24
2

$ ./ch-1.py 72 48 24 5
3

$ ./ch-1.py 12 18 24
0
```

## Task 2: Maximum Frequency

### Task

You are given an array of positive integers, `@ints`.

Write a script to return the total number of elements in the given array which have the highest frequency.
### My solution

For this task, I use the [Counter](https://docs.python.org/3/library/collections.html#collections.Counter) class to turn the list into a dict of frequencies. The key is the integer, the value is the number of times it occurs. Perl doesn't have a similar function, so I do this manually in my Perl solution.

The steps I take are as follows:

1. Calculate the frequency of each integer, and store this in the `freq` dict (hash in Perl).
1. Find the maximum frequency, and store this as `max_freq`.
1. Count the number of elements in the `freq` dict that have `max_freq`. This is stored as the `elements` variable.
1. Return the product of the `max_freq` and `elements` variables. This represents the number of items in the original array that have the highest frequency.

```python
def maximum_frequency(ints: list) -> int:
    freq = Counter(ints)
    max_freq = max(freq.values())
    elements = sum(1 for v in freq.values() if v == max_freq)
    return elements * max_freq
```

### Examples

```bash
$ ./ch-2.py 1 2 2 4 1 5
4

$ ./ch-2.py 1 2 3 4 5
5
```
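As noted above, Perl has no `Counter` equivalent, so the Perl solution builds the frequency hash by hand. That Counter-free approach can be sketched in Python too (the function name here is mine, for illustration):

```python
def maximum_frequency_manual(ints: list) -> int:
    # build the frequency table by hand, as the Perl hash does
    freq = {}
    for n in ints:
        freq[n] = freq.get(n, 0) + 1
    max_freq = max(freq.values())
    elements = sum(1 for v in freq.values() if v == max_freq)
    return elements * max_freq

print(maximum_frequency_manual([1, 2, 2, 4, 1, 5]))  # 4
print(maximum_frequency_manual([1, 2, 3, 4, 5]))     # 5
```

It produces the same results as the `Counter` version above.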
simongreennet
1,907,129
6 repos used by the top 1% of Next.js dev 🏆
Struggling to find good projects for learning or building something cool in Next.js? In this...
0
2024-07-01T04:44:36
https://dev.to/manojgohel/6-repos-used-by-the-top-1-of-nextjs-dev-efe
javascript, webdev, programming, nextjs
Struggling to find good projects for learning or building something cool in Next.js? In this article, I will help you discover the best projects that top developers use to grow faster and get good career opportunities. Ready to find some of the coolest projects in Next.js? Let’s start:

## 1\. Extrapolate

![](https://miro.medium.com/v2/resize:fit:700/1*nUnPZg9f4IAZQsZEMY5sow.png)

**Extrapolate** uses cutting-edge AI to give you a glimpse into the future. Curious about how you’ll look in 10, 20, or even 90 years? Simply upload a photo and watch as the AI generates a fascinating age progression. With over 371.9K photos generated, Extrapolate offers a fun and intriguing way to satisfy your curiosity about aging. Created by Steven and Ajay, this innovative project ensures your data is safe and gives you full control over your account. Try it out and see your future self today!

[**GitHub Link**](https://github.com/steven-tey/extrapolate)

## 2\. Taxonomy

![](https://miro.medium.com/v2/resize:fit:700/1*X5As0Upu7nYz3Fwy4Z2Emw.png)

**Taxonomy** is an innovative example application developed using Next.js 13 server components, designed to showcase modern web app capabilities. Led by Shadcn and hosted on Vercel, Taxonomy integrates key features like authentication with NextAuth.js and subscription services via Stripe, and utilizes Prisma for ORM. Built in the app directory structure, it incorporates React 18’s server and client components alongside UI elements from Radix UI and Tailwind CSS styling. The project also includes a comprehensive blog and documentation site powered by Contentlayer and MDX, all proudly open source on GitHub.

[**GitHub Link**](https://github.com/shadcn-ui/taxonomy)

## 3\. Dub.co

![](https://miro.medium.com/v2/resize:fit:700/1*xnjg07LBTHkO8E_AascufA.png)

**Dub.co** is a cutting-edge open-source link management platform designed for modern marketing teams.
Offering much more than basic link shortening, Dub.co equips users with powerful analytics, custom branded links, QR code generation, and seamless team collaboration. Trusted by top companies like Vercel and Prisma, it enables marketers to track detailed metrics, personalize links, and optimize campaigns with ease. With an intuitive user interface and robust features, Dub.co is the ultimate tool to supercharge your marketing efforts.

[**GitHub Link**](https://github.com/dubinc/dub)

**Need More Amazing Projects 👇**

[**200 x Amazing Projects for Developers**](https://mohitvaswani21.gumroad.com/l/50-amazing-products)
[**300 x Rust Projects**](https://mohitvaswani21.gumroad.com/l/50-rust-projects)
[**200 x Amazing NextJs Projects**](http://mohitvaswani21.gumroad.com/l/50-nextjs-projects)
[**50 x TypeScript Projects**](http://mohitvaswani21.gumroad.com/l/50typescriptprojects)

## 4\. QrGPT

![](https://miro.medium.com/v2/resize:fit:700/1*FlEitkP-Le-JEEz2racrSg.png)

**QrGPT** makes generating stylish AI-powered QR codes quick and easy. In just seconds, you can create unique QR codes for free, perfect for any need. Proudly open-source, QrGPT’s code is available on GitHub, reflecting the collaborative spirit behind its creation by Hassan and Kevin. Whether for personal use or business, QrGPT combines simplicity and innovation to help you generate the perfect QR code effortlessly.

[**Github Link**](https://github.com/Nutlope/qrGPT)

## 5\. OpenBio

![](https://miro.medium.com/v2/resize:fit:700/1*BYcS0ec0jA6GPrPVcTHMqg.png)

**OpenBio** is an open-source link-in-bio page builder that allows you to create stunning link-in-bio pages for free. Start with our forever-free plan offering essential features like one link, basic analytics, and a custom domain. As you grow, upgrade to the Pro plan for unlimited links, advanced analytics, and priority support for just $9 per month.
With OpenBio, you can easily manage and customize your online presence, backed by the transparency and collaboration of an open-source community. Created by developers passionate about simplicity and accessibility, OpenBio helps you make the most of your digital footprint.

[**GitHub Link**](https://github.com/vanxh/openbio)

## 6\. TurboSeek

![](https://miro.medium.com/v2/resize:fit:700/1*LZnHE6urXxHR3yb3N7STCQ.png)

**TurboSeek** is an innovative AI search engine inspired by Perplexity, designed to deliver fast and accurate search results. As an open-source project powered by Together.ai, TurboSeek leverages advanced technologies like Mixtral 8x7B, Llama-3, and Bing’s search API. It processes user queries by retrieving and contextualizing top search results and then providing insightful answers and related questions. Built with Next.js and Tailwind, TurboSeek ensures smooth performance and observability with Helicone and analytics via Plausible. Explore the future of search with TurboSeek’s cutting-edge AI capabilities and collaborative open-source spirit.

[**GitHub Link**](https://github.com/Nutlope/turboseek)

Thanks for Reading!
manojgohel
1,907,128
Outdoor vs Indoor Sauna: Which Is Better to Buy?
Upon my experience with both types of saunas—indoor and outdoor—I discovered that although indoor...
0
2024-07-01T04:43:58
https://dev.to/jack71180/outdoor-vs-indoor-sauna-which-is-better-to-buy-5ejf
beginners, healthydebate, webdev, learning
In my experience with both types of saunas—indoor and outdoor—I discovered that although indoor saunas are more convenient and easily integrated into the home, outdoor saunas allow greater flexibility and a distinct connection to nature. In the end, the optimal option will depend on your tastes and the amount of room you have available.

Like designing your house or organizing a meal, selecting an indoor sauna is a matter of taste. There are many customization options available to let your cabin express your personality. Everything is up to you to choose, including the form and style of the construction. One of the most important choices you’ll have to make is whether to build an outdoor or indoor sauna. We offer complete instructions in our website article to assist you in creating the ideal home sauna that suits your preferences.

To read the full blog, click [Outdoor vs Indoor Sauna](https://sunasusa.com/outdoor-vs-indoor-sauna-which-is-better-to-buy/).
jack71180
1,906,926
How to Use Bcrypt for Password Hashing in Node.js
Page Content Introduction to Bcrypt Getting Started with Bcrypt Using Bcrypt with...
0
2024-07-01T04:43:28
https://dev.to/mbugua70/how-to-use-bcrypt-for-password-hashing-in-nodejs-1l7e
node, mongodb, mongoose, backenddevelopment
## Page Content

* [Introduction to Bcrypt](#introduction-to-bcrypt)
* [Getting Started with Bcrypt](#getting-started-with-bcrypt)
* [Using Bcrypt with Mongoose Pre Save Middleware](#using-bcrypt-with-mongoose-pre-save-middleware)
* [Implementing Password Comparison with Instance Methods in a Nodejs and Mongoose Application](#implementing-password-comparison-with-instance-methods-in-a-nodejs-and-mongoose-application)
* [Password Verification in Nodejs Using Bcrypt with Mongoose Instance Methods](#password-verification-in-nodejs-using-bcrypt-with-mongoose-instance-methods)
* [Conclusion](#conclusion)

## Introduction to Bcrypt

Securing user data has become paramount, and one of the most critical aspects of this security is password protection. This is where password hashing comes into play.

**Password hashing:** a process of transforming a plain text password into a fixed-length string of characters, typically a cryptographic hash.

**Bcrypt:** a library to help you hash passwords.

## Getting Started with Bcrypt

![Let's get started](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kch2h1a058usjm9dcqcd.gif)

To start using bcrypt in your Node.js application, you first need to install it. Below are the instructions for installing bcrypt:

`npm install bcrypt`

## Using Bcrypt with Mongoose Pre-Save Middleware

In this section, we will explore how to use bcrypt with Mongoose pre save middleware to securely hash passwords before saving them to the database. This approach ensures that plain text passwords are never stored in the database, enhancing the security of your Node.js application.

Before we begin, make sure you have installed both mongoose and bcrypt in your project:

`npm install mongoose bcrypt`

### Importing required modules

```
const mongoose = require('mongoose');
const bcrypt = require('bcrypt');
```

The pre save middleware is a function that runs before a document is saved to the database.
This middleware is an important place to hash the user's password:

```
// pre save middleware
UserSchema.pre("save", async function () {
  // hashing the password
  const salt = await bcrypt.genSalt(15);
  this.password = await bcrypt.hash(this.password, salt);
});
```

## Implementing Password Comparison with Instance Methods in a Node.js and Mongoose Application

In this section, we will discuss how to use instance methods in Mongoose to compare passwords using bcrypt. This approach is particularly useful for verifying user credentials during the login process.

Here is the code for the instance method, followed by an explanation of how it works:

```
UserSchema.methods.comparePassword = async function (mainpassword) {
  return await bcrypt.compare(mainpassword, this.password);
};
```

### Explanation

**Instance Method Definition**

- _UserSchema.methods.comparePassword_: This line defines a new instance method called comparePassword on the Mongoose schema UserSchema. Instance methods are functions that operate on individual documents (_instances_ of the model).

**Async Function**

- _async function (mainpassword) { ... }_: The method is defined as an asynchronous function that takes mainpassword as an argument. mainpassword represents the plain text password that needs to be compared with the stored hashed password.

**Password Comparison**

- _return await bcrypt.compare(mainpassword, this.password);_: This line uses bcrypt's compare function to compare the provided plain text password (**mainpassword**) with the hashed password stored in the current document (**this.password**). The compare function returns a promise that resolves to true if the passwords match and false otherwise.

## Password Verification in Node.js Using Bcrypt with Mongoose Instance Methods

In this section, we will focus on how to use the **comparePassword** instance method within the login logic of a Node.js application to securely verify user passwords using bcrypt.
Please note that the validation and error handling in this example are not the primary focus and are included for completeness.

```
const login = async (req, res) => {
  const { email, password } = req.body;

  // validation
  if (!email || !password) {
    throw new BadRequestError("Please provide email and password");
  }

  const userLogin = await UserModel.findOne({ email });

  if (!userLogin) {
    throw new UnauthenticatedError("Invalid credentials");
  }

  const isPasswordCorrect = await userLogin.comparePassword(password);

  if (!isPasswordCorrect) {
    throw new UnauthenticatedError("Invalid credentials");
  }

  const token = userLogin.createToken();
  res
    .status(StatusCodes.OK)
    .json({ user: { name: userLogin.getName() }, token });
};
```

### Explanation of comparePassword Usage

**Finding the User**

- The code first retrieves the user document from the database using `UserModel.findOne({ email })`.
- If the user is not found, it throws an error indicating invalid credentials.

**Comparing Passwords**

`const isPasswordCorrect = await userLogin.comparePassword(password);`

- This line uses the **comparePassword** instance method defined on the user schema.
- The method compares the provided plain text password (**password**) with the hashed password stored in the database (_userLogin.password_).
- _comparePassword_ uses bcrypt's compare function and returns true if the passwords match, or false otherwise.

**Handling Incorrect Passwords**

- If the password comparison fails (_!isPasswordCorrect_), an error is thrown indicating invalid credentials.

**Generating and Returning a Token**

- If the password comparison succeeds, a token is generated using _userLogin.createToken()_.
- The response includes the user name and the generated token.

## Conclusion

In this article, we explored how to use [bcrypt](https://www.npmjs.com/package/bcrypt) in a [Node.js](https://nodejs.org/en) application with [Mongoose](https://mongoosejs.com/) to securely hash and verify passwords.
We covered the installation of [bcrypt](https://www.npmjs.com/package/bcrypt), the implementation of password hashing using [Mongoose](https://mongoosejs.com/) pre save middleware, and the use of [Mongoose](https://mongoosejs.com/) instance methods for password comparison during login. By following these steps, you can enhance the security of your application's authentication system, ensuring that user passwords are properly protected.

![Happy Coding](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/54ytgodi9v7ydrwanpsu.jpg)
mbugua70
1,907,127
What is Azure Data Factory (ADF) Integration Runtime?
What is Azure Data Factory? Azure Data Factory (ADF) is a cloud-based data integration...
0
2024-07-01T04:42:48
https://dev.to/shivamchamoli18/what-is-azure-data-factory-adf-integration-runtime-3jd3
azure, azurecloud, azuredatafactory, infosectrain
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0po09ec1a4956ofq3lbn.jpg)

## **What is Azure Data Factory?**

Azure Data Factory (ADF) is a cloud-based data integration service provided by Microsoft Azure. It is designed to enable organizations to create, schedule, and manage data pipelines that can move data from various source systems to destination systems, transforming and processing it along the way.

## **What is Integration Runtime in Azure?**

Integration Runtime (IR) is the backbone of Azure Data Factory, providing the essential computational resources needed for data transfer operations and efficiently executing data-related tasks. In Azure Data Factory, a pipeline is constructed from a series of activities, each representing a specific action to be carried out. These actions can encompass tasks like data transfers or dispatching other actions within the pipeline.

There are three types of integration runtime:

➔ Azure IR
➔ Self-hosted
➔ Azure-SSIS

## **Integration Runtime in Azure Data Factory**

Azure Data Factory (ADF) Integration Runtime is a critical component of Microsoft Azure Data Factory, which is responsible for executing and managing data integration and transformation tasks. It provides the infrastructure and resources necessary to move, transform, and process data within ADF pipelines. Integration Runtime serves as the bridge between data sources and destinations, ensuring seamless data movement across various systems, whether they are on-premises, in the cloud, or in hybrid environments.

➔ **Facilitates Data Movement:** It enables data to flow between diverse data stores, such as databases, cloud storage, and files, supporting tasks like data migration, replication, and extraction.

➔ **Empowers Data Transformation:** Integration Runtime allows for data transformations, including cleaning, aggregating, and structuring data to meet specific business requirements.
➔ **Manages Orchestration:** It schedules and orchestrates the execution of activities within ADF pipelines, ensuring that data processing occurs in the desired order and timeframe.

➔ **Ensures Hybrid Compatibility:** ADF Integration Runtime is versatile and capable of handling data integration between on-premises and cloud-based systems, making it suitable for hybrid data architectures.

➔ **Integrates with Azure Services:** It seamlessly integrates with other Azure services, enhancing its capabilities and supporting a wide range of data integration scenarios.

➔ **Monitors and Secures Data Movement:** Integration Runtime provides monitoring and security features to track and safeguard data during its transfer and transformation.

## **Microsoft Azure with InfosecTrain**

[InfosecTrain](https://www.infosectrain.com/) is a leading provider of IT and security training and consulting services. We provide [AZ-204 Training](https://www.infosectrain.com/courses/developing-solutions-microsoft-azure-training/) courses designed to improve your cloud computing knowledge and skills. By learning from our seasoned industry experts, you can acquire the fundamental skills necessary to excel in this swiftly expanding field.
shivamchamoli18
1,907,126
GenAI Is Trying to Kill Creativity
But finally, humans are starting to push back A ‘joker in the pack’ is a person or thing that...
0
2024-07-01T04:42:18
https://dev.to/manojgohel/genai-is-trying-to-kill-creativity-2eof
genai, creativity, webdev, beginners
> But finally, humans are starting to push back

A ‘joker in the pack’ is a person or thing that could change a situation in an unexpected way. For Sam Altman and his fellow AI grifters, they are certainly changing the creative landscape — just not in the way they hoped.

This week, my feeds have been dominated by the Luma Dream Machine, another AI model that can create videos from text prompts. The results have been pretty awful. Somebody turned [classic memes into videos](https://x.com/hey_madni/status/1801900554488291414), somehow producing something worse than the videos the memes are based on. Or remember The Flash film that was widely ridiculed for its laughable CGI? It can rest easy now, as GenAI has shown it could have been [_a lot_ worse](https://x.com/blizaine/status/1801522377479885303).

It’s the same but different to OpenAI’s maybe-to-be-released model, [Sora](https://twitter.com/OpenAI/status/1758192957386342435). Like ChatGPT, you enter a prompt, and bam, you have a strange cat video where the [person’s hand is detached](https://twitter.com/tomwarren/status/1758203473881956689?ref=wheresyoured.at). Put in another prompt, and wham, you have a woman walking down the street with her legs doing all [sorts of weird skips](https://twitter.com/OpenAI/status/1758192965703647443?s=20&ref=wheresyoured.at). Put in another prompt, and bingo, you have a video of a couple walking in the snow while everyone around them [disappears into thin air](https://twitter.com/OpenAI/status/1758192957386342435?s=20&ref=wheresyoured.at).

Despite people claiming to be blown away by these developments, let’s get serious for a second — it’s _barely_ impressive-ish. It’s just serviceable enough until your brain snaps out of the trance, and you enter the uncanny valley.

I don’t get it. Is this sort of Gen AI even useful? If you want to produce videos that are kinda real for a split second until the facade falls apart, then sure.
Even if/when it improves, a more important question remains — do we need it? I like the way [Brian Merchant](https://open.substack.com/users/934423-brian-merchant?utm_source=mentions) puts it:

> “\[As\] the limitations of generative AI become painfully clear, as the companies responsible for it become more ethically compromised: What is the AI-generated variety for? People generally prefer humans in customer service over AI and automated systems. AI art is widely maligned online; teens have taken to disparaging it as “Boomer art.” AI doesn’t offer better products, necessarily: It just offers more, and for less money.”

The AI companies know this. That’s why we’re seeing their continued efforts to push away from human creativity. (To clarify: I don’t consider typing in prompts as being creative.) Rather than turn to a professional or a passionate creator who has invested their life in videography (or photography, art, or writing, for that matter) and can offer a wealth of deep knowledge and understanding of a subject based on _personal experience_, Sam Altman and his merry men want us to turn to “democratized” technology, so we can mindlessly produce the output ourselves, only a more procedural, human-lite version.

Always consider the motive. Take OpenAI. Does it really give a shit about how many people can make videos? No. The reality is that it needs a new product to shill as interest and usage in ChatGPT [begins to wane](https://www.reuters.com/technology/booming-traffic-openais-chatgpt-posts-first-ever-monthly-dip-june-similarweb-2023-07-05/), just at the point Sam Altman has moved the company to being for-profit. It needs more products, more users, and more money. It’s why they signed the terrible deal with Apple, in which Apple isn’t paying a cent for its users to use ChatGPT — and only if they opt-in every time to do so (the deal will actually cost OpenAI money).
It’s desperate to spread wide and broad in any way it can, seemingly at the expense of good business acumen.

**It’s clear that the bigger goal of AI companies is to turn creativity into a commodity**. They want to dismantle the very thing that makes creativity so unique — the talent, the dedication and the vision — and package it up in a shiny ChatGPT wrapper to sell back to us.

With the continued attempts of tech companies to force AI technology into everything around us while simultaneously selling us the distorted dream of living our lives in alternative realities through headsets, we’re beginning to move closer to a creative-less existence — a world where we plug ourselves into our headsets, lost in a fantasy land we’re told is better than base reality, achieving creativity with no effort, sitting in our own filth, drooling, waiting for Big Tech to feed us our next dopamine hit.

Okay, that’s a bit extreme (I hope?) But it’s not far off what some tech companies want to sell you with “democratized” technology. They don’t care about creativity. They don’t care what it means to be creative or the unparalleled personal growth and satisfaction of learning to do something that gets those creative juices flowing in your brain. They care about getting more and more people to use their products — and become dependent on using those products — because of money. They don’t care about the process of creativity; they care that you use their technology more than anyone else’s.

Returning to AI, I feel the pursuit has gone backward. Or, perhaps, it was backward from the start. I am tech-cynical, but you always hope new technologies deliver something that improves society as a whole. And then, it always goes the same way: the needs of society are pushed aside in pursuit of profit.

Ask yourself this: why are we letting AI companies dumb down creativity to the basic input of “_enter what you want to create”_?
Call me a traditionalist — or a delusionist — but to me, creativity shouldn’t be democratized. It shouldn’t be made available to everyone and anyone at the touch of a button or the typing of a prompt. Creativity is, by definition, finite and limited because not everyone has it in them to commit to the pursuit of mastering a creative endeavor. And that’s fine. If you don’t have the capacity, the skill, or the patience to take up a creative pursuit, that’s just how it is. Why should something that has taken others years to master become available to you at the typing of a prompt?

I already know that there will be those in the comments who believe they _are_ being creative by making AI stuff. But to me, creative people are those who dedicate themselves to a craft. And it’s becoming apparent that these two camps can’t co-exist.

We’re only really a year into the AI trend, and already, it’s leading to a movement of sorts — another divide in the creator spectrum. The anti-LLM stance is gaining momentum, with more and more creators, media sites and brands adopting a zero-use approach or branding themselves as human-first. Actors pushed back. Writers are demanding platforms remove AI work and prevent AI bots from training off their content. Music artists are following, with over [200 calling for protections](https://pitchfork.com/news/billie-eilish-rem-kacey-musgraves-more-sign-open-letter-warning-of-ai-infringement-on-artists-rights/) against the “predatory use of AI” that “infringes upon and devalues the rights of artists.”

Sound familiar? The audience is going to split down this line, too, choosing to support brands that do or don’t use AI. Before you know it, you’ll have AI companies that are hungry for profit trying to grow in an ever-shrinking market. Hype and bubbles will only carry you so far — if the audience turns against you, it’s game over.
As I said in the intro, Sam Altman thinks he and his fellow AI shillers are the joker in the pack, leading the revolution in creative output at the expense of creativity and creatives who give so much to their craft. It seems he’s started the revolution all right — just not the one he was counting on.
manojgohel
1,907,125
Piles treatment in kochi
Say goodbye to the discomfort of piles and regain your quality of life with the best piles treatment...
0
2024-07-01T04:39:06
https://dev.to/vichu_9036b15e01af17d684a/piles-treatment-in-kochi-nna
piles, kochi, mykarehealth
Say goodbye to the discomfort of piles and regain your quality of life with the best [piles treatment in kochi](https://mykarehealth.com/kochi/proctology-treatment/piles-surgery). Book your consultation with us today and take the first step towards a healthier, happier you. Using advanced technology, our treatment reduces pain and discomfort with minimal recovery time, making it a great option for anyone looking for effective relief from piles. We provide personalised care, right from consultation till recovery. Meet with our skilled surgeons for a stress-free, highly advanced piles surgery at affordable prices. Choose Mykare Health for your healthcare journey.
vichu_9036b15e01af17d684a
1,907,124
Explaining Decorators in Django: A Guide for Beginners
Learn how decorators in Django can streamline your code, enhance security, and improve...
0
2024-07-01T04:37:49
https://dev.to/ismailsoftdev/explaining-decorators-in-django-a-guide-for-beginners-9gl
django, webdev, python
Learn how decorators in Django can streamline your code, enhance security, and improve maintainability by adding reusable functionality to your views. ## **1. Introduction to Decorators** Understand how decorators in Python modify function behavior, laying the groundwork for their powerful application in Django. ### **1.1. What Are Decorators in Python?** Decorators in Python are a powerful tool that allows you to modify the behavior of a function or class. They provide a simple syntax for calling higher-order functions and are often used to add functionality to existing code in a clean and readable way. ### **1.2. Why Decorators Are Useful in Django Development** In Django, decorators are particularly useful because they allow you to manage access control, perform checks, and handle repetitive tasks across multiple views, enhancing code reusability and readability. ## **2. Basic Python Decorators** Explore the fundamental structure and usage of decorators in native Python, setting the stage for their practical implementation in Django. ### **2.1. How Decorators Work in Python** At their core, decorators are functions that wrap another function to extend its behavior. Here’s a quick overview of their structure: ```python def my_decorator(func): def wrapper(*args, **kwargs): print("Something is happening before the function is called.") result = func(*args, **kwargs) print("Something is happening after the function is called.") return result return wrapper ``` ### **2.2. Example of Simple Decorators in Python** Here’s a simple example of using a decorator to print messages before and after a function call: ```python @my_decorator def say_hello(): print("Hello!") # call the function say_hello() ``` ## **3. Understanding Django View Functions** Delve into Django’s view functions and their pivotal role in handling HTTP requests and generating appropriate responses. ### **3.1. 
Django View Functions: Handling HTTP Requests** Django view functions are Python functions that take a web request and return a web response. They are the cornerstone of Django’s web handling capabilities, responsible for processing user input, interacting with the database, and returning the appropriate output. ### **3.2. Generating HTTP Responses with Django View Functions** When a view function processes a request, it generates an HTTP response. This response can be an HTML page, a JSON object, a redirect, or any other valid HTTP response. ## **4. Introduction to Decorators in Django** Discover how decorators in Django can efficiently manage access control, security checks, and other cross-cutting concerns within your application. ### **4.1. Using Decorators in Django** In Django, decorators are commonly used to modify the behavior of view functions. They help streamline code by managing access control, ensuring security, and handling other cross-cutting concerns. ### **4.2. The @decorator Syntax in Django** Django decorators use the `@decorator` syntax, which makes it easy to apply them to view functions. This syntax is concise and keeps the codebase clean and maintainable. ## **5. Common Built-in Decorators in Django** Explore essential Django decorators like `@login_required`, `@permission_required`, and others, optimizing security and user access management. ### **5.1. @login_required: Ensuring Authenticated Access** The `@login_required` decorator restricts access to a view to authenticated users only. If a user is not logged in, they are redirected to the login page. ```python from django.contrib.auth.decorators import login_required @login_required def my_view(request): pass ``` ### **5.2. @permission_required: Restricting Access Based on Permissions** The `@permission_required` decorator restricts access based on user permissions. It ensures that only users with the specified permissions can access the view. 
```python from django.contrib.auth.decorators import permission_required @permission_required('app_name.permission_codename') def my_view(request): pass ``` ### **5.3. @csrf_protect: Securing Against CSRF Attacks** The `@csrf_protect` decorator adds protection against Cross-Site Request Forgery (CSRF) attacks by ensuring that POST requests contain a valid CSRF token. ```python from django.views.decorators.csrf import csrf_protect @csrf_protect def my_view(request): pass ``` ### **5.4. @require_http_methods: Specifying Allowed HTTP Methods** The `@require_http_methods` decorator restricts a view to handle only specified HTTP methods, such as GET or POST. ```python from django.views.decorators.http import require_http_methods @require_http_methods(["GET", "POST"]) def my_view(request): pass ``` ## **6. Creating Custom Decorators** Learn how to craft custom decorators in Django to encapsulate specific business logic and enforce application-specific rules. ### **6.1. How to Create Custom Decorators in Django** Creating custom decorators in Django involves defining a function that returns a wrapper function. This wrapper function contains the additional functionality you want to apply to your view. ### **6.2. Example: Creating a @staff_required Decorator** Here’s an example of a custom decorator that restricts access to staff members only: ```python from django.http import HttpResponseForbidden def staff_required(view_func): def _wrapped_view(request, *args, **kwargs): if not request.user.is_staff: return HttpResponseForbidden("You do not have permission to view this page.") return view_func(request, *args, **kwargs) return _wrapped_view ``` ## **7. Chaining Decorators** Master the art of combining multiple decorators to apply layered functionality, ensuring comprehensive and efficient view management in Django. ### **7.1. How to Chain Decorators** Decorators can be chained to apply multiple layers of functionality to a single view. 
Chaining decorators allows you to combine their effects seamlessly. ### **7.2. Example: Applying @login_required and @permission_required** Here’s an example of chaining the `@login_required` and `@permission_required` decorators: ```python from django.contrib.auth.decorators import login_required, permission_required @login_required @permission_required('app_name.permission_codename') def my_view(request): pass ``` ### **Best Practices and Tips** Implementing best practices and effective tips ensures that you use decorators in Django to their fullest potential, maintaining code readability, organization, and performance. **Tips for Using Decorators Effectively:** - Use built-in decorators whenever possible to take advantage of Django’s optimized solutions. - Keep your custom decorators simple and focused on a single task. **Best Practices for Organizing and Naming Decorators:** - Store custom decorators in a separate module, such as `decorators.py`, for better organization. - Use clear and descriptive names for your decorators to indicate their purpose. ### **Performance Considerations** - Be mindful of the performance impact of multiple decorators. Each decorator adds a layer of processing to your view. - Test your views to ensure that the added decorators do not significantly slow down response times. Embracing decorators in Django empowers you to enhance your applications with robust functionality while maintaining clarity and efficiency in code. By leveraging built-in decorators and creating custom ones tailored to specific needs, you can achieve better access control, improved security measures, and streamlined development processes. Mastering decorators not only boosts the functionality of your Django projects but also fosters a more structured and maintainable codebase, ensuring long-term scalability and reliability.
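The chaining and custom-decorator patterns above can be tried without a running Django project. The sketch below is framework-free: a hypothetical `require_role` decorator stands in for `@permission_required`, plain strings stand in for `HttpResponse` objects, and `functools.wraps` preserves the wrapped view's metadata just as well-behaved Django decorators do:

```python
import functools

def require_role(role):
    """Parametrized decorator: reject the call unless the user has `role`."""
    def decorator(func):
        @functools.wraps(func)  # preserve the wrapped view's name and docstring
        def wrapper(user, *args, **kwargs):
            if role not in user.get("roles", ()):
                return "403 Forbidden"
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

def log_calls(func):
    """Cross-cutting concern: record the name of every call made."""
    calls = []
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        calls.append(func.__name__)
        return func(*args, **kwargs)
    wrapper.calls = calls
    return wrapper

@log_calls               # outermost decorator runs first on the way in
@require_role("staff")   # innermost guard runs just before the view
def dashboard(user):
    return f"Hello, {user['name']}"

print(dashboard({"name": "Ada", "roles": ["staff"]}))  # Hello, Ada
print(dashboard({"name": "Bob", "roles": []}))         # 403 Forbidden
print(dashboard.calls)                                 # ['dashboard', 'dashboard']
```

As with stacking `@login_required` over `@permission_required`, the topmost decorator wraps everything beneath it, so its logic runs first on each call.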
ismailsoftdev
1,907,123
Demystifying Concurrency: Exploring Multithreading vs. Multiprocessing in Python
In the fast-paced world of programming, efficiency is paramount. Python empowers developers with...
0
2024-07-01T04:36:54
https://dev.to/epakconsultant/demystifying-concurrency-exploring-multithreading-vs-multiprocessing-in-python-3cb6
python
In the fast-paced world of programming, efficiency is paramount. Python empowers developers with various techniques to achieve concurrency, where multiple tasks appear to execute simultaneously. This article delves into two prominent approaches: multithreading and multiprocessing, guiding you through their strengths, weaknesses, and ideal use cases. Understanding Concurrency: Imagine a chef preparing a meal. While boiling pasta, they can chop vegetables (concurrency). In programming, concurrency allows your application to handle multiple tasks seemingly at once, potentially improving performance and responsiveness. 1. Multithreading: - Concept: Spawns multiple threads within a single process. Threads share the same memory space and resources (CPU, memory) but have their own execution stack. - Benefits: 1. Lightweight: Threads are less resource-intensive to create and manage compared to processes. 2. Fast Context Switching: Switching between threads within the same process is efficient. 3. Shared Memory Access: Threads can directly access and modify shared data structures. - Drawbacks: 1. Global Interpreter Lock (GIL): Python's GIL restricts true parallel execution of CPU-bound tasks. Only one thread can execute Python bytecode at a time, potentially negating performance benefits for CPU-intensive operations. 2. Race Conditions: Since threads share data, careful synchronization is required to prevent data corruption when multiple threads attempt to access or modify the same data simultaneously. 2. Multiprocessing: - Concept: Creates multiple separate processes. Each process has its own memory space, resources, and execution stack. Processes communicate through inter-process communication (IPC) mechanisms. - Benefits: 1. True Parallelism: Multiple processes can genuinely execute CPU-bound tasks in parallel, leveraging the capabilities of multi-core processors. 2.
No GIL Limitation: The GIL doesn't restrict parallel execution of CPU-bound tasks within separate processes. - Drawbacks: 1. Heavyweight: Creating and managing processes is more resource-intensive compared to threads. 2. Slower Context Switching: Switching between processes involves more overhead compared to threads. 3. Shared Memory Access: Processes cannot directly access each other's memory space. Data exchange requires explicit IPC mechanisms. Choosing the Right Approach: The optimal approach depends on your application's needs: - I/O-Bound Tasks: For tasks involving significant waiting (e.g., network requests, file I/O), multithreading can be beneficial. Threads can efficiently manage waiting periods while keeping the application responsive. - CPU-Bound Tasks: For computationally intensive tasks (e.g., scientific calculations, image processing), multiprocessing shines. Separate processes can leverage multiple cores for true parallel execution. - Shared Data Considerations: If your tasks involve extensive data sharing, multithreading might be simpler due to direct memory access. However, prioritize robust synchronization mechanisms to avoid race conditions. [Flutter Mobile App Development: A Beginner's Guide to Creating Your First App](https://www.amazon.com/dp/B0CTHQ9YGB) Python Libraries: Python offers libraries for both multithreading and multiprocessing: - Multithreading: The built-in threading module provides functionalities for creating, managing, and synchronizing threads. - Multiprocessing: The multiprocessing module offers tools for creating and managing processes, along with functionalities for inter-process communication. Synchronization Techniques: When using multithreading, proper synchronization is crucial: - Locks (Mutexes): Ensure only one thread can access a critical section of code (e.g., modifying shared data) at a time. - Semaphores: Control access to a limited pool of resources, preventing overconsumption. 
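The race condition and lock-based synchronization described above can be demonstrated with nothing but the standard library. Four threads increment a shared counter; the `Lock` serializes each read-modify-write, so no updates are lost:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:  # only one thread at a time may do the read-modify-write
            counter += 1

# Four threads, 10,000 increments each
threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 every time, because the lock prevents lost updates
```

Removing the `with lock:` line can make the final count nondeterministic: the GIL guarantees only one thread executes bytecode at a time, not that a multi-bytecode operation like `counter += 1` is atomic.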
Conclusion: Multithreading and multiprocessing provide powerful tools for achieving concurrency in Python applications. Understanding their strengths, weaknesses, and ideal use cases empowers you to make informed decisions for your specific programming endeavors. Remember, effective use of concurrency involves careful planning, considering factors like task parallelism, shared memory access, and proper synchronization techniques. By leveraging these concepts effectively, you can craft applications that are efficient, responsive, and well-suited to handle demanding workloads.
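For CPU-bound work, the standard library's `ProcessPoolExecutor` sidesteps the GIL by farming tasks out to separate processes. The prime-counting function below is a hypothetical stand-in for any heavy computation:

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """Naive CPU-bound workload: count the primes below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limits = [10_000, 20_000, 30_000, 40_000]
    # Each task runs in its own process with its own interpreter and GIL
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(count_primes, limits))
    print(results)  # [1229, 2262, 3245, 4203]
```

The `if __name__ == "__main__":` guard matters: on platforms that spawn rather than fork, worker processes re-import the module, and the guard prevents them from recursively starting pools of their own.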
epakconsultant
1,907,122
🔐Never Forget a Password Again: Build Your Own Secure Manager
Hello everyone! Today, I'm excited to present my new weekly project: a password manager designed to...
0
2024-07-01T04:35:33
https://dev.to/brokarim/never-forget-a-password-again-build-your-own-secure-manager-i34
react, mysql, node, express
Hello everyone! Today, I'm excited to present my new weekly project: a password manager designed to solve the common challenges we all face when trying to keep track of our passwords. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rytxyrb31ug0thfxeixy.gif) With this password manager, you can securely save all your passwords without the fear of them being exposed or forgotten. This password manager, built with ReactJS for the frontend, MySQL for the database, and ExpressJS for the server, provides a secure and user-friendly environment for users to store and manage their credentials. It offers convenient features like adding, editing, and deleting password entries. Users can show or hide their passwords by simply hovering over the password card. Additionally, when users add a platform name, our system automatically recognizes it and displays the platform logo, making it easy to identify your accounts at a glance. Behind the scenes, when you add a password, our app encrypts it using advanced encryption algorithms to ensure it stays secure. This encryption involves a unique key and initialization vector, making sure your passwords are well protected. The encrypted password is then sent to our secure backend server and stored in a database. 🔥This project will have several parts because I want to add other features such as login, search bar, edit, and delete, so stay tuned for more ...👌🏻👋 Demo : [Instagram](https://www.instagram.com/p/C81OlCXvqg8/) Source Code : [Github](https://github.com/BroKarim-Project/MyPass)
brokarim
1,907,120
Creating a Developer Content Strategy
When I first started to write content, I thought that each time you create something, it has to be...
24,582
2024-07-01T04:34:30
https://dev.to/jacobandrewsky/creating-a-developer-content-strategy-1nlh
devrel, javascript, programming, beginners
When I first started to write content, I thought that each time you create something, it has to be unique. For example, if you write an article, then if you would like to record a video, it should be something completely different. In fact, it could be better to choose one topic that can be used in several channels so that all interest groups can actually benefit from what you are creating. It does not mean that you should stick to one topic for the next 5 or more content types. It is more about the fact that there are topics that can easily be presented in many different forms to satisfy different people. Some of the people I know prefer to learn from video tutorials, while others prefer documentation. And I could probably name even more types, which is especially important for us Content Creators to keep in mind if we want to grow a big audience. ## Building Headless Commerce with Nuxt 3, Shopify, and TailwindCSS Let's take a look at my topic about `Building Headless Commerce with Nuxt 3, Shopify, and TailwindCSS`. It is based on my first ever video tutorial: {% youtube QK6wPHFiRyM %} I wanted to cover this topic as I thought that building an E-Commerce website with these technologies is really efficient and the experience is just great. So, there is the video. After that, I have created a GitHub Repository so that you can fork the project and work on it on your own. You can check it out [here](https://github.com/Baroshem/nuxt-shopify-tailwind) ![Nuxt 3, Shopify, Tailwind GitHub Repository](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y79aybyglkgvqdm28nog.png) Next, I decided to write a short blog post about it. It was not much, as the whole process of building the headless commerce is explained in the video, but it is still content that can target a different type of audience.
You can check it out [here](https://dev.to/jacobandrewsky/building-headless-commerce-with-nuxt-3-shopify-and-tailwindcss-293c) ![Nuxt 3, Shopify, Tailwind Dev.to article](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lhbeign1wkfztwzlkf1x.png) Furthermore, I submitted a Call for Papers proposal for Vue.js Germany with this topic and managed to be accepted! In this talk, I will be talking about how easy it is to build an E-Commerce with Nuxt 3 from scratch. You can see more details about the conference [here](https://conf.vuejs.de/) ![Vue Germany Conference](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cork4prrlgkwbn07tuji.png) ## Algolia Module for Nuxt 3 I am really happy to say that I am a main contributor and maintainer behind the Algolia module for Nuxt 3 that you can check out [here](https://github.com/nuxt-community/algolia-module). ![Algolia module for Nuxt 3 code repository](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g0gbthq8utjq148i72b9.png) Then, I wrote an article about it so that people who prefer to read could easily get started. You can check it out [here](https://dev.to/jacobandrewsky/how-to-add-algolia-search-to-nuxt-3-3o) ![Article in Dev.to about Algolia module for Nuxt 3](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iovnia9j8zgqlthtfbd8.png) After that, I have recorded a video on how you can use this module to add Algolia to your Headless Commerce. {% youtube GGyp0Rqsxb8 %} And finally, I gave a talk at Algolia Search Party about the usage of the module in both Nuxt 3 and Vue Storefront 2. {% youtube baGZ3JY8bnU %} ## Summary For people who would like to start creating Developer content, I would like to give this advice: you do not have to create unique content each time you want to share something. At first you might have several interesting ideas to share, but as time passes, the number of topics will shrink and you may find it difficult to create something unique.
And I can assure you, even if you create content that gets millions of views, there will still be people who have not seen it, so your talk, video, or article will be unique for them :)
jacobandrewsky
1,907,121
Tractorscope - The developer’s data visualization tool
Tractorscope is the modern SQL editing and data visualization platform built by engineers and...
0
2024-07-01T04:33:21
https://dev.to/tractorscope/tractorscope-the-developers-data-visualization-tool-13n6
analytics, database, dashboards, sql
Tractorscope is the modern SQL editing and data visualization platform built by engineers and designers for developers. Embed analytics into your apps or websites with just a few lines of code, and save hundreds of hours of development time. [https://tractorscope.com](https://tractorscope.com)
tractorscope
1,907,118
"Yoga For Life" by wix studio
A post by Sharmila kannan
0
2024-07-01T04:31:18
https://dev.to/sharmi2020/yoga-for-life-by-wix-studio-36ia
devchallenge, wixstudiochallenge, webdev, javascript
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qpwg65u2k8ro4qcojio5.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8imvnowg628i579ueh4q.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kxjzt84lb78zmdvdp2os.png)
sharmi2020
1,907,117
Building a Dynamic Work Budget Feature with React and NodeJS
Building a Dynamic Work Budget Feature with React and NodeJS
0
2024-07-01T04:30:57
https://radzion.com/blog/work-budget
react, node, typescript, ui
{% embed https://youtu.be/j4ndqIk8SRk %} 🐙 [GitHub](https://github.com/radzionc/radzionkit) | 🎮 [Demo](https://increaser.org) ### Introducing the Work Budget Feature in Increaser In today's article, we're going to build an exciting new feature for the productivity app, [Increaser](https://increaser.org). We're introducing a "Work Budget" feature that allows you to set weekly work targets, review previous weeks' work hours, and track average work durations for weekdays and weekends. This tool provides real-time updates on your current week's progress, making it an invaluable resource for monitoring your work habits and discovering strategies to boost your productivity by working smarter, not harder. The front end of this feature is developed using React, while the back end is powered by NodeJS and DynamoDB. Although the [Increaser](https://increaser.org) source code is private, all reusable components and utilities are available through the [RadzionKit](https://github.com/radzionc/radzionkit) repository. ![Increaser Work Budget](https://cdn-images-1.medium.com/proxy/1*6r-p09h3dpxdIR9D7bWyCg.jpeg) Our feature is thoughtfully divided into two main parts. On the left side, you can adjust your work budget using sliders, which provide an immediate preview on a bar chart to visualize what your week might look like. On the right side, a detailed report is segmented into three sections. The first section compares your total hours worked this week to your preset budget. The second section displays the average workday and weekend hours over the last 30 days, using colored days for visual representation. The final section shows a bar chart of your work hours from the past four weeks. 
```tsx import { FixedWidthContent } from "@increaser/app/components/reusable/fixed-width-content" import { PageTitle } from "@increaser/app/ui/PageTitle" import { Page } from "@lib/next-ui/Page" import { UserStateOnly } from "../user/state/UserStateOnly" import { ManageWorkBudget } from "./ManageWorkBudget" import { UniformColumnGrid } from "@lib/ui/layout/UniformColumnGrid" import { WorkBudgetReport } from "./WorkBudgetReport" const title = "Work Budget" export const WorkBudgetPage: Page = () => { return ( <FixedWidthContent> <PageTitle documentTitle={`👍 ${title}`} title={title} /> <UserStateOnly> <UniformColumnGrid style={{ alignItems: "start" }} fullWidth minChildrenWidth={320} gap={40} > <ManageWorkBudget /> <WorkBudgetReport /> </UniformColumnGrid> </UserStateOnly> </FixedWidthContent> ) } ``` ### Design and Structure of the Work Budget Interface To ensure these components are evenly distributed across the interface, we are utilizing the UniformColumnGrid component from [RadzionKit](https://github.com/radzionc/radzionkit). By setting the `minChildrenWidth` attribute, we ensure that the layout remains aesthetically pleasing and functional on mobile screens, adapting to a single column layout as necessary.
```tsx import { VStack } from "@lib/ui/layout/Stack" import { useAssertUserState } from "@increaser/ui/user/UserStateContext" import { useTheme } from "styled-components" import { WorkBudgetInput } from "@increaser/ui/workBudget/WorkBudgetInput" import { getWorkdayColor } from "@increaser/ui/workBudget/getWorkdayColor" import { getWeekendColor } from "@increaser/ui/workBudget/getWeekendColor" import { useUpdateUserMutation } from "../user/mutations/useUpdateUserMutation" import { BarChart } from "@lib/ui/charts/BarChart" import { Text } from "@lib/ui/text" import { getShortWeekday } from "@lib/utils/time" import { formatDuration } from "@lib/utils/time/formatDuration" import { SectionTitle } from "@lib/ui/text/SectionTitle" import { Panel } from "@lib/ui/panel/Panel" import { InputDebounce } from "@lib/ui/inputs/InputDebounce" import { getWorkBudgetTotal } from "@increaser/entities-utils/workBudget/getWorkBudgetTotal" import { workdaysNumber } from "@lib/utils/time/workweek" import { useDaysBudget } from "@increaser/ui/workBudget/hooks/useDaysBudget" export const ManageWorkBudget = () => { const { workdayHours, weekendHours } = useAssertUserState() const { mutate: updateUser } = useUpdateUserMutation() const theme = useTheme() const workBudgetTotal = getWorkBudgetTotal({ workdayHours, weekendHours, }) const formattedWorkdBudgetTotal = formatDuration(workBudgetTotal, "h", { maxUnit: "h", kind: "long", }) const daysBudget = useDaysBudget() return ( <Panel> <VStack gap={20}> <SectionTitle> My preference ~ {formattedWorkdBudgetTotal} / week </SectionTitle> <VStack gap={40}> <VStack gap={28}> <InputDebounce value={workdayHours} onChange={(workdayHours) => updateUser({ workdayHours })} render={({ value, onChange }) => ( <WorkBudgetInput value={value} onChange={onChange} color={getWorkdayColor(theme)} name="Workday" /> )} /> <InputDebounce value={weekendHours} onChange={(weekendHours) => updateUser({ weekendHours })} render={({ value, onChange }) => ( <WorkBudgetInput 
value={value} onChange={onChange} color={getWeekendColor(theme)} name="Weekend" /> )} /> </VStack> <BarChart height={160} items={daysBudget.map((value, index) => { const color = index < workdaysNumber ? getWorkdayColor(theme) : getWeekendColor(theme) return { value, label: <Text>{getShortWeekday(index)}</Text>, color, renderValue: value > 0 ? () => ( <Text> {formatDuration(value, "min", { maxUnit: "h" })} </Text> ) : undefined, } })} /> </VStack> </VStack> </Panel> ) } ``` We encapsulate both sections in a `Panel` component from [RadzionKit](https://github.com/radzionc/radzionkit) for structured layout management. To visually separate them, we assign different 'kind' properties to each. The report section is set with a transparent background, effectively creating a subtle contrast that enhances the overall clarity of the interface. ```tsx import styled, { css } from "styled-components" import { toSizeUnit } from "../css/toSizeUnit" import { getColor } from "../theme/getters" import { match } from "@lib/utils/match" import { borderRadius } from "../css/borderRadius" type PanelKind = "regular" | "secondary" export interface PanelProps { width?: React.CSSProperties["width"] padding?: React.CSSProperties["padding"] direction?: React.CSSProperties["flexDirection"] kind?: PanelKind withSections?: boolean } export const panelDefaultPadding = 20 const panelPaddingCSS = css<{ padding?: React.CSSProperties["padding"] }>` padding: ${({ padding }) => toSizeUnit(padding || panelDefaultPadding)}; ` export const Panel = styled.div<PanelProps>` ${borderRadius.m}; width: ${({ width }) => (width ? toSizeUnit(width) : undefined)}; overflow: hidden; ${({ withSections, direction = "column", kind = "regular", theme }) => { const contentBackground = match(kind, { secondary: () => theme.colors.background.toCssValue(), regular: () => theme.colors.mist.toCssValue(), }) const contentCSS = css` ${panelPaddingCSS} background: ${contentBackground}; ` return withSections ? 
css` display: flex; flex-direction: ${direction}; ${kind === "secondary" ? css` background: ${getColor("mist")}; gap: 2px; ` : css` gap: 1px; `} > * { ${contentCSS} } ` : contentCSS }} ${({ kind }) => kind === "secondary" && css` border: 2px solid ${getColor("mist")}; `} ` ``` At the top of the `ManageWorkBudget` component, we display a title that includes the work budget selected by the user. To convert minutes into a readable time format, we utilize the `formatDuration` utility from [RadzionKit](https://github.com/radzionc/radzionkit). ```tsx import { convertDuration } from "./convertDuration" import { pluralize } from "../pluralize" import { durationUnitName, DurationUnit, durationUnits } from "./DurationUnit" import { match } from "../match" import { padWithZero } from "../padWithZero" import { isEmpty } from "../array/isEmpty" type FormatDurationKind = "short" | "long" | "digitalClock" interface FormatDurationOptions { maxUnit?: DurationUnit minUnit?: DurationUnit kind?: FormatDurationKind } export const formatDuration = ( duration: number, durationUnit: DurationUnit, options: FormatDurationOptions = {} ) => { if (duration < 0) { return formatDuration(Math.abs(duration), durationUnit, options) } const kind = options.kind ?? "short" const maxUnit = options.maxUnit || "d" const minUnit = options.minUnit || "min" const maxUnitIndex = durationUnits.indexOf(maxUnit) const minUnitIndex = durationUnits.indexOf(minUnit) if (maxUnitIndex < minUnitIndex) { throw new Error("maxUnit must be greater than minUnit") } const units = durationUnits.slice(minUnitIndex, maxUnitIndex + 1).reverse() const result: string[] = [] units.forEach((unit, index) => { const convertedValue = convertDuration(duration, durationUnit, unit) const isLastUnit = index === units.length - 1 const wholeValue = isLastUnit ?
Math.round(convertedValue) : Math.floor(convertedValue) duration -= convertDuration(wholeValue, unit, durationUnit) if (wholeValue === 0) { if (kind === "digitalClock") { if (index < units.length - 2 && isEmpty(result)) { return } } else if (!isLastUnit || !isEmpty(result)) { return } } const value = match(kind, { short: () => `${wholeValue}${unit.slice(0, 1)}`, long: () => pluralize(wholeValue, durationUnitName[unit]), digitalClock: () => padWithZero(wholeValue), }) result.push(value) }) return result.join(kind === "digitalClock" ? ":" : " ") } ``` ### Customizing the WorkBudgetInput Slider for Enhanced Usability Most users are unlikely to track more than 10 hours per day, aligning with [Increaser](https://increaser.org)'s philosophy of not encouraging excessive work hours. To accommodate this, we use a slider component named `WorkBudgetInput`. This component accepts a value in hours, an `onChange` callback for real-time updates, a `name` attribute for labeling, and a `color` in HSLA format. For more details on HSLA color formatting, you can refer to [this article](https://radzion.com/blog/hsla-color). ```tsx import { HSLA } from "@lib/ui/colors/HSLA" import { InputContainer } from "@lib/ui/inputs/InputContainer" import { LabelText } from "@lib/ui/inputs/LabelText" import { InputProps } from "@lib/ui/props" import { SegmentedSlider } from "@lib/ui/inputs/Slider/SegmentedSlider" type WorkBudgetInputProps = InputProps<number> & { color: HSLA name: string } export const WorkBudgetInput = ({ value, onChange, color, name, }: WorkBudgetInputProps) => { return ( <InputContainer as="div"> <LabelText>{name}</LabelText> <SegmentedSlider max={10} value={value} onChange={onChange} color={color} /> </InputContainer> ) } ``` The `WorkBudgetInput` component acts primarily as a wrapper around the `SegmentedSlider` from [RadzionKit](https://github.com/radzionc/radzionkit). 
This variant of a slider is designed with clear segments, making it particularly suitable for scenarios with a relatively small range of values. Users can easily count the segments to gauge the value quickly. The `WorkBudgetInput` enhances this setup by adding a label to the slider, making it more user-friendly and informative. ```tsx import styled, { useTheme } from "styled-components" import { PressTracker } from "../../base/PressTracker" import { InputProps } from "../../props" import { interactive } from "../../css/interactive" import { centerContent } from "../../css/centerContent" import { toSizeUnit } from "../../css/toSizeUnit" import { defaultTransition } from "../../css/transition" import { getColor } from "../../theme/getters" import { InvisibleHTMLSlider } from "./InvisibleHtmlSlider" import { PositionAbsolutelyCenterVertically } from "../../layout/PositionAbsolutelyCenterVertically" import { toPercents } from "@lib/utils/toPercents" import { Center } from "../../layout/Center" import { range } from "@lib/utils/array/range" import { HSLA } from "../../colors/HSLA" import { UniformColumnGrid } from "../../layout/UniformColumnGrid" type SegmentedSliderProps = InputProps<number> & { max: number color: HSLA } const sliderConfig = { railHeight: 20, controlSize: 24, } const Control = styled.div` transition: outline ${defaultTransition}; outline: 4px solid transparent; width: 8px; height: ${toSizeUnit(sliderConfig.controlSize)}; background: ${getColor("contrast")}; border-radius: 2px; ` const Container = styled.label` width: 100%; height: ${toSizeUnit(sliderConfig.controlSize + 4)}; ${interactive}; ${centerContent}; position: relative; &:focus-within ${Control} { outline: 8px solid ${getColor("mistExtra")}; } &:hover ${Control} { outline-color: ${getColor("mist")}; } ` const Line = styled(UniformColumnGrid)` width: 100%; height: ${toSizeUnit(sliderConfig.railHeight)}; border-radius: 4px; position: relative; overflow: hidden; ` const Section = styled.div`` 
export const SegmentedSlider = ({ value, onChange, max, color, }: SegmentedSliderProps) => { const { colors } = useTheme() const xPosition = toPercents(value / max) return ( <PressTracker onChange={({ position }) => { if (position) { const newValue = Math.round(position.x * max) onChange(newValue) } }} render={({ props }) => ( <Container {...props}> <InvisibleHTMLSlider step={1} value={value} onChange={onChange} min={0} max={max} /> <Line gap={1}> {range(max).map((index) => ( <Section style={{ background: (index < value ? color : colors.mist ).toCssValue(), }} key={index} /> ))} </Line> <PositionAbsolutelyCenterVertically left={xPosition} fullHeight> <Center> <Control /> </Center> </PositionAbsolutelyCenterVertically> </Container> )} /> ) } ``` To construct this custom segmented slider, we utilize several components from [RadzionKit](https://github.com/radzionc/radzionkit). At the core is the `PressTracker` component, which accurately tracks the user's press position on the slider. For more in-depth information on `PressTracker`, you can refer to [this article](https://radzion.com/blog/press-tracker). Additionally, the `InvisibleHTMLSlider` component is employed to enable native keyboard interactions while remaining visually concealed. The visual segmentation of the slider is achieved using the `UniformColumnGrid` component. This component forms a CSS grid with a 1px gap between each section. Sections are dynamically colored based on the current value of the slider, creating a clear and intuitive visual representation of selected values. ### Optimizing Interaction and Data Visualization in the Work Budget Feature To efficiently manage slider interactions without overloading the server, we use the `InputDebounce` component from [RadzionKit](https://github.com/radzionc/radzionkit). This component delays the `onChange` callback until the user stops interacting with the slider for a specified interval, typically 300 milliseconds. 
This approach ensures server updates are only made after the user has finished adjusting, reducing unnecessary network traffic and enhancing responsiveness.

```tsx
import { ReactNode, useEffect, useState } from "react"
import { InputProps } from "../props"

type InputDebounceProps<T> = InputProps<T> & {
  render: (props: InputProps<T>) => ReactNode
  interval?: number
}

export function InputDebounce<T>({
  value,
  onChange,
  interval = 300,
  render,
}: InputDebounceProps<T>) {
  const [currentValue, setCurrentValue] = useState<T>(value)

  useEffect(() => {
    if (currentValue === value) return

    const timeout = setTimeout(() => {
      onChange(currentValue)
    }, interval)

    return () => clearTimeout(timeout)
  }, [currentValue, interval, onChange, value])

  return (
    <>
      {render({
        value: currentValue,
        onChange: setCurrentValue,
      })}
    </>
  )
}
```

Below the sliders, we display a bar chart with the seven days of the week, starting from Monday. Workday bars are filled with the same color as the workday slider, and weekend bars with the same color as the weekend slider, so it becomes clear to the user how changes in the sliders affect the overall work budget. To learn more about the `BarChart` implementation, you can refer to [this article](https://radzion.com/blog/bar-chart).
```tsx import { ReactNode } from "react" import styled from "styled-components" import { Spacer } from "../../layout/Spacer" import { HStack, VStack } from "../../layout/Stack" import { HSLA } from "../../colors/HSLA" import { toSizeUnit } from "../../css/toSizeUnit" import { Text } from "../../text" import { getColor } from "../../theme/getters" import { toPercents } from "@lib/utils/toPercents" import { centerContent } from "../../css/centerContent" import { transition } from "../../css/transition" export interface BarChartItem { label?: ReactNode value: number color: HSLA renderValue?: (value: number) => ReactNode } interface BarChartProps { items: BarChartItem[] height: React.CSSProperties["height"] expectedValueHeight?: React.CSSProperties["height"] expectedLabelHeight?: React.CSSProperties["height"] minBarWidth?: number } const barValueGap = "4px" const barLabelGap = "4px" const defaultLabelSize = 12 const Bar = styled.div` border-radius: 4px; width: 100%; ${transition}; ` const RelativeWrapper = styled.div` position: relative; ${centerContent}; ` export const BarPlaceholder = styled(Bar)` height: 2px; background: ${getColor("mist")}; ` const Value = styled(Text)` position: absolute; white-space: nowrap; line-height: 1; bottom: ${barValueGap}; color: ${getColor("textSupporting")}; ` const Label = styled(Value)` top: ${barLabelGap}; ` const Content = styled(HStack)` flex: 1; ` const Column = styled(VStack)` height: 100%; justify-content: end; flex: 1; ` export const BarChart = ({ items, height, expectedValueHeight = defaultLabelSize, expectedLabelHeight = defaultLabelSize, minBarWidth, }: BarChartProps) => { const maxValue = Math.max(...items.map((item) => item.value)) const hasLabels = items.some((item) => item.label) return ( <VStack style={{ height }}> <Spacer height={`calc(${toSizeUnit(expectedValueHeight)} + ${barValueGap})`} /> <Content gap={4}> {items.map(({ value, color, renderValue, label }, index) => { return ( <Column style={minBarWidth ? 
{ minWidth: minBarWidth } : undefined} key={index} > {renderValue && ( <RelativeWrapper> <Value style={{ fontSize: defaultLabelSize }} as="div"> {renderValue(value)} </Value> </RelativeWrapper> )} <Bar style={{ background: color.toCssValue(), height: value ? toPercents(value / maxValue) : "2px", }} /> {label && ( <RelativeWrapper> <Label style={{ fontSize: defaultLabelSize }} as="div"> {label} </Label> </RelativeWrapper> )} </Column> ) })} </Content> {hasLabels && ( <Spacer height={`calc(${toSizeUnit(expectedLabelHeight)} + ${barLabelGap})`} /> )} </VStack> ) } ``` ### Streamlining Data Updates and Optimistic UI Responses The work budget in our system comprises two fields: `workdayHours` and `weekendHours`. These are stored within the User entity in DynamoDB in a flat structure. This design choice simplifies the process of updating individual fields, allowing for more efficient and straightforward database operations. ```tsx export type WorkBudget = { workdayHours: number weekendHours: number } export type User = DayMoments & WorkBudget & { id: string email: string country?: CountryCode name?: string sets: Set[] registrationDate: number projects: Project[] habits: Record<string, Habit> tasks: Record<string, Task> freeTrialEnd: number isAnonymous: boolean appSumo?: AppSumo ignoreEmails?: boolean timeZone: number lastSyncedMonthEndedAt?: number lastSyncedWeekEndedAt?: number focusSounds: FocusSound[] updatedAt: number sumbittedHabitsAt?: number finishedOnboardingAt?: number subscription?: Subscription lifeTimeDeal?: LifeTimeDeal } ``` The front-end updates the `workdayHours` and `weekendHours` fields, along with other User fields, using the `useUpdateUserMutation` hook. This hook performs an optimistic update to the React state before sending the request to the server through the `updateUser` operation on the API. This method ensures a smooth and responsive user experience by reflecting changes immediately in the UI. 
For a deeper understanding of how to efficiently build backends within a monorepo, you can refer to [this article](https://radzion.com/blog/api). ```tsx import { User } from "@increaser/entities/User" import { useApi } from "@increaser/api-ui/hooks/useApi" import { useMutation } from "@tanstack/react-query" import { useUserState } from "@increaser/ui/user/UserStateContext" export const useUpdateUserMutation = () => { const api = useApi() const { updateState } = useUserState() return useMutation({ mutationFn: async (input: Partial<User>) => { updateState(input) return api.call("updateUser", input) }, }) } ``` ### Detailed Visualization and Layout Management in Work Budget Reporting With the work budget management configured, we can now turn our attention to the detailed three-section report, visually delineated using the `SeparatedByLine` component from [RadzionKit](https://github.com/radzionc/radzionkit) for clear separation. The first section, encapsulated within the `CurrentWeekVsBudget` component, displays two cumulative lines on a chart: a half-transparent line represents the expected work hours based on the budget from Monday to Sunday, and a solid line shows the actual work hours, corresponding to the current day of the week. Users can hover over the chart to view detailed stats for a specific day, with default stats presented for the current day, ensuring a cohesive and intuitive user experience. 
```tsx import { Panel } from "@lib/ui/panel/Panel" import { WorkBudgetDaysReport } from "./WorkBudgetDaysReport" import { WorkBudgetWeeksReport } from "./WorkBudgetWeeksReport" import { CurrentWeekVsBudget } from "./CurrentWeekVsBudget" import { SeparatedByLine } from "@lib/ui/layout/SeparatedByLine" export const WorkBudgetReport = () => { return ( <Panel kind="secondary"> <SeparatedByLine gap={40}> <CurrentWeekVsBudget /> <WorkBudgetDaysReport /> <WorkBudgetWeeksReport /> </SeparatedByLine> </Panel> ) } ``` To maintain consistency in titles across the page, we use the `SectionTitle` component in the `CurrentWeekVsBudget` component. The chart requires a fixed width, so we measure the width of the parent element using the `ElementSizeAware` component, which ensures the chart fits perfectly within its allocated space. You can learn more about how this component works in [this article](https://radzion.com/blog/measure). To improve the alignment further, we add a small spacer to the right of the chart, providing a balanced visual layout. ```tsx import { HStack, VStack } from "@lib/ui/layout/Stack" import { SectionTitle } from "@lib/ui/text/SectionTitle" import { ElementSizeAware } from "@lib/ui/base/ElementSizeAware" import { Spacer } from "@lib/ui/layout/Spacer" import { chartConfig } from "./config" import { ComparisonChart } from "./ComparisonChart" export const CurrentWeekVsBudget = () => { return ( <VStack gap={20}> <SectionTitle>Current week vs budget</SectionTitle> <HStack> <ElementSizeAware render={({ setElement, size }) => ( <VStack fullWidth gap={8} ref={setElement}> {size && <ComparisonChart width={size.width} />} </VStack> )} /> <Spacer width={chartConfig.expectedXLabelWidth / 2} /> </HStack> </VStack> ) } ``` ### Data Handling and Visualization Techniques in Work Budget Reporting Our `ComparisonChart` component leverages a reusable component designed for creating line charts. 
While we won’t delve into each component's specifics here, you can find a comprehensive guide on how to construct line charts without relying on external charting libraries in [this article](https://radzion.com/blog/linechart). ```tsx import { HStack, VStack } from "@lib/ui/layout/Stack" import { useState } from "react" import { useWeekday } from "@lib/ui/hooks/useWeekday" import { getLastItem } from "@lib/utils/array/getLastItem" import { ChartYAxis } from "@lib/ui/charts/ChartYAxis" import { Text } from "@lib/ui/text" import { formatDuration } from "@lib/utils/time/formatDuration" import { ChartHorizontalGridLines } from "@lib/ui/charts/ChartHorizontalGridLines" import { D_IN_WEEK } from "@lib/utils/time" import { Spacer } from "@lib/ui/layout/Spacer" import { HoverTracker } from "@lib/ui/base/HoverTracker" import { getClosestItemIndex } from "@lib/utils/math/getClosestItemIndex" import { useCurrentWeekVsBudgetColors } from "./useCurrentWeekVsBudgetColors" import { chartConfig } from "./config" import { useWorkBudgetData } from "./useWorkBudgetData" import { useWorkDoneData } from "./useWorkDoneData" import { normalizeDataArrays } from "@lib/utils/math/normalizeDataArrays" import { SelectedDayInfo } from "./SelectedDayInfo" import { WeekChartXAxis } from "./WeekChartXAxis" import { TakeWholeSpaceAbsolutely } from "@lib/ui/css/takeWholeSpaceAbsolutely" import { CurrentDayLine } from "./CurrentDayLine" import { ComparisonChartLines } from "./ComparisonChartLines" import { ComponentWithWidthProps } from "@lib/ui/props" export const ComparisonChart = ({ width }: ComponentWithWidthProps) => { const weekday = useWeekday() const colors = useCurrentWeekVsBudgetColors() const workBudgetData = useWorkBudgetData() const workDoneData = useWorkDoneData() const [selectedDataPoint, setSelectedDataPoint] = useState<number>(weekday) const yData = [workBudgetData[0], getLastItem(workBudgetData)] const normalized = normalizeDataArrays({ y: yData, workBudget: workBudgetData, 
workDone: workDoneData, }) const contentWidth = width - chartConfig.expectedYAxisLabelWidth return ( <> <HStack> <Spacer width={chartConfig.expectedYAxisLabelWidth} /> <SelectedDayInfo expectedValue={workBudgetData[selectedDataPoint]} doneValue={workDoneData[selectedDataPoint]} width={contentWidth} index={selectedDataPoint} /> </HStack> <HStack> <ChartYAxis expectedLabelWidth={chartConfig.expectedYAxisLabelWidth} renderLabel={(index) => ( <Text key={index} size={12} color="supporting"> {formatDuration(yData[index], "min", { maxUnit: "h", minUnit: "h", })} </Text> )} data={normalized.y} /> <VStack style={{ position: "relative", minHeight: chartConfig.chartHeight, }} fullWidth > <ChartHorizontalGridLines data={yData} /> <ComparisonChartLines value={[ { data: normalized.workBudget, color: colors.budget }, { data: normalized.workDone, color: colors.done }, ]} width={contentWidth} /> <HoverTracker onChange={({ position }) => { setSelectedDataPoint( position ? getClosestItemIndex(D_IN_WEEK, position.x) : weekday ) }} render={({ props }) => <TakeWholeSpaceAbsolutely {...props} />} /> <CurrentDayLine value={selectedDataPoint} /> </VStack> </HStack> <HStack> <Spacer width={chartConfig.expectedYAxisLabelWidth} /> <WeekChartXAxis value={selectedDataPoint} /> </HStack> </> ) } ``` Before displaying the report, we need to fetch data for both lines. The `useWorkBudgetData` hook retrieves a cumulative array of expected or budgeted work hours for each day of the week. To ensure consistency in time format across both lines, we convert the data to minutes using the `convertDuration` utility from [RadzionKit](https://github.com/radzionc/radzionkit). 
```tsx import { useDaysBudget } from "@increaser/ui/workBudget/hooks/useDaysBudget" import { cumulativeSum } from "@lib/utils/math/cumulativeSum" import { convertDuration } from "@lib/utils/time/convertDuration" export const useWorkBudgetData = () => { const daysBudget = useDaysBudget() return cumulativeSum(daysBudget).map((value) => convertDuration(value, "h", "min") ) } ``` User's tracked data is structured as an array of sets, each containing a project ID and start and end timestamps. To calculate the total work hours for each day, we employ the `useCurrentWeekMinutesWorkedByDay` hook, which iterates over these sets and tallies the total work hours for each day of the week. For those interested in a deeper dive into the time-tracking implementation at [Increaser](https://increaser.org), you can explore [this article](https://radzion.com/blog/report). ```tsx import { useCurrentWeekMinutesWorkedByDay } from "@increaser/ui/sets/hooks/useCurrentWeekMinutesWorkedByDay" import { cumulativeSum } from "@lib/utils/math/cumulativeSum" export const useWorkDoneData = () => { const days = useCurrentWeekMinutesWorkedByDay() return cumulativeSum(days) } ``` The `selectedDataPoint` represents the currently highlighted weekday, which defaults to the current weekday. We use the `HoverTracker` component to monitor the user's mouse position and update the `selectedDataPoint` state accordingly. To clearly indicate which day is selected, a vertical line is displayed on the chart using the `CurrentDayLine` component, and the corresponding weekday label on the X-axis is highlighted. 
```tsx import { toSizeUnit } from "@lib/ui/css/toSizeUnit" import { PositionAbsolutelyCenterVertically } from "@lib/ui/layout/PositionAbsolutelyCenterVertically" import { ComponentWithValueProps } from "@lib/ui/props" import { getColor } from "@lib/ui/theme/getters" import { D_IN_WEEK } from "@lib/utils/time" import { toPercents } from "@lib/utils/toPercents" import styled from "styled-components" const Line = styled.div` height: 100%; border-left: ${toSizeUnit(2)} dashed; color: ${getColor("mistExtra")}; ` export const CurrentDayLine = ({ value }: ComponentWithValueProps<number>) => ( <PositionAbsolutelyCenterVertically fullHeight style={{ pointerEvents: "none", }} left={toPercents(value / (D_IN_WEEK - 1))} > <Line /> </PositionAbsolutelyCenterVertically> ) ``` To accurately position Y-axis labels and align two line charts, we must normalize the data using the `normalizeDataArrays` utility from [RadzionKit](https://github.com/radzionc/radzionkit). This utility takes an object containing arrays of numbers and outputs the same object with normalized arrays. The normalization process entails finding the maximum and minimum values across the arrays, calculating the range, and then scaling each value to fit within a normalized range between 0 and 1. This ensures that all elements are properly aligned and displayed correctly on the chart. ```tsx export const normalizeDataArrays = <T extends Record<string, number[]>>( input: T ): T => { const values = Object.values(input).flat() const max = Math.max(...values) const min = Math.min(...values) const range = max - min return Object.fromEntries( Object.entries(input).map(([key, value]) => [ key, value.map((v) => (v - min) / range), ]) ) as T } ``` ### Enhancing User Understanding with Detailed Work Budget Visualization To assist users in setting a realistic work budget, the second section of the report displays the average work hours for each day of the week over the last 30 days. 
We differentiate workdays and weekends with distinct colors to simplify identification for the user. For visualizing this data, we employ the `BarChart` component once again, this time omitting the labels to maintain a clean and focused presentation.

```tsx
import { useAssertUserState } from "@increaser/ui/user/UserStateContext"
import { useStartOfDay } from "@lib/ui/hooks/useStartOfDay"
import { ShyInfoBlock } from "@lib/ui/info/ShyInfoBlock"
import { VStack } from "@lib/ui/layout/Stack"
import { Text } from "@lib/ui/text"
import { range } from "@lib/utils/array/range"
import { convertDuration } from "@lib/utils/time/convertDuration"
import { startOfDay } from "date-fns"
import { useMemo } from "react"
import { splitBy } from "@lib/utils/array/splitBy"
import { UniformColumnGrid } from "@lib/ui/layout/UniformColumnGrid"
import { AvgDay } from "./AvgDay"
import { BarChart } from "@lib/ui/charts/BarChart"
import { getWorkdayColor } from "@increaser/ui/workBudget/getWorkdayColor"
import { getWeekendColor } from "@increaser/ui/workBudget/getWeekendColor"
import { useTheme } from "styled-components"
import { isWorkday } from "@lib/utils/time/workweek"
import { getSetDuration } from "@increaser/entities-utils/set/getSetDuration"

const maxDays = 30
const minDays = 7

export const WorkBudgetDaysReport = () => {
  const todayStartedAt = useStartOfDay()
  const { sets } = useAssertUserState()
  const lastDayStartedAt = todayStartedAt - convertDuration(1, "d", "ms")
  const firstDayStartedAt = useMemo(() => {
    if (!sets.length) return todayStartedAt

    const firstSetDayStartedAt = startOfDay(sets[0].start).getTime()

    return Math.max(
      lastDayStartedAt - maxDays * convertDuration(1, "d", "ms"),
      firstSetDayStartedAt
    )
  }, [lastDayStartedAt, sets, todayStartedAt])
  const days = Math.round(
    (lastDayStartedAt - firstDayStartedAt) / convertDuration(1, "d", "ms")
  )

  const totals = useMemo(() => {
    const result = range(days).map(() => 0)
    sets.forEach((set) => {
      const setDayStartedAt =
startOfDay(set.start).getTime() const dayIndex = Math.round( (setDayStartedAt - firstDayStartedAt) / convertDuration(1, "d", "ms") ) if (dayIndex < 0 || dayIndex >= days) return result[dayIndex] += getSetDuration(set) }) return result }, [days, firstDayStartedAt, sets]) const [workdays, weekends] = useMemo(() => { return splitBy(totals, (total, index) => { const timestamp = firstDayStartedAt + index * convertDuration(1, "d", "ms") return isWorkday(timestamp) ? 0 : 1 }) }, [firstDayStartedAt, totals]) const theme = useTheme() if (days < minDays) { return ( <ShyInfoBlock> After {minDays} days of using the app, you'll access a report that shows your average work hours on weekdays and weekends. </ShyInfoBlock> ) } return ( <VStack gap={20}> <Text color="contrast" weight="semibold"> Last {days} days report </Text> <UniformColumnGrid gap={20}> <AvgDay value={workdays} name="workday" /> <AvgDay value={weekends} name="weekend" /> </UniformColumnGrid> <BarChart expectedLabelHeight={0} expectedValueHeight={0} height={60} items={totals.map((value, index) => { const dayStartedAt = firstDayStartedAt + index * convertDuration(1, "d", "ms") return { value, color: isWorkday(dayStartedAt) ? getWorkdayColor(theme) : getWeekendColor(theme), } })} /> </VStack> ) } ``` While the second section of the report highlights the average work hours for workdays and weekends, the third section presents an average of the entire week along with a bar chart depicting the total hours for the last four weeks. In both sections, if there is insufficient data to display a comprehensive report, a `ShyInfoBlock` will appear, subtly informing the user of the lack of data. 
```tsx
import { ShyInfoBlock } from "@lib/ui/info/ShyInfoBlock"
import { VStack } from "@lib/ui/layout/Stack"
import { Text } from "@lib/ui/text"
import { range } from "@lib/utils/array/range"
import { convertDuration } from "@lib/utils/time/convertDuration"
import { useMemo } from "react"
import { UniformColumnGrid } from "@lib/ui/layout/UniformColumnGrid"
import { useTheme } from "styled-components"
import { useStartOfWeek } from "@lib/ui/hooks/useStartOfWeek"
import { useProjects } from "@increaser/ui/projects/ProjectsProvider"
import { fromWeek, toWeek } from "@lib/utils/time/Week"
import { order } from "@lib/utils/array/order"
import { LabeledValue } from "@lib/ui/text/LabeledValue"
import { formatDuration } from "@lib/utils/time/formatDuration"
import { sum } from "@lib/utils/array/sum"
import { BarChart } from "@lib/ui/charts/BarChart"

const maxWeeks = 4
const minWeeks = 2

export const WorkBudgetWeeksReport = () => {
  const weekStartedAt = useStartOfWeek()
  const { projects } = useProjects()
  const lastWeekStartedAt = weekStartedAt - convertDuration(1, "w", "ms")
  const firstWeekStartedAt = useMemo(() => {
    const allWeeks = projects.flatMap((project) => project.weeks).map(fromWeek)
    if (!allWeeks.length) return lastWeekStartedAt

    return Math.max(
      lastWeekStartedAt - convertDuration(maxWeeks, "w", "ms"),
      order(allWeeks, (v) => v, "asc")[0]
    )
  }, [lastWeekStartedAt, projects])
  const weeks = Math.round(
    (lastWeekStartedAt - firstWeekStartedAt) / convertDuration(1, "w", "ms")
  )

  const totals = useMemo(() => {
    const result = range(weeks).map(() => 0)
    projects
      .flatMap((project) => project.weeks)
      .forEach(({ week, year, seconds }) => {
        const weekStartedAt = fromWeek({ week, year })
        const weekIndex = Math.round(
          (weekStartedAt - firstWeekStartedAt) / convertDuration(1, "w", "ms")
        )
        if (weekIndex < 0 || weekIndex >= weeks) return

        result[weekIndex] += seconds
      })

    return result
  }, [firstWeekStartedAt, projects, weeks])

  const theme = useTheme()

  if (weeks < minWeeks) {
    return (
<ShyInfoBlock> After {minWeeks} weeks of using the app, you'll access a report that shows your average work week. </ShyInfoBlock> ) } return ( <VStack gap={20}> <Text color="contrast" weight="semibold"> Last {weeks} weeks report </Text> <UniformColumnGrid gap={20}> <LabeledValue labelColor="supporting" name={`Avg. week`}> <Text as="span" color="contrast"> {formatDuration(sum(totals) / weeks, "s", { maxUnit: "h" })} </Text> </LabeledValue> </UniformColumnGrid> <BarChart height={120} items={totals.map((value, index) => { const weekStartedAt = firstWeekStartedAt + index * convertDuration(1, "w", "ms") return { value, label: <Text>week #{toWeek(weekStartedAt).week + 1}</Text>, color: theme.colors.mist, renderValue: value > 0 ? () => ( <Text>{formatDuration(value, "s", { maxUnit: "h" })}</Text> ) : undefined, } })} /> </VStack> ) } ```
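Taken together, the chart's data flow can be reproduced outside of React. The sketch below combines the `cumulativeSum` and `normalizeDataArrays` utilities used above; the sample numbers (an 8h workday / 2h weekend budget and a few days of tracked minutes) are hypothetical and only serve to illustrate the pipeline:

```typescript
// Running total: each element is the sum of all elements up to that index.
const cumulativeSum = (numbers: number[]): number[] => {
  let total = 0
  return numbers.map((value) => (total += value))
}

// Scale every array against the shared min/max so all values land in 0..1,
// mirroring the normalizeDataArrays utility shown earlier.
const normalizeDataArrays = <T extends Record<string, number[]>>(
  input: T
): T => {
  const values = Object.values(input).flat()
  const max = Math.max(...values)
  const min = Math.min(...values)
  const range = max - min

  return Object.fromEntries(
    Object.entries(input).map(([key, value]) => [
      key,
      value.map((v) => (v - min) / range),
    ])
  ) as T
}

// Hypothetical budget: 8h on workdays, 2h on weekends, converted to minutes.
const dayBudgetsInHours = [8, 8, 8, 8, 8, 2, 2]
const workBudget = cumulativeSum(dayBudgetsInHours.map((h) => h * 60))

// Hypothetical tracked minutes for Mon-Wed; the remaining days are zero.
const workDone = cumulativeSum([420, 390, 480, 0, 0, 0, 0])

const normalized = normalizeDataArrays({ workBudget, workDone })
```

Because both arrays are normalized against the same minimum and maximum, the budget line and the done line remain directly comparable on a shared Y axis.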
radzion
1,907,116
Taming the Stream: Exploring Thread-safe Ring Buffers
In the realm of programming, data structures are fundamental building blocks. When dealing with...
0
2024-07-01T04:29:46
https://dev.to/epakconsultant/taming-the-stream-exploring-thread-safe-ring-buffers-35oe
In the realm of programming, data structures are fundamental building blocks. When dealing with continuous data streams or real-time applications, traditional queues might not suffice. Enter the thread-safe ring buffer, a versatile data structure designed for efficient management of data streams while ensuring thread safety. This article delves into the concept of ring buffers, their advantages, and how to implement a thread-safe version in various programming languages. Understanding Ring Buffers: Imagine a circular buffer, akin to a racetrack. Data elements are written (produced) at the "tail" and read (consumed) from the "head." Once the tail reaches the end, it wraps around to the beginning, overwriting the oldest data (similar to a full lap). This cyclic nature allows for efficient memory utilization, especially for fixed-size data streams. Why Thread-Safe Ring Buffers? In multithreaded environments, multiple threads might try to access the ring buffer concurrently. Without proper synchronization, this can lead to data corruption or race conditions. Thread-safe ring buffers employ synchronization mechanisms (e.g., mutexes, atomic operations) to ensure: - Mutual Exclusion: Only one thread can access critical sections of the ring buffer code (adding or removing elements) at a time. - Data Consistency: Data integrity is maintained even when multiple threads interact with the buffer concurrently. Benefits of Thread-Safe Ring Buffers: - Efficiency: Efficient memory management and minimal overhead compared to traditional dynamic queues. - Real-Time Performance: Ideal for handling continuous data streams with minimal latency, facilitating real-time applications. - Bounded Memory Usage: Predefined size ensures predictable memory usage patterns, crucial for embedded systems. - Thread Safety: Concurrent access from multiple threads is handled gracefully, preventing data corruption. 
Implementing a Thread-Safe Ring Buffer:

The specific implementation of a thread-safe ring buffer varies depending on the programming language. However, the core principles remain consistent:

- Data Structure: An array to store elements and variables to track the head and tail positions.
- Synchronization: Mutexes or atomic operations to control concurrent access to critical sections.
- Full/Empty Checks: Functions to check if the buffer is full before adding or empty before removing elements.

Here's a basic C++ example using a mutex:

```cpp
#include <mutex>

constexpr int SIZE = 64; // fixed buffer capacity

class ThreadSafeRingBuffer {
private:
    int buffer[SIZE];
    int head = 0, tail = 0, count = 0; // head: next read, tail: next write
    std::mutex mtx;

public:
    // Returns false instead of blocking when the buffer is full.
    bool add(int data) {
        std::lock_guard<std::mutex> lock(mtx);
        if (count == SIZE) return false; // full check
        buffer[tail] = data;
        tail = (tail + 1) % SIZE; // wrap around
        ++count;
        return true;
    }

    // Returns false when the buffer is empty; the value is written to `data`.
    bool remove(int& data) {
        std::lock_guard<std::mutex> lock(mtx);
        if (count == 0) return false; // empty check
        data = buffer[head];
        head = (head + 1) % SIZE; // wrap around
        --count;
        return true;
    }
};
```

[Vue.js for Everyone: A Beginner's Guide to Building Dynamic Web Applications](https://www.amazon.com/dp/B0CW18ZNPK)

Languages and Libraries:

Several programming languages offer built-in functionalities or libraries for thread-safe ring buffers:

- C++: Utilize libraries like Boost.Circular_Buffer or custom implementations with atomic operations.
- Java: The java.util.concurrent package provides classes like LinkedBlockingQueue that can be adapted for ring buffer-like behavior.
- Python: Third-party libraries like queuelib offer thread-safe ring buffer implementations.

Beyond the Basics:

While the basic concept is straightforward, consider these points for advanced applications:

- Dropping Data: Implement strategies for handling situations when the buffer is full and new data needs to be discarded.
- Multiple Producers/Consumers: Adapt the synchronization mechanisms to handle scenarios with multiple threads producing and consuming data concurrently.
- Performance Optimization: For high-performance applications, explore lock-free ring buffer implementations (caution: requires careful design and potential trade-offs). Conclusion: Thread-safe ring buffers offer a powerful tool for managing data streams and real-time applications in multithreaded environments. By understanding their functionality, thread safety considerations, and implementation nuances, you can leverage them to enhance the performance and reliability of your programs. Remember, selecting the appropriate implementation approach depends on your specific programming language and application requirements.
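To illustrate the "dropping data" strategy mentioned above, here is a minimal overwrite-the-oldest ring buffer sketched in TypeScript. Note that this version is deliberately unsynchronized: JavaScript executes user code on a single thread, so a mutex is unnecessary here; in a genuinely multithreaded setup (e.g. Node's `worker_threads` sharing a `SharedArrayBuffer`) you would guard the indices with `Atomics` instead.

```typescript
class RingBuffer<T> {
  private buffer: (T | undefined)[]
  private head = 0 // next read position
  private count = 0

  constructor(private capacity: number) {
    this.buffer = new Array(capacity)
  }

  // Write at the tail; when full, overwrite the oldest element (full lap).
  add(item: T): void {
    const tail = (this.head + this.count) % this.capacity
    this.buffer[tail] = item
    if (this.count < this.capacity) {
      this.count += 1
    } else {
      this.head = (this.head + 1) % this.capacity // drop the oldest
    }
  }

  // Read from the head; returns undefined when empty.
  remove(): T | undefined {
    if (this.count === 0) return undefined
    const item = this.buffer[this.head]
    this.head = (this.head + 1) % this.capacity
    this.count -= 1
    return item
  }

  get size(): number {
    return this.count
  }
}

const rb = new RingBuffer<number>(3)
for (const n of [1, 2, 3, 4]) rb.add(n) // 1 is overwritten by 4
```

After the four writes, the buffer holds 2, 3, 4 in read order, demonstrating how the oldest sample is silently discarded once the fixed capacity is exceeded.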
epakconsultant
1,906,540
VIP in GCP
If you're running IRIS in a mirrored configuration for HA in GCP, the question of providing a Mirror...
0
2024-06-30T11:30:14
https://community.intersystems.com/post/vip-gcp
cloud, gcp, mirroring, beginners
If you're running IRIS in a mirrored configuration for HA in GCP, the question of providing a [Mirror VIP](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror_set_config#GHA_mirror_set_virtualip) (Virtual IP) becomes relevant. Virtual IP offers a way for downstream systems to interact with IRIS using one IP address. Even when a failover happens, downstream systems can reconnect to the same IP address and continue working.

The main issue, when deploying to GCP, is that an IRIS VIP requires IRIS to be essentially a network admin, per the [docs](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror_set).

To get HA, IRIS mirror members must be deployed to different availability zones in one subnet (which is possible in GCP, as subnets always span the entire region). One of the solutions might be load balancers, but they, of course, cost extra, and you need to administer them.

In this article, I would like to provide a way to configure a Mirror VIP without using the Load Balancers suggested in most other [GCP reference architectures](https://community.intersystems.com/post/intersystems-iris-example-reference-architectures-google-cloud-platform-gcp).

# Architecture

![GCP VIP](https://github.com/eduard93/Articles/assets/5127457/148cac1e-b385-4bad-982f-ebf60ff0dc9b)

We have a subnet running across the region (I simplify here - of course, you'll probably have public subnets, an arbiter in another AZ, and so on, but this is the absolute minimum needed to demonstrate this approach). The subnet's CIDR is `10.0.0.0/24`, which means it covers the addresses `10.0.0.0` to `10.0.0.255`. As GCP [reserves](https://cloud.google.com/vpc/docs/subnets#unusable-ip-addresses-in-every-subnet) the first two and last two addresses in each subnet, we can use `10.0.0.2` to `10.0.0.253`.

We will implement both public and private VIPs at the same time. If you want, you can implement only the private VIP.
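As a quick sanity check of the address math, here is a small hypothetical helper (not part of any GCP SDK) that computes the usable host range for a /24 subnet under GCP's rule of reserving the first two addresses (network, default gateway) and the last two (second-to-last, broadcast):

```typescript
// Only handles simple /24 prefixes like the 10.0.0.0/24 subnet used here;
// a real implementation would parse arbitrary prefix lengths.
const usableRange = (cidr: string): { first: string; last: string } => {
  const [base, prefix] = cidr.split("/")
  if (prefix !== "24") throw new Error("sketch only supports /24")
  const octets = base.split(".").slice(0, 3).join(".")
  // Hosts run from .0 to .255; GCP reserves .0/.1 and .254/.255.
  return { first: `${octets}.2`, last: `${octets}.253` }
}
```

For our subnet, this yields `10.0.0.2` through `10.0.0.253`, matching the range above from which we pick the private VIP.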
# Idea

Virtual Machines in GCP have [Network Interfaces](https://cloud.google.com/compute/docs/networking/network-overview). These Network Interfaces have [Alias IP Ranges](https://cloud.google.com/compute/docs/reference/rest/v1/instances/updateNetworkInterface), which are private IP addresses. Public IP addresses can be added by specifying an [Access Config](https://cloud.google.com/compute/docs/reference/rest/v1/instances/addAccessConfig). A Network Interface's configuration is therefore a combination of public and/or private IPs, and traffic is routed automatically to the Virtual Machine associated with the Network Interface. So there is no need to update the routes.

What we'll do is, during a mirror failover event, delete the VIP IP configuration from the old primary and create it for the new primary. All operations to do that take 5-20 seconds for a private VIP only, and from 5 seconds up to a minute for a public/private VIP combination.

# Implementing VIP

1. Allocate an IP address to use as a public VIP. Skip this step if you want a private VIP only.
2. Decide on a private VIP value. I will use `10.0.0.250`.
3. Provision your IRIS instances with a [service account](https://cloud.google.com/iam/docs/service-account-overview) that has the following permissions:
   - compute.instances.get
   - compute.addresses.use
   - compute.addresses.useInternal
   - compute.instances.updateNetworkInterface
   - compute.subnetworks.use

   For an external VIP you'll also need:
   - compute.instances.addAccessConfig
   - compute.instances.deleteAccessConfig
   - compute.networks.useExternalIp
   - compute.subnetworks.useExternalIp
   - compute.addresses.list
4. When a mirror member becomes primary, we'll use a [ZMIRROR](https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GHA_mirror_set_config#GHA_mirror_set_tunable_params_zmirror_routine) callback to delete the VIP IP configuration on the other mirror member's network interface and create a VIP IP configuration pointing at itself.

That's it.
```objectscript ROUTINE ZMIRROR NotifyBecomePrimary() PUBLIC { #include %occMessages set sc = ##class(%SYS.System).WriteToConsoleLog("Setting Alias IP instead of Mirror VIP"_$random(100)) set sc = ##class(%SYS.Python).Import("set_alias_ip") quit sc } ``` And here's `set_alias_ip.py` which must be placed into `mgr\python` directory: ```python """ This script adds Alias IP (https://cloud.google.com/vpc/docs/alias-ip) to the VM Network Interface. You can allocate alias IP ranges from the primary subnet range, or you can add a secondary range to the subnet and allocate alias IP ranges from the secondary range. For simplicity, we use the primary subnet range. Using google cli, gcloud, this action could be performed in this way: $ gcloud compute instances network-interfaces update <instance_name> --zone=<subnet_zone> --aliases="10.0.0.250/32" Note that the command for alias removal looks similar - just provide an empty `aliases`: $ gcloud compute instances network-interfaces update <instance_name> --zone=<subnet_zone> --aliases="" We leverage Google Compute Engine Metadata API to retrieve <instance_name> as well as <subnet_zone>. Also note https://cloud.google.com/vpc/docs/subnets#unusable-ip-addresses-in-every-subnet. Google Cloud uses the first two and last two IPv4 addresses in each subnet primary IPv4 address range to host the subnet. Google Cloud lets you use all addresses in secondary IPv4 ranges, i.e.: - 10.0.0.0 - Network address - 10.0.0.1 - Default gateway address - 10.0.0.254 - Second-to-last address. 
Reserved for potential future use - 10.0.0.255 - Broadcast address After adding Alias IP, you can check its existence using 'ip' utility: $ ip route ls table local type local dev eth0 scope host proto 66 local 10.0.0.250 """ import subprocess import requests import re import time from google.cloud import compute_v1 ALIAS_IP = "10.0.0.250/32" METADATA_URL = "http://metadata.google.internal/computeMetadata/v1/" METADATA_HEADERS = {"Metadata-Flavor": "Google"} project_path = "project/project-id" instance_path = "instance/name" zone_path = "instance/zone" network_interface = "nic0" mirror_public_ip_name = "isc-mirror" access_config_name = "isc-mirror" mirror_instances = ["isc-primary-001", "isc-backup-001"] def get_metadata(path: str) -> str: return requests.get(METADATA_URL + path, headers=METADATA_HEADERS).text def get_zone() -> str: return get_metadata(zone_path).split('/')[3] client = compute_v1.InstancesClient() project = get_metadata(project_path) availability_zone = get_zone() def get_ip_address_by_name(): ip_address = "" client = compute_v1.AddressesClient() request = compute_v1.ListAddressesRequest( project=project, region='-'.join(get_zone().split('-')[0:2]), filter="name=" + mirror_public_ip_name, ) response = client.list(request=request) for item in response: ip_address = item.address return ip_address def get_zone_by_instance_name(instance_name: str) -> str: request = compute_v1.AggregatedListInstancesRequest() request.project = project instance_zone = "" for zone, response in client.aggregated_list(request=request): if response.instances: if re.search(f"{availability_zone}*", zone): for instance in response.instances: if instance.name == instance_name: return zone.split('/')[1] return instance_zone def update_network_interface(action: str, instance_name: str, zone: str) -> None: if action == "create": alias_ip_range = compute_v1.AliasIpRange( ip_cidr_range=ALIAS_IP, ) nic = compute_v1.NetworkInterface( alias_ip_ranges=[] if action == "delete" else 
[alias_ip_range], fingerprint=client.get( instance=instance_name, project=project, zone=zone ).network_interfaces[0].fingerprint, ) request = compute_v1.UpdateNetworkInterfaceInstanceRequest( project=project, zone=zone, instance=instance_name, network_interface_resource=nic, network_interface=network_interface, ) response = client.update_network_interface(request=request) print(instance_name + ": " + str(response.status)) def get_remote_instance_name() -> str: local_instance = get_metadata(instance_path) mirror_instances.remove(local_instance) return ''.join(mirror_instances) def delete_remote_access_config(remote_instance: str) -> None: request = compute_v1.DeleteAccessConfigInstanceRequest( access_config=access_config_name, instance=remote_instance, network_interface="nic0", project=project, zone=get_zone_by_instance_name(remote_instance), ) response = client.delete_access_config(request=request) print(response) def add_access_config(public_ip_address: str) -> None: access_config = compute_v1.AccessConfig( name = access_config_name, nat_i_p=public_ip_address, ) request = compute_v1.AddAccessConfigInstanceRequest( access_config_resource=access_config, instance=get_metadata(instance_path), network_interface="nic0", project=project, zone=get_zone_by_instance_name(get_metadata(instance_path)), ) response = client.add_access_config(request=request) print(response) # Get another failover member's instance name and zone remote_instance = get_remote_instance_name() print(f"Alias IP is going to be deleted at [{remote_instance}]") # Remove Alias IP from a remote failover member's Network Interface # # TODO: Perform the next steps when an issue https://github.com/googleapis/google-cloud-python/issues/11931 will be closed: # - update google-cloud-compute pip package to a version containing fix (>1.15.0) # - remove a below line calling gcloud with subprocess.run() # - uncomment update_network_interface() function subprocess.run([ "gcloud", "compute", "instances", 
    "network-interfaces", "update",
    remote_instance,
    "--zone=" + get_zone_by_instance_name(remote_instance),
    "--aliases="
])
# update_network_interface("delete",
#                          remote_instance,
#                          get_zone_by_instance_name(remote_instance)

# Add Alias IP to a local failover member's Network Interface
update_network_interface("create", get_metadata(instance_path), availability_zone)

# Handle public IP switching
public_ip_address = get_ip_address_by_name()
if public_ip_address:
    print(f"Public IP [{public_ip_address}] is going to be switched to [{get_metadata(instance_path)}]")
    delete_remote_access_config(remote_instance)
    time.sleep(10)
    add_access_config(public_ip_address)
```

# Demo

Now let's deploy this IRIS architecture into GCP using Terraform and Ansible. If you're already running IRIS in GCP or using a different tool, the ZMIRROR script is available [here](https://github.com/eduard93/gcp-infra/blob/main/docker-compose/iris/set_alias_ip.py).

## Tools

We'll need the following tools. As Ansible is Linux-only, I highly recommend running it on Linux, although I confirmed that it works on Windows in WSL2 too.

[gcloud](https://cloud.google.com/sdk/docs/install):

```bash
$ gcloud version
Google Cloud SDK 459.0.0
...
```

[terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli):

```bash
$ terraform version
Terraform v1.6.3
```

[python](https://www.python.org/downloads/):

```bash
$ python3 --version
Python 3.10.12
```

[ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html):

```bash
$ ansible --version
ansible [core 2.12.5]
...
```

[ansible-playbook](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html):

```bash
$ ansible-playbook --version
ansible-playbook [core 2.12.5]
...
```

## WSL2

If you're running in WSL2 on Windows, you'll need to restart the ssh agent by running:

```
eval `ssh-agent -s`
```

Also, sometimes (when Windows goes to sleep/hibernate and back) the WSL clock is not synced; you might need to sync it explicitly:

```
sudo hwclock -s
```

## Headless servers

If you're running a headless server, use `gcloud auth login --no-browser` to authenticate against GCP.

## IaC

We leverage Terraform and store its state in a Cloud Storage bucket. See details below about how this storage is created.

### Define required variables

```bash
$ export PROJECT_ID=<project_id>
$ export REGION=<region> # For instance, us-west1
$ export TF_VAR_project_id=${PROJECT_ID}
$ export TF_VAR_region=${REGION}
$ export ROLE_NAME=MyTerraformRole
$ export SA_NAME=isc-mirror
```

**Note**: If you'd like to add a public VIP, which exposes IRIS Mirror ports publicly (not recommended), you can enable it with:

```bash
$ export TF_VAR_enable_mirror_public_ip=true
```

### Prepare Artifact Registry

It's [recommended](https://cloud.google.com/container-registry/docs/advanced-authentication) to leverage Google Artifact Registry instead of Container Registry. So let's create the registry first:

```bash
$ cd <root_repo_dir>/terraform
$ cat ${SA_NAME}.json | docker login -u _json_key --password-stdin https://${REGION}-docker.pkg.dev
$ gcloud artifacts repositories create --repository-format=docker --location=${REGION} intersystems
```

### Prepare Docker images

Let's assume that the VM instances don't have access to the ISC container repository, but you personally do, and at the same time you do not want to put your personal credentials on the VMs.
In that case you can pull the IRIS Docker images from the ISC container registry and push them to the Google Artifact Registry, which the VMs do have access to:

```bash
$ docker login containers.intersystems.com
$ <Put your credentials here>
$ export IRIS_VERSION=2023.2.0.221.0
$ cd docker-compose/iris
$ docker build -t ${REGION}-docker.pkg.dev/${PROJECT_ID}/intersystems/iris:${IRIS_VERSION} .
$ for IMAGE in webgateway arbiter; do \
    docker pull containers.intersystems.com/intersystems/${IMAGE}:${IRIS_VERSION} \
    && docker tag containers.intersystems.com/intersystems/${IMAGE}:${IRIS_VERSION} ${REGION}-docker.pkg.dev/${PROJECT_ID}/intersystems/${IMAGE}:${IRIS_VERSION} \
    && docker push ${REGION}-docker.pkg.dev/${PROJECT_ID}/intersystems/${IMAGE}:${IRIS_VERSION}; \
  done
$ docker push ${REGION}-docker.pkg.dev/${PROJECT_ID}/intersystems/iris:${IRIS_VERSION}
```

### Put IRIS license

Put the IRIS license key file, `iris.key`, into `<root_repo_dir>/docker-compose/iris/iris.key`. Note that the license has to support Mirroring.
### Create Terraform Role

This role will be used by Terraform for managing the needed GCP resources:

```bash
$ cd <root_repo_dir>/terraform/
$ gcloud iam roles create ${ROLE_NAME} --project ${PROJECT_ID} --file=terraform-permissions.yaml
```

**Note**: use `update` for subsequent changes:

```bash
$ gcloud iam roles update ${ROLE_NAME} --project ${PROJECT_ID} --file=terraform-permissions.yaml
```

### Create Service Account with Terraform role

```bash
$ gcloud iam service-accounts create ${SA_NAME} \
  --description="Terraform Service Account for ISC Mirroring" \
  --display-name="Terraform Service Account for ISC Mirroring"
$ gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role=projects/${PROJECT_ID}/roles/${ROLE_NAME}
```

### Generate Service Account key

Generate a Service Account key and store its path in an environment variable:

```bash
$ gcloud iam service-accounts keys create ${SA_NAME}.json \
  --iam-account=${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
$ export GOOGLE_APPLICATION_CREDENTIALS=<absolute_path_to_root_repo_dir>/terraform/${SA_NAME}.json
```

### Generate SSH keypair

Store the private part locally as `.ssh/isc_mirror` and make it visible to `ssh-agent`. Put the public part into the file [isc_mirror.pub](../terraform/templates/isc_mirror.pub):

```bash
$ ssh-keygen -b 4096 -C "isc" -f ~/.ssh/isc_mirror
$ ssh-add ~/.ssh/isc_mirror
$ ssh-add -l # Check if 'isc' key is present
$ cp ~/.ssh/isc_mirror.pub <root_repo_dir>/terraform/templates/
```

### Create Cloud Storage

Cloud Storage is used for storing [Terraform state remotely](https://developer.hashicorp.com/terraform/language/state/remote). You could take a look at [Store Terraform state in a Cloud Storage bucket](https://cloud.google.com/docs/terraform/resource-management/store-state) as an example.
**Note**: the created Cloud Storage bucket will have a name like `isc-mirror-demo-terraform-<project_id>`:

```bash
$ cd <root_repo_dir>/terraform-storage/
$ terraform init
$ terraform plan
$ terraform apply
```

### Create resources with Terraform

```bash
$ cd <root_repo_dir>/terraform/
$ terraform init -backend-config="bucket=isc-mirror-demo-terraform-${PROJECT_ID}"
$ terraform plan
$ terraform apply
```

**Note 1**: Four virtual machines will be created. Only one of them has a public IP address and plays the role of a bastion host. This machine is called `isc-client-001`. You can find the public IP of the `isc-client-001` instance by running the following command:

```bash
$ export ISC_CLIENT_PUBLIC_IP=$(gcloud compute instances describe isc-client-001 --zone=${REGION}-c --format=json | jq -r '.networkInterfaces[].accessConfigs[].natIP')
```

**Note 2**: Sometimes Terraform fails with errors like:

```bash
Failed to connect to the host via ssh: kex_exchange_identification: Connection closed by remote host...
```

In that case, try to clean the local `~/.ssh/known_hosts` file:

```bash
$ for IP in ${ISC_CLIENT_PUBLIC_IP} 10.0.0.{3..6}; do ssh-keygen -R "[${IP}]:2180"; done
```

and then repeat `terraform apply`.

## Quick test

### Access to IRIS mirror instances with SSH

All instances, except `isc-client-001`, are created in a private network to increase the security level. But you can access them using the [SSH ProxyJump](https://goteleport.com/blog/ssh-proxyjump-ssh-proxycommand/) feature. Get the `isc-client-001` public IP first:

```bash
$ export ISC_CLIENT_PUBLIC_IP=$(gcloud compute instances describe isc-client-001 --zone=${REGION}-c --format=json | jq -r '.networkInterfaces[].accessConfigs[].natIP')
```

Then connect to, for example, `isc-primary-001` with the private SSH key.
Note that we use a custom SSH port, `2180`:

```bash
$ ssh -i ~/.ssh/isc_mirror -p 2180 isc@10.0.0.3 -o ProxyJump=isc@${ISC_CLIENT_PUBLIC_IP}:2180
```

After connecting, let's check that the Primary mirror member has the Alias IP:

```bash
[isc@isc-primary-001 ~]$ ip route ls table local type local dev eth0 scope host proto 66
local 10.0.0.250
[isc@isc-primary-001 ~]$ ping -c 1 10.0.0.250
PING 10.0.0.250 (10.0.0.250) 56(84) bytes of data.
64 bytes from 10.0.0.250: icmp_seq=1 ttl=64 time=0.049 ms
```

### Access to IRIS mirror instances Management Portals

To open the mirror instances' Management Portals, which are located in a private network, we leverage [SSH Socks Tunneling](https://goteleport.com/blog/ssh-tunneling-explained/). Let's connect to the `isc-primary-001` instance. Note that the tunnel will live in the background after the next command:

```bash
$ ssh -f -N -i ~/.ssh/isc_mirror -p 2180 isc@10.0.0.3 -o ProxyJump=isc@${ISC_CLIENT_PUBLIC_IP}:2180 -L 8080:10.0.0.3:8080
```

Port 8080, instead of the familiar 52773, is used because we start IRIS with a dedicated WebGateway running on port 8080. After a successful connection, open [http://127.0.0.1:8080/csp/sys/UtilHome.csp](http://127.0.0.1:8080/csp/sys/UtilHome.csp) in a browser. You should see a Management Portal. The credentials are the typical `_system/SYS`. The same approach works for all instances: primary (10.0.0.3), backup (10.0.0.4) and arbiter (10.0.0.5). Just make an SSH connection to them first.

### Test

Let's connect to `isc-client-001`:

```bash
$ ssh -i ~/.ssh/isc_mirror -p 2180 isc@${ISC_CLIENT_PUBLIC_IP}
```

Check the Primary mirror member's Management Portal availability on the Alias IP address:

```bash
$ curl -s -o /dev/null -w "%{http_code}\n" http://10.0.0.250:8080/csp/sys/UtilHome.csp
200
```

Let's connect to `isc-primary-001` in another console:

```bash
$ ssh -i ~/.ssh/isc_mirror -p 2180 isc@10.0.0.3 -o ProxyJump=isc@${ISC_CLIENT_PUBLIC_IP}:2180
```

And switch the current Primary instance off.
Note that IRIS, as well as its WebGateway, is running in Docker:

```bash
[isc@isc-primary-001 ~]$ docker-compose -f /isc-mirror/docker-compose.yml down
```

Let's check the mirror member's Management Portal availability on the Alias IP address again from `isc-client-001`:

```bash
[isc@isc-client-001 ~]$ curl -s -o /dev/null -w "%{http_code}\n" http://10.0.0.250:8080/csp/sys/UtilHome.csp
200
```

It still works, because the Alias IP was moved to the `isc-backup-001` instance:

```bash
$ ssh -i ~/.ssh/isc_mirror -p 2180 isc@10.0.0.4 -o ProxyJump=isc@${ISC_CLIENT_PUBLIC_IP}:2180
[isc@isc-backup-001 ~]$ ip route ls table local type local dev eth0 scope host proto 66
local 10.0.0.250
```

## Cleanup

### Remove infrastructure

```bash
$ cd <root_repo_dir>/terraform/
$ terraform init -backend-config="bucket=isc-mirror-demo-terraform-${PROJECT_ID}"
$ terraform destroy
```

### Remove Artifact Registry

```bash
$ cd <root_repo_dir>/terraform
$ cat ${SA_NAME}.json | docker login -u _json_key --password-stdin https://${REGION}-docker.pkg.dev
$ for IMAGE in iris webgateway arbiter; do \
    gcloud artifacts docker images delete ${REGION}-docker.pkg.dev/${PROJECT_ID}/intersystems/${IMAGE}
  done
$ gcloud artifacts repositories delete intersystems --location=${REGION}
```

### Remove Cloud Storage

Remove the Cloud Storage bucket where Terraform stores its state. In our case, it's `isc-mirror-demo-terraform-<project_id>`.

### Remove Terraform Role

Remove the Terraform Role created in [Create Terraform Role](#create-terraform-role).

# Conclusion

And that's it! We change the networking configuration to point at the current mirror Primary when the NotifyBecomePrimary event happens.

The author would like to thank @Mikhail.Khomenko, @Vadim.Aniskin, and @Evgeny.Shvarov for the Community Ideas Program which made this article possible.
intersystemsdev
1,907,115
Dive Into the Fascinating World of Computer Systems with CMU's ICS Course! 🚀
Explore the programmer's view of computer systems execution, information storage, and communication. Enhance your programming skills and prepare for advanced studies in computer science.
27,844
2024-07-01T04:29:33
https://getvm.io/tutorials/15-213-introduction-to-computer-systems-ics-carnegie-mellon-university
getvm, programming, freetutorial, universitycourses
As a passionate computer science student, I'm thrilled to share with you an incredible resource that has transformed my understanding of computer systems: the "Introduction to Computer Systems (ICS)" course offered by Carnegie-Mellon University (CMU). ## Explore the Programmer's Perspective 🧠 This course provides a deep dive into the inner workings of computer systems, giving you a programmer's-eye view of how programs are executed, information is stored, and communication occurs. By delving into the nitty-gritty details of computer systems, you'll gain a newfound appreciation for the complexities that underlie the software we use every day. ## Unlock Advanced Studies in Computer Science 🔑 The ICS course serves as a solid foundation for further studies in areas such as compilers, networks, operating systems, and computer architecture. By developing a deeper understanding of systems-level issues, you'll be better equipped to tackle the challenges that arise in these advanced fields of computer science. ## Enhance Your Programming Skills 💻 One of the key highlights of this course is its focus on performance evaluation and optimization. You'll learn techniques to make your code more effective, robust, and portable, equipping you with the skills to become a more proficient programmer. Whether you're aiming to optimize your personal projects or contribute to large-scale software development, these lessons will be invaluable. ## Dive into the Course Content 📚 The ICS course covers a wide range of topics, including: - Machine-level code and its generation by optimizing compilers - Computer arithmetic, memory organization, and management - Networking technology and protocols - Concurrent computation and its challenges Prepare to be captivated by the depth and breadth of this comprehensive course! 😍 ## Get Started Today! 
🚀 If you're ready to embark on a transformative journey into the world of computer systems, I highly recommend checking out the "Introduction to Computer Systems (ICS)" course at Carnegie-Mellon University. You can find more information and access the course materials at [http://www.cs.cmu.edu/~213/](http://www.cs.cmu.edu/~213/). Get ready to unlock a new level of understanding and become a more versatile and effective programmer! 💪 ## Enhance Your Learning Experience with GetVM's Playground 🚀 To truly make the most of the "Introduction to Computer Systems (ICS)" course from Carnegie-Mellon University, I highly recommend utilizing the GetVM browser extension. GetVM provides an online coding playground that allows you to seamlessly apply the concepts you learn and experiment with hands-on exercises. The GetVM Playground [https://getvm.io/tutorials/15-213-introduction-to-computer-systems-ics-carnegie-mellon-university] offers a powerful and intuitive environment where you can dive into the course material and put your newfound knowledge into practice. With instant access to a virtual machine, you can write, test, and debug your code without the hassle of setting up a local development environment. The GetVM Playground's user-friendly interface and real-time feedback make it the perfect companion for your ICS learning journey. Quickly iterate on your code, explore different approaches, and see the immediate results of your efforts. This interactive experience will solidify your understanding of the course content and help you become a more confident and capable programmer. Don't just read about computer systems – experience them firsthand with the power of GetVM's Playground. Enhance your learning and unlock your full potential as you navigate the ICS course and prepare for advanced studies in computer science. 💻✨ --- ## Practice Now! 
- 🔗 Visit [Introduction to Computer Systems (ICS) | Carnegie-Mellon University](http://www.cs.cmu.edu/~213/) original website - 🚀 Practice [Introduction to Computer Systems (ICS) | Carnegie-Mellon University](https://getvm.io/tutorials/15-213-introduction-to-computer-systems-ics-carnegie-mellon-university) on GetVM - 📖 Explore More [Free Resources on GetVM](https://getvm.io/explore) Join our [Discord](https://discord.gg/XxKAAFWVNu) or tweet us [@GetVM](https://x.com/getvmio) ! 😄
getvm
1,907,114
10 Indications the Developer Inside You is Dying
In the fast-paced world of software development, it’s easy to get caught up in deadlines, bugs, and...
0
2024-07-01T04:21:23
https://medium.com/@burhanuddinhamzabhai/10-indications-the-developer-inside-you-is-dying-5cc31b949877
developerburnout, softwaredevelopment, techinovation, codingpassion
In the fast-paced world of software development, it’s easy to get caught up in deadlines, bugs, and endless lines of code. However, beneath the surface, the passion that once drove you to create innovative solutions can start to wane. Here are ten signs that the developer inside you might be dying and how to reignite that spark. **1. Lack of Curiosity** Remember the days when you eagerly awaited new tech trends and updates? If you find yourself indifferent to the latest frameworks or languages, it might be a sign. Curiosity is a developer’s lifeblood; without it, your growth stagnates. **2. Dreading Work** Everyone has off days, but if the thought of sitting down to code fills you with dread more often than not, it’s a red flag. Passion should drive your work, not just a paycheck. **3. Sticking to Old Tools** Using what you know is comfortable, but if you’re resistant to learning new tools or technologies, you might be stifling your development. Innovation requires adaptation. **4. Avoiding Collaboration** Great software development is often a team effort. If you’re isolating yourself and avoiding collaboration, you might be missing out on valuable insights and learning opportunities. **5. Neglecting Best Practices** Are you cutting corners and ignoring best practices like code reviews and documentation? These shortcuts can lead to bigger problems down the line and indicate a loss of pride in your work. **6. Burnout Symptoms** Persistent fatigue, lack of motivation, and feeling overwhelmed are all symptoms of burnout. It’s crucial to address these feelings before they lead to more serious health issues. **7. No Side Projects** Side projects are a great way to experiment and learn without the pressure of deadlines. If you haven’t felt the urge to start something new for fun, it might be time to reassess your engagement with the craft. **8. Stagnant Skillset** The tech industry evolves rapidly. 
If you’re not learning and growing, your skills can quickly become obsolete. Continuous learning is essential to stay relevant and excited about your work. **9. Lack of Community Involvement** Engaging with the developer community through forums, meetups, or conferences can reignite your passion. If you’re not involved, you might be missing out on inspiration and networking opportunities. **10. Feeling Disconnected from Your Work** If you feel disconnected from the purpose of your projects, it’s hard to stay motivated. Try to find meaning in your work, whether through the impact it has or the problems it solves. **Reignite Your Passion** Recognizing these signs is the first step to reigniting your passion. Take a break if you need to, explore new technologies, and reconnect with the developer community. Remember, it’s never too late to reignite the spark that made you fall in love with coding in the first place. By addressing these signs early, you can keep the developer inside you alive and thriving. After all, the tech world needs passionate, innovative minds to keep pushing the boundaries of what’s possible. > “Let your code be not just a creation, but a reflection of your passion reborn with every keystroke.” — [Burhanuddin Mulla Hamzabhai](https://medium.com/@burhanuddinhamzabhai)
burhanuddin
1,907,113
Figma for Beginners
Hello everyone! Today I'll be making a blog on Figma. I wanted to blog on Figma because I see Figma...
0
2024-07-01T04:14:38
https://dev.to/christopherchhim/figma-for-beginners-1pki
webdev, beginners, learning, designpatterns
Hello everyone! Today I'll be making a blog on Figma. I wanted to blog on Figma because I see Figma everywhere but I still don't know what it is. Figma helps with interface design and responsive web design. Figma frames have preset devices and screen sizes.

1. Creating Figma Frames
Figma frames can be created by simply hitting the "A" or "F" key. There is a dropdown pane with the list of devices that the user wants to create the frame with.

2. Frame interactions
Frames can be modified by changing preset values or simply dragging the corners of the box.

3. Nesting Frames
Nesting frames is necessary when building complex interfaces. This is the process of combining frames.

4. Top-Level Frames
Top-level frames contain all the other frames nested within them.

These are notes for me to tell myself in case I ever need them.

This post was inspired by:
Castaneda, J. (2024, June 25) All You Need to Know About Frames in Figma
Retrieved from: [https://webdesign.tutsplus.com/frames-in-figma--cms-108737t#toc-apyl-nest-frames]
christopherchhim
1,907,112
Implementing BDD with `pytest-bdd` and `pytest-playwright` for Web Testing
A tutorial to learn bdd using pytest-bdd and pytest-playwright
0
2024-07-01T04:12:29
https://dev.to/abbazs/implementing-bdd-with-pytest-bdd-and-pytest-playwright-for-web-testing-9fj
bdd, pytestbdd, pytestplaywright, cuccumber
--- title: Implementing BDD with `pytest-bdd` and `pytest-playwright` for Web Testing published: true description: A tutorial to learn bdd using pytest-bdd and pytest-playwright tags: bdd, pytestbdd, pytestplaywright, cuccumber # cover_image: https://direct_url_to_image.jpg # Use a ratio of 100:42 for best results. # published_at: 2024-07-01 03:11 +0000 --- ## Introduction Behavior-Driven Development (BDD) is a software development process that enhances communication between developers, testers, and non-technical stakeholders. It uses simple, natural language syntax to describe the behavior of the application, making it easier for everyone to understand the requirements and the expected behavior. ### Example of BDD Consider a simple feature of logging into a web application. In BDD, this might be described in a feature file using Gherkin syntax as follows: ```gherkin Feature: User Login Scenario: Successful login with valid credentials Given the user is on the login page When the user enters valid credentials Then the user should be redirected to the dashboard ``` In this example, the behavior of the login feature is clearly described in a way that both technical and non-technical stakeholders can understand. 
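To see how such a scenario eventually drives code, here is a deliberately simplified, framework-free sketch of the Given/When/Then flow. Plain functions and a `state` dict stand in for pytest-bdd's decorated step functions and fixtures; all names here are illustrative, not pytest-bdd API:

```python
# Illustrative only: pytest-bdd binds each Gherkin step to a decorated
# function. Here the three steps are plain functions chained by hand so
# the Given/When/Then flow is visible.

def given_the_user_is_on_the_login_page(state):
    state["page"] = "login"

def when_the_user_enters_valid_credentials(state):
    assert state["page"] == "login"
    state["authenticated"] = True

def then_the_user_is_redirected_to_the_dashboard(state):
    assert state["authenticated"]
    state["page"] = "dashboard"

state = {}
for step in (given_the_user_is_on_the_login_page,
             when_the_user_enters_valid_credentials,
             then_the_user_is_redirected_to_the_dashboard):
    step(state)

print(state["page"])  # dashboard
```

In real pytest-bdd, the framework performs this chaining for you: it reads the feature file and calls the matching `@given`/`@when`/`@then` functions in order, sharing context through fixtures rather than a dict.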
### References - **BDD Overview:** [Cucumber BDD](https://cucumber.io/docs/bdd/) - **pytest Documentation:** [pytest](https://docs.pytest.org/en/stable/) - **pytest-bdd Documentation:** [pytest-bdd](https://pytest-bdd.readthedocs.io/en/stable/) - **Playwright Documentation:** [Playwright](https://playwright.dev/python/docs/intro) - **pytest-playwright Documentation:** [pytest-playwright](https://github.com/microsoft/playwright-pytest) ## Prerequisites - Basic knowledge of Python - Familiarity with pytest and web automation concepts - Python installed on your machine ## Step 1: Setting Up the Environment ### Installing Required Packages First, install the necessary packages: ```sh pip install pytest pytest-bdd pytest-playwright ``` ## Step 2: Project Structure Create a structured project directory as follows: ``` tutorial/ ├── features/ │ ├── login.feature │ ├── search_stock.feature │ └── create_screen.feature ├── tests/ │ ├── test_a_login.py │ ├── test_search_stock.py │ └── test_create_screen.py ├── conftest.py └── utils/ └── config.py ``` ## Step 3: Writing Feature Files Feature files define the behavior you want to test using Gherkin syntax. 
### `features/login.feature` ```gherkin Feature: Login to Screener.in Scenario: Log into Screener.in Given the user is on the login page When the user logs in with valid credentials Then the user should be redirected to the dashboard ``` ### `features/search_stock.feature` ```gherkin Feature: Search for a stock and get the P/E ratio Scenario Outline: Search for a stock by name and retrieve the P/E ratio Given the user is logged in When the user searches for "<stock_name>" Then the P/E ratio for "<stock_name>" should be displayed Examples: | stock_name | | NESTLEIND | | PGHH | | LICI | | TCS | | BRITANNIA | ``` ### `features/create_screen.feature` ```gherkin Feature: Create a new screen for quality stocks Scenario: Create a new screen with filtering stocks greater than market capital 50000Cr Given the user is logged in When the user creates a new screen with filtering stocks greater than market capital 50000Cr Then the new screen should be created successfully ``` ## Step 4: Writing Step Definitions Step definitions map the steps in your feature files to Python functions. 
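In the step definitions below, patterns such as `parsers.parse('the user searches for "{stock_name}"')` pull placeholder values out of the matched step text (pytest-bdd delegates the matching to the `parse` library). A rough stdlib approximation of that placeholder extraction, using `re` (the function name is illustrative):

```python
import re
from typing import Optional

# Rough analogue of parsers.parse('the user searches for "{stock_name}"').
STEP_PATTERN = re.compile(r'the user searches for "(?P<stock_name>[^"]+)"')

def extract_stock_name(step_text: str) -> Optional[str]:
    """Return the captured placeholder value, or None if the step doesn't match."""
    m = STEP_PATTERN.fullmatch(step_text)
    return m.group("stock_name") if m else None

print(extract_stock_name('the user searches for "TCS"'))   # TCS
print(extract_stock_name("the user opens the dashboard"))  # None
```

pytest-bdd passes each captured value into the step function as a keyword argument of the same name, which is why the `search_stock` step below receives `stock_name` directly.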
### `tests/test_a_login.py` ```python import pytest from pytest_bdd import scenarios, given, when, then scenarios('../features/login.feature') @given('the user is on the login page') def navigate_to_login_page(page): page.goto("https://www.screener.in/login/") @when('the user logs in with valid credentials') def login_with_valid_credentials(login): pass # The login fixture handles the login @then('the user should be redirected to the dashboard') def verify_dashboard(login): assert login.url == "https://www.screener.in/dash/" ``` ### `tests/test_search_stock.py` ```python import pytest from pytest_bdd import scenarios, given, when, then, parsers from playwright.sync_api import Page import time scenarios('../features/search_stock.feature') @given('the user is logged in') def user_logged_in(login): pass # The login fixture ensures the user is logged in @when(parsers.parse('the user searches for "{stock_name}"')) def search_stock(login: Page, stock_name): login.click('//*[@id="desktop-search"]/div/input') login.fill('//*[@id="desktop-search"]/div/input', stock_name) login.click('//*[@id="desktop-search"]/div/div/button') @then(parsers.parse('the P/E ratio for "{stock_name}" should be displayed')) def verify_pe_ratio(login: Page, stock_name): pe_ratio = login.locator('//*[@id="top-ratios"]/li[4]/span[2]') assert pe_ratio.is_visible() print(f"P/E Ratio for {stock_name}: {pe_ratio.text_content()}") time.sleep(5) ``` ### `tests/test_create_screen.py` ```python import pytest from pytest_bdd import scenarios, given, when, then from playwright.sync_api import Page scenarios("../features/create_screen.feature") @given("the user is logged in") def user_logged_in(login): pass # The login fixture ensures the user is logged in @when("the user creates a new screen with filtering stocks greater than market capital 50000Cr") def create_new_screen(login: Page): login.click("//a[@href='/explore/' and text()='Screens']") login.click("//a[@class='button button-primary' and 
@href='/screen/new/']") login.fill( 'textarea[name="query"]', """Market Capitalization > 50000""", ) login.click("//button[@class='button-primary']") @then("the new screen should be created successfully") def verify_new_screen_creation(login: Page): assert login.locator("text=Nestle India").is_visible() ``` ## Step 5: Configuring `pytest-playwright` Set up Playwright in your `conftest.py` to handle browser instances. ### `conftest.py` ```python import pytest from playwright.sync_api import sync_playwright from utils.config import USERNAME, PASSWORD @pytest.fixture(scope="session") def browser(): with sync_playwright() as p: browser = p.chromium.launch(headless=False) yield browser browser.close() @pytest.fixture(scope="session") def context(browser): context = browser.new_context() yield context context.close() @pytest.fixture(scope="session") def page(context): page = context.new_page() yield page page.close() @pytest.fixture(scope="session") def login(page): page.goto("https://www.screener.in/login/") page.fill('input[name="username"]', USERNAME) page.fill('input[name="password"]', PASSWORD) page.click('button[type="submit"]') assert page.url == "https://www.screener.in/dash/" yield page ``` ### `utils/config.py` Ensure you have your credentials stored securely: ```python USERNAME = "your_username" PASSWORD = "your_password" ``` ## Step 6: Running the Tests To run your tests, simply use the `pytest` command: ```sh pytest ``` ## Conclusion This tutorial provided a detailed introduction to setting up and using `pytest-bdd` and `pytest-playwright` for BDD testing. By following the steps above, you can create robust and readable tests for your web applications. Feel free to extend this setup to include more complex scenarios and additional utilities as needed. This setup provides a solid foundation for using `pytest-bdd` and `pytest-playwright` in your projects. 
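Storing plaintext credentials in `utils/config.py` works for a local demo, but a safer variant reads them from environment variables. A minimal sketch — the `SCREENER_USERNAME`/`SCREENER_PASSWORD` variable names and the `load_credentials` helper are illustrative, not part of the tutorial:

```python
import os

def load_credentials():
    """Read login credentials from the environment instead of hard-coding
    them in utils/config.py. Raises if either variable is missing."""
    username = os.environ.get("SCREENER_USERNAME")
    password = os.environ.get("SCREENER_PASSWORD")
    if not username or not password:
        raise RuntimeError(
            "Set SCREENER_USERNAME and SCREENER_PASSWORD before running the tests"
        )
    return username, password
```

The `login` fixture in `conftest.py` could then call `load_credentials()` instead of importing `USERNAME` and `PASSWORD` from `utils.config`, which keeps secrets out of version control.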
### References - **BDD Overview:** [Cucumber BDD](https://cucumber.io/docs/bdd/) - **pytest Documentation:** [pytest](https://docs.pytest.org/en/stable/) - **pytest-bdd Documentation:** [pytest-bdd](https://pytest-bdd.readthedocs.io/en/stable/) - **Playwright Documentation:** [Playwright](https://playwright.dev/python/docs/intro) - **pytest-playwright Documentation:** [pytest-playwright](https://github.com/microsoft/playwright-pytest) --- By following this guide, you'll be well on your way to implementing effective BDD-style tests for your web applications. Happy testing!
abbazs
1,907,111
An Introduction to Building RESTful APIs with Node.js and Express
Welcome to this guide on building RESTful APIs using Node.js and Express. Whether you're a seasoned...
0
2024-07-01T04:11:28
https://dev.to/navin_shetty/an-introduction-to-building-restful-apis-with-nodejs-and-express-4h48
webdev
Welcome to this guide on building RESTful APIs using Node.js and Express. Whether you're a seasoned developer or just starting out, this tutorial will help you understand the basics of RESTful APIs and how to implement them using one of the most popular JavaScript frameworks. ## What is a RESTful API? A RESTful API follows REST, an architectural style for designing networked applications. It relies on a stateless, client-server, cacheable communications protocol, almost always HTTP. In RESTful APIs, requests made to a resource's URI elicit a response that can be in JSON, XML, or HTML format. ## Why Use Node.js and Express? Node.js is a JavaScript runtime built on Chrome's V8 JavaScript engine. It's lightweight and efficient, perfect for data-intensive real-time applications. Express is a fast, unopinionated, minimalist web framework for Node.js that makes building web applications and APIs easy and fun. ## Setting Up Your Environment Before we start coding, ensure you have Node.js and npm (Node Package Manager) installed. You can download and install them from nodejs.org. Create a new directory for your project and initialize it with npm: `mkdir rest-api cd rest-api npm init -y ` This will create a package.json file in your project directory. ## Installing Express Next, install Express and other necessary packages: `npm install express body-parser ` ## Creating Your First API Create an index.js file in your project directory and add the following code: `const express = require('express'); const bodyParser = require('body-parser'); const app = express(); const port = 3000; app.use(bodyParser.json()); app.get('/', (req, res) => { res.send('Welcome to our RESTful API!'); }); app.listen(port, () => { console.log(`Server is running on http://localhost:${port}`); }); ` Run it with `node index.js` and visit http://localhost:3000 to see the welcome message.
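The guide stops at a single root route; the natural next step is adding resource routes. Below is a hedged sketch of the handler logic for an in-memory collection — the `books` resource and the function names are invented for illustration, not from the original article. Because the handlers are ordinary `(req, res)` functions, they plug straight into Express with `app.get('/books', listBooks)` and `app.post('/books', createBook)`:

```javascript
// In-memory store for an illustrative "books" resource.
const books = [];
let nextId = 1;

// GET /books — respond with every stored book as JSON.
function listBooks(req, res) {
  res.json(books);
}

// POST /books — create a book from the JSON body parsed by body-parser,
// assign it an id, and answer 201 Created with the new record.
function createBook(req, res) {
  const book = { id: nextId++, title: req.body.title };
  books.push(book);
  res.status(201).json(book);
}
```

Keeping the handlers as plain functions also makes them easy to unit-test with stubbed `req`/`res` objects, without starting a server.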
navin_shetty
1,907,110
Unveiling the Power: Exploring Angel Broking's SmartAPI
The Indian stock market is experiencing a surge in online participation. Angel Broking, a renowned...
0
2024-07-01T04:06:25
https://dev.to/epakconsultant/unveiling-the-power-exploring-angel-brokings-smartapi-1dgh
smartapi
The Indian stock market is experiencing a surge in online participation. Angel Broking, a renowned brokerage firm, empowers investors with its innovative SmartAPI. This article delves into the functionalities and advantages of SmartAPI, guiding you towards a potentially more efficient and automated trading experience. What is Angel Broking SmartAPI? SmartAPI transcends a traditional Application Programming Interface (API). It's a comprehensive suite of tools designed to equip developers and investors with functionalities for: - Automated Trading: Develop and deploy algorithmic trading strategies that react to market conditions in real-time. - Market Data Access: Gain access to live market data feeds, including stock quotes, order book depth, and historical data. - Order Placement and Management: Place and manage orders directly within your custom applications, eliminating the need for manual intervention on the Angel Broking platform. - Portfolio Tracking: Monitor your portfolio performance and holdings in real-time, facilitating informed investment decisions. Benefits of Utilizing SmartAPI: - Enhanced Automation: Automate repetitive trading tasks and execute strategies based on defined parameters, saving time and potentially reducing human error. - Backtesting Strategies: Test your trading strategies on historical data before deploying them with real capital, allowing for data-driven decision making. - Flexibility and Customization: Develop custom trading applications tailored to your specific investment goals and risk tolerance. - Real-Time Market Insights: Access and analyze live market data to identify trading opportunities and make informed investment decisions. - Cost-Effective: SmartAPI is completely free to use, eliminating additional financial burdens for investors seeking to leverage its functionalities. Who Can Benefit from SmartAPI? - Algorithmic Traders: Developers and investors who prefer to automate their trading strategies using custom algorithms. 
- Quantitative Analysts: Individuals who utilize mathematical and statistical models to analyze markets and identify trading opportunities. - High-Frequency Traders: Traders who employ strategies involving a large volume of orders executed at high speeds. - Retail Investors: Investors seeking to streamline their trading workflows and potentially enhance their investment performance through automation. [Unlock Your Cybersecurity Potential: The Essential Guide to Acing the CISSP Exam](https://www.amazon.com/dp/B0D42PRZD8) Getting Started with SmartAPI: Angel Broking offers comprehensive resources to get you started with SmartAPI: - Detailed Documentation: Access in-depth guides and tutorials that explain API functionalities, code samples, and best practices. - Active Developer Community: Engage with a community of developers utilizing SmartAPI for peer-to-peer learning and troubleshooting. - Ready-to-Use Code Examples: Explore readily available code snippets to jumpstart your development process and implement common trading functionalities. Security Considerations: While SmartAPI empowers automation, security remains paramount. Here are some crucial security practices to consider: - Secure Coding Practices: Follow secure coding principles to minimize vulnerabilities within your custom applications. - Two-Factor Authentication: Enable two-factor authentication on your Angel Broking account for an additional layer of security. - API Key Management: Treat your API key with the same level of care as your account credentials. Avoid sharing it publicly. Conclusion: Angel Broking's SmartAPI unlocks a world of possibilities for investors and developers seeking to enhance their trading experience. By leveraging automation, real-time data access, and customizability, SmartAPI can potentially empower you to make informed investment decisions and streamline your trading workflows. 
Remember, responsible use, security best practices, and continuous learning are key to unlocking the full potential of SmartAPI within your investment journey.
epakconsultant
1,907,109
Implementing Fail-Safe OTP Verification for User Login
Introduction User authentication is a critical component of securing applications and...
0
2024-07-01T04:04:34
https://dev.to/tentanganak/implementing-fail-safe-otp-verification-for-user-login-2ldb
rabbitmq, go, otp, dlq
## Introduction User authentication is a critical component of securing applications and safeguarding sensitive information. One of the most widely adopted methods for enhancing login security is using One-Time Passwords (OTP). OTPs provide an additional layer of security by requiring users to enter a unique, temporary code in addition to their regular password. However, implementing OTP verification in a robust and fail-safe manner can be challenging, especially when dealing with server downtime, network failures, or high traffic loads. ## Problem A particular challenge with OTP verification systems is managing the process efficiently when response times are prolonged. This can occur due to high user traffic or other operational delays, resulting in users experiencing uncertainty about the success of their OTP submission. To address this issue, a queue-based architecture can be implemented to handle OTP requests systematically. By leveraging queues, we can ensure that OTP requests are processed in a reliable and orderly fashion, even under high load conditions. Another critical aspect is handling errors and ensuring that failed OTP requests are retried appropriately without affecting the user experience. Implementing a robust retry mechanism and monitoring the queue can help maintain system reliability and provide insights into the system's health. ## How to Solve the Problem To address the challenges associated with implementing a fail-safe OTP verification system, we use RabbitMQ with the Dead Letter Queue (DLQ) pattern. This solution will ensure efficient handling of OTP requests, systematic retries of failed requests, and robust monitoring capabilities. Key Components of the Solution: **1. RabbitMQ for Queue Management:** - **OTP Request Queue**: A primary queue to handle incoming OTP verification requests. - **Processing Queue**: A secondary queue where OTP requests are processed. 
- **Dead Letter Queue (DLQ)**: A queue to handle failed OTP requests that need to be retried. **2. Error Handling and Retry Mechanism:** - **Automatic Retry**: Configure RabbitMQ to automatically retry failed OTP requests a specified number of times. - **DLQ for Failed Requests**: If an OTP request fails after the maximum number of retries, it is moved to the DLQ for further inspection and handling. **3. Monitoring:** - **Junk Queue**: Create a new queue called Junk Queue to capture undelivered or error messages Benefits of the Proposed Solution: - **Reliability**: Ensures that OTP requests are processed reliably, even under high load conditions. - **Fault Tolerance**: The DLQ pattern provides a robust mechanism for handling failures and ensuring that no requests are lost. - **Scalability**: RabbitMQ can handle a large volume of requests, making the system scalable as user traffic increases. ### The RabbitMQ Basic #### RabbitMQ RabbitMQ is a messaging broker, essentially a software platform that enables different applications or components within an application to communicate and share data with each other. #### Messaging Messaging is the process of sending and receiving messages between different software components. It's a fundamental concept in distributed computing and facilitates asynchronous communication between systems. #### Exchange An exchange in RabbitMQ is a key intermediary component responsible for receiving messages from producers and then routing them to one or more queues. Exchanges use rules, known as bindings, to determine how messages should be routed. #### Route Routing in RabbitMQ refers to the process of directing messages from exchanges to queues based on predefined criteria, such as message attributes or routing keys. This ensures that messages are delivered to the appropriate destination. #### Queue A queue in RabbitMQ is a buffer that holds messages until they are consumed by a consumer application or component. 
Queues provide a way to decouple producers and consumers, allowing messages to be processed asynchronously and providing resilience to system failures. ### How It Works retry queue using DLQ ![Dead Letter Queue](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gjv34ojww2navay1eic9.png) Source image: https://www.cloudamqp.com/blog/when-and-how-to-use-the-rabbitmq-dead-letter-exchange.html This section details the step-by-step process of how the OTP verification system operates using RabbitMQ with the Dead Letter Queue (DLQ) pattern. The architecture ensures efficient handling of OTP requests, systematic retries of failed requests, and robust monitoring capabilities. ![Sequence Diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/91en81jmlgvmdrdkuvcs.png) #### 1. OTP Request Submission - User Action: A user initiates the login process and requests an OTP. - Producer / Backend: The application acts as a producer, receiving the OTP request and sending it to the RabbitMQ OTP Request Queue. #### 2. OTP Request Queue - Queueing: The OTP Request Queue receives and holds all incoming OTP requests. - Message Forwarding: Requests are forwarded from the OTP Request Queue to the Processing Queue for handling. #### 3. Processing Queue - Consumer: A worker service acts as a consumer, picking up OTP requests from the Processing Queue. - Request Processing: The consumer processes the OTP request, handling OTP generation and verification. - Success Handling: If the OTP verification is successful, the consumer sends a response back to the user through the application, confirming the successful login. #### 4. Error Handling and Retry Mechanism - Error Detection: If an error occurs during OTP processing (e.g., network issues, server errors), the request is not immediately discarded. - Retry Logic: RabbitMQ is configured to retry the failed OTP request a specified number of times. This is done by re-queuing the message into the Processing Queue with a delay. 
- Maximum Retries: If the OTP request fails after the maximum number of retries, it is moved to the Dead Letter Queue (DLQ). #### 5. Dead Letter Queue (DLQ) - Failed Requests: The DLQ holds all OTP requests that have failed after the maximum retry attempts. ## Implementation - Install RabbitMQ. - Install the RabbitMQ Management Plugin for monitoring. - Install the necessary Go packages: ``` go get github.com/rabbitmq/amqp091-go ``` ### RabbitMQ Queue Setup - Direct Queue: The primary queue to handle incoming OTP requests. - DLQ: A queue to handle failed OTP requests that need to be retried. - Junk Queue: A queue to store OTP requests that fail after the maximum retry attempts. ### Golang Code #### Direct Queue (Sending OTP Request) **Producer** Below code demonstrates how to establish a connection to RabbitMQ, declare a queue, and publish a message to it. The code sets up a connection to RabbitMQ server running locally, opens a channel, declares a queue named 'direct_queue' with properties for dead-lettering, and publishes an OTP request payload to the queue. **Function to connect Rabbit mq** - Before we code the implementation, make sure you already install rabbitMq in your local machine - Then we can implement the code to establish a connection to RabbitMQ server running locally (amqp://guest:guest@localhost:5672/). If an error occurs during connection establishment, it is handled using the failOnError function. ``` conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/") failOnError(err, "Failed to connect to RabbitMQ") defer conn.Close() ch, err := conn.Channel() failOnError(err, "Failed to open a channel") defer ch.Close() ``` amqp.Dial accepts a string in the AMQP URI format and returns a new Connection.  conn.Channel opens a unique, concurrent server channel to process the bulk of AMQP messages.  Any error from methods on this receiver will render the receiver invalid and a new Channel should be opened. 
**Function to declare direct queue** To declare a queue, we can call the QueueDeclare method of the channel that we created previously. ``` q, err := ch.QueueDeclare( "direct_queue", true, false, false, false, amqp.Table{ "x-dead-letter-exchange": "", "x-dead-letter-routing-key": "dlq", }, ) failOnError(err, "Failed to declare a queue") ``` This declares a queue named "direct_queue" using the ch.QueueDeclare method with the following parameters: - Queue name: the name of the queue; in the code we use "direct_queue" - Durable: whether RabbitMQ should persist the queue even if the broker restarts; we set it to true to make the queue persistent - Auto-delete: whether the queue is deleted when it's no longer in use; we set it to false because we don't want to lose the queue when it has no consumers - Exclusive: exclusive queues are only accessible by the connection that declares them and are deleted when that connection closes - NoWait: when true, the client does not wait for the server to confirm the declaration; we set it to false - amqp.Table{...}: optional arguments for the queue, such as the dead-lettering configuration. **Helper Function failOnError:** - Defines a helper function failOnError that checks for errors and logs them. - If an error is present, it logs the error message along with a custom message and exits the program using log.Fatalf. 
``` func failOnError(err error, msg string) { if err != nil { log.Fatalf("%s: %s", msg, err) } } ``` Below is all the code we have implemented so far in the main function (note the added `_ = q` line: Go refuses to compile a declared-but-unused variable, and q is only used later, in the publishing step): ``` package main import ( "log" amqp "github.com/rabbitmq/amqp091-go" ) func main() { conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/") failOnError(err, "Failed to connect to RabbitMQ") defer conn.Close() ch, err := conn.Channel() failOnError(err, "Failed to open a channel") defer ch.Close() q, err := ch.QueueDeclare( "direct_queue", true, false, false, false, amqp.Table{ "x-dead-letter-exchange": "", "x-dead-letter-routing-key": "dlq", }, ) failOnError(err, "Failed to declare a queue") _ = q // q is used in the publishing step below } func failOnError(err error, msg string) { if err != nil { log.Fatalf("%s: %s", msg, err) } } ``` **Publishing Message** After declaring the queue, we create a publisher that sends a new message to it. To publish a new message, we call the Publish method of the channel that we created previously. ``` body := "OTP request payload" err = ch.Publish( "", q.Name, false, false, amqp.Publishing{ ContentType: "text/plain", Body: []byte(body), }) failOnError(err, "Failed to publish a message") log.Printf(" [x] Sent %s", body) ``` The Publish method has five parameters: - Exchange name: here the empty string "", which selects the default exchange - Routing key: with the default exchange this is simply the queue name we declared in the previous section - Mandatory flag: if true, the message must be routed to a queue - Immediate flag: if true, the message must be delivered immediately or an error is returned - Message properties: such as the content type and payload. **Consumer** The code below consumes messages from the specified queue, processes OTP requests, and handles message acknowledgment and rejection based on processing outcomes. **Get the messages** To get the messages, we call the Consume method of the channel that we created previously. 
``` msgs, err := ch.Consume( q.Name, "", false, false, false, false, nil, ) failOnError(err, "Failed to register a consumer") ``` Consume takes the following parameters: - Queue name: the queue we declared in the previous section - Consumer name: identified by a string that is unique and scoped to all consumers on this channel - Auto Ack: when autoAck (also known as noAck) is true, the server acknowledges deliveries to this consumer prior to writing the delivery to the network - Exclusive: when exclusive is true, the server ensures that this is the sole consumer of the queue; when false, the server fairly distributes deliveries across multiple consumers - NoLocal: not supported by RabbitMQ; it's advisable to use separate connections for Channel.Publish and Channel.Consume so that TCP pushback on publishing does not affect the ability to consume messages, so this parameter is here mostly for completeness - NoWait: when noWait is true, do not wait for the server to confirm the request and immediately begin deliveries - Optional arguments: can be provided with specific semantics for the queue or server **Processing the messages and sending the OTP request** The code below loops through the captured messages and sends the OTP based on each message body. ``` forever := make(chan bool) ``` Create an unbuffered channel named forever of type bool. This channel will be used to keep the program running indefinitely. ``` go func() { for d := range msgs { log.Printf("Received a message: %s", d.Body) count := utils.CheckLimitRetry(string(d.Body)) // check retry limit if count >= 3 { sendToJunk(d) d.Ack(false) continue } err := processOTPRequest(string(d.Body)) if err != nil { log.Printf("Error processing message: %s", err) d.Reject(false) } else { d.Ack(false) // Acknowledge the message } } }() ``` - go func() starts a new goroutine, which is a lightweight thread managed by the Go runtime. 
The code inside the func() block will be executed concurrently. - for d := range msgs starts a loop that reads messages from the msgs channel and keeps running as long as messages keep arriving. - count := utils.CheckLimitRetry(...) checks how many times the message has been retried. - sendToJunk(d): if the message has been retried more than 3 times, this function publishes it to the junk queue, passing the message as an argument. This mechanism is useful for monitoring unprocessed messages. - err := processOTPRequest(string(d.Body)) calls processOTPRequest with the message body as a string. This function processes the OTP request and returns an error if something goes wrong. #### DLQ (Retry a failed OTP) **Queue Declaration** We create a new queue just as above, but with a different name; in the optional arguments we set x-dead-letter-routing-key to direct_queue. ``` q, err := ch.QueueDeclare( "dlq", true, false, false, false, amqp.Table{ "x-dead-letter-exchange": "", "x-dead-letter-routing-key": "direct_queue", }, ) failOnError(err, "Failed to declare a queue") ``` If any error occurs while consuming a message, the message is sent back to direct_queue because we defined "x-dead-letter-routing-key": "direct_queue". **Consumer** We need to consume the message and reject it so that it returns to the direct queue to be processed again. ``` forever := make(chan bool) go func() { for d := range msgs { log.Printf("Received a message in DLQ: %s", d.Body) d.Reject(false) // Reject the message to send it back to direct queue } }() log.Printf(" [*] Waiting for messages in DLQ. To exit press CTRL+C") <-forever ``` - d.Reject rejects the message; it will be requeued or sent back to the original queue (direct queue) depending on the configuration of the message broker. 
#### Junk Queue (Monitor Unprocessed Message) just like the declaration above with a different name, in the optional argument section we set it to nil ``` q, err := ch.QueueDeclare( "junk_queue", true, false, false, false, nil, ) failOnError(err, "Failed to declare a queue") ``` Complete of source code can be found here https://github.com/macgatron/go-otp-queue ## Conclusion In this documentation, we explored the implementation of a fail-safe OTP verification system using RabbitMQ with the Dead Letter Queue (DLQ) pattern. By leveraging three queues—Direct Queue, DLQ, and Junk Queue—we've built a robust system capable of handling OTP requests efficiently, ensuring reliability and fault tolerance. By implementing the fail-safe OTP verification system with RabbitMQ and leveraging the DLQ pattern, we've developed a robust solution that ensures reliable OTP verification for user logins. With efficient queue management, automatic retries, and comprehensive error handling, the system provides a seamless user experience while maintaining scalability and reliability. Continuous monitoring and future enhancements will further strengthen the system's capabilities, ensuring it remains secure and efficient in handling OTP verification for various applications and use cases. ## Key Takeaways 1. Efficient OTP Handling: The Direct Queue efficiently manages incoming OTP requests, ensuring they are processed promptly. 2. Retry Mechanism: The DLQ captures failed OTP requests and retries them by sending them back to the Direct Queue, providing a mechanism for automatic recovery. 3. Error Monitoring: The Junk Queue stores OTP requests that have exceeded the maximum retry attempts, enabling administrators to monitor and analyze errors for system improvement.
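The article calls utils.CheckLimitRetry without showing its implementation. One dependency-free way to count retries is to read the `x-death` header that RabbitMQ appends to a message each time it is dead-lettered. The sketch below models headers as a plain `map[string]interface{}` so it runs without amqp091-go; the helper names (`deathCount`, `shouldJunk`) and the exact header layout shown are illustrative assumptions, not code from the article:

```go
package main

import "fmt"

const maxRetries = 3

// deathCount reads the retry count from an x-death-style header table.
// RabbitMQ stores x-death as an array of tables; the first entry's
// "count" field records how often the message has been dead-lettered.
func deathCount(headers map[string]interface{}) int64 {
	deaths, ok := headers["x-death"].([]interface{})
	if !ok || len(deaths) == 0 {
		return 0
	}
	first, ok := deaths[0].(map[string]interface{})
	if !ok {
		return 0
	}
	count, _ := first["count"].(int64)
	return count
}

// shouldJunk reports whether a message has exhausted its retries and
// belongs in the junk queue instead of another retry cycle.
func shouldJunk(headers map[string]interface{}) bool {
	return deathCount(headers) >= maxRetries
}

func main() {
	retried := map[string]interface{}{
		"x-death": []interface{}{map[string]interface{}{"count": int64(3)}},
	}
	fmt.Println(shouldJunk(retried))                  // three cycles: junk it
	fmt.Println(shouldJunk(map[string]interface{}{})) // first delivery: retry path
}
```

Relying on the broker-maintained header avoids keeping a separate retry counter in application state, which can drift when consumers restart.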
budi-utomo
1,907,107
Step into Motion: Controlling Stepper Motors with Arduino
The world of electronics offers a captivating blend of creativity and functionality. Stepper motors,...
0
2024-07-01T04:01:03
https://dev.to/epakconsultant/step-into-motion-controlling-stepper-motors-with-arduino-4757
arduino
The world of electronics offers a captivating blend of creativity and functionality. Stepper motors, known for their precise movements, open doors for exciting projects. This article delves into the world of controlling stepper motors with Arduino, a popular microcontroller platform. We'll explore the setup process, delve into basic programming, and equip you to embark on your own stepper motor adventures! Understanding Stepper Motors: Unlike regular DC motors, stepper motors move in discrete steps. Each step corresponds to a specific angle of rotation. This precise control makes them ideal for applications like 3D printers, CNC machines, and robotic arms. Essential Hardware: To get started, you'll need the following: - Arduino Uno (or compatible board): The brain of your project, it controls the stepper motor. - Stepper Motor: Choose a stepper motor suitable for your project's requirements (voltage, torque, number of steps per revolution). - Stepper Motor Driver: Most stepper motors require a driver IC (integrated circuit) to translate Arduino's signals into power for the motor. Popular driver options include A4988, DRV8825, and TMC2208. - Jumper Wires: Connect various components on your breadboard. - Breadboard (optional): Provides a convenient platform for prototyping your circuit. Wiring Up the Circuit: The specific wiring configuration depends on your chosen stepper motor driver. However, here's a general outline: 1. Power Supply: Connect the stepper motor driver's power supply pins to an appropriate voltage source (based on your motor's specifications). 2. Ground: Connect the ground pin of the driver and the Arduino to a common ground. 3. Control Pins: Connect the control pins of the driver (typically labeled STEP and DIR) to digital output pins on your Arduino. 4. Motor Connection: Connect the stepper motor's wires to the corresponding motor driver pins (refer to the driver's datasheet). 
[Mastering Drone PCB Design with FreeRTOS, STM32, ESC, and FC](https://www.amazon.com/dp/B0CV4JX3Q4) Programming Your Arduino: Here's a basic Arduino sketch to control a stepper motor with a single coil (bipolar) driver like the A4988: ``` C++ #include <Stepper.h> const int stepsPerRevolution = 200; // Adjust based on your motor's specs const int stepPin = 8; const int dirPin = 9; Stepper myStepper(stepsPerRevolution, stepPin, dirPin); void setup() { myStepper.setSpeed(100); // Steps per second } void loop() { // Move 100 steps clockwise myStepper.step(100); delay(1000); // Pause for 1 second // Move 100 steps counter-clockwise myStepper.step(-100); delay(1000); // Pause for 1 second } ``` Explanation: - We include the Stepper.h library, which simplifies stepper motor control. - stepsPerRevolution defines the number of steps the motor takes for a complete revolution. - stepPin and dirPin specify the Arduino pins connected to the stepper driver's control pins. - In setup(), we set the desired motor speed (steps per second). - In loop(), we use myStepper.step(number) to move the motor a specified number of steps. Positive values move clockwise, negative values counter-clockwise. - delay() functions introduce pauses between movements. Experimenting Further: This basic code provides a foundation. Here are some ways to expand your exploration: - Control Speed and Direction: Modify the setSpeed() function and step values to control speed and direction more precisely. - Acceleration and Deceleration: Utilize libraries like AccelStepper for smoother motor movements with controlled acceleration and deceleration. - Multiple Stepper Motors: Control multiple stepper motors simultaneously using different Arduino pins and stepper objects. - Sensor Integration: Combine stepper motor control with sensors (e.g., limit switches) for more complex project functionalities. 
**Safety Considerations:**

- Power Supply: Ensure your power supply can provide sufficient current for your chosen stepper motor.
- Heat Dissipation: Stepper motors and drivers can generate heat, especially during continuous operation. Consider heat sinks for proper heat dissipation.
- Current Limiting: Some drivers offer adjustable current limiting features. Set appropriate current limits to avoid motor damage.

**Conclusion:**

By combining Arduino with stepper motors, you unlock a world of creative possibilities. This article has equipped you with the essential setup and programming knowledge to get started. Remember, experimentation and exploration are key to mastering stepper motor control.
epakconsultant
1,907,050
Analyzing the "Hello World" on NEAR — Contract and Frontend 🤠
First steps with smart contracts on NEAR. Hello everyone — previously we went through the...
0
2024-07-01T03:54:31
https://dev.to/sergiotechx/analizando-el-hola-mundo-en-near-contrato-y-frontend-1lpf
**First steps with smart contracts on NEAR**

Hello everyone! In a previous article we covered the basics of NEAR's Rust-based client: [Rust client](https://dev.to/sergiotechx/primeros-pasos-con-cliente-de-near-escrito-en-rust-near-cli-rs-4amn). Today we'll take our first steps with smart contracts on NEAR.

NEAR contracts have traditionally been written in Rust. Fortunately, the NEAR team had the good sense to build on regular Rust libraries rather than inventing a unique, proprietary NEAR-only language, so for people who already know Rust, writing smart contracts won't be very complicated. Unfortunately, even though Rust is the most "loved 😍" language, its syntax is nothing like C, C++, C#, Java, JavaScript, Python, etc., which means the language demands real care and dedication — and it's worth it, because Rust is clearly the present and future of blockchain solutions.

Happily, compiling a NEAR smart contract produces WebAssembly, and that's why the NEAR team also supports more popular languages that compile to that same WebAssembly target — notably JavaScript, which is well known, popular, and widely used. This article focuses on writing contracts in JavaScript.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rpkfl8yqys4ic1zq2cp7.png)

_His Majesty WebAssembly to the rescue!_

**Prerequisites:**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uxpp6t7pag5gwvw6shxb.png)

Windows users, at least for now, **must** use WSL for the NEAR client and the Node.js installation.

1) Install the NEAR client (preferably the Rust one)
2) Install Node.js

For step 2, the most practical option is NVM (Node Version Manager). To install nvm, go to the project's official page: [https://github.com/nvm-sh/nvm](https://github.com/nvm-sh/nvm)

Once NVM is installed, go to [https://nodejs.org/en](https://nodejs.org/en) and check the latest LTS version (20.14.0 as of this article), then install it like this:

`nvm install 20.14.0`

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ernc79j333m6jlxc00p8.png)

Now let's start with the classic Hello World.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yxyelsllydfn9njltwmq.png)

In the terminal, type:

```
npx create-near-app@latest
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xt358u6yr3gzsjaluy8b.png)

---

Select "A Smart Contract"

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hbuanu79c1i68k9zme07.png)

Choose the language you want to work in — in this case, JS/TS

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xfguvloiytbfqx9u4re1.png)

Give the project a name:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ggwcl45fbnes7ovsb67e.png)

Choose "Y" to install the NEAR SDK

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p44dx8e0fnnvcevoy0hk.png)

Done — the environment for creating NEAR contracts is ready 😊

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c72sng716dbx89u71u8g.png)

Enter the hello-world directory with:

```
cd hola-mundo
```

and there run:

`code .`

**Analyzing the Hello World contract**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dnkn2yotk3aqjr7ecqla.png)

_import { NearBindgen, near, call, view } from 'near-sdk-js';_

**NearBindgen**: the de facto decorator used at the start of every NEAR contract.
**near**: provides primitives for transferring funds, reading the block height, logging, and more.
**call**: marks contract methods that change contract state (writes) and therefore have a cost associated with the operation.
**view**: marks read-only contract methods.

`greeting: string = 'Hello';`

A class attribute of type string — the classic hello-world with classes.

```
@view({}) // This method is read-only and can be called for free
get_greeting(): string {
  return this.greeting;
}
```

Here the decorator marks a read-only function; its implementation has nothing unusual beyond standard TypeScript.

```
@call({}) // This method changes the state, for which it costs gas
set_greeting({ greeting }: { greeting: string }): void {
  near.log(`Saving greeting ${greeting}`);
  this.greeting = greeting;
}
```

Here the decorator marks a write operation. One particularity is the parameter syntax, which differs a bit from an ordinary function — where it would simply be `(greeting: string): void` — here the parameter names go on the left and their types are specified on the right (object destructuring). Another interesting detail is that we can store logs on the blockchain, as long as they are truly relevant, since they have an associated cost.
**Compiling the contract**

Run the command:

```
npm run build
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uv6qiqp26wl45e7m0qmb.png)

If we look closely, a folder called `build` has been created containing WebAssembly code.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/496bnryrbyas5s7jr5fw.png)

---

**Creating subaccounts**

Unlike EVM chains, where a contract is an autonomous entity and a wallet (address) just pays a fee for its deployment, in NEAR contracts are attached to an address. For that reason the most common approach is to create subaccounts; a subaccount works the same way as internet domains and subdomains.

**E.g.:** _nearcolombiadev.testnet_ is the main account (the domain). A subaccount for this example would be _holamundo.nearcolombiadev.testnet_.

There is no limit on subaccounts, so we'll have no problem deploying contracts.

To create a subaccount, open the NEAR client and choose the account option

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i3wvsl1yyycq4w3pr2ea.png)

Choose create-account

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/734tsl5yqoc1mns01okd.png)

Choose fund-myself

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7x59flzsdt2gjc1f5kdh.png)

Enter the subaccount name:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dbx1sfaeiwgdwexfaqpb.png)

Confirm that the subaccount does not already exist

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wv5b1l2k33utp0pwdzdx.png)

Enter the amount of NEAR to fund the account with:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iz3jxjdya68hb6054mt9.png)

Choose the option to generate the key automatically.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c6uaaizfzl12l29soefm.png)

Choose the option to save to the legacy keychain

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x8tgzle21j7rr7577efp.png)

Choose testnet

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0njjewdl0ddtn81gxcba.png)

Select sign-with-keychain

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/onrrwqee4pkewo7xfc9v.png)

Finally, choose send

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8k3z4k15djutrza13trm.png)

In the block explorer we can see our subaccount created successfully

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wzsga7i5tl0qgul751jg.png)

The nearcolombiadev.testnet account had 10 NEAR; subtracting the cost of creating and funding the subaccount, we can see the new balance:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ifl6ud2w8nmbsbps9p7e.png)

---

<u>**Deploying the contract**</u>

_If for any reason we run out of funds, we can use the NEAR faucet: https://near-faucet.io/_

In the terminal, open the NEAR client and select the contract option.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uxb0awesuobw0p7hj8n9.png)

Select deploy

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0kk820i5danl5d6f5sca.png)

Choose the account to deploy with — in this case holamundo.nearcolombiadev.testnet

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ievcqdnt5kqdfh3511dk.png)

Enter the path of the compiled contract: _./build/hello_near.wasm_

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4f41morbo1kr80jp7l4z.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a13eq1ynsib0fmbcw41g.png)

Choose without-init-call (the init call is the class "constructor" and can take parameters)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/su5rymkxk20j0l5vfuja.png)

Choose testnet as the target

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jjln3ze5izmpj2baxl8e.png)

Option sign-with-keychain

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4f1whlwxziul4n0e8ck2.png)

Finally, send

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vbubqetocs3mtlfqft1l.png)

If everything succeeds, go to the new block explorer [https://testnet.nearblocks.io/](https://testnet.nearblocks.io/) and look up the subaccount we created earlier: holamundo.nearcolombiadev.testnet

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rs3itiujb99ghlz1sw63.png)

In the block explorer we can see the contract deployment transaction, and under the contract option we can inspect the contract's methods

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9fr251au3smurlp24d8q.png)

Click on contract methods

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/od631zpb9i45ar6m1xfm.png)
<u>**How to interact with the contract**</u>

We can do it through the NEAR client or through [https://testnet.nearblocks.io/](https://testnet.nearblocks.io/)

Let's start the easy, visual way — nearblocks! 😉

First, sign in

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5cn0vpeto52s9zz4e23n.png)

Select the wallet:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y8jg4dizc3r1iw9ulk8y.png)

In our particular case, the account is linked to Meteor Wallet.

The next screen is very important: we only authorize the site to see wallets and balances — not to perform or sign transactions on our behalf!

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fqegwol4w8vb3w31pq68.png)

Next, enter the address where the contract lives: _holamundodev.testnet_

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gfb7e22j4g5a8hokp8gc.png)

Go to the contract section:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2tdpv66s8ic786palip1.png)

Select contract methods

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zb1qw6jatkmbxrd9rkjh.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gt6d4ejtzd1a6hg90won.png)

Verify that we are connected.
**Read call** — _get_greeting_

Expand get_greeting and click query

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/empw81g2q3b297258zgv.png)

Here we see the contents of the contract's greeting variable.

**Write call** — _set_greeting_

Expand _set_greeting_, click add under arguments; the parameter is called greeting and is a string. Enter the desired value and click write

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/798vu7wkxfvkeotlceyf.png)

Click confirm

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ck8cq4yun5vakwixokn6.png)

Next, the fee to be spent is shown; approve it in the wallet

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hvqdzxhuzwksglpauxqn.png)

To confirm the new value we can use the get_greeting method

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/79q2j9pdbj0ny4w4kom8.png)

Now let's do the same with the NEAR client (for console lovers 😎)

**Read call** — _get_greeting_

1) In the terminal, run near
2) Select contract
3) call-function
4) as-read-only
5) Select the account holamundo.nearcolombiadev.testnet
6) Enter the function name: get_greeting
7) Select json-args for the parameters
8) Since there are no parameters, enter {}
9) Select testnet
10) Finally, choose to run it at the current block height

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l43i83emq69ltgi9suze.png)

**Write call** — _set_greeting_

1) In the terminal, run the near program
2) contract option
3) call-function option
4) as-transaction option
5) holamundo.nearcolombiadev.testnet option
6) The function is: set_greeting
7) json-args option
8) Enter the parameter in JSON format: {"greeting":"Hola desde la consola 😍"}
9) Keep the application's suggested default value for the gas to pay
10) There is no deposit, so leave it at 0 NEAR
11) Select the account to sign from — in this case nearcolombiadev.testnet
12) testnet option
13) sign-with-keychain option
14) Finally, send

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pcajoijlspd0zz9308xs.png)

We can repeat the read from the console to see the new value of the contract variable

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cod39auhmxytxarefpyg.png)

**And what about the contract's near.log?**

Logs are recorded per transaction. In this case, reviewing the latest write transaction: [https://testnet.nearblocks.io/en/txns/7HKTJGu5AVYoXL9ccWLx5SN8j6mNqXzrmzstCpm2Dxm7?tab=execution](https://testnet.nearblocks.io/en/txns/7HKTJGu5AVYoXL9ccWLx5SN8j6mNqXzrmzstCpm2Dxm7?tab=execution)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kwbum53rzupy7lg0xenz.png)

We can clearly see where the logs are stored. Logs should be well thought out — something relevant for auditing, tracking, or genuinely useful information.

---

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/624h1csbj3wq94kscxi7.png)

Now let's build the web side. NEAR itself scaffolds the whole default "hello world" app — contract plus wallet connection — but obviously wired to a different contract, not ours, and by default it barely ships with connections to HereWallet and MyNearWallet.
**First steps:**

In the terminal, type

```
npx create-near-app@latest
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3la8dh30lfijpog2dttx.png)

Choose "A Web App". For Next.js, following the more current conventions, select App Router.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/spqoakpx6idluhygggvu.png)

Select that we don't want BOS components

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rkvuurekaevy34xlm01m.png)

For this example, name the project: hello-near-web

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qy76dk4fwznywidmlbi3.png)

Finally, answer Y to install the modules needed to run the app.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v0rfojgc6mtl8thent1x.png)

You could run the program as-is, but let's make a few adaptations 👨‍🔧

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zz0ighx53uicmjw80u7a.png)

Go into the generated hello-near-web directory and open Visual Studio Code (or your favorite editor).

In _config.js_ we can see the subaccounts where the contract is hosted.
Since we're running this whole tutorial on testnet, replace the subaccount with _holamundo.nearcolombiadev.testnet_

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zh5huitvxjioini5ptap.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uldwag6chjifd4zaczlo.png)

Open the wallets folder and the _near.js_ file. As we can see, only two wallets are installed for the user to choose from: HereWallet and MyNearWallet

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/478em4ejvma67dqr3hgw.png)

To add more wallets, go to the package search at [https://www.npmjs.com/](https://www.npmjs.com/) and search for _@near-wallet-selector_. More wallets will appear; just install the ones you want to include in the list the user can choose from. For our case we'll add Meteor Wallet

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/of8o44f4ncp1lkznpe2o.png)

Go to this link: [https://www.npmjs.com/package/@near-wallet-selector/meteor-wallet](https://www.npmjs.com/package/@near-wallet-selector/meteor-wallet) and follow its installation instructions.
`npm install @near-wallet-selector/meteor-wallet`

Then add the import from the instructions:

```
import { setupMeteorWallet } from "@near-wallet-selector/meteor-wallet";
```

Finally, include _setupMeteorWallet_ in the modules section

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v2oyrpwtyhgdcwddwkzy.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ivf4zgundenqpm308if.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5eywiyd0467iaggow1wj.png)

If you're not the analytical type and don't want to go deeper, that's all — you can already run the frontend 😆

**So where are the contract's read and write methods invoked?**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nda023skvbz8mrdyhw42.png)

Inside the _app/hello-near_ folder is the _page.js_ file

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/br13jce3u5hanntw9lp5.png)

**Read function:**

Here we see that when the effect runs, we invoke the read-style method "_viewMethod_" with the contract and its method as parameters

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1bejiithd97p3y867z12.png)

If we dig a little, we see that the wallet object lives inside _context.js_

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/27o67x40hfk8xkzlfy50.png)

The wallet object is not part of the NEAR SDK — it's something generated locally.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9f4fv2b0zramqjkes3n9.png)

The Wallet class gives us an abstraction layer that makes the NEAR SDK calls for read and write operations easier to use.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bg06obeumvfymi94nzpz.png)

In the call we can see there is an RPC (service provider) that receives the account — in this case hola-mundo.nearcolombiadev.testnet — the get_greeting method, and no parameters, {}

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qcendexmfbh38c7rsx2l.png)

**Write function:**

Back in _page.js_ inside the _hello-near_ folder, the write is invoked, followed by the read to reflect the change.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fuqaxdax2lt6rsqu6hwi.png)

In near.js inside the wallet directory, we see that writes don't go through an RPC: they go through whichever wallet we selected at login, signing the operation and sending the transaction.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/chapt2yfm2fc5dhj7fln.png)

Now let's actually run this application:

```
npm run dev
```

Click on _near integration_

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/irvjvxch6o8mz5c3x47s.png)

We can now see the contract's read method

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rxmv5b8qcmbae33mzh3l.png)

If we click _login_ we can see that Meteor Wallet is now included

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tmmlnamiix7nkyxpagiq.png)

And now we're able to save a new greeting.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ua77qtdslp0yyl9s4c24.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yf25fpsvv0wit92go03q.png)

Well, we hope it's now a bit clearer how these contracts and their web side work.
The viewMethod and callMethod groundwork NEAR leaves in near.js can essentially be reused for our own calls to the methods of any contract from the web side. Until next time 🤗.

**Source code:** [github](https://github.com/sergiotechx/near-Hello-World)
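To see what a read-only call looks like at the wire level, here is a minimal sketch of the JSON-RPC body a `viewMethod`-style helper sends to a NEAR RPC node. The account and method names follow this tutorial; treat the exact payload shape as an illustration of the documented `call_function` query and double-check the NEAR RPC docs before relying on it:

```javascript
// Build the JSON-RPC body for a read-only `call_function` query against a
// NEAR RPC node. The method arguments are JSON, base64-encoded into
// `args_base64` ("{}" encodes to "e30=" when there are no arguments).
function buildViewCall(accountId, methodName, args = {}) {
  return {
    jsonrpc: '2.0',
    id: 'dontcare',
    method: 'query',
    params: {
      request_type: 'call_function',
      finality: 'final',
      account_id: accountId,
      method_name: methodName,
      args_base64: Buffer.from(JSON.stringify(args)).toString('base64'),
    },
  };
}

const payload = buildViewCall('holamundo.nearcolombiadev.testnet', 'get_greeting');

// The helper would then POST this to the testnet endpoint, e.g.:
//   fetch('https://rpc.testnet.near.org', {
//     method: 'POST',
//     headers: { 'Content-Type': 'application/json' },
//     body: JSON.stringify(payload),
//   });
```

This is exactly why view calls are free and need no wallet: they are plain HTTP queries, while writes must be signed and submitted as transactions.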
sergiotechx
1,907,105
Simplifying State Management and Data Fetching in React with Redux Toolkit Query
Efficient Data Fetching and Cache Management with RTK Query: A Comprehensive...
0
2024-07-01T03:47:15
https://dev.to/forhad96/simplifying-state-management-and-data-fetching-in-react-with-redux-toolkit-query-1hob
react, redux, frontend, javascript
## Efficient Data Fetching and Cache Management with RTK Query: A Comprehensive Guide

Managing state and data fetching in React can be challenging. Redux Toolkit Query (RTK Query) simplifies this process. In this post, we'll explore the `useGetTodosQuery` hook and its options, including polling and `tagTypes`.

## Key Points for Using Options

- **pollingInterval**: Use when you need to keep data updated at regular intervals.
- **refetchOnMountOrArgChange**: Use when you need to refetch data upon component mount or when query arguments change.
- **refetchOnReconnect**: Use when you want data to refresh after regaining network connectivity.
- **refetchOnFocus**: Use when you need data to refresh when the browser window regains focus.
- **skip**: Use to conditionally skip a query.
- **tagTypes**: Use for more efficient data fetching and cache management.

## What is Redux Toolkit Query?

RTK Query is a data fetching and caching tool built on top of Redux Toolkit. It helps manage API requests, caching, and state synchronization with your Redux store.
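As a rough mental model of those options (a conceptual sketch only — this is not RTK Query's actual implementation), the decision of whether a given event triggers a refetch might look like this:

```javascript
// Conceptual model: given an event and the hook's options, should we refetch?
// `skip` wins over everything; each refetchOn* option gates its own event.
function shouldRefetch(event, options) {
  if (options.skip) return false; // skipped queries never run
  switch (event) {
    case 'mountOrArgChange':
      return Boolean(options.refetchOnMountOrArgChange);
    case 'reconnect':
      return Boolean(options.refetchOnReconnect);
    case 'focus':
      return Boolean(options.refetchOnFocus);
    default:
      return false;
  }
}

const opts = { refetchOnFocus: true, refetchOnReconnect: false, skip: false };
```

Keeping this model in mind makes it easier to pick the smallest set of options that matches how fresh your data actually needs to be.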
## Setting Up RTK Query

### Step 1: Install Redux Toolkit

```bash
npm install @reduxjs/toolkit react-redux
```

### Step 2: Create an API Slice

Create an `apiSlice.js` file:

```javascript
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

export const apiSlice = createApi({
  reducerPath: 'api',
  baseQuery: fetchBaseQuery({ baseUrl: '/api' }),
  tagTypes: ['Todo'],
  endpoints: (builder) => ({
    getTodos: builder.query({
      query: () => 'todos',
      providesTags: ['Todo'],
    }),
    updateTodo: builder.mutation({
      query: (todo) => ({
        url: `todos/${todo.id}`,
        method: 'PUT',
        body: todo,
      }),
      invalidatesTags: ['Todo'],
    }),
  }),
});

export const { useGetTodosQuery, useUpdateTodoMutation } = apiSlice;
```

### Step 3: Configure the Store

```javascript
import { configureStore } from '@reduxjs/toolkit';
import { apiSlice } from './apiSlice';

const store = configureStore({
  reducer: {
    [apiSlice.reducerPath]: apiSlice.reducer,
  },
  middleware: (getDefaultMiddleware) =>
    getDefaultMiddleware().concat(apiSlice.middleware),
});

export default store;
```

### Step 4: Provide the Store

```javascript
import React from 'react';
import ReactDOM from 'react-dom';
import { Provider } from 'react-redux';
import store from './store';
import App from './App';

ReactDOM.render(
  <Provider store={store}>
    <App />
  </Provider>,
  document.getElementById('root')
);
```

## Using `useGetTodosQuery` with Options

### Efficient Data Fetching with `tagTypes`

Using `pollingInterval` can lead to unnecessary API calls, especially when data doesn't change frequently. A more efficient approach is to use `tagTypes` and invalidating tags to refetch data only when necessary. This approach ensures the UI stays in sync without constant polling.
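The slice above invalidates every `'Todo'` query on any mutation. For finer-grained invalidation, RTK Query's docs describe a per-item tag pattern: each todo gets its own `{ type: 'Todo', id }` tag plus a shared `LIST` tag. The tag-building callback is a plain function, so it can be written (and unit-tested) on its own:

```javascript
// Per-item tag pattern: tag each fetched item by id, and add a 'LIST' tag
// so that creating a new todo can invalidate the list query as a whole.
// Falls back to just the 'LIST' tag when the query errored (result is undefined).
const providesTodoTags = (result) =>
  result
    ? [
        ...result.map(({ id }) => ({ type: 'Todo', id })),
        { type: 'Todo', id: 'LIST' },
      ]
    : [{ type: 'Todo', id: 'LIST' }];

// In the endpoint definition this would be wired up as:
//   getTodos: builder.query({ query: () => 'todos', providesTags: providesTodoTags }),
//   updateTodo: builder.mutation({ ...,
//     invalidatesTags: (result, error, todo) => [{ type: 'Todo', id: todo.id }] })

const tags = providesTodoTags([{ id: 1 }, { id: 2 }]);
```

With this in place, updating one todo refetches only the queries that provided that todo's tag, instead of everything tagged `'Todo'`.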
### Example Usage

Here's how to use `useGetTodosQuery` and `useUpdateTodoMutation` in a React component:

```javascript
import React from 'react';
import { useGetTodosQuery, useUpdateTodoMutation } from './apiSlice';

const TodoList = () => {
  const { data: todos, isLoading, isError } = useGetTodosQuery();
  const [updateTodo] = useUpdateTodoMutation();

  const handleUpdate = async (todo) => {
    await updateTodo({ ...todo, completed: !todo.completed });
  };

  if (isLoading) return <div>Loading...</div>;
  if (isError) return <div>Error loading todos</div>;

  return (
    <ul>
      {todos.map((todo) => (
        <li key={todo.id}>
          {todo.title}
          <button onClick={() => handleUpdate(todo)}>
            {todo.completed ? 'Mark Incomplete' : 'Mark Complete'}
          </button>
        </li>
      ))}
    </ul>
  );
};

export default TodoList;
```

### Complete Example with Options

Here's a complete example using `useGetTodosQuery` with various options:

```javascript
import React from 'react';
import { useGetTodosQuery } from './apiSlice';

const TodoList = () => {
  const { data: todos, isLoading, isError } = useGetTodosQuery(undefined, {
    refetchOnMountOrArgChange: true,
    refetchOnReconnect: true,
    refetchOnFocus: true,
    skip: false,
  });

  if (isLoading) return <div>Loading...</div>;
  if (isError) return <div>Error loading todos</div>;

  return (
    <ul>
      {todos.map((todo) => (
        <li key={todo.id}>{todo.title}</li>
      ))}
    </ul>
  );
};

export default TodoList;
```

## Polling Intervals vs. Tag Types

### When to Use Polling Intervals

Polling intervals are useful when data changes frequently and needs to be updated in real-time. For example:

- Live sports scores
- Stock market data
- Real-time chat messages

### When to Use `tagTypes`

`tagTypes` are better suited for scenarios where data changes infrequently or when you want to minimize API calls.
For example:

- User profile information
- Product details in an e-commerce app
- Todo lists where updates are user-driven

Using `tagTypes` with invalidation tags ensures data is refetched only when necessary, reducing unnecessary network traffic and improving performance.

## Conclusion

RTK Query makes data fetching in React applications simple and efficient. While polling is useful for frequently updated data, using `tagTypes` and invalidating tags is a better approach for efficient data fetching and cache management. This method ensures your UI stays up-to-date without unnecessary API calls.

Happy coding!
forhad96
1,907,104
How to add NativeWind in React Native Expo
Are you a frontend developer who's fallen in love with the simplicity and power of Tailwind CSS? If...
0
2024-07-01T03:46:25
https://dev.to/syketb/how-to-add-nativewind-in-react-native-expo-3h55
javascript, reactnative, beginners, react
Are you a frontend developer who's fallen in love with the simplicity and power of Tailwind CSS? If so, you're not alone! I, too, was enamored by Tailwind's utility-first approach and the way it streamlined my workflow. However, when I made the transition to React Native, I found myself missing the convenience of Tailwind's classes terribly. Writing CSS manually felt like a step back in time, and my productivity took a hit.

That's when I found NativeWind, a game-changer for React Native developers who crave the same level of flexibility and ease that Tailwind offers. NativeWind is a utility library that brings the magic of Tailwind CSS to your React Native projects, allowing you to style your components with the same familiar classes you know and love. Say goodbye to the tedious task of writing CSS from scratch and hello to a world of rapid development and consistent styling.

In this article, we'll dive deep into the world of NativeWind and explore how to integrate it into your React Native Expo project. Prepare to be amazed as you witness the seamless fusion of Tailwind's utility-first approach with the power of React Native.

So, I'm assuming you have already created an Expo app. Now, our mission is to make our app capable of writing Tailwind CSS code.

Add these two dependencies to your Expo project:

```bash
npm install nativewind
npm install --save-dev tailwindcss@3.3.2
```

Then, run `npx tailwindcss init` to create a `tailwind.config.js` file:

```javascript
// tailwind.config.js
module.exports = {
  content: ["./App.{js,jsx,ts,tsx}", "./<custom directory>/**/*.{js,jsx,ts,tsx}"],
  theme: {
    extend: {},
  },
  plugins: [],
};
```

Modify your `babel.config.js`:

```javascript
// babel.config.js
module.exports = function (api) {
  api.cache(true);
  return {
    presets: ["babel-preset-expo"],
    plugins: ["nativewind/babel"],
  };
};
```

Believe it or not, that's it 🎉! Now your React Native app is eligible to use Tailwind CSS classes, just like your frontend app. Isn't that cool?
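One detail worth flagging: any file that uses Tailwind classes must be covered by the `content` globs, or its classes get purged. If your components live in, say, a `components/` folder (a hypothetical layout), the config might look like the sketch below — which also shows that you can extend the theme exactly as on the web (the `brand` color is a made-up example):

```javascript
// tailwind.config.js — hypothetical layout with a components/ folder,
// plus a custom color added under theme.extend.
const config = {
  content: [
    "./App.{js,jsx,ts,tsx}",
    "./components/**/*.{js,jsx,ts,tsx}", // cover every file that uses className
  ],
  theme: {
    extend: {
      colors: {
        brand: "#1e90ff", // usable as e.g. bg-brand / text-brand
      },
    },
  },
  plugins: [],
};

module.exports = config;
```

If a class "mysteriously" has no effect on a component, a missing `content` glob for that file is the first thing to check.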
Modify your App.js like this:

```javascript
import { StatusBar } from "expo-status-bar";
import React from "react";
import { Text, View } from "react-native";

export default function App() {
  return (
    <View className="flex-1 items-center justify-center bg-gray-600">
      <Text className="text-yellow-200 text-3xl">Hey! Welcome.</Text>
      <StatusBar style="light" />
    </View>
  );
}
```

Run your app, and you will see the effects of the Tailwind CSS styles in your app as shown below:

![React Native App](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9sk60eh7imoia788imrl.jpg)

That's great! In this article, you learned a little bit about NativeWind, what it is, how it helps us in React Native, and how to integrate it with an Expo app.

## About me

I am Syket Bhattachergee, Software Engineer at CreoWis. Want to discuss your technical writing needs or a role? You can reach out to me on [LinkedIn](https://linkedin.com/in/syketb) or [my website](https://syketb.vercel.app), and follow my work on [GitHub](https://github.com/syket-git).
syketb