| column | type | min | max |
| --- | --- | --- | --- |
| id | int64 | 5 | 1.93M |
| title | stringlengths | 0 | 128 |
| description | stringlengths | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | stringlengths | 14 | 581 |
| tag_list | stringlengths | 0 | 120 |
| body_markdown | stringlengths | 0 | 716k |
| user_username | stringlengths | 2 | 30 |
1,890,495
Daily Hamster Kombat
I made this open source project based on the Hamster Kombat telegram chat group to stay on top of...
0
2024-06-16T18:23:52
https://dev.to/jansellopez/daily-hamster-kombat-3dk8
hamsterkombat, website, react
I made this open source project based on the [Hamster Kombat](https://t.me/hamster_kOmbat_bot/start?startapp=kentId1666436655) [Telegram chat group](https://t.me/Hamster_Kombat_LATAM_Chat) to stay on top of the daily cipher and combo. ## Website [https://jansellopez.github.io/daily-hamster-kombat/](https://jansellopez.github.io/daily-hamster-kombat/) ## Open Data You can add or consult data here: [https://docs.google.com/spreadsheets/d/1Dx10k2QWUg4dF431J5PUxtPGPs8r7QhCbbIcuK71uic/edit?usp=sharing](https://docs.google.com/spreadsheets/d/1Dx10k2QWUg4dF431J5PUxtPGPs8r7QhCbbIcuK71uic/edit?usp=sharing) ## Contact LinkedIn: [Jansel López Bouza](https://www.linkedin.com/in/jansel-lopez-bouza/) GitHub: [JanselLopez](https://github.com/JanselLopez) Telegram: [@JanselA](https://t.me/JanselA) WhatsApp: [+5356207780](https://api.whatsapp.com/send/?phone=%2b5356207780&text&type=phone_number&app_absent=0) Phone: [+5356207780](tel:+5356207780) Email: [21jansel@gmail.com](mailto:21jansel@gmail.com) ## Contribute [https://github.com/JanselLopez/daily-hamster-kombat](https://github.com/JanselLopez/daily-hamster-kombat)
jansellopez
1,890,491
Comparing Top 10 Churn Prediction Software in 2024
This blog was originally posted on the Churnfree Blog. Churn prediction software uses AI churn prediction...
0
2024-06-16T18:18:16
https://churnfree.com/blog/churn-prediction-software/
churnpredication, churnfree, churnrate, churnretention
This blog was originally posted on the [Churnfree Blog](https://churnfree.com/blog/churn-prediction-software/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution) Churn prediction software uses **AI churn prediction** and **predictive modeling** to forecast which customers might leave and provides insights into their preferences, health scores, and overall churn risk. **Table Of Contents** 1. [Churnfree](https://churnfree.com/blog/churn-prediction-software/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution#1-Churnfree) 2. [Churnly](https://churnfree.com/blog/churn-prediction-software/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution#2-Churnly) 3. [ChurnZero](https://churnfree.com/blog/churn-prediction-software/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution#3-ChurnZero) 4. Akkio 5. Gainsight 6. Segment 7. Vitally 8. Retently 9. Optimove 10. Upzelo As we look ahead to 2024, the competition among churn prediction software has grown. These tools are crucial for businesses to stay competitive in understanding and predicting customer churn. They **identify customers likely to leave** and provide strategies to prevent it. These platforms are essential for customer retention management strategies. By using advanced machine learning models for churn prediction, companies can better tailor their services to meet changing customer needs, reducing churn and increasing loyalty. Thanks to technological advancements, we have powerful customer churn prediction tools to help with this. This article lists the top 10 churn prediction software options for 2024. Each one offers unique features to predict churn and improve customer retention prediction capabilities. We’ll look at platforms specializing in customer churn prediction and those providing churn prediction analysis and insights. By understanding what each churn prediction model offers, you’ll be able to choose the best one for your business. Before looking into the churn prediction software list, let’s discuss **the benefits of churn prediction software and why you need it** for your business. **First,** churn prediction software reduces customer attrition. Losing customers can be a significant setback for any business, both financially and reputationally. By reaching out to these customers and addressing their concerns or offering incentives, you can increase your chances of retaining them. This helps maintain revenue streams, fosters customer loyalty, and enhances the overall customer experience. **Secondly,** customer churn prediction software improves the overall customer experience by providing insights like customer demographics, purchasing patterns, and interaction history. This software can accurately predict which customers are at a higher risk of leaving. **Lastly,** churn prediction software helps optimize marketing strategies. Businesses can focus their marketing efforts on retaining high-risk customers by identifying the customers who are most likely to churn. This allows them to allocate their resources efficiently and effectively, resulting in higher conversion rates and ROI. Churn prediction software is most commonly used in subscription-based business models like **streaming services, gym memberships, or monthly subscription boxes,** as well as in the telecom industry and financial industries like **banks and credit card companies.** Now, let’s get into the best and must-have churn prediction software in 2024.
Here’s a pricing comparison table to easily compare which churn prediction software suits you best. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ygyjbqn0qblfmw95ryae.png) **1. Churnfree** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1xajlsoxmgwkyc5t67h0.png) [Churnfree](https://churnfree.com/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution) is a premier churn prediction software designed to empower businesses with the tools to reduce customer churn and enhance lifetime value efficiently. This platform is distinguished by its ability to create customizable cancellation flows, aiming to win back customers through tailored strategies and offers. With Churnfree, you can drive customer retention and reduce churn by up to 46%. Churnfree is heralded as the only customer retention platform you’ll ever need to combat churn effectively. It allows for the construction of optimized retention flows that can be built in a few simple steps, providing personalized and dynamic offers based on various targeting parameters such as tenure and usage. This software is designed to start reducing churn within minutes, boasting a user-friendly interface that requires no developer knowledge for implementation. **Key Features** - Customizable Cancellation Flows: Build cancel options and actions tailored to your website to improve customer lifetime value (LTV). - Personalized Retention Offers: Offer dynamic deals by analyzing customer tenure, usage, and other criteria. - In-depth Analytics and Insights: Utilize real-time data to manage customer churn and gather valuable feedback. - Seamless Payment Integration: Easily integrate with your preferred payment processors to deliver experiences that boost retention. - No Developer Required: A highly intuitive and clean user interface simplifies the process of churn management. **Payment Integrations** A notable advancement is the integration with the Paddle and Stripe payment processors for subscription-based websites, which monitors retention flows, subscriptions, cancellations, invoices, and much more. This integration streamlines the collection of customer payments and manages recurring payments effortlessly, making it an invaluable tool for businesses relying on subscription-based revenue models. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0hjbd08ziwyixjfe7cmj.png) **Pros** - Offers all payment methods (Stripe, Paddle) on the basic plan. - Helps in identifying and retaining at-risk customers, thus reducing churn significantly. - Offers a 14-day free trial. - Cheaper than competing products. **Cons** - It can be a bit challenging to use at first, but with a [demo video](https://www.youtube.com/watch?v=ffVE5BtZ3DI&t=25s&utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution) or a [written tutorial](https://churnfree.com/blog/build-customizable-cancelation-retention-flows/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution), you can quickly grasp it. - Churnfree focuses on customer retention only, so it does not provide onboarding, expansion, or customer health features. **Pricing** Churnfree offers a range of pricing plans suitable for businesses of all sizes, starting from a Basic plan at $49.00 per month. For more advanced features, the Professional plan is $99.00 per month, and the Business plan is $199.00 per month. There’s also an option for an Enterprise Custom plan catering to specific business needs.
The **basic plan** offers all the features and integrations you might require; however, for a bigger team, you can opt for the professional plans. By integrating Churnfree into your business operations, you can proactively reduce customer churn and enhance your overall customer retention strategy. With its user-friendly interface, customizable features, and powerful analytics, Churnfree positions itself as an essential tool for businesses aiming to improve their customer lifecycle and retention rates. **2. Churnly** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vawoeypl7p6su5d6h0u0.png) [Churnly](https://www.churnly.ai/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution) is another great churn prediction software, designed explicitly for B2B SaaS companies. It focuses on providing actionable insights and predictive analytics that help Customer Success teams effectively reduce customer churn. By tracking the entire customer journey, Churnly ensures you have all the tools to identify at-risk customers and prevent them from leaving, enhancing customer retention and business growth. **Pros** - Utilizes advanced machine learning algorithms to offer high predictive accuracy. - Prioritizes churn reduction over customer acquisition, which is crucial for long-term business sustainability. **Cons** - **Limited Free Resources:** Compared to some competitors, Churnly does not offer a free trial or free version, which might deter potential users from trying the software. - **Support Limitations:** Provides support primarily through email, which may not be sufficient for all users. **Pricing** Churnly’s pricing details are not publicly disclosed, and interested businesses must contact their team directly to obtain this information. This approach allows Churnly to tailor its pricing structure to each business’s specific needs and scale. **Payment Integrations** Churnly supports integrations with major billing systems, simplifying the management of subscription payments and customer financial interactions. This integration is crucial for businesses relying on recurring revenue models, as it helps maintain a smooth financial operation while minimizing churn due to payment issues. **3. ChurnZero** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wlgfkd969nb9i00koafb.png) [ChurnZero](https://churnzero.com/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution) is a dynamic Customer Success platform and churn prediction tool tailored for subscription businesses. It aims to enhance customer experiences and reduce churn. It integrates seamlessly with various data systems to provide a comprehensive view of customer behavior, enabling businesses to tailor their approach effectively. The platform’s real-time, customer-focused analytics allow for quick identification of at-risk accounts and proactive management of customer relationships. **Pros** - **Proactive Churn Management:** Empowers teams to identify and address churn risks effectively. - **Enhanced Customer Insights:** Offers detailed analytics to understand and predict customer behavior and needs. - **Scalability:** Suitable for businesses of all sizes, easily integrating with existing tech stacks. - **Security and Compliance:** Adheres to GDPR, PCI DSS, and other regulatory standards to ensure data security and privacy. **Cons** - **Complexity:** Some users find the platform complex and challenging to navigate initially.
- **Limited Real-Time Collaboration:** The platform does not support real-time interactions with colleagues through @ mentions, which can limit immediate collaborative efforts. - No free trial. **Payment Integrations** ChurnZero supports major payment systems, facilitating seamless management of subscription payments and financial transactions. This integration is crucial for maintaining smooth operations and minimizing churn related to payment issues, enhancing the overall customer experience. **Pricing** ChurnZero offers a range of pricing options to accommodate various business sizes and needs. While specific pricing details are tailored to each company, the platform ensures flexibility to provide solutions that align with different budgetary requirements. Interested businesses should contact ChurnZero directly to obtain customized pricing information based on their specific needs and usage scenarios. **4. Akkio** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p9o7gc810mshdslo49ca.png) Akkio takes a significant leap forward in churn prediction software by leveraging artificial intelligence to scrutinize customer data and unearth patterns indicative of churn risk. This tool is an ideal choice for SaaS companies aiming to refine their customer journey and reduce their [customer churn rate](https://churnfree.com/blog/a-look-at-customer-churn-rate/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution). Akkio’s strength lies in its ability to craft churn prediction models that identify both existing customers on the edge of departure and new customers in need of further onboarding support. Delving into customer lifetime value and segmentation empowers customer success teams to direct their efforts more judiciously. **Pros** - **Advanced AI and Machine Learning:** Can handle diverse churn prediction scenarios across various industries. - **No-Code Platform:** Akkio’s intuitive interface and automation simplify the deployment of ML models without requiring deep technical knowledge. - **Versatility:** Beyond churn prediction, Akkio is a multifaceted machine-learning platform that assists with forecasting, lead scoring, and more. **Cons** - **Learning Curve:** While the platform is designed to be user-friendly, new users may need time to fully utilize its extensive capabilities. - **Data Dependency:** The accuracy of predictions heavily relies on the quality and quantity of customer data available. **Payment Integrations** Akkio’s platform enhances its utility by integrating with popular payment processors like Stripe. This integration is particularly beneficial for subscription-based businesses, providing a streamlined approach to managing retention flows, subscriptions, cancellations, and invoices. Such integrations are pivotal in minimizing churn related to payment issues, thereby bolstering overall retention rates. **Pricing** Akkio’s pricing structure is designed to accommodate businesses of varying sizes and budgets. Starting with a Free tier that allows for essential insight consumption and report viewing, the platform then scales up to more feature-rich plans: **Basic:** At $49 per user/month, chat and basic ML functionalities are offered. **Pro:** Priced at $99 per user/month, this plan includes advanced ML and model operations. **Build-On Package:** Beginning at $999/month, providing white-label options and API access for extensive customization.
**Enterprise:** Tailored for large-scale operations, offering dedicated infrastructure and advanced API access, with pricing available upon request. This flexible pricing ensures organizations can choose a plan that best meets their needs without overcommitting resources. With Akkio, businesses are equipped not only to predict and mitigate customer churn but also to refine their overall approach to customer retention, making it a must-have tool in today’s competitive landscape. **5. Gainsight** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/233mi3201w5pko3w7x10.png) Gainsight is renowned for its comprehensive approach to customer success, offering a suite of solutions that enhance every aspect of customer interaction and retention. With its robust platform, you can quickly implement strategies focusing on customer outcomes and operational execution, ensuring a consistent and positive experience for all your clients. Whether you’re a growing business or a mature organization developing a customer success strategy, Gainsight provides the tools and insights to drive significant improvements in customer retention and satisfaction. **Pros** - **Comprehensive Integration Capabilities:** Seamlessly integrates with key technologies across CRM, support, and analytics, making it a central part of your tech stack. - **Scalable Solutions:** Whether you’re a small team or a large enterprise, Gainsight’s platform scales to meet your needs, offering tailored functionalities that grow with your business. - **Proactive Customer Management:** With tools like health scorecards and renewal forecasting, you can anticipate customer needs and address potential issues before they lead to churn. **Cons** - **Complex Setup:** The comprehensive nature of Gainsight’s platform may require significant setup time and a steep learning curve for new users. - **Resource Intensive:** To fully leverage the platform’s capabilities, a substantial investment in training and development may be necessary. **Pricing** Gainsight offers a variety of pricing plans tailored to different business sizes and needs. Specific pricing details are customized based on the features and scale of deployment required by your organization. For the most accurate and up-to-date pricing, it is recommended that you visit Gainsight’s official website or contact their sales team. **Payment Integrations** The platform supports integration with major payment systems, simplifying the management of subscription payments and financial transactions. This integration is crucial for maintaining smooth operations and minimizing churn related to payment issues, enhancing the overall customer experience. **6. Segment** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g0fly569aj7wtg07lpis.png) Segment, a tool developed by Twilio, revolutionizes customer analytics by allowing you to gather data from various platforms such as Facebook, email, and your website. This integration helps create a unified customer profile, which is crucial for effectively understanding and mitigating customer churn. **Pros** - **Enhanced Data Analysis:** Provides a deep understanding of customer behaviors and interactions across various channels. - **Scalability:** As your business grows, Segment scales with you, handling increased data without requiring manual adjustments. - **No Coding Required:** The platform is accessible to users without technical expertise, making advanced data analytics available to a broader audience.
**Cons** - **Complex Initial Setup:** While the interface is user-friendly, the initial setup process can be complex and might require technical assistance. - **Cost-Prohibitive for Small Businesses:** The advanced features and extensive integration capabilities might be beyond the budget of smaller enterprises. **Payment Integrations** Segment supports integration with major payment systems, which simplifies the management of subscription payments and financial transactions. This is crucial for maintaining smooth operations and minimizing churn related to payment issues, thus enhancing the overall customer experience. **Pricing** Segment offers a free plan that includes basic features like data collection from two sources and integration with one data warehouse. The Team plan starts at $120/month and expands on the free plan by including up to 10,000 monthly visitors, unlimited sources, and additional integration capabilities. Segment also offers a custom plan for large-scale operations. **7. Vitally** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lsfvtagpt3zvktqrgijh.png) Vitally is a comprehensive customer success platform that empowers your team with all the necessary tools to enhance productivity and customer engagement at scale. Integrating real-time customer data with dedicated workspaces, Vitally allows you to manage many customer relationships effortlessly while delivering personalized, one-to-one customer experiences. **Pros** - **Enhanced Productivity:** Provides all the tools necessary for your team to increase work efficiency significantly. - **Actionable Insights:** Offers visibility into your book of business and actionable insights that drive customer retention and business growth. - **Scalable Customer Management:** Whether managing one-to-many or one-to-one customer relationships, Vitally scales to meet your needs. - **Comprehensive Integration:** Seamlessly integrates with a wide array of enterprise applications, enhancing data synchronization and management. **Cons** - **Complex Setup:** New users may find the initial setup of the platform somewhat complex. - **Learning Curve:** Fully utilizing Vitally’s extensive features and capabilities may require a period of learning. Vitally stands out as a robust tool designed to streamline customer success operations. It makes it easier to manage, analyze, and improve your interactions and relationships with customers. By leveraging its powerful features, you can ensure that your team is equipped to drive customer success and business growth effectively. **8. Retently** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7gkwmx6xqtpcdbqv6ymj.png) Retently is a sophisticated customer feedback and Net Promoter Score (NPS) platform designed to help your business predict and prevent customer churn by analyzing sentiment data. By leveraging advanced analytics and customer engagement tools, Retently allows you to address customer concerns proactively, thereby improving satisfaction levels and enhancing customer retention strategies. **Pros** - **Enhanced Engagement:** High response rates through reliable technological tools and A/B testing of customer engagement strategies. - **Predictive Insights:** Advanced analytics help predict trends and understand key drivers impacting customer experiences. - **Comprehensive Feedback Management:** From capturing to analyzing feedback, Retently provides tools to shape and drive personalized customer experiences.
**Cons** - **Complex Feature Set:** The platform’s wide array of features may involve a learning curve before new users can utilize it fully. - **Limited Free Trial:** During the free trial period, access to the full feature set is restricted, which may hinder a comprehensive evaluation. **Payment Integrations** Retently supports major payment systems, enhancing the management of subscription payments and financial transactions. This integration is vital for maintaining smooth operations and minimizing churn related to payment issues. **Pricing** Retently offers Basic, Pro, and Enterprise plans ranging from $25 to $599 per month. The Basic plan offers few features, and the Pro plan is required for a better experience. Each plan is designed to scale with your business, ensuring that you can upgrade as your needs grow without overcommitting resources initially. **9. Optimove** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r0vpkrhqgryq7snhtp0o.png) Optimove’s Customer-Led Marketing Platform is designed to meet your specific marketing needs, regardless of where you are on your journey. It empowers you to start with your customers and win their loyalty for life by building a solid foundation of personalized CRM marketing. This platform allows you to grow and scale your personalization efforts with AI optimization, ensuring each customer feels uniquely valued. **Pros** - **Advanced Personalization:** AI-driven tools allow for dynamic personalization of CRM journeys. - **Versatile Marketing Tools:** Supports a wide range of marketing channels and campaigns. - **Robust Integration Capabilities:** Seamlessly integrates with various data sources for a unified customer view. - **Strategic Services:** Optimove offers flexible pricing for strategic services, either per hour or per project, to suit different business needs. **Cons** - **Complexity:** The wide array of features and deep customization options might overwhelm new users. - **Implementation Fee:** An implementation fee is charged, which includes local support to ensure fast and quality onboarding. **Payment Integrations** Optimove includes integration capabilities with major payment systems, which streamline the management of subscription payments and financial transactions. This is crucial for businesses relying on recurring revenue models, helping to maintain smooth operations and minimize churn related to payment issues. **Pricing** Optimove offers three main product packages—Grow, Build, and Scale—tailored to your business’s size and scale. Pricing is based on the number of monthly active customers and does not increase with more data or platform users. The add-ons are priced based on usage, providing flexibility and scalability to meet businesses’ evolving needs. For detailed pricing, businesses are encouraged to contact Optimove directly to get a quote that fits their specific requirements. **10. Upzelo** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g2cl4eosle0c2zjqctsk.png) Upzelo is a web-based platform designed to help subscription businesses dramatically reduce churn and improve subscription retention. It activates at the point of cancellation to offer targeted solutions through its innovative scoring engine and flow builder. This engine assesses real-time data such as subscription length and spending to help you save high-value customers by offering tailored experiences and retention strategies.
**Pros** - **Customizable Retention Strategies:** Offers a variety of options to tailor customer retention efforts. - **User-Friendly Interface:** Simplifies the setup and management of retention programs. - **Extensive Support Options:** Provides 24/7 support through multiple channels, including live representatives, email, and chat. **Cons** - **Complex Features:** Some users may find the features complex and need time to learn the platform. - **Limited Free Trial:** The free trial may not offer access to all features, potentially limiting the evaluation of the entire platform’s capabilities. **Payment Integrations** Upzelo supports integration with major payment systems such as Stripe and Recharge, which helps manage subscription payments efficiently. This integration is crucial for businesses relying on recurring revenue models, ensuring smooth financial operations and minimizing churn due to payment issues. **Pricing** Upzelo offers plans ranging from €49.00/month to €199.00/month. The standard plan is for 40 users and includes basic features for churn reduction and subscription management. The pro plan is for 50 users and offers advanced features, including unlimited surveys, offers, and audience segmentation. These plans are designed to scale with your business, allowing you to choose the best fit for your requirements and budget. **FAQs** **What is the most effective model for predicting customer churn?** Logistic regression is highly recommended for churn prediction, especially when dealing with a straightforward binary outcome and plentiful customer data. It helps businesses predict churn and understand its causes. **How is the churn prediction system designed?** The churn prediction system is designed to optimize several key performance metrics, including accuracy, precision, recall, and F-measure, with a focus on accuracy. **Which algorithms are used in churn prediction?** Churn prediction models typically employ a variety of machine learning algorithms. The most commonly used ones include logistic regression, decision trees, random forests, support vector machines, gradient boosting machines, and neural networks. **What are the steps to create a customer churn prediction model?** To develop a churn prediction model, follow these steps: - Identify the business case to determine the desired outcome of using machine learning. - Gather and clean the necessary data. - Develop, extract, and select relevant features. - Construct the predictive model. - Implement the model and continuously monitor its performance. (A minimal code sketch of the scoring step appears after the conclusion below.) **Conclusion** To choose the best customer churn prediction software, look closely at what each one offers. To learn more about predicting when customers might leave, check out the resources and expert advice at the [Churnfree Blog](https://churnfree.com/blog/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution). The right software can help you keep more customers and grow your business by engaging with them strategically.
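To make the logistic-regression answer in the FAQ concrete, here is a minimal, self-contained sketch of how a trained model scores a customer’s churn risk. The feature names, weights, and bias below are purely hypothetical; in practice they would come from fitting the model on your own customer data.

```js
// Minimal logistic-regression churn scorer.
// The weights and bias are hypothetical; a real model would learn them from data.
const sigmoid = (z) => 1 / (1 + Math.exp(-z));

const weights = { tenureMonths: -0.08, loginsLast30d: -0.15, supportTickets: 0.6 };
const bias = 0.5;

function churnProbability(customer) {
  // Linear combination of features, squashed into a probability in (0, 1)
  const z =
    bias +
    weights.tenureMonths * customer.tenureMonths +
    weights.loginsLast30d * customer.loginsLast30d +
    weights.supportTickets * customer.supportTickets;
  return sigmoid(z);
}

// A short-tenured, rarely active customer with several tickets scores as high risk (~0.91)
console.log(churnProbability({ tenureMonths: 3, loginsLast30d: 2, supportTickets: 4 }));
```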
churnfree
1,890,489
Highlight Your Terminal: a tool designed to highlight information in terminal output
highlight is a script to detect and highlight patterns such as URLs, domains, IPv4 addresses, IPv6...
0
2024-06-16T18:11:01
https://dev.to/phhitachi/highlight-your-terminal-a-tools-design-to-highlight-information-on-output-of-terminal-42me
terminal, shell, bash, highlight
[highlight](https://github.com/ReconXSecurityHQ/highlight) is a script to detect and highlight patterns such as URLs, domains, IPv4 addresses, IPv6 addresses, subnets, ports, categories, HTML tags, and more. ## Introduction In the world of system administration, network management, and development, analyzing log files and text streams is a daily routine. Manually sifting through plain text can be tedious and error-prone. What if you could make important information pop out with color? Enter highlight—a powerful, customizable script that uses awk to highlight specific patterns in your text files. In this guide, we’ll explore practical use cases of the highlight command to supercharge your terminal experience. ## Why Use highlight? `highlight` can highlight: - IPv4 and IPv6 addresses - Subnet masks - URLs - Domains with ports - Common network ports (e.g., 80/tcp) - Important script details - Text inside parentheses - HTML tags and attributes With these capabilities, highlight makes it easier to identify and analyze crucial information in logs and other text streams. ## Usage Examples Here are some practical use cases to demonstrate how highlight can be used with common tools. 1. **Highlighting Patterns in Log Files** Logs are essential for troubleshooting and monitoring systems. highlight can make critical information in logs stand out. ``` highlight < /var/log/apt/history.log ``` **Example:** ![](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*QuQ3DMzAxzbL_LPXrt6SVA.png) In this example, keys and other important patterns are highlighted, making it easier to spot issues or important events in your system logs. 2. **Highlighting Output from nmap** nmap is a powerful network scanning tool. Highlighting its output can help quickly identify open ports, IP addresses, and more. ``` sudo nmap -sV -sC -Pn hackerone.com | highlight ``` **Example:** ![](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*bBhTF9RyFvyPx0a3NxWuMg.png) With highlight, the output from nmap scans will be more readable, allowing you to easily see open ports, service details, and potential vulnerabilities. 3. **Highlighting Output from ifconfig** The ifconfig command displays network interface configuration. Highlighting IP addresses and netmasks can make the output more readable. ``` ifconfig | highlight ``` **Example:** ![](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*UjhLSKNR5ZOAo4fCuPa1qw.png) 4. **Highlighting Output from curl** curl is used for transferring data with URLs. Highlighting URLs and other critical parts of the output can be very helpful. ``` curl -i -s https://www.hackerone.com | highlight ``` **Example:** ![](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*5-F8phe9URGdug2DMCQHGA.png) This example makes HTTP headers, HTML tags, and other important parts of the curl output stand out, which is useful for debugging web requests or inspecting server responses. ## Conclusion highlight is a versatile tool that can greatly improve the readability of your terminal output by highlighting important patterns. Whether you are analyzing logs, network configurations, or HTTP responses, highlight can help you quickly spot the information you need. Give it a try and transform the way you interact with text in your terminal! Check it out here: [GitHub](https://github.com/ReconXSecurityHQ/highlight) You’re welcome!
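The article notes that highlight is built on awk. As a rough illustration of the underlying technique (not the actual script’s source), here is how awk’s `gsub()` can wrap regex matches in ANSI color escape codes; the IPv4 and URL patterns below are simplified for brevity:

```bash
# Sketch of the technique highlight uses: gsub() wraps every regex match
# in ANSI color codes, where & stands for the matched text.
ifconfig | awk '{
  gsub(/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/, "\033[1;31m&\033[0m")   # IPv4 addresses in red
  gsub(/https?:\/\/[^ ]+/, "\033[1;34m&\033[0m")                 # URLs in blue
  print
}'
```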
phhitachi
1,890,488
Tailwind CSS: CSS framework for a kickstart
Discover the future of web design with Tailwind CSS! Efficient, flexible and perfect for beginners...
0
2024-06-16T18:04:01
https://blog.disane.dev/en/tailwind-css-css-framework-for-kickstart/
tailwindcss, css, framework, webdev
![](https://blog.disane.dev/content/images/2024/06/tailwindcss-css-framework-fur-den-kickstart_banner-1.jpeg)Discover the future of web design with Tailwind CSS! Efficient, flexible and perfect for beginners and professionals. 🚀 --- In the early days of web design, CSS (Cascading Style Sheets) was used to control the look and layout of websites. Developers had to manually write styles for each element, which often resulted in cluttered and difficult-to-maintain code. In recent years, however, web design has come a long way, and frameworks like Tailwind CSS have revolutionized the way we design websites. ## Why CSS frameworks? ### Traditional CSS ```css /* Style for a button */ .btn { background-color: blue; color: white; padding: 10px 20px; border-radius: 5px; font-weight: bold; } ``` To use this class, you would have to add it to the corresponding HTML element: ```html <button class="btn">Click here</button> ``` ### The problems with traditional CSS * **Repetitions**: Many similar style elements led to redundant sections of code. * **Maintainability**: Large CSS files quickly became cluttered and difficult to maintain. * **Global styles**: Changes to one class could have unexpected effects on other parts of the website. ## The introduction of CSS frameworks To solve these problems, CSS frameworks such as Bootstrap and Foundation were developed. These frameworks offered predefined classes and components that allowed developers to create responsive designs quickly and efficiently. However, they also had their own limitations, such as limited customization options and overload from unused styles. ## Tailwind CSS: The New Approach Tailwind CSS is a utility-first CSS framework that allows developers to create custom designs without ever leaving their HTML. Instead of using predefined components, Tailwind provides an extensive collection of utility classes that can be applied directly to HTML. [Tailwind CSS - Rapidly build modern websites without ever leaving your HTML.![Preview image](https://tailwindcss.com/_next/static/media/social-card-large.a6e71726.jpg)Tailwind CSS is a utility-first CSS framework for rapidly building modern websites without ever leaving your HTML.](https://tailwindcss.com/) #### Advantages of Tailwind CSS 1. **Fine-grained control**: Developers can control the look of their elements in detail by combining different utility classes. 2. **Reusability**: Utility classes are small and modular, resulting in reusable and maintainable code. 3. **Performance**: Tailwind removes unused CSS classes in production builds, which minimizes file size and improves load times. ### A practical example Suppose you want to create a button. With Tailwind CSS, it looks like this: ```html <button class="bg-blue-500 text-white font-bold py-2 px-4 rounded"> Click here </button> ``` Explained here: * `bg-blue-500`: Sets the background to blue. * `text-white`: Sets the text color to white. * `font-bold`: Makes the text bold. * `py-2 px-4`: Sets the padding at the top/bottom and left/right. * `rounded`: Makes the corners of the button rounded. ### Why Tailwind CSS is ideal for beginners Tailwind CSS offers several advantages for beginners: * **Quick start**: No need to write or understand extensive CSS files. * **Intuitive classes**: The utility classes are easy to understand and intuitively named. * **Instant feedback**: Changes can be made directly in the HTML and checked immediately in the browser.
### Tailwind CSS compared to other frameworks While frameworks such as Bootstrap and Foundation offer predefined components that can be implemented quickly, Tailwind CSS offers greater flexibility and control. It allows developers to create exactly the design they want without being constrained by predefined styles. ### Tailwind CSS setup and configuration #### Installation Installing Tailwind CSS is straightforward and can be done via npm or Yarn: ```bash npm install tailwindcss ``` #### Configuration After installation, Tailwind can be configured by creating a configuration file: ```bash npx tailwindcss init ``` This creates a `tailwind.config.js` file in which customizations can be made: ```js module.exports = { theme: { extend: { colors: { 'custom-blue': '#1e40af', }, }, }, variants: {}, plugins: [], } ``` ### Creating a project with Tailwind CSS A simple HTML project with Tailwind CSS could look like this: ```html <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Tailwind CSS example</title> <link href="styles.css" rel="stylesheet"> </head> <body class="bg-gray-100 flex items-center justify-center h-screen"> <div class="text-center"> <h1 class="text-4xl font-bold text-gray-800 mb-4">Welcome to Tailwind CSS</h1> <button class="bg-blue-500 text-white font-bold py-2 px-4 rounded hover:bg-blue-700"> Click here </button> </div> </body> </html> ``` The `styles.css` file contains the Tailwind directives: ```css @tailwind base; @tailwind components; @tailwind utilities; ``` ### Additional concepts in Tailwind CSS #### Responsive design Tailwind provides built-in support for responsive design. Classes can be adapted for different screen sizes using prefixes such as `sm:`, `md:`, `lg:`, and `xl:`: ```html <div class="bg-white sm:bg-gray-100 md:bg-gray-200 lg:bg-gray-300 xl:bg-gray-400"> Responsive design </div> ``` #### State-specific styles With Tailwind, state-specific styles such as `hover:`, `focus:`, and `active:` can also be applied: ```html <button class="bg-blue-500 text-white font-bold py-2 px-4 rounded hover:bg-blue-700 focus:outline-none focus:ring-2 focus:ring-blue-600 focus:ring-opacity-50"> Interactive button </button> ``` #### Dark mode Tailwind offers an easy way to implement dark mode. The configuration is done in the `tailwind.config.js` file: ```js module.exports = { darkMode: 'media', // or 'class' theme: { extend: {}, }, variants: { extend: {}, }, plugins: [], } ``` Dark mode-specific classes can then be used: ```html <div class="bg-white dark:bg-gray-800"> Dark mode supported </div> ``` ## Conclusion 📃 Tailwind CSS has revolutionized the way we design websites. It offers a powerful and flexible alternative to traditional CSS frameworks and allows developers to create custom designs quickly and efficiently. For beginners and experienced developers alike, Tailwind CSS is an indispensable tool in modern web development. --- If you like my posts, it would be nice if you follow my [Blog](https://blog.disane.dev) for more tech stuff.
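One note on the Performance point above: Tailwind’s removal of unused classes only works if it knows where your markup lives. In Tailwind v3 this is configured via the `content` option; the glob below is an assumption about project layout, not part of the original article:

```js
// tailwind.config.js
module.exports = {
  // Tailwind scans these files and ships only the utility classes it finds in them
  content: ['./src/**/*.{html,js}'],
  theme: {
    extend: {},
  },
  plugins: [],
}
```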
disane
1,890,487
Tailwind CSS: CSS framework for a kickstart
Discover the future of web design with Tailwind CSS! Efficient, flexible and perfect for beginners...
0
2024-06-16T18:03:50
https://blog.disane.dev/tailwindcss-css-framework-fur-den-kickstart/
tailwindcss, css, framework, webentwicklung
![](https://blog.disane.dev/content/images/2024/06/tailwindcss_css-framework-fur-den-kickstart_banner-1.jpeg)Discover the future of web design with Tailwind CSS! Efficient, flexible and perfect for beginners and professionals. 🚀 --- In the early days of web design, CSS (Cascading Style Sheets) was used to control the look and layout of websites. Developers had to write styles for each element by hand, which often led to cluttered and hard-to-maintain code. In recent years, however, web design has evolved considerably, and frameworks like Tailwind CSS have revolutionized the way we design websites. ## Why CSS frameworks? ### Traditional CSS ```css /* Style for a button */ .btn { background-color: blue; color: white; padding: 10px 20px; border-radius: 5px; font-weight: bold; } ``` To use this class, you would have to add it to the corresponding HTML element: ```html <button class="btn">Click here</button> ``` ### The problems with traditional CSS * **Repetitions**: Many similar style elements led to redundant sections of code. * **Maintainability**: Large CSS files quickly became cluttered and hard to maintain. * **Global styles**: Changes to one class could have unexpected effects on other parts of the website. ## The introduction of CSS frameworks To solve these problems, CSS frameworks such as Bootstrap and Foundation were developed. These frameworks offered predefined classes and components that allowed developers to create appealing designs quickly and efficiently. However, they also had their own limitations, such as restricted customization options and bloat from unused styles. ## Tailwind CSS: The New Approach Tailwind CSS is a utility-first CSS framework that allows developers to create custom designs without ever leaving their HTML. Instead of using predefined components, Tailwind provides an extensive collection of utility classes that can be applied directly in HTML. [Tailwind CSS - Rapidly build modern websites without ever leaving your HTML.![Preview image](https://tailwindcss.com/_next/static/media/social-card-large.a6e71726.jpg)Tailwind CSS is a utility-first CSS framework for rapidly building modern websites without ever leaving your HTML.](https://tailwindcss.com/) #### Advantages of Tailwind CSS 1. **Fine-grained control**: Developers can control the look of their elements in detail by combining different utility classes. 2. **Reusability**: Utility classes are small and modular, resulting in reusable and maintainable code. 3. **Performance**: Tailwind removes unused CSS classes in production builds, which minimizes file size and improves load times. ### A practical example Suppose you want to create a button. With Tailwind CSS, it looks like this: ```html <button class="bg-blue-500 text-white font-bold py-2 px-4 rounded"> Click here </button> ``` Explained here: * `bg-blue-500`: Sets the background to blue. * `text-white`: Sets the text color to white. * `font-bold`: Makes the text bold. * `py-2 px-4`: Sets the padding at the top/bottom and left/right. * `rounded`: Makes the corners of the button rounded.
### Why Tailwind CSS is ideal for beginners Tailwind CSS offers several advantages for beginners: * **Quick start**: No need to write or understand extensive CSS files. * **Intuitive classes**: The utility classes are easy to understand and intuitively named. * **Instant feedback**: Changes can be made directly in the HTML and checked immediately in the browser. ### Tailwind CSS compared to other frameworks While frameworks such as Bootstrap and Foundation offer predefined components that can be implemented quickly, Tailwind CSS offers greater flexibility and control. It allows developers to create exactly the design they want without being constrained by predefined styles. ### Tailwind CSS setup and configuration #### Installation Installing Tailwind CSS is straightforward and can be done via npm or Yarn: ```bash npm install tailwindcss ``` #### Configuration After installation, Tailwind can be configured by creating a configuration file: ```bash npx tailwindcss init ``` This creates a `tailwind.config.js` file in which customizations can be made: ```js module.exports = { theme: { extend: { colors: { 'custom-blue': '#1e40af', }, }, }, variants: {}, plugins: [], } ``` ### Creating a project with Tailwind CSS A simple HTML project with Tailwind CSS could look like this: ```html <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Tailwind CSS example</title> <link href="styles.css" rel="stylesheet"> </head> <body class="bg-gray-100 flex items-center justify-center h-screen"> <div class="text-center"> <h1 class="text-4xl font-bold text-gray-800 mb-4">Welcome to Tailwind CSS</h1> <button class="bg-blue-500 text-white font-bold py-2 px-4 rounded hover:bg-blue-700"> Click here </button> </div> </body> </html> ``` The `styles.css` file contains the Tailwind directives: ```css @tailwind base; @tailwind components; @tailwind utilities; ``` ### Additional concepts in Tailwind CSS #### Responsive design Tailwind provides built-in support for responsive design. Classes can be adapted for different screen sizes using prefixes such as `sm:`, `md:`, `lg:`, and `xl:`: ```html <div class="bg-white sm:bg-gray-100 md:bg-gray-200 lg:bg-gray-300 xl:bg-gray-400"> Responsive design </div> ``` #### State-specific styles With Tailwind, state-specific styles such as `hover:`, `focus:`, and `active:` can also be applied: ```html <button class="bg-blue-500 text-white font-bold py-2 px-4 rounded hover:bg-blue-700 focus:outline-none focus:ring-2 focus:ring-blue-600 focus:ring-opacity-50"> Interactive button </button> ``` #### Dark mode Tailwind offers an easy way to implement dark mode. The configuration is done in the `tailwind.config.js` file: ```js module.exports = { darkMode: 'media', // or 'class' theme: { extend: {}, }, variants: { extend: {}, }, plugins: [], } ``` Dark mode-specific classes can then be used: ```html <div class="bg-white dark:bg-gray-800"> Dark mode supported </div> ``` ## Conclusion 📃 Tailwind CSS has revolutionized the way we design websites. It offers a powerful and flexible alternative to traditional CSS frameworks and allows developers to create custom designs quickly and efficiently.
For beginners and experienced developers alike, Tailwind CSS is an indispensable tool in modern web development. --- If you like my posts, it would be nice if you follow my [Blog](https://blog.disane.dev) for more tech stuff.
disane
1,890,486
Building charts and Dev UX
As a developer, one of the frequent decisions I face is selecting the right library for charts on my...
0
2024-06-16T18:03:50
https://dev.to/alessiochiffi/building-charts-and-dev-ux-391j
javascript, webdev, charts, productivity
As a developer, one of the frequent decisions I face is selecting the right library for charts on my projects. It’s a critical choice, as the library can significantly impact both the ease of development and the end-user experience. For a long time, I relied on Chart.js due to its simplicity and quick setup. However, I quickly encountered limitations. One major issue was the difficulty of customizing tooltips. While Chart.js does offer ways to create custom tooltips, including HTML tooltips, it’s not straightforward: implementing custom HTML tooltips requires significant manual effort to manage tooltip elements. Moreover, adapting Chart.js to work seamlessly with responsive designs posed another challenge. Customizing charts with media queries isn’t natively supported, and achieving the desired responsiveness required workarounds that felt more like hacks than solutions. These limitations led me to explore other options, and that’s when I discovered [Apache ECharts](https://echarts.apache.org/en/index.html). This library felt like a breath of fresh air. ECharts offers out-of-the-box support for advanced customizations and responsive designs. The ability to easily integrate HTML content in tooltips without cumbersome workarounds made a huge difference in my workflow. Additionally, its responsive features are intuitive, making it easier to ensure that charts look great on all devices. Code for a custom HTML tooltip: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vj8qmd1136oid9sbjy6t.jpg) It not only enhanced the functionality and aesthetics of my charts but also streamlined my development process. If you’re facing similar issues with your charts, I highly recommend giving ECharts a try.
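Since the tooltip code above is only shown as a screenshot, here is a minimal sketch of what a custom HTML tooltip can look like using ECharts’ `tooltip.formatter` callback; the series name and data values are placeholders, not the author’s actual code:

```js
// Minimal ECharts option with a custom HTML tooltip
const option = {
  tooltip: {
    trigger: 'axis',
    // With trigger 'axis', params is an array with one entry per series
    formatter: (params) => {
      const rows = params
        .map((p) => `${p.marker} ${p.seriesName}: <b>${p.value}</b>`)
        .join('<br/>');
      return `<div style="text-align:left">${params[0].axisValueLabel}<br/>${rows}</div>`;
    },
  },
  xAxis: { type: 'category', data: ['Mon', 'Tue', 'Wed'] },
  yAxis: { type: 'value' },
  series: [{ name: 'Visits', type: 'line', data: [120, 200, 150] }],
};
```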
alessiochiffi
1,890,485
The Secrets to Saving on Your Video Game Purchases
In the world of video games, finding the best deals can be a real feat. With...
0
2024-06-16T18:00:21
https://dev.to/tapsubstantial/los-secretos-para-ahorrar-en-tus-compras-de-videojuegos-o3h
In the world of video games, finding the best deals can be a real feat. With anticipated titles launching continuously and special editions that attract the most fervent fans, keeping your collection up to date without emptying your wallet is an art. But don't worry: here we reveal some secrets that will help you save on your video game purchases. **1. Follow Deal Hunters** There are online communities dedicated exclusively to finding the best video game deals. These deal hunters do the dirty work for you; you only have to follow them on their networks or subscribe to their newsletters. **2. Take Advantage of Seasonal Sales** It is not unusual for platforms such as Steam, the PlayStation Store, and the Xbox Live Marketplace to offer significant discounts during special events, such as the Winter Sale or the Summer Sale. Keep an eye on these periods to save considerably on your purchases. **3. Buy from Digital Key Sites** Digital key marketplaces have become very popular thanks to their competitive prices. These sites usually offer the same keys you would get by buying directly from the distributor, but at a reduced price. The advantage is that you can find real bargains, as in this guide to getting Fallout 76 at a good price. **4. Trade and Sell Games** If you have grown bored of some of your games or have already completed them, consider selling or trading them. There are platforms where you can find users willing to pay you for those titles you no longer need, or even trade them for others you want to try. **5. Subscribe to Video Game Services** Platforms such as Xbox Game Pass or PlayStation Now offer a large library of games at a relatively low monthly cost. This can be an excellent option if you tend to play a wide variety of titles, since it could save you from buying individual games. With these tips in mind, you are ready to get the most out of your video game experience without spending a fortune. Always remember to compare prices and keep an eye out for deals; who knows, your next favorite could be just around the corner without breaking the bank.
tapsubstantial
1,890,484
Configuring Ping URL Tests/Health Checks with Azure Monitor Application Insights
What is Azure Monitor Application Insights? Application Insights serves as your...
0
2024-06-16T18:00:21
https://dev.to/shaloversal123/configuring-ping-url-testshealth-checks-with-azure-monitor-application-insights-2ecn
**What is Azure Monitor Application Insights?** Application Insights serves as your application's health guardian. It offers insights into how your applications are performing and being utilized. Gain valuable data on performance metrics, user engagement, and retention. Application Insights also tracks essential details like request and failure rates, response times, popular pages, and user demographics. With features like an application dependency map and performance insights, Application Insights equips you with the tools to master your application's well-being and performance. **Availability Alerts** Application Insights availability sends web requests to your application at regular intervals from points around the world. You can receive alerts if your application isn't responding or if it responds too slowly. For this demo, we focus on availability. We're going to configure a URL ping test, which sends an HTTP request to our web app URL to check for a response, and then configure an alert based on the results of that test. Let's get started! STEPS 1. Create a web app using App Service in the Azure portal and copy the URL; it will look similar to https://solowebapp.azurewebsites.net. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wnozp4ulwhd7cfrp8g8t.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zfchhomf6upxzvyvpc1p.png) 2. In the Azure portal, search for Azure Monitor. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2l0zv6kxysf9dnw3k31f.png) 3. Click Applications under Insights. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wp2sw2hui92oy19j9gvu.png) 4. Click Create. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l8e1vsc3gbvxwerh6ez0.png) 5. Fill in the required information and click Go to resource. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2yvenzvmfrizderuop3u.png) 6. Locate Availability under Investigate. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oydmgvycqgeam9rzyzfp.png) 7. Click Add Classic test. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ghvnqfip0vtximsn134k.png) 8. Fill in the required classic test information: Test name: as desired; SKU: ping; URL: https://<URL of the web app>; Test frequency: 5 minutes; Test locations: select your desired locations; Success criteria: HTTP response; Timeout: 120 seconds; Status code: 200; Alert status: enabled. Click Create. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3qdue8hqoxhv98za7k81.png) 9. Click Refresh to see your test locations. HOW TO CONFIGURE AN ALERT a. Click the ellipsis (**...**). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/31kzbtt2ezkqdbk74pey.png) b. Click Open rules. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6dds3m0mtfwbed42xc0m.png) c. Locate the alert rule configuration and select Action groups. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aa38dy3fri2xflg4cg77.png) d. Select Create action group. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ejdheuqajuzy1h4z30c2.png) e.
Fill in the required instance details: Action group name and Display name. Then click Next: Notifications. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k54lcme49ku5cdqv7vy5.png) f. Select the notification type (Email/SMS message/Push/Voice) and give it a name of your choice. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mcresjotnl6gc315qw7q.png) g. Click Review + create. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ggtnajjzo8rhpwof1n8j.png) h. Click Create. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8t4xgp0wbut2u2sxhmd2.png) **HOW TO STOP MONITORING A WEB APP** i. Open your web app in Azure App Service. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pqy93fgzmo3l8zeihiek.png) j. Click Stop. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pdk9cdbkoo39xe89q65l.png) k. Go back to your Azure home page, search for Monitor, and navigate to Availability. Check for errors in the displayed info. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rzels1mrbzjfh3p9wel1.png) l. Similarly, when you enter your URL in a browser, it will also display error 403. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p0mqi2bf7mpugdvq1yax.png)
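For intuition, the URL ping test configured in step 8 essentially performs the following check on a schedule from each selected test location (using the example web app URL from step 1):

```bash
# Pass if the app answers with HTTP 200 within the 120-second timeout
status=$(curl -s -o /dev/null -w "%{http_code}" --max-time 120 https://solowebapp.azurewebsites.net)
if [ "$status" -eq 200 ]; then
  echo "healthy"
else
  echo "unhealthy (HTTP $status)"   # e.g. 403 after the app is stopped
fi
```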
shaloversal123
1,890,483
The Future of Web Development: Embracing Emerging Technologies and Trends
The web development landscape is constantly evolving, driven by new technologies and shifting user...
0
2024-06-16T17:58:13
https://dev.to/matin_mollapur/the-future-of-web-development-embracing-emerging-technologies-and-trends-4pno
webdev, javascript, beginners, programming
**The web development landscape is constantly evolving, driven by new technologies and shifting user expectations. As we move further into 2024, several key trends are shaping the future of web development, offering exciting opportunities and challenges for developers. This article explores these trends and provides insights into how developers can leverage them to create more robust, efficient, and user-friendly web applications.** ## Progressive Web Apps (PWAs): The Standard for Modern Web Applications Progressive Web Apps (PWAs) have transcended their initial hype and are now considered a standard in web development. PWAs combine the best of web and mobile apps, providing users with offline functionality, push notifications, and a native app-like experience directly in the browser. The success story of Starbucks' PWA underscores the significant impact of this technology on customer engagement and operational efficiency. By caching essential resources and offering seamless user interactions, PWAs enhance user experiences and drive business growth. ## Serverless Architecture: Enhancing Scalability and Efficiency Serverless architecture continues to gain traction as a streamlined approach to building and deploying applications. This model allows developers to focus on writing code without worrying about server management, as the cloud provider handles scalability and infrastructure. The cost-effectiveness and flexibility of serverless architecture make it an attractive choice for modern web development, enabling applications to handle varying traffic loads effortlessly and reducing operational costs. ## The Rise of JAMSTACK: Simplifying Development with Pre-Rendering JAMSTACK (JavaScript, APIs, and Markup) is revolutionizing web development by offering a more straightforward and performant approach to building websites. By leveraging static site generators like Next.js, developers can pre-render pages at build time, resulting in highly performant and SEO-friendly websites. The flexibility and scalability of JAMSTACK make it a preferred choice for developers aiming to deliver fast, secure, and user-friendly web applications. ## Motion UI: Creating Engaging and Interactive Experiences As web interfaces become more interactive, Motion UI is emerging as a crucial tool for enhancing user engagement through dynamic animations and micro-interactions. Well-designed animations can guide users through the interface, provide feedback, and make interactions more intuitive and enjoyable. Motion UI helps create a more engaging and visually appealing user experience, aligning with the industry's focus on user-centric design. ## Blockchain: Transforming Web Development Security Blockchain technology is making significant inroads into web development, offering enhanced security, transparency, and efficiency. By utilizing cryptographic principles, blockchain ensures data integrity and protects against unauthorized access, making it ideal for applications handling sensitive information. Beyond security, blockchain's potential extends to innovative applications like decentralized marketplaces and secure identity management, paving the way for new digital experiences. ## API-First Development: Building Robust and Reusable Components API-first development is a design philosophy that prioritizes building Application Programming Interfaces (APIs) as the foundation for web applications. 
This approach allows for parallel development of front-end and back-end components, leading to faster development cycles and reduced integration issues. APIs act as well-defined contracts between different parts of the application, ensuring consistency and reliability. This methodology is particularly beneficial for developing Progressive Web Apps (PWAs) and other modern web applications that require seamless data and functionality integration. ## Conclusion The web development landscape in 2024 is characterized by a strong emphasis on user experience, security, and performance. By embracing technologies like PWAs, serverless architecture, JAMSTACK, Motion UI, and blockchain, developers can create more robust, efficient, and user-friendly applications. Staying abreast of these trends and integrating them into your development workflow will not only enhance your technical skills but also ensure that your applications remain at the forefront of innovation.
matin_mollapur
1,890,482
Day 4 of 30... Current day
-So the Transactions project that I was to do is halfway done still some functionalities to be...
0
2024-06-16T17:53:36
https://dev.to/francis_ngugi/day-4-of-30-current-day-3p5e
- So the Transactions project that I was to do is halfway done; some functionalities are still to be added, but all will be good. - I am also finalizing my reading on Nmap and getting ready to start refreshing my memory on how to use Wireshark.
francis_ngugi
1,890,481
DAY 3 OF 30... Also Forgot to post this blog
This day was the day I planned a Transaction Project where The transactions were to be fetched from a...
0
2024-06-16T17:50:03
https://dev.to/francis_ngugi/day-3-of-30-also-forgot-to-post-this-blog-3o47
This was the day I planned a Transactions project where the transactions were to be fetched from a mock server, and I set up the project.
francis_ngugi
1,889,659
Generate Dynamic Open Graph Images using Nextjs
Dynamic OG Dynamic OG helps developers easily create og images without needing to develop...
0
2024-06-16T17:47:27
https://dev.to/shrihari/generate-dynamic-open-graph-images-using-nextjs-4k9g
nextjs, webdev, tutorial
## Dynamic OG

[Dynamic OG](https://www.dynamicog.com/) helps developers easily create OG images without needing to develop their own proprietary code. It is completely free to use, and a self-hosted paid version is also available. This tutorial serves as a base for [Dynamic OG](https://www.dynamicog.com/).

Here are some of the templates available in [Dynamic OG](https://www.dynamicog.com/). Get started with the `Simple theme` below. All of these are dynamically generated based on the query parameters in the URL. Use [Dynamic OG](https://www.dynamicog.com/) for your projects/companies for free.

![Simple Theme](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8v1db7bwssjc1kw62ipi.png)
![Docs Theme](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xhfrmvb6ejg51lir8xvt.png)
![Blogs Theme](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lphnazt5bvkeufj2m5o4.png)
![Profile Theme](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nz6t0wqnduhsh2mkp14m.png)

## Getting started with the simple theme

We're going to use `nextjs` and `ImageResponse` from `next/og` to create our simple images.

To create a new `nextjs` app

```
npm i create-next-app
```

```
npx create-next-app@latest
```

In this simple-og project we're going to use `App Router`. You can also find the GitHub repo at the bottom of the blog.

```
✔ What is your project named? … simple-og
✔ Would you like to use TypeScript? … No / Yes
✔ Would you like to use ESLint? … No / Yes
✔ Would you like to use Tailwind CSS? … No / Yes
✔ Would you like to use `src/` directory? … No / Yes
✔ Would you like to use App Router? (recommended) … No / Yes
✔ Would you like to customize the default import alias (@/*)? … No / Yes
Creating a new Next.js app in /Users/shrihari/testRepos/simple-og.
```

```
npm install @vercel/og
```

Inside your app folder, create a `simple` folder containing two files (so the route matches the demo URL used later).

```
cd simple-og/src/app
mkdir -p simple/img
touch simple/Simple.tsx
touch simple/img/route.tsx
```

In the `Simple.tsx` file, you have to be picky with the CSS, as [ImageResponse](https://nextjs.org/docs/app/api-reference/functions/image-response) supports only certain types of styles.
```ts
type TSimpleTemplate = {
  title: string,
  website: string
}

export function SimpleTemplate({ t }: { t: TSimpleTemplate }) {
  return (
    <div style={{
      background: '#f8fafc',
      color: '#334155',
      width: '100%',
      height: '100%',
      display: 'flex',
      alignItems: 'center',
      justifyContent: 'center',
      padding: "24px",
    }}>
      <div style={{
        margin: '6px',
        padding: "24px",
        width: "100%",
        borderRadius: "24px",
        height: "100%",
        fontSize: 72,
        display: "flex",
        flexDirection: "column",
        border: `#334155 2px solid`,
        color: '#334155'
      }}>
        {t.title?.slice(0, 80)}
        <hr style={{
          border: `#334155 1px solid`,
          width: "100%"
        }}></hr>
        <p style={{
          fontSize: 52,
          fontWeight: "700",
          display: 'flex',
          justifyContent: 'center',
          color: '#334155'
        }}> {t.website}</p>
      </div>
    </div>
  )
}
```

In the `img/route.tsx` file

```ts
import { SimpleTemplate } from '../Simple';
import { ImageResponse } from 'next/og'
import { NextRequest } from 'next/server'

// Route segment config
export const runtime = 'edge'

// Image generation
export async function GET(request: NextRequest) {
  const params = request.nextUrl.searchParams
  const title: string = params.get("title") || "No title";
  const website: string = params.get("website") || "No website"
  const t = { title, website }

  return new ImageResponse(
    (
      <SimpleTemplate t={t} />
    ),
    {
      width: 1200,
      height: 630,
      headers: {
        'Cache-Control': 'public, max-age=3600, immutable',
      },
    },
  )
}
```

In your browser, open http://localhost:3000/simple/img?title=Every%20moment%20is%20a%20fresh%20beginning.&website=blogs.gratitude.com

Just change the query param values to generate a dynamic image based on the queries. Deploy on your preferred servers. You can use Vercel for free if you are only serving a couple thousand requests per month!

![Demo simple image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cqfhv4yje70ugxn1telh.png)

Source code on GitHub: [simple-og-image](https://github.com/ShrihariMohan/simple-og-image)

It is completely free to use all our default templates, and a self-hosted paid version is also available. This tutorial serves as a base for [Dynamic OG](https://www.dynamicog.com/).

---

## Self Hosted Dynamic OG Includes

- Get lifetime access for just $10 per template. Customize them to match your style. Own your templates forever.
- Leave server setup to us. Deploy on your preferred platform: Cloudflare Workers, Netlify Functions, Vercel, or even your own machines.
- You own the source code of your templates. This means your designs are yours, not ours. We simply provide the tools.

Learn more about [Dynamic OG](https://www.dynamicog.com/)
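Once deployed, you can point a page's Open Graph metadata at the route. A minimal sketch using the Next.js App Router metadata API (the host `your-deployment.example.com` is a placeholder, and the title/website values are just the demo values from the URL above):

```ts
// app/page.tsx (sketch): attach the dynamic OG image to a page
import type { Metadata } from "next";

const title = "Every moment is a fresh beginning.";
const ogUrl = new URL("https://your-deployment.example.com/simple/img"); // placeholder host
ogUrl.searchParams.set("title", title);
ogUrl.searchParams.set("website", "blogs.gratitude.com");

export const metadata: Metadata = {
  title,
  openGraph: {
    images: [{ url: ogUrl.toString(), width: 1200, height: 630 }],
  },
};

export default function Page() {
  return <h1>{title}</h1>;
}
```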
shrihari
1,890,477
Day 2 of 30... Forgot to post the blog
On the 2nd day of the challenge, I learned how to use the useEffect hook and fetch data from a mock...
0
2024-06-16T17:46:45
https://dev.to/francis_ngugi/day-2-of-30-forgot-to-post-the-blog-54pf
On the 2nd day of the challenge, I learned how to use the useEffect hook to fetch data from a mock server and display it in the DOM, which was quite tricky to get the hang of, but I am still managing to learn more React.js. <u>**What I did on that day:**</u> i) Made a quiz app where useEffect was used to add timer functionality for each question: >The code: https://github.com/FrancisNgigi05/react-hooks-use-effect-lab ii) Used useEffect to fetch data from a mock server API: >The code: https://github.com/FrancisNgigi05/react-hooks-simple-data-fetching-lab
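For anyone following along, here is a minimal sketch of the fetch-in-useEffect pattern described above (the `/questions` endpoint and port are made-up examples, not the labs' actual setup):

```jsx
import { useState, useEffect } from "react";

function Questions() {
  const [questions, setQuestions] = useState([]);

  useEffect(() => {
    // Runs once after the first render thanks to the empty dependency array.
    fetch("http://localhost:4000/questions")
      .then((res) => res.json())
      .then((data) => setQuestions(data));
  }, []);

  return (
    <ul>
      {questions.map((q) => (
        <li key={q.id}>{q.prompt}</li>
      ))}
    </ul>
  );
}

export default Questions;
```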
francis_ngugi
1,890,480
What is Value Types and Reference Types in JavaScript
Two fundamental categories that every developer should understand are value types and reference...
0
2024-06-16T17:46:43
https://dev.to/yashrajxdev/what-is-value-types-and-reference-types-in-javascript-23kn
javascript, webdev, programming
Two fundamental categories that every developer should understand are value types and reference types. With a value type, assigning one variable to another copies the value, so changing the copy does not affect the original; with a reference type, assignment copies a reference, so both variables point to the same underlying object and changes made through one are visible through the other. In JavaScript, objects are reference types, and all other data types (the primitives) are value types. Knowing the difference between these two kinds of data types will help you avoid common pitfalls and write more efficient code.

**Value type example**

```javascript
let a = 10;
let b = a; // b gets a copy of the value 10
b = 20; // changing b does not affect a
console.log(a); // 10
```

**Reference type example**

```javascript
let obj1 = { name: 'John' };
let obj2 = obj1; // obj2 references the same object
obj2.name = 'Doe'; // the shared object is modified
console.log(obj1.name); // 'Doe'
```

[Learn more...](https://medium.com/@dudhatrayashraj/understanding-value-types-and-reference-types-in-javascript-3a83e1105bd8)
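A natural follow-up (my addition, not from the original post): when you want an independent copy of an object rather than a shared reference, copy it explicitly. A small sketch:

```javascript
const original = { name: 'John', tags: ['admin'] };

// Shallow copy: top-level fields are copied, nested objects are still shared.
const shallow = { ...original };
shallow.name = 'Doe';
console.log(original.name); // 'John' - the top level is independent

shallow.tags.push('editor');
console.log(original.tags); // ['admin', 'editor'] - the nested array is shared!

// Deep copy: structuredClone copies nested structures too (modern runtimes).
const deep = structuredClone(original);
deep.tags.push('viewer');
console.log(original.tags); // ['admin', 'editor'] - unchanged by the deep copy
```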
yashrajxdev
1,890,479
What I Discovered About Making Great Widgets: Insights From 100+ Real Users
We want to help people read more from their to-read pile with minimal effort using our product, ...
0
2024-06-16T17:45:35
https://dev.to/lincemathew/what-i-discovered-about-making-great-widgets-insights-from-100-real-users-1ml3
mobile, development, webdev
We want to help people read more from their to-read pile with minimal effort using our product, [FeedZap](https://hexmos.com/feedzap). To see if a mobile home screen widget could assist in this, I conducted a small research study on effective home screen widgets for Android and iPhones. The main goal of my research is to answer these questions:
1. What purposes do home screen widgets serve?
2. Are Android or iOS users using widgets effectively?
3. What factors or principles should be considered when building widgets?
4. Who are the top widget makers in the market, and how did they become successful?
## Discovering Popular Widgets: Collecting Insights from 100+ Real Users
For this study, I selected [Reddit](https://www.reddit.com/) and [Hacker News](https://news.ycombinator.com/), platforms where many tech enthusiasts gather. I posted my inquiry in major Android and iPhone subreddits and waited for responses. The response was amazing. Across all posts, we received around **105k views and over 180 responses**. I found many widget users and discovered a wide variety of widgets from the replies.
![widget](https://hackmd.io/_uploads/HkWz3l2HA.png)
Since the list was extensive, I selected a few popular widgets and specific situations where widgets are used. I also considered some surveys conducted by other platforms regarding the popularity of widgets. A [poll conducted by Nextpit](https://www.nextpit.com/poll-results-android-widgets-are-as-popular-as-ever) shows that 65% of their readers are active widget users.
## Simplifying Your Daily Routine with Widgets
Android introduced the widget feature early on, with [Android 1.5 in 2009](https://en.wikipedia.org/wiki/Android_version_history) being the first version to include it. In later Android versions, widgets evolved to become resizable and included more features. [iOS added widgets](https://www.cnet.com/tech/mobile/ios-14-finally-brings-widgets-your-iphone-home-screen-wwdc/) in September 2020 with iOS 14.
People mostly use widgets related to their day-to-day activities, such as
1. checking meetings
2. looking at the weather
3. marking to-dos
4. keeping notes
There are many good widgets available for these purposes in the store. Widgets enhance apps by creating a larger space to display relevant information compared to the typical app icon. Besides these categories, custom widgets are also popular these days. Custom widgets provide an interface and tools to help users create or modify home screen widgets based on their interests. We will discuss more about custom widgets in the upcoming section.
I examined the successful widgets suggested by most users in the market, analyzing their features and patterns to understand what makes them effective.
### Lessons from Google: Building User-Friendly Widgets
The Google Calendar widget is one of the most recommended calendar widgets by users. You likely have a calendar widget, especially the Google Calendar widget, on your home screen. What contributes to the simplicity of a widget? **Optimized design patterns** are crucial in widget design, and Google has handled this challenge successfully.
![calender (1)](https://hackmd.io/_uploads/BJFLjD2rA.png)
The Google Calendar widget shows only the relevant data with minimal clicks; as shown in this image, most of the space in the widget is allocated to displaying the most relevant information for the user.
Additionally, *support for different widgets, rather than incorporating everything into a single view, enhances usability*. The Google Calendar widget offers two types of views, a daily view and a monthly view, catering to different user needs and preferences.

Continue reading the full article here: https://journal.hexmos.com/study-about-making-great-mobile-home-screen-widgets/
lincemathew
1,890,476
Blockchain
Blockchain is a decentralized and distributed digital ledger technology that securely records...
0
2024-06-16T17:44:31
https://dev.to/arun_gupta/blockchain-19p0
devchallenge, cschallenge, computerscience, beginners
Blockchain is a decentralized and distributed digital ledger technology that securely records transactions across multiple computers. The main innovation of blockchain is its ability to allow digital information to be recorded and shared without the need for a central authority, ensuring transparency and security. Each transaction on a blockchain is grouped into a "block." These blocks are linked or "chained" together in chronological order through cryptographic hashes, which are unique identifiers generated by a hash function. Once a block is added to the chain, it is extremely difficult to alter the information within it, providing a high level of data integrity and security. Blockchain operates on a peer-to-peer network where each participant (or "node") maintains a copy of the entire blockchain. Transactions are validated by these nodes through a consensus mechanism, such as Proof of Work (PoW) or Proof of Stake (PoS). In PoW, miners compete to solve complex mathematical puzzles to validate transactions and add new blocks, earning cryptocurrency rewards for their efforts. PoS, on the other hand, allows validators to create new blocks based on the number of tokens they hold and are willing to "stake" as collateral. The most well-known application of blockchain technology is cryptocurrencies, such as Bitcoin and Ethereum. However, blockchain's potential extends beyond digital currencies. It can be used for a variety of applications including supply chain management, voting systems, healthcare records, and smart contracts. Smart contracts are self-executing contracts with the terms directly written into code, which automatically enforce and execute agreements when certain conditions are met. Blockchain's decentralized nature, coupled with its transparency and security features, makes it a transformative technology with the potential to revolutionize various industries by reducing fraud, improving transparency, and enabling secure, efficient transactions.
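To make the "chained via cryptographic hashes" idea concrete, here is a minimal sketch in Node.js (a toy illustration of the linking mechanism only, not a real blockchain implementation):

```javascript
const crypto = require("node:crypto");

function hashBlock(block) {
  // A block's identifier is the SHA-256 hash of its contents,
  // which include the previous block's hash.
  return crypto
    .createHash("sha256")
    .update(JSON.stringify(block))
    .digest("hex");
}

// The genesis block has no predecessor.
const genesis = { index: 0, data: "genesis", prevHash: "0".repeat(64) };
const block1 = { index: 1, data: "Alice pays Bob 5", prevHash: hashBlock(genesis) };
const block2 = { index: 2, data: "Bob pays Carol 2", prevHash: hashBlock(block1) };

// Tampering with an earlier block changes its hash, breaking every later link.
genesis.data = "tampered";
console.log(block1.prevHash === hashBlock(genesis)); // false - the chain is broken
```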
arun_gupta
1,890,475
330. Patching Array
330. Patching Array Hard Given a sorted integer array nums and an integer n, add/patch elements to...
27,523
2024-06-16T17:34:27
https://dev.to/mdarifulhaque/330-patching-array-4oo9
php, leetcode, algorithms, programming
330\. Patching Array

Hard

Given a sorted integer array `nums` and an integer `n`, add/patch elements to the array such that any number in the range `[1, n]` inclusive can be formed by the sum of some elements in the array.

Return _the minimum number of patches required_.

**Example 1:**

- **Input:** `nums = [1,3]`, `n = 6`
- **Output:** 1
- **Explanation:**
  - Combinations of nums are `[1]`, `[3]`, `[1,3]`, which form possible sums of: `1`, `3`, `4`.
  - Now if we add/patch `2` to `nums`, the combinations are: `[1]`, `[2]`, `[3]`, `[1,3]`, `[2,3]`, `[1,2,3]`.
  - Possible sums are `1`, `2`, `3`, `4`, `5`, `6`, which now covers the range `[1, 6]`.
  - So we only need 1 patch.

**Example 2:**

- **Input:** `nums = [1,5,10]`, `n = 20`
- **Output:** 2
- **Explanation:** The two patches can be `[2, 4]`.

**Example 3:**

- **Input:** `nums = [1,2,2]`, `n = 5`
- **Output:** 0

**Constraints:**

- <code>1 <= nums.length <= 1000</code>
- <code>1 <= nums[i] <= 10<sup>4</sup></code>
- `nums` is sorted in **ascending order**.
- <code>1 <= n <= 2<sup>31</sup> - 1</code>

**Approach:** Greedily track `miss`, the smallest sum in `[1, n]` that cannot yet be formed. If the next element of `nums` is `<= miss`, it extends the formable range to `miss + nums[i]`; otherwise, patch `miss` itself, which doubles the formable range, and count one patch.

**Solution:**

```php
class Solution {
    /**
     * @param Integer[] $nums
     * @param Integer $n
     * @return Integer
     */
    function minPatches($nums, $n) {
        $ans = 0;
        $i = 0;
        $miss = 1;
        while ($miss <= $n) {
            if ($i < count($nums) && $nums[$i] <= $miss) {
                $miss += $nums[$i++];
            } else {
                $miss += $miss;
                ++$ans;
            }
        }
        return $ans;
    }
}
```

**Contact Links**

- **[LinkedIn](https://www.linkedin.com/in/arifulhaque/)**
- **[GitHub](https://github.com/mah-shamim)**
mdarifulhaque
1,890,474
Angular error
Hi Guys,I have got this error(see image) i try to unstall and install Angular Language Service in vs...
0
2024-06-16T17:30:51
https://dev.to/yaici_anis_94b3b62a50aa70/angular-error-42hj
angular
Hi guys, I have got this error (see image). I tried to uninstall and reinstall the Angular Language Service in VS Code, but the error is still there. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kohdrti5t59er227akvz.png)
yaici_anis_94b3b62a50aa70
1,890,473
Best cross-platform CMS for Flutter web and mobile apps?
One of the best cross-platform CMS options you can use with Flutter to build web, iOS, and Android...
0
2024-06-16T17:29:13
https://dev.to/shaerif/best-cross-platform-cms-for-flutter-web-and-mobile-apps-15j5
webdev, flutter, api
One of the best cross-platform CMS options you can use with Flutter to build web, iOS, and Android apps is **Strapi**. Strapi is a headless CMS that allows you to create and manage content, which can then be consumed via APIs in your Flutter applications. ### Key Features: - **Open Source**: Strapi is open-source and customizable. - **Headless**: It delivers content via RESTful or GraphQL APIs, making it suitable for any front-end framework, including Flutter. - **Extensible**: Easily add plugins to extend its functionality. - **User-Friendly Admin Interface**: A clean and intuitive admin panel for managing content. - **Self-Hosted**: Complete control over your data and hosting environment. ### How to Integrate with Flutter: 1. **Set Up Strapi**: Install Strapi on your server and configure your content types and APIs. 2. **Consume APIs in Flutter**: Use Flutter's HTTP package to fetch data from Strapi's API endpoints. 3. **Build UI in Flutter**: Design and build your app's UI to display the content fetched from Strapi. ### Why Strapi? - **Flexibility**: Easily adapt to various project requirements. - **Community and Support**: Active community and plenty of resources. - **Performance**: Efficient in handling high traffic. Using Strapi with Flutter allows you to manage your content centrally while delivering a seamless experience across web, iOS, and Android platforms. For building web apps and iOS/Android apps with Flutter, consider **CrafterCMS** or **Strapi**. **ButterCMS** is also highly rated for cross-platform development with Flutter. Each CMS platform offers different features, so explore them to see which best fits your needs. ### Easiest Headless CMS Options for Flutter: - **ButterCMS**: Known for its ease of use and developer-focused experience. Offers a component-based editing interface and pre-built Flutter libraries for a quick setup. - **Contentful**: Provides a user-friendly interface and extensive documentation for beginners, with features like content previews and localization. - **Strapi**: An open-source option offering customization control. It may have a steeper learning curve but provides flexibility for developers comfortable with Node.js. Choosing the right CMS can enhance your cross-platform development with Flutter, ensuring a smooth and efficient workflow.
shaerif
1,890,472
Unlock the Secret Side of Your Phone with Android 15’s Private Space🤫🤫
Meet Android 15’s coolest new feature: Private Space. Think of it as your own hidden corner of your...
0
2024-06-16T17:25:54
https://dev.to/yesswee/unlock-the-secret-side-of-your-phone-with-android-15s-private-space-39ha
Meet Android 15’s coolest new feature: Private Space. Think of it as your own hidden corner of your phone, where everything you do stays completely private!🫢. Whether you’re working with sensitive documents, browsing without leaving a trace, or just wanting to keep certain apps and info separate, Private Space has got you covered. It’s easy to switch between your main phone and your Private Space, giving you peace of mind that your secrets are safe and sound. Enjoy ultimate privacy and security with Android 15’s Private Space, and take control of your digital life like never before. Private space uses a separate user profile, and when it’s locked, the profile is paused, so the apps in it are no longer active. You can choose to use the device lock or a separate lock factor for Private space, and its apps show up in a separate container in the launcher. They’re also hidden from the recent view, notifications, settings, and from other apps when the Private space is locked. User generated and downloaded media and files and accounts are separated between the Private space and the main space. You can use the system share sheet and the photo picker to give apps access to content across spaces, but only when the Private space is unlocked. With Android 15, Google promises smoother transitions. Also new is the fact that apps won’t be able to run in an active, foreground state, for more than six hours, to prevent pointless battery draining. There’s a more efficient software decoder for AV1 too, that can be used when hardware decoding isn’t supported. Apps can highlight only the most recently selected photos and videos when they’ve received partial access to media permissions, which can improve your experience with apps that frequently request access to photos and videos. Additionally, there are changes that protect you from malicious background apps, as they are prevented from bringing other apps to the foreground, elevating their privileges, and “abusing user interaction”. This is done in order to protect from malicious apps that launch another app’s activity, then overlay themselves on top, creating the illusion of being the non-malicious app that they started. Android 15 is now available to download on a range of devices from Android partners including Lenovo, Nothing, Oppo, Honor, Xiaomi and, of course, Google itself (i.e. the best Pixel phones).
yesswee
1,890,471
Laravel and WebAuthn
In my previous article, I provided information about WebAuthn. In this article, I will explain how to...
0
2024-06-16T17:25:38
https://dev.to/cryptograph/laravel-and-webauthn-4h9k
laravel, webauthn, php
In my previous article, I provided information about WebAuthn. In this article, I will explain how to implement it with Laravel. [https://niyazi.net/en/laravel-and-webauthn](https://niyazi.net/en/laravel-and-webauthn)
cryptograph
1,890,470
Building Software that’s Efficient, Intuitive, and Seamless
In the rapidly evolving world of software development, creating applications that are not only...
0
2024-06-16T17:23:59
https://dev.to/checkiamsiam/building-software-thats-efficient-intuitive-and-seamless-338
softwaredevelopment, webdev, cleancoding, programming
In the rapidly evolving world of software development, creating applications that are not only efficient but also intuitive and seamless is paramount. As we step into 2024, let’s explore some of the cutting-edge tech stacks that are shaping the future of software engineering. ## Embracing Modern Tech Stacks The choice of technology stack is critical in determining the performance, scalability, and user experience of your software. Here are some of the prominent tech stacks that have gained traction in 2024: - **LAMP Stack:** A classic combination of Linux, Apache, MySQL, and PHP, renowned for its reliability and ease of use. - **MEAN Stack:** Comprising MongoDB, Express.js, AngularJS, and Node.js, this stack is known for its speed and efficiency in handling dynamic websites and applications. - **MERN Stack:** Similar to MEAN but with React replacing AngularJS, offering a more flexible and component-based approach to UI development. - **MEVN Stack:** A variant of MEAN, where Vue.js is used instead of AngularJS, known for its simplicity and progressive framework. - **Serverless Stack:** This approach abstracts server management and offers a cost-effective solution for scaling applications on demand. ## Key Principles for Efficient and Intuitive Software 1. **User-Centric Design:** Start with the user in mind. An intuitive UI/UX is essential for user adoption and satisfaction. 2. **Performance Optimization:** Employ efficient algorithms and data structures to ensure your application runs smoothly. 3. **Scalability:** Design your software to handle growth seamlessly, whether it’s increasing data, users, or transaction volume. 4. **Security:** Implement robust security measures to protect user data and maintain trust. 5. **Continuous Integration/Continuous Deployment (CI/CD):** Automate your deployment process to reduce manual errors and speed up release cycles. 6. **Responsive Design:** Ensure your application is accessible across various devices and platforms. 7. **Testing:** Rigorous testing is crucial to catch bugs early and maintain software quality. ## Leveraging the Latest Technologies To stay ahead, incorporating the latest technologies such as AI, machine learning, and blockchain can provide a competitive edge. For instance, AI can be used to enhance user experience through personalized recommendations, while blockchain can increase transparency and security in data transactions. ## Conclusion Building software that’s efficient, intuitive, and seamless requires a thoughtful approach to both the choice of tech stack and the design principles. By staying updated with the latest trends and technologies, and focusing on the user’s needs, you can create software that not only meets but exceeds expectations.
checkiamsiam
1,890,433
Unlock a World of Knowledge with the eBooks Collection at RGBSpot.com
Immerse yourself in a world of inspiration and learning with the eBooks collection at RGBSpot.com....
0
2024-06-16T17:16:17
https://dev.to/ardenjames/unlock-a-world-of-knowledge-with-the-ebooks-collection-at-rgbspotcom-37ga
books, ebooks, collection
Immerse yourself in a world of inspiration and learning with the **[eBooks collection](https://rgbspot.com/)** at RGBSpot.com. Our library is brimming with diverse and meticulously curated eBooks designed to ignite your creativity and expand your knowledge. Whether you’re looking to sharpen your professional skills, dive into a new hobby, or simply enjoy a captivating read, we have the perfect book for you. At RGBSpot.com, we value quality and relevance, which is why our eBooks span a wide range of topics—from digital marketing and graphic design to personal development and wellness. Each book is crafted by industry experts and seasoned authors, ensuring you receive insightful and practical knowledge. Our eBooks are easy to access and download, making it simple to fit learning into your busy lifestyle. Discover new perspectives, elevate your skills, and find endless inspiration with the exceptional eBooks collection at RGBSpot.com. Start exploring today and unlock a treasure trove of information tailored just for you.
ardenjames
1,890,432
Using Astro Image Optimization Benefits with Tina CMS Cloud in Production build
Problem Statement In production, the benefits of image optimization using Astro's...
0
2024-06-16T17:15:30
https://dev.to/friebe/using-astro-image-optimization-benefits-with-tina-cms-cloud-in-production-build-5a01
astro, javascript, jamstack
## Problem Statement

In production, the benefits of image optimization using Astro's Image/Picture component are lost when integrating with Tina CMS and Tina Cloud. However, I have found a workaround to resolve this issue. Whether this workaround is the best or a good fit for client websites remains uncertain, but here are the details.

For further context, you can read the GitHub discussion [here](https://github.com/tinacms/tinacms/discussions/4035#discussioncomment-6264333).

## Preface

Typically, you have a markdown file with an image path pointing to a locally based image, like so:

``` markdown
imgSrc: /src/assets/img/logo.png
...
```

To leverage the image optimization benefits of the Astro Image component, such as generating multiple versions of an image based on device density, you would use dynamic image imports as described [in the Astro docs](https://docs.astro.build/en/recipes/dynamically-importing-images/).

Everything works perfectly in development mode when you run the dev script `tinacms dev -c "astro dev"`. [Tina CLI commands](https://tina.io/docs/cli-overview/)

But for clarity, here is an example of importing images dynamically.

``` javascript
---
import type { ImageMetadata } from "astro";
import { Image } from "astro:assets";

interface Props {
  imgSrc: string;
}

const { imgSrc } = Astro.props;

const images = import.meta.glob<{ default: ImageMetadata }>("/src/assets/img/*.{jpeg,jpg,png,gif}");

if (!images[imgSrc]) {
  throw new Error(`"${imgSrc}" does not exist in glob: "src/assets/img/*.{jpeg,jpg,png,gif}"`);
}
---

<Image
  width="70"
  height="70"
  src={images[imgSrc]()}
  alt="test image"
  densities={[1.5, 2]}
  loading="lazy"
/>
```

## The Production Issue

In production, Tina CMS typically retrieves its media from its own CDN server (e.g., https://assets.tina.io/image). This setup conflicts with Astro's automatic image optimization and the associated glob pattern, which looks the image up at a local path. That works in development mode, but in production the `imgSrc` returned from the markdown file is no longer a local path string. Instead, it is a URL like https://assets.tina.io/image, causing the glob lookup to fail and resulting in a build error.

For more information, refer to the GitHub discussion [here](https://github.com/tinacms/tinacms/discussions/4035#discussioncomment-6264333).

## Workaround for Using Image Optimization

To still leverage Astro's image optimization in production, you need to adapt the build script:

## Modified Build Script

The build script must be updated to:

```json
"scripts": {
  "build": "tinacms build --local -c \"astro build\""
}
```

However, be aware that dynamically fetching data will no longer work as expected. For example, querying dynamic data using Tina's client in, say, a dates component will not work properly.

## Example script of my dynamic data fetches

```javascript
<script>
  import { formatMarkdownUrl, formatDateString } from "../utils/format";

  document.addEventListener("DOMContentLoaded", async () => {
    try {
      const { client } = await import("../../tina/__generated__/client");

      async function getLiveDatesResponse() {
        try {
          const response = await client.queries.live_dateConnection({
            sort: "date",
          });
          return response;
        } catch (error) {
          console.error(error);
          return null;
        }
      }

      const liveDates = await getLiveDatesResponse();
      console.log(liveDates);
    } catch (error) {
      console.error(error);
    }
  });
</script>
```

## Conclusion

This workaround allows you to leverage Astro's image optimization benefits in production while using Tina CMS and Tina Cloud.
However, it introduces limitations in dynamically fetching data, which may not be suitable for all use cases. Evaluate whether this approach fits your project's requirements and constraints. For further reading and to join the discussion, check out the GitHub discussion.
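A possible refinement (my sketch, not from the original post): instead of failing on remote URLs, branch on whether `imgSrc` is remote and pass it straight to Astro's `<Image>`. This assumes you authorize the Tina CDN with `image: { domains: ["assets.tina.io"] }` in `astro.config.mjs` and supply explicit dimensions for remote images:

```javascript
---
// Sketch: use the glob only for local paths; remote Tina URLs go straight to <Image>.
import type { ImageMetadata } from "astro";
import { Image } from "astro:assets";

interface Props {
  imgSrc: string;
}

const { imgSrc } = Astro.props;
const isRemote = imgSrc.startsWith("http");

const images = import.meta.glob<{ default: ImageMetadata }>("/src/assets/img/*.{jpeg,jpg,png,gif}");
---

{isRemote ? (
  <Image width="70" height="70" src={imgSrc} alt="test image" loading="lazy" />
) : (
  <Image width="70" height="70" src={images[imgSrc]()} alt="test image" densities={[1.5, 2]} loading="lazy" />
)}
```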
friebe
1,890,431
The Art of Software Maintenance: Embracing Change in 2024
In the ever-evolving landscape of technology, the maintenance of software systems is not just a...
0
2024-06-16T17:15:06
https://dev.to/checkiamsiam/the-art-of-software-maintenance-embracing-change-in-2024-109j
webdev, programming, softwaredevelopment, software
In the ever-evolving landscape of technology, the maintenance of software systems is not just a routine task; it’s an art that requires a deep understanding of the changing tech stacks and the foresight to anticipate future needs. As we step into 2024, the complexity of software systems has grown exponentially, making maintenance a critical aspect of the software development lifecycle. ## Why Maintenance Matters Software maintenance is crucial for several reasons. **It ensures that software continues to operate in a changing environment, maintains its value over time, and meets new requirements that emerge from evolving business strategies or customer needs.** ## The Four Pillars of Software Maintenance 1. **Corrective Maintenance:** This involves fixing bugs and defects discovered post-deployment. It’s a reactive approach that ensures the software operates as intended. 2. **Adaptive Maintenance:** As the tech ecosystem evolves, adaptive maintenance ensures that software remains compatible with new operating systems, hardware, and tech stacks. 3. **Perfective Maintenance:** This proactive approach involves enhancing the software to improve performance and add new features that users demand. 4. **Preventive Maintenance:** Aimed at preventing future software issues, this includes code optimization and updating documentation to avoid potential failure. ## Tech Stacks of 2024 The choice of tech stacks is pivotal in software maintenance. In 2024, stacks like LAMP, MEAN, MERN, and MEVN have been popular among developers for their robustness and flexibility. Additionally, the Serverless Stack has gained traction for its cost-efficiency and scalability. ## Best Practices for Software Maintenance - **Automated Testing:** Implementing continuous integration and deployment with automated testing ensures that any changes made do not break existing functionality. - **Monitoring Tools:** Utilizing tools like Hyperping can help in uptime monitoring and identifying issues before they affect users. - **Documentation:** Keeping documentation up-to-date is essential for effective maintenance, ensuring that any developer can understand and work on the software. - **User Feedback:** Incorporating user feedback into the maintenance process helps in aligning the software with user needs and expectations. ## Conclusion Software maintenance in 2024 is not just about fixing bugs; it’s about adapting to change, improving performance, and preparing for the future. By embracing the latest tech stacks and following best practices, we can ensure that our software remains reliable, efficient, and relevant in the years to come.
checkiamsiam
1,890,430
smooth card slider
I used java script to make auto slider, but it is jerky, I want to make it very smooth slider without...
0
2024-06-16T17:13:13
https://dev.to/arvind_vishwakarma_46c578/smooth-card-slider-5f8i
help
I used JavaScript to make an auto slider, but it is jerky. I want to make a very smooth slider without any jerk. Please suggest.
arvind_vishwakarma_46c578
1,890,429
$Set, $AddToSet, $Push in MongoDB
$Set, $AddToSet, $Push $set $set মানে হচ্ছে কোন কিছুর মান সেট করা। আমাদের...
0
2024-06-16T17:09:20
https://dev.to/kawsarkabir/set-addtoset-push-in-mongodb-2a95
webdev, mongodb, programming, kawsarkabir
## **$Set, $AddToSet, $Push**

### $set

`$set` means setting the value of something. Whenever we need to update something in our database, we use this operator. Here is an example of how we can update our data:

```javascript
db.products.insertOne({
  _id: 100,
  quantity: 250,
  instock: true,
  details: { model: "14QQ", make: "Clothes Corp" },
  ratings: [{ by: "Customer007", rating: 4 }],
  tags: ["apparel", "clothing"],
});
```

- We added a document to the `products` collection in our database.

### How we can update it

```javascript
db.products.updateOne(
  { _id: 100 },
  {
    $set: {
      quantity: 500,
      details: { model: "2600", make: "Fashionaires" },
    },
  }
);
```

### $AddToSet

The `$set` operator has one drawback: for example, if you are asked to update the `tags` property, trouble starts. Let's see:

```javascript
db.products.updateOne(
  { _id: 100 },
  {
    $set: {
      tags: "accessories",
    },
  }
);
```

If you use it this way, the previous value of the `tags` array will be completely replaced and become just `tags: "accessories"`. But we want the existing items to remain as well. To solve this problem we can use another operator, `$addToSet`:

```javascript
db.products.updateOne(
  { _id: 100 },
  {
    $addToSet: {
      tags: "accessories",
    },
  }
);
```

If you add it this way, the new value is appended nicely. And if the value already exists, the document is not modified at all. If you do want duplicate elements, you can use `$push` instead.

### $push

Using the `$push` operator we can add new items to an array. This operator appends the new item to the end of the array and also works when you want to keep duplicate values. For example, if we want to add a new rating to the `ratings` array, we can use `$push`:

```javascript
db.products.updateOne(
  { _id: 100 },
  {
    $push: {
      ratings: { by: "Customer008", rating: 5 },
    },
  }
);
```

- In the code above, a new rating `{ by: "Customer008", rating: 5 }` is added to the `ratings` array of the document with `_id: 100`. This is how we can update arrays using the `$push` operator.
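One addition beyond the original post: both `$push` and `$addToSet` accept the `$each` modifier when you need to add several values in one update. A small sketch:

```javascript
// Append multiple tags at once; duplicates are allowed with $push.
db.products.updateOne(
  { _id: 100 },
  { $push: { tags: { $each: ["summer", "sale"] } } }
);

// Add multiple tags, skipping any values already present.
db.products.updateOne(
  { _id: 100 },
  { $addToSet: { tags: { $each: ["apparel", "new-arrival"] } } }
);
```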
kawsarkabir
1,890,428
Launching Your Online Business in 2024: A Tech-Driven Guide
The Digital Landscape of 2024 As we step into 2024, the digital landscape is more vibrant than ever....
0
2024-06-16T17:08:34
https://dev.to/checkiamsiam/launching-your-online-business-in-2024-a-tech-driven-guide-2n13
webdev, techtalks, business, guide
**The Digital Landscape of 2024** As we step into 2024, the digital landscape is more vibrant than ever. With global e-commerce sales expected to continue their upward trajectory, there’s no better time to launch an online business. The key to success lies in leveraging the latest tech stacks to build a robust, scalable, and efficient online platform. **Finding Your Niche** The first step is identifying a gap in the market that aligns with your passions and expertise. Whether it’s a unique product offering or a service that caters to a specific need, your niche will be the foundation of your online presence. **Building Your Online Presence** Once you’ve pinpointed your niche, it’s time to build your online presence. This involves creating a professional website that not only looks great but also performs seamlessly. Consider using the MERN stack (MongoDB, Express.js, React, Node.js) for a full-stack JavaScript solution that can handle dynamic content and user interactions with ease. **E-Commerce and User Experience** For e-commerce functionality, look into platforms like Shopify or WooCommerce, which can integrate with your tech stack to provide a seamless shopping experience. Prioritize user experience (UX) by employing responsive design and intuitive navigation, ensuring your site is accessible across all devices. **Marketing and SEO** No online business can thrive without a solid marketing strategy. Utilize social media platforms and SEO techniques to drive traffic to your site. Content marketing, powered by AI-driven tools like GPT-4, can help you create engaging content that resonates with your audience and improves search engine rankings. **Customer Service and Analytics** Excellent customer service is paramount. Implement chatbots and customer support systems that utilize AI to provide quick and helpful responses. Use analytics tools to track user behavior and refine your strategies accordingly. **Security and Compliance** Ensure your tech stack includes robust security measures to protect user data. Stay updated with the latest compliance regulations to maintain trust and avoid legal pitfalls. **Conclusion** Starting an online business in 2024 requires a blend of entrepreneurial spirit and tech savviness. By embracing the latest tech stacks and focusing on user experience, you can set your online venture up for success.
checkiamsiam
1,890,427
Integrating reCAPTCHA v3 in Next.js
Step 1: Obtain reCAPTCHA v3 credentials Access Google reCAPTCHA page: Visit Google...
0
2024-06-16T17:05:51
https://dev.to/adrianbailador/integrating-recaptcha-v3-in-nextjs-170o
webdev, nextjs, javascript, security
#### Step 1: Obtain reCAPTCHA v3 credentials

1. **Access Google reCAPTCHA page:**
   - Visit [Google reCAPTCHA admin console](https://www.google.com/recaptcha/admin/).
   - Log in with your Google account if necessary.

2. **Register your website:**
   - Click on "V3" at the top to register a new key for reCAPTCHA v3.
   - Fill out the form with your project name and domains where reCAPTCHA will be used.

3. **Get site keys:**
   - After registering your site, Google will provide two keys: the site key and the secret key. These keys are essential for integrating reCAPTCHA v3 into your web application.

#### Step 2: Setup in your Next.js application

1. **Install necessary npm package:**

   ```bash
   npm install react-google-recaptcha-v3
   ```

2. **Create a reCAPTCHA component:**
   - Create a React component in your Next.js project to handle reCAPTCHA v3 logic.

   ```jsx
   // components/Recaptcha.js
   import { useEffect } from 'react';
   import { useGoogleReCaptcha } from 'react-google-recaptcha-v3';

   const Recaptcha = ({ onVerify }) => {
     const { executeRecaptcha } = useGoogleReCaptcha();

     useEffect(() => {
       const verifyCallback = async () => {
         if (executeRecaptcha) {
           const token = await executeRecaptcha();
           onVerify(token); // Send token to backend or handle verification here
         }
       };

       verifyCallback();
     }, [executeRecaptcha, onVerify]);

     return null; // This component doesn't render anything visible in the DOM
   };

   export default Recaptcha;
   ```

3. **Integrate the component into your form:**

   ```jsx
   // contact.js
   import Recaptcha from '../components/Recaptcha';

   const ContactPage = () => {
     const handleRecaptchaVerify = (token) => {
       console.log('reCAPTCHA Token:', token);
       // Send token to server for verification
     };

     return (
       <div>
         {/* Your form goes here */}
         <form>
           {/* Other form fields */}
           <Recaptcha onVerify={handleRecaptchaVerify} />
           <button type="submit">Submit</button>
         </form>
       </div>
     );
   };

   export default ContactPage;
   ```

4. **Server-side setup:**
   - In your backend (Node.js, Python, PHP, etc.), verify the reCAPTCHA v3 token using the provided secret key from Google.

### Differences between reCAPTCHA v2 and reCAPTCHA v3

1. **Interaction mode:**
   - **reCAPTCHA v2:** Requires users to solve a visible challenge like selecting images or entering text.
   - **reCAPTCHA v3:** Operates in the background and evaluates user behavior to provide a risk score.

2. **Visibility for users:**
   - **reCAPTCHA v2:** Is visible to users as it presents an explicit challenge.
   - **reCAPTCHA v3:** Is invisible to users, working behind the scenes without requiring explicit user interaction.

3. **Use of scores:**
   - **reCAPTCHA v2:** Does not generate a score; it only validates correct challenge responses.
   - **reCAPTCHA v3:** Provides a score from 0.0 to 1.0 indicating the likelihood that the user is a bot.

4. **Implementation:**
   - **reCAPTCHA v2:** Requires including a widget in the form and backend verification.
   - **reCAPTCHA v3:** Integrates via a frontend API, with primary verification done on the backend using the secret key.

### Additional considerations

- **Handling `null` in `executeRecaptcha`:** You may encounter cases where `executeRecaptcha` could be `null` temporarily, especially during component initialization.
Here's how to handle it: ```jsx // Inside useEffect in Recaptcha.js useEffect(() => { const verifyCallback = async () => { if (executeRecaptcha) { const token = await executeRecaptcha(); onVerify(token); // Send token to backend or handle verification here } }; if (executeRecaptcha !== null) { verifyCallback(); } }, [executeRecaptcha, onVerify]); ``` - **Additional References:** For more technical details or troubleshooting specific issues, you can refer to the [official Google documentation for reCAPTCHA v3](https://developers.google.com/recaptcha/docs/v3) or explore additional resources within the developer community. This guide provides a solid foundation for effectively integrating reCAPTCHA v3 into your Next.js application, enhancing both security and user experience simultaneously.
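One step the guide above doesn't show: the `useGoogleReCaptcha` hook only works inside a `GoogleReCaptchaProvider`, so wrap your app (or the relevant page tree) with it. A minimal sketch for a Pages Router app (the environment variable name is my own convention, not from the guide):

```jsx
// pages/_app.js
import { GoogleReCaptchaProvider } from 'react-google-recaptcha-v3';

export default function MyApp({ Component, pageProps }) {
  return (
    <GoogleReCaptchaProvider
      reCaptchaKey={process.env.NEXT_PUBLIC_RECAPTCHA_SITE_KEY}
    >
      <Component {...pageProps} />
    </GoogleReCaptchaProvider>
  );
}
```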
adrianbailador
1,890,426
How and why the Next.js App Router is so awesome
The Next.js App Router is a new paradigm for building applications using React’s latest features. It...
0
2024-06-16T17:03:07
https://dev.to/checkiamsiam/how-and-why-the-nextjs-app-router-is-so-awesome-269l
nextjs, react, webdev, programming
The Next.js App Router is a new paradigm for building applications using React’s latest features. It was introduced in Next.js 13 and is built on React Server Components. The App Router provides a number of benefits over the traditional Pages Router, including:

- Shared layouts
- Nested routing
- Loading states
- Error handling

## **Shared layouts**

The App Router makes it easy to create and share layouts across your application. This can help to improve consistency and reduce boilerplate code.

## **Nested routing**

The App Router supports nested routing, which allows you to create complex routing structures without having to resort to hacks or workarounds.

## **Loading states**

The App Router provides built-in support for loading states, which can help to improve the user experience by preventing users from seeing blank pages while content is loading.

## **Error handling**

The App Router also provides built-in support for error handling. This can help you to gracefully handle errors and provide users with useful feedback.

## **Why the Next.js App Router is so nice**

In addition to the benefits listed above, the App Router is also simply more elegant and expressive than the Pages Router. It is more aligned with the React programming model and makes it easier to build modern, complex applications.

## **Examples of how to use the App Router**

Here are some specific examples of how the App Router can be used to improve your Next.js applications:

- Sharing a layout across multiple pages: With the App Router, you can easily create a shared layout for your application and then reuse it across multiple pages. This can help to improve consistency and reduce boilerplate code.
- Creating complex routing structures: The App Router supports nested routing, which allows you to create complex routing structures without having to resort to hacks or workarounds. For example, you could create a nested route for a blog post category, with sub-routes for each individual blog post.
- Improving the user experience with loading states: The App Router provides built-in support for loading states. This means that you can easily show users a loading spinner or other indicator while content is loading. This can help to improve the user experience by preventing users from seeing blank pages.
- Gracefully handling errors: The App Router also provides built-in support for error handling. This means that you can easily display a custom error page to users if an error occurs. This can help to provide users with useful feedback and prevent them from seeing generic error messages.

## **Conclusion**

Overall, the App Router is a powerful and flexible tool that can help you to build better Next.js applications. It is more aligned with the React programming model and provides a number of benefits over the traditional Pages Router. If you are building a new Next.js application, I recommend that you use the App Router.
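To make the file conventions concrete, here is a minimal sketch of a nested blog route with a shared layout and a loading state (file paths follow the App Router conventions; the blog example mirrors the one mentioned above):

```tsx
// app/blog/layout.tsx: shared layout for every route under /blog
import type { ReactNode } from "react";

export default function BlogLayout({ children }: { children: ReactNode }) {
  return (
    <section>
      <h1>My Blog</h1>
      {children}
    </section>
  );
}
```

```tsx
// app/blog/[slug]/page.tsx: nested route rendered inside the layout above
export default function PostPage({ params }: { params: { slug: string } }) {
  return <article>Post: {params.slug}</article>;
}
```

```tsx
// app/blog/loading.tsx: shown automatically while the segment's content loads
export default function Loading() {
  return <p>Loading post…</p>;
}
```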
checkiamsiam
1,889,747
How to fix the "non-existent config entity name returned by FieldStorageConfigInterface::getBundles()" error
Source: How to fix "non-existent config entity name returned by...
0
2024-06-16T17:02:33
https://dev.to/mcale/come-correggere-lerrore-non-existent-config-entity-name-returned-by-fieldstorageconfiginterfacegetbundles-1ega
drupal, fix, italian
Source: [How to fix "non-existent config entity name returned by FieldStorageConfigInterface::getBundles()"](https://www.drupal.org/project/drupal/issues/2916266)

I personally ran into this error after upgrading a long-lived site that started on Drupal 8, moved to version 9, and has now gone through the upgrade to Drupal 10. Over the years the site went through numerous field additions, changes, and deletions, each time dragging along a lot of "dirt". After the upgrade to Drupal 10 I noticed an immense amount of errors in the logs, so I had to tackle the problem by searching around. Thanks to the community I managed to find the solution cited in the link at the top of this post.

I want to restate the solution here to make the answer easier to find, because the original post has become very long and somewhat scattered.

There are several ways to fix the error:

* Using a `HOOK_update`:

```php
function YOURMODULENAME_update_10001() {
  ## Fixes:
  ## A non-existent config entity name returned by FieldStorageConfigInterface::getBundles(): entity type: paragraph, bundle: text, field name: field_image
  $entity_type = 'paragraph';
  $bundle = 'text';
  $field_name = 'field_image';

  /** @var \Drupal\Core\KeyValueStore\KeyValueFactoryInterface $key_value_factory */
  $key_value_factory = \Drupal::service('keyvalue');
  $field_map_kv_store = $key_value_factory->get('entity.definitions.bundle_field_map');
  $map = $field_map_kv_store->get($entity_type);

  // Remove the field_dates field from the bundle field map for the page bundle.
  unset($map[$field_name]['bundles'][$bundle]);

  // Remove field definition if empty.
  if (empty($map[$field_name]['bundles'])) {
    unset($map[$field_name]);
  }

  $field_map_kv_store->set($entity_type, $map);

  // Remove entity type definition if empty after unsetting.
  if (empty($field_map_kv_store->get($entity_type))) {
    $field_map_kv_store->delete($entity_type);
  }
}
```

The code above shows an example of an error on the `paragraph` entity type, on the `text` bundle, where the field causing the problem is `field_image`. It cleans up the definition of the `field_image` field by removing its reference to the `text` bundle, and if at the end of the cleanup the field has no other bundles left, the field itself is deleted. Following the same logic, if the entity type also ends up empty, we delete it as well.

If the error is present on more than one field, you can simply run the same logic in a `foreach`, changing the `$entity_type`, `$bundle`, and `$field_name` parameters.

* Creating a custom module:

1 - Start by creating a folder named however you prefer; I will use the placeholder `[MY_MODULE]` for the module name.

2 - Inside the folder, create a file named `[MY_MODULE].info.yml` with the following content:

```yaml
name: My Module
description: Fixes Error - A non-existent config entity name returned by FieldStorageConfigInterface::getBundles().
package: Custom
type: module
core_version_requirement: ^9.4 || ^10
```

3 - Create another file named `drush.services.yml` and put this code inside it:

```yaml
services:
  update.commands:
    class: \Drupal\[MY_MODULE]\Commands\UpdateCommands
    arguments:
      - '@keyvalue'
    tags:
      - { name: drush.command }
```

4 - Create a `src` folder, and inside it another folder named `Commands`.
5 - Inside the `Commands` folder, create the file `UpdateCommands.php` and put this code inside it:

```php
<?php

namespace Drupal\[MY_MODULE]\Commands;

use Drush\Commands\DrushCommands;
use Drupal\Core\KeyValueStore\KeyValueFactoryInterface;

/**
 * A Drush commandfile.
 *
 * In addition to this file, you need a drush.services.yml
 * in root of your module, and a composer.json file that provides the name
 * of the services file to use.
 *
 * See these files for an example of injecting Drupal services:
 *   - http://git.drupalcode.org/devel/tree/src/Commands/DevelCommands.php
 *   - http://git.drupalcode.org/devel/tree/drush.services.yml
 */
class UpdateCommands extends DrushCommands {

  /**
   * The key value store to use.
   *
   * @var \Drupal\Core\KeyValueStore\KeyValueStoreInterface
   */
  protected $keyValueStore;

  /**
   * @param \Drupal\Core\KeyValueStore\KeyValueFactoryInterface $key_value_factory
   *   The key value store to use.
   */
  public function __construct(KeyValueFactoryInterface $key_value_factory) {
    $this->keyValueStore = $key_value_factory;
  }

  /**
   * Corrects a field storage configuration. See https://www.drupal.org/project/drupal/issues/2916266 for more info
   *
   * @command update:correct-field-config-storage
   *
   * @param string $entity_type
   *   Entity type
   * @param string $bundle
   *   Bundle name
   * @param string $field_name
   *   Field name
   */
  public function correctFieldStorageConfig($entity_type, $bundle, $field_name) {
    $field_map_kv_store = $this->keyValueStore->get('entity.definitions.bundle_field_map');
    $map = $field_map_kv_store->get($entity_type);

    unset($map[$field_name]['bundles'][$bundle]);

    // Remove field definition if empty.
    if (empty($map[$field_name]['bundles'])) {
      unset($map[$field_name]);
    }

    $field_map_kv_store->set($entity_type, $map);

    // Remove entity type definition if empty after unsetting.
    if (empty($field_map_kv_store->get($entity_type))) {
      $field_map_kv_store->delete($entity_type);
    }
  }

}
```

6 - Put the module in your site's `web/modules/custom/` folder and enable it from the admin UI or with `drush en [MY_MODULE]`.

7 - Now you can use the custom drush command that was just created: `drush update:correct-field-config-storage [ENTITY_TYPE] [BUNDLE] [FIELD_NAME]`. Using the earlier example, we would get this command: `drush update:correct-field-config-storage paragraph text field_image`.

8 - After running the command the error should be gone, and you can disable the module with `drush pmu [MY_MODULE]` and then remove it.

A big thank you goes to the Drupal community, which found the solution that I have slightly modified.
mcale
1,890,397
Testing Columnar Storage
As most of you probably already know, since approximately the end of 2022 InterSystems IRIS included...
27,746
2024-06-16T16:20:33
https://community.intersystems.com/post/testing-columnar-storage
docker, python, programming
As most of you probably already know, since approximately the end of 2022 InterSystems IRIS has included columnar storage functionality in its database. In today's article we are going to put it to the test in comparison with the usual row storage.

## Columnar Storage

What is the main characteristic of this type of storage? Well, if we consult the [official documentation](http://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GSOD_storage) we will see this fantastic table that explains the main characteristics of both types of storage (by rows or by columns):

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lozc4xdxl0w0vx1o58tr.png)

As you can see, columnar storage is designed primarily for analytical tasks in which queries are launched against specific fields in our table, while row storage is more optimal when a large number of insert, update, and delete operations are required, as well as when retrieving complete records.

If you continue reading the documentation you will see how simple it is to configure our table to use columnar storage:

```sql
CREATE TABLE table (column type, column2 type2, column3 type3) WITH STORAGETYPE = COLUMNAR
```

Using this command we would be defining all the columns of our table with columnar storage, but we could opt for a mixed model in which our table has row storage while certain columns make use of columnar storage.

This mixed scenario could be interesting in cases where aggregation operations such as sums, averages, etc. are common. For this case we could define which column is the one that will use said storage:

```sql
CREATE TABLE table (column type, column2 type2, column3 type3 WITH STORAGETYPE = COLUMNAR)
```

In the previous example we defined a table with row storage and one column (column3) with columnar storage.

## Comparative

To compare the time spent by columnar storage and row storage on different queries, we have created a small exercise using Jupyter Notebook that inserts a series of generated records into two tables: the first with row storage (Test.PurchaseOrderRow) and the second with columnar storage in two of its columns (Test.PurchaseOrderColumnar).

#### Test.PurchaseOrderRow

```sql
CREATE TABLE Test.PurchaseOrderRow (
    Reference INTEGER,
    Customer VARCHAR(225),
    PaymentDate DATE,
    Vat NUMERIC(10,2),
    Amount NUMERIC(10,2),
    Status VARCHAR(10))
```

#### Test.PurchaseOrderColumnar

```sql
CREATE TABLE Test.PurchaseOrderColumnar (
    Reference INTEGER,
    Customer VARCHAR(225),
    PaymentDate DATE,
    Vat NUMERIC(10,2),
    Amount NUMERIC(10,2) WITH STORAGETYPE = COLUMNAR,
    Status VARCHAR(10) WITH STORAGETYPE = COLUMNAR)
```

If you download the Open Exchange project and deploy it in your local Docker, you can access the Jupyter Notebook instance and review the file **PerformanceTests.ipynb**, which is responsible for generating the random data that we are going to store in different phases in our tables, and which finally shows us a graph with the performance of the query operations.

Let's take a quick look at our project configuration:

#### docker-compose.yml

```yaml
version: '3.7'
services:
  # iris
  iris:
    init: true
    container_name: iris
    build:
      context: .
      dockerfile: iris/Dockerfile
    ports:
      - 52774:52773
      - 51774:1972
    volumes:
      - ./shared:/shared
    environment:
      - ISC_DATA_DIRECTORY=/shared/durable
    command: --check-caps false --ISCAgent false
  # jupyter notebook
  jupyter:
    build:
      context: .
      dockerfile: jupyter/Dockerfile
    container_name: jupyter
    ports:
      - "8888:8888"
    environment:
      - JUPYTER_ENABLE_LAB=yes
      - JUPYTER_ALLOW_INSECURE_WRITES=true
    volumes:
      - ./jupyter:/home/jovyan
      - ./data:/app/data
    command: "start-notebook.sh --NotebookApp.token='' --NotebookApp.password=''"
```

We deploy the IRIS and Jupyter containers in our Docker setup, initially configuring IRIS with the namespace "TEST" and the two tables required for the test.

To avoid boring you with code, you can consult the **PerformanceTests.ipynb** file, from which we connect to IRIS, generate the records to be inserted, and store them in IRIS.
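As a rough idea of what that notebook flow looks like, here is a minimal sketch, not the actual project code: it assumes a `pyodbc` ODBC connection to the IRIS instance, and the DSN name, the helper function, and the 21% VAT rate are all illustrative assumptions.

```python
# Minimal sketch, not the actual PerformanceTests.ipynb code.
# Assumes an ODBC DSN named "IRIS_TEST" pointing at the IRIS instance.
import random
import time
from datetime import date, timedelta

import pyodbc

def generate_orders(n):
    """Generate n random purchase-order rows matching the table columns."""
    statuses = ['SENT', 'PAID', 'CANCELLED']
    rows = []
    for i in range(n):
        amount = round(random.uniform(10, 1000), 2)
        rows.append((
            i,                                                          # Reference
            f'Customer {i}',                                            # Customer
            date(2024, 1, 1) + timedelta(days=random.randint(0, 180)),  # PaymentDate
            round(amount * 0.21, 2),                                    # Vat (21%, illustrative)
            amount,                                                     # Amount
            random.choice(statuses),                                    # Status
        ))
    return rows

connection = pyodbc.connect('DSN=IRIS_TEST')  # assumed DSN name
cursor = connection.cursor()

insert_sql = ("INSERT INTO Test.PurchaseOrderColumnar "
              "(Reference, Customer, PaymentDate, Vat, Amount, Status) "
              "VALUES (?, ?, ?, ?, ?, ?)")

rows = generate_orders(1000)
start = time.perf_counter()
cursor.executemany(insert_sql, rows)  # bulk insert
connection.commit()
print(f'Inserted {len(rows)} rows in {time.perf_counter() - start:.6f} s')

# Same idea for timing the aggregate query on the columnar columns.
start = time.perf_counter()
cursor.execute("SELECT AVG(Amount) FROM Test.PurchaseOrderColumnar WHERE Status = 'SENT'")
average = cursor.fetchone()[0]
print(f'AVG computed in {time.perf_counter() - start:.6f} s: {average}')
```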
## Test execution

The results have been the following (in seconds):

### Inserts:

The insertions made are of bulk type:

```sql
INSERT INTO Test.PurchaseOrderColumnar (Reference, Customer, PaymentDate, Vat, Amount, Status) VALUES (?, ?, ?, ?, ?, ?)
```

And the time for each batch of inserts is as follows:

| Total inserts | Row storage | Mixed storage |
| --- | --- | --- |
| 1000 | 0.031733 | 0.041677 |
| 5000 | 0.159338 | 0.185252 |
| 20000 | 0.565775 | 0.642662 |
| 50000 | 1.486459 | 1.747124 |
| 100000 | 2.735016 | 3.265492 |
| 200000 | 5.395032 | 6.382278 |

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/arf5m31p462q58ke6m8t.png)

### Selects:

The SELECT launched includes an aggregation function and a condition, both on columns with columnar storage:

```sql
SELECT AVG(Amount) FROM Test.PurchaseOrderColumnar WHERE Status = 'SENT'
```

| Total rows | Row storage | Mixed storage |
| --- | --- | --- |
| 1000 | 0.002039 | 0.001178 |
| 5000 | 0.00328 | 0.000647 |
| 20000 | 0.005493 | 0.001555 |
| 50000 | 0.016616 | 0.000987 |
| 100000 | 0.036112 | 0.001605 |
| 200000 | 0.070909 | 0.002738 |

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v9m18alggfh5e24br5j9.png)

## Conclusions

As you can see in the results obtained, the behavior is exactly what the documentation indicates. Including columns with columnar storage slightly penalized performance during inserts (about 18% slower in our example), while queries on those same columns dramatically improved their response time (roughly 26 times faster at 200,000 rows, going by the table above).

It is undoubtedly something to take into account when planning the development of any application.
intersystemsdev
1,890,425
Unlock the Secrets to Writing Clean and Structured JavaScript Code: Essential Practices for Developers
JavaScript is one of the most widely used programming languages in the world, powering everything...
0
2024-06-16T16:59:48
https://dev.to/futuristicgeeks/unlock-the-secrets-to-writing-clean-and-structured-javascript-code-essential-practices-for-developers-43ml
webdev, javascript, react, frontend
JavaScript is one of the most widely used programming languages in the world, powering everything from dynamic web applications to server-side logic. However, its flexibility and dynamic nature can sometimes lead to messy and unstructured code. Writing clean and structured JavaScript code is essential for maintaining readability, scalability, and ease of maintenance. This article explores the best practices for writing clean and structured JavaScript code, helping you create more maintainable and efficient applications.

## 1. Introduction

In the fast-evolving world of software development, writing clean and structured code is not just a preference but a necessity. JavaScript, with its wide adoption and versatility, demands a disciplined approach to coding to ensure that projects remain manageable and scalable. This guide provides a comprehensive overview of best practices and advanced techniques to help you write clean and structured JavaScript code.

## 2. Why Clean and Structured Code Matters

Clean and structured code is crucial for several reasons:

- Readability: Code that is easy to read and understand reduces the learning curve for new team members.
- Maintainability: Structured code is easier to debug, extend, and refactor.
- Scalability: Well-organized code can be scaled more efficiently to accommodate growing application requirements.
- Collaboration: Clean code facilitates better collaboration among developers by providing a common understanding and reducing miscommunication.

## 3. Essentials of Writing Clean JavaScript Code

**Use Meaningful Variable and Function Names**

Choosing meaningful names for variables and functions is fundamental to writing clean code. Descriptive names make the code self-documenting and easier to understand.

```
// Poor naming
let a = 5;
function foo(x) {
  return x * x;
}

// Better naming
let itemCount = 5;
function calculateSquare(number) {
  return number * number;
}
```

**Follow Consistent Naming Conventions**

Consistent naming conventions help in maintaining a uniform codebase. Common practices include using camelCase for variables and functions, PascalCase for classes, and UPPERCASE for constants.

```
let userName = "John";
const MAX_USERS = 100;

class UserProfile {
  constructor(name) {
    this.name = name;
  }
}
```

**Keep Functions Small and Focused**

Small, focused functions that perform a single task are easier to test and debug. They also promote code reusability.

```
// A function that does too much
function handleUserInput(input) {
  // Validate input
  if (!input) {
    return "Invalid input";
  }
  // Process input
  let processedInput = input.trim().toLowerCase();
  // Update UI
  document.getElementById('output').innerText = processedInput;
}

// Breaking it into smaller functions
function validateInput(input) {
  return input && input.trim() !== "";
}

function processInput(input) {
  return input.trim().toLowerCase();
}

function updateUI(output) {
  document.getElementById('output').innerText = output;
}

function handleUserInput(input) {
  if (!validateInput(input)) {
    return "Invalid input";
  }
  let processedInput = processInput(input);
  updateUI(processedInput);
}
```

**Avoid Global Variables**

Global variables can lead to code that is difficult to maintain and debug due to potential naming conflicts and unintended side effects. Use local variables and encapsulate code within functions or modules.

```
// Avoid
var globalVariable = "I'm global";

// Use local variables
function exampleFunction() {
  let localVariable = "I'm local";
  console.log(localVariable);
}
```

**Use Comments Wisely**

Comments should be used to explain the "why" behind complex code, not the "what", which should be clear from the code itself. Over-commenting can clutter the code and make it harder to read.

```
// Poor commenting
let x = 10; // Assign 10 to x

// Better commenting
// Calculate the area of a circle
let radius = 10;
let area = Math.PI * Math.pow(radius, 2);
```

**Consistent Formatting and Indentation**

Consistent formatting and indentation make the code more readable and maintainable. Use tools like Prettier to enforce consistent code formatting.

```
function fetchData() {
  return fetch('https://api.example.com/data')
    .then(response => response.json())
    .then(data => {
      console.log(data);
    })
    .catch(error => {
      console.error('Error fetching data:', error);
    });
}
```

## 4. Advanced Practices for Structuring JavaScript Code

**Use Modules and ES6 Imports/Exports**

Modular code improves maintainability and reusability by dividing the codebase into smaller, self-contained modules. Use ES6 modules to import and export functionalities.

```
// mathUtils.js
export function add(a, b) {
  return a + b;
}

export function subtract(a, b) {
  return a - b;
}

// main.js
import { add, subtract } from './mathUtils.js';

console.log(add(5, 3)); // 8
console.log(subtract(5, 3)); // 2
```

**Implement Design Patterns**

Design patterns like Singleton, Factory, and Observer can help in writing scalable and maintainable code. They provide proven solutions to common problems.

```
// Singleton pattern
class Singleton {
  constructor() {
    if (!Singleton.instance) {
      Singleton.instance = this;
    }
    return Singleton.instance;
  }

  logMessage(message) {
    console.log(message);
  }
}

const instance1 = new Singleton();
const instance2 = new Singleton();
console.log(instance1 === instance2); // true
```

**Leverage Asynchronous Programming Wisely**

Asynchronous programming with Promises and async/await makes it easier to manage asynchronous operations and improve the responsiveness of your application.

```
// Using Promises
function fetchData() {
  return fetch('https://api.example.com/data')
    .then(response => response.json())
    .then(data => {
      console.log(data);
    })
    .catch(error => {
      console.error('Error fetching data:', error);
    });
}

// Using async/await
async function fetchData() {
  try {
    let response = await fetch('https://api.example.com/data');
    let data = await response.json();
    console.log(data);
  } catch (error) {
    console.error('Error fetching data:', error);
  }
}
```

**Error Handling and Logging**

Effective error handling and logging are crucial for debugging and maintaining code. Use try/catch blocks and logging libraries to manage errors gracefully.

```
function parseJSON(jsonString) {
  try {
    let data = JSON.parse(jsonString);
    console.log(data);
  } catch (error) {
    console.error('Invalid JSON:', error);
  }
}
```

## 5. Tools and Libraries for Maintaining Clean Code

**Linters and Formatters**

Tools like ESLint and Prettier help enforce coding standards and consistent formatting, reducing errors and improving code readability.

```
# Install ESLint
npm install eslint --save-dev

# Install Prettier
npm install prettier --save-dev
```

**Testing Frameworks**

Testing frameworks like Jest and Mocha enable you to write and run tests, ensuring that your code works as expected and remains reliable.

```
// Example Jest test
test('adds 1 + 2 to equal 3', () => {
  expect(add(1, 2)).toBe(3);
});
```

**Code Review Tools**

Code review tools like GitHub, GitLab, and Bitbucket facilitate collaborative code reviews, helping catch issues early and improve code quality.

## 6. Best Practices for Code Reviews

- Review frequently: Regular reviews help catch issues early and ensure continuous improvement.
- Be constructive: Provide helpful feedback and suggestions for improvement.
- Focus on the code, not the coder: Keep feedback objective and focused on the code itself.
- Use automated tools: Leverage automated code review tools to catch common issues before manual review.

## 7. Conclusion

Writing clean and structured JavaScript code is essential for creating maintainable, scalable, and high-quality applications. By following best practices and leveraging the right tools, you can improve your coding standards and contribute to a more efficient development process. Embrace these practices to enhance your JavaScript skills and ensure your projects remain robust and manageable.

By consistently applying these principles, you'll be able to write cleaner, more efficient, and more maintainable JavaScript code, ultimately leading to better software development outcomes.

**Read More on our latest article:** https://futuristicgeeks.com/unlock-the-secrets-to-writing-clean-and-structured-javascript-code-essential-practices-for-developers/

**Did you find this article helpful? Let us know with a like!**
futuristicgeeks
1,890,424
30 Essential Array Methods in JavaScript with Examples
Arrays are a fundamental data structure in JavaScript, used to store multiple values in a single...
0
2024-06-16T16:59:29
https://dev.to/mahabubr/30-essential-array-methods-in-javascript-with-examples-5570
webdev, javascript, array, datastructures
Arrays are a fundamental data structure in JavaScript, used to store multiple values in a single variable. JavaScript provides various built-in methods to manipulate and interact with arrays. Here are thirty essential array methods every JavaScript developer should know, complete with examples and explanations.

**1. push()**

The push() method adds one or more elements to the end of an array and returns the new length of the array.

- Example:

```
let fruits = ['apple', 'banana'];
fruits.push('orange');
console.log(fruits); // Output: ['apple', 'banana', 'orange']
```

Explanation: Here, push('orange') adds 'orange' to the end of the fruits array.

**2. pop()**

The pop() method removes the last element from an array and returns that element.

- Example:

```
let fruits = ['apple', 'banana', 'orange'];
let lastFruit = fruits.pop();
console.log(lastFruit); // Output: 'orange'
console.log(fruits); // Output: ['apple', 'banana']
```

Explanation: pop() removes 'orange' from the end of the array and returns it.

**3. shift()**

The shift() method removes the first element from an array and returns that element.

- Example:

```
let fruits = ['apple', 'banana', 'orange'];
let firstFruit = fruits.shift();
console.log(firstFruit); // Output: 'apple'
console.log(fruits); // Output: ['banana', 'orange']
```

Explanation: shift() removes 'apple' from the beginning of the array and returns it.

**4. unshift()**

The unshift() method adds one or more elements to the beginning of an array and returns the new length of the array.

- Example:

```
let fruits = ['banana', 'orange'];
fruits.unshift('apple');
console.log(fruits); // Output: ['apple', 'banana', 'orange']
```

Explanation: unshift('apple') adds 'apple' to the beginning of the fruits array.

**5. splice()**

The splice() method changes the contents of an array by removing, replacing, or adding elements at a specific index.

- Example:

```
let fruits = ['apple', 'banana', 'orange'];
fruits.splice(1, 1, 'grape'); // Removes 1 element at index 1 and adds 'grape'
console.log(fruits); // Output: ['apple', 'grape', 'orange']
```

Explanation: splice(1, 1, 'grape') removes 1 element at index 1 ('banana') and adds 'grape' at that position.

**6. slice()**

The slice() method returns a shallow copy of a portion of an array into a new array object selected from start to end (end not included).

- Example:

```
let fruits = ['apple', 'banana', 'orange', 'grape'];
let citrus = fruits.slice(1, 3);
console.log(citrus); // Output: ['banana', 'orange']
```

Explanation: slice(1, 3) creates a new array containing elements from index 1 to index 2 (excluding index 3).

**7. concat()**

The concat() method is used to merge two or more arrays. This method does not change the existing arrays but returns a new array.

- Example:

```
let fruits = ['apple', 'banana'];
let moreFruits = ['orange', 'grape'];
let allFruits = fruits.concat(moreFruits);
console.log(allFruits); // Output: ['apple', 'banana', 'orange', 'grape']
```

Explanation: concat(moreFruits) merges the fruits and moreFruits arrays into a new array.

**8. forEach()**

The forEach() method executes a provided function once for each array element.

- Example:

```
let fruits = ['apple', 'banana', 'orange'];
fruits.forEach(function(fruit) {
  console.log(fruit);
});
// Output:
// 'apple'
// 'banana'
// 'orange'
```

Explanation: forEach() iterates through each element in the fruits array and logs it to the console.

**9. map()**

The map() method creates a new array populated with the results of calling a provided function on every element in the calling array.

- Example:

```
let numbers = [1, 2, 3];
let doubled = numbers.map(function(number) {
  return number * 2;
});
console.log(doubled); // Output: [2, 4, 6]
```

Explanation: map() applies the provided function to each element in numbers and creates a new array with the results.

**10. filter()**

The filter() method creates a new array with all elements that pass the test implemented by the provided function.

- Example:

```
let numbers = [1, 2, 3, 4, 5];
let evenNumbers = numbers.filter(function(number) {
  return number % 2 === 0;
});
console.log(evenNumbers); // Output: [2, 4]
```

Explanation: filter() applies the provided function to each element in numbers and returns a new array with the elements that pass the test.

**11. reduce()**

The reduce() method executes a reducer function on each element of the array, resulting in a single output value.

- Example:

```
let numbers = [1, 2, 3, 4];
let sum = numbers.reduce(function(accumulator, currentValue) {
  return accumulator + currentValue;
}, 0);
console.log(sum); // Output: 10
```

Explanation: reduce() sums up all elements in the numbers array, starting with an initial value of 0.

**12. find()**

The find() method returns the value of the first element in the array that satisfies the provided testing function.

- Example:

```
let numbers = [1, 2, 3, 4];
let found = numbers.find(function(number) {
  return number > 2;
});
console.log(found); // Output: 3
```

Explanation: find() returns the first element in the numbers array that is greater than 2.

**13. findIndex()**

The findIndex() method returns the index of the first element in the array that satisfies the provided testing function. Otherwise, it returns -1.

- Example:

```
let numbers = [1, 2, 3, 4];
let index = numbers.findIndex(function(number) {
  return number > 2;
});
console.log(index); // Output: 2
```

Explanation: findIndex() returns the index of the first element in the numbers array that is greater than 2.

**14. includes()**

The includes() method determines whether an array includes a certain value among its entries, returning true or false as appropriate.

- Example:

```
let fruits = ['apple', 'banana', 'orange'];
let hasBanana = fruits.includes('banana');
console.log(hasBanana); // Output: true
```

Explanation: includes('banana') checks if 'banana' is in the fruits array and returns true.

**15. some()**

The some() method tests whether at least one element in the array passes the test implemented by the provided function.

- Example:

```
let numbers = [1, 2, 3, 4];
let hasEven = numbers.some(function(number) {
  return number % 2 === 0;
});
console.log(hasEven); // Output: true
```

Explanation: some() checks if there is at least one even number in the numbers array and returns true.

**16. every()**

The every() method tests whether all elements in the array pass the test implemented by the provided function.

- Example:

```
let numbers = [2, 4, 6, 8];
let allEven = numbers.every(function(number) {
  return number % 2 === 0;
});
console.log(allEven); // Output: true
```

Explanation: every() checks if all elements in the numbers array are even and returns true.

**17. sort()**

The sort() method sorts the elements of an array in place and returns the sorted array. The default sort order is according to string Unicode code points.

- Example:

```
let fruits = ['banana', 'orange', 'apple'];
fruits.sort();
console.log(fruits); // Output: ['apple', 'banana', 'orange']
```

Explanation: sort() arranges the elements of the fruits array in alphabetical order.

**18. reverse()**

The reverse() method reverses the order of the elements in an array in place and returns the reversed array.

- Example:

```
let fruits = ['banana', 'orange', 'apple'];
fruits.reverse();
console.log(fruits); // Output: ['apple', 'orange', 'banana']
```

Explanation: reverse() reverses the order of the elements in the fruits array.

**19. join()**

The join() method joins all elements of an array into a string and returns this string.

- Example:

```
let fruits = ['apple', 'banana', 'orange'];
let fruitString = fruits.join(', ');
console.log(fruitString); // Output: 'apple, banana, orange'
```

Explanation: join(', ') concatenates all elements in the fruits array into a single string, separated by a comma and a space.

**20. from()**

The Array.from() method creates a new, shallow-copied array instance from an array-like or iterable object.

- Example:

```
let str = 'hello';
let charArray = Array.from(str);
console.log(charArray); // Output: ['h', 'e', 'l', 'l', 'o']
```

Explanation: Array.from(str) creates a new array from the string str, with each character of the string becoming an element in the array.

**21. flat()**

The flat() method creates a new array with all sub-array elements concatenated into it recursively up to the specified depth.

- Example:

```
let nestedArray = [1, [2, [3, [4]]]];
let flatArray = nestedArray.flat(2);
console.log(flatArray); // Output: [1, 2, 3, [4]]
```

Explanation: flat(2) flattens the nestedArray up to 2 levels deep.

**22. flatMap()**

The flatMap() method first maps each element using a mapping function, then flattens the result into a new array. It is identical to a map() followed by a flat() of depth 1.

- Example:

```
let numbers = [1, 2, 3];
let mappedFlatArray = numbers.flatMap(num => [num, num * 2]);
console.log(mappedFlatArray); // Output: [1, 2, 2, 4, 3, 6]
```

Explanation: flatMap() applies the function to each element and flattens the result.

**23. fill()**

The fill() method changes all elements in an array to a static value, from a start index (default 0) to an end index (default array length). It returns the modified array.

- Example:

```
let array = [1, 2, 3, 4, 5];
array.fill(0, 2, 4);
console.log(array); // Output: [1, 2, 0, 0, 5]
```

Explanation: fill(0, 2, 4) replaces elements from index 2 to 3 with 0.

**24. copyWithin()**

The copyWithin() method shallow copies part of an array to another location in the same array and returns it, without modifying its length.

- Example:

```
let array = [1, 2, 3, 4, 5];
array.copyWithin(0, 3, 4);
console.log(array); // Output: [4, 2, 3, 4, 5]
```

Explanation: copyWithin(0, 3, 4) copies the element at index 3 to index 0.

**25. keys()**

The keys() method returns a new Array Iterator object that contains the keys for each index in the array.

- Example:

```
let fruits = ['apple', 'banana', 'orange'];
let keys = fruits.keys();
for (let key of keys) {
  console.log(key);
}
// Output:
// 0
// 1
// 2
```

Explanation: keys() provides an iterator over the keys (indices) of the fruits array.

**26. values()**

The values() method returns a new Array Iterator object that contains the values for each index in the array.

- Example:

```
let fruits = ['apple', 'banana', 'orange'];
let values = fruits.values();
for (let value of values) {
  console.log(value);
}
// Output:
// 'apple'
// 'banana'
// 'orange'
```

Explanation: values() provides an iterator over the values of the fruits array.

**27. entries()**

The entries() method returns a new Array Iterator object that contains key/value pairs for each index in the array.

- Example:

```
let fruits = ['apple', 'banana', 'orange'];
let entries = fruits.entries();
for (let [index, value] of entries) {
  console.log(index, value);
}
// Output:
// 0 'apple'
// 1 'banana'
// 2 'orange'
```

Explanation: entries() provides an iterator over the key/value pairs of the fruits array.

**28. reduceRight()**

The reduceRight() method applies a function against an accumulator and each value of the array (from right-to-left) to reduce it to a single value.

- Example:

```
let numbers = [1, 2, 3, 4];
let product = numbers.reduceRight(function(accumulator, currentValue) {
  return accumulator * currentValue;
}, 1);
console.log(product); // Output: 24
```

Explanation: reduceRight() multiplies all elements of the numbers array from right to left.

**29. toLocaleString()**

The toLocaleString() method returns a string representing the elements of the array. The elements are converted to strings using their toLocaleString() methods and are separated by a locale-specific string (such as a comma ",").

- Example:

```
let prices = [1234.56, 7890.12];
let localeString = prices.toLocaleString('en-US', { style: 'currency', currency: 'USD' });
console.log(localeString); // Output: '$1,234.56,$7,890.12'
```

Explanation: toLocaleString() formats each number in the prices array according to locale-specific conventions.

**30. toString()**

The toString() method returns a string representing the specified array and its elements.

- Example:

```
let fruits = ['apple', 'banana', 'orange'];
let stringRepresentation = fruits.toString();
console.log(stringRepresentation); // Output: 'apple,banana,orange'
```

Explanation: toString() converts the fruits array into a comma-separated string.
mahabubr
1,890,423
Blockchain in Banking: Revolutionizing the Financial Sector
Introduction Blockchain technology, initially developed for cryptocurrencies like...
27,673
2024-06-16T16:58:23
https://dev.to/rapidinnovation/blockchain-in-banking-revolutionizing-the-financial-sector-11de
## Introduction

Blockchain technology, initially developed for cryptocurrencies like Bitcoin, has evolved into a revolutionary technology impacting various sectors, including banking. This decentralized digital ledger offers a secure and transparent way to record transactions, accessible by multiple parties and resistant to tampering and fraud.

## How is Blockchain Transforming Banking?

### Enhancing Security

Blockchain's decentralized nature enhances security by eliminating single points of failure, making it less vulnerable to hacking and fraud. Each transaction is encrypted and linked to the previous one, creating a chain that is extremely difficult to alter.

### Streamlining Processes

Blockchain streamlines banking processes by automating them with smart contracts, reducing the time and cost associated with traditional operations. This automation eliminates intermediaries, speeding up transactions and reducing errors.

### Improving Transparency

Blockchain improves transparency by recording every transaction on a public ledger, accessible by all network participants. This transparency helps in reducing fraud and aids in regulatory compliance.

## What is Blockchain in Banking?

### Definition and Core Concepts

Blockchain is a distributed ledger technology that maintains a secure and decentralized record of transactions. Its core concepts include decentralization, immutability, and transparency, which are crucial for eliminating risks associated with centralization and ensuring data integrity.

### Key Components of Blockchain Technology in Banking

Key components include distributed ledger technology (DLT), smart contracts, and consensus mechanisms like Proof of Work (PoW) and Proof of Stake (PoS). These components ensure security, transparency, and efficiency in banking operations.

## Types of Blockchain Implementations in Banking

### Public Blockchains

Public blockchains are decentralized and open to anyone. They offer high transparency but face challenges in scalability and privacy.

### Private Blockchains

Private blockchains are controlled by a single organization, offering better control and efficiency but with potential security risks due to centralization.

### Consortium Blockchains

Consortium blockchains are controlled by a group of organizations, providing a balance between decentralization and efficiency, ideal for industries requiring secure and transparent interactions.

## Benefits of Blockchain in Banking

### Reduced Operational Costs

Blockchain reduces operational costs by eliminating intermediaries and automating processes, leading to significant savings.

### Increased Efficiency and Speed

Blockchain enables near real-time transactions, reducing transfer times and costs, especially in cross-border payments.

### Enhanced Security Measures

Blockchain's cryptographic security and decentralized nature make it nearly impervious to cyber-attacks and fraud, ensuring the integrity of financial data.

## Challenges of Implementing Blockchain in Banking

### Regulatory Issues

Navigating the complex regulatory landscape is a significant challenge, as blockchain's decentralized nature can conflict with existing regulations.

### Scalability Concerns

Current blockchain implementations struggle with scalability, posing a barrier to mainstream adoption in financial services.

### Integration with Existing Systems

Integrating blockchain with legacy banking systems requires significant investment in middleware and APIs, along with ensuring regulatory compliance.

## Future of Blockchain in Banking

### Predictions and Trends

Blockchain is expected to see increased adoption in cross-border payments, digital identities, and the integration of AI to enhance efficiency and reduce costs.

### Potential Innovations

Innovations like smart contracts, decentralized finance (DeFi), and blockchain-based KYC solutions hold the potential to revolutionize banking operations and enhance financial inclusivity.

## Real-World Examples of Blockchain in Banking

### Case Study 1: Cross-Border Payments

Ripple's partnership with Santander and JPMorgan Chase's JPM Coin demonstrate blockchain's potential to make international money transfers faster, cheaper, and more secure.

### Case Study 2: Fraud Reduction

Major banks like JPMorgan Chase are using blockchain to enhance security and reduce fraud by ensuring that all transactions are recorded securely and transparently.

## Conclusion

### Summary of Blockchain Benefits in Banking

Blockchain offers enhanced security, improved transparency, and increased efficiency in banking, making it a pivotal technology in modern financial services.

### Final Thoughts on the Future of Blockchain in Banking

Despite challenges, the potential benefits of blockchain in banking, such as increased security, efficiency, and inclusivity, make it a technology that could fundamentally change the financial sector.

Drive innovation with intelligent AI and secure blockchain technology! 🌟 Check out how we can help your business grow!

[Blockchain App Development](https://www.rapidinnovation.io/service-development/blockchain-app-development-company-in-usa)

[AI Software Development](https://www.rapidinnovation.io/ai-software-development-company-in-usa)

## URLs

* <https://www.rapidinnovation.io/post/how-is-blockchain-in-banking-transforming-the-industry-54b09>

## Hashtags

#BlockchainBanking #FintechRevolution #SecureTransactions #DigitalLedger #FutureOfFinance
rapidinnovation
1,890,421
B2B SaaS Benchmarks: A Complete Guide 2024
This Blog was Originally Posted to churnfree blog Winning new customers is more costly and retaining...
0
2024-06-16T16:51:47
https://churnfree.com/blog/b2b-saas-churn-rate-benchmarks/
saaschurn, churnfree, b2bsaas, churnmanagenent
This Blog was Originally Posted to [churnfree blog](https://churnfree.com/blog/b2b-saas-churn-rate-benchmarks/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution)

Winning new customers is costly, while retaining long-term customers on long-term subscription plans can be much more valuable. Fortunately, today's companies have B2B SaaS benchmarks, robust customer retention strategies, and tools to help them keep existing users and acquire new ones. Let's dive into everything you need to know about B2B SaaS benchmarks.

As churn experts at [Churnfree](https://churnfree.com/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution) we enable you to understand why your users cancel their subscriptions and assist you in retaining them with personalized onboarding experiences. Collect a pack of tips, B2B benchmarks, and guidance to improve your churn rates.

Knowing the precise definition of churn rate, and how it can subtly affect your business strategies, is imperative. First, let's explain what we mean by "churn rate." The churn rate estimates the number of users who cancel their subscriptions within a specific period. It is also a beneficial practice to calculate the revenue lost from churned users. SaaS churn rates are critical for a business's long-term undertakings and overall performance.

**Calculating SaaS Churn Rate**

The SaaS churn rate focuses on the number of users that leave your service monthly or annually. To measure the SaaS churn rate, you divide the total number of churned users by the total number of users:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cftt6vwxk4cpd1tghil3.png)

For example, if your business has 1000 customers and 80 ended their subscriptions last month, your customer churn rate is 8%. The SaaS churn rate is also known as "logo churn."

**What is Churn Rate in SaaS?**

In the context of SaaS, the churn rate is a critical metric that measures the rate at which customers stop using the service over a specific period. It is an indicator of customer retention and business health. Let's dig into how to calculate the SaaS churn rate and have a look at B2B SaaS benchmarks for 2024.

**Net SaaS Churn Rate**

To get a more accurate measurement, there's the net SaaS churn rate, which also accounts for the number of new users your business gained over a specific period. To measure it, you divide the number of churned users minus the number of new users gained over the period by the total number of users:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6j5sjm3tqbhl612mb497.png)

For example, if your company has 1000 customers, 80 of them canceled their subscription last month, and your business gained 20 new users, your net SaaS churn rate would be 6%.

Nevertheless, while customer retention is a tremendous metric for evaluating your customer base and product experience, it does not offer real insight into the revenue affected by the churned users; that is usually captured by the revenue churn rate. The real insight is gained by a tool that measures your net SaaS churn rate and helps you identify your users' behavior patterns.
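To make the arithmetic concrete, here is a minimal sketch in plain Python of the two customer-count formulas above, using the article's own example numbers:

```python
# Minimal sketch of the churn formulas above, using the article's example numbers.

def churn_rate(churned, total_at_start):
    """Simple (logo) churn rate: share of customers lost in the period."""
    return churned / total_at_start * 100

def net_churn_rate(churned, new_customers, total_at_start):
    """Net churn rate: churned customers offset by newly gained ones."""
    return (churned - new_customers) / total_at_start * 100

print(churn_rate(80, 1000))          # 8.0  -> 8% logo churn
print(net_churn_rate(80, 20, 1000))  # 6.0  -> 6% net churn
```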
**Revenue Churn Rate**

The revenue SaaS churn rate calculates the ratio of revenue lost due to downgrades, cancellations, payment failures, and other bottlenecks. To measure it, you divide the total churned revenue over a specific period by the total revenue at the beginning of that period.

Calculating revenue churn rate benchmarks involves a similar process, focusing on the financial impact:

**Determine Revenue at Start:** Note the total revenue at the beginning of the period.

**Assess Revenue Lost:** Calculate the revenue lost due to customer churn during the period.

**Compute the Rate:** Divide the lost revenue by the starting total, then multiply by 100 for the percentage.

Example: If your starting revenue was $1 million and you lost $100,000 to churn, your revenue churn rate would be (100,000/1,000,000) * 100, equating to a 10% churn rate.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pqr3aldl12n5x62dxoih.png)

For instance, if your total revenue is $2,000 and your churned revenue is $300, your revenue churn rate will be 15%.

The revenue churn rate also helps in setting different pricing tiers for subscription plans, something the plain SaaS churn rate does not capture.

Net revenue churn rates give insight into the financial impact of customer departures. Smaller B2B SaaS companies often face a net revenue churn rate ranging from 10% to 15%. In contrast, larger companies usually maintain a healthier rate of about 5% to 7%. These figures are pivotal for assessing the overall financial health and sustainability of a company.

The net revenue churn also accounts for the revenue obtained from expansion, such as upsells, add-ons, or tier upgrades. To measure it, you divide the total churned revenue minus the expansion revenue by the total revenue at the beginning of the period.

The very reason to calculate the revenue churn rate is to gain more in-depth insight into the actual revenue lost and gained. For instance, if your total revenue is $2,000, your churned revenue is $300, and you gained an additional $80 from expansion, your net revenue churn is 11%.

**Monthly Recurring Revenue / Subscription Revenue**

SaaS businesses usually operate on a monthly income basis. Therefore, another important metric for understanding when and why your customers decide to leave your service is Monthly Recurring Revenue (MRR) churn. To measure the average churn rate for subscription services, you divide the total churned MRR minus the expanded MRR by the total MRR at the beginning of the month.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/puazqa6rc842sp5nw6q5.png)

Likewise, you can track your Annual Recurring Revenue (ARR) churn if your business runs annually. Similarly, you divide the total churned revenue minus the expanded revenue by the total ARR at the beginning of the year.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ujmp4yn0t4nwmsr5l2dv.png)

Similarly, you can estimate retention and measure gross retention, net retention, or logo retention. Customer retention lets you track how successfully you keep existing users happy and gain more revenue.
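Again as a minimal sketch in plain Python, the revenue-based variants of the formulas above look like this (the figures are the article's own examples):

```python
# Minimal sketch of the revenue-based churn formulas above.

def revenue_churn_rate(churned_revenue, revenue_at_start):
    """Gross revenue churn: share of revenue lost in the period."""
    return churned_revenue / revenue_at_start * 100

def net_revenue_churn_rate(churned_revenue, expansion_revenue, revenue_at_start):
    """Net revenue churn: churned revenue offset by expansion revenue
    (upsells, add-ons, tier upgrades). Same shape works for MRR/ARR churn."""
    return (churned_revenue - expansion_revenue) / revenue_at_start * 100

print(revenue_churn_rate(300, 2000))          # 15.0 -> 15% gross revenue churn
print(net_revenue_churn_rate(300, 80, 2000))  # 11.0 -> 11% net revenue churn
```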
MRR and ARR metrics are essential for tracking the regular income generated from customers. For smaller companies, the gross MRR churn rate typically hovers around 2% to 2.5%, whereas larger companies manage to keep it at about 1%. Net MRR churn remains steady at around 2% across most companies, reflecting a consistent revenue flow despite customer churn.

**Monthly Churn Rate**

For small to medium-sized SaaS businesses, average SaaS churn rates revolve around 3% to 7%. On the other hand, around 1% is considered ideal.

**Annual Churn Rate**

An annual churn rate of about 5% or less is essential for maintaining sustainable growth. It's worth noting that larger companies typically experience lower churn rates due to their established market presence and the nature of their client contracts, which often include extended periods that limit churn.

The SaaS industry, on average, aims for an annual customer churn rate of 5% or lower for established companies. In contrast, the median gross dollar churn for SaaS companies stands at 12%, with a median annual logo churn of 13%, highlighting the challenges businesses face in minimizing revenue and customer losses year over year.

**What is a good churn rate for SaaS?**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l4rlzrc6as3xp6ts1xnz.png)

Around 5% annually is generally considered good and sustainable for SaaS companies, though acceptable rates can vary depending on specific circumstances such as company size, market, and the nature of the SaaS product.

**Annual Churn Rate:** A sustainable churn rate for SaaS companies is often cited to be around 5% annually. This means that a company retains 95% of its customers over a year.

**Monthly Churn Rate:** For a monthly churn rate, a good benchmark is less than 1% per month, which translates to about 12% annually. Enterprises and larger companies typically have lower churn rates, closer to 0.5-1% monthly, while SMBs (small to medium-sized businesses) might see higher rates, around 3-7% monthly, due to their different customer dynamics and shorter subscription cycles.

**Industry Variation:** Churn rates can vary significantly by industry and the specific type of SaaS product. For instance, B2B SaaS companies usually have lower churn rates compared to B2C SaaS companies because B2B relationships are often longer-term and more integrated into the client's operations.

**Median Churn Rates:** According to surveys, the median annual gross dollar churn for SaaS companies is about 12%, and the median annual logo churn (the rate at which customers are lost) is approximately 13%. These figures provide a broader context for what might be considered typical across the industry.

Company size plays a crucial role in determining churn rates. Smaller companies, often newer in the market or with less established products, typically see higher churn rates, ranging from 3% to 7% annually. This is partly due to their customer base's varying commitment levels and the lower financial barriers to switching services. Larger companies, with well-established products and more substantial contractual agreements, usually maintain lower churn rates of about 1% to 2% annually. These firms benefit from longer client relationships and often provide more comprehensive solutions that integrate deeply into their clients' operations, making switching more cumbersome and less likely.

**Factors Affecting Churn Rate Benchmarks in SaaS Companies**

**Voluntary vs. Involuntary Churn**

**1. Voluntary Churn:** This occurs when customers decide to cancel their subscriptions, often due to dissatisfaction with the product or service. Common reasons include a lack of needed features, poor customer support, or a better offer from a competitor.
The voluntary churn rate for B2B companies stands at 3.50%, slightly lower than B2C companies at 4.04%.

**2. Involuntary Churn:** This type happens without the customer's intention to leave, usually due to payment issues like expired credit cards or insufficient funds. It's significant to note that involuntary churn doesn't reflect dissatisfaction with the product. The churn rate for B2B and B2C companies is almost the same, with B2B having a marginally higher involuntary churn figure due to these payment-related issues.

**B2B SaaS Churn Rate Benchmarks & Challenges**

In the uncertain economic climate, knowing the natural [causes of customer churn](https://churnfree.com/blog/causes-of-customer-churn-in-saas/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution) should be the ultimate motive of every business. There are several B2B SaaS benchmarks and challenges every business faces; let's dive into why they occur and what the solutions are.

**Churn happens when businesses fail to know users' behavior patterns.**

Personalization is critical: it lets you track your users' preferences, attitudes, and willingness to pay. Amazon, for example, can rely on extensive storage of past purchase decisions to calculate what a user is prepared to pay for specific products. Using advanced analytics, it examines the motives behind past purchases: not only a user's buying history, age, gender, and location, but also monetary factors such as purchase value and monthly or yearly shopping expenditure. These metrics are then used to estimate the likelihood of new purchases users are willing to make.

This way, you can decide on an accurate price plan, so your users don't feel overcharged and don't cancel their subscriptions to join a competitor's service at a lower price.

**What do your users actually want?**

Personalization is essential because businesses profit from it, and users expect it. One study reveals that 74% of users find mass marketing frustrating, and many businesses have observed a steep decline in the significance of newsletters, birthday mailings, and matching campaigns. Users' expectations grow as they are exposed to more personalized advertising through digital media and other advertising channels. Paying attention to user behavior patterns can be helpful in building [customer retention strategies.](https://churnfree.com/blog/customer-retention-strategies/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution)

**How to track user behavior patterns successfully?**

Three factors matter most to a successful and scalable approach to measuring user behavior patterns: data, triggers, and methods.

**1. Getting the correct data (not more data)**

A recent survey shows 67% of participants said their biggest challenge was using the right tools to find the right data. Obtaining the correct data is key to understanding best-in-class user behavior patterns, but it can pose a challenge in practice. One common problem is how to drill down into user data to spot the discrepancies and gaps causing SaaS churn, generate insights, and personalize product promotion campaigns for the better.

The market is swamped with wonderful tools to help retain your customers. Choosing the best tool for your needs is an imperative task.
**Churnfree** is an amazing [churn prediction software](https://churnfree.com/blog/churn-prediction-software/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution) that has been helping several businesses retain their customers and make the best business decisions.

- One of its greatest benefits is its intelligent, automated metrics that drill down into the right data and help you take action on the discrepancies that made customers leave your product.
- It has advanced automated metrics to predict user behavior more accurately. Metrics such as customer lifetime value (CLV) or upselling probabilities help you make better decisions.
- It helps extract relevant external data that might contain a user's buying power, demographic group, and other characteristics.
- It builds cancel flows for customers who want to unsubscribe, helping you retain them.

**2. Determining the right triggers for successful user contact**

Finding the right time to contact a user is key. That means finding and testing certain events associated with a specific user, a form of trigger-based marketing well suited to the B2B SaaS sector. Examples of triggers include visits to web stores and web pages, online searches for specific products to compare prices, and clicks on FAQs about finding a better solution. Businesses have to pay close attention to such triggers. For instance, if metrics show a user browsing FAQs and clicking on questions about a particular product, the system can send that user an automated message promoting the service and a discounted price plan, encouraging the user to resume using it.

**3. Using the correct methods and tech to scale up successfully**

Many B2B SaaS businesses get caught in pilot mode, running a few effective campaigns but struggling to roll them out across their user base. To scale up, they must take three actions:

- Revise their data in real time and automate their algorithms.
- Ensure their system infrastructure is firm and steady.
- Create more interfaces for their sales channels.

**FAQs**

**1. What is the typical conversion rate for B2B SaaS companies?**

The average lead-to-customer conversion rate for B2B SaaS companies generally ranges from 1% to 5%. Factors such as lead quality and the effectiveness of lead nurturing strategies can influence this rate. A conversion rate exceeding 5% is considered indicative of a highly efficient lead generation and nurturing process.

**2. What are the current churn rate benchmarks for SaaS businesses?**

For small to medium-sized SaaS businesses, the typical monthly churn rate benchmark lies between 3% and 7%. This rate is influenced by factors such as the pricing and subscription model, which affect the cost for a customer to switch services.

**3. What is an acceptable churn rate for a B2B app's free plan?**

In the context of various industries, the average churn rate for free plans in B2B applications, such as those in the Software and Business & Professional Services sectors, is around 3.8%. This contrasts with sectors like Digital Media and Entertainment, Consumer Goods and Retail, and Education, which see an average churn rate of 6.5%.

**4. What retention rate should B2B SaaS businesses aim for?**

Best-in-class SaaS businesses, irrespective of their size or industry, typically achieve a retention rate of approximately 85-87%.
This figure can vary depending on factors like company size and business model, but it serves as a benchmark for exceptional performance in customer retention. **The bottom line:** Millions of online businesses, if not more, can’t survive without the insights that algorithms provide. Those metrics offer the insights that let you determine user behavior patterns. Ensure that you have a strong infrastructure to support user retention strategies and B2B SaaS benchmarks. A [churn management software](https://churnfree.com/blog/best-churn-management-software-to-keep-your-business-afloat/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution) like Churnfree helps you understand the bottlenecks standing in the way of building customer retention strategies.
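As a closing footnote, here is a minimal sketch of how the monthly churn and retention figures quoted in the FAQs above are typically computed. The numbers are invented purely for illustration and are not output from any particular tool:

```typescript
// Hypothetical figures for one month, for illustration only.
const customersAtStartOfMonth = 1_000;
const customersLostDuringMonth = 45; // cancellations + failed payments

// Churn rate: the share of the starting customer base that left.
const monthlyChurnRate =
  (customersLostDuringMonth / customersAtStartOfMonth) * 100;

// Retention rate is simply the complement of churn.
const retentionRate = 100 - monthlyChurnRate;

console.log(`Monthly churn: ${monthlyChurnRate.toFixed(2)}%`); // 4.50%
console.log(`Retention: ${retentionRate.toFixed(2)}%`);        // 95.50%
```

A 4.5% monthly churn in this made-up example would sit inside the 3-7% benchmark range mentioned above for small to medium-sized SaaS businesses.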
churnfree
1,876,566
Do you do your homework before interview?
Do you ever feel like doing the homework before every interview? Do you feel that there is some kind...
0
2024-06-16T16:06:00
https://dev.to/sein_digital/do-you-do-your-homework-before-interview-9dc
webdev, interview, programming, career
Do you ever feel like doing the homework before every interview? Do you feel that there is some kind of test you have to pass, where you not only have to gain enough points but also outscore other candidates? But when you actually get the job, it has nothing to do with the questions, requirements and responsibilities posted in the listing. I think this is ridiculous. The recruitment process became detached from reality a long time ago, and I've seen only a few companies that actually do recruiting right. I have over 15 years of experience in the field, I've been through many recruiting processes, and I did some recruiting myself as well. I'm not saying my approach is flawless, but we have to address the elephant in the room. ## The Bad Let's start with how most interviews go. You show up. You get a ton of questions regarding technical knowledge about the tech that was listed in the job listing. But let's be honest, recruiters have a standard list of those questions. And the number of questions is limited to a degree, so most of them you can find online easily (yeah, recruiters take those questions from online sources as well). So, if you are determined, you can just study the answers to the questions without really understanding the topic! In fact, there are many people who do so! The problem becomes visible when the ability to answer questions does not correlate with the ability to solve problems in the job environment. Or in the worst-case scenario, it does, but the temperament does not fit the team, and it ends up lowering the efficiency of the whole team. Gaining 1 team member ends up losing 200% productivity. ## The Ugly And then there's the elitist approach to recruiting. This is "the top 1%" of programmers. You either have to answer all questions perfectly and beyond, solve leetcode problems, take tests on coding exercise platforms within a 2h timeframe, or build a full-blown solution within a 24h window. This is not a problem with the recruitment process. It's a problem with company culture. The grind mindset. You sacrifice everything for that company, and if you don't, they don't want you. These types of companies expect ultimate loyalty from you, but don't return any loyalty back to you. Let's be honest - companies are loyal as long as it benefits them, and no company nowadays is loyal to its employees, so you don't have any obligation to be loyal back. Anyway, for the sake of your mental health - steer away from these types of recruiting processes. ## The Good Now, the best recruiting processes are those that don't focus on knowledge, but on concepts and problem-solving skills. The intricacies of a specific tech do not matter unless there is a conceptual mindset behind them. Mindset and thinking process are the money-saving skills. And heads up, recruiters - any hard technical knowledge can be bridged very quickly with googling skills, or nowadays even prompting skills. So prohibiting googling during an interview is one of the stupidest rules ever. Why? Because the way someone looks for information is one of the most important skills to have. People do that on the job anyway, and during the interview you are pretending it does not happen. And one more aspect to consider - attitude and company culture fit. Basically, check for soft skills like self-presentation, honesty, teamwork capabilities, communication. How to do that? 
By asking questions like: "what would you do if you realized there is a bug on production", or "how would you handle a conflict if you have a disagreement about an implementation, and you are sure you represent the better solution". But you have to be very sensitive to bullshit answers. ## My protocol I'm assuming that at this point you want to recruit some talent. You might have considered other approaches, you might have your own recruiting recipe, but you might think there is some room for improvement. Certainly, there is some room for that in my recruiting protocol - it's not bulletproof. But I think it's the one that I like the most, and whenever I'm being recruited in a similar way, my green lights go on. I suddenly know that the people on the other side actually know how to do it properly. So here's my process: - Introduction (~10 minutes) - Brief technical check (~10-20 minutes) - Problem solving check (~15 minutes) - Soft skills check and debrief (~15 minutes) Of course, if you have more time, or a multiple-step recruiting process, you can extend it, but apart from the introduction, all the parts should take roughly the same amount of time. Let me explain each part briefly. ### Introduction Standard who am I, who are you, why you left. Nothing outside of the typical formula. You can explain the tech stack and team composition as well. ### Technical Check Yeah, I mentioned in "The Bad" section that you can easily learn the answers to these questions, but even so - you still need to check with simple questions whether the resume you got has anything to do with reality. But don't be tempted by very niche technical questions about things you don't actually use in production. Questions about data types or common libraries are enough to see if someone is using the technology or not. If you are asking about something technical that you do not use on a daily basis, you cannot realistically expect an answer from someone else without being a jerk. ### Problem Solving Check This is the section where you can ask how someone would solve some technical issue. How would you design an API, or a solution architecture, or consistency across multiple databases? Those questions ask about concepts and patterns, and do not focus on any specific technology. Of course, depending on the skill level of the candidate, you can scale them up and down. You can ask a junior or a mid about naming and linting problems, or clean code. The important thing is that you have to check not the answer itself, but how they got to the answer. The answer itself can be wrong (!) as long as their thinking process is solid and they are open to suggestions. ### Soft Skills Check That openness to suggestions is also a soft skills check. This is an important, if not the most important, skill check. Emotional maturity can make or break your team. You are not hiring a freelancer, but someone who will be part of a social group. In a group environment you need to keep in mind that there are a couple of situations where people have to know how to manage their emotions. And no, I don't mean they have to hide them. What you have to keep in mind is: how they handle constructive critique and suggestions, rejection of their ideas, giving feedback themselves, other people's feelings, and social outings. This is emotional intelligence. It's part of the job, even though almost every company out there does not recognize it or cultivate it. 
And assuming yours does not have a program to improve the emotional awareness of your employees, you have to look for those who have emotional self-regulation mechanisms of their own. What that means is: - the candidate can take critique into account, and does not reject it or shut down immediately - the candidate can propose ideas and solutions in a clear and easy-to-understand way - the candidate knows how to communicate during conflict, and does not escalate - the candidate is eager to participate in the social outings of the team Based on those "metrics" you can come up with questions or provoke situations to check against them in a non-obvious way. Asking some of the questions directly can also help, but you are risking getting a logical answer, not the actual behavior. ## Conclusion The recruitment process might be one of the most annoying processes in our line of work. Compared to other industries, software engineers and IT people change jobs way more often. Not surprising, considering that switching jobs is one of the best ways to get a pay rise in an industry where wage changes also happen more often than in any other industry. However, even though it is mostly intellectual and creative work, recruiters and companies sometimes seem to treat us like construction workers. But maybe there's a better way? I would recommend anyone who participates in the recruitment process to think this through a little bit more. If you have your own thoughts and suggestions, I'm happy to hear them in the comments.
sein_digital
1,890,420
How to Make the Most of Rephrasetool.com: Your Guide to Effortless Paraphrasing
How to Make the Most of Rephrasetool.com: Your Guide to Effortless Paraphrasing Picture this: You’re...
0
2024-06-16T16:51:16
https://dev.to/alphapik/how-to-make-the-most-of-rephrasetoolcom-your-guide-to-effortless-paraphrasing-1iae
productivity, tooling, startup
<h2>How to Make the Most of Rephrasetool.com: Your Guide to Effortless Paraphrasing</h2> Picture this: You’re working on a report or an important email, but the right words just aren’t coming to you. It’s a common struggle, and it’s exactly where Rephrasetool.com comes into play. This site is designed to help you rephrase and refine your writing with ease. In this article, we’ll dive deep into how to use Rephrasetool.com effectively and why it’s a must-have tool for anyone looking to enhance their writing. <h3>Why Choose Rephrasetool.com? Solving Your Writing Challenges</h3> Writing can often feel like an uphill battle, especially when trying to articulate complex thoughts clearly. Rephrasetool.com is here to take your text and transform it into something more polished and engaging. Let’s explore some common writing challenges that Rephrasetool.com can help you overcome: <ul> <li><strong>Writer’s Block:</strong> Struggling to find the right words? This tool offers fresh ways to express your ideas.</li> <li><strong>Repetitive Language:</strong> If you find yourself stuck in a loop of similar words and phrases, Rephrasetool.com provides diverse alternatives to keep your writing interesting.</li> <li><strong>Complex Sentences:</strong> Simplify those long, winding sentences into something more concise and easier to read.</li> <li><strong>Maintaining Tone:</strong> Whether you need to keep it formal or want a more relaxed tone, Rephrasetool.com adapts to your style.</li> </ul> <h3>How to Use Rephrasetool.com: A Step-by-Step Guide</h3> Getting started with Rephrasetool.com is a breeze. Follow these steps to rephrase your text effectively: <ol> <li><strong>Visit the Site:</strong> Head over to <a href="https://rephrasetool.com" target="_blank" rel="noopener">Rephrasetool.com</a>.</li> <li><strong>Input Your Text:</strong> Copy and paste the text you want to rephrase into the provided box.</li> <li><strong>Choose Your Preferences:</strong> Select the desired tone and style from the available options to suit your needs.</li> <li><strong>Rephrase:</strong> Click the ‘Spin’ button to generate new versions of your text.</li> <li><strong>Review and Edit:</strong> Examine the rephrased text to ensure it meets your expectations and make any necessary adjustments.</li> <li><strong>Copy and Use:</strong> Once you’re satisfied with the result, copy the rephrased text and use it in your work.</li> </ol> <h3>Real-Life Example: Rephrasetool.com in Action</h3> Let’s see how Rephrasetool.com can transform a piece of writing with a real-world example: Original Text: “I am writing to inform you about the upcoming changes to our policy. It is important that all employees are aware of these changes to comply with company regulations.” Rephrased Text: “I’d like to let you know about some important updates to our policy. Please review these changes to ensure we all adhere to the company guidelines.” Notice the difference? The rephrased text is more engaging and direct, making the communication clearer and more accessible. <h3>The Role of Technology in Enhancing Writing</h3> Rephrasetool.com utilizes advanced technology to analyze your text and suggest improvements. It understands the context and provides alternatives that make your writing clearer and more effective. This technology-driven approach ensures that the rephrased text stays true to the original meaning while enhancing readability and clarity. 
For more on how technology is reshaping writing tools, check out this <a href="https://www.forbes.com/sites/bernardmarr/2020/11/09/the-10-best-examples-of-how-technology-is-already-used-in-our-everyday-life/" target="_blank" rel="noopener">Forbes article</a>. <h3>Why Use Technology for Rephrasing?</h3> You might wonder why rephrasing tools like Rephrasetool.com are so effective. Here’s why using technology for rephrasing tasks makes a huge difference: <ul> <li><strong>Efficiency:</strong> It processes your text quickly, saving you time and effort.</li> <li><strong>Accuracy:</strong> It offers precise alternatives that are contextually appropriate and maintain the original message.</li> <li><strong>Diversity:</strong> It provides a wide range of vocabulary options, making your writing more varied and interesting.</li> <li><strong>Consistency:</strong> It helps maintain a uniform tone and style throughout your text.</li> </ul> For a deeper look at how rephrasing tools are revolutionizing writing, explore this <a href="https://builtin.com/writing-tools" target="_blank" rel="noopener">guide on writing tools</a>. <h3>Conclusion: Rephrasetool.com is Your Ultimate Writing Companion</h3> In a world where clear and effective communication is essential, Rephrasetool.com offers a reliable solution to enhance your writing effortlessly. Whether you’re facing writer’s block, trying to simplify complex ideas, or just looking for a fresh way to phrase your thoughts, Rephrasetool.com is here to help. So, the next time you’re struggling to find the right words, remember to visit Rephrasetool.com. It’s your go-to tool for rephrasing and refining your text, making your writing clear, engaging, and effective. Check out <a href="https://rephrasetool.com" target="_blank" rel="noopener">Rephrasetool.com</a> today and start transforming your writing!
alphapik
1,890,419
Understanding Advanced AI Techniques: RAG, Fine-Tuning, and Beyond
Hey there! If you’ve been keeping up with the latest in artificial intelligence, you know it’s...
0
2024-06-16T16:48:54
https://dev.to/gervaisamoah/understanding-advanced-ai-techniques-rag-fine-tuning-and-beyond-23cn
ai, machinelearning, deeplearning
Hey there! If you’ve been keeping up with the latest in artificial intelligence, you know it’s evolving at breakneck speed. Today, we’re diving into some advanced AI techniques and concepts that can help you get the most out of large language models (LLMs). We’ll talk about Retrieval Augmented Generation (RAG), fine-tuning, Reinforcement Learning from Human Feedback (RLHF), and more. Plus, we’ll cover practical applications, limitations, and tips for choosing the right model and effective prompting. Ready? Let’s get started! ## Retrieval Augmented Generation (RAG) ### What is RAG? First up, let’s talk about Retrieval Augmented Generation (RAG). This technique is like giving your AI a superpower: the ability to look up and incorporate external knowledge before generating an answer. RAG works in three simple steps: 1. **Document Retrieval**: When you ask a question, RAG searches for relevant documents that might contain the answer. 2. **Incorporate Retrieved Text**: It then incorporates the retrieved text into an updated prompt. 3. **Generate Answer**: Finally, it generates an answer from this new, context-rich prompt. ### How RAG Can Be Used RAG can be used to build very useful applications: - **Chat with PDF files**: Imagine being able to chat with the content of a PDF document, getting the information you need quickly and efficiently. - **Answering questions based on website articles**: Need to find specific information from an article? RAG can pull in relevant text to give you a precise answer. RAG is all about making your AI smarter by giving it the tools to find and use external information. It’s a powerful way to enhance the accuracy and relevance of generated content. Now, let’s transition to another crucial aspect of improving AI performance: fine-tuning and alignment. ## Fine-Tuning and Alignment ### What Are Fine-Tuning and Alignment? Fine-tuning is like giving your AI a special education. Instead of general knowledge, it gets trained on a specific dataset to perform particular tasks better. Alignment, on the other hand, ensures that your AI’s behavior aligns with human values and expectations. ### Why Fine-Tune? Fine-tuning is essential for several reasons: - **Specific Knowledge**: It helps your AI gain expertise in areas not covered by its general training. - **Smaller Models**: Enables smaller models to perform specialized tasks effectively. - **Complex Tasks**: Useful for tasks that are hard to specify in a simple prompt. ### Example of Fine-Tuning Consider summarizing customer service calls. By fine-tuning an LLM on the structure you want for summaries — like including the product name/ID, the customer’s name/ID, and the request type — you can get consistent and useful outputs every time. ### What About Alignment? Alignment goes a step further by ensuring that the AI’s responses are in line with human values and ethical guidelines. This involves adjusting the model’s behavior to avoid generating harmful or biased content and to be more aligned with what humans consider appropriate and useful. ### Why Alignment Is Important Aligning AI models with human values is crucial to prevent the misuse of AI and ensure that it serves the best interests of users. It helps in: - **Reducing Bias**: Ensuring the AI does not propagate harmful stereotypes or biases. - **Enhancing Safety**: Preventing the AI from generating toxic or harmful content. - **Building Trust**: Creating more reliable and trustworthy AI systems. 
### Example of Alignment For instance, if you’re using an AI to provide medical advice, alignment ensures that the responses are not only accurate but also ethical and empathetic. This involves training the AI to understand the nuances of sensitive topics and respond appropriately. Fine-tuning and alignment ensure your AI is both knowledgeable and reliable. Let’s see next how we can make AI even better at following instructions and improving its responses through instruction tuning and RLHF. ## Instruction Tuning and RLHF ### Definitions **Instruction Tuning**: This process involves fine-tuning the model to follow specific instructions more effectively. **Reinforcement Learning from Human Feedback (RLHF)**: In this technique, the model learns from human feedback, getting rewards for good answers and penalties for bad ones. ### Why Use These Techniques? These techniques make your AI more responsive and aligned with user needs. Instruction tuning ensures that the AI understands and follows instructions correctly, while RLHF helps improve the quality of its answers by learning from feedback. By leveraging instruction tuning and RLHF, you can fine-tune the behavior and responses of your AI to meet specific needs and standards. Next, let’s look at some limitations of these powerful models. ## Limitations of LLMs Despite their capabilities, LLMs have some limitations: - **Knowledge Cutoffs**: They are trained on data up to a certain point and might not have the latest information. - **Hallucinations**: They can sometimes generate incorrect or nonsensical answers. - **Limited Input and Output Length**: There are constraints on the length of input and generated text. - **Bias and Toxicity**: They can produce biased or harmful content. Being aware of these limitations helps in setting realistic expectations and using AI responsibly. Now, let’s talk about choosing the right model for your needs. ## Choosing a Model ### Closed-Source Models **Pros**: Easy to integrate into applications, often more powerful and cost-effective. **Cons**: Potential risk of vendor lock-in. ### Open-Source Models **Pros**: Full control over the model, can run on your device, complete control over data privacy and access. **Cons**: May require more technical expertise to implement and maintain. Choosing between closed-source and open-source models depends on your specific needs, technical expertise, and priorities regarding control and privacy. Finally, let’s look at some tips for better prompting to get the most out of your AI. ## Tips for Better Prompting To get the best results from an LLM: - **Be Detailed and Specific**: Provide clear instructions first, followed by sufficient context. - **Guide the Model**: Encourage the model to think through its answer. - **Iterative Improvement**: Experiment with and refine your prompts. - **Use System and User Prompts**: Define and use different types of prompts effectively to guide the model. Effective prompting can significantly enhance the quality of the AI’s output, making it a more useful tool for various tasks. ## Conclusion AI holds immense potential for automation and augmentation across various fields. By understanding and applying techniques like RAG, fine-tuning, and RLHF, we can leverage AI to create better tools and solutions. Remember to use AI responsibly to improve not just your life but also the world around you. Happy AI exploring!
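P.S. To make the three-step RAG workflow from earlier a bit more concrete, here is a minimal sketch. The two helper functions are hypothetical stand-ins (not a real SDK); in practice, a vector store would back the retrieval step and an LLM client would back the generation step:

```typescript
// Hypothetical stand-ins: swap in your real vector store / LLM client.
async function searchDocuments(query: string): Promise<string[]> {
  return [`(pretend this is a document relevant to: ${query})`]; // step 1 stub
}
async function generateText(prompt: string): Promise<string> {
  return `(pretend this is the model's answer to a ${prompt.length}-char prompt)`; // step 3 stub
}

async function answerWithRag(question: string): Promise<string> {
  // 1. Document retrieval: find text relevant to the question.
  const docs = await searchDocuments(question);

  // 2. Incorporate the retrieved text into an updated, context-rich prompt.
  const prompt = [
    'Answer the question using only the context below.',
    '',
    'Context:',
    docs.join('\n---\n'),
    '',
    `Question: ${question}`,
  ].join('\n');

  // 3. Generate the answer from the augmented prompt.
  return generateText(prompt);
}

answerWithRag('What does the refund policy say?').then(console.log);
```

The shape stays the same regardless of which retrieval backend or model you plug in; only the two helpers change.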
gervaisamoah
1,890,418
Guideline For Newly Redux Learner
From my experience of learning redux , I am sharing a guideline for newly redux learners. I hope my...
0
2024-06-16T16:47:23
https://dev.to/checkiamsiam/guideline-for-newly-redux-learner-13cp
redux, javascript, react, mern
From my experience of learning Redux, I am sharing a guideline for new Redux learners. I hope my guideline will help you gain Redux knowledge without suffering any confusion. The first thing that you must know before starting a journey with Redux is: what is Redux, actually? Basically, Redux is a state management JavaScript library. It helps us manage state in a predictable way, and its state works globally: you can get and update the state anywhere on your website. And then, if you are a React developer like me, you will face a confusing situation with Redux JS, React-Redux and Redux Toolkit. This confusion will eat a large amount of your time, and you will be late to start your Redux learning journey. So my opinion is: just focus on learning and exploring Redux JS first, because it's the main Redux logic. Then, if you want to use Redux in your React app, you have to explore the React-Redux library, and then, when you are feeling comfortable with Redux, you can learn Redux Toolkit. Redux Toolkit is not anything external to Redux; Redux Toolkit is for managing Redux logic in an easier way. There you will also find another name, RTK Query; it's like the React Query library and will help you do queries in your application. Then, after gaining knowledge about Redux logic and how to implement it, you must practice. For React developers, React hooks are good enough for managing state, and the Context API is also good for using state globally, but in a large application it's better to manage your state in a different and predictable way. So in a large application, Redux gives us the power to manage our state. And my advice is to go to YouTube, find a large application with a Redux implementation, and practice it. Once you have gained the Redux knowledge and practiced it, you will feel comfortable using Redux in large applications.
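To give new learners a concrete first taste of what this looks like in practice, here is the classic counter example written with Redux Toolkit. It's a minimal sketch along the lines of the official docs, not a full app:

```typescript
import { createSlice, configureStore } from '@reduxjs/toolkit';

// A "slice" bundles a piece of state with the reducers that update it.
const counterSlice = createSlice({
  name: 'counter',
  initialState: { value: 0 },
  reducers: {
    // Redux Toolkit uses Immer, so this "mutation" is actually safe.
    increment: (state) => {
      state.value += 1;
    },
    decrement: (state) => {
      state.value -= 1;
    },
  },
});

export const { increment, decrement } = counterSlice.actions;

// One global, predictable store for the whole app.
export const store = configureStore({
  reducer: { counter: counterSlice.reducer },
});

// State can be read and updated from anywhere:
store.dispatch(increment());
console.log(store.getState().counter.value); // 1
```

In a React app, you would then wrap your component tree in React-Redux's `<Provider store={store}>` and read the state with `useSelector`.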
checkiamsiam
1,890,416
Rate Limiting and DDoS
What is Rate Limiting? Rate Limiting is a technique to control the rate at which requests (such as...
0
2024-06-16T16:46:48
https://dev.to/nirvanjha2004/rate-limiting-and-ddos-4j56
webdev, beginners, programming, tutorial
**What is Rate Limiting?** 1. Rate limiting is a technique to control the rate at which requests (such as GET, POST, PUT etc.) are made to a service by a client or an application. 2. This is achieved by restricting the number of requests that a client makes to the server in a specified amount of time. For example, you can make only 100 requests in a specified time interval, say 30 secs. It means if you made 100 requests to a website, then you have to wait for 30 seconds, and only then are you allowed to make 100 requests again. **Why Rate Limiting?** 1. Preventing Overload: Rate limiting controls how often a user or system can make requests to a service. This helps prevent overuse of resources, ensuring that the system remains available and responsive for all users. For example, rate limiting can stop a single user from making thousands of login attempts in a minute, which could otherwise degrade service for others. 2. Mitigating Abuse: Without rate limiting, an application could be more susceptible to abuse such as brute force attacks on passwords. (Brute force attacks: in these attacks, the attacker tries to gain access to the system illegally. For example, if the site needs a 4-digit OTP to log in, the attacker will run an algorithm and try all the 4-digit numbers from 1000 to 9999. This is a hit-and-trial method; the algorithm tries each and every OTP.) By limiting how often someone can perform an action, it reduces the feasibility of such attacks. 3. Managing Traffic: In high-traffic scenarios such as movie ticket booking, rate limiting helps to manage the traffic on the website and ensures a fairer distribution of services to the users. 4. DDoS Protection: A DDoS attack means attacking the website from multiple sources, which can make the website unavailable. DDoS protection mechanisms can identify such malicious traffic and filter it. (Sounds overwhelming? Just look at the figures below.) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9206ynbsm3q68vo2ijv1.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/app0pdi1ukavicxzyr87.png) So the question arises: **Where can you commonly find rate limiters?** 1. On the login pages of sites, where there is a higher chance of brute force attacks. 2. On e-commerce sites: suppose there is a sale on sneakers; attackers will send multiple requests to jam the website so that normal users can't access it. 3. API endpoints, email sending etc. **How to write the code for a rate limiter:** Step 1: Add the dependency: npm i express-rate-limit Step 2: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4vizuxrjy55jcq8p5gwb.png) Step 3: Use this middleware on whichever endpoint you need. An example of how to use this middleware is attached below (a runnable text version of this setup is also sketched at the end of this post). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m82s82x2yosgpj2wu17d.png) Now the PROBLEM arises: your server is still vulnerable to DDoS. Though DDoS is rarely used for password reset, it is usually used to choke a server. How can you save your reset password endpoint? 1. You can implement logic in the rate limiter code so that only 3 resets are allowed per email sent out. OR 2. You can implement CAPTCHA logic. How does a captcha work? You can use various tools such as Cloudflare Turnstile. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9h2u708fpqyd22v0dz11.png) To use a captcha, here is an example of how you can do it: Step 1: Go to Cloudflare Turnstile and click on ADD SITE ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j9ysrft8l8w132osi2lc.png) Step 2: Add a name and domain. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i9tm5pk57a1r97x9bb4t.png) Step 3: You will get a site key and a secret key (this secret key is used to verify tokens, as discussed earlier). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6m98csqyusnk8ro2rqdu.png) Step 4: Create a React page. Step 5: Add the dependency by running this command in the terminal: npm i @marsidev/react-turnstile Step 6: Update App.tsx ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/svht5dk634ls6e0adsen.png) Step 7: Update the backend code: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ozyczr5jshls65aeo5lc.png) And you are good to go :) Do share your reviews in the comments!! Thanks
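P.S. Since the steps above are shown as screenshots, here is a rough text sketch of what the server side might look like, combining the rate limiter from Steps 1-3 with the Turnstile check from Step 7. Treat it as a hedged starting point: option names can vary between express-rate-limit versions, and the route and environment variable names here are made up for illustration. The siteverify URL and its `secret`/`response` fields are Cloudflare's documented verification API.

```typescript
import express from 'express';
import rateLimit from 'express-rate-limit';

const app = express();
app.use(express.json());

// Allow at most 100 requests per IP in a 30-second window.
const limiter = rateLimit({
  windowMs: 30 * 1000, // 30 seconds
  max: 100,            // requests per IP per window
  message: 'Too many requests, please try again later.',
});

// Ask Cloudflare whether the token produced by the React widget is valid.
async function verifyTurnstileToken(token: string): Promise<boolean> {
  const res = await fetch(
    'https://challenges.cloudflare.com/turnstile/v0/siteverify',
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body: new URLSearchParams({
        secret: process.env.TURNSTILE_SECRET_KEY ?? '', // from Step 3
        response: token,
      }),
    },
  );
  const data = (await res.json()) as { success: boolean };
  return data.success;
}

// A rate-limited, captcha-protected password reset endpoint.
app.post('/reset-password', limiter, async (req, res) => {
  const humanOk = await verifyTurnstileToken(req.body.token);
  if (!humanOk) {
    return res.status(403).send('Captcha verification failed.');
  }
  // ...send the actual reset email here...
  res.send('If the account exists, a reset link has been sent.');
});

app.listen(3000);
```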
nirvanjha2004
1,889,703
Hands-On with Gleam: Building and Improving a Binary Search Tree
Gleam is a functional programming language that proudly aims to be simple. This means you have a few...
0
2024-06-16T16:46:32
https://dev.to/micskeil/always-look-deeper-with-gleam-2enf
patternmatching, gleam, functional, erlang
**Gleam is a functional programming language that proudly aims to be simple. This means you have a few straightforward tools to solve problems. That's why it's such a great language for zoning out from other perspectives and allowing you to solve coding problems differently than you are used to.** Let's zone out: build a binary search tree and see how we can improve the code step by step, and what we can learn from it. *Binary search trees perform better than arrays when inserting or retrieving sortable data. They are built from nodes, with each node containing data and two pointers to other nodes. The left node has a value smaller than or equal to the value stored in the current node, while the right node stores a value larger than that stored in the current node.* In Gleam you can write a type to represent this: ``` pub type Tree { Node(data: Int, left: Tree, right: Tree) Nil } ``` In Gleam, we can use pattern matching and the 'case' expression to add new data to the tree. My first attempt at writing a function to add a new node to a tree was as follows: ``` fn add_to_tree(tree: Tree, data: Int) -> Tree { case tree { Node(tree_data, Nil, right) if data <= tree_data -> { Node(tree_data, Node(data, Nil, Nil), right) } Node(tree_data, left, Nil) if data > tree_data -> { Node(tree_data, left, Node(data, Nil, Nil)) } Node(tree_data, left, right) if data <= tree_data -> { Node(tree_data, add_to_tree(left, data), right) } Node(tree_data, left, right) if data > tree_data -> { Node(tree_data, left, add_to_tree(right, data)) } Nil -> Node(data, Nil, Nil) _ -> panic as "Invalid tree" } } ``` A few things might bother you about this code. Primarily, the last branch shouldn't be needed. However, since Gleam doesn't try to evaluate any complex pattern with 'if', the code won't compile without that last branch, even though we know it is impossible for the data to take that route. #### Nest a little bit. Nest a little bit. Coming from JavaScript, where "nevernesting" is a principle, it might feel unnatural, but in Gleam, we can sometimes nest and benefit from it. First, let's move the comparison one nested level down. I've included a picture from my editor of this refactor to illustrate how easy and nice it is, thanks to the Gleam language server. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d9bakk24j87zgvupbjav.png) Let's delete those lines marked unreachable. ``` fn add_to_tree(tree: Tree, data: Int) -> Tree { case tree { Node(tree_data, left, right) -> { case data <= tree_data { True -> Node(tree_data, add_to_tree(left, data), right) False -> Node(tree_data, left, add_to_tree(right, data)) } } Nil -> Node(data, Nil, Nil) } } ``` Much better. Notice how we moved the real work of inserting the data to the `Nil` case; it means our recursive function now has only **one base case** and one branch for deciding the direction in the tree. A last optimization could be to put the base case in the first position, but I will let you do that. In summary, by leveraging Gleam's simplicity and powerful pattern matching, we streamlined our binary search tree implementation. We reduced complexity and made the code more intuitive, enhancing readability and maintainability. With each step, we learned to harness Gleam's features to write cleaner, more efficient code. Happy coding!
micskeil
1,890,413
Enhancing Rust Enums in the State Pattern
Recap In my previous article, I discussed how Rust enums should be strongly considered...
0
2024-06-16T16:39:46
https://dev.to/digclo/enhancing-rust-enums-in-the-state-pattern-35pa
rust, designpatterns, learning
# Recap In my previous [article](https://dev.to/digclo/state-pattern-with-rust-enums-61g), I discussed how Rust enums should be strongly considered when the solution benefits from a state machine. The strongest argument for this is the fact that the Rust compiler will inform you when a state variant isn't covered in a match expression. This stemmed from a state pattern example provided by the official [Rust book](https://doc.rust-lang.org/stable/book/ch17-03-oo-design-patterns.html#implementing-an-object-oriented-design-pattern). The sample scenario they used was an article that was required to go through the states of `Draft`, `PendingReview`, and `Published`. Using a similar strategy as I explained in my original article, I came up with the following code: _post.rs_ ```rust use crate::state::{ArticleState, ArticleTransition}; #[derive(Default)] pub struct Post { state: ArticleState, content: String, } impl Post { pub fn add_text(&mut self, text: &str) { self.content.push_str(text); } pub fn content(&self) -> &str { self.state.content(&self.content) } pub fn request_review(&mut self) { self.state.update_state(ArticleTransition::RequestReview); } pub fn approve(&mut self) { self.state.update_state(ArticleTransition::Approve); } } ``` _state.rs_ ```rust enum State { Draft, PendingReview, Published, } pub enum ArticleTransition { RequestReview, Approve, } pub struct ArticleState { state: State, } impl Default for ArticleState { fn default() -> Self { Self { state: State::Draft, } } } use ArticleTransition as T; use State as S; impl ArticleState { pub fn update_state(&mut self, transition: ArticleTransition) { match (&self.state, transition) { // Handle RequestReview (S::Draft, T::RequestReview) => self.state = S::PendingReview, (_, T::RequestReview) => (), // Handle Approve (S::PendingReview, T::Approve) => self.state = S::Published, (_, T::Approve) => (), } } pub fn content<'a>(&self, article_post: &'a str) -> &'a str { match self.state { S::Published => article_post, _ => "", } } } ``` # New Requirements While this code satisfies the test cases described by the Rust book, the following [section](https://doc.rust-lang.org/stable/book/ch17-03-oo-design-patterns.html#trade-offs-of-the-state-pattern) shares some plausible modifiers to the requirements of our code. The new requirements are as follows: - Add a reject method that changes the post’s state from PendingReview back to Draft. - Require two calls to approve before the state can be changed to Published. - Allow users to add text content only when a post is in the Draft state. Hint: have the state object responsible for what might change about the content but not responsible for modifying the Post. The first and third requirements are fairly straightforward in execution with our enums and match pattern. However, the second requirement exposes yet another feature we can utilize in Rust. Because an article in the `PendingReview` state requires two approvals before moving to `Published`, we will need a reference to the current count of approvals. # Enums Can Contain Fields An initial idea might be to add a mutable field to our state struct, but this would be disconnected from our enum definition. We only really need an approval count when the article is in the state of `PendingReview`. Another feature Rust enums provide is the ability to store fields internally inside a single enum variant. Let's try this with our `PendingReview` state. 
```rust enum State { Draft, PendingReview { approvals: u8 }, Published, } ``` Now we have a stateful field for our `PendingReview` state that holds the current count of approvals. This will require that any use of `PendingReview` must also acknowledge the `approvals` field. We can now update our match arm for `PendingReview` to include the reference to `approvals` and run a simple condition of whether we should set the state to `Published`. ```rust (S::PendingReview { approvals }, T::Approve) => { let current_approvals = approvals + 1; if current_approvals >= 2 { self.state = S::Published } else { self.state = S::PendingReview { approvals: current_approvals, } } } ``` And that's it. By adding a field to a single enum variant, we now have additional context for this single expression without the need to store any reference in the root of our `State` struct. This also makes our code easier to understand as an unfamiliar contributor can easily deduce that the `approvals` field only matters when our state is in `PendingReview`. # Full Code Example _post.rs_ ```rust use crate::state::{ArticleState, ArticleTransition}; #[derive(Default)] pub struct Post { state: ArticleState, content: String, } impl Post { pub fn add_text(&mut self, text: &str) { let is_mutable = self.state.is_text_mutable(); match is_mutable { true => { self.content.push_str(text); } false => (), } } pub fn content(&self) -> &str { self.state.content(&self.content) } pub fn request_review(&mut self) { self.state.update_state(ArticleTransition::RequestReview); } pub fn approve(&mut self) { self.state.update_state(ArticleTransition::Approve); } pub fn reject(&mut self) { self.state.update_state(ArticleTransition::Reject) } } ``` _state.rs_ ```rust enum State { Draft, PendingReview { approvals: u8 }, Published, } pub enum ArticleTransition { RequestReview, Approve, Reject, } pub struct ArticleState { state: State, } impl Default for ArticleState { fn default() -> Self { Self { state: State::Draft, } } } use ArticleTransition as T; use State as S; impl ArticleState { pub fn update_state(&mut self, transition: ArticleTransition) { match (&self.state, transition) { // Handle RequestReview (S::Draft, T::RequestReview) => self.state = S::PendingReview { approvals: 0 }, (_, T::RequestReview) => (), // Handle Approve (S::PendingReview { approvals }, T::Approve) => { let current_approvals = approvals + 1; if current_approvals >= 2 { self.state = S::Published } else { self.state = S::PendingReview { approvals: current_approvals, } } } (_, T::Approve) => (), (S::PendingReview { .. }, T::Reject) => self.state = S::Draft, (_, T::Reject) => (), } } pub fn content<'a>(&self, article_post: &'a str) -> &'a str { match self.state { S::Published => article_post, _ => "", } } pub fn is_text_mutable(&self) -> bool { matches!(self.state, S::Draft) } } ```
digclo
1,890,409
Exploring the World of Generative AI: Key Takeaways
I am excited to share that I have recently completed the "Introduction to Generative AI" course, and...
0
2024-06-16T16:30:57
https://dev.to/bishop_bhaumik/exploring-the-world-of-generative-ai-key-takeaways-3hea
ai, machinelearning, cloud, devops
I am excited to share that I have recently completed the "Introduction to Generative AI" course, and it has been an eye-opening journey into the realm of artificial intelligence. Here are some of the key insights and learnings I've gained from this immersive experience: **Understanding Generative AI:** One revolutionary area of artificial intelligence is called "generative AI," which is particularly good at producing original material in text, image, audio, and video formats. Based on the patterns they discover from large datasets, generative models exhibit the amazing capacity to produce unique and significant outputs, in contrast to typical AI models that concentrate on classification or prediction. **Differences Between Generative and Discriminative AI:** One of the fundamental concepts I delved into was the distinction between generative and discriminative AI models. Generative AI models, as I learned, are designed to create new content, while discriminative AI models focus on classifying or distinguishing between different categories based on given inputs. This understanding is crucial in leveraging AI effectively across different applications and industries. **Key Machine Learning Principles:** Machine learning (ML) encompasses various approaches to learning from data, each serving distinct purposes. I also went through the key learning paradigms of machine learning and how their algorithms work: Supervised learning involves training models on labeled data, where algorithms learn patterns to make predictions or classifications based on input-output pairs. Unsupervised learning, in contrast, explores unlabeled data to discover inherent patterns or structures, clustering similar data points or reducing dimensions. Semi-supervised learning combines elements of both by leveraging a small amount of labeled data alongside a larger pool of unlabeled data, enhancing model accuracy and scalability. These learning paradigms collectively empower ML to solve complex problems across industries, from predicting consumer behavior to enhancing medical diagnostics and beyond. **Practical Applications:** The course provided practical insights into how generative AI can be applied in real-world scenarios. For instance, I learned how these models can generate realistic images, compose music, or even assist in natural language processing tasks such as text generation and translation. The versatility of generative AI opens up endless possibilities for innovation and creativity across various fields. **Reflecting on the Journey:** I've been astounded by the developments and revolutionary potential of generative AI during this course. Every module has increased my admiration for this cutting-edge technology, from comprehending its theoretical underpinnings to investigating practical applications. **Certification link:** https://www.cloudskillsboost.google/public_profiles/7adda6cf-7e52-4676-a1f1-214a4065ae33/badges/9472162 **Continuing the Learning Journey:** As I reflect on what I've discovered, I can't wait to apply these insights in my career. Through ongoing research and development, I hope to contribute to the progress of generative AI, which I think will be crucial in determining the direction of technology in the future.
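To make those learning paradigms a little more concrete, here is a tiny illustrative sketch of the data shapes involved. All of the values are invented, and no real training is happening:

```typescript
// Supervised learning: every example pairs inputs with a known label.
type LabeledExample = { features: number[]; label: string };
const supervisedData: LabeledExample[] = [
  { features: [4.9, 180], label: 'spam' },
  { features: [0.2, 12], label: 'not spam' },
];

// Unsupervised learning: inputs only. The algorithm must discover
// structure (clusters, reduced dimensions) without any labels.
const unsupervisedData: number[][] = [
  [4.9, 180],
  [5.1, 175],
  [0.2, 12],
];

// Semi-supervised learning: a small labeled set plus a larger unlabeled pool.
const semiSupervisedData = {
  labeled: supervisedData,     // expensive to collect, usually small
  unlabeled: unsupervisedData, // cheap and plentiful
};

console.log(semiSupervisedData.labeled.length, 'labeled examples');
```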
bishop_bhaumik
1,882,278
Creating components for the Web #01: Accessibility (a11y) in practice with WAI-ARIA
What will we cover in this series? Hello, in this series of articles we will cover all the...
0
2024-06-16T16:28:03
https://dev.to/afonsopacifer/criando-componentes-para-web-01-acessibilidade-a11y-na-pratica-com-wai-aria-45ef
frontend, a11y, webdev, html
## What will we cover in this series? Hello! In this series of articles we will cover all the specialties that a Front-End professional needs to understand in order to create components that are **accessible**, **performant**, **responsive**, **maintainable**, **reusable**, **documented**, **customizable**, **tested**, and that meet the real needs of products and development teams, regardless of the chosen technology, be it **React.js**, **Angular**, **Vue.js** or any other. ## How will we approach each topic? I thought we could follow a script so that each topic can be covered with the same pattern; that way we don't miss any important point, and it also helps me when it's time to write hehe. ### Generic script: - Introduction - Why is it important? - Main references - Applying it in practice - How to test - Expert tips - Next steps OK, now that we are aligned on the ideas behind this series, let's move on to the first topic. Welcome to the world of **a11y** ❤️. ## Introduction First we need to understand what **a11y** means. Basically, it is an abbreviation of the English word **accessibility**: **a + 11 letters in the middle + y**. We abbreviate the term simply to make it easier to use day to day. > **Tip**: We do the same with the word **internationalization**: **i + 18 letters in the middle + n**, which gives us the famous term **i18n** (the subject of a future article). Among all the aspects related to web accessibility, today we will cover the famous **WAI-ARIA**, but first we need to get to know the **WAI**. **WAI**, or **[Web Accessibility Initiative](https://www.w3.org/WAI/)**, is a [W3C](https://www.w3.org) initiative to develop standards and supporting materials that help us understand and implement accessibility. As for **[WAI-ARIA](https://www.w3.org/WAI/standards-guidelines/aria/)**, we can define it as a specification that extends `HTML`, adding much more dynamism and control over semantics. ### Why is a11y important? Before demonstrating **WAI-ARIA** in action, and to get everyone on the same page, we need to address the importance of accessibility on the web. For that, I've gathered some links from experts who have already explained the subject very well in the Front-End community ❤️. ### References about a11y on the Web - @brunopulis: [Acessibilidade Web: como começar do jeito certo](https://brunopulis.com/introducao-acessibilidade-web/) - [Talita Pagani](http://talitapagani.com): [Acessibilidade na prática para você nunca mais esquecer](https://www.youtube.com/watch?v=4URTZHk6tz0&t=129s) - [Reinaldo Ferraz](https://reinaldoferraz.com.br): [Acessibilidade na web modo Jedi Master](https://www.youtube.com/watch?v=MMLQioPwbik) ## Applying it in practice Before applying **WAI-ARIA**, for everything to make sense we need a practical understanding of the importance of semantics, going far beyond simply saying that "semantic `HTML` is the right way" without any objective criterion. 
Let's start with a simple `toggle button` component, and to keep the example simple we will work with just `HTML` + `CSS` + `JS`: ```html
<span class="toggle">
  <span> OFF </span>
  <button class="toggle__button"></button>
  <span> ON </span>
</span>
``` Result: ![Example of a toggle button being clicked, alternating between the clicked and unclicked states](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ehjxkiryoydl01mv918q.gif) > PS: The complete code for this example, including the CSS and JavaScript, is available on [my Codepen](https://codepen.io/afonsopacifer/pen/dyEVmeJ). Can you spot the serious semantic problems with this button? No? From the point of view of an ordinary user, the behavior is quite clear and easy to understand, but semantics were not made for an "ordinary" user. The main role of semantics in `HTML` is to be interpreted by robots, whether search engines (trying to understand your page in order to rank it) or screen readers (transcribing the content, interactions and states for a user with a visual impairment). **WAI-ARIA** in particular is extremely important for adding meaning to an interface through semantics; after all, understanding that a web interface is not just something visual is part of the foundation of a truly professional Front-End developer. To make this clearer, let's test this button with a screen reader and analyze the results: {% youtube https://www.youtube.com/watch?v=FfQu6iQwuNQ %} As we can see, the `toggle-button` does not announce its `ON` or `OFF` state; in other words, we don't know whether the button is pressed or not. To fix this, we can add the `aria-pressed` property, whose possible values are: - `true`: indicates it is "pressed". - `false`: indicates it is "not pressed". - `mixed`: indicates it is between the two states. ```html
<span class="toggle">
  <span> OFF </span>
  <button class="toggle__button" aria-pressed="false"></button>
  <span> ON </span>
</span>
``` Result: {% youtube https://www.youtube.com/watch?v=w1-YzKW5MtQ %} Much better, but we still have a serious user experience problem. Although the button's behavior is clear, since there is no text content inside the `button` element it is not possible to know what the button does; the only information the screen reader has is a `toggle-button` with no description. Let's add an `aria-label` to solve this problem. ```html
<span class="toggle">
  <span> OFF </span>
  <button class="toggle__button" aria-pressed="false" aria-label="Toggles between ON and OFF modes">
  </button>
  <span> ON </span>
</span>
``` Result: {% youtube https://www.youtube.com/watch?v=fKK7igcqRmg %} We can go even further: if the `toggle-button` opens a `dropdown`, we can link the components using the `aria-haspopup` attribute, and so on. In the **States and Properties** category of **WAI-ARIA**, we have a long list of attributes available for adding semantics to our applications; I recommend checking the [complete list in the Mozilla Developer Network documentation](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/Attributes). ### Going further with `WAI-ARIA Roles` When we talk about components, most of the time we create elements that do not exist in `HTML`, and even when they do exist, it is quite common to ignore the native element and build a custom behavior, for example ignoring the `<dialog>` tag and creating a `modal` from scratch using only `<div>`. 
There is nothing wrong with this practice, as long as you make it clear to the screen reader that the role of that `<div>` is to behave like a `modal`; this is where the attributes in the **WAI-ARIA Roles** category come in. ### Let's dig deeper with a more critical example You know when we use semantic HTML elements to build something? We could, for example, create a page structure in the following non-semantic way: ```html
<div>
  <div></div>
  <div>
    <div></div>
    <div></div>
  </div>
  <div></div>
</div>
``` Or using semantic `HTML`: ```html
<div>
  <header></header>
  <main>
    <section></section>
    <section></section>
  </main>
  <footer></footer>
</div>
``` So far so good: we've learned that we should write semantically and why it matters. But what about when we can't find a perfect `HTML` tag for our need? That's where the `role` attribute comes in. Let's take the example of an error alert: ```html
<div class="snackbar-error">
  There was a problem sending your request
</div>
``` Thinking it through, we realize that simply throwing a visual alert onto the screen does not alert a screen reader user, right? For the component to correctly perform the role of an alert, we need to add that role: ```html
<div class="snackbar-error" role="alert">
  There was a problem sending your request
</div>
``` This way the screen reader informs the user that there is an alert and reads the message as soon as the alert is triggered ❤️. Once again, I recommend checking the complete list of `roles` in the [Mozilla Developer Network documentation](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/Roles). ## How to test In the examples above, I used the screen reader called **VoiceOver**, which comes pre-installed on computers running **macOS**, but of course there are screen readers for **Windows**, **Linux**, **Android**, etc... I recommend researching, installing and learning to use at least one screen reader well. After all, designing visual interfaces without a monitor would be strange, to say the least; likewise, thinking about semantics without even testing what you are doing borders on the absurd! (sorry for the lack of decorum, I got carried away hehe). ## Expert tips - When creating a component, don't skip steps just because you think `HTML` is simple. Plan the semantics, research, and test. - During your requirements analysis, create a task to research and build the component's semantics before even starting to work on the `CSS`. - Try to [recreate a button without using the `<button>` tag](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/Roles/button_role); the learning will bring good insights, trust me. ## Conclusion Remember! If you consider yourself good with `HTML` but have never opened a screen reader, think again: knowing how `HTML` works and being a professional are different things. Oh, and if you enjoyed this content and want the series to continue, leave a comment with your feedback and share it with your dev friends. And of course, follow me for more tips 😎: - [My personal website](https://afonsopacifer.github.io) - [Github](https://github.com/afonsopacifer) - [Twitter](https://x.com/afonsopacifer) - [Linkedin](https://www.linkedin.com/in/afonsopacifer/) - [Instagram](https://www.instagram.com/afonsopacifer/) Thank you for reading, and see you next time ❤️.
afonsopacifer
1,890,408
Instagram Bio For couple
Here are some quick and simple ideas for couple's Instagram bios: "Forever in love ❤️" "Better...
0
2024-06-16T16:25:36
https://dev.to/off_page_497f860f47d38578/instagram-bio-for-couple-4f7p
Here are some quick and simple ideas for couples' Instagram bios: "Forever in love ❤️" "Better together 💕" "Adventurers at heart 🌍" "Love and laughter always 💑" "Soulmates since 2020 💖" "Together through it all ❤️" "Two hearts, one journey 💞" "Creating memories together 📸" "Partners in crime 😎" "Happily ever after starts here 💍" These succinct bios are straightforward and charming, yet they perfectly convey the spirit of your partnership. For more details, click here: https://myvipbio.com/instagram-bio-for-couples/
off_page_497f860f47d38578
1,890,321
Changing typescale with CSS Variables in Angular Material Demo
A post by Dharmen Shah
0
2024-06-16T16:22:58
https://dev.to/shhdharmen/changing-typescale-with-css-variables-in-angular-material-demo-49m4
shhdharmen
1,889,899
Every Next.js website is starting to look the same
Current state of web development for some time now includes JS frameworks and libraries springing...
0
2024-06-16T16:20:33
https://dev.to/dellboyan/every-nextjs-website-is-starting-to-look-the-same-12a6
nextjs, tailwindcss, react, javascript
The current state of web development has, for some time now, included JS frameworks and libraries springing up like mushrooms after the rain. Among these, [Next.js](https://nextjs.org/) has emerged as the most popular choice for any developer who wants to build a beautiful, SEO-friendly website. However, as its popularity grows, I've noticed Next.js websites are beginning to look eerily similar. In this article, we'll explore the reasons behind this, and whether it's a bad thing or maybe even a good one. ## What do I even mean? First off, I've got to point out, I love Next.js. It's my go-to framework whenever I start a new web project; no other JS framework allows you to build something beautiful that quickly. But quickly is exactly the issue. If you want to build something quickly, it's going to come with some trade-offs. If you are working with Next.js, when starting a project you'll probably start with some boilerplate or a template, [and it seems like entire industries are popping up around Next.js boilerplates nowadays](https://www.google.com/search?q=next.js+boilerplate&rlz=1C1GCEA_enRS1049RS1049&oq=next.js+boilerpl&gs_lcrp=EgZjaHJvbWUqBwgAEAAYgAQyBwgAEAAYgAQyBwgBEAAYgAQyBggCEEUYOTIICAMQABgWGB4yCAgEEAAYFhgeMggIBRAAGBYYHjIGCAYQRRg9MgYIBxBFGD3SAQgyMTU0ajBqN6gCALACAA&sourceid=chrome&ie=UTF-8). Next (.js), you'll probably use [Tailwind CSS](https://tailwindcss.com/) and some component library, most probably [shadcn/ui](https://ui.shadcn.com/). All of these solutions are great, but, as more developers gravitate towards these ready-made components, the individuality of websites diminishes, leading to a sea of sites that look and feel the same. Something like shadcn/ui is completely customizable, but if you want to finish a website quickly, you probably won't spend that much time on customization if you are not working strictly from a design. I never really noticed this until I started spending more time on Twitter/X. If you open up Twitter/X right now and start browsing any solopreneur/build in public/next.js [thread](https://x.com/search?q=%23buildinpublic&src=hashtag_click&f=live), you will quickly notice a lot of developers asking for feedback on their websites, and they all look kind of Nextjsy. The convenience of the Next.js/Tailwind/shadcn combo is obvious. It offers a well-structured, easy, and cohesive design system that ensures visual and functional consistency across projects. However, this uniformity comes at the cost of originality. ## This has happened before What I noticed with Next.js is nothing new; it has happened before with every popular language/framework/CMS, [WordPress](https://wordpress.org/) being one of the most prominent. In the past I worked on a lot of WordPress websites, and as the community around WordPress grew, certain WordPress themes and plugins became the default option for a lot of people, like [Avada](https://themeforest.net/item/avada-responsive-multipurpose-theme/2833226), [Betheme](https://themeforest.net/item/betheme-responsive-multipurpose-wordpress-theme/7758048) and [The7](https://themeforest.net/item/the7-responsive-multipurpose-wordpress-theme/5556590), with millions of sales and downloads. You install the theme, select one of the many dummy template options, and in a couple of minutes you'll have a beautiful WordPress website...that looks like all the rest. Now I can identify a WordPress website as soon as the page loads, if it ever finishes loading. 
A similar situation happened with [Bootstrap](https://getbootstrap.com/): as the toolkit became more and more popular, the websites that used it started looking more and more alike.

So who is to blame for this, the lazy developer?

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u4xii98l38dqn2aczk67.gif)

I don't think anyone should be blamed for this, and I don't think this is even a bad thing; I think this is great! All these new technologies gave us more options, and having more options is always good. If you want to create something unique, strictly following certain design patterns while investing more time, you can certainly do so. On the other side, if you are building a product that you want to get into the hands of users so you can get feedback as soon as possible, Next.js component libraries with ready-made building blocks are a great choice.

At the end of the day the user does not care; I noticed this pattern only because I look at websites every day. If and when the product grows, you can, and certainly should, invest more time in proper design so the website follows certain brand guidelines when it actually becomes a brand.

## Conclusion

While Next.js, shadcn/ui, Tailwind CSS, and popular component libraries have undeniably transformed web development for the better, they have also contributed to a trend of sameness among websites, but they are not the first to do so. Speedy development with AI will probably not help with this trend either. But having more options is not a bad thing. Anybody working on a website now has the option to build something fast or to build something unique; it just depends on what stage of development you are at and what you are trying to build.

[Connect with me on Twitter/X.com](https://x.com/DellBoyan)
dellboyan
1,890,393
Nothing OS 3.0: Release Date, New Glyph & More
Prepare yourself for the next evolution in smartphone technology — Nothing OS 3.0. This update from...
0
2024-06-16T16:19:52
https://dev.to/journetrix/nothing-os-30-release-date-new-glyph-more-4708
nothing, nothingos, nothingphone, phone
Prepare yourself for the next evolution in smartphone technology — Nothing OS 3.0. This update from Nothing Technology is not just an upgrade; it’s a complete transformation designed to redefine how you interact with your device. From enhanced performance to innovative features, [Nothing OS 3.0](https://www.journetrix.com/2024/06/Nothing-OS-3.0-Release-Date-New-Glyph--More.html) promises to elevate your smartphone experience to unprecedented levels.

![Nothing OS 3.0](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h5devc26qkglywri740s.jpeg)

**Redefined User Interface for Seamless Navigation**

Nothing OS 3.0 introduces a sleek and unified user interface that ensures every interaction feels intuitive and visually stunning. Whether you’re navigating your home screen or diving into your favorite apps, the redesigned interface offers a harmonious experience that enhances usability and aesthetics.

Experience the new “Sound & Vibration” settings and the dynamic “Color Contrast” options in the Wallpaper & Style app. These features not only personalize your device but also showcase the seamless integration of design and functionality within Nothing OS 3.0.

**Enhanced Glyph Interface for Interactive Engagement**

The Glyph interface in [Nothing OS 3.0](https://www.journetrix.com/2024/06/Nothing-OS-3.0-Release-Date-New-Glyph--More.html) has been enhanced to provide more interactive feedback and customization options. Imagine your device responding dynamically to apps like Uber and Zomato, offering detailed feedback through its Glyph progress bar. This feature transforms routine interactions into engaging experiences, making your smartphone usage more immersive than ever.
journetrix
1,890,392
How I Built McDonald’s Drive-Thru: All-AI, All-Local
Full Article Final Result from This Project Building a 1-Person Food Delivery Business with 100%...
0
2024-06-16T16:19:43
https://dev.to/exploredataaiml/how-i-built-mcdonalds-drive-thru-all-ai-all-local-1nme
rag, genai, machinelearning, llm
[Full Article](https://medium.com/@learn-simplified/how-i-built-mcdonalds-drive-thru-all-ai-all-local-812260a0bc40)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5n0f74tol8o382d4fcuh.png)

[Final Result from This Project](https://www.youtube.com/watch?v=R7ekFRwrqhc&list=PLWQ_n2lpS5yyATPtr6dyWCblj7Y3HD_d3&index=6)

Building a 1-Person Food Delivery Business with 100% AI, Each Detail Spelled Out

**What’s This Article About?**

This article is about building an AI-powered drive-through voice assistant for a restaurant chain called AniDonald’s. The voice assistant is designed to interact with customers at the drive-through window, take their orders, provide information about the menu, and assist with the ordering process. The article walks through the implementation of the voice assistant using various technologies, including an LLM for natural language processing, speech recognition, and vector databases.
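To make that pipeline a bit more concrete, here is a minimal runnable sketch of the listen-transcribe-retrieve-reply loop. Everything in it (`transcribe_audio`, `MenuIndex`, `generate_reply`, the menu items) is a hypothetical stand-in, not the project's actual code:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the components described above (speech
# recognition, vector search over the menu, and an LLM); none of these
# names come from the project itself.

def transcribe_audio(audio_clip: bytes) -> str:
    # A real system would run a speech-to-text model here.
    return "one burger and a cola, please"

@dataclass
class MenuIndex:
    items: list

    def search(self, query: str, top_k: int = 3) -> list:
        # A real system would embed the query and hit a vector database;
        # this stub just keyword-matches the menu items.
        words = query.lower().split()
        hits = [item for item in self.items if any(w in item.lower() for w in words)]
        return hits[:top_k]

def generate_reply(prompt: str) -> str:
    # A real system would send the prompt to a locally hosted LLM.
    return "One burger and one cola. Anything else?"

menu = MenuIndex(["Burger $4.99", "Cola $1.49", "Fries $2.29"])
heard = transcribe_audio(b"<raw audio>")
context = menu.search(heard)
print(generate_reply(f"Menu context: {context}\nCustomer said: {heard}"))
```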
**Why Read This Article?**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xvwsgbvqwv4vbot9yt9z.png)

This article is worth reading for several reasons. First, it provides a practical example of how artificial intelligence can be integrated into a real-world business scenario, specifically in the context of a restaurant’s drive-through service (a similar approach can be used in a variety of critical real-world apps). By automating the order-taking process, the voice assistant can improve efficiency, reduce errors, and enhance the overall customer experience.

Second, the article covers a range of AI technologies and techniques, such as speech recognition, language models, and vector databases. Reading this article can help readers understand how these technologies work together to create a functional AI system.

Finally, the article demonstrates the potential of AI to transform traditional business processes and customer interactions. By exploring this use case, readers can gain insights into the future of AI and its practical applications across various industries.

**Closing Thoughts!**

The implementation of the AniDonald’s drive-through voice assistant showcases the potential of AI to revolutionize customer service and operational efficiency in the restaurant industry. By leveraging advanced technologies like natural language processing and speech recognition, businesses can streamline their processes, reduce human errors, and provide a more personalized and convenient experience for their customers.

Looking ahead, the integration of AI in businesses is poised to become more widespread and sophisticated. As AI models continue to improve and become more accessible, we can expect to see AI-powered solutions being adopted across various industries, from healthcare and finance to manufacturing and transportation.

One exciting prospect is the combination of AI with other emerging technologies, such as the Internet of Things (IoT) and edge computing. By integrating AI capabilities with IoT devices and edge computing infrastructure, businesses can enable real-time decision-making and automation, further enhancing efficiency and responsiveness.

However, it is crucial to address the ethical and societal implications of AI adoption in business. Concerns around data privacy, algorithmic bias, and the impact on employment need to be carefully considered and addressed through responsible AI development and implementation practices.

Overall, the AniDonald’s drive-through voice assistant serves as a compelling example of how AI can transform traditional business models and customer interactions. As AI continues to evolve, we can expect to witness more innovative applications that push the boundaries of what is possible, while also fostering a more efficient, personalized, and intelligent business landscape.
exploredataaiml
1,890,391
Performance tests IRIS - PostgreSQL - MySQL using Python
It seems like yesterday when we did a small project in Java to test the performance of IRIS,...
27,746
2024-06-16T16:17:52
https://community.intersystems.com/post/performance-tests-iris-postgresql-mysql-using-python
database, performance, python, testing
It seems like yesterday when we did a small project in Java to test the performance of IRIS, PostgreSQL and MySQL (you can review the article we wrote back in June at the end of this article). If you remember, IRIS was superior to PostgreSQL and clearly superior to MySQL in insertions, with no big difference in queries.

Well, shortly after, @Dmitry.Maslennikov told me "Why don't you test it from a Python project?". So here is the Python version of the tests we previously performed using the JDBC connections.

First of all, let me tell you that I am not an expert in Python, so if you see anything that could be improved, do not hesitate to contact me.

For this example I have used Jupyter Notebook, which greatly simplifies Python development and allows us to see step by step what we are doing. Associated with this article you have the application so that you can run your own tests.

## Warning for Windows users

If you clone the GitHub project in Visual Studio Code you may have to change the default end-of-line configuration from CRLF to LF to be able to correctly deploy the containers:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/px0sb01l2soqm9rjpf00.png)

If you are going to try to reproduce the tests on your computer, you must take the following into consideration: Docker Desktop will request permissions to access the folders it needs to deploy the project. If you have not configured access permission to these folders before launching the Docker containers, the initial creation of the test table in PostgreSQL will fail, so before launching the project you must configure shared access to the project folders in your Docker Desktop.

To do this you must access **Settings -> Resources -> File sharing** and add the folder where you have cloned the project to the list:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/50z9sud8mne505ko48y3.png)

You are warned!

## Test of performance

For these tests we will use a fairly simple table with the most basic information possible about a patient. Here you can see the command to create the table in SQL:

```sql
CREATE TABLE Test.Patient (
    Name VARCHAR(225),
    Lastname VARCHAR(225),
    Photo VARCHAR(5000),
    Phone VARCHAR(14),
    Address VARCHAR(225)
)
```

As you can see, we have defined the patient's photo as a VARCHAR(5000); the reason for this is that here we are going to include (theoretically) the vectorized information of the photo. A few months ago I published an article ([here](https://community.intersystems.com/post/facial-recognition-embedded-python-and-iris)) explaining how, using Embedded Python, we could implement a facial recognition system in IRIS, where you can see how images are transformed into vectors for later comparison. Well, the issue of vectorization comes from the fact that said vector format is the norm in many Machine Learning models, and it never hurts to test with something similar to reality (just something).
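To give an idea of what (theoretically) ends up in that Photo column, here is a tiny sketch that mirrors the value-generation code shown later in the notebook: a 50-component random vector standing in for a real image embedding, serialized to a string:

```python
import numpy as np

rng = np.random.default_rng()
vector = rng.standard_normal(50)  # stand-in for a real 50-dimensional image embedding
photo_value = str(vector)         # serialized as text to fit the VARCHAR(5000) column
print(len(photo_value), photo_value[:60])
```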
### Jupyter Notebook Setup

To simplify the development of the project in Python as much as possible, I have used the magnificent Jupyter Notebook tool that allows us to develop each of the functionalities that we will need step by step.

Here's a look at our Jupyter:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tc333jux9ihcwvws966v.png)

Let's take a look at the most interesting points of it:

#### Importing libraries

```python
import iris
import names
import numpy as np
from datetime import datetime
import psycopg2
import mysql.connector
import matplotlib.pyplot as plt
import random_address
from phone_gen import PhoneNumber
```

#### Connecting to the databases

IRIS:

```python
connection_string = "iris:1972/TEST"
username = "superuser"
password = "SYS"
connectionIRIS = iris.connect(connection_string, username, password)
cursorIRIS = connectionIRIS.cursor()
print("Connected")
```

PostgreSQL:

```python
connectionPostgres = psycopg2.connect(database="testuser",
                                      host="postgres",
                                      user="testuser",
                                      password="testpassword",
                                      port="5432")
cursorPostgres = connectionPostgres.cursor()
print("Connected")
```

MySQL:

```python
connectionMySQL = mysql.connector.connect(
    host="mysql",
    user="testuser",
    password="testpassword"
)
cursorMySQL = connectionMySQL.cursor()
print("Connected")
```

#### Generation of the values to be inserted

```python
phone_number = PhoneNumber("USA")
resultsIRIS = []
resultsPostgres = []
resultsMySQL = []
parameters = []
for x in range(1000):
    rng = np.random.default_rng()
    parameter = []
    parameter.append(names.get_first_name())
    parameter.append(names.get_last_name())
    parameter.append(str(rng.standard_normal(50)))
    parameter.append(phone_number.get_number())
    parameter.append(random_address.real_random_address_by_state('CA')['address1'])
    parameters.append(parameter)
print("Parameters built")
```

#### Insertion into IRIS

```python
date_before = datetime.now()

cursorIRIS.executemany("INSERT INTO Test.Patient (Name, Lastname, Photo, Phone, Address) VALUES (?, ?, ?, ?, ?)", parameters)
connectionIRIS.commit()

difference = datetime.now() - date_before
print(difference.total_seconds())
resultsIRIS.append(difference.total_seconds())
```

#### Insertion into PostgreSQL

```python
date_before = datetime.now()

cursorPostgres.executemany("INSERT INTO test.patient (name, lastname, photo, phone, address) VALUES (%s,%s,%s,%s,%s)", parameters)
connectionPostgres.commit()

difference = datetime.now() - date_before
print(difference.total_seconds())
resultsPostgres.append(difference.total_seconds())
```

#### Insertion into MySQL

```python
date_before = datetime.now()

cursorMySQL.executemany("INSERT INTO test.patient (name, lastname, photo, phone, address) VALUES (%s,%s,%s,%s,%s)", parameters)
connectionMySQL.commit()

difference = datetime.now() - date_before
print(difference.total_seconds())
resultsMySQL.append(difference.total_seconds())
```
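Each run appends its timing to `resultsIRIS`, `resultsPostgres` and `resultsMySQL`. The plotting code itself is not shown here, but since matplotlib is already imported, a comparison chart like the one in the results below can be produced with something along these lines (a sketch, assuming one measurement per batch size):

```python
# Sketch only: plot the accumulated timings per batch size.
batch_sizes = [1000, 5000, 20000, 50000]
plt.plot(batch_sizes, resultsIRIS, marker="o", label="IRIS")
plt.plot(batch_sizes, resultsPostgres, marker="o", label="PostgreSQL")
plt.plot(batch_sizes, resultsMySQL, marker="o", label="MySQL")
plt.xlabel("Patients inserted")
plt.ylabel("Seconds")
plt.legend()
plt.show()
```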
For our test I have decided to insert the following sets of values into each database:

- 1 insertion with 1,000 patients.
- 1 insertion with 5,000 patients.
- 1 insertion with 20,000 patients.
- 1 insertion with 50,000 patients.

Keep in mind when performing the tests that the longest time is spent creating the values to be inserted by Python. To bring it closer to reality, I have launched several tests in advance so that the databases already have a significant set of records (around 200,000 records).

## Test results

### Insertion of 1,000 patients:

- InterSystems IRIS: 0.037949 seconds.
- PostgreSQL: 0.106508 seconds.
- MySQL: 0.053338 seconds.

### Insertion of 5,000 patients:

- InterSystems IRIS: 0.162791 seconds.
- PostgreSQL: 0.432642 seconds.
- MySQL: 0.18925 seconds.

### Insertion of 20,000 patients:

- InterSystems IRIS: 0.601944 seconds.
- PostgreSQL: 1.803113 seconds.
- MySQL: 0.594396 seconds.

### Insertion of 50,000 patients:

- InterSystems IRIS: 1.482824 seconds.
- PostgreSQL: 4.581251 seconds.
- MySQL: 2.162996 seconds.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/43uhn71id7m8ba9202xi.png)

Although this is a fairly simple test, it is very significant since it allows us to see the trend of each database regarding insertion performance.

## Conclusions

If we compare the performance of the tests carried out with the Java project and the current ones in Python, we will see that on this occasion the behavior of PostgreSQL is clearly worse, being 4 times slower than InterSystems IRIS, while MySQL has improved compared to the Java version.

Unquestionably, InterSystems IRIS remains the best of the three, with more linear behavior and better insertion performance, regardless of the technology used.

## Technical characteristics of the laptop used for the tests

- Operating System: Microsoft Windows 11 Pro.
- Processor: 13th Gen Intel(R) Core(TM) i9-13900H, 2600 MHz.
- RAM: 64 GB.
intersystemsdev
1,890,390
Performance tests IRIS - PostgreSQL - MySQL
As a former Java developer it has always been a challenge to decide which database was the most...
27,746
2024-06-16T16:14:32
https://community.intersystems.com/post/performance-tests-iris-postgresql-mysql
docker, java, database, jdbc
As a former Java developer it has always been a challenge to decide which database was the most suitable for the project we were going to develop; one of the main criteria I used was performance, as well as HA (high availability) configuration capabilities. Well, now is the time to put IRIS to the test against some of the most commonly used databases, so I've decided to create a small Java project based on Spring Boot that connects via JDBC with a MySQL database, a PostgreSQL one, and finally IRIS.

We are going to take advantage of the fact that we have Docker images of these databases to use them in our project and allow you to try it yourself without having to carry out any installation. We can check the Docker configuration in our **docker-compose.yml** file:

```yaml
version: "2.2"
services:

  # mysql
  mysql:
    build:
      context: mysql
    container_name: mysql
    restart: always
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: SYS
      MYSQL_USER: testuser
      MYSQL_PASSWORD: testpassword
      MYSQL_DATABASE: test
    volumes:
      - ./mysql/sql/dump.sql:/docker-entrypoint-initdb.d/dump.sql
    ports:
      - 3306:3306

  # postgres
  postgres:
    build:
      context: postgres
    container_name: postgres
    restart: always
    environment:
      POSTGRES_USER: testuser
      POSTGRES_PASSWORD: testpassword
    volumes:
      - ./postgres/sql/dump.sql:/docker-entrypoint-initdb.d/dump.sql
    ports:
      - 5432:5432

  adminer:
    container_name: adminer
    image: adminer
    restart: always
    depends_on:
      - mysql
      - postgres
    ports:
      - 8081:8080

  # iris
  iris:
    init: true
    container_name: iris
    build:
      context: .
      dockerfile: iris/Dockerfile
    ports:
      - 52773:52773
      - 1972:1972
    command: --check-caps false

  # tomcat
  tomcat:
    init: true
    container_name: tomcat
    build:
      context: .
      dockerfile: tomcat/Dockerfile
    volumes:
      - ./tomcat/performance.war:/usr/local/tomcat/webapps/performance.war
    ports:
      - 8080:8080
```

With a quick glance we will see that we are using the following images:

- **IRIS**: IRIS Community instance to which we will connect by JDBC.
- **Postgres**: PostgreSQL database image listening on port 5432.
- **MySQL**: MySQL database image listening on port 3306.
- **Tomcat**: Docker image configured with an Apache Tomcat application server on which we will deploy the WAR file of our application.
- **Adminer**: database administrator that will allow us to consult the Postgres and MySQL databases.

As you can see, we have configured the listening ports so that they are also mapped on our computer, not only within Docker. In the case of the databases this would not be necessary, since the connections will be made within the Docker containers, so if you have any problems with the **ports**, you can delete the `ports` lines from the **docker-compose.yml** file.

Each database image runs a pre-script that will create the tables needed for the performance tests. Let's look at one of the **dump.sql** files:

```sql
CREATE SCHEMA test;

DROP TABLE IF EXISTS test.patient;

CREATE TABLE test.country (
    id INT PRIMARY KEY,
    name VARCHAR(225)
);

CREATE TABLE test.city (
    id INT PRIMARY KEY,
    name VARCHAR(225),
    country INT,
    CONSTRAINT fk_country FOREIGN KEY(country) REFERENCES test.country(id)
);

CREATE TABLE test.patient (
    id INT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    name VARCHAR(225),
    lastname VARCHAR(225),
    photo BYTEA,
    phone VARCHAR(14),
    address VARCHAR(225),
    city INT,
    CONSTRAINT fk_city FOREIGN KEY(city) REFERENCES test.city(id)
);

INSERT INTO test.country VALUES
    (1,'Spain'),
    (2,'France'),
    (3,'Portugal'),
    (4,'Germany');

INSERT INTO test.city VALUES
    (1,'Madrid',1),
    (2,'Valencia',1),
    (3,'Paris',2),
    (4,'Bordeaux',2),
    (5,'Lisbon',3),
    (6,'Porto',3),
    (7,'Berlin',4),
    (8,'Frankfurt',4);
```

We are going to create 3 tables for our tests: **patient**, **city** and **country**; these last two are going to have preloaded data of cities and countries.

Perfect, next we are going to see how we will make the connections to the databases. To do this we have created our Java project using a preconfigured Spring Boot project available from Visual Studio Code that provides us with the basic structure.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5znztzu2cjsesck45ptv.png)

Don't worry if you don't understand the structure of the project at first glance; the goal is not to learn Java, but we are still going to explain the main files in a bit more detail.

#### MyDataSourceFactory.java

Java class that opens the connections to the different databases.

#### PerformanceController.java

Controller in charge of publishing the endpoints that we will call from Postman.

#### application.properties

Configuration file with the different connections to the databases deployed in our Docker.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fmtwjue3prxj6ftye4wj.png)

As you can see, the connection URLs use the container names since, when deployed in a Tomcat container, the databases will be accessible to our Java application only with the corresponding container name. We can also check how each URL makes a connection via JDBC to our databases. The Java libraries used in the project are defined in the pom.xml file.

If you modify the source code, you only have to execute the command:

```bash
mvn package
```

This will generate a file **performance-0.0.1-SNAPSHOT.war**; rename it to **performance.war** and move it to the **/tomcat** directory, replacing the existing one.

As the project is on GitHub, we only need to clone it on our computer from Visual Studio Code and execute the following commands in the terminal:

```bash
docker-compose build
docker-compose up -d
```

Let's check the Docker portal:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lmmbhspo5b0oolr8ub2g.png)

Great! Docker containers working. Now let's check from our Adminer and the IRIS management portal that our tables have been created correctly.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3yea2wjxros9kgyxs7ns.png)

Let's first access the MySQL database. If you consult the file **docker-compose.yml** you will see that the username and password defined for MySQL and PostgreSQL are the same: **testuser**/**testpassword**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/obuxjqj3vrrgriuq2lst.png)

Here we have our three tables inside our test database. Let's look at our PostgreSQL database:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n2vjm66x8gkiauevfn2u.png)

Let's select the **testuser** database and the **test** schema:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/byjbpx3qb387g89v03x9.png)

Here we have our tables perfectly created in PostgreSQL. Let's finally check that everything is configured correctly in IRIS:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/36ull1lcbui8dcih88m2.png)

All correct, we have our tables created in the **USER** namespace under the **Test** schema.

Alright, once the checks are done, let's rock! For this we will use Postman, in which we will load the file attached to the project, **performance.postman_collection.json**:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z7dczt95yy38j26a3cvx.png)

These are the different tests that we are going to launch; we will start with inserts and continue with queries against the database.
I have not included any indexes beyond those that are created automatically with the definition of primary keys in the different databases.

## Insert

REST call: GET http://localhost:8080/performance/tests/insert/**{database}**?total=1000

The variable {database} may have the following values:

- postgres
- mysql
- iris

And the `total` attribute is the one we will modify to indicate the total number of insertions that we want to make.

The method that will be invoked is called **insertRecords** and you can find it in the Java file **PerformanceController.java** located at **/src/main/java/com/performance/controller/**; you can see that it is an extremely simple insert:

```sql
INSERT INTO test.patient VALUES (null, ?, ?, null, ?, ?, ?)
```

The first value is null as it is the autogenerated primary key, and the second null corresponds to a BLOB/BYTEA/LONGVARBINARY type field where we will save a photo later.

We are going to launch the following batches of inserts: 100, 1000, 10000 and 20000, and we will check the response times that we receive in Postman. For each measurement we will do 3 tests and calculate the average of the 3 values that we obtain.

|            | 100     | 1000   | 10000   | 20000   |
|------------|---------|--------|---------|---------|
| MySQL      | 0.754 s | 8.91 s | 88 s    | 192 s   |
| PostgreSQL | 0.23 s  | 2.24 s | 20.92 s | 40.35 s |
| IRIS       | 0.07 s  | 0.33 s | 2.6 s   | 5 s     |

Let's see it graphically:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u94whr35ehtfo7lctk3a.png)
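If you prefer a script to Postman, the same endpoint can also be exercised from Python with the `requests` package, for example (a sketch that assumes the containers are running locally as described above):

```python
import requests

# Launch a batch of 1000 inserts against each database and print the raw response.
for database in ["postgres", "mysql", "iris"]:
    url = f"http://localhost:8080/performance/tests/insert/{database}"
    response = requests.get(url, params={"total": 1000})
    print(database, response.status_code, response.text)
```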
## Insert with a binary file

In the previous example we did simple inserts; now let's push the accelerator by including in our insert a 50 kB picture as the photo of our patients.

REST call: GET http://localhost:8080/performance/tests/insertBlob/**{database}**?total=1000

The variable {database} may have the following values:

- postgres
- mysql
- iris

And the `total` attribute is the one we will modify to indicate the total number of insertions that we want to make.

The method that will be invoked is called **insertBlobRecords** and you can find it in the Java file **PerformanceController.java** located at **/src/main/java/com/performance/controller/**; you can check that it is an insert similar to the previous one, with the exception that we are passing the file in the insert:

```sql
INSERT INTO test.patient (Name, Lastname, Photo, Phone, Address, City) VALUES (?, ?, ?, ?, ?, ?)
```

Let's slightly reduce the number of inserts above to avoid the test taking forever, and I will clean the Docker images to start again with a totally level playing field.

|            | 100    | 1000   | 5000    | 10000   |
|------------|--------|--------|---------|---------|
| MySQL      | 1.87 s | 17 s   | 149 s   | 234 s   |
| PostgreSQL | 0.6 s  | 5.22 s | 23.93 s | 60.43 s |
| IRIS       | 0.13 s | 0.88 s | 4.58 s  | 12.57 s |

Let's look at the graph:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hw61inxargposh5tf3m4.png)

## Select

Let's test performance with a simple query that gets all the records from the patient table.

REST call: GET http://localhost:8080/performance/tests/select/**{database}**

The variable {database} may have the following values:

- postgres
- mysql
- iris

The method that will be invoked is called **selectRecords** and you can find it in the Java file **PerformanceController.java** located at **/src/main/java/com/performance/controller/**; the query is extremely basic:

```sql
SELECT * FROM test.patient
```

We'll test the query with the same sets of records we used for the first insert test.

|            | 100    | 1000   | 10000  | 20000  |
|------------|--------|--------|--------|--------|
| MySQL      | 0.03 s | 0.02 s | 0.03 s | 0.04 s |
| PostgreSQL | 0.03 s | 0.02 s | 0.04 s | 0.03 s |
| IRIS       | 0.02 s | 0.02 s | 0.04 s | 0.05 s |

And graphically:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4szguxajr2knj2trqgy3.png)

## Select group by

Let's test performance with a query that includes a left join as well as aggregation functions.

REST call: GET http://localhost:8080/performance/tests/selectGroupBy/**{database}**

The variable {database} may have the following values:

- postgres
- mysql
- iris

The method that will be invoked is called **selectGroupBy** and you can find it in the Java file **PerformanceController.java** located at **/src/main/java/com/performance/controller/**; let's see the query:

```sql
SELECT count(p.Name), c.Name FROM test.patient p LEFT JOIN test.city c ON p.City = c.Id GROUP BY c.Name
```

We'll test the query again with the same sets of records we used for the first insert test.

|            | 100    | 1000   | 10000  | 20000  |
|------------|--------|--------|--------|--------|
| MySQL      | 0.02 s | 0.02 s | 0.03 s | 0.03 s |
| PostgreSQL | 0.02 s | 0.02 s | 0.02 s | 0.02 s |
| IRIS       | 0.02 s | 0.02 s | 0.03 s | 0.04 s |

And graphically:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rl5idkpl8f3h91v7dudm.png)

## Update

For the update we are going to launch a query with an associated subquery within its conditions.

REST call: GET http://localhost:8080/performance/tests/update/**{database}**

The variable {database} may have the following values:

- postgres
- mysql
- iris

The method that will be invoked is called **UpdateRecords** and you can find it in the Java file **PerformanceController.java** located at **/src/main/java/com/performance/controller/**; let's see the query:

```sql
UPDATE test.patient SET Phone = '+15553535301' WHERE Name in (SELECT Name FROM test.patient where Name like '%12')
```

Let's launch the query and see the results.

|            | 100    | 1000   | 10000  | 20000  |
|------------|--------|--------|--------|--------|
| MySQL      | X      | X      | X      | X      |
| PostgreSQL | 0.02 s | 0.02 s | 0.02 s | 0.03 s |
| IRIS       | 0.02 s | 0.02 s | 0.02 s | 0.04 s |

We note that MySQL does not allow this type of subquery on the same table that we are going to update, therefore we cannot measure its times under equal conditions. In this case we will omit the graph, as it is so simple.

## Delete

For the delete we are going to launch a query with an associated subquery within its conditions.

REST call: GET http://localhost:8080/performance/tests/delete/**{database}**

The variable {database} may have the following values:

- postgres
- mysql
- iris

The method that will be invoked is called **DeleteRecords** and you can find it in the Java file **PerformanceController.java** located at **/src/main/java/com/performance/controller/**; let's see the query:

```sql
DELETE FROM test.patient WHERE Name in (SELECT Name FROM test.patient where Name like '%12')
```

Let's launch the query and see the results.

|            | 100    | 1000   | 10000  | 20000  |
|------------|--------|--------|--------|--------|
| MySQL      | X      | X      | X      | X      |
| PostgreSQL | 0.01 s | 0.02 s | 0.02 s | 0.03 s |
| IRIS       | 0.02 s | 0.02 s | 0.02 s | 0.04 s |
center;">0.01 s</td> <td style="text-align: center;">0.02 s</td> <td style="text-align: center;">0.02 s</td> <td style="text-align: center;">0.03 s</td> </tr> <tr> <td style="text-align: center;">IRIS</td> <td style="text-align: center;">0.02 s</td> <td style="text-align: center;">0.02 s</td> <td style="text-align: center;">0.02 s</td> <td style="text-align: center;">0.04 s</td> </tr> </tbody></table> <p>We note again that MySQL does not allow this type of subqueries on the same table from which we are going to delete, therefore we cannot measure their times under equal conditions.</p> <h2>Conclusions</h2> <p>We can affirm that all of them are quite fine-tuned when it comes to querying data, as well as updating and deleting records (except for the incident with MySQL). Where we find the biggest difference is in the handling of inserts. IRIS is the best of the 3 by far, being 6 times faster than PostgreSQL and up to 20 times faster than MySQL at data ingestion.</p> <p>In order to operate with large data sets, IRIS is undoubtedly the best option in the tests carried out.</p> <p>So... we already have a winner! IRIS WINS!</p> <p>&nbsp;</p> <p>PS: These are some small examples of tests that you can carry out, feel free to modify the code as you wish.</p>
intersystemsdev
1,890,388
BEST BATTING TECHNIQUES FOR CRICKET
BATTING TECHNIQUES INTRODUCTION Batting in cricket involves skillful hitting of the ball with a...
0
2024-06-16T16:00:34
https://dev.to/sneha_sharma_1487093e12d8/best-batting-techniques-for-cricket-2gd7
crickter, indiancricket
BATTING TECHNIQUES

INTRODUCTION

Batting in cricket involves skillful hitting of the ball with a bat, requiring balance, precision, and strategic shot selection.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i3ocramkuxqhg1ra1236.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iqgupzpxxc7f3e2zr6n7.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wgxunnzlgva7wo2w5jmy.png)
sneha_sharma_1487093e12d8
1,890,387
Scroll watcher progress timeline bar in easy steps.
Scroll watcher is a progress bar that tracks how far you have scrolled. It acts as a custom progress bar which...
0
2024-06-16T15:59:27
https://dev.to/sunder_mehra_246c4308e1dd/scroll-progress-timeline-bar-in-easy-step-1mk0
webdev, beginners, css, html
Scroll watcher is a progress bar that tracks how far you have scrolled. It acts as a custom progress bar where 0% is the start of the webpage and 100% is the end of the webpage.

Example:

1- We are at the top of the webpage.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pxvu3s2wdamkggb1kjmu.png)

2- We are at the end of the webpage.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2kxqft8vj44751wy8sev.png)

HTML:-

Just add a div with a class name.

`<div class="scroll-watcher"></div>`

CSS:-

1- Make the div with some initial height; I have given a height of 5px and a width of 100%.
2- Choose a color as you like.
3- Give the maximum z-index so that our scroll watcher does not go behind any other component. It should be at the top.
4- The scroll watcher should be stuck to the top of the page.
5- I have given "margin: auto" to keep the scroller at the top-center.

```css
.scroll-watcher {
  height: 5px;
  background-color: rgb(110, 36, 213);
  border-radius: 15px;
  position: sticky;
  width: 100%;
  z-index: 999;
  margin: auto;
  top: 0;
}
```

**Once done, you will get this result:**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b3ar4sd34tijuww0xvg8.png)

Now it's time to add the animation. Add a keyframe with `from` and `to`: `from` indicates the initial width and `to` indicates the final width.

```css
@keyframes scroll-watcher {
  from { width: 5%; }
  to { width: 100%; }
}
```

**Now add the animation properties for the scroller.**

1- Give the animation name and timing function; here I have given linear.

`animation: scroll-watcher linear;`

2- Give the animation timeline. Here `scroll()` means the animation progresses as you scroll.

`animation-timeline: scroll();`

So the final CSS will be

```css
.scroll-watcher {
  height: 5px;
  background-color: rgb(110, 36, 213);
  border-radius: 15px;
  position: sticky;
  width: 100%;
  z-index: 1000;
  margin: auto;
  top: 0;
  animation: scroll-watcher linear;
  animation-timeline: scroll();
}
```

Thanks, feel free to ask any queries.
sunder_mehra_246c4308e1dd
1,890,385
Day 20 of my progress as a vue dev
About today Today was a very strange day in my journey. Let me explain. I woke up and decided I don't...
0
2024-06-16T15:55:12
https://dev.to/zain725342/day-20-of-my-progress-as-a-vue-dev-2inl
webdev, vue, typescript, tailwindcss
**About today**

Today was a very strange day in my journey. Let me explain. I woke up and decided I don't want to work on the audio editor project anymore (at least for now). Reason? Well, honestly there are many, but the main one was that I wasn't having any fun working on it, or feeling the same excitement I felt on the last few projects. Is it giving up? I don't know. But doing something you can't do happily is not something that should be done, especially when there isn't a gun to your head.

**What's next?**

Well, I won't be sitting on my ass doing nothing, of course. Instead I will be changing my course of action and doing something that I will find fruitful and enjoyable. I do have a plan in my mind, but I need to shape it out a bit before starting execution on it.

**Improvements required**

I really think at this moment I am giving up because I don't like the process of the project I was working on, but there can be many reasons. If that's the only reason, then the next thing I start to work on I should be able to stick to for a longer period of time, which I hope I do. But if for some reason I'm unable to do that, then I should really fix this issue by working on it first. Wish me luck!
zain725342
1,888,224
Spring Framework: About Aware suffix interface
The following discussion is based on the source code of Spring Framework 6.1.8. Some commonly used Spring...
0
2024-06-16T15:54:07
https://dev.to/saladlam/spring-framework-about-aware-suffix-interface-39pb
spring
The following discussion is based on the source code of Spring Framework 6.1.8.

Some commonly used Spring Framework components can be injected into your bean during bean creation by implementing the corresponding `Aware` interface.

# Commonly used Aware interfaces

| Interface name | Information injected | Injected by |
| -------------- | -------------------- | ----------- |
| org.springframework.context.ApplicationEventPublisherAware | ApplicationEventPublisher | org.springframework.context.support.ApplicationContextAwareProcessor#invokeAwareInterfaces |
| org.springframework.context.MessageSourceAware | MessageSource | org.springframework.context.support.ApplicationContextAwareProcessor#invokeAwareInterfaces |
| org.springframework.context.EnvironmentAware | Environment | org.springframework.context.support.ApplicationContextAwareProcessor#invokeAwareInterfaces |
| org.springframework.beans.factory.BeanNameAware | Bean name | org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory#invokeAwareMethods |
| org.springframework.beans.factory.BeanFactoryAware | BeanFactory | org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory#invokeAwareMethods |
| org.springframework.context.ApplicationContextAware | ApplicationContext | org.springframework.context.support.ApplicationContextAwareProcessor#invokeAwareInterfaces |
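As a minimal illustration (my own example, not taken from the framework sources), a bean that wants to know its own name and get hold of the surrounding `ApplicationContext` only needs to implement the corresponding interfaces; Spring calls the setters while creating the bean, before any initialization callbacks run:

```java
import org.springframework.beans.factory.BeanNameAware;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;

@Component
public class AwareExampleBean implements BeanNameAware, ApplicationContextAware {

    private String beanName;
    private ApplicationContext applicationContext;

    @Override
    public void setBeanName(String name) {
        // invoked by AbstractAutowireCapableBeanFactory#invokeAwareMethods
        this.beanName = name;
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) {
        // invoked by ApplicationContextAwareProcessor#invokeAwareInterfaces
        this.applicationContext = applicationContext;
    }
}
```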
saladlam
1,890,384
From React to Hotwire - Part II - [EN]
Versão em Português Introduction In the first article about this migration from React to...
0
2024-06-16T15:50:03
https://dev.to/cirdes/from-react-to-hotwire-part-ii-en-2lim
react, hotwire, rails, phlex
[Versão em Português](https://dev.to/cirdes/do-react-ao-hotwire-parte-ii-pt-br-3aa4)

## Introduction

In the [first article](https://dev.to/cirdes/from-react-to-hotwire-part-i-en-2o6g) about this migration from React to Hotwire, I discussed how we arrived at our current React stack. In this Part II, I will talk about the return to Rails views at Linkana.

## Evolution of Rails and SSR

One of Rails' most brilliant characteristics is its obsession with simplicity. Since Rails was born inside [Basecamp](https://brasil.basecamp.com/), a company that chose to have few employees, it is part of Rails' DNA to seek ways to keep web development simple. This is why Rails is known as the [One Person Framework](https://world.hey.com/dhh/the-one-person-framework-711e6318).

Attending the latest edition of [Rails World](https://rubyonrails.org/world/2024) and watching [DHH's talk](https://www.youtube.com/watch?v=iqXjGiQ_D-A) made me realize that generating views on the backend with Rails is no longer synonymous with slow, ugly interfaces that neglect UX. With Hotwire, through Turbo and Stimulus, it is possible to create applications as complex as Gmail ([Hey](https://hey.com/)) or Slack ([Campfire](https://once.com/campfire)). And this became even more surreal with [Turbo 8](https://evilmartians.com/chronicles/the-future-of-full-stack-rails-turbo-morph-drive).

![SSR Giphy](https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExYnprZ2x6bjdudzZ1Y2NqOXlqdGVwMzZvY2ZhOGFlbDFpazdpOWJ0ZCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/vUvxwcgQ6Iqlm2p6g9/giphy.gif)

But the great benefit of returning to Server-Side Rendering (SSR) is not needing APIs. Creating an API, whether REST or GraphQL, makes development slower. And it's not just Rails realizing this; Elixir with LiveView, PHP with [Livewire](https://livewire.laravel.com/docs/quickstart), and even JS with [HTMX](https://htmx.org/) and [React](https://react.dev/reference/react-dom/server) are following this trend. As Deno puts it: [The Future (and the Past) of the Web is Server Side Rendering](https://deno.com/blog/the-future-and-past-is-server-side-rendering).

## Components Are Here to Stay

As we saw in Part I of this article, views in React/Vue/etc. introduced the concept of components. These components are small interface blocks that condense style (Tailwind), structure (HTML), and behavior (JavaScript). Componentization, in addition to promoting reuse and facilitating testing, lets you customize components or use them to build your own design system. But most importantly, they are ready-made. You don't need to reinvent the wheel. A great example is [Shadcn](https://ui.shadcn.com/), launched only in March 2023 and already at 60k stars on GitHub.

Unfortunately, Rails views were created in a pre-component context. html.erb is very good for writing HTML, but it is not good for writing behavior. Partials are great for writing behavior, but they are terrible for HTML. The first solution that emerged to address this was [ViewComponent](https://viewcomponent.org/), a framework created by the folks at GitHub. ViewComponent is our React, allowing the frontend to be thought of in terms of components. It is the most popular solution today. Although I have never used it, I feel it lacked boldness. It seems like a fusion of partials with ERB.
But what disappointed me the most was that [GitHub is migrating from Rails to React](https://news.ycombinator.com/item?id=33576722), and the maintainers of ViewComponent [don't seem very enthusiastic](https://www.youtube.com/watch?v=YdeuXQJkZrs).

As an alternative to ViewComponent, I found [Phlex](https://www.phlex.fun/). It has the boldness of creating components using only Ruby code. This is the kind of boldness that scares. I was scared when I first saw React put HTML, JS, and CSS (styled-components) inside a component. I was scared to see Tailwind create classes for each CSS property and HTML elements built with dozens of classes. Letting go of this "prejudice" and testing Phlex was important here. But the decisive factor was following the creator of Phlex, [Joel Drapper](https://x.com/joeldrapper), on social media and realizing how motivated he is.

## Component Library

As we do not like to reinvent the wheel here at Linkana, we decided to look for component libraries that we could use and evaluated:

- https://phlexui.com/
- https://primer.style/guides/rails
- https://shadcn.rails-components.com/
- https://railsdesigner.com/
- https://zestui.com/
- https://railsui.com/
- https://protos.inhouse.work/

We ended up choosing [Phlex UI](https://phlexui.com/), not because we believe it is finished, but because we believe the path it has chosen best fits the premise of reusable and easily customizable components. It is a library clearly inspired by [Shadcn](https://ui.shadcn.com/) that brought some of Shadcn's innovations to the Rails world.

1. It encourages you to [customize](https://phlexui.com/docs/customizing_components) components by copying them into your codebase.
2. A component structure very similar to React:

Shadcn

```javascript
<Accordion type="single" collapsible className="w-full">
  <AccordionItem value="item-1">
    <AccordionTrigger>Is it accessible?</AccordionTrigger>
    <AccordionContent>
      Yes. It adheres to the WAI-ARIA design pattern.
    </AccordionContent>
  </AccordionItem>
</Accordion>
```

PhlexUI

```ruby
render PhlexUI::Accordion.new do
  render PhlexUI::Accordion::Item.new do
    render PhlexUI::Accordion::Trigger.new do
      p { "What is PhlexUI?" }
    end
    render PhlexUI::Accordion::Content.new do
      p { "PhlexUI is a UI component library for Ruby devs who want to build better, faster." }
    end
  end
end
```

3. All CSS is written with Tailwind inside the component itself. It is very easy to make small adjustments/changes.
4. The components are made up of several very simple Ruby classes. The code is very easy to understand.

![Phlex UI Components](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kzn0y1wap4fq21dnl438.png)

5. Integration with Stimulus through controllers specific to each component.

## Putting It into Production

The first step of the migration to SSR with Phlex and Hotwire was to map the PhlexUI components and place them in [Lookbook](https://lookbook.build/) inside our application, with our brand colors.

![Lookbook](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hmnmmo5gbazzc3nvya30.png)

The second step was to create a Rails version of all navigation elements: sidebar, tabs, etc. This way, we managed to rewrite route by route using SSR while the React (SPA) routes continued to function. For example:

![Dashboard](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kmwybxgx8bse27myzfp4.png)

Everything that is part of the Dashboard is already using SSR.
Navigating between the "Summary" and "Lead time" tabs uses Hotwire's Turbo.

![Pendencies](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/krc5by32evcpda0v6jpt.png)

When the user clicks on the pendencies tab, we disable Turbo, and a full HTML request is made to the backend, which loads our React SPA and displays the page. The application thus switches to SPA mode, and requests go back through our GraphQL API, exchanging JSON. If the user clicks on a link that is already in Hotwire, we make React perform a full HTML request, and the application returns to SSR. The transition from SPA to SSR is practically imperceptible; SSR to SPA is a bit slower, though not enough to hurt the user experience. Our plan is to complete this transition by the next edition of [Tropical.rb](https://www.tropicalrb.com/).

Since June 11th, we have been running Phlex in production, and as a form of gratitude, Linkana has become one of the [sponsors of the project](https://github.com/sponsors/joeldrapper#sponsors). We have also started contributing to the PhlexUI project. If you are also working with components in Rails, leave a comment about what you have been doing.
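For readers curious about the per-link handoff, here is a minimal sketch (the routes and labels are hypothetical, not Linkana's actual code): Turbo can be opted out of on an individual link with the `data-turbo` attribute, which forces the full HTML request that boots the SPA.

```html
<!-- Turbo-driven navigation: stays in SSR mode -->
<a href="/dashboard/summary">Summary</a>

<!-- data-turbo="false" opts this link out of Turbo, triggering a
     full page load that can serve the React SPA bundle instead -->
<a href="/pendencies" data-turbo="false">Pendencies</a>
```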
cirdes
1,890,383
The Future of Google Search: Innovations Shaping Tomorrow's Web by Mike Savage of New Canaan
Google Search, the world's most widely used search engine, has revolutionized how we access...
0
2024-06-16T15:45:03
https://dev.to/savagenewcanaan/the-future-of-google-search-innovations-shaping-tomorrows-web-by-mike-savage-of-new-canaan-jk4
google, ai
<p style="text-align: justify;">Google Search, the world's most widely used search engine, has revolutionized how we access information. Since its launch in 1998, Google has continually refined its search algorithms and expanded its capabilities, striving to deliver more relevant, accurate, and timely results. As we look ahead, the future of Google Search promises even more transformative advancements, driven by artificial intelligence (AI), machine learning, and evolving user needs.</p> <h2 style="text-align: justify;">Enhanced User Experience through AI</h2> <p style="text-align: justify;"><a href="https://en.wikipedia.org/wiki/Artificial_intelligence">Artificial Intelligence</a> and machine learning are at the heart of Google's future innovations. Google's AI research has led to the development of sophisticated algorithms that understand user intent more deeply and deliver personalized search results. One such example is the BERT (Bidirectional Encoder Representations from Transformers) algorithm, introduced in 2019. BERT helps Google understand the context of words in a query, enabling more precise and contextually relevant results.</p> <p style="text-align: justify;">Going forward, we can expect AI to play an even more significant role in enhancing user experience. Google's AI will likely become better at predicting user queries and providing instant answers, reducing the need for users to sift through multiple search results. This predictive capability could lead to a more conversational search experience, where users interact with Google as if they were conversing with a knowledgeable assistant.</p> <h4 style="text-align: justify;">Voice and Visual Search</h4> <p style="text-align: justify;">The proliferation of voice-activated devices like Google Home and the increasing use of voice assistants on smartphones indicate a shift towards voice search. Google is investing heavily in improving its voice recognition technology to understand natural language queries more accurately. This includes understanding nuances, accents, and dialects, making voice search more accessible and reliable for a global audience.</p> <p style="text-align: justify;">Similarly, visual search is set to become more prominent. Google Lens, a tool that allows users to search using images instead of text, exemplifies this trend. With visual search, users can point their cameras at objects to receive information, shop for similar items, or translate text in real-time. As technology advances, visual search will become more integrated into everyday search activities, offering a seamless bridge between the physical and digital worlds.</p> <h4 style="text-align: justify;">Augmented Reality (AR) Integration</h4> <p style="text-align: justify;">Augmented Reality (AR) is poised to redefine how we interact with search results. Google has already introduced AR elements in search, allowing users to view 3D models of objects, animals, and even human anatomy directly in their environment through their devices. This interactive approach provides a more immersive experience, making information more engaging and easier to understand.</p> <p style="text-align: justify;">In the future, AR could expand to include virtual tours, interactive learning modules, and enhanced e-commerce experiences. Imagine searching for a piece of furniture and being able to visualize it in your living room through AR, or exploring a historical landmark in a virtual environment. 
These capabilities will make search more experiential and practical.</p> <h2 style="text-align: justify;">Increased Focus on Privacy and Ethical AI</h2> <p style="text-align: justify;">As concerns about data privacy and ethical AI grow, Google is under pressure to ensure its search practices are transparent and respectful of user privacy. The company has already taken steps to give users more control over their data, such as the introduction of auto-delete controls for search history.</p> <p style="text-align: justify;">Looking ahead, Google will likely implement more stringent privacy measures and develop AI that prioritizes ethical considerations. This might involve limiting data collection, improving data anonymization techniques, and being transparent about how user data is used to improve search experiences.</p> <h4 style="text-align: justify;">Enhanced E-commerce Integration</h4> <p style="text-align: justify;">Google is also positioning itself as a key player in the e-commerce landscape. With features like Google Shopping and integrated product search, the search engine is becoming a more powerful tool for online shopping. Future developments might include more personalized shopping experiences, leveraging AI to suggest products based on user behavior and preferences.</p> <p style="text-align: justify;">Additionally, partnerships with retailers and advancements in payment technologies could enable seamless, in-search purchases. Imagine completing a purchase directly from the search results page without being redirected to an external site. This level of integration would streamline the online shopping process and provide a more efficient experience for users.</p> <p style="text-align: justify;">The future of Google Search is set to be more intuitive, interactive, and integrated into our daily lives. Through advancements in AI, voice and visual search, AR, privacy measures, and e-commerce capabilities, Google is poised to remain at the forefront of the search engine landscape. These innovations will not only enhance how we access and interact with information but also redefine our expectations of what a search engine can do. As Google continues to evolve, the search experience will become more personalized, efficient, and immersive, paving the way for a new era of digital discovery.</p> <p style="text-align: justify;"><a href="https://brojure.com/savage-new-canaan/">Mike Savage</a> is a tech-savvy individual from New Canaan, Connecticut, known for his passion for technology and his engaging tech blog. Growing up in a community that values education and innovation, Savage developed a deep interest in technology from an early age. His blog covers a wide range of topics, including the latest tech trends, product reviews, and insightful analyses of emerging technologies.</p> <p style="text-align: justify;">Savage's expertise spans various domains, from software development to cutting-edge gadgets, making his blog a valuable resource for tech enthusiasts and novices alike. His ability to simplify complex concepts and present them in an accessible manner has garnered a loyal readership. Beyond blogging, Savage actively participates in tech conferences and seminars, staying updated with the industry's rapid advancements.</p> <p style="text-align: justify;">In his free time, he enjoys experimenting with new technologies and contributing to open-source projects. 
Mike Savage's blend of technical knowledge and passion for sharing information has established him as a respected voice in the tech community.</p>
savagenewcanaan
1,890,304
From React to Hotwire - Part II - [PT-BR]
English version available Introduction In the first article about this migration from React to...
0
2024-06-16T15:44:26
https://dev.to/cirdes/do-react-ao-hotwire-parte-ii-pt-br-3aa4
react, hotwire, rails, phlex
[English version available](https://dev.to/cirdes/from-react-to-hotwire-part-ii-en-2lim)

## Introduction

In the [first article](https://dev.to/cirdes/do-react-ao-hotwire-parte-i-pt-br-1hm2) about this migration from React to Hotwire, I covered how we arrived at our current React stack. In this Part II, I will talk about the return to Rails views at Linkana.

## Evolution of Rails and SSR

One of Rails' most brilliant characteristics is its obsession with simplicity. Since Rails was born inside [Basecamp](https://brasil.basecamp.com/), a company that chose to have few employees, it is part of Rails' DNA to seek ways to keep web development simple. This is why Rails is the [One Person Framework](https://world.hey.com/dhh/the-one-person-framework-711e6318).

Attending the latest edition of [Rails World](https://rubyonrails.org/world/2024) and watching [DHH's talk](https://www.youtube.com/watch?v=iqXjGiQ_D-A) made me realize that generating views on the backend with Rails is no longer synonymous with slow, ugly interfaces that neglect UX. With Hotwire, through Turbo and Stimulus, it is possible to create applications as complex as Gmail ([Hey](https://hey.com/)) or Slack ([Campfire](https://once.com/campfire)). And this became even more surreal with [Turbo 8](https://evilmartians.com/chronicles/the-future-of-full-stack-rails-turbo-morph-drive).

<img width="100%" style="width:100%" src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExYnprZ2x6bjdudzZ1Y2NqOXlqdGVwMzZvY2ZhOGFlbDFpazdpOWJ0ZCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/vUvxwcgQ6Iqlm2p6g9/giphy.gif">

But the great benefit of returning to Server-Side Rendering (SSR) is not needing APIs. Creating an API, whether REST or GraphQL, makes development slower. And it's not just Rails realizing this: there's Elixir with LiveView, PHP with [Livewire](https://livewire.laravel.com/docs/quickstart), and even JS itself with [HTMX](https://htmx.org/), not to mention [React](https://react.dev/reference/react-dom/server). As Deno puts it: [The Future (and the Past) of the Web is Server Side Rendering](https://deno.com/blog/the-future-and-past-is-server-side-rendering).

## Components are here to stay

As we saw in Part I of this article, views in React/Vue/Angular introduced the concept of components. These components are small interface blocks that condense style (Tailwind), structure (HTML), and behavior (JavaScript). Componentization, besides promoting reuse and making testing easier, also makes customization easier. But most importantly, they come ready-made. You don't need to reinvent the wheel. A great example is [Shadcn](https://ui.shadcn.com/), launched only in March 2023 and already at 60k stars on GitHub.

Unfortunately, Rails views were created in a pre-component context. html.erb is very good for writing HTML, but it is not good for writing behavior. Partials, in turn, are great for writing behavior, but they are terrible for HTML. The first solution that emerged to address this was [ViewComponent](https://viewcomponent.org/), a framework created by the folks at GitHub. ViewComponent is our React, allowing the frontend to be thought of in terms of components. It is the most popular solution today. Although I have never used it, I feel it lacked boldness. It seems to me like a fusion of partials with ERB.
But what disappointed me the most was that [GitHub is migrating from Rails to React](https://news.ycombinator.com/item?id=33576722) and the maintainers of ViewComponent themselves [don't seem very enthusiastic](https://www.youtube.com/watch?v=YdeuXQJkZrs).

As an alternative to ViewComponent, I found [Phlex](https://www.phlex.fun/). It has the boldness of creating components using only Ruby code. This is the kind of boldness that scares. I was scared when I first saw React put HTML, JS, and CSS (styled-components) inside a component. I was scared to see Tailwind create classes for each CSS property and HTML elements built with dozens of classes. Letting go of this "prejudice" and testing Phlex was important here. But the decisive factor was following the creator of Phlex, [Joel Drapper](https://x.com/joeldrapper), on social media and realizing how motivated he is.

## Component library

As we do not like to reinvent the wheel here at Linkana, we decided to look for component libraries that we could use and evaluated:

- https://phlexui.com/
- https://primer.style/guides/rails
- https://shadcn.rails-components.com/
- https://railsdesigner.com/
- https://zestui.com/
- https://railsui.com/
- https://protos.inhouse.work/

We ended up choosing [Phlex UI](https://phlexui.com/), not because we believe it is finished, but because we believe the path it has chosen best fits the premise of reusable and easily customizable components. It is a library clearly inspired by Shadcn that brought some of Shadcn's innovations to the Rails world.

1. It encourages you to [customize](https://phlexui.com/docs/customizing_components) the components by copying them into your codebase.
2. A component structure very similar to React:

Shadcn

```javascript
<Accordion type="single" collapsible className="w-full">
  <AccordionItem value="item-1">
    <AccordionTrigger>Is it accessible?</AccordionTrigger>
    <AccordionContent>
      Yes. It adheres to the WAI-ARIA design pattern.
    </AccordionContent>
  </AccordionItem>
</Accordion>
```

PhlexUI

```ruby
render PhlexUI::Accordion.new do
  render PhlexUI::Accordion::Item.new do
    render PhlexUI::Accordion::Trigger.new do
      p { "What is PhlexUI?" }
    end
    render PhlexUI::Accordion::Content.new do
      p { "PhlexUI is a UI component library for Ruby devs who want to build better, faster." }
    end
  end
end
```

3. All CSS is written with Tailwind inside the component itself. It is very easy to make small adjustments/changes.
4. The components are made up of several very simple Ruby classes. The code is very easy to understand.

![Phlex UI Components](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kzn0y1wap4fq21dnl438.png)

5. Integration with Stimulus through controllers specific to each component.

## Putting it into production - SSR + SPA

The first step in starting the migration to SSR with Phlex and Hotwire was to map the PhlexUI components and place them in [Lookbook](https://lookbook.build/) inside our application with our brand colors.

![Lookbook](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hmnmmo5gbazzc3nvya30.png)

The second step was to create a Rails version of all navigation elements. Sidebar, tabs, etc. This way, we managed to rewrite route by route using SSR while the React (SPA) routes continued to function.
For example:

![dashboard](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kmwybxgx8bse27myzfp4.png)

Everything that is part of the Dashboard is already using SSR. Navigating between the "Resumo" (Summary) and "Lead time" tabs uses Hotwire's Turbo.

![Pendencies](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/krc5by32evcpda0v6jpt.png)

When the user clicks on the pendencies tab, we disable Turbo and a full HTML request is made to the backend, which loads our React SPA and displays the page. With that, the application switches to SPA mode and requests go back through our GraphQL API, exchanging JSON. If the user clicks on a link that is already on Hotwire, we make React perform a full HTML request and the application returns to SSR. The transition from SPA to SSR is practically imperceptible; SSR to SPA takes a bit longer, though not enough to hurt the user experience. Our plan is to finish this transition by the next edition of [Tropical.rb](https://www.tropicalrb.com/).

Since June 11th, we have been running Phlex in production and, as a form of gratitude, Linkana has become one of the [sponsors of the project](https://github.com/sponsors/joeldrapper#sponsors). We have also started contributing to the PhlexUI project. If you are also working with components in Rails, leave a comment about what you have been doing.
cirdes
1,890,382
Day 7: Building a React Project 🏗️
Welcome to Day 7 of our React.js learning journey Today, we'll put all the concepts we've learned so...
0
2024-06-16T15:43:50
https://dev.to/dipakahirav/day-7-building-a-react-project-1m8c
react, webdev, javascript
Welcome to Day 7 of our React.js learning journey! Today, we'll put all the concepts we've learned so far into practice by building a small React project. This hands-on experience will help solidify your understanding of React and prepare you for building larger applications. 🎉

Please subscribe to my [YouTube channel](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1) to support my channel and get more web development tutorials.

### Project Overview: PhotoWall 📸

For our project, we'll create a simple photo-sharing application called PhotoWall. Users will be able to upload images, view a gallery of shared photos, and interact with the photos by liking or commenting on them. 📱

### Setting up the Project 📁

1. **Create a new React project** using `create-react-app` or Vite. 🎉
2. **Install any additional dependencies** needed for the project, such as routing or styling libraries. 📦
3. **Set up the project structure**, creating directories for components, pages, and assets. 🗂️

### Implementing Features 🎨

1. **Create the main components** for the application, such as `Header`, `PhotoGallery`, `UploadForm`, and `PhotoDetails`. 📋
2. **Implement the functionality** for each component, such as:
   - Rendering a list of photos in the gallery 📸
   - Handling photo uploads and storing them in state 📁
   - Displaying photo details when a user clicks on an image 🔍
   - Allowing users to like and comment on photos 👍
3. **Use React Router** to set up routes for different pages, such as the home page, upload page, and photo details page. 🛣️
4. **Style the components** using CSS or a styling library like Styled Components or Emotion. 💃

### Example Code 📝

Here's an example of how you might implement the `PhotoGallery` component (a sketch of the `UploadForm` component follows at the end of this post):

```jsx
import React, { useState, useEffect } from 'react';
import { Link } from 'react-router-dom';

function PhotoGallery() {
  const [photos, setPhotos] = useState([]);

  useEffect(() => {
    // Fetch photos from an API or database
    fetchPhotos();
  }, []);

  const fetchPhotos = async () => {
    const response = await fetch('/api/photos');
    const data = await response.json();
    setPhotos(data);
  };

  return (
    <div>
      <h2>Photo Gallery</h2>
      <div className="photo-grid">
        {photos.map(photo => (
          <Link to={`/photos/${photo.id}`} key={photo.id}>
            <img src={photo.url} alt={photo.caption} />
          </Link>
        ))}
      </div>
    </div>
  );
}

export default PhotoGallery;
```

### Conclusion 🎉

By building the PhotoWall project, you've gained hands-on experience in applying the React concepts you've learned throughout this learning journey. You've created components, managed state, handled user interactions, and even integrated routing and styling. This project serves as a foundation for building more complex React applications in the future. Remember to keep practicing, experimenting, and exploring new libraries and techniques to continuously improve your React.js skills. 💪

---

Feel free to leave your comments or questions below. If you found this guide helpful, please share it with your peers and follow me for more web development tutorials. Happy coding!

### Follow and Subscribe:

- **Website**: [Dipak Ahirav](https://www.dipakahirav.com)
- **Email**: dipaksahirav@gmail.com
- **Instagram**: [devdivewithdipak](https://www.instagram.com/devdivewithdipak)
- **YouTube**: [devDive with Dipak](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1)
- **LinkedIn**: [Dipak Ahirav](https://www.linkedin.com/in/dipak-ahirav-606bba128)
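As the bonus sketch promised above (my own illustration, not part of the original post; the `onUpload` prop and the local-preview approach are assumptions), the `UploadForm` component could start like this:

```jsx
import React, { useState } from 'react';

// Hypothetical upload form: reads a file input, builds a local
// preview URL, and hands the new photo object up to the parent.
function UploadForm({ onUpload }) {
  const [caption, setCaption] = useState('');
  const [file, setFile] = useState(null);

  const handleSubmit = (e) => {
    e.preventDefault();
    if (!file) return;
    onUpload({
      id: Date.now(),
      caption,
      url: URL.createObjectURL(file), // local preview; a real app would POST to an API
    });
    setCaption('');
    setFile(null);
  };

  return (
    <form onSubmit={handleSubmit}>
      <input type="file" accept="image/*" onChange={(e) => setFile(e.target.files[0])} />
      <input
        type="text"
        value={caption}
        placeholder="Caption"
        onChange={(e) => setCaption(e.target.value)}
      />
      <button type="submit">Upload</button>
    </form>
  );
}

export default UploadForm;
```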
dipakahirav
1,890,381
Deploying a Kubernetes Cluster on Azure Kubernetes Service(AKS) with Terraform
Introduction Kubernetes, often abbreviated as K8s, is an open-source platform designed to...
0
2024-06-16T15:43:15
https://dev.to/audu97/deploying-a-kubernetes-cluster-on-azure-kubernetes-serviceaks-with-terraform-1b8g
kubernetes, aure, devops, cloud
### Introduction

Kubernetes, often abbreviated as K8s, is an open-source platform designed to automate deploying, scaling, and operating application containers. It allows you to manage containerized applications across a cluster of machines efficiently.

Azure Kubernetes Service (AKS) is a managed Kubernetes service offered by Microsoft as part of the Azure cloud platform. It gives organizations a way to deploy and manage their containerized applications at scale, leveraging the powerful features of Kubernetes. By providing a fully managed service that handles many of the underlying infrastructure and management chores, AKS makes it easier to build and operate Kubernetes clusters. Because of this, businesses can concentrate on their apps and services rather than worrying about the infrastructure as a whole.

Terraform is an open-source infrastructure-as-code software tool created by HashiCorp. It allows users to define and provision infrastructure on different cloud platforms, describing services in a high-level configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON.

In this article, I will discuss how I created an AKS cluster on Azure entirely using Terraform, provisioned an NGINX image, and set up Prometheus and Grafana for monitoring and alerting. This article assumes the reader has a basic understanding of Azure and Kubernetes and also has the Azure CLI installed and signed in.

### Creating the cluster

To begin, I created a directory named `azure-aks` to store my Terraform scripts. Then I created a `main.tf` file to hold the Terraform configuration for my resources. In the `main.tf` file, I began by defining the providers I would need for this project. Terraform providers are plugins that implement resource types and data sources. They serve as a bridge between Terraform and a service or platform, such as Azure, AWS, Kubernetes, etc. In my case, I need a provider for Azure to communicate with Azure services, a provider for Kubernetes to create Kubernetes resources, and Helm to access the Helm charts for Prometheus and Grafana.

```HCL
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      version = "3.107.0"
    }
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = "2.30.0"
    }
    helm = {
      source = "hashicorp/helm"
      version = "2.13.2"
    }
  }
}
```

After that I ran `terraform init` so Terraform downloads the plugins and the necessary dependencies. Next, I created the resource group and the Kubernetes cluster:

```HCL
resource "azurerm_resource_group" "aks-resource" {
  name     = "aks-resources"
  location = "France Central"
}

resource "azurerm_kubernetes_cluster" "test_cluster" {
  name                = "example-aks1"
  location            = azurerm_resource_group.aks-resource.location
  resource_group_name = azurerm_resource_group.aks-resource.name
  dns_prefix          = "testaks"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "Production"
  }
}
```

This creates a resource group named "aks-resources" in the "France Central" region and an Azure Kubernetes Service (AKS) cluster named "example-aks1" within that resource group. The AKS cluster has a default node pool with 2 nodes (a count you can increase to suit your needs) of size "Standard_D2_v2", and it uses a system-assigned managed identity. The cluster is also tagged "Environment: Production".
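One block worth calling out before moving on (it also appears in the full listing at the end of this article): the azurerm provider itself must be configured, even if only with an empty `features` block, or Terraform cannot talk to Azure:

```HCL
provider "azurerm" {
  # Configuration options
  features {}
}
```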
Once this was done, I ran the `terraform plan` command to see the resources that would be provisioned, then `terraform apply` to provision them. The reason I did this first is that, in order to get the kubeconfig file, I have to run the command `az aks get-credentials --resource-group <ResourceGroupName> --name <AKSClusterName>` (replacing ResourceGroupName and AKSClusterName with the names of my resource group and cluster), which requires the cluster to already exist.

The kubeconfig file is used to allow access to Kubernetes clusters. It contains the necessary details to connect to the cluster, such as the cluster API server address, user credentials, and namespaces. This file is used by kubectl and other Kubernetes client applications to communicate with the cluster's API server and manage Kubernetes resources. Essentially, it's like a key that allows you to access and control your Kubernetes cluster in the cloud. After running the `az aks get-credentials --resource-group <ResourceGroupName> --name <AKSClusterName>` command, the terminal outputs where the kubeconfig file is saved.

```HCL
data "azurerm_kubernetes_cluster" "test_cluster" {
  name                = azurerm_kubernetes_cluster.test_cluster.name
  resource_group_name = azurerm_resource_group.aks-resource.name
}
```

The data block is used to fetch data about an existing AKS cluster. It retrieves information about the AKS cluster with the specified name and resource group, to be used elsewhere in the Terraform configuration.

```HCL
resource "local_file" "kubeconfig" {
  content  = data.azurerm_kubernetes_cluster.test_cluster.kube_config_raw
  filename = "/home/ephraim/.kube/config"
}
```

The `resource "local_file" "kubeconfig"` block creates a local file that contains the kubeconfig of the retrieved AKS cluster. This kubeconfig is necessary to interact with my Kubernetes cluster using kubectl or other Kubernetes tools. The content of the file is the raw kubeconfig data from the AKS cluster, and it is saved to a specified path on my local machine (`/home/ephraim/.kube/config`).

```HCL
resource "null_resource" "wait_for_kubeconfig" {
  provisioner "local-exec" {
    command = "sleep 10"
  }
  depends_on = [ local_file.kubeconfig ]
}

provider "kubernetes" {
  config_path = local_file.kubeconfig.filename
}

provider "helm" {
  kubernetes {
    config_path = local_file.kubeconfig.filename
  }
}
```

The `null_resource` block introduces a delay in the Terraform execution, pausing for 10 seconds to ensure that the kubeconfig file is fully written. The `provider "kubernetes"` block configures the Kubernetes provider for Terraform, which allows me to manage my Kubernetes resources with Terraform. It uses the kubeconfig file created by the `local_file.kubeconfig` resource to connect to my AKS cluster. Similarly, the `provider "helm"` block configures the Helm provider, which lets me deploy Helm charts to my Kubernetes cluster. It also uses the same kubeconfig file for connectivity.

```HCL
resource "kubernetes_namespace" "test_namespace" {
  metadata {
    name = "monitoring"
  }
  depends_on = [ local_file.kubeconfig ]
}
```

The `resource "kubernetes_namespace"` block creates a Kubernetes namespace called "monitoring". A Kubernetes namespace is a way to divide cluster resources between multiple users; a sort of cluster within the Kubernetes cluster for related tasks or resources. In other words, it is used for grouping similar cluster resources to keep things organised. In this namespace, I am going to provision Prometheus and Grafana.
The `depends_on` attribute ensures that the namespace is not created until the `local_file.kubeconfig` file is applied. This means that Terraform will wait for the kubeconfig file to be available before it attempts to create the namespace.

```HCL
resource "helm_release" "prom-helm" {
  name       = "prometheus"
  repository = "https://prometheus-community.github.io/helm-charts"
  chart      = "prometheus"
  namespace  = kubernetes_namespace.test_namespace.metadata[0].name
  depends_on = [ kubernetes_namespace.test_namespace ]
}

resource "helm_release" "graf-helm" {
  name       = "grafana"
  repository = "https://grafana.github.io/helm-charts"
  chart      = "grafana"
  namespace  = kubernetes_namespace.test_namespace.metadata[0].name
  depends_on = [ kubernetes_namespace.test_namespace ]
}
```

The `resource "helm_release" "prom-helm"` block deploys Prometheus from the specified Helm chart repository. It sets the release name to "prometheus", uses the chart from the Prometheus community Helm repository, and deploys it to the monitoring namespace created by the `kubernetes_namespace.test_namespace` resource. The `resource "helm_release" "graf-helm"` block does the same for Grafana, deploying it from the Grafana Helm chart repository with the release name "grafana" to the same monitoring namespace. Both resources have a `depends_on` attribute that ensures they are created after the monitoring namespace, since they are both deployed into it.

```HCL
resource "kubernetes_deployment" "nginx_depl" {
  metadata {
    name      = "nginx-deployment"
    namespace = kubernetes_namespace.test_namespace.metadata[0].name
  }
  spec {
    replicas = 2
    selector {
      match_labels = {
        app = "nginx"
      }
    }
    template {
      metadata {
        labels = {
          app = "nginx"
        }
      }
      spec {
        container {
          name  = "nginx"
          image = "nginx:latest"
          port {
            container_port = 80
          }
        }
      }
    }
  }
  depends_on = [ kubernetes_namespace.test_namespace ]
}
```

This defines a Kubernetes deployment named "nginx-deployment" that will be created in the monitoring namespace. This deployment sets up two NGINX pods running in my Kubernetes cluster within the monitoring namespace, serving content on port 80. The `depends_on` attribute again makes sure it is created only after the namespace has been created.

```HCL
resource "kubernetes_service" "nginx-service" {
  metadata {
    name      = "nginx-service"
    namespace = kubernetes_namespace.test_namespace.metadata[0].name
  }
  spec {
    selector = {
      app = "nginx"
    }
    port {
      port        = 80
      target_port = 80
    }
    type = "LoadBalancer"
  }
  depends_on = [ kubernetes_namespace.test_namespace ]
}
```

This resource creates a Kubernetes service called "nginx-service", also in the monitoring namespace, of type LoadBalancer.
This also depends on the namespace created earlier. The full code looks like this:

```HCL
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      version = "3.107.0"
    }
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = "2.30.0"
    }
    helm = {
      source = "hashicorp/helm"
      version = "2.13.2"
    }
  }
}

provider "azurerm" {
  # Configuration options
  features {}
}

resource "azurerm_resource_group" "aks-resource" {
  name     = "aks-resources"
  location = "France Central"
}

resource "azurerm_kubernetes_cluster" "test_cluster" {
  name                = "example-aks1"
  location            = azurerm_resource_group.aks-resource.location
  resource_group_name = azurerm_resource_group.aks-resource.name
  dns_prefix          = "testaks"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "Production"
  }
}

data "azurerm_kubernetes_cluster" "test_cluster" {
  name                = azurerm_kubernetes_cluster.test_cluster.name
  resource_group_name = azurerm_resource_group.aks-resource.name
}

resource "local_file" "kubeconfig" {
  content  = data.azurerm_kubernetes_cluster.test_cluster.kube_config_raw
  filename = "/home/ephraim/.kube/config"
}

resource "null_resource" "wait_for_kubeconfig" {
  provisioner "local-exec" {
    command = "sleep 10"
  }
  depends_on = [ local_file.kubeconfig ]
}

provider "kubernetes" {
  config_path = local_file.kubeconfig.filename
}

provider "helm" {
  kubernetes {
    config_path = local_file.kubeconfig.filename
  }
}

resource "kubernetes_namespace" "test_namespace" {
  metadata {
    name = "monitoring"
  }
  depends_on = [ local_file.kubeconfig ]
}

resource "helm_release" "prom-helm" {
  name       = "prometheus"
  repository = "https://prometheus-community.github.io/helm-charts"
  chart      = "prometheus"
  namespace  = kubernetes_namespace.test_namespace.metadata[0].name
  depends_on = [ kubernetes_namespace.test_namespace ]
}

resource "helm_release" "graf-helm" {
  name       = "grafana"
  repository = "https://grafana.github.io/helm-charts"
  chart      = "grafana"
  namespace  = kubernetes_namespace.test_namespace.metadata[0].name
  depends_on = [ kubernetes_namespace.test_namespace ]
}

resource "kubernetes_deployment" "nginx_depl" {
  metadata {
    name      = "nginx-deployment"
    namespace = kubernetes_namespace.test_namespace.metadata[0].name
  }
  spec {
    replicas = 2
    selector {
      match_labels = {
        app = "nginx"
      }
    }
    template {
      metadata {
        labels = {
          app = "nginx"
        }
      }
      spec {
        container {
          name  = "nginx"
          image = "nginx:latest"
          port {
            container_port = 80
          }
        }
      }
    }
  }
  depends_on = [ kubernetes_namespace.test_namespace ]
}

resource "kubernetes_service" "nginx-service" {
  metadata {
    name      = "nginx-service"
    namespace = kubernetes_namespace.test_namespace.metadata[0].name
  }
  spec {
    selector = {
      app = "nginx"
    }
    port {
      port        = 80
      target_port = 80
    }
    type = "LoadBalancer"
  }
  depends_on = [ kubernetes_namespace.test_namespace ]
}

output "client_certificate" {
  value     = azurerm_kubernetes_cluster.test_cluster.kube_config[0].client_certificate
  sensitive = true
}

output "kube_config" {
  value     = azurerm_kubernetes_cluster.test_cluster.kube_config_raw
  sensitive = true
}
```

### Deployment

To deploy the finalized infrastructure to Azure, I ran `terraform plan` to preview the resources that would be created, followed by `terraform apply` to provision them. This process may take some time.

### Verify Deployment

To verify the successful deployment, I navigated to the 'Connect' tab in the cluster portal, where Azure provides commands for authentication and connection to my cluster.
After executing these commands, I successfully connected to my cluster. To view all my deployments, I ran the command `kubectl get deployments --namespace monitoring`. Everything was up and running correctly.

Additionally, I needed to verify that the NGINX service was set up correctly. Once NGINX, of type LoadBalancer, had been deployed, Kubernetes provisioned an external IP. To access it, I ran the command `kubectl get svc nginx-service --namespace monitoring`. From the 'External IP' column, I copied the IP address and pasted it into a browser. NGINX was running correctly!

Finally, I ran `terraform destroy` to remove all the resources.

### Challenges

The major challenge I faced was obtaining and using the kubeconfig file. I later realized that I needed to create the resource group and cluster first, then retrieve the kubeconfig file, before proceeding to create the other resources.

### Conclusion

In this article, I've walked through the process of deploying a Kubernetes cluster on Azure Kubernetes Service (AKS) using Terraform. I deployed an AKS cluster and configured Kubernetes resources, including NGINX, Prometheus, and Grafana, for my application's needs. To take this further, I intend to explore topics such as auto-scaling, continuous deployment pipelines, and multi-region clusters to further enhance the Kubernetes infrastructure.

The GitHub repo with the full code can be found [here](https://github.com/audu97/azure-aks)
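One way to avoid the two-phase workflow described in the Challenges section (a suggestion on my part, not what the article did) is Terraform's `-target` flag, which lets you provision the cluster first and the dependent Kubernetes resources afterwards from the same configuration:

```bash
# First pass: create only the AKS cluster (its resource group comes along as a dependency)
terraform apply -target=azurerm_kubernetes_cluster.test_cluster

# Second pass: create everything else (namespace, Helm releases, NGINX)
terraform apply
```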
audu97
1,890,380
🎨 Ultimate Front End Design Resource Guide: Elevate Your UI/UX Projects! ❤️
Welcome to the ultimate guide for all your front-end design needs! Whether you’re a seasoned designer...
0
2024-06-16T15:39:27
https://dev.to/aayush518/ultimate-front-end-design-resource-guide-elevate-your-uiux-projects-4550
ui, design, webdev
Welcome to the ultimate guide for all your front-end design needs! Whether you’re a seasoned designer or just starting out, we’ve got you covered with the best resources to make your projects stand out. Dive into a treasure trove of color palettes, fonts, icons, vectors, and more! ## 🌈 Color Sites Colors can make or break your design. Check out these amazing color resources to find the perfect palette: 1. [Color Hunt](https://colorhunt.co) - Curated color palettes ready to use. 2. [Klart](https://klart.io) - Generate stunning color combinations. 3. [Adobe Color](https://color.adobe.com) - Create and explore color schemes. 4. [Webkul Color Palettes](https://webkul.github.io) - Beautiful color inspirations. 5. [Pigment by ShapeFactory](https://pigment.shapefactory.co) - Unique color schemes. ## 💻 Font Download Sites Typography matters! Enhance your designs with these fantastic font resources: 1. [DaFont](https://dafont.com) - A vast collection of free fonts. 2. [1001 Fonts](https://1001fonts.com) - Find the perfect font for any project. 3. [Font Squirrel](https://fontsquirrel.com) - Quality fonts that are free for commercial use. 4. [Font Freak](https://fontfreak.com/fonts-new.htm) - Discover new and unique fonts. ## 🖼 Vector Logo Sites Need a logo? These sites have you covered with a plethora of vector logos: 1. [SeekLogo](https://seeklogo.com) - High-quality vector logos. 2. [LogoVector](https://logovector.net) - Download vector logos for free. 3. [Logotypes101](https://logotypes101.com) - Explore a wide range of logos. 4. [Logos-Vector](https://logos-vector.com) - Free vector logos to download. ## 🔵 Icon Download Sites Icons are essential for UI design. Find the perfect icons with these resources: 1. [Flaticon](https://flaticon.com) - Thousands of free icons. 2. [FreeIcons](https://freeicons.io) - High-quality icons for every need. 3. [IconStore](https://iconstore.co) - Beautifully crafted free icons. 4. [IconFinder](https://iconfinder.com) - A huge collection of icons. 5. [Digital Nomad Icons](https://digitalnomadicons.com) - Unique icons for digital nomads. ## 🖌 Brush Download Sites Enhance your digital artwork with these incredible brushes: 1. [BrushKing](https://brushking.eu) - A kingdom of brushes. 2. [Brusheezy](https://brusheezy.com/brushes) - Free Photoshop brushes. 3. [My Photoshop Brushes](https://myphotoshopbrushes.com) - A variety of brushes to choose from. 4. [FBrushes](https://fbrushes.com) - Free brushes for Photoshop. 5. [GFX Fever](https://gfxfever.com) - High-quality brushes. ## 📁 File Download Sites Download files and resources for your design projects from these sites: 1. [Freepik](https://freepik.com) - Free vectors, photos, and PSD files. 2. [All Free Download](https://all-free-download.com) - Free graphic resources. 3. [Vecteezy](https://vecteezy.com) - Free and premium vectors. 4. [FreeImages](https://freeimages.com) - Free stock photos and illustrations. ## 📸 Mockup Download Sites Showcase your designs with these realistic mockups: 1. [Mockups for Free](https://mockupsforfree.com) - Free and premium mockups. 2. [Mockup World](https://mockupworld.co) - High-quality free mockups. 3. [Graphic Burger](https://graphicburger.com) - Free design resources. 4. [ZippyPixels](https://zippypixels.com) - Premium quality mockups. ## ✨ Inspiration Sites Get inspired by these amazing design inspiration sites: 1. [Inspirationde](https://inspirationde.com) - Fresh design inspirations. 2. [Designspiration](https://designspiration.net) - Find and share creative ideas. 3. 
[Pinterest](https://pinterest.com) - Endless inspiration for every project. 4. [Dribbble](https://dribbble.com) - Discover the world’s top designers and creatives. ## 🎥 Video Sites Need video content? Check out these fantastic video resources: 1. [Mixkit](https://mixkit.co) - Free video clips and music. 2. [Coverr](https://coverr.co) - Beautiful free videos for your homepage. 3. [Motion Places](https://motionplaces.com) - High-quality stock footage. 4. [Videezy](https://videezy.com) - Free HD stock video footage. ## 🖼 Photo Sites Without Background Looking for images with transparent backgrounds? These sites are perfect: 1. [CleanPNG](https://cleanpng.com) - Free PNG images. 2. [PNGimg](https://pngimg.com) - High-quality PNG images. 3. [FootyRenders](https://footyrenders.com) - Football renders without backgrounds. 4. [PNGTree](https://pngtree.com) - Free PNG images and backgrounds. ## 📷 Photo Sites Free and high-quality photos can be found here: 1. [Unsplash](https://unsplash.com) - Beautiful free images and photos. 2. [Pexels](https://pexels.com) - Best free stock photos and videos. 3. [Pixabay](https://pixabay.com) - Stunning free images and royalty-free stock. 4. [StockSnap](https://stocksnap.io) - Beautiful free stock photos. 5. [Burst by Shopify](https://burst.shopify.com) - Free high-resolution images. ## Explore these Incredible Resources Take your front-end UI/UX projects to the next level with these fantastic resources. Happy designing! 🎨❤️
aayush518
1,890,378
Twilio Challenge: AI-Powered Voice Assistant
This is a submission for the Twilio Challenge What I Built I created an AI-powered voice...
0
2024-06-16T15:37:29
https://dev.to/thatcoolguy/twilio-challenge-ai-powered-voice-assistant-30j8
devchallenge, twiliochallenge, ai, twilio
*This is a submission for the [Twilio Challenge](https://dev.to/challenges/twilio)* ## What I Built I created an AI-powered voice assistant designed to handle complex questions. Many existing voice assistants struggle with these types of inquiries, which can be frustrating for users. My assistant aims to bridge this gap. ## Demo To try out this app, make a call to **+1 (423) 454-3174**. You will be greeted with a prompt; say your request after it. ### Source code {% github ThatCoolGuyyy/Twilio-Gemini %} Here is an article I wrote with detailed steps to build the PHP/Laravel version of this voice assistant: https://www.twilio.com/en-us/blog/build-ai-powered-voice-assistant-twilio-laravel-openai ## Twilio and AI I used **Twilio Programmable Voice** to capture the user's request; the request is then passed to the **Gemini API**, which processes it and returns a response. **Twilio Programmable Voice** then reads the response back to the user in voice form. **Twilio Functions** was used to host the code. ## Additional Prize Categories **Twilio Times Two** - The project uses **Twilio Programmable Voice** and **Twilio Functions**. **Impactful Innovators:** My AI-powered voice assistant addresses the challenge of accessing information. It helps bridge the gap for users who might struggle with traditional search methods, like the elderly, people with disabilities, or those with limited literacy skills.
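For anyone curious about the general shape of the Twilio Functions code, here is a minimal sketch (my assumption of the flow, not the deployed source; the `/respond` path and prompt text are illustrative): `gather` with speech input captures the caller's request, and a second Function can forward `event.SpeechResult` to Gemini and read the answer back with `say`.

```javascript
// Hypothetical Twilio Function: greet the caller and capture speech.
exports.handler = function (context, event, callback) {
  const twiml = new Twilio.twiml.VoiceResponse();

  const gather = twiml.gather({
    input: 'speech',      // use speech recognition instead of keypad digits
    action: '/respond',   // Function that would pass event.SpeechResult to Gemini
    speechTimeout: 'auto',
  });
  gather.say('Hi! What would you like to know?');

  callback(null, twiml);
};
```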
thatcoolguy
1,890,377
A beginners guide to Kubernetes with Docker
Kubernetes (often abbreviated as K8s) is an open-source platform for automating the deployment,...
0
2024-06-16T15:31:42
https://dev.to/ferdousazad/a-beginners-guide-to-kubernetes-with-docker-1e4m
kubernetes, docker, tutorial, webdev
**Kubernetes** (often abbreviated as K8s) is an open-source platform for automating the deployment, scaling, and operation of application containers. It works with various container runtimes, including Docker, to orchestrate containerized applications across clusters of machines. Here's a brief tutorial on Kubernetes and how it works with Docker containers:

**1. Basic Concepts in Kubernetes**

- **Cluster:** A set of nodes (machines) running containerized applications managed by Kubernetes.
- **Node:** A single machine in a Kubernetes cluster. It can be a physical or virtual machine.
- **Pod:** The smallest deployable unit in Kubernetes, which can contain one or more containers.
- **Service:** An abstraction that defines a logical set of pods and a policy by which to access them.
- **Deployment:** Manages a set of identical pods, ensuring the correct number of replicas and allowing updates.
- **ConfigMap and Secret:** Objects for managing configuration data and sensitive information, respectively.

**2. Installing Kubernetes**

You can install Kubernetes locally using tools like Minikube or kind (Kubernetes in Docker). For production, you'd typically use a cloud provider's managed Kubernetes service, such as GKE (Google Kubernetes Engine), EKS (Amazon Elastic Kubernetes Service), or AKS (Azure Kubernetes Service).

**Install Minikube**

Follow the installation guide for your operating system on the [Minikube website](https://minikube.sigs.k8s.io/docs/start/)

**Start Minikube:**

```
minikube start
```

**Install kubectl:** Follow the installation guide on the Kubernetes [website](https://kubernetes.io/)

**3. Creating a Simple Kubernetes Application**

Step 1: Create a Deployment

Create a Docker image (`Dockerfile`):

```
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]
```

**Build the Docker image:**

```
docker build -t node-app:latest .
```

**Push the image to a container registry (e.g., Docker Hub):**

```
docker tag node-app:latest <your-dockerhub-username>/node-app:latest
docker push <your-dockerhub-username>/node-app:latest
```

**Create a Kubernetes Deployment YAML file (`deployment.yaml`):**

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
      - name: node-app
        image: <your-dockerhub-username>/node-app:latest
        ports:
        - containerPort: 3000
```

Apply the deployment:

```
kubectl apply -f deployment.yaml
```

**Check the deployment and pods:**

```
kubectl get deployments
kubectl get pods
```

Step 2: Create a Service

**Create a Service YAML file (`service.yaml`):**

```
apiVersion: v1
kind: Service
metadata:
  name: node-app-service
spec:
  selector:
    app: node-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
```

Apply the service:

```
kubectl apply -f service.yaml
```

Check the service:

```
kubectl get services
```

Access the application: use the external IP provided by the service (in Minikube, use `minikube service node-app-service --url`).
**4. Communicating Between Services**

Step 1: Set Up a Database Service

Create a PostgreSQL Deployment and Service (`postgres.yaml`):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:latest
        env:
        - name: POSTGRES_DB
          value: mydatabase
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: mysecretpassword
        ports:
        - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
```

Apply the deployment and service:

```
kubectl apply -f postgres.yaml
```

Step 2: Update the Node.js Application to Use the Database

Modify `app.js` to connect to PostgreSQL:

```
const express = require('express');
const { Pool } = require('pg');
const app = express();
const PORT = 3000;

const pool = new Pool({
  user: 'postgres',
  host: 'postgres-service',
  database: 'mydatabase',
  password: 'mysecretpassword',
  port: 5432,
});

app.get('/', async (req, res) => {
  const result = await pool.query('SELECT NOW()');
  res.send(`PostgreSQL time: ${result.rows[0].now}`);
});

app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});
```

Build and push the updated Docker image:

```
docker build -t <your-dockerhub-username>/node-app:latest .
docker push <your-dockerhub-username>/node-app:latest
```

Update the deployment:

```
kubectl set image deployment/node-app-deployment node-app=<your-dockerhub-username>/node-app:latest
```

**5. Scaling and Updating Applications**

Scale the deployment:

```
kubectl scale deployment node-app-deployment --replicas=5
```

Update the deployment with a new image:

```
kubectl set image deployment/node-app-deployment node-app=<your-dockerhub-username>/node-app:new-version
```

**6. Managing Configurations with ConfigMaps and Secrets**

Create a ConfigMap (`configmap.yaml`):

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: postgres-service
  DATABASE_USER: postgres
  DATABASE_PASSWORD: mysecretpassword
  DATABASE_NAME: mydatabase
```

Apply the ConfigMap:

```
kubectl apply -f configmap.yaml
```

Update the deployment so the container reads its settings from the ConfigMap:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
      - name: node-app
        image: <your-dockerhub-username>/node-app:latest
        ports:
        - containerPort: 3000
        env:
        - name: DATABASE_HOST
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: DATABASE_HOST
        - name: DATABASE_USER
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: DATABASE_USER
        - name: DATABASE_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: DATABASE_PASSWORD
        - name: DATABASE_NAME
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: DATABASE_NAME
```

Make sure to replace `<your-dockerhub-username>` with your actual Docker Hub username. The ConfigMap provides database configuration information that the node-app container in the Deployment uses through environment variables. The `kubectl apply -f configmap.yaml` command applies the ConfigMap configuration to your Kubernetes cluster.

This tutorial covers the basics of Kubernetes, deploying and managing Docker containers, and communicating between services. For more advanced topics, consider exploring the Kubernetes documentation and other resources.
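Since the last section's title also mentions Secrets but only shows a ConfigMap, here is a minimal sketch of the equivalent Secret (my addition; in practice, a credential like the password above belongs in a Secret rather than a ConfigMap):

```
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:                  # plain-text here; Kubernetes stores it base64-encoded
  DATABASE_PASSWORD: mysecretpassword
```

Reference it from the container the same way as the ConfigMap values, but with `secretKeyRef` instead of `configMapKeyRef`.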
ferdousazad
1,890,376
Transform your Use Cases to Software with ZERO Code
Being able to transform a formal specification into working software has been the wet dream of every...
0
2024-06-16T15:29:38
https://ainiro.io/blog/transform-your-use-cases-to-software-with-zero-code
lowcode, ai, productivity, programming
Being able to transform a formal specification into working software has been the wet dream of every single project manager for half a century. Simply put, that's because 99% of all time and resources is spent in the implementation phase. If you can skip the implementation phase, you're basically saving 99% of your resources.

Over the last couple of weeks we've been working extensively on [AI Functions](https://ainiro.io/blog/getting-started-with-ai-functions). AI Functions are small building blocks performing some tiny task. Examples include:

* Send email
* List contacts
* Scrape website
* Search the web
* Etc ...

When a software developer implements a use case, what he or she is doing is really just assembling fundamental building blocks such as the above into larger wholes, resulting in what we refer to as _"business logic"_. An entire industry has popped up over the last 50 years trying to smooth out the process of implementing business logic and use cases correctly.

> With AI, No-Code, and AINIRO, this entire industry is now obsolete

There is nothing magical occurring in this process though, and it can just as well be performed by an AI. To understand why and how, let's look at a video where I'm demonstrating how the AI can effectively replace the entire implementation phase of such use cases, by assembling smaller blocks together, to solve high-level business requirements.

{% embed https://www.youtube.com/watch?v=PD0Wi67Wi2c %}

## The End of Software Development

I suspect that 5 years down the road, nobody serious about software development will actually write code anymore. There is only a finite number of _"basic building blocks"_ we will ever need. The rest of the job will be orchestrating these smaller building blocks together, using natural language, to apply business logic, allowing the machine to autonomously fix the rest using AI.

To give you one example, realise that in the above video, I took the following _"specification"_ and automatically transformed it into an AI assistant, capable of executing my task.

**Send marketing email**

```text
If the user asks you to send a marketing email, you will need to know what contact to send it to, and which account the contact belongs to.

Ask the user what contact he wants to send the email to, for then to find the contact. When you have the contact, you can use the contact's reference to the account to find the account.

When you know both the contact and the account, proceed with scraping the account's website using the website URL you found from the account, and create a personalised sales email trying to get the contact into a meeting with us, to discuss AI Chatbots, AI Assistants, and custom AI solutions, solving specific business requirements you anticipate the company might have as you scraped their website. Create use cases relevant to the account's verticals.

Show the email to the user, in addition to who you will send it to, including the email address - But do NOT send the email before the user has confirmed he or she wants you to actually send it.

## Additional information

1. To find the function required to scrape a website, you can use the "get-context" method and search for "Scrape website URL".
2. To find the function required to send an email, you can search for "Send email".
3. To find a contact you can search for "List contacts"
4. To find an account you can search for "List accounts"
5. Send the email from Thomas Hansen, CEO at AINIRO
6. Include a Calendly link to allow the user to book a meeting.
The link is http://calendly.com/ainiro/ainiro-introduction
```

The paradox is that the above resembles every single use case I have ever seen in my professional life - and every single software developer who has ever worked on a medium or large project has probably seen thousands of similar use cases. The idea is that such use cases form a formal specification for a software developer, leaving no room for misunderstanding and allowing the developer to implement code that somehow solves the above problem. So creating such use cases is a skill we've become really good at over the last 50 years in the industry.

However, we can now completely skip the software developer, and move straight from use case to functioning software. In this process, 99% of the resources required to create working software have effectively vanished. As an additional bonus, what we're ending up with is not a traditional software system, but an AI Assistant you can write to using natural language, and even talk to (soon!)

To understand why, realise the above use case example was simply copied and pasted into our VSS RAG database, resulting in a record in our database that will be automatically transferred to OpenAI when the user says he or she wants to send a marketing email. The result is that **the use case becomes the software**!

> Basically, when you're done with writing down your use case, you're done implementing the software!

This results in a 1,000x better User eXperience (UX), where we're using our existing senses to communicate with the machine, and the machine autonomously executes some task for us, as illustrated in the above video. As a further bonus, we can now take our eyes away from our phones and interact with our software even as we're driving our cars, by leveraging our voice.

> No need for AI driving our cars anymore, you can ditch your project Elon ... 😂

## The Trillion Dollar Bet

The software industry is worth trillions of dollars annually. Probably somewhere between 5 and 10 trillion dollars in total. My bet is that the _entire industry will completely collapse_, and be replaced by something completely different, where we're no longer interacting with our software using a graphical user interface, but instead our voice, and to a much lesser extent our eyes.

At this point the sceptics amongst you might claim I'm wrong, tell me about the shortcomings of existing initiatives like [Devin](https://devin.ai/), and maybe claim that even Devin and [GitHub CoPilot](https://github.com/features/copilot) are at best assistance tools for existing developers to make them more productive. And yes, this is correct, but 99.7% of the world's population literally doesn't care about tools such as Devin and CoPilot. 99.7% of the earth's population cannot create software, they don't know any programming languages, and they literally don't care.

All they want is a system that sends marketing emails, something that checks their emails and replies to their wife that they'll be late home from work today, or something that can book an airline ticket for them in May to go to Greece on vacation.
As software developers we've been _"shielded"_ from the above facts, painting ourselves into our own little corner of lingo and abstract constructs, such as OOP, O/RM, DBMS, etc - in an attempt to simplify our jobs - while the end result is that we completely forgot our _purpose_, which is to create working software solving the requirements our _"customers"_ (99.7% of the world) happen to have.

> This is the reason why initiatives like Devin exist, because devs still believe it's about optimising the way we code - when the _real_ task at hand is to completely _eliminate_ the requirement for coding!

Do you think your grandma is going to wait for you to order her airline ticket because you're going to go prompt engineer Devin and have it produce a system that allows her to book an airline ticket? Of course not. Or rather, to be correct: yes, maybe your grandma, but nobody else will bother to wait ...

## The future is No-Code and AI, not Devin or GitHub CoPilot

For us developers, Devin and GitHub CoPilot just might be the thing. For 99.7% of the world they're of zero interest. These 99.7%, however, can easily prompt engineer a couple of training snippets together, creating their own [AI workflows](https://ainiro.io/ai-workflows) that somehow solve their problems. By the time you're done having Devin create your _"airline booking system"_, they're already past that, having implemented an additional 1,000 AI functions, solving some 1,000 _additional_ problems for them - in addition to, of course, having perfectly solved the entire _"airline ticket problem"_.

> The future belongs to the illiterate, those without software development skills, the ones who don't give a sjait

Sorry guys, the party is over - you can all go home ...
polterguy
1,890,375
Get Rid of Tightly Coupled Modules and Circular Dependencies in NestJS
NestJS is a great NodeJS framework that injects a lot of refreshment into the ecosystem of Node’s...
0
2024-06-16T15:26:52
https://dev.to/kishieel/get-rid-of-tightly-coupled-modules-and-circular-dependencies-in-nestjs-3do1
nestjs, node, dependencyinversion, eventdriven
NestJS is a great NodeJS framework that injects a lot of refreshment into the ecosystem of Node's backend solutions. With its robust module system, it allows planning and building a scalable architecture made of modules that wrap related logic together. While working within a modular environment like this, you may sometimes encounter circular dependencies caused by tightly coupled modules. In most cases, you will recognize this as the error presented below.

```text
[Nest] 2788 - 06/10/2023, 12:56:50 PM LOG [InjectorLogger]
> Nest encountered an undefined dependency.
> This may be due to a circular import or a missing dependency declaration.

[Nest] 2788 - 06/10/2023, 12:56:50 PM ERROR [ExceptionHandler]
> Nest can't resolve dependencies of the UserService (?).
> Please make sure that the argument dependency at index [0] is available in the UserModule context.
```

This is caused by two services that depend on each other to perform their logic. In the example above, `PostService` is a dependency of `UserService`, but `UserService` is also a dependency of `PostService`. This leaves Nest's dependency injection container unable to resolve either of them.

As per Nest's documentation, you may of course use the `forwardRef()` function. However, this is only a temporary solution. If you don't truly solve the problem of tight coupling and circular dependencies, adding newer modules will become quite painful, and you will have to wrap most of your dependencies with the mentioned function.

In today's post, I would like to suggest another solution, which of course may not be applicable to all cases. But even if it solves only half of your circular dependencies, that is already a good step forward. So, without further ado, let's examine an example problem and the proposed solution.

### The problem

Let's consider a not-so-simple application for writing blog posts. Apart from just creating posts, we were required to implement a bunch of additional actions like sending notifications to the author's followers, increasing the author's reputation after post creation, configuring a payment gateway if the post is behind a paywall, and some other fancy features. In the end, we may end up with code similar to the following.

```typescript
class PostService {
    constructor(
        private readonly postRepository: PostRepository,
        private readonly userService: UserService,
        private readonly reputationService: ReputationService,
        private readonly notificationService: NotificationService,
        private readonly trackingService: TrackingService,
        private readonly paymentService: PaymentService,
        private readonly moderationService: ModerationService
    ) {}

    public createPost(args: CreatePostArgs): Post {
        const post = this.postRepository.create(args);

        this.reputationService.increaseReputation(args.userId);
        this.notificationService.notifyFollowersAboutPost(post);
        this.userService.updateUserActivity('post.created', post);
        this.trackingService.registerTrackable('post', post);
        this.moderationService.checkPostContentForViolations(post.content);

        if (args.isPremium) {
            this.paymentService.chargeUserForPremiumPost(args.userId);
        }

        return post;
    }
}
```

This example is obviously made up, so please don't pay too much attention to the details. I just want you to notice how many services `PostService` depends on, and imagine that these services may also be dependent on `PostService`.
For example, the notification service may require additional data from the post service to dispatch notifications, or the `ModerationService` may need to modify the post again via `PostService` after moderation. This is where circular dependencies occur. Now that we have a grasp of the problem, let's explore the solution I want to propose.

### The solution

The solution I want to propose for this problem is to use the well-known concept of Event-Driven Architecture. Instead of calling all subsequent actions from the `createPost` method, we will simply create the post there and emit an event. Then, any module interested in performing some action related to the event may do so without crossing its logical borders.

NestJS already comes with handy tools that let us benefit from events. If you don't have the package installed yet, simply add `@nestjs/event-emitter` to your application.

```shell
yarn add @nestjs/event-emitter
```

When the package is installed, add the `EventEmitterModule` to the root module of your application.

```typescript
import { Module } from '@nestjs/common';
import { EventEmitterModule } from '@nestjs/event-emitter';

@Module({
    imports: [
        EventEmitterModule.forRoot(),
        // ...
    ],
})
export class AppModule {}
```

When we have it ready, we may get rid of the other `PostService` dependencies and replace them with just one: `EventEmitter2`. After this, we may also remove the subsequent method invocations from the `createPost` method and instead emit an event with its key and payload. The payload may be defined as a separate interface or class if you wish, but here, for presentation purposes, I will just emit the created post as the payload.

```typescript
import { EventEmitter2 } from '@nestjs/event-emitter';

class PostsService {
    constructor(
        private readonly postRepository: PostRepository,
        private readonly eventEmitter: EventEmitter2,
    ) {}

    public createPost(args: CreatePostArgs): Post {
        const post = this.postRepository.create(args);
        this.eventEmitter.emit('post.created', post);
        return post;
    }
}
```

The last thing we have to do is add a listener to the modules that may be interested in this event. For example, in the notification module, we may have a listener as below. In the other modules, the code will be pretty much the same, just with other services involved in event handling.

```typescript
import { OnEvent } from '@nestjs/event-emitter';

class NotificationsListener {
    constructor(
        private readonly notificationsService: NotificationsService
    ) {}

    @OnEvent('post.created')
    handlePostCreated(post: Post) {
        this.notificationsService.notifyFollowersAboutPost(post);
    }
}
```

This way, `NotificationsService` and `PostService` are only loosely coupled. The `PostService` is no longer dependent on `NotificationService`. We get rid of the circular dependency here, yet the functionality keeps working - yay!

### Summary

To sum up, in this article we explored the idea of introducing events into your application to solve the issue of circular dependencies. This approach helps modules stay within their borders while still reacting to actions performed in other modules. Even though it may not be applicable to every situation, it definitely fixes the one presented today.

I want to say thank you to all who read this article. I would love to hear your thoughts about this proposal and your ways of tackling circular dependencies in your applications, so feel free to share. Don't forget to check out my other articles for more tips and insights. Happy hacking!
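P.S. One small refinement worth sketching (my addition, not part of the setup above): instead of emitting the raw entity, you can model the payload as a dedicated event class, which makes the contract between emitter and listeners explicit and type-safe. The `PostCreatedEvent` name is illustrative.

```typescript
// post-created.event.ts - a hypothetical payload class for the 'post.created' event
export class PostCreatedEvent {
    constructor(
        public readonly postId: string,
        public readonly authorId: string,
    ) {}
}

// Emitting side: publish the typed payload instead of the whole entity
this.eventEmitter.emit('post.created', new PostCreatedEvent(post.id, post.authorId));

// Listening side: the handler now documents exactly what it receives
@OnEvent('post.created')
handlePostCreated(event: PostCreatedEvent) {
    // the listener fetches whatever extra data it needs from the id
    this.notificationsService.notifyFollowersAboutPost(event.postId);
}
```

This way a listener can never accidentally reach into fields the emitter did not intend to share.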
kishieel
1,890,319
Getting Started With React Native: Installation & Setup
React Native is a popular framework for building mobile applications using JavaScript and React. In...
27,888
2024-06-16T15:26:37
https://dev.to/itsproali/getting-started-with-react-native-installation-setup-37c5
reactnative, react, android, javascript
React Native is a popular framework for building mobile applications using JavaScript and React. In this guide, we'll walk you through the steps to get started with React Native, including installation, environment setup, and project initialization. Today I will cover only the Windows setup!

#### Prerequisites
- [Chocolatey](https://chocolatey.org/install)
- [Node JS](https://nodejs.org)

---

## 1. Installation
First of all, we need to install the `JDK (Java Development Kit)` and `Android Studio` on our machine.

- **JDK Install:** Open an Administrator Command Prompt (right-click Command Prompt and select "Run as Administrator"), then run the following command:
``` powershell
choco install -y microsoft-openjdk17
```
- **Android Studio Installation:** There are lots of versions of [Android Studio](https://developer.android.com/studio/index.html). Download a version suited to your PC's requirements. After installation, make sure you have downloaded these (or the equivalent) dependencies:
  - `Android SDK Platform 34`
  - `Intel x86 Atom_64 System Image` or `Google APIs Intel x86 Atom System Image`

Next, select the "SDK Tools" tab and check the box next to "Show Package Details" here as well. Look for and expand the `Android SDK Build-Tools` entry, then make sure that `34.0.0` is selected. Finally, click "Apply" to download and install the Android SDK and related build tools.

---

## 2. Environment Setup
The React Native tools require some environment variables to be set up to build apps with native code.

**1. Configure the ANDROID_HOME environment variable**
- Open the **Windows Control Panel**
- Click on **User Accounts**, then click **User Accounts** again
- Click on **Change my environment variables**
- Click on **New...** to create a new `ANDROID_HOME` user variable that points to the path of your Android SDK:

![Environment Variable Setup](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x5qgtzsp0z6iumov75mb.png)

**2. Add platform-tools to Path**
- Open the **Windows Control Panel**
- Click on **User Accounts**, then click **User Accounts** again
- Click on **Change my environment variables**
- Select the **Path** variable.
- Click **Edit**.
- Click **New** and add the path to platform-tools to the list: `C:\Users\username\AppData\Local\Android\Sdk\platform-tools`

We will need an Android device to run our React Native Android app. This can be either a `physical Android device`, or more commonly, an `Android Virtual Device`, which allows you to emulate an Android device on your computer. Either way, you will need to prepare the device to run Android apps for development.

---

## 3. Create a New React Native Project
Once the development environment is set up, you can create a new React Native project using the React Native CLI:
``` powershell
npx react-native init MyReactNativeApp
```
This command creates a new directory called `MyReactNativeApp` with all the necessary files and dependencies. Navigate to your project directory:
``` powershell
cd MyReactNativeApp
```

**React Navigation Native Stack Setup (optional)**
If you don't want to implement React Navigation, you may skip this part. [React Navigation](https://reactnavigation.org) is the most popular routing and navigation library for `Expo` and `React Native` apps. Run the commands below to work with React Navigation.
``` powershell
npm install @react-navigation/native @react-navigation/native-stack
```
``` powershell
npm install react-native-screens react-native-safe-area-context
```
The `react-native-screens` package requires one additional configuration step to work properly on Android devices. Edit the `MainActivity.kt` or `MainActivity.java` file located under `android/app/src/main/java/<your package name>/`.

Add the highlighted code to the body of the `MainActivity` class:
``` kotlin
override fun onCreate(savedInstanceState: Bundle?) {
  super.onCreate(null)
}
```
and make sure to add the following import statement at the top of this file, below your package statement:
``` kotlin
import android.os.Bundle
```
This change is required to avoid crashes related to the View state not being persisted consistently across Activity restarts.

Now, we need to wrap the whole app in `NavigationContainer`. Usually, you'd do this in your entry file, such as index.js or App.js:
``` typescript
import * as React from 'react';
import { NavigationContainer } from '@react-navigation/native';

export default function App() {
  return (
    <NavigationContainer>{/* Rest of your app code */}</NavigationContainer>
  );
}
```
If you want to use the native stack, follow the code block below to set it up:
``` typescript
import { createNativeStackNavigator } from '@react-navigation/native-stack';

const Stack = createNativeStackNavigator();

function MyStack() {
  return (
    <Stack.Navigator>
      <Stack.Screen name="Home" component={Home} />
      <Stack.Screen name="Notifications" component={Notifications} />
      <Stack.Screen name="Profile" component={Profile} />
      <Stack.Screen name="Settings" component={Settings} />
    </Stack.Navigator>
  );
}
```
---

## 4. Run the project
If all the setup is done, it's time to see some magic by running the project. Open the terminal and run the command below:
``` powershell
npx react-native run-android
```
---

### Conclusion
You now have a basic React Native setup ready for development. From here, you can start building your mobile application using React Native. Happy coding!

#### Note:
I am not a React Native expert. If you have any questions, run into any issues, or spot any corrections, feel free to leave a comment below.

#### About the Author:
I am `Mohammad Ali`, a Full Stack Developer (MERN). You can connect with me and see my work through the following links:
- [Portfolio](https://itsproali.me/)
- [LinkedIn](https://www.linkedin.com/in/itsproali/)
- [GitHub](https://www.github.com/itsproali/)

Feel free to reach out for any collaborations or inquiries. Happy coding!
itsproali
1,873,360
Modern Front-End Development with React
Introduction React is a popular JavaScript library for building user interfaces,...
27,559
2024-06-16T15:23:00
https://dev.to/suhaspalani/modern-front-end-development-with-react-4kao
webdev, react, frontend, development
#### Introduction React is a popular JavaScript library for building user interfaces, particularly single-page applications. Developed by Facebook, React allows developers to create large web applications that can update and render efficiently in response to data changes. This week, we'll explore the fundamentals of React, including components, state, props, and hooks. #### Importance of React in Modern Front-End Development React has revolutionized front-end development with its component-based architecture, making it easier to build and maintain complex UIs. Understanding React is essential for modern web developers, as it is widely used in the industry. #### React Basics **Setting Up a React Environment:** - **Using Create React App**: The easiest way to set up a React project. ```bash npx create-react-app my-app cd my-app npm start ``` - **Folder Structure**: Overview of the default folder structure created by Create React App. **React Components:** - **Function Components**: The simplest way to define a component. ```javascript function Welcome(props) { return <h1>Hello, {props.name}</h1>; } ``` - **Class Components**: An alternative way to define a component using ES6 classes. ```javascript class Welcome extends React.Component { render() { return <h1>Hello, {this.props.name}</h1>; } } ``` #### State and Props **Understanding State and Props:** - **Props**: Short for properties, props are read-only attributes passed from parent to child components. ```javascript function Welcome(props) { return <h1>Hello, {props.name}</h1>; } // Using the component <Welcome name="Alice" /> ``` - **State**: State is managed within the component and can change over time. ```javascript class Counter extends React.Component { constructor(props) { super(props); this.state = { count: 0 }; } increment = () => { this.setState({ count: this.state.count + 1 }); } render() { return ( <div> <p>Count: {this.state.count}</p> <button onClick={this.increment}>Increment</button> </div> ); } } ``` #### Handling Events **Event Handling in React:** - **Handling Events**: Binding event handlers to elements. ```javascript function Button() { function handleClick() { alert('Button clicked!'); } return ( <button onClick={handleClick}>Click me</button> ); } ``` - **Passing Arguments to Event Handlers**: ```javascript function Button(props) { function handleClick(id) { alert('Button ' + id + ' clicked!'); } return ( <button onClick={() => handleClick(props.id)}>Click me</button> ); } ``` #### Introduction to Hooks **What are Hooks?**: Functions that let you use state and other React features in function components. - **useState Hook**: Adds state to function components. ```javascript import React, { useState } from 'react'; function Counter() { const [count, setCount] = useState(0); return ( <div> <p>Count: {count}</p> <button onClick={() => setCount(count + 1)}>Increment</button> </div> ); } ``` - **useEffect Hook**: Performs side effects in function components. 
```javascript import React, { useEffect, useState } from 'react'; function Timer() { const [seconds, setSeconds] = useState(0); useEffect(() => { const interval = setInterval(() => { setSeconds(seconds => seconds + 1); }, 1000); return () => clearInterval(interval); }, []); return <p>Seconds: {seconds}</p>; } ``` #### Component Lifecycle **Understanding Component Lifecycle Methods:** - **Mounting**: `componentDidMount()` - **Updating**: `componentDidUpdate()` - **Unmounting**: `componentWillUnmount()` **Example Using Class Components**: ```javascript class Timer extends React.Component { constructor(props) { super(props); this.state = { seconds: 0 }; } componentDidMount() { this.interval = setInterval(() => this.setState({ seconds: this.state.seconds + 1 }), 1000); } componentDidUpdate(prevProps, prevState) { console.log('Component updated'); } componentWillUnmount() { clearInterval(this.interval); } render() { return <p>Seconds: {this.state.seconds}</p>; } } ``` #### Conclusion Mastering React is a significant step towards becoming a proficient front-end developer. Its component-based architecture and hooks provide a powerful way to build interactive and dynamic web applications. #### Resources for Further Learning - **Online Courses**: Websites like Udemy, Pluralsight, and freeCodeCamp offer comprehensive React courses. - **Books**: "Learning React" by Alex Banks and Eve Porcello, "React Up & Running" by Stoyan Stefanov. - **Documentation and References**: The official [React documentation](https://reactjs.org/docs/getting-started.html) is an excellent resource. - **Communities**: Join developer communities on platforms like Stack Overflow, Reddit, and GitHub for support and networking.
suhaspalani
1,890,374
Why TypeScript Might Not Be the Best Choice for Your Development Project.
TypeScript has gained significant popularity among developers for its strong typing, improved...
0
2024-06-16T15:21:08
https://dev.to/gimkelum/why-typescript-might-not-be-the-best-choice-for-your-development-project-ank
typescript, development, javascript
TypeScript has gained significant popularity among developers for its strong typing, improved tooling, and enhanced code quality. It's hailed as a significant improvement over vanilla JavaScript, promising to catch errors early and make large codebases more maintainable. However, despite its many advantages, TypeScript is not without its downsides. In this blog post, we'll explore some of the reasons why TypeScript might not be the best choice for your development project.

**1. Steep Learning Curve**

For developers who are accustomed to JavaScript, transitioning to TypeScript can be challenging. The introduction of static types requires learning new concepts and syntax. This learning curve can slow down the development process, especially for teams with tight deadlines. Developers need to understand not only TypeScript's features but also how to properly integrate them into their existing workflow.

**2. Increased Complexity**

TypeScript adds a layer of complexity to your codebase. With additional syntax and type definitions, the code can become harder to read and maintain, particularly for new team members or developers unfamiliar with TypeScript (a small illustration follows below). The added complexity can lead to longer development times and can potentially introduce more bugs if types are not correctly defined or used.

To continue reading: [url](https://www.8orinfinityfacts.com/2024/06/why-typescript-might-not-be-best-choice.html)
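As a quick, made-up illustration of the extra syntax point 2 refers to (my example, not taken from the continuation linked above), compare a plain JavaScript function with its typed counterpart:

```typescript
// Plain JavaScript: short, but nothing stops a caller from passing the wrong shape
function total(items) {
  return items.reduce((sum, item) => sum + item.price * item.qty, 0);
}

// TypeScript: the same logic now carries an interface and annotations.
// Callers get compile-time safety, but readers have more to parse.
interface LineItem {
  price: number;
  qty: number;
}

function totalTyped(items: LineItem[]): number {
  return items.reduce((sum, item) => sum + item.price * item.qty, 0);
}
```

Whether that trade-off is worth it depends on the team and the size of the codebase, which is exactly the point at issue.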
gimkelum
1,890,361
Shopify Breakthrough: Achieve E-Commerce Excellence
Introduction In today's digital age, establishing a successful e-commerce business requires...
0
2024-06-16T15:19:16
https://dev.to/msaadi/shopify-breakthrough-achieve-e-commerce-excellence-d6g
shopify
## Introduction

In today's digital age, establishing a successful e-commerce business requires leveraging powerful platforms like Shopify. [Shopify](https://www.niais.org/shopify-course-in-pakistan) stands out as a robust, user-friendly solution designed to empower entrepreneurs and businesses of all sizes to create, manage, and scale online stores efficiently. It offers a comprehensive suite of tools and features tailored to meet the diverse needs of e-commerce operations, from storefront customization to marketing and analytics.

Shopify's popularity stems from its intuitive interface, which simplifies the complexities of online selling. Whether you're a startup venturing into e-commerce or an established brand looking to expand online, Shopify provides the infrastructure and flexibility needed to thrive in the competitive e-commerce landscape. This article delves into various aspects of utilizing Shopify effectively, guiding you through the steps to achieve e-commerce excellence.

## Choosing the Right Shopify Plan

Selecting the appropriate Shopify plan is crucial for optimizing your e-commerce operations. Shopify offers several tiers of service, each catering to different business needs and growth stages: Basic Shopify, Shopify, and Advanced Shopify. Understanding the distinctions between these plans is essential to aligning your store's capabilities with your business objectives and budget.

**Basic Shopify** is ideal for startups and small businesses aiming to establish an online presence without extensive customization needs. It provides essential features such as website and blog creation, unlimited product listings, and 24/7 support.

**Shopify** offers additional functionalities like gift cards, professional reporting, and lower transaction fees compared to Basic Shopify. This plan suits growing businesses looking to enhance customer engagement and streamline operations.

**Advanced Shopify** is tailored for high-volume businesses requiring advanced reporting, real-time carrier shipping, and more robust analytics capabilities. It accommodates businesses poised for significant growth and scalability in e-commerce.

Choosing the right plan involves assessing your current needs, projected growth, and budget constraints. Upgrading or downgrading Shopify plans is flexible, allowing businesses to adjust as they evolve. This section explored the features and benefits of each plan to help you make an informed decision for your e-commerce venture.

## Setting Up Your Shopify Store

Setting up your Shopify store is the foundational step towards establishing a successful online presence. The process begins with creating a Shopify account and logging in to the admin dashboard, where you'll manage all aspects of your store's operations.

**Creating Your Account and Logging In:** To get started, visit Shopify's website and sign up for an account. You'll need to provide basic information about your business and choose a unique domain name for your store. Once registered, you can log in to the Shopify admin panel, where you'll have access to a suite of tools and settings to customize your storefront.

**Customizing Your Store Theme:** Shopify offers a variety of customizable themes to suit different industries and aesthetics. Choose a theme that aligns with your brand identity and product offerings. Customize the theme's colors, fonts, and layout using Shopify's intuitive editor to create a visually appealing and user-friendly storefront.
**Adding Products and Collections:** Populate your store with products by adding product listings that include detailed descriptions, pricing, and high-quality images. Organize your products into collections based on categories, seasons, or promotions to facilitate navigation for customers.

Setting up your Shopify store requires attention to detail and strategic planning to create a seamless shopping experience for your customers, ensuring that you're ready to start selling online with confidence.
msaadi
1,890,360
Kubernetes Cluster Setup Guide 2024
Common Installation on both worker and control plane nodes # using 'sudo su' is not a...
0
2024-06-16T15:19:10
https://dev.to/rahuldhole/kubernetes-cluster-setup-guide-2024-2d7l
kubernetes, ubuntu, cluster
## Common installation on both worker and control plane nodes

```sh
# Using 'sudo su' is not a good practice, so every command below uses sudo.
sudo apt update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

sudo apt install docker.io -y
sudo usermod -aG docker $USER
sudo chmod 777 /var/run/docker.sock

# Update the version if needed
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt update && sudo apt install kubeadm kubectl kubelet -y

# VM-related setup
sudo apt install containerd
sudo mkdir /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
echo "Enabled SystemdCgroup in containerd default config"

sudo sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
echo "IPv4 forwarding has been enabled. Bridging enabled!"

echo "br_netfilter" | sudo tee /etc/modules-load.d/k8s.conf > /dev/null
echo "br_netfilter has been added to /etc/modules-load.d/k8s.conf."

sudo swapoff -a
echo "Disabled swap"
echo "Edit /etc/fstab and disable swap if swap was enabled"
echo "Reboot the server."
```

### Control plane

**Note:** Replace the endpoint IP with the host IP and the node name with the hostname, and keep the pod network CIDR as it is.

```sh
# tmux
sudo kubeadm init --control-plane-endpoint=172.27.5.14 --node-name k8s-master --pod-network-cidr=10.244.0.0/16
```

```sh
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

kubectl get nodes
kubectl get pods -A

echo "Please wait a few minutes to get all pods running before joining any worker nodes."
```

### Worker

#### Join as a worker

```sh
sudo kubeadm reset   # reverts any previous kubeadm state; pre-flight checks run automatically

# Run the join command printed by the control plane, with sudo. Sample command:
# sudo kubeadm join 172.27.5.14:6443 --token ocks85.u2sqfn330l36ypkc \
#   --discovery-token-ca-cert-hash sha256:939be6a03f1a9014bfbb98507086e453fc83cd109319895871d27f9772653a1d

# Be careful: if the join command contains --control-plane, it joins as one more master node
```

## Join as a control plane

```sh
# On the existing master/control plane
kubeadm token create --print-join-command

# Get the discovery token CA cert hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

# Generate the certificate key needed for --certificate-key
sudo kubeadm init phase upload-certs --upload-certs
```

```sh
# On the expected new control plane
sudo kubeadm reset
sudo kubeadm join <control_plane_endpoint>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<discovery_token_ca_cert_hash> --control-plane --certificate-key <certificate_key>
```

## Useful commands

```sh
sudo kubeadm token create --print-join-command
# Port 6443 needs to be open
```

## Troubleshoot

1. Wait for all the control plane pods to be running before joining new workers.
2. Have plenty of disk space; the setup takes about 4GB on the control plane and 3GB on a worker node.
3. Re-print the join command if it has expired.
4. API server communication failures: the master node must have a static IP.

## References

https://github.com/LondheShubham153/kubestarter/blob/main/kubeadm_installation.md
https://www.learnlinux.tv/how-to-build-an-awesome-kubernetes-cluster-using-proxmox-virtual-environment/
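Finally, a quick smoke test (my addition; it assumes the cluster and flannel are already up) confirms that pods actually get scheduled and run:

```sh
# Run a throwaway nginx pod, confirm it reaches Running, then clean up
kubectl run nginx-test --image=nginx
kubectl get pods -o wide
kubectl delete pod nginx-test
```

If the pod sticks in `Pending` or `ContainerCreating`, re-check the CNI pods with `kubectl get pods -n kube-flannel` before joining more nodes.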
rahuldhole
1,890,344
Making a progress bar in easy steps
Here i will be giving you easy steps to build a progress bar using HTML and CSS only. We will be...
0
2024-06-16T15:18:22
https://dev.to/sunder_mehra_246c4308e1dd/making-a-horizontal-progress-bar-in-easy-steps-4bdc
css, html, web3, webdev
Here I will give you easy steps to build a progress bar using HTML and CSS only. We will be using `@keyframes` for the CSS animation.

**Step 1: HTML**
Create a div and another nested div.
```
<div class="outer-box">
  <div class="inner-box">80%</div>
</div>
```

**Step 2: CSS**
Give a width to the outer-box and the inner-box, with some padding, in CSS. Here I have given a padding of 10px. Now add an animation with the name "progressbar" and make its timing function linear. Give the animation any duration you like; here I have given it 5 seconds. Now, using `@keyframes`, define the animation's `from` and `to` states. You can also replace `from` and `to` with `0%` and `100%` if you like.
```
.outer-box{
  width:300px;
  padding:10px;
  background-color: blueviolet;
  border-radius:10px;
}
.inner-box{
  text-align: center;
  max-width: 280px;
  padding: 10px;
  background-color: #61cf71;
  animation: progressbar linear forwards;
  animation-duration: 5s;
  border-radius: 10px;
  color: white;
  font-family: cursive;
}
@keyframes progressbar {
  from{width:1%}
  to{width:80%;}
}
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sw1r2fpgfy2d9kiypf6w.png)

This is a simple progress bar. More complex progress bars can be made using JavaScript, as sketched below.

Thanks. Feel free to ask about any query.
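For instance, here is a minimal JavaScript sketch (reusing the `.inner-box` element from above; the timing values are arbitrary) that drives the width from script, so the bar can reflect a real progress value instead of a fixed keyframe:

```javascript
// Grab the inner bar defined in the HTML above
const innerBox = document.querySelector('.inner-box');

// Set the bar to a given percentage (0-100)
function setProgress(percent) {
  innerBox.style.width = percent + '%';
  innerBox.textContent = percent + '%';
}

// Example: simulate work completing in 10% steps
let progress = 0;
const timer = setInterval(() => {
  progress += 10;
  setProgress(progress);
  if (progress >= 80) clearInterval(timer); // stop at the 80% target
}, 500);
```

When driving the width from JavaScript like this, remove the CSS `animation` declarations so the two mechanisms don't fight over the same property.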
sunder_mehra_246c4308e1dd
1,890,359
Docker a beginners guide
Docker is a platform for developing, shipping, and running applications in containers. Containers are...
0
2024-06-16T15:15:36
https://dev.to/ferdousazad/docker-a-beginners-guide-ea3
docker, webdev, tutorial, node
Docker is a platform for developing, shipping, and running applications in containers. Containers are lightweight and contain everything needed to run an application, making them portable and consistent across environments. Here's a brief tutorial on Docker, including creating containers, networking between containers, and setting up a simple multi-container application involving a database, cache, and services written in Node.js and Go.

## **1. Install Docker**

Install Docker from Docker's official [website](https://www.docker.com/).

## **2. Basic Docker Commands**

**Pull an image:**
`docker pull <image-name>`
Example: `docker pull node:14`

**Run a container:**
`docker run -d --name <container-name> <image-name>`
Example: `docker run -d --name my-node-container node:14`

**List running containers:**
`docker ps`

**Stop a container:**
`docker stop <container-name>`

**Remove a container:**
`docker rm <container-name>`

## **3. Creating a Node.js Application Container**

**1. Create a simple Node.js application:**
```
mkdir node-app
cd node-app
npm init -y
npm install express
```

**2. Create `app.js`:**
```
const express = require('express');
const app = express();
const PORT = 3000;

app.get('/', (req, res) => {
  res.send('Hello from Node.js!');
});

app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});
```

**3. Create `Dockerfile`:**
```
FROM node:14
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]
```

**4. Build the Docker image:**
`docker build -t node-app .`

**5. Run the container:**
`docker run -d --name node-app-container -p 3000:3000 node-app`

## **4. Creating a Go Application Container**

**1. Create a simple Go application:**
```
mkdir go-app
cd go-app
```

**2. Create `main.go`:**
```
package main

import (
    "fmt"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello from Go!")
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}
```

**3. Create `Dockerfile`:**
```
FROM golang:1.16
WORKDIR /app
COPY . .
RUN go build -o main .
EXPOSE 8080
CMD ["./main"]
```

**4. Build the Docker image:**
`docker build -t go-app .`

**5. Run the container:**
`docker run -d --name go-app-container -p 8080:8080 go-app`

## **5. Setting Up a Database Container**

Run a PostgreSQL container:
```
docker run -d --name postgres-container -e POSTGRES_PASSWORD=mysecretpassword -e POSTGRES_DB=mydatabase -p 5432:5432 postgres
```

## **6. Setting Up a Redis Cache Container**

Run a Redis container:
`docker run -d --name redis-container -p 6379:6379 redis`

## **7. Networking Between Containers**

Docker provides several ways to network containers. The simplest way is to use a user-defined bridge network, which also gives containers DNS resolution by container name.

**1. Create a custom network:**
`docker network create my-network`

**2. Run the containers in the same network:**
```
docker run -d --name postgres-container --network my-network -e POSTGRES_PASSWORD=mysecretpassword -e POSTGRES_DB=mydatabase postgres
docker run -d --name redis-container --network my-network redis
docker run -d --name node-app-container --network my-network -p 3000:3000 node-app
docker run -d --name go-app-container --network my-network -p 8080:8080 go-app
```

## **8. Connecting Services to Database and Cache**
Example Node.js (`app.js`) connecting to PostgreSQL and Redis:

**1. Install dependencies:**
`npm install pg redis`

**2. Modify `app.js`** (this snippet uses the callback-style API of the `redis` v3 package):
```
const express = require('express');
const { Pool } = require('pg');
const redis = require('redis');
const app = express();
const PORT = 3000;

const pool = new Pool({
  user: 'postgres',
  host: 'postgres-container',
  database: 'mydatabase',
  password: 'mysecretpassword',
  port: 5432,
});

const client = redis.createClient({
  host: 'redis-container',
  port: 6379,
});

app.get('/', async (req, res) => {
  const result = await pool.query('SELECT NOW()');
  client.set('key', 'value');
  client.get('key', (err, value) => {
    res.send(`PostgreSQL time: ${result.rows[0].now}, Redis value: ${value}`);
  });
});

app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});
```

Example Go (`main.go`) connecting to PostgreSQL and Redis:

**1. Install dependencies:**
```
go get github.com/lib/pq
go get github.com/go-redis/redis/v8
```

**2. Modify `main.go`:**
```
package main

import (
    "context"
    "database/sql"
    "fmt"
    "net/http"

    "github.com/go-redis/redis/v8"
    _ "github.com/lib/pq"
)

var ctx = context.Background()

func handler(w http.ResponseWriter, r *http.Request) {
    connStr := "user=postgres password=mysecretpassword dbname=mydatabase host=postgres-container sslmode=disable"
    db, err := sql.Open("postgres", connStr)
    if err != nil {
        panic(err)
    }
    defer db.Close()

    var now string
    err = db.QueryRow("SELECT NOW()").Scan(&now)
    if err != nil {
        panic(err)
    }

    rdb := redis.NewClient(&redis.Options{
        Addr: "redis-container:6379",
    })

    err = rdb.Set(ctx, "key", "value", 0).Err()
    if err != nil {
        panic(err)
    }

    val, err := rdb.Get(ctx, "key").Result()
    if err != nil {
        panic(err)
    }

    fmt.Fprintf(w, "PostgreSQL time: %s, Redis value: %s", now, val)
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}
```

## **9. Docker Compose**

To simplify the process, you can use Docker Compose to define and run multi-container Docker applications.

Create `docker-compose.yml`:
```
version: '3'
services:
  postgres:
    image: postgres
    environment:
      POSTGRES_PASSWORD: mysecretpassword
      POSTGRES_DB: mydatabase
    networks:
      - my-network
  redis:
    image: redis
    networks:
      - my-network
  node-app:
    build: ./node-app
    ports:
      - "3000:3000"
    networks:
      - my-network
  go-app:
    build: ./go-app
    ports:
      - "8080:8080"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
```

Note that under Docker Compose the containers are reachable by their *service* names (`postgres`, `redis`), so the hostnames in the app code above (`postgres-container`, `redis-container`) must be changed to match when running via Compose.

**Run the application:**
`docker-compose up --build`

By following this tutorial, you can set up a multi-container application with Docker, including networking between containers and connecting services to a database and cache.
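One refinement worth adding (my sketch; note that `depends_on` only orders container *startup*, it does not wait for PostgreSQL to be ready to accept connections): declare startup ordering so the app services start after the database and cache.

```yaml
# Sketch: the node-app service from docker-compose.yml with depends_on added
  node-app:
    build: ./node-app
    ports:
      - "3000:3000"
    networks:
      - my-network
    depends_on:
      - postgres
      - redis
```

For true readiness, the application should retry its initial database connection, or a healthcheck-based `depends_on` condition can be used in newer Compose versions.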
ferdousazad
1,890,358
Host Website for FREE using Github Pages
-First Login to your Github account and then choose the repository which you want to deploy...
0
2024-06-16T15:11:54
https://dev.to/mahimabhardwaj/host-website-for-free-using-github-pages-1496
webdev, github, javascript, beginners
First, log in to your **GitHub account** and then choose the repository which you want to [deploy online](https://youtu.be/bO1i5ObvdJg?feature=shared).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c84kc644k7pwy0d3exy7.JPG)

Then go to Settings and select Pages.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3e4jgj410bc1g27sof5a.JPG)

After selecting Pages, go to the **branch option** and select the branch as `main`.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vel26t1rlb71e9ue9cpy.JPG)

Now save the changes and, after **selecting the branch**, refresh the page **2-3 times**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cxe6681p3t4yu5qupxoy.JPG)

Now you will see a link, which means your project is **deployed**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vtfv9ugsj2whih8q2ve2.JPG)

Now, click on that link to **visit your site**, and you will see your project has been **successfully deployed**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/apuyp2gyp1z3t5ezx5jf.JPG)
mahimabhardwaj
1,890,357
.NET Fundamentals (Minimal API)
🔍 What is .NET .Net (pronounced "dot net") is a free and open source application platform....
27,572
2024-06-16T15:08:42
https://dev.to/suneeh/net-fundamentals-minimal-api-1h9
git, devops, github, beginners
## 🔍 What is .NET

.NET (pronounced "dot net") is a free and open source application platform. The support by Microsoft and the regular updates and new features are what make this platform so useful. It can be used for a variety of different things, such as:

- Mobile Apps
- Desktop Apps
- Microservices
- Game Development
- Machine Learning
- Web Development

In this article I want to focus on web development and the most important things you will need when building your first web app. To be completely honest, I will only be covering the API side of things, knowing that [Blazor](https://dotnet.microsoft.com/en-us/apps/aspnet/web-apps/blazor) can also provide an integrated frontend for your website. (Side note, I normally go with Angular for my frontends - let me know if you want me to write about it as well.)

## 🆚 Minimal APIs vs Controllers

Traditional controllers follow the MVC ([Model-View-Controller](https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller)) pattern to separate concerns as you should. They follow the typical conventions and are very well structured. Since they were the only way for most of .NET's lifetime, it is clear that most apps use this way to write APIs.

```c#
public class HelloController : ControllerBase
{
    [HttpGet("hello")]
    public IActionResult GetHello()
    {
        return Ok("Hello World!");
    }

    [HttpPost("echo")]
    public IActionResult Echo([FromBody] string message)
    {
        return Ok(message);
    }
}
```

Minimal APIs were introduced in .NET 6 to simplify API creation within .NET. The syntax is more concise, there is less boilerplate, and you can still do most things that you could do with controllers (as of .NET 8). Since Microsoft is heavily working on the feature parity of minimal APIs, I assume that in the future this will be the preferred way to build APIs from scratch - but this might just be me.

```c#
...
app.MapGet("/hello", () => "Hello World!");
app.MapPost("/echo", (string message) => Results.Ok(message));
...
```

I think the examples are already very telling, and I do not need to mention that one of them seems more _minimal_ than the other, do I?

## 🖥️ .NET CLI

To start a new app you need to install the [.NET SDK](https://dotnet.microsoft.com/en-us/download) (Software Development Kit), which includes the [.NET CLI](https://learn.microsoft.com/en-us/dotnet/core/tools/). To create a new app, run `dotnet new web -o [Project Name]` and open it with the editor / IDE of your choice. I recommend JetBrains Rider, but if you want to go with free software you can also use Visual Studio or Visual Studio Code with some extensions.

You want to look out for the `Program.cs`, which is the entry point of your application. Check the content and it should look somewhat like this:

```c#
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/hello", () => "Hello World!");

app.Run();
```

This creates a `builder` that can also handle things like dependency injection, database connections, authentication and authorization, as well as lots of other things that I will talk about later. As you see, there is an endpoint defined for `/hello`. If we run the app via our IDE (or the .NET CLI: `dotnet run`) the API is waiting for a call. Try calling `http://localhost:5292/hello` (check `/Properties/launchSettings.json` of your project to see what port you are running on) in your browser, and you should get a "Hello World" message on your screen.

You can check all options of the dotnet CLI by using `dotnet --help` in your terminal.
Most of the commands will be handled by your IDE, but you can also execute them from the terminal if you are fancy.

## 📋 Validation

As you have seen in the examples before, those APIs really are _minimal_. Let's have a closer look at an endpoint that also validates the request, for the GET route `/colorSelector/{color}`:

```c#
string ColorName(string color) => $"Color specified: {color}!";

app.MapGet("/colorSelector/{color}", ColorName)
    .AddEndpointFilter(async (invocationContext, next) =>
    {
        var color = invocationContext.GetArgument<string>(0);

        if (color == "Red")
        {
            return Results.Problem("Red not allowed!");
        }
        return await next(invocationContext);
    });
```

As you see, we defined the route as the first parameter of `app.MapGet()` and map the specified color to our function by putting curly braces around it and matching the name. The validation is built with an endpoint filter that reads the argument from the endpoint's invocation, then runs its checks (in this case the color is not allowed to be red, because red is evil) and calls `next` so other filters can also run. A filter is not always a validator; sometimes it just logs requests, adds an entry for statistics, or performs some other effect that should happen on every endpoint call.

## 🧭 Routing

In many cases you want to cluster multiple endpoints. Reasons could be that you want to use the same filter on all of them, or use the same authorization for them, or just don't want to write the path multiple times. In this case we can group multiple routes like this:

```c#
var user = app.MapGroup("/user");
var admin = user.MapGroup("/admin");

admin.AddEndpointFilter((context, next) =>
{
    app.Logger.LogInformation("Admin route was called.");
    return next(context);
});

user.MapGet("/", () => "Hello!");
```

This helps with organizing different paths and endpoints, as well as reducing duplication. It also allows you to cluster your endpoints in folders or files outside `Program.cs` without losing track of a file.

## 💉 Dependency Injection (e.g. Database) and Configuration

You can inject services into your API endpoints to reduce the complexity of the endpoint. Mostly you want to extract some logic into services so you can reuse them. In this example I set up a database in my project and inject the DbContext into my endpoint.

Program.cs

```c#
using Microsoft.EntityFrameworkCore;
using backend.ShopDbContext;

var builder = WebApplication.CreateBuilder(args);

string? connectionString = builder.Configuration.GetConnectionString("DefaultConnection");
builder.Services.AddDbContext<ShopDbContext>(opt => opt.UseNpgsql(connectionString));

var app = builder.Build();

using (var Scope = app.Services.CreateScope())
{
    var context = Scope.ServiceProvider.GetRequiredService<ShopDbContext>();
    context.Database.Migrate();
}

app.MapGet("/", async (ShopDbContext ctx) =>
{
    var prod = await ctx.Products.FirstOrDefaultAsync();
    return prod;
});

app.Run();
```

Appsettings.json

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=localhost; Port=5432; Database=postgres; Username=postgres; Password=MYPASSWORD;"
  }
}
```

To use Npgsql I installed a [NuGet package](https://www.nuget.org/packages/Npgsql.EntityFrameworkCore.PostgreSQL) that works with EF Core and supports [PostgreSQL databases](https://www.postgresql.org/). The code reads the [connection string](https://www.connectionstrings.com/) named `DefaultConnection` from the `ConnectionStrings` object in the configuration (Appsettings.json).
To set up the database to match the schema defined in the code, you have to apply the schema by calling `Database.Migrate()`, so I do it at the start of the application. The endpoint now injects the `ShopDbContext` magically (you could specify it by using the `[FromServices]` attribute) and gets the first element, which is a rather useless example. Most likely you want to return all entries, or specify the ID of the entry you want and look for it in the database.

## 🙏🏽 Thanks

Thank you so much if you read this article all the way! Leave a comment if you have any questions, I'll be more than happy to answer right away. If you are shy you can also message me directly on [GitHub](https://github.com/Suneeh), [Instagram](https://www.instagram.com/_suneeh/) or [TikTok](https://www.tiktok.com/@_suneeh).
suneeh
1,890,356
Building the Internet of Things with AWS IoT Core
Building the Internet of Things with AWS IoT Core The Internet of Things (IoT) is...
0
2024-06-16T15:04:58
https://dev.to/virajlakshitha/building-the-internet-of-things-with-aws-iot-core-2914
![usecase_content](https://cdn-images-1.medium.com/proxy/1*zqfBK-ivKOyE5TLv4mHkkA.png) # Building the Internet of Things with AWS IoT Core The Internet of Things (IoT) is rapidly transforming our world, connecting billions of devices and enabling unprecedented levels of automation and data exchange. At the heart of any successful IoT implementation lies a robust and scalable infrastructure capable of securely connecting, managing, and processing data from a vast network of devices. This is where AWS IoT Core comes in. ### What is AWS IoT Core? AWS IoT Core is a fully managed service that makes it easy to connect, manage, and secure billions of devices at scale. It provides a secure communication channel between devices and the cloud, enabling bidirectional communication and data synchronization. With features like device authentication, authorization, message brokering, and device shadows, IoT Core lays a strong foundation for building complex and secure IoT applications. ### Use Cases for AWS IoT Core AWS IoT Core's flexibility makes it suitable for a wide range of IoT use cases across various industries. Let's explore some of these in detail: **1. Industrial Automation and Monitoring:** In industrial settings, IoT Core enables real-time monitoring and control of equipment, leading to increased efficiency and reduced downtime. Sensors embedded in machinery can continuously stream data on parameters like temperature, pressure, and vibration to IoT Core. This data can be analyzed to predict potential failures, trigger maintenance alerts, and optimize operational parameters. **2. Smart Homes and Cities:** IoT Core is instrumental in building intelligent homes and cities. Imagine a network of interconnected devices such as smart thermostats, lighting systems, and security cameras, all communicating through IoT Core. This enables homeowners to remotely control their appliances, optimize energy consumption, enhance security, and receive real-time alerts. **3. Connected Healthcare:** IoT Core is revolutionizing the healthcare industry by enabling remote patient monitoring and telemedicine applications. Wearable devices can track vital signs like heart rate, blood pressure, and sleep patterns, transmitting this data to IoT Core for analysis. Healthcare providers can access this information remotely, enabling proactive intervention and personalized treatment plans. **4. Asset Tracking and Logistics:** Tracking valuable assets throughout the supply chain is critical for businesses. IoT Core, combined with GPS-enabled devices, provides real-time location data, allowing companies to monitor shipments, optimize routes, prevent theft or loss, and gain valuable insights into their logistics operations. **5. Environmental Monitoring:** Monitoring environmental conditions is crucial for various applications, from agriculture to pollution control. IoT Core facilitates the deployment of sensor networks that collect data on parameters like air quality, water level, and soil conditions. This data can be used to identify trends, predict potential hazards, and make informed decisions regarding resource management. ### Other Cloud Providers and Alternatives While AWS IoT Core is a comprehensive solution, other cloud providers offer similar services: * **Google Cloud IoT Core:** Provides similar features to AWS IoT Core, focusing on data ingestion, device management, and integration with other Google Cloud services. 
* **Microsoft Azure IoT Hub:** A cloud-hosted message broker that connects a wide range of devices. Azure IoT Hub also includes device management capabilities and integration with other Azure services.

These services offer varying degrees of scalability, security, and feature sets. The choice of the most suitable platform depends on specific project requirements, existing infrastructure, and budget considerations.

### Conclusion

AWS IoT Core provides a robust and scalable foundation for building a wide array of IoT solutions. Its comprehensive features, combined with the power and flexibility of the AWS ecosystem, empower businesses across industries to leverage the transformative potential of the Internet of Things. As the IoT landscape continues to evolve, platforms like AWS IoT Core will play an increasingly critical role in shaping our connected future.

### Architecting an Advanced Use Case: Predictive Maintenance with Machine Learning

Let's dive into a more advanced use case that showcases the combined power of IoT Core with other AWS services.

**Scenario:** A manufacturing company wants to minimize downtime and maintenance costs by implementing a predictive maintenance system for its industrial equipment.

**Architecture:**

1. **Data Ingestion:** Sensors on each machine collect various operational data points such as temperature, pressure, vibration, and operating speed. This data is securely transmitted to AWS IoT Core using MQTT protocol.
2. **Data Processing and Storage:** IoT Core forwards the ingested data to AWS Kinesis Data Streams for real-time processing. From there, the data is persisted in an Amazon S3 data lake for long-term storage and further analysis.
3. **Machine Learning Model:** Using historical data stored in S3, a machine learning model is trained using Amazon SageMaker. This model is designed to predict equipment failures based on the collected sensor data.
4. **Real-time Inference:** As new data arrives from the machines, AWS Lambda functions, triggered by Kinesis, perform real-time inference using the trained machine learning model.
5. **Alerting and Visualization:** If the model predicts an imminent failure, alerts are generated through Amazon SNS (Simple Notification Service), notifying maintenance teams to take proactive measures. Data visualizations and dashboards can be created using Amazon QuickSight, providing insights into equipment health and maintenance needs.

**Benefits:**

* **Reduced Downtime:** By predicting failures, maintenance can be performed proactively, significantly reducing unexpected downtime.
* **Cost Optimization:** Transitioning from reactive to predictive maintenance minimizes unnecessary maintenance tasks and extends equipment lifespan.
* **Data-Driven Insights:** The collected data provides valuable insights into equipment performance, enabling further optimization of operations.

This advanced use case demonstrates how AWS IoT Core, combined with other AWS services like Kinesis, S3, SageMaker, Lambda, and SNS, can be leveraged to build sophisticated IoT solutions. The ability to process real-time data, apply machine learning, and generate actionable insights unlocks new possibilities for efficiency, cost savings, and innovation across a wide range of industries.
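### A Minimal Sketch of the Inference Lambda

To make steps 4 and 5 above concrete, here is a minimal Python sketch of such a Lambda function. Treat it as an illustration only: the endpoint name, SNS topic ARN, feature fields, and alert threshold are invented placeholders rather than part of any specific deployment, and a production handler would add error handling and schema validation.

```python
import base64
import json
import os

import boto3

# Invented placeholders; point these at your own resources.
ENDPOINT_NAME = os.environ.get("SM_ENDPOINT", "equipment-failure-predictor")
ALERT_TOPIC_ARN = os.environ.get(
    "ALERT_TOPIC_ARN", "arn:aws:sns:us-east-1:123456789012:maintenance-alerts"
)

sm_runtime = boto3.client("sagemaker-runtime")
sns = boto3.client("sns")


def handler(event, context):
    """Triggered by Kinesis: score each sensor reading, alert on likely failure."""
    for record in event["Records"]:
        # Kinesis delivers each payload base64-encoded.
        reading = json.loads(base64.b64decode(record["kinesis"]["data"]))

        # Assumes the model was trained on these four features, in this order.
        features = "{temperature},{pressure},{vibration},{rpm}".format(**reading)
        response = sm_runtime.invoke_endpoint(
            EndpointName=ENDPOINT_NAME,
            ContentType="text/csv",
            Body=features,
        )
        failure_probability = float(response["Body"].read().decode())

        if failure_probability > 0.8:  # illustrative threshold
            sns.publish(
                TopicArn=ALERT_TOPIC_ARN,
                Subject=f"Maintenance alert: {reading.get('machine_id', 'unknown')}",
                Message=json.dumps(
                    {"reading": reading, "p_failure": failure_probability}
                ),
            )
```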
virajlakshitha
1,890,355
Integrating ClickHouse with AWS S3
Integrating ClickHouse with AWS S3 To integrate ClickHouse with an S3 bucket for fetching...
0
2024-06-16T15:03:36
https://dev.to/zubaire/integrating-clickhouse-with-aws-s3-mpm
### **Integrating ClickHouse with AWS S3** To integrate ClickHouse with an S3 bucket for fetching data, performing operations, and putting data back, follow these steps: ### **1. Setting Up ClickHouse** **Install ClickHouse**: - On a Debian-based system: ```bash sudo apt-get install clickhouse-server clickhouse-client ``` - Start ClickHouse server: ```bash sudo service clickhouse-server start # or sudo clickhouse start ``` - Start clickhouse-client with: ```bash clickhouse-client --password ``` ### **2. Fetching Data from S3 and Loading into ClickHouse** **Create a Table in ClickHouse**: ```sql CREATE TABLE s3_data ( id UInt32, name String, value Float32 ) ENGINE = MergeTree() ORDER BY id; ``` **Load Data from S3**: Use the **`s3`** table function to load data directly from an S3 bucket: ```sql INSERT INTO s3_data SELECT * FROM s3('https://s3.amazonaws.com/your-bucket/path/to/data.csv', 'YOUR_AWS_ACCESS_KEY_ID', 'YOUR_AWS_SECRET_ACCESS_KEY', 'CSVWithNames'); ``` ### **3. Performing Operations on Data in ClickHouse** Perform SQL queries to analyze the data: ```sql SELECT name, AVG(value) AS avg_value FROM s3_data GROUP BY name; ```
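### **4. Writing Data Back to S3**

The outline above promises "putting data back," so to round off the workflow: the same `s3` machinery works as an INSERT target, letting ClickHouse export query results server-side. Below is a minimal sketch driven from Python; it assumes the third-party `clickhouse-driver` package (`pip install clickhouse-driver`) and reuses the placeholder bucket and credentials from step 2:

```python
from clickhouse_driver import Client

client = Client(host='localhost', password='your_password')

# The SELECT runs entirely inside ClickHouse; Python only submits the statement.
client.execute("""
    INSERT INTO FUNCTION s3(
        'https://s3.amazonaws.com/your-bucket/path/to/output.csv',
        'YOUR_AWS_ACCESS_KEY_ID',
        'YOUR_AWS_SECRET_ACCESS_KEY',
        'CSVWithNames'
    )
    SELECT name, AVG(value) AS avg_value
    FROM s3_data
    GROUP BY name
""")
```

Because the export is a plain SQL statement, you could equally run it from `clickhouse-client`; the Python wrapper is just one convenient way to schedule it.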
zubaire
1,890,353
Day 6: Mastering Arrays in JavaScript 🚀
Introduction Welcome to Day 6 of your JavaScript journey! 🌟 Yesterday, we explored...
0
2024-06-16T15:02:04
https://dev.to/dipakahirav/day-6-mastering-arrays-in-javascript-416j
javascript, array, webdev, learning
#### Introduction Welcome to Day 6 of your JavaScript journey! 🌟 Yesterday, we explored functions. Today, we will dive into arrays, one of the most important data structures in JavaScript. Arrays allow you to store multiple values in a single variable, making it easier to manage and manipulate collections of data. Let's get started! 🎉 please subscribe to my [YouTube channel](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1 ) to support my channel and get more web development tutorials. #### What is an Array? 📚 An array is a special type of object that can hold an ordered list of values. Each value (or element) in an array has a numeric index, starting from 0. **Example:** ```javascript let fruits = ["Apple", "Banana", "Cherry"]; console.log(fruits[0]); // Output: Apple 🍎 ``` #### Creating Arrays 🌱 You can create arrays in multiple ways: **1. Using Array Literals** ```javascript let numbers = [1, 2, 3, 4, 5]; ``` **2. Using the `Array` Constructor** ```javascript let numbers = new Array(1, 2, 3, 4, 5); ``` #### Accessing Array Elements 🔍 You can access elements in an array using their index: **Example:** ```javascript let colors = ["Red", "Green", "Blue"]; console.log(colors[1]); // Output: Green 🍏 ``` #### Common Array Methods 🛠️ JavaScript provides various methods to manipulate arrays: **1. `push()`** Adds one or more elements to the end of an array. ```javascript let animals = ["Dog", "Cat"]; animals.push("Elephant"); console.log(animals); // Output: ["Dog", "Cat", "Elephant"] 🐘 ``` **2. `pop()`** Removes the last element from an array. ```javascript let animals = ["Dog", "Cat", "Elephant"]; animals.pop(); console.log(animals); // Output: ["Dog", "Cat"] 🐶🐱 ``` **3. `shift()`** Removes the first element from an array. ```javascript let birds = ["Parrot", "Sparrow", "Peacock"]; birds.shift(); console.log(birds); // Output: ["Sparrow", "Peacock"] 🦜 ``` **4. `unshift()`** Adds one or more elements to the beginning of an array. ```javascript let birds = ["Sparrow", "Peacock"]; birds.unshift("Parrot"); console.log(birds); // Output: ["Parrot", "Sparrow", "Peacock"] 🦜 ``` **5. `forEach()`** Executes a provided function once for each array element. ```javascript let cars = ["Tesla", "BMW", "Audi"]; cars.forEach(function(car) { console.log(car); }); // Output: // Tesla 🚗 // BMW 🚙 // Audi 🚘 ``` **6. `map()`** Creates a new array populated with the results of calling a provided function on every element in the calling array. ```javascript let numbers = [1, 2, 3, 4, 5]; let squares = numbers.map(function(number) { return number * number; }); console.log(squares); // Output: [1, 4, 9, 16, 25] 🔢 ``` **7. `filter()`** Creates a new array with all elements that pass the test implemented by the provided function. ```javascript let numbers = [1, 2, 3, 4, 5]; let evenNumbers = numbers.filter(function(number) { return number % 2 === 0; }); console.log(evenNumbers); // Output: [2, 4] ⚖️ ``` **8. `reduce()`** Executes a reducer function on each element of the array, resulting in a single output value. 
```javascript let numbers = [1, 2, 3, 4, 5]; let sum = numbers.reduce(function(total, number) { return total + number; }, 0); console.log(sum); // Output: 15 ➕ ``` #### Practical Examples 🧩 **Example 1: Find the maximum number in an array** ```javascript let numbers = [10, 20, 30, 40, 50]; let max = numbers.reduce(function(a, b) { return Math.max(a, b); }); console.log("Max number:", max); // Output: Max number: 50 🔝 ``` **Example 2: Create a new array with elements in uppercase** ```javascript let fruits = ["apple", "banana", "cherry"]; let upperCaseFruits = fruits.map(function(fruit) { return fruit.toUpperCase(); }); console.log(upperCaseFruits); // Output: ["APPLE", "BANANA", "CHERRY"] 🍒 ``` #### Practice Activities 💪 **1. Practice Code:** - Create arrays using literals and the `Array` constructor. - Access and manipulate array elements using various methods. **2. Mini Project:** - Create a simple script that takes a list of student names and returns the names in alphabetical order. **Example:** ```javascript let students = ["Charlie", "Alice", "Bob"]; students.sort(); console.log("Sorted names:", students); // Output: Sorted names: ["Alice", "Bob", "Charlie"] 📚 ``` #### Summary 📋 Today, we explored arrays in JavaScript. We learned how to create arrays, access their elements, and use common array methods to manipulate data. Arrays are a fundamental part of JavaScript, and mastering them is crucial for effective programming. Stay tuned for Day 7, where we'll dive into objects and their properties in JavaScript! 🏆 Feel free to leave your comments or questions below. If you found this guide helpful, please share it with your peers and follow me for more web development tutorials. Happy coding! ### Follow and Subscribe: - **Website**: [Dipak Ahirav] (https://www.dipakahirav.com) - **Email**: dipaksahirav@gmail.com - **Instagram**: [devdivewithdipak](https://www.instagram.com/devdivewithdipak) - **YouTube**: [devDive with Dipak](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1 ) - **LinkedIn**: [Dipak Ahirav](https://www.linkedin.com/in/dipak-ahirav-606bba128)
dipakahirav
1,890,352
How to Create a Memory Game: Step-by-Step Guide
Project:- 9/500 Memory Game project. Description The Memory Game is a classic...
27,575
2024-06-16T14:57:04
https://raajaryan.tech/how-to-create-a-memory-game-step-by-step-guide
javascript, opensource, beginners, tutorial
### Project:- 9/500 Memory Game project.

## Description

The Memory Game is a classic card-matching game that helps improve memory and concentration skills. The objective of the game is to match pairs of cards with the same image. The game begins with all cards face down, and players take turns flipping over two cards at a time, trying to find matching pairs.

## Features

### Feature 1: Interactive Gameplay
- Players can flip cards to reveal images.
- Cards will flip back if they do not match, allowing players to try again.
- A matching pair will stay face-up, contributing to the player's score.

### Feature 2: Timer and Moves Counter
- A timer to keep track of how long it takes to complete the game.
- A moves counter to record the number of attempts made to find all pairs.
- Both timer and moves counter reset with each new game.

### Feature 3: Responsive Design
- The game layout adjusts to different screen sizes for optimal play on desktop and mobile devices.
- Cards and UI elements resize accordingly to maintain a pleasant user experience.

## Technologies Used

- **JavaScript**: Implements game logic and interactivity.
- **HTML**: Structures the game board and UI elements.
- **CSS**: Styles the game for an attractive and user-friendly interface.

## Setup

Follow these instructions to set up and run the Memory Game project locally:

1. **Clone the repository**
```bash
git clone https://github.com/deepakkumar55/ULTIMATE-JAVASCRIPT-PROJECT.git
```
2. **Navigate to the project directory**
```bash
cd Games/2-memory_game
```
3. **Open the project in your preferred code editor**
4. **Open `index.html` in your browser**
   - You can simply double-click the `index.html` file.
   - Alternatively, you can run a local server (e.g., using VS Code's Live Server extension) for a better development experience.

## Contribution

We welcome contributions to enhance the Memory Game project. To contribute, follow these steps:

1. **Fork the repository** to your own GitHub account.
2. **Clone your forked repository** to your local machine:
```bash
git clone https://github.com/<your-username>/ULTIMATE-JAVASCRIPT-PROJECT.git
```
3. **Create a new branch** for your feature or bug fix:
```bash
git checkout -b feature-name
```
4. **Make your changes** to the codebase.
5. **Commit your changes** with a descriptive commit message:
```bash
git commit -m "Add new feature: feature description"
```
6. **Push your changes** to your forked repository:
```bash
git push origin feature-name
```
7. **Open a pull request** on the original repository and describe your changes in detail.

We appreciate your contributions and look forward to collaborating with you to improve the Memory Game!

---

## Get in Touch

If you have any questions or need further assistance, feel free to open an issue on GitHub or contact us directly. Your contributions and feedback are highly appreciated!

---

Thank you for your interest in the Memory Game Project. Together, we can build a more robust and feature-rich application. Happy coding!
raajaryan
1,890,351
Open Web Application Security Project (OWASP) Top Ten
Web security is crucial for protecting applications and data from various threats. The OWASP (Open...
0
2024-06-16T14:56:08
https://dev.to/ferdousazad/open-web-application-security-project-owasp-top-ten-l7o
webdev, websecurity, owasp, programming
Web security is crucial for protecting applications and data from various threats. The OWASP (Open Web Application Security Project) Top Ten is a widely recognized list of the most critical web application security risks. Here's a detailed explanation of common web security best practices, including those highlighted by the OWASP Top Ten (the numbering below follows the 2017 edition of the list):

## **1. Injection (OWASP A1)**

**Description:** Injection flaws, such as SQL, NoSQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker's hostile data can trick the interpreter into executing unintended commands or accessing unauthorized data.

**Best Practices:**
- Use parameterized queries and prepared statements.
- Employ ORM (Object-Relational Mapping) libraries that provide automatic query parameterization.
- Validate and sanitize all inputs.
- Avoid dynamically constructing queries from user input.

## **2. Broken Authentication (OWASP A2)**

**Description:** This risk arises from incorrect implementation of authentication mechanisms, allowing attackers to compromise passwords, keys, or session tokens, or exploit other implementation flaws to assume other users' identities.

**Best Practices:**
- Implement multi-factor authentication (MFA).
- Ensure session tokens are properly secured.
- Use secure password storage methods (e.g., bcrypt).
- Implement account lockout mechanisms and ensure secure password recovery processes.

## **3. Sensitive Data Exposure (OWASP A3)**

**Description:** Sensitive data exposure occurs when applications do not adequately protect sensitive information such as credit cards, healthcare information, or personal identifiers.

**Best Practices:**
- Use strong encryption for data at rest and in transit (e.g., TLS).
- Implement strict access controls.
- Avoid storing sensitive data unless absolutely necessary.
- Ensure that data is masked or encrypted when displayed.

## **4. XML External Entities (XXE) (OWASP A4)**

**Description:** XXE vulnerabilities occur when XML input containing a reference to an external entity is processed by a weakly configured XML parser.

**Best Practices:**
- Disable external entity processing in XML parsers.
- Use less complex data formats such as JSON, if possible.
- Validate and sanitize XML inputs.
- Regularly update XML parsers and libraries.

## **5. Broken Access Control (OWASP A5)**

**Description:** Broken access control vulnerabilities arise when users are able to act outside of their intended permissions.

**Best Practices:**
- Enforce least privilege: only give users access to what they need.
- Implement role-based access control (RBAC).
- Regularly review and test access controls.
- Use access control mechanisms provided by the platform (e.g., frameworks, libraries).

## **6. Security Misconfiguration (OWASP A6)**

**Description:** Security misconfigurations occur when security settings are defined, implemented, and maintained as insecure defaults or are incomplete and ad hoc.

**Best Practices:**
- Implement a repeatable hardening process.
- Regularly update and patch systems and software.
- Remove or disable unnecessary features and services.
- Apply security configurations across the entire software stack.

## **7. Cross-Site Scripting (XSS) (OWASP A7)**

**Description:** XSS vulnerabilities occur when an application includes untrusted data in a new web page without proper validation or escaping, or updates an existing web page with user-supplied data using a browser API that can create HTML or JavaScript.

**Best Practices:**
- Use frameworks that automatically escape XSS by design.
- Sanitize and validate input data.
- Use Content Security Policy (CSP) to prevent the execution of malicious scripts.
- Encode data on output.

## **8. Insecure Deserialization (OWASP A8)**

**Description:** Insecure deserialization flaws occur when applications deserialize untrusted data, allowing attackers to execute arbitrary code or conduct injection attacks.

**Best Practices:**
- Avoid deserializing data from untrusted sources.
- Implement integrity checks such as digital signatures on serialized objects.
- Use a safe and secure serialization mechanism.
- Restrict and monitor deserialization.

## **9. Using Components with Known Vulnerabilities (OWASP A9)**

**Description:** Applications that use libraries, frameworks, and other software modules with known vulnerabilities can undermine application defenses and enable various attacks.

**Best Practices:**
- Regularly update and patch dependencies.
- Use tools to scan for known vulnerabilities in dependencies.
- Subscribe to security bulletins related to the components you use.
- Prefer components that are actively maintained and have a strong security record.

## **10. Insufficient Logging & Monitoring (OWASP A10)**

**Description:** Inadequate logging and monitoring, coupled with missing or ineffective integration with incident response, allows attackers to achieve their goals without being detected.

**Best Practices:**
- Implement comprehensive logging of security-relevant events.
- Ensure logs are generated in a format that can be easily consumed by centralized log management solutions.
- Regularly monitor logs and establish an alerting mechanism for suspicious activities.
- Conduct regular audits and reviews of logs.

**Additional Best Practices:**
- **Secure Development Practices:** Follow secure coding standards and guidelines. Regularly train developers on security best practices.
- **Threat Modeling:** Perform threat modeling to identify and mitigate potential security threats during the design phase.
- **Regular Security Testing:** Conduct regular security testing, including code reviews, penetration testing, and automated security scans.
- **Secure DevOps (DevSecOps):** Integrate security practices into the DevOps pipeline to ensure continuous security throughout the development lifecycle.

By implementing these best practices, you can significantly enhance the security of your web applications and protect them against common threats.

Read More: [OWASP Top Ten](https://owasp.org/www-project-top-ten/)
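To make risk #1 tangible, here is a minimal, self-contained Python sketch (using only the standard-library `sqlite3` module; the table and the hostile input are invented for illustration) contrasting a string-built query with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

user_input = "nobody' OR '1'='1"  # hostile input

# Vulnerable: string interpolation lets the input rewrite the query itself.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()
print(len(rows))  # prints 1: the injected OR clause matched every row

# Safe: a parameterized query treats the input as a value, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(rows))  # prints 0: no user is literally named that string
```

The same principle, treating untrusted data strictly as data, is what the ORM and input-validation advice above boils down to.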
ferdousazad
1,890,350
What is a full stack developer?
A full stack developer is a type of software developer who is proficient in working on both the front...
0
2024-06-16T14:54:49
https://dev.to/vicky435435/what-is-full-stack-developer--4ka9
fullstack, developer, programming, beginners
A full stack developer is a type of software developer who is proficient in working on both the front end and back end portions of a web application. This means they have the skills to handle both the client-side (what users interact with) and server-side (the logic, database interactions, server configuration, etc.) development tasks. Here’s a more detailed breakdown: **Front End Development** The front end is everything that users see and interact with in their web browsers. A full stack developer working on the front end should be proficient in: HTML/CSS: The basic building blocks of web development. HTML structures the content, while CSS styles it. JavaScript: A scripting language used to create dynamic content and interactions on the web page. Front End Frameworks and Libraries: Such as React, Angular, Vue.js, and others that help in building complex user interfaces more efficiently. Responsive Design: Ensuring that web applications work well on a variety of devices and screen sizes. **Back End Development** The back end is everything that users don’t see, which powers the front end. This includes the server, database, and application logic. A full stack developer working on the back end should be proficient in: Server, Network, and Hosting Environment: Understanding how the web works, including servers, DNS, and hosting environments. Database Management: Knowledge of database systems like SQL (MySQL, PostgreSQL) and NoSQL (MongoDB). Server-Side Languages and Frameworks: Such as Node.js, Python (with Django or Flask), Ruby (with Ruby on Rails), Java, PHP, etc. APIs: Creating and interacting with APIs (RESTful services, GraphQL). **Additional Skills** In addition to front end and back end development, a full stack developer should also have: Version Control Systems: Knowledge of tools like Git for tracking changes in the code. Deployment: Experience with deploying applications, which may include using services like AWS, Docker, Kubernetes, CI/CD pipelines, etc. Understanding of Security Concerns: Implementing measures to ensure the application is secure. Problem-Solving: Ability to troubleshoot issues across the entire stack. **Benefits of Full Stack Development** Versatility: Can handle various stages of the development process, which is particularly useful for startups and small companies. Efficiency: Streamlines the development process since a single developer can switch between front end and back end tasks. Comprehensive Understanding: A full stack developer often has a more holistic understanding of how the entire application works. **Challenges** Depth vs. Breadth: Being a jack-of-all-trades can sometimes mean not being a master in one. It can be challenging to keep up with the latest advancements in both front end and back end technologies. Workload: Handling both sides of development can be demanding and lead to a higher workload. **Summary** A full stack developer is a versatile and valuable asset to many development teams, capable of working on both the client-side and server-side of web applications, and bridging the gap between different phases of development.
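To ground the front end/back end split in code, here is a minimal back-end sketch. It assumes Python with Flask purely for illustration (any of the server-side stacks listed above would work), and the route and data are invented:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for a real database (SQL or NoSQL) behind the back end.
USERS = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Linus"}]

@app.route("/api/users")
def list_users():
    # The front end (e.g., a React component calling fetch('/api/users'))
    # consumes this JSON and renders it into HTML/CSS for the user.
    return jsonify(USERS)

if __name__ == "__main__":
    app.run(port=5000)
```

A full stack developer's job spans both sides of that HTTP boundary: the endpoint and the database behind it, and the client-side code that consumes it.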
vicky435435
1,890,349
Understanding Database Normalization with Examples
Database normalization is a fundamental concept in database theory and design. It's a systematic...
0
2024-06-16T14:53:59
https://dev.to/dana-fullstack-dev/understanding-database-normalization-with-examples-pai
Database normalization is a fundamental concept in database theory and design. It's a systematic approach to organizing data in a database to reduce redundancy and improve data integrity. The process involves dividing large tables into smaller, more manageable pieces and defining relationships between them.

## What is Database Normalization?

Normalization involves applying a series of rules, or "normal forms," to your database. These rules are designed to:

- Minimize duplicate data
- Organize data logically
- Ensure data dependencies make sense

## Why Normalize a Database?

The main goals of normalizing a database are:

- To eliminate redundant (duplicate) data
- To ensure data dependencies are sensible (only storing related data in a table)
- To protect the data and make the database more scalable

## The Normal Forms

There are several normal forms, each with its own set of rules. The most commonly used normal forms are:

- First Normal Form (1NF)
- Second Normal Form (2NF)
- Third Normal Form (3NF)
- Boyce-Codd Normal Form (BCNF)

### First Normal Form (1NF)

A table is in 1NF if:

- All columns contain atomic, indivisible values
- There are no repeating groups or arrays

#### Example of 1NF:

```
| StudentID | Name           | CourseIDs       |
|-----------|----------------|-----------------|
| 1         | Alice Brown    | CS101, MATH201  |
| 2         | Bob Crown      | CS101, ENG210   |
| 3         | Charlie Davis  | MATH201, ENG210 |
```

To convert it to 1NF, we would separate the `CourseIDs` into individual rows:

```
| StudentID | Name           | CourseID |
|-----------|----------------|----------|
| 1         | Alice Brown    | CS101    |
| 1         | Alice Brown    | MATH201  |
| 2         | Bob Crown      | CS101    |
| 2         | Bob Crown      | ENG210   |
| 3         | Charlie Davis  | MATH201  |
| 3         | Charlie Davis  | ENG210   |
```

### Second Normal Form (2NF)

A table is in 2NF if:

- It is in 1NF
- All non-key attributes are fully functionally dependent on the primary key

#### Example of 2NF:

In the 1NF table above, the key is the composite (StudentID, CourseID), yet `Name` depends only on `StudentID`; this partial dependency is exactly what 2NF eliminates. We can further normalize the table by separating the students, the courses, and the enrollments:

```
| StudentID | Name           |
|-----------|----------------|
| 1         | Alice Brown    |
| 2         | Bob Crown      |
| 3         | Charlie Davis  |

| CourseID | CourseName |
|----------|------------|
| CS101    | Comp Sci   |
| MATH201  | Calculus   |
| ENG210   | English    |

| StudentID | CourseID |
|-----------|----------|
| 1         | CS101    |
| 1         | MATH201  |
| 2         | CS101    |
| 2         | ENG210   |
| 3         | MATH201  |
| 3         | ENG210   |
```

### Third Normal Form (3NF)

A table is in 3NF if:

- It is in 2NF
- It has no transitive functional dependencies

#### Example of 3NF:

If we had a table that included a `TeacherID` that was dependent on the `CourseID`, we would separate that into its own table to satisfy 3NF (the transitive chain here is CourseID → TeacherID → TeacherName):

```
| StudentID | Name           |
|-----------|----------------|
| 1         | Alice Brown    |
| 2         | Bob Crown      |
| 3         | Charlie Davis  |

| CourseID | CourseName | TeacherID |
|----------|------------|-----------|
| CS101    | Comp Sci   | T1        |
| MATH201  | Calculus   | T2        |
| ENG210   | English    | T3        |

| TeacherID | TeacherName |
|-----------|-------------|
| T1        | Dr. Smith   |
| T2        | Prof. Jones |
| T3        | Mr. Lee     |

| StudentID | CourseID |
|-----------|----------|
| 1         | CS101    |
| 1         | MATH201  |
| 2         | CS101    |
| 2         | ENG210   |
| 3         | MATH201  |
| 3         | ENG210   |
```

## Conclusion

Normalization is a critical process in database design that can greatly enhance the efficiency and integrity of your data. By following the normal forms, you can ensure that your database is free of unnecessary redundancy and is structured in a way that supports the logical relationships between data points.

Remember, normalization is a balance.
Over-normalization can lead to excessive complexity and performance issues, while under-normalization can cause data redundancy and integrity problems. It's important to find the right level for your specific application and use case. ## Bonus I have another article for options when you design a database. Check it out for [TOP 3 Database Design Tools ](https://codepen.io/tech_blog/full/BaewbVv).
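If you want to experiment with the final 3NF design, here is a minimal sketch that expresses it as DDL, using Python's standard-library `sqlite3` so it is self-contained; the table and column names mirror the 3NF tables above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE students (
        student_id INTEGER PRIMARY KEY,
        name       TEXT NOT NULL
    );

    CREATE TABLE teachers (
        teacher_id   TEXT PRIMARY KEY,
        teacher_name TEXT NOT NULL
    );

    CREATE TABLE courses (
        course_id   TEXT PRIMARY KEY,
        course_name TEXT NOT NULL,
        teacher_id  TEXT NOT NULL REFERENCES teachers(teacher_id)
    );

    -- Junction table: resolves the many-to-many student/course relationship.
    CREATE TABLE enrollments (
        student_id INTEGER REFERENCES students(student_id),
        course_id  TEXT    REFERENCES courses(course_id),
        PRIMARY KEY (student_id, course_id)
    );
""")
```

Each functional dependency from the discussion above now lives in exactly one table, which is what keeps updates and deletes free of the anomalies normalization is designed to prevent.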
dana-fullstack-dev
1,890,341
Unlocking the Future: Passwordless Authentication (Passkey) with Flutter and Node.js
Welcome Back, Fellow Coders! Hey everyone! It’s been a hot minute since our last deep dive, but...
0
2024-06-16T14:45:31
https://dev.to/djsmk123/unlocking-the-future-passwordless-authenticationpasskey-with-flutter-and-nodejs-1ojh
flutter, android, passwordless, tutorial
> Welcome Back, Fellow Coders!

Hey everyone! It's been a hot minute since our last deep dive, but I'm thrilled to be back and diving headfirst into some seriously cool tech. Today, we're tackling a topic that's bound to make your life (and your users' lives) a whole lot easier: passwordless authentication. Yes, you heard that right – we're talking about a future where you'll never have to remember another password again. In this post, we'll explore how to implement passwordless authentication in your Flutter app with the help of Node.js on the server side. It's a powerful combo that promises to enhance security and improve user experience. So, grab your favorite coding beverage, get comfy, and let's embark on this exciting journey together. Welcome back, and let's get coding!

## Table of Contents

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Setup Your Own WebAuthn Server](#setup-your-own-webauthn-server)
- [Integrate into Flutter](#integrate-into-flutter)
- [Outputs](#outputs)
- [Conclusion](#conclusion)

## Introduction

### Passwordless Authentication: A New Era of Security

Passwordless authentication is an innovative approach to securing user accounts and sensitive data without relying on traditional passwords. Instead of requiring users to create and remember complex passwords, passwordless authentication leverages advanced technologies like biometrics, security tokens, and public key cryptography. This method not only enhances security but also improves user experience by simplifying the login process.

![passwordless-meme](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d46nizve4bccx6np7iwv.jpeg)

#### Benefits of Passwordless Authentication

- **Enhanced Security**: Traditional passwords are prone to being weak, reused across multiple sites, and vulnerable to phishing attacks and data breaches. Passwordless methods, such as biometrics or hardware tokens, are significantly harder for attackers to compromise.
- **Improved User Experience**: Users no longer need to remember complex passwords or go through cumbersome password recovery processes. This leads to a smoother and more pleasant authentication experience.

![passwordless-meme-usage](https://i.imgflip.com/8u0xl7.jpg)

### What is a passkey?

A passkey is a form of authentication credential used in passwordless authentication systems, typically based on the WebAuthn standard. It serves as a secure, user-friendly alternative to traditional passwords. Here are some key points about passkeys:

1. **Digital Credential**: A passkey is a digital credential stored securely on a user's device.
2. **Public Key Cryptography**: It utilizes public key cryptography to authenticate users. During registration, a key pair is generated: a private key stored securely on the user's device and a public key shared with the server.
3. **Authentication Process**: To authenticate, the user proves possession of the private key associated with their passkey. This can be done through biometric verification (e.g., fingerprint scan, facial recognition) or by unlocking the device.
4. **Enhanced Security**: Passkeys are more secure than traditional passwords because they are not susceptible to phishing attacks or credential stuffing. They also eliminate the risk of password reuse and theft.
5. **User Convenience**: Passkeys improve user experience by simplifying the authentication process. Users do not need to remember complex passwords or go through lengthy password reset procedures.
{% youtube 2xdV-xut7EQ %}

---

## Prerequisites

- **Flutter**: Basic knowledge
- **Node.js**: Used for the server; you can use any backend language or framework.
- **MongoDB**: We are using **MongoDB** as the database; you can use whichever you find best.
- **Active Brain**

## Setup Your Own WebAuthn Server

Before setting up the **WebAuthn** server, let's talk about what **WebAuthn** and a **passkey** are.

**Passkeys** and **WebAuthn** complement each other rather than compete. **WebAuthn** serves as the framework that facilitates the functionality of passkeys.

### Key difference

- Passkeys act as digital credentials used for authentication, while WebAuthn is a set of rules enabling communication between browsers and websites for passkey-based authentication. Once this communication setup is complete, WebAuthn instructs the browser on how to communicate with the relying party (the website where passkey authentication is used) for the authentication process.

Read more about passkeys and WebAuthn [here](https://teampassword.com/blog/passkey-vs-webauthn)

### How WebAuthn Works

WebAuthn operates through several essential phases to establish secure and passwordless authentication:

1. **Registration Initiation**: When a user initiates registration, the server generates an encoded challenge and sends it, along with other necessary data, to the client.
![passkey-registration](https://images.ctfassets.net/23aumh6u8s0i/2wFESoN5p5JJNnCM8gzIIA/8359b4717f89f2bd6ad0036961b4e841/registartion-flow.jpg)
2. **User Approval**: During this phase, the user selects and registers a credential. This can be a passkey, biometric (like fingerprint or face unlock), USB key, Bluetooth device, or device security code. The user's approval is crucial to proceed.
3. **Verification Process**: Once the user approves the registration, the data returned by the approved credential is sent back to the server. This data includes the public key, the type of device used, and a unique device identifier.
4. **Completion of Registration**: After verification, the user's credentials are securely stored in the database. Key pieces of information such as the `userId` and `publicCredentialId` are recorded, enabling future authentication without the need for traditional passwords.

This process ensures a robust and user-friendly authentication method, leveraging modern security practices to safeguard user accounts effectively.

### Understanding the Relying Party ID (rpId)

In the realm of WebAuthn (Web Authentication), the relying party identifier (rpId) names the relying party (e.g., a website or service) that a credential is bound to, and the relying party's server is the pivotal counterpart to the client device. Here's a concise breakdown using two focal points:

- **Authentication Endpoint**: The relying party's server functions as the designated endpoint for authentication requests originating from the client device. It facilitates a secure channel for communication between the client and the relying party.
- **Secure Challenge Management**: Upon receiving an authentication request, the server generates and manages a cryptographic challenge. This challenge ensures the integrity of the authentication process, guarding against replay attacks and unauthorized access attempts.
> Login using **WebAuthn**

![web-authn-web-login](https://images.ctfassets.net/23aumh6u8s0i/6sQPhaAjnYdPl2L1Ruu0gU/0b866c672109b02ba5a856b324d719a0/login-flow.jpg)

---

**Enough with theory; let's implement passkeys.**

## Setup Server with Node.js

### Add support for Digital Asset Links

To enable passkey support for your Flutter Android app, associate your app with a website that your app owns. You can declare this association by completing the following steps:

- Create a Digital Asset Links JSON file. For example, to declare that the website `https://signin.example.com` and an Android app with the package name com.example can share sign-in credentials, create a file named `assetlinks.json` with the following content:

```
[
  {
    "relation" : [
      "delegate_permission/common.handle_all_urls",
      "delegate_permission/common.get_login_creds"
    ],
    "target" : {
      "namespace" : "android_app",
      "package_name" : "com.example.android", //packageId
      "sha256_cert_fingerprints" : [
        SHA_HEX_VALUE
      ]
    }
  }
]
```

- Host the Digital Asset Links JSON file at the following location on the sign-in domain:

```
https://domain[:optional_port]/.well-known/assetlinks.json
```

- On sending a `GET` request, a response like the following should be returned:

```
> GET /.well-known/assetlinks.json HTTP/1.1
> User-Agent: curl/7.35.0
> Host: signin.example.com

< HTTP/1.1 200 OK
< Content-Type: application/json
```

> You can validate the Digital Asset Links JSON from [here](https://developers.google.com/digital-asset-links/tools/generator).

- `server.js` for hosting the Digital Asset Links file:

```
import express from 'express';

const app = express();
app.use('/.well-known', express.static('public'));
app.listen(3000, () => {
  console.log('Server started on port 3000');
});
```

### Setup Node server

- Install the following packages:

```
npm install @simplewebauthn/server base64url dotenv express jsonwebtoken mongodb uuid
```

- Connect to MongoDB in `src/utils/db.ts`:

{% gist https://gist.github.com/Djsmk123/a5ee75c39b671f3f0fb30b9e0ae92874 %}

- A utility for managing users in `src/utils/userManager.ts`:

{% gist https://gist.github.com/Djsmk123/77eb17c8099088217450c1a487a89a2d %}

- We need to store every `challenge` string generated by the backend during registration or login. This stored data will allow us to verify whether a given `challenge` is valid or not. Additional data such as `expiryTime` can be included with each challenge, but for simplicity, we are only storing strings. `src/utils/challengerManagers.ts`:

{% gist https://gist.github.com/Djsmk123/9e8d1d9d1b6a6752384e947fae4d58e0 %}

- When registration is completed, we receive the following in the response: `credentialID`, `credentialPublicKey`, `rpID`, `origin`, and other parameters. We need to store these values alongside the `userId` for the purpose of verifying sign-ins.
`src/utils/passkey.ts`:

{% gist https://gist.github.com/Djsmk123/74f4c4f21d4d4f4ec23380a4db6ba7c0 %}

- Create a basic HTTP server using Node:

```
import express, { Request, Response } from 'express';
import { connectToMongoDB } from './utils/db';

// Constants
const rpID = "<domain>.com"; // Replace with your actual rpID
const androidHashKey = "<hashKey>"; // hash of your Android release signing key
const origin = [
  rpID,
  androidHashKey
];

// Initialize Express app
const app = express();
const port = process.env.PORT || 8080;
app.use(express.json());

// Connect to MongoDB
connectToMongoDB();

// GET endpoint for /
app.get('/', (req: Request, res: Response) => {
  res.send('Hello World!');
});

// Start server
const localIp = process.env.LOCAL_IP || 'localhost';
app.listen(port, () => {
  console.log(`Express is listening at http://${localIp}:${port}`);
});

export default app;
```

- Generate the Android hash key with either of:

```
keytool -printcert -jarfile app.apk
```

or

```
keytool -printcert -file CERT.RSA
```

- Create middleware for verifying the JWT token: `src/api/middleware.ts`

{% gist https://gist.github.com/Djsmk123/a2ae203f1a09f46f1fbd4f3ec23b3b03 %}

Now we will create the following endpoints:

- **`/register/start`:** Initiates the registration process.
- **`/register/complete`:** Verifies passkey data, stores the user in the database, deletes the challenge once sign-in or user creation is completed, and sends requested data with a JWT token.
- **`/login/start`:** Returns the payload needed to start the login flow on the client side.
- **`/login/complete`:** Verifies data returned by the client with passkey data from MongoDB and returns requested data with a JWT token.
- **`/me`:** An auth-protected route that requires a JWT token to return the user object.

**`app.ts`**:

{% gist https://gist.github.com/Djsmk123/3ba410c7c4d0696a43ea89e49ac534f7 %}

We are done with server-side integration.

---

## Integrate into Flutter

- Add the following packages to the Flutter app:
  - **Credential Manager**: [Credential Manager](https://pub.dev/packages/credential_manager) is a Jetpack API that supports multiple sign-in methods, such as username and password, passkeys, and federated sign-in solutions (such as Sign-in with Google) in a single API, thus simplifying the integration for developers.

    `flutter pub add credential_manager`

    > Note: I am the author of this package, and it currently supports the `Android` platform for Flutter.

    More about the Credential Manager API: [https://developer.android.com/identity/sign-in/credential-manager](https://developer.android.com/identity/sign-in/credential-manager)
  - **HTTP**: For calling the server APIs

    `flutter pub add http`

> Note: I am not going to cover the screens and UI components; we will integrate the logic directly.

- Initialise Credential Manager:

```dart
Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();
  if (AuthService.credentialManager.isSupportedPlatform) {
    // check if platform is supported
    await AuthService.credentialManager.init(
      preferImmediatelyAvailableCredentials: true,
    );
  }
  runApp(const MyApp());
}
```

- Create `userModel` in Dart:

{% gist https://gist.github.com/Djsmk123/db344b81e180e7369160955ad55dd5d7 %}

- Create an authentication service for calling the APIs:

{% gist https://gist.github.com/Djsmk123/b54f9244f128dad474443e9d52ba4dea %}

## Registration

- **Get required payload for `/register/start` endpoint:**

```dart
final res = await AuthService.passKeyRegisterInit(username: username!);
```

- **Send the payload to Credential Manager to start the user approval flow for registration:**

```dart
final credResponse = await AuthService.credentialManager.savePasskeyCredentials(request: res);
```

- **After user approval, send the data to `/register/complete` API for verification and user creation:**

```dart
final user = await AuthService.passKeyRegisterFinish(
  username: username!,
  challenge: res.challenge,
  request: credResponse,
);
```

## Sign-In

- **Make a GET request to `/login/start` endpoint to retrieve `challenge` and other payload:**

```dart
final res = await AuthService.passKeyLoginInit();
```

- **Initiate the user approval flow for sign-in by sending the request data from `/login/start` to `CredentialManager`:**

```dart
final credResponse = await AuthService.credentialManager.getPasswordCredentials(
  passKeyOption: CredentialLoginOptions(
    challenge: res.challenge,
    rpId: res.rpId,
    userVerification: res.userVerification,
  ),
);
```

- **Verify the user by sending the data returned from the `getPasswordCredentials` method to the `/login/complete/` endpoint to retrieve the user object:**

```dart
final user = await AuthService.passKeyLoginFinish(
  challenge: res.challenge,
  request: credResponse.publicKeyCredential!,
);
```

## Outputs

![Passkey registration 1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lzpe920ht5vxfgjfsn4o.jpeg)
![Passkey Registation 2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v51tvdvh29zq6y2kubjm.jpeg)
![Passkey Login](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lijiue09i7sf8u5qi85v.jpeg)

### Conclusion

In conclusion, exploring passkey authentication with WebAuthn integration for a seamless login experience in Flutter has been insightful. This technology offers a secure and user-friendly alternative to traditional password-based authentication methods. By leveraging cryptographic authentication mechanisms and user verification processes, passkey authentication enhances security while simplifying user interactions.

Throughout this article, we've delved into the foundational concepts of passkey authentication, including its integration with WebAuthn for server-side operations using Node.js. We've explored essential endpoints for user registration and login, demonstrating how to initiate and complete these processes securely.

Moreover, the content of this article has been refined using AI-powered tools to ensure clarity and grammatical accuracy. This approach underscores the importance of leveraging technology to enhance content quality and readability.

For those interested in implementing passkey authentication in their projects, the provided code snippets and explanations serve as a practical guide. Each step, from initiating registration to verifying credentials during login, has been detailed to facilitate smooth integration into Flutter applications.
For more info about Credential Manager, read the following blog: [Bringing seamless authentication to your apps with passkeys using Credential Manager API](https://medium.com/androiddevelopers/bringing-seamless-authentication-to-your-apps-using-credential-manager-api-b3f0d09e0093)

## **Source code**

- [Flutter App](https://github.com/Djsmk123/passkey_example_flutter)
- [Backend Server](https://github.com/Djsmk123/pass_key_backend)
- [React App which has the Digital Asset Links](https://github.com/Djsmk123/flutter_deeplinking_example/tree/main/reactjs_blogs_deeplink_example)
- [Credential Manager Package](https://pub.dev/packages/credential_manager)

## Follow me on

- [Twitter](https://twitter.com/smk_winner)
- [Instagram](https://www.instagram.com/smkwinner/)
- [Github](https://www.github.com/djsmk123)
- [linkedin](https://www.linkedin.com/in/md-mobin-bb928820b/)
- [dev.to](https://dev.to/djsmk123)
- [Medium](https://medium.com/@djsmk123)

---

By leveraging AI for grammar correction and clarity enhancement, this article aims to provide a polished and informative guide to passkey authentication in Flutter. All credits for the code snippets and technical insights go to the authors and contributors of the referenced resources.
djsmk123
1,890,342
Super-charging Django: Tips & Tricks
Django encourages rapid development and clean, pragmatic design. However, as your application scales,...
0
2024-06-16T14:43:32
https://dev.to/kagemanjoroge/super-charging-django-tips-tricks-24bi
Django encourages rapid development and clean, pragmatic design. However, as your application scales, ensuring optimal performance becomes crucial. In this article, we'll go beyond the basics and delve deeper into advanced techniques and tools to optimize your Django application.

### 1. **Database Optimization**

**a. Indexing and Query Optimization:**

Proper indexing can significantly enhance query performance. Use Django's `db_index` and unique constraints appropriately. Analyze and optimize your SQL queries to avoid redundant data retrievals.

```python
class Hiker(models.Model):
    name = models.CharField(max_length=100, db_index=True)
    unique_code = models.CharField(max_length=50, unique=True)
```

Monitoring tools like [`pg_stat_statements`](https://www.postgresql.org/docs/current/pgstatstatements.html) for PostgreSQL can help identify slow queries and further optimize them.

**b. Advanced Query Techniques:**

Leverage Django's ORM capabilities with complex queries, such as `annotate()`, `aggregate()`, and `Subquery` to perform calculations directly in the database, minimizing data transfer to your application.

```python
from django.db.models import Count, OuterRef, Subquery

# Correlated subquery: count the Hiker rows that point at each RelatedModel.
hiker_counts = (
    Hiker.objects.filter(related_model_id=OuterRef('pk'))
    .order_by()
    .values('related_model_id')
    .annotate(c=Count('pk'))
    .values('c')
)
annotated_queryset = RelatedModel.objects.annotate(hiker_count=Subquery(hiker_counts))
```

Use the `Prefetch` object to optimize querying of related objects and avoid the [N+1 problem](https://dev.to/herchila/how-to-avoid-n1-queries-in-django-tips-and-solutions-2ajo).

```python
from django.db.models import Prefetch

prefetched_hikers = Hiker.objects.prefetch_related(
    Prefetch('related_model', queryset=RelatedModel.objects.all())
)
```

Always consider performing database operations at the lowest level:

- Performing operations in a QuerySet:
```python
# QuerySet operation on the database
# fast, because that's what databases are good at
hikers.count()
```
- Performing operations using Python:
```python
# counting Python objects
# slower, because it requires a database query anyway, and processing
# of the Python objects
len(hikers)
```
- Performing operations using template filters:
```html
<!-- Django template filter
slower still, because it will have to count them in Python anyway,
and because of template language overheads -->
{{ hikers|length }}
```

**c. Database Connection Pooling:**

Database connection pooling reduces the cost of opening and closing connections by maintaining a pool of open connections.

```python
# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydatabase',
        'USER': 'mydatabaseuser',
        'PASSWORD': 'mypassword',
        'HOST': 'localhost',
        'PORT': '5432',
        'CONN_MAX_AGE': 500,  # Use persistent connections with a timeout
    }
}
```

Consider using third-party packages like `django-postgrespool2` for more advanced pooling features if needed.

### 2. **Caching Strategies**

**a. Advanced Caching Techniques:**

Utilize different caching strategies like per-view caching, template fragment caching, and low-level caching to optimize your Django app. Use Redis or Memcached for fast, in-memory data storage.

```python
# settings.py
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.redis.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/1',
    }
}

# views.py
from django.views.decorators.cache import cache_page

@cache_page(60 * 15)  # Cache view for 15 minutes
def my_view(request):
    ...
```

Here is an example of low-level caching:

```python
from django.core.cache import cache

def my_view(request):
    data = cache.get_or_set('my_data_key', expensive_function, 300)
    return JsonResponse(data)
```

**b. Distributed Caching:**

For large-scale applications, implement distributed caching to share cache data across multiple servers, ensuring consistency and scalability.

**c. Using Cached Sessions:**

For better performance, store session data using Django's cache system. Ensure you have configured your cache system properly, particularly if using Memcached or Redis.

```python
# settings.py
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
```

### 3. **Efficient Middleware Use**

**a. Custom Middleware Optimization:**

Write custom middleware efficiently, avoiding unnecessary processing. Minimize the number of middleware layers to reduce overhead.

```python
class SimpleMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        return response

# settings.py
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    # Add only necessary middleware
]
```

**b. Asynchronous Middleware:**

Django 3.1+ supports asynchronous views and middleware. Use async middleware for I/O-bound tasks to improve throughput.

```python
class AsyncMiddleware:
    async def __call__(self, request, get_response):
        response = await get_response(request)
        return response
```

### 4. **Static and Media File Optimization**

**a. Serve Static and Media Files Efficiently:**

Use a dedicated service like Amazon S3 or a CDN to serve static and media files, reducing load on your Django application server.

**b. Static File Compression and Minification:**

Compress and minify your static files (CSS, JS) to reduce load times. Use tools like `django-compressor` and `whitenoise`.

```python
# settings.py
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
```

### 5. **Asynchronous Processing and Task Queues**

**a. Celery for Background Tasks:**

Offload long-running tasks to Celery to keep your application responsive. Configure Celery with Redis or RabbitMQ as the message broker.

```python
# tasks.py
from celery import shared_task

@shared_task
def my_background_task(param):
    # Perform time-consuming task
    return param * 2
```

An example of setting up periodic tasks with Celery Beat:

```python
# celery.py
from celery import Celery
from celery.schedules import crontab

app = Celery('my_project')
app.conf.beat_schedule = {
    'task-name': {
        'task': 'my_app.tasks.my_periodic_task',
        'schedule': crontab(minute=0, hour=0),  # every day at midnight
    },
}
```

**b. Async Views for High I/O Operations:**

Use Django's async views to handle high I/O operations efficiently.

```python
from django.http import JsonResponse
import asyncio

async def async_view(request):
    await asyncio.sleep(1)  # Simulating a long I/O operation
    data = {'message': 'This is an async view'}
    return JsonResponse(data)
```

Consider integrating Django with [FastAPI](https://fastapi.tiangolo.com/) for asynchronous endpoints if your application needs high-performance, non-blocking I/O operations.

### 6. **Load Balancing and Scalability**

**a. Horizontal Scaling:**

Distribute your application load across multiple servers using load balancers. Common choices include Nginx and HAProxy, which differ in feature sets and configuration trade-offs.

**b. Kubernetes for Container Orchestration:**

Set up Kubernetes to manage and scale your Django application.

```yaml
# Kubernetes Deployment Example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django
  template:
    metadata:
      labels:
        app: django
    spec:
      containers:
      - name: django
        image: my_django_image
        ports:
        - containerPort: 8000
```

### 7. **Monitoring and Profiling**

**a. Comprehensive Monitoring:**

Implement monitoring tools like Prometheus and Grafana for real-time metrics and alerting. Set up alerts and dashboards to monitor application health proactively.

**b. In-depth Profiling:**

Use profiling tools like Django Debug Toolbar, Silk, and Py-Spy to analyze performance bottlenecks and optimize your code.

```bash
# Profiling with Py-Spy
py-spy top --pid <django-process-pid>
```

### 8. **Optimizing Template Performance**

**a. Cached Template Loader:**

Use Django's cached template loader to cache compiled templates, reducing rendering time.

```python
# settings.py
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'OPTIONS': {
            'loaders': [
                ('django.template.loaders.cached.Loader', [
                    'django.template.loaders.filesystem.Loader',
                    'django.template.loaders.app_directories.Loader',
                ]),
            ],
        },
    },
]
```

**b. Efficient Template Design:**

Use Django's `select_related` and `prefetch_related` in views to optimize data retrieval for templates. Avoid unnecessary template tags and filters to keep rendering fast.

```python
# views.py
from django.shortcuts import render
from .models import Hiker

def hiker_list(request):
    hikers = Hiker.objects.select_related('related_model').all()
    return render(request, 'hikers/list.html', {'hikers': hikers})
```

- Note that using `{% block %}` is faster than using `{% include %}`.

### 9. **Advanced Python Techniques**

**a. Lazy Evaluation:**

Utilize Django's `@cached_property` decorator for caching expensive computations within a model instance.

```python
from django.utils.functional import cached_property

class MyModel(models.Model):
    @cached_property
    def expensive_property(self):
        return expensive_computation()
```

**b. Using PyPy:**

[PyPy](https://www.pypy.org/) is an alternative Python implementation that can execute code faster for CPU-bound operations. Consider using PyPy for performance improvements, but be aware of compatibility issues with certain Django dependencies. Test thoroughly before deploying to production.

### 10. **Using the Latest Version of Django**

Always use the latest version of Django to benefit from performance improvements, security patches, and new features. Keep your dependencies updated alongside Django for optimal performance and security.

### Conclusion

Remember, optimization is an ongoing process. Continuously monitor your application's performance and be proactive in identifying and addressing bottlenecks.

Happy coding! 😊
kagemanjoroge
1,890,345
Five Must-Have macOS Apps
1. Downie Downie is a versatile downloader, now in its 4th generation. It can be used as a...
0
2024-06-16T14:42:06
https://dev.to/hikarimaeda/four-must-have-essential-macos-apps-5b6g
tooling, webdev
### 1. Downie [Downie](https://software.charliemonroe.net/downie/) is a versatile downloader, now in its 4th generation. It can be used as a browser plugin, allowing you to download almost anything from the web. For instance, if you find an interesting video online and want to watch it repeatedly, just click the Downie icon in the top-left corner of your browser, select the video you want to download, and it will start downloading immediately. You can also copy the webpage link into Downie, and it will download the content flawlessly. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cgurhrj92p2ccub9lw1j.png) ### 2. ServBay [ServBay](https://www.servbay.com/) is an incredible [development environment](https://www.servbay.com/) tool. Even though PHP 8.4 hasn't officially launched, ServBay has already integrated the 8.4 package. I can confidently say this is the best development environment I've ever used. It saves me the hassle of setting up the environment, significantly saving my time. You can install any package with just a click, which is very convenient. It also supports custom domains and multiple hosts. As a web developer, if I could only keep one software, it would definitely be ServBay. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tvhzr3ywz0r7vvs8zzzw.png) ### 3. Adguard [Adguard](https://adguard.com/en/welcome.html) is a powerful ad blocker that can also be used as a Safari browser plugin. Simply enable Safari in the software settings. When you open the browser, click "Block Element" in the menu bar, select the type and area of the ad you want to remove, and click "Block". These ads won’t appear again. Adguard also has a fantastic feature that blocks pre-roll ads in videos for free, meaning you can watch videos without any annoying ads before the main content starts—a great relief for binge-watchers. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tmmluuppsayepytnoeg7.png) ### 4. Paste [Paste](https://pasteapp.io/) is a convenient clipboard application that allows users to quickly set a shortcut for accessing the clipboard. It can store an unlimited number of copied items, so you never have to worry about losing content you want to paste. It can handle text, links, files, and images without any issues. Another handy feature is the ability to pin frequently used items, such as phone numbers and email addresses, to the top of the clipboard for easy access. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y0w02b6yyxygi9ds98ig.png) ### 5. Dropover [Dropover](https://dropoverapp.com/) is extremely useful when you need to organize files by dragging and dropping them. When you need to move files to a specific location, use Dropover. While dragging files, a "shelf" appears next to the folder. You can drag all the files you need onto the "shelf" at once. Then, open the destination folder and drag the "shelf" into it to complete the file transfer efficiently. This eliminates the need to open multiple windows. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jd4ih7yt6628s9soq717.png)
hikarimaeda
1,890,343
Stylish Comfort: Exploring Hellstar Shirts and Shorts
Hellstar, renowned for its bold approach to contemporary fashion, extends its signature style beyond...
0
2024-06-16T14:40:52
https://dev.to/work_df097eadc4c2e801f496/stylish-comfort-exploring-hellstar-shirts-and-shorts-4km6
hellstar, shorts
[Hellstar](https://hellstarclothingsus.ltd/), renowned for its bold approach to contemporary fashion, extends its signature style beyond sweatshirts and sweatpants to encompass a range of equally fashionable shirts and shorts. This article delves into the allure of Hellstar's shirts and shorts, emphasizing their unique designs, comfort, and versatile appeal.

## Hellstar: A Fusion of Fashion and Functionality

Founded on the principles of innovation and quality, Hellstar has carved a niche in the fashion industry by combining cutting-edge designs with unparalleled comfort. The brand's commitment to craftsmanship is evident in every garment, ensuring that each piece not only meets but exceeds expectations in terms of style and durability.

## Hellstar Shirts: Embodying Modern Elegance

Hellstar shirts epitomize modern elegance with their sleek designs and meticulous attention to detail. Available in a variety of styles, from classic button-downs to contemporary tees, these shirts feature distinctive prints, embroidered logos, and sophisticated patterns that reflect the brand's avant-garde aesthetic.

**Design Elements:** Hellstar shirts often showcase bold graphics or minimalist designs that make a statement. The incorporation of the brand's logo, either subtly embroidered or boldly printed, adds a touch of exclusivity to each piece. Whether opting for a casual tee or a more formal button-down, wearers can effortlessly elevate their ensemble with Hellstar's refined craftsmanship.

**Material and Comfort:** Crafted from premium fabrics such as lightweight cotton and breathable blends, [Hellstar shirts](https://thehellstarclothing.ltd/) offer a luxurious feel against the skin. The materials are chosen not only for their comfort but also for their ability to retain shape and withstand everyday wear. This ensures that Hellstar shirts remain a staple in both casual and semi-formal wardrobes.

## Hellstar Shorts: Versatile and Contemporary

Hellstar shorts are designed to merge comfort with contemporary style, making them ideal for various occasions and settings. Available in an array of cuts, including athletic-inspired designs and casual chinos, these shorts cater to diverse tastes while maintaining a cohesive brand aesthetic.

**Design Features:** [Hellstar shorts](https://thehellstarstore.ltd/shorts/) are characterized by their streamlined silhouettes and functional details. Some styles feature discreet branding or embroidered motifs, enhancing their visual appeal without overwhelming the overall design. The emphasis on clean lines and understated elegance ensures that Hellstar shorts complement any wardrobe effortlessly.

**Versatility:** From leisurely weekends to outdoor adventures, Hellstar shorts offer unparalleled versatility. Pair them with a Hellstar shirt for a coordinated look or combine them with a casual tee for a relaxed vibe. The ability to transition seamlessly from day to night makes Hellstar shorts a must-have for the modern wardrobe.

## Cultural Influence and Popularity

Hellstar's shirts and shorts have garnered widespread acclaim among fashion enthusiasts, influencers, and celebrities alike. The brand's distinctive aesthetic and commitment to quality have cemented its reputation as a trendsetter in contemporary streetwear. Endorsements from high-profile individuals and prominent media appearances further underscore Hellstar's influence in shaping global fashion trends.
## Sustainability and Ethical Practices

[Hellstar](https://hellstarclothingsus.ltd/) remains committed to sustainable practices, incorporating eco-friendly materials and ethical manufacturing processes into its production. By prioritizing sustainability, the brand not only minimizes its environmental footprint but also sets an industry standard for responsible fashion practices. Consumers can feel confident in choosing Hellstar apparel, knowing that their purchase supports sustainable initiatives.

## The Value of Hellstar Shirts and Shorts

Investing in Hellstar shirts and shorts represents more than just acquiring stylish clothing; it embodies a commitment to quality and contemporary fashion. The durability of Hellstar garments ensures long-lasting wear, making them a worthwhile addition to any wardrobe. While Hellstar products may come at a premium, their versatility, comfort, and enduring appeal justify the investment in timeless fashion pieces.

## Conclusion

Hellstar shirts and shorts exemplify the brand's dedication to blending innovative design with uncompromising comfort. With their distinctive aesthetics, premium materials, and versatile appeal, these garments redefine contemporary fashion standards. Whether embracing casual chic or elevating everyday attire, Hellstar shirts and shorts empower wearers to express their unique style with confidence and sophistication.
work_df097eadc4c2e801f496
1,890,340
Networking to a 5th Grader in 256 words.
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-16T14:39:13
https://dev.to/thomastaylor/networking-to-a-5th-grader-in-256-words-6p4
devchallenge, cschallenge, computerscience, networking
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*

## Explainer

Imagine you want to send a hand-written birthday party invite to a buddy who lives far away. To send this invite, you would need to use a postal service. Sending data through the internet is like using a postal service.

Firstly, you would write your letter and put it in an envelope (which is like creating a message on your computer). Then, you write your friend's home address on the envelope, which tells the postal service the destination (which is like an IP address for the internet). Next, you place the envelope in the mailbox (which is similar to using your Wi-Fi or home internet to send a message).

Now, it's up to the postal service to deliver your mail. The postal workers are like internet service providers who move your mail from one place to another using trucks and post offices. These trucks and post offices are like routers and servers on the internet.

At each post office along the way, your mail might be sorted and sent to another post office. As time goes on, your mail gets closer and closer to your buddy's house. This is similar to how data travels through servers and routers on the internet.

Eventually, your mail is delivered to your friend's house. When your friend opens the envelope, it's like their computer receiving and showing the message you sent.

So, sending messages and pictures on the internet is a lot like sending letters through the postal service, but with computers and special addresses instead.
thomastaylor
1,890,339
Create FastAPI App Like pro part-1
install fastapi using following commend pip install fastapi[all] Enter fullscreen mode ...
0
2024-06-16T14:36:49
https://dev.to/muhammadnizamani/create-fastapi-app-like-pro-part-1-12pi
fastapi, python, uvicorn, pydentic
Install FastAPI using the following command:

```
pip install "fastapi[all]"
```

Create a project directory, give it a name, and then get started.

**Step #1:** Create a **server** directory and add an `__init__.py` file to it. Then, create the following subdirectories within the server directory:

1. **DB**: This directory will contain all the code for database connections.
2. **Models**: This directory will house the models for all tables.
3. **Routers**: This directory will contain all the routers. Ensure that each table has a separate router.
4. **Schemas**: This directory will contain all the Pydantic schemas for each table. Ensure that each table has a separate file for its schema (see the bonus example at the end of this post).
5. **Utils**: This directory will contain utility functions and code.

**Note: each of the above directories also needs an `__init__.py` file.**

Check this picture to understand the file structure:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r9pir0xsndg5akqyyg8i.png)

**Step #2:** Create `backend.py` inside the server directory. It should contain the following code; adjust it according to your needs:

```
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

from server.routers import your_router

app = FastAPI(title="Backend for Tip for fastapi", version="0.0.1", docs_url='/')

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
    allow_credentials=True,
)

app.include_router(your_router.router)

@app.get("/ping")
def health_check():
    """Health check."""
    return {"message": "Hello I am working!"}
```

**Step #3:** Then create a file named `run.py` outside the server directory and add the following code:

```
import os

import uvicorn

ENV: str = os.getenv("ENV", "dev").lower()

if __name__ == "__main__":
    uvicorn.run(
        "server.backend:app",
        host=os.getenv("HOST", "0.0.0.0"),
        port=int(os.getenv("PORT", 8080)),
        workers=int(os.getenv("WORKERS", 4)),
        reload=ENV == "dev",
    )
```

Now you can run your FastAPI project with just the following command:

```
python run.py
```

This is my GitHub: [https://github.com/MuhammadNizamani](https://github.com/MuhammadNizamani)
This is my squad on daily.dev: [https://dly.to/DDuCCix3b4p](https://dly.to/DDuCCix3b4p)
Check the code example in this repo, and please give a star to my repo: https://github.com/MuhammadNizamani/Fastapidevto
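Bonus: the post describes the **Schemas** directory but doesn't show a file from it, so here is a minimal, hypothetical example of what one could look like (the file, model, and field names are invented for illustration):

```python
# server/schemas/item_schema.py (hypothetical example)
from pydantic import BaseModel


class ItemCreate(BaseModel):
    """Fields the client sends when creating an item."""
    name: str
    price: float


class ItemRead(ItemCreate):
    """Fields returned by the API, including the database id."""
    id: int
```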
muhammadnizamani
1,890,338
Day 6: Introduction to Semantic HTML
Welcome to Day 6 of your journey to mastering HTML and CSS! Today, we will explore semantic HTML, an...
0
2024-06-16T14:35:44
https://dev.to/dipakahirav/day-6-introduction-to-semantic-html-3o2i
javascript, webdev, html, css
Welcome to Day 6 of your journey to mastering HTML and CSS! Today, we will explore semantic HTML, an important concept that helps improve the accessibility and SEO of your web pages. By the end of this post, you'll understand the benefits of semantic HTML and how to use semantic elements effectively.

Please subscribe to my [YouTube channel](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1) to support my channel and get more web development tutorials.

#### What is Semantic HTML?

Semantic HTML refers to using HTML elements that have a clear, descriptive meaning. These elements not only describe the content they contain but also convey the meaning and structure of the content to browsers, search engines, and assistive technologies.

#### Benefits of Semantic HTML

1. **Improved Accessibility**: Semantic elements make it easier for screen readers and other assistive technologies to interpret and navigate the content.
2. **Better SEO**: Search engines can better understand and index the content, improving the visibility and ranking of your web pages.
3. **Enhanced Code Readability**: Semantic elements make the code more readable and maintainable for developers.

#### Common Semantic HTML Elements

Here are some of the most commonly used semantic elements in HTML:

1. **`<header>`**: Represents the introductory content or a set of navigational links.

```html
<header>
  <h1>Welcome to My Website</h1>
  <nav>
    <ul>
      <li><a href="#home">Home</a></li>
      <li><a href="#about">About</a></li>
      <li><a href="#contact">Contact</a></li>
    </ul>
  </nav>
</header>
```

2. **`<nav>`**: Represents a section of navigation links.

```html
<nav>
  <ul>
    <li><a href="#home">Home</a></li>
    <li><a href="#about">About</a></li>
    <li><a href="#contact">Contact</a></li>
  </ul>
</nav>
```

3. **`<section>`**: Represents a standalone section of content.

```html
<section>
  <h2>About Us</h2>
  <p>We are a company that values...</p>
</section>
```

4. **`<article>`**: Represents a self-contained piece of content that can be independently distributed or reused.

```html
<article>
  <h2>Latest News</h2>
  <p>Today, we announced...</p>
</article>
```

5. **`<aside>`**: Represents content that is tangentially related to the content around it (like a sidebar).

```html
<aside>
  <h2>Related Articles</h2>
  <ul>
    <li><a href="#article1">Article 1</a></li>
    <li><a href="#article2">Article 2</a></li>
  </ul>
</aside>
```

6. **`<footer>`**: Represents the footer of a document or section.

```html
<footer>
  <p>&copy; 2024 My Website</p>
  <nav>
    <ul>
      <li><a href="#privacy">Privacy Policy</a></li>
      <li><a href="#terms">Terms of Service</a></li>
    </ul>
  </nav>
</footer>
```

7. **`<main>`**: Represents the main content of a document.
```html
<main>
  <h1>Main Content</h1>
  <p>This is the main content of the page.</p>
</main>
```

#### Creating a Web Page with Semantic HTML

Let's create a simple web page using semantic HTML elements:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Semantic HTML Example</title>
</head>
<body>
    <header>
        <h1>Welcome to My Website</h1>
        <nav>
            <ul>
                <li><a href="#home">Home</a></li>
                <li><a href="#about">About</a></li>
                <li><a href="#contact">Contact</a></li>
            </ul>
        </nav>
    </header>
    <main>
        <section>
            <h2>About Us</h2>
            <p>We are a company that values...</p>
        </section>
        <article>
            <h2>Latest News</h2>
            <p>Today, we announced...</p>
        </article>
        <aside>
            <h2>Related Articles</h2>
            <ul>
                <li><a href="#article1">Article 1</a></li>
                <li><a href="#article2">Article 2</a></li>
            </ul>
        </aside>
    </main>
    <footer>
        <p>&copy; 2024 My Website</p>
        <nav>
            <ul>
                <li><a href="#privacy">Privacy Policy</a></li>
                <li><a href="#terms">Terms of Service</a></li>
            </ul>
        </nav>
    </footer>
</body>
</html>
```

#### Summary

In this blog post, we explored the concept of semantic HTML and its benefits. We learned about various semantic elements and how to use them to create a well-structured and accessible web page.

Stay tuned for Day 7, where we will dive into the basics of CSS and how to style your HTML content.

Happy coding!

---

Feel free to leave your comments or questions below. If you found this guide helpful, please share it with your peers and follow me for more web development tutorials. Happy coding!

### Follow and Subscribe:

- **Website**: [Dipak Ahirav](https://www.dipakahirav.com)
- **Email**: dipaksahirav@gmail.com
- **Instagram**: [devdivewithdipak](https://www.instagram.com/devdivewithdipak)
- **YouTube**: [devDive with Dipak](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1)
- **LinkedIn**: [Dipak Ahirav](https://www.linkedin.com/in/dipak-ahirav-606bba128)
dipakahirav
1,890,337
Advanced Insights into Automated Data Processing Tools
Introduction: In the age of big data, automated data processing tools have become...
0
2024-06-16T14:34:15
https://dev.to/data_expertise/advanced-insights-into-automated-data-processing-tools-4l3k
automateddataprocessing, machinelearning, bigdata, datascience
## Introduction: In the age of big data, automated data processing tools have become indispensable for businesses aiming to efficiently handle vast amounts of information. Moving beyond the basics, this article delves into advanced strategies and applications of automated data processing, examining its impact from various perspectives, including efficiency, scalability, and innovation. ## Enhancing Data Quality with Automated Processing: [Automated data processing](https://dataexpertise.in/basics-of-automated-data-processing-tools/) tools are crucial in ensuring data quality. These tools employ sophisticated algorithms to detect and correct errors, fill in missing values, and standardize data formats. - Data Cleansing: Advanced tools like Trifacta and Talend use machine learning to automate data cleansing, identifying anomalies and inconsistencies with greater accuracy than manual methods. - Data Enrichment: Integration with external data sources can enhance datasets, providing richer context and more comprehensive insights. Tools like Alteryx facilitate this by automating the enrichment process, merging internal data with public or third-party data sources. ## Scalability and Performance Optimization: Scalability is a significant challenge in data processing, particularly as data volumes grow exponentially. Automated data processing tools offer robust solutions to this challenge. - Distributed Processing: Tools like Apache Spark and Hadoop enable distributed data processing, leveraging clusters of computers to handle large datasets efficiently. This parallel processing capability significantly reduces processing time and enhances scalability. - Resource Management: Automated tools optimize resource allocation dynamically. For instance, AWS Glue uses serverless architecture to scale resources based on workload requirements, ensuring efficient processing without over-provisioning. ## Real-time Data Processing and Streaming: Real-time data processing is increasingly important for applications requiring immediate insights and actions. Automated data processing tools are evolving to meet this demand. ![Real Time Data Processing From Different Sources](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rnwq9hijmfn147a060x3.jpg) - Stream Processing: Platforms like Apache Kafka and Apache Flink facilitate real-time data processing, allowing businesses to analyze and respond to data as it is generated. This is crucial for applications like fraud detection, where immediate action is necessary. - Event-Driven Architecture: Integrating automated data processing with event-driven architecture enhances responsiveness. Tools such as AWS Lambda and Google Cloud Dataflow enable the processing of data events in real time, supporting applications in IoT and real-time analytics. ## Integration with Machine Learning and AI: Automated data processing tools are increasingly integrated with [machine learning](https://dataexpertise.in/machine-learning-beginners-guide/) and AI, enabling more sophisticated data analysis and decision-making. - Automated Machine Learning (AutoML): Tools like Google Cloud AutoML and H2O.ai automate the process of building, training, and deploying machine learning models. This integration accelerates the development of predictive models, allowing for more timely insights. 
- AI-Driven Insights: AI capabilities within data processing tools can automate complex tasks like [natural language processing](https://dataexpertise.in/advancing-emotional-artificial-intelligence/) (NLP) and image recognition, expanding the scope of data analysis. IBM Watson and Microsoft Azure AI are leaders in this space, providing robust AI-powered data processing solutions. ## Security and Compliance: As [data privacy](https://dataexpertise.in/data-privacy-compliance-legal-frameworks/) regulations become stricter, automated data processing tools play a vital role in ensuring compliance and security. - Data Masking and Encryption: Automated tools can implement data masking and encryption to protect sensitive information. Tools like Informatica and IBM InfoSphere offer automated security features to ensure data privacy. - Compliance Monitoring: Automated tools continuously monitor data processing activities to ensure compliance with regulations like GDPR and CCPA. They can generate audit trails and compliance reports, simplifying the process of regulatory adherence. ## Future Trends in Automated Data Processing: The landscape of automated data processing is continuously evolving, driven by technological advancements and emerging business needs. - Edge Computing: The rise of [edge computing](https://dataexpertise.in/edge-computing-iot-innovations/) is pushing data processing closer to the source of data generation. Automated tools are being developed to process data on edge devices, reducing latency and bandwidth usage. - Quantum Computing: Although still in its infancy, [quantum computing](https://dataexpertise.in/5-quantum-computing-data-processing-revolution/) holds the promise of revolutionizing data processing. Automated tools designed for quantum environments could exponentially increase processing speeds for complex datasets. - Synthetic Data Generation: Automated tools are also advancing in generating synthetic data, which can be used for training machine learning models and testing data processing systems without compromising real data security. ## Conclusion: Automated data processing tools are no longer just about efficiency and basic automation; they are pivotal in driving advanced data strategies and innovations. By enhancing data quality, optimizing performance, enabling real-time processing, integrating with AI, and ensuring security, these tools are transforming how businesses leverage data. As technology evolves, the capabilities of automated data processing tools will continue to expand, unlocking new possibilities and driving the future of data-driven decision-making. ## About the Author: [Durgesh Kekare](https://www.linkedin.com/in/durgesh-kekare/) is a data enthusiast and expert contributor at DataExpertise.in, specializing in advanced data processing techniques and their applications. With a passion for exploring the intersections of big data, machine learning, and AI, Durgesh provides in-depth analysis and insights into the latest trends and technologies shaping the data landscape.
data_expertise
1,890,336
Sallah greeting with ram
Check out this Pen I made!
0
2024-06-16T14:34:10
https://dev.to/kemiowoyele1/sallah-greeting-with-ram-73c
codepen
Check out this Pen I made! {% codepen https://codepen.io/frontend-magic/pen/JjqMMXN %}
kemiowoyele1
1,890,292
Code With Heroines : TCP/IP && Rise and fall of the Han Dynasty
Description The dramatic fall of the Tang Dynasty within a collapsing virtual world. Zhu...
0
2024-06-16T14:34:08
https://dev.to/fubumingyu/code-with-heroines-tcpip-rise-and-fall-of-the-han-dynasty-3o0f
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r8nv2q85rr5fuzb6nbba.png)

## Description

The dramatic fall of the Tang Dynasty within a collapsing virtual world. Zhu Quanzhong, depicted as a determined young girl with advanced hacking skills, leads the charge against a crumbling digital empire. The background is filled with fragmented data structures and corrupted files, symbolizing the breakdown of the system.

## Tang Dynasty

Li Yuan founded the Tang Dynasty (618-907) after overthrowing the Sui Dynasty and establishing his rule in Chang'an. His successor, Emperor Taizong (Li Shimin), solidified the dynasty's governance with a complex taxation system and a government structure based on the ritsuryō (legal code) system, which integrated military and agricultural duties. The central government operated through three ministries and six departments, focusing on criminal and administrative law.

Externally, the Tang Dynasty expanded its influence over neighboring regions, including the Türks, the Western Regions, tribes of the Northeast, and northern Vietnam, establishing local governorates. However, internal challenges arose, especially during the reign of Empress Wu Zetian (690-705), causing instability. Emperor Xuanzong (712-756) temporarily restored stability, but later reliance on Yang Guifei's family and on regional military governors (jiedushi) led to the An Lushan and Shi Siming Rebellion (755-763).

Following the rebellion, the Tang Dynasty struggled with the dominance of regional military governors, eunuch corruption, and external invasions. The weakening of the ritsuryō system resulted in increased taxes, the collapse of the peasantry, and the rise of large noble estates. Military and taxation changes, including the two-tax system and monopolies on commodities like salt, further destabilized the dynasty. The late 9th century saw a major peasant uprising (875-884), and the Tang Dynasty ultimately fell to Zhu Quanzhong (Zhu Wen), a regional military governor.

## What is TCP/IP?

TCP manages the reliable transfer of data, while IP determines the path along which data is sent. The following is a specific description of each role.

### Roles of TCP

Simply put, TCP acts like a post office.

- Data splitting and reassembly: Splits large data into smaller packets and reassembles them back into the original data at the receiving end.
- Order control: Packets are numbered and sent in sequence, then sorted into the correct order at the receiving end.
- Error detection and correction: Checks whether packets were delivered correctly and requests retransmission if there is an error.
- Connection establishment and termination: Establishes a connection between the sender and receiver before sending data, and terminates the connection when the data transmission is finished.

### Roles of IP

Simply put, IP acts like a truck driver.

- Addressing: Uses the source and destination IP addresses to determine where to send the data.
- Routing: Ensures that data takes the optimal path from source to destination. This may involve going through multiple relay points.
- Packet delivery: Treats each packet individually and delivers it over the optimal route. Packets may take different routes.
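To tie the two roles together, here is a minimal sketch using Python's standard `socket` module (the host and port are just placeholders): `create_connection` performs TCP's connection establishment, the IP layer routes the packets, and TCP reassembles the reply into a byte stream.

```python
import socket

# TCP: establish a connection before any data is sent (the "post office" handshake)
# IP: the (host, port) address tells the network where to route each packet
with socket.create_connection(("example.com", 80)) as conn:
    # TCP splits this request into numbered packets and retransmits lost ones
    conn.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    # the receiver reassembles the packets back into the original byte stream
    reply = conn.recv(4096)

print(reply[:60])
```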
fubumingyu
1,890,332
Building Zerocalc, part II - evaluating then parsing
In part I we built a simple tokenizer while explaining how rustc converts an input text into a stream...
27,824
2024-06-16T14:33:36
https://dev.to/michal1024/building-zerocalc-part-ii-evaluating-then-parsing-3fim
rust, programming
In [part I](https://dev.to/michal1024/building-zerocalc-part-i-rustc-lexer-and-a-lexer-in-rust-3ipf) we built a simple tokenizer while explaining how `rustc` converts an input text into a stream of tokens. The next step is to transform the stream of tokens into a representation that will be easy to evaluate.

This time we will not follow the `rustc` implementation. The `rustc` parser is a complex system that produces multiple representations of the code, including an abstract syntax tree (AST) and high-, mid-, and low-level intermediate languages. Those representations are used to perform different checks and transformations, such as borrow checking or macro expansion, before the low-level code is handed over to LLVM for actual machine code generation. We do not need all that complexity for a simple calculator.

## Evaluating

Before we build a parser let's think about what the parser will produce. There are multiple ways of representing and evaluating mathematical expressions, such as using an expression tree or following Reverse Polish notation. The latter is more interesting as it is closer to what the real machine does. Let's consider what the machine has to do to add the two numbers in `1+2`:

1. Store `1` in a register
2. Store `2` in a register
3. Call the `add` instruction to add the two registers
4. Store the result in a register

We can write this as the following pseudo-assembly code. This code uses a stack instead of registers, but the idea is very similar:

```
push 1
push 2
add // adds 2 numbers from the top of the stack and stores the result back to the stack
pop // get the result from the stack
```

The above expression can be represented in Reverse Polish notation as: `1 2 +`. A more interesting example, which requires considering operator precedence, is `1 + 2 * 3`. This example in Reverse Polish notation becomes `1 2 3 * +`.

Our calculator will need a representation that can store values and operations on a stack. Let's define two enums for that:

```rust
// Debug and PartialEq are derived so tests can use assert_eq! on these types
#[derive(Debug, PartialEq)]
enum Expression {
    Val(i32),
    BinOp(Op)
}

#[derive(Debug, PartialEq)]
enum Op {
    Add,
    Mul
}
```

An entire mathematical expression, or a program to evaluate, is a list of expressions:

```rust
type Program = Vec<Expression>;
```

With this representation evaluating mathematical expressions becomes very simple:

```rust
fn evaluate(p: &Program) -> i32 {
    let mut stack: Vec<i32> = vec![];

    for exp in p {
        match exp {
            Expression::Val(i) => stack.push(*i),
            Expression::BinOp(Op::Add) => {
                let a = stack.pop().unwrap();
                let b = stack.pop().unwrap();
                stack.push(a+b);
            },
            Expression::BinOp(Op::Mul) => {
                let a = stack.pop().unwrap();
                let b = stack.pop().unwrap();
                stack.push(a*b);
            }
        }
    };

    // final result is now on the stack
    stack.pop().unwrap()
}

#[test]
fn test_add() {
    let program = vec![
        Expression::Val(1),
        Expression::Val(2),
        Expression::BinOp(Op::Add)
    ];

    assert_eq!(3, evaluate(&program));
}
```

The above code ignores error checking, such as overflow errors. I will leave it up to the reader to consider solving that issue. Hint: one possible solution is to use a number wrapper that can keep either an actual number or the NaN (not-a-number) term.

## Parsing

Our parsing algorithm will use information about the precedence of operations. A literal (value) has the highest precedence: if we see one, we immediately store it in our output program. Then we consider, in order, multiplication and addition. We have 2 extra tokens: Eof (end of the input stream) and Unknown (used for initial values of the tokens); they will have the lowest precedence.
Let's define precedence as a function:

```rust
fn precedence(token_kind: lexer::TokenKind) -> i32 {
    use lexer::TokenKind::*;

    match token_kind {
        Unknown | Eof => 0,
        Add => 1,
        Mul => 2,
        Literal(_) => 3
    }
}
```

Our parser, in addition to storing the source being parsed and an output program, needs a few extra things:
* a variable to track the current position in the source so that token text can be extracted,
* variables to store the current and next token

We will use the `Cursor` structure we defined in part I to tokenize the input string. The lifetime of our parser will be bound to the lifetime of the source code being parsed.

```rust
struct Parser<'src> {
    source: &'src str,
    cursor: lexer::Cursor<'src>,
    pos: usize,
    current_token: lexer::Token,
    next_token: lexer::Token,
    program: Program
}
```

We will also implement a few extra methods that will help us with parsing:
* `from_source` - a constructor to create the parser from the source code string
* `init` - to initialize parsing
* `bump` - to advance the parser, tracking its position
* `current_value` - to extract the string value of a token using the current position

```rust
impl<'src> Parser<'src> {
    fn from_source(source: &'src str) -> Parser<'src> {
        Parser {
            source,
            cursor: lexer::Cursor::new(source),
            pos: 0,
            current_token: lexer::Token::new(lexer::TokenKind::Unknown, 0),
            next_token: lexer::Token::new(lexer::TokenKind::Unknown, 0),
            program: vec![]
        }
    }

    fn init(&mut self) {
        self.next_token = self.cursor.advance_token();
    }

    fn bump(&mut self) {
        self.pos += self.current_token.len;
        self.current_token = mem::replace(&mut self.next_token, self.cursor.advance_token());
    }

    fn current_value(&self) -> &'src str {
        &self.source[self.pos..self.pos + self.current_token.len]
    }
}
```

Note the `mem::replace` function used in the `bump` method. In Rust, we cannot simply move ownership from one variable to another while leaving the old variable pointing to nothing. We could clone the value, but we only need one copy, so we can use the `replace` function instead, moving ownership and assigning a new value in one go. Internally the `replace` function uses unsafe code to swap the values without copying the actual content of the target object.

For the parsing algorithm, we are going to iterate over tokens and:
1. Compare the precedence of the next token and the current token; if the precedence of the next token is higher, call parsing recursively
2. Push the current expression on the stack.

Akin to the recursive descent parsing method, we will match tokens and delegate parsing of each type of token to specialized methods.
```rust fn parse(&mut self) { self.init(); while self.next_token.kind != lexer::TokenKind::Eof { self.parse_next(precedence(self.next_token.kind)); } } fn parse_next(&mut self, current_precedence: i32) { while precedence(self.next_token.kind) >= current_precedence { self.bump(); match self.current_token.kind { lexer::TokenKind::Add => self.parse_binary_op(Op::Add), lexer::TokenKind::Mul => self.parse_binary_op(Op::Mul), lexer::TokenKind::Literal(_) => self.parse_literal(), _ => (), } } } fn parse_binary_op(&mut self, op: Op) { //continue parsing while tokens are of higher or equal precedence self.parse_next(precedence(self.current_token.kind)); self.program.push( Expression::BinOp(op) ); } fn parse_literal(&mut self) { let i: i32 = self.current_value().parse().unwrap(); self.program.push( Expression::Val(i) ); } ``` Let's check if it works: ```rust #[test] fn test_parse() { let source = "1+2*3"; let expected = vec![ Expression::Val(1), Expression::Val(2), Expression::Val(3), Expression::BinOp(Op::Mul), Expression::BinOp(Op::Add) ]; let mut parser = Parser::from_source(source); parser.parse(); assert_eq!(expected, parser.program); } ``` That's it! Our basic parser is ready to be enhanced with more operations and proper error handling. Sources: 1. https://en.wikipedia.org/wiki/Binary_expression_tree 2. https://en.wikipedia.org/wiki/Reverse_Polish_notation 3. https://doc.rust-lang.org/src/core/mem/mod.rs.html#858 4. https://rustc-dev-guide.rust-lang.org/
michal1024
1,890,333
Spring Boot + Hibernate + PostgreSQL Example
This tutorial will build a Spring Boot CRUD Rest API example with Maven that uses Spring Data...
0
2024-06-16T14:26:06
https://dev.to/georgech2/spring-boot-hibernate-postgresql-example-123a
springboot, postgressql, java, webdev
This tutorial will build a Spring Boot CRUD Rest API example with Maven that uses Spring Data JPA/Hibernate to interact with the PostgreSQL database. You’ll know:

* How to configure Spring Data, JPA, and Hibernate to work with PostgreSQL Database
* Way to use Spring Data JPA to interact with PostgreSQL Database

## Technology

* Java 11
* Spring Boot 2.x
* PostgreSQL
* Maven

## PostgreSQL Set up

* Install PostgreSQL in Debian

```bash
$ sudo apt-get -y install postgresql
```

Note: the user and table creation steps below are SQL statements; run them inside the `psql` shell.

* Create a new DB

```bash
$ createdb mydb
```

* Create a new user

```sql
CREATE USER newuser WITH PASSWORD 'xxxxxx';
```

* Create Table

```sql
CREATE TABLE weather (
    id serial primary key,
    city varchar(80),
    temp_lo int, -- low temperature
    temp_hi int, -- high temperature
    prcp real, -- precipitation
    date date,
    is_del int default 0
);
ALTER TABLE weather OWNER TO newuser;

CREATE TABLE cities (
    id serial primary key,
    name varchar(80),
    location point
);
ALTER TABLE cities OWNER TO newuser;
```

Or use my `test.sql` to initialize the tables:

```bash
mydb=> \i test.sql
```

## Create Spring Boot Project

Use Spring Initializr to create a Maven Spring Boot Project.

Add some dependencies to `pom.xml`:

```xml
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
</dependency>
<dependency>
    <groupId>javax.persistence</groupId>
    <artifactId>javax.persistence-api</artifactId>
</dependency>
```

### PostgreSQL Configuration

```properties
spring.datasource.url=jdbc:postgresql://localhost:5432/mydb
spring.datasource.username=newuser
spring.datasource.password=xxxxxx

# connection timeout
spring.datasource.hikari.connection-timeout=20000
# min idle connections
spring.datasource.hikari.minimum-idle=5
# max pool size
spring.datasource.hikari.maximum-pool-size=12
spring.datasource.hikari.idle-timeout=300000
spring.datasource.hikari.max-lifetime=1200000
spring.datasource.hikari.auto-commit=true
```

For production environments, a single database connection is not enough to meet real-world demand, so we need to configure a connection pool here. By default, Spring Data JPA uses the `hikari` connection pool, so it only needs to be configured in the `application.properties` file; no other dependencies are needed.
### Define Model Class

**Weather.java**

```java
package com.example.demo.model;

import lombok.Data;

import javax.persistence.*;
import java.io.Serializable;
import java.time.LocalDate;

@Data
@Entity
@Table(name = "weather")
public class Weather implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String city;
    private Integer temp_hi;
    private Integer temp_lo;
    private Float prcp;
    private Integer is_del;
    private LocalDate date;
}
```

**City.java**

```java
package com.example.demo.model;

import com.example.demo.PGpointType;
import org.hibernate.annotations.Type;
import org.hibernate.annotations.TypeDef;
import org.postgresql.geometric.PGpoint;
import lombok.Data;

import javax.persistence.*;
import java.io.Serializable;

@Data
@Entity
@TypeDef(name = "point", typeClass = PGpointType.class)
@Table(name = "cities")
public class City implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String name;
    @Type(type = "point")
    private PGpoint location;
}
```

* For an auto-increment id, you need to use the `@GeneratedValue` annotation with the `IDENTITY` strategy
* Because Hibernate does not support the `PGpoint` data type out of the box, you need to create a custom `PGpointType` class

### Create Repository Interface

```java
package com.example.demo.repository;

import com.example.demo.model.Weather;
import org.springframework.data.domain.Pageable;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

import java.util.List;

public interface WeatherRepository extends JpaRepository<Weather, Long> {
    @Query("SELECT w FROM Weather w " +
            " WHERE (:city is NULL OR :city = '' OR w.city = :city)" +
            " AND w.is_del = 0")
    List<Weather> listWeather(@Param("city") String city, Pageable pageable);
}
```

If you use `SELECT id, city, temp_hi, temp_lo, prcp, is_del, date FROM weather`, the result will be `Object[]`, which can't be converted to `Weather.class`; so I use `SELECT w FROM Weather w` instead.

### Create Controller & Service

**Controller**

```java
@RestController
@RequestMapping("/api")
public class DemoController {
    @Autowired
    private DemoService demoService;
    //...
}

@Service
public class DemoService {
    @Autowired
    private WeatherRepository weatherRepository;
    @Autowired
    private CityRepository cityRepository;
    //...
}
```

### API Test

Import [api-test.json](https://github.com/GeorgeCh2/spring-boot-postgresql/blob/main/postgresql-rest-api.json) to Postman for API testing.

![api-test](https://miro.medium.com/v2/resize:fit:1148/format:webp/1*lSjgN9uAKOOdfbR-0Fac9g.png)

## Conclusion

Above are the steps for building a Spring Boot + Hibernate + PostgreSQL example with REST API. The source is open on [GitHub](https://github.com/GeorgeCh2/spring-boot-postgresql)!
georgech2
1,890,325
Automate the Boring Stuff: How I Built a Code Generator to Save Hours of Redundant Work🧑‍💻
In this article, I will explain how I got frustrated with writing redundant code needed to extend a...
0
2024-06-16T14:20:04
https://dev.to/samadyarkhan/how-i-automated-all-the-elegant-code-required-to-extend-a-feature-mmn
python, ai, chatgpt, developer
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ryfu3pv9u0z5o21sqpjl.gif) In this article, I will explain how I got frustrated with writing redundant code needed to extend a service every time a new requirement was raised, and how I automated this with code generation. You can skip to one of the sections: 0) [Backstory / Lore Time 🧙](#backstory--lore-time-) 1) [The Redundant Work 🥱](#the-redundant-work-) 2) [Writing a Code Generator 🧑‍💻](#writing-a-code-generator-) 3) [The Result ⚡](#the-result-) ## Backstory / Lore Time 🧙 A new customer logged in at [Middleware](https://www.middlewarehq.com/) and they had 100 times the data of our previous ones, causing our data pipelines to choke. We had to ship a hot fix ASAP. After mitigation, we discovered most of their data was irrelevant bot-generated content, which we could filter out during data sync. We implemented a hot fix to filter data at ingestion, which worked well. Next, I was tasked with adding a new setting to filter bot-generated data without manual coding. While this provided more control over what we sync, it was a boring 🥱, redundant task that required understanding the Setting Service context 📚. <img width="100%" style="width:100%" src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExbnkwYjNjbThkMmlzaWR3bDZkYTAyNWNoYTZuajV3eG13eWhkMzMweSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/F3BeiZNq6VbDwyxzxF/giphy.gif"/> ## The Redundant Work 🥱 We have a Setting Service in our codebase that handles all the settings in our product. The code follows a great structure; it's easy to breeze through and handles any type of setting needed by the product. The whole process of adding a new setting requires any developer to see a previous PR where someone added a new setting, regain context, make code changes across multiple files (spanning from adapters to validators), which all depend on the new setting's schema, and ensure that the APIs are working. The code changes are straightforward; they just feel like a lot of manual work with little gain and are heavily dependent on the class schema of the new setting type. It's easily half a day of work for any developer. <img width="100%" style="width:100%" src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExOHh6aHBpdW91bnF0MXV1bnExNTBxY3k0bDg5ZGN3a3F6M3Bvb2ptaSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/9K5E2QOubA2yzoQjLT/giphy.gif"/> When I got the task to add a setting, I thought to myself: 🤔💡 If some work feels redundant and follows changes based on a set structure, I should try to automate it. ## Writing a Code Generator 🧑‍💻 <img width="100%" style="width:100%" src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExc2doc3k4dzZtcTczaHZmb2czbWQ0azU0ODhldm9kNXpnaTkwa2xxYSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/gVlgj80ZLp9yo/giphy.gif"/> I had no idea how I could automate code generation based on some rules. I have often heard my friends working in bigger organizations speak about how they have whole services that build out APIs and layouts for them based on certain schemas, so I knew it was possible. ### Research 📖 I googled if a solution already existed. People have built similar things, but none tied to my use case. I used ChatGPT to churn out an easy solution, but none worked. My next idea was to just send all the files to GPT using a script and get updated files, but that had two problems: 1) LLMs are not reliable enough with code generation. 2) Sharing proprietary code would get me fired 💀. 
<img width="100%" style="width:100%" src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExdnMxemFjbWljdmw3M3Vqc2U0azlyempyZHp6OTM1cm83eTM0Y3dyNiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/xULW8CVCfQn2QytFM4/giphy.gif"> ### Breaking Down the Problem 🛠️ The next step was to verify if a solution was even possible for our use case, so I did the following: 1) Gained all the context needed to add a new setting to the codebase. 2) Tracked all the files and specific classes and functions where the changes would go. 3) Mapped the nature of changes needed in each file with the new setting schema. So far, the function/variable/class naming was consistent with the setting type name, and the logic for adapters and validators could be coded out and automated based on the primitive data types used in my new setting type schema. If I added placeholder comments where new code was to be added and my script could identify which code to add in place of which comment and at the same time move the comment down so it could be used again next time, it would solve the problem. ### The Solution ⭐ The first challenge was to create pure functions that could generate code based on the new setting name and schema. Generating new enums, classes, and dictionaries was straightforward, but creating handlers, adapters, and validators was more complex and time-consuming. I used GPT and Claude to help develop a generic solution for this. The next challenge was locating placeholder comments across files and inserting the generated code while handling Python's indentation issues. This was particularly painful as I had little experience with regex and had to learn it on the go. Here is how I managed the code population bit, explained with an example because it was new for me and might be for you as well: ```python enum_pattern = r"(?P<indent>\s*)# ADD NEW SETTING TYPE ENUM HERE\n" match = re.search(enum_pattern, content) if match: indent = match.group('indent') new_enum_entry = f'{indent}{setting_type.upper()} = "{setting_type.upper()}"\n' content = re.sub(enum_pattern, new_enum_entry + match.group(0), content) ``` - `enum_pattern`: The regex pattern to find the placeholder comment. - `re.search`: Searches for the pattern in the file content. - `match.group('indent')`: Captures the indentation level. - `re.sub`: Replaces the placeholder with the new enum entry, preserving the indentation. And all this effort paid off ✨ ## The Result ⚡ After hours of struggling to build the code generator script, it paid off 🚀 I was able to build a script that would prompt the user for the new setting name and the required fields along with their types and make changes across files. All the developers would need to do is add the imports (because they were too messy to handle) and handle any complex data types (which would be rather simple as 90% of the code is generated). This is the Pull Request that adds the script to our Middleware Open Source Codebase. I would recommend going through the code if you wish to implement a similar solution for your use case: [https://github.com/middlewarehq/middleware/pull/433](https://github.com/middlewarehq/middleware/pull/433) ![Code Generation Script](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ryfu3pv9u0z5o21sqpjl.gif) This brought down the development time of adding a setting from a few hours to less than 10 minutes and, most importantly, helped me and other developers escape the boring work 😎! 
## Thanks for sticking till the end 🤝 If you liked the article please spare some time to star the opensource repositories I maintain: ⭐ https://github.com/middlewarehq/middleware ⭐ https://github.com/RocketChat/Apps.Github22 You can follow me on socials: [GitHub/samad-yar-khan](https://github.com/samad-yar-khan) [LinkedIn/samad-yar-khan](https://www.linkedin.com/in/samad-yar-khan/) [X/samadnotyouryar](https://x.com/samadnotyouryar)
samadyarkhan
1,890,326
How Deep Learning Works
Deep Learning is the core of a Machine Learning system, it is how a machine actually learns from data...
27,745
2024-06-16T14:18:38
https://nibodhdaware.hashnode.dev/how-deep-learning-works
ai, deeplearning, machinelearning, neuralnetworks
Deep Learning is the core of a Machine Learning system; it is how a machine actually learns from data without much human intervention. In this post I am going to discuss how Deep Learning actually works with the data you give it.

The basis of a Deep Learning system is the Neural Network, the fundamental structure through which a machine learns by itself. To understand how a Neural Network learns you need to understand how a Neural Network is structured.

There are mainly 3 (or more) layers in the neural network:

1. Input Layer: Where the data is provided to the network.
2. Hidden Layer(s): Where the network learns from the data.
3. Output Layer: Where the network outputs a result for the particular data.

There can be more hidden layers depending on how complex you want the network to be.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vom5oqdjbffnj9n5reh2.png)

# **Learning in Deep Learning**

Each node in the neural network is assigned a value known as a bias, and each edge is assigned a weight.

Weights: express how important a given input to a neuron is.

Bias: allows for shifting the activation function ([learn more](https://www.geeksforgeeks.org/activation-functions-neural-networks/)) left or right.

We do need to know how the network is mathematically represented, but the intuition behind the representation is much more important than the equation itself.

$$y = \sigma(\sum Xw + b)$$

Where,

y is the output of the network for the particular data

**σ** is the activation function

X is the value of the current node

w is the weight

b is the bias

So in short, we take the sum of all the products of X and w, add the bias, and pass the result to the activation function to generate the value of y.

### Forward Propagation

This algorithm goes through all the nodes in the network starting from the input layer. Its main goal is to calculate an estimated output, which will initially be wrong; for correction we have something called back propagation.

### Back Propagation

This algorithm works in the opposite direction from forward propagation: it goes through all the nodes starting from the output layer. The difference between the estimated output and the actual output is calculated with the help of a loss function, and while going back, the weights and biases are updated accordingly.

Forward propagation and back propagation take place iteratively until the loss is at a minimum. This is what makes the neural network learn.

# Loss Functions

These functions calculate the loss, i.e. the difference between the actual output from the network and the expected output from the data.

## Optimizers

As the main goal of a neural network is to get the loss as low as possible, optimizers help with that. An optimizer is an algorithm that tries to minimize the loss: guided by the loss function, it decides during backpropagation how to adjust the weights and biases of the network.

### Learning Rate

The Learning Rate is a very small value, like 0.01, that scales each optimizer step and helps to make sure that we do not skip over the optimal point.

If the Learning Rate is too large, we might skip over the optimum point.

If the Learning Rate is too small, it might take us a long time to reach the optimum.

### There are mainly 2 types of optimizers

1. Gradient Descent
- It is also known as the granddaddy of optimizers.
- In gradient descent, the weight is plotted on the x-axis and the loss on the y-axis, and we move along the loss curve (the U shape) toward the weight value where the loss is as low as possible, i.e. the point on the curve closest to the x-axis.
- Backpropagation is gradient descent *implemented* on a network.
- If you want to learn more about Gradient Descent, [read this](https://www.ibm.com/topics/gradient-descent).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d8f1torl3gzeldhj0sb5.png)

2. Stochastic Gradient Descent
- It is very similar to gradient descent, except we use a subset (batches) of the entire dataset to calculate the gradient.
- As we use less data compared to gradient descent, it is less computationally expensive.

Yes, there are more optimizers, but for understanding purposes the above are the most commonly used.

# Conclusion

There are indeed more concepts to cover to fully understand how deep learning works, but this should be a good enough starting point. The short gradient-descent sketch below shows the update loop from this post in action.
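A minimal, self-contained sketch (a toy example of my own, not from any library) of gradient descent fitting a single weight in plain Python:

```python
# Toy gradient descent: find w that minimizes loss(w) = (w*x - y_true)**2
x, y_true = 2.0, 10.0     # one training example; the optimum is w = 5
w = 0.0                   # initial weight
learning_rate = 0.05      # small step size so we don't skip over the optimum

for step in range(100):
    y_pred = w * x                     # forward propagation (no activation here)
    grad = 2 * (y_pred - y_true) * x   # d(loss)/dw, what backpropagation computes
    w -= learning_rate * grad          # the optimizer's update step

print(round(w, 3))  # ~5.0, so y_pred = w*x is ~10.0
```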
nibodhdaware
1,857,558
AWS Lambda Layer
This is something I was stuck on while deploying the lambda function, even though the code was only a...
0
2024-06-16T14:18:31
https://dev.to/aws-builders/aws-lambda-layer-5g00
aws, devops, lambda, cicd
This is something I was stuck on while deploying a Lambda function: even though the code was only a few hundred lines, I had to deploy the function as a zip file that unnecessarily included all the Node dependencies. It was quite annoying, and I have a feeling you felt the same way; that's why you landed over here.

Let's get started...

This is quite simple; it is made just for the problem I've discussed above, i.e. take away the dependencies and focus on code.

Official definition:

`A Lambda layer is a .zip file archive that contains supplementary code or data. Layers usually contain library dependencies, a custom runtime, or configuration files.`

If you are interested in reading more about it, here's the [link](https://docs.aws.amazon.com/lambda/latest/dg/chapter-layers.html).

Now that we know what a Lambda layer is, we can start using it.

**STEP I**

Search for Lambda in the service list and go to Layers on the left side.

![Lambda Layer page on AWS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/02n8fm79pykgiptuf1qm.png)

**STEP II**

Now we are going to create the package that we'll upload as a layer.

**NOTICE**: The folder paths are important. You can't mess this up, otherwise it won't work. Different runtimes expect different paths. I'm using Node, so my folder structure looks like this:

```
/layers
|------- /nodejs
         |----- /node_modules
         |----- package.json
         |----- package-lock.json
```

If you want to check out other runtime paths, I suggest you check the table given [here](https://docs.aws.amazon.com/lambda/latest/dg/packaging-layers.html).

Make sure any modules you're adding can run on Linux, as Lambda layers are executed on Amazon Linux.

Now zip the folder as **layers.zip** or anything you want to name it.

**STEP III**

Head to the Lambda page. Now we will upload the zip to Lambda layers by creating a new layer.

![Lambda layer creation page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mk959gni0pu7lyhelt4x.png)

Name the layer anything you want. Don't forget to add the other details like architecture and runtime; these two are the most important.

![lambda layer detailed form](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jcpt0k2n8k0b0rky2dm9.png)

After you're done, click on Create, and you're done.

Now you can attach this layer to any of your Lambda functions and just call the package as you normally would. You don't have to do anything else to integrate it.

Enjoy the freedom of writing the code in the browser, or making changes to it on the go, without worrying about the packages or zipping them.

## Thank you
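P.S. If your runtime is Python rather than Node, the layer zip uses a `python/` folder at its root (see the runtime path table linked above), and the function imports layer packages like any other module. A minimal sketch, assuming the layer bundles the `requests` library:

```python
# lambda_function.py - the deployment zip stays tiny; 'requests' lives in the layer
# (layer zip structure: python/requests/..., per AWS's runtime path table)
import requests  # resolved from the attached layer at runtime


def lambda_handler(event, context):
    resp = requests.get("https://example.com")
    return {"statusCode": resp.status_code}
```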
ashutosh5786
1,887,742
CodeBehind 2.7 Released, Role Access Added!
A new version of CodeBehind 2.7 has been released with a focus on roles and security. In this...
0
2024-06-16T14:18:19
https://dev.to/elanatframework/codebehind-27-released-role-access-added-m8n
backend, dotnet, github, security
A new version of [CodeBehind](https://github.com/elanatframework/Code_behind) 2.7 has been released with a focus on roles and security. In this article, we'll look at how to configure CodeBehind role access in an ASP.NET Core application.

With the addition of role support, CodeBehind is now a powerful security framework that provides fine-grained control over access to your ASP.NET Core application. Role support allows you to define roles and permissions for different users and restrict access to specific routes, queries, and forms.

## Manage roles in CodeBehind

To manage roles and determine access, it is necessary to configure the `Program.cs` class as follows:

Config CodeBehind role access in ASP.NET Core

```diff
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddDistributedMemoryCache();

builder.Services.AddSession(options =>
{
    options.IdleTimeout = TimeSpan.FromMinutes(30);
});

var app = builder.Build();

app.UseSession();

SetCodeBehind.CodeBehindCompiler.Initialization();

+app.UseRoleAccess(true);
app.UseCodeBehind(true);

app.Run();
```

To activate the CodeBehind roles, you must also activate the session service.

> Please note that the `UseRoleAccess` middleware must be added before the `UseCodeBehind` or `UseCodeBehindRoute` middleware. Also, if you want the access of static files to be checked, you must add the `UseStaticFiles` middleware after the `UseRoleAccess` middleware.

As you can see, the `UseRoleAccess` method is called with a `true` argument; this means that if access to the path is denied, an error page with code `403` will automatically be displayed to the user.

## role.xml file

If you are using CodeBehind version 2.7 or later, when you start a new project or restart an existing project, a `role.xml` file will be created for you in the `code_behind` directory.

The contents of the default role.xml file are as follows:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<role_list>
  <role name="guest" active="false">
    <deny active="false" reason="only administrators have access to the admin path">
      <path match_type="start">/admin</path>
    </deny>
    <action type="static" name="write_html" value="true" active="false" reason="inability to write html tags" />
    <action type="session" name="maximum_login_try" value="10" active="false" reason="the maximum possible number of login attempts has been reached" />
  </role>
  <role name="admin" active="false"></role>
</role_list>
```

The role.xml file is the user role access configuration. In this file, you can create all kinds of roles and prevent roles from accessing different paths. This file is read only once, on the first run of the program; therefore, changes to this file while the program is running have no effect, and the program needs to be restarted.

For better understanding, let's change this file a little and make it more concise.

```xml
<?xml version="1.0" encoding="utf-8" ?>
<role_list>
  <role name="guest">
    <deny>
      <path match_type="start">/admin</path>
    </deny>
  </role>
</role_list>
```

According to the code above, roles are added inside the `role_list` tag. To add a new role, we add a tag named `role` and put the name of the role in the `name` attribute. The code above defines a role named `guest`. Inside the `role` tag is the `deny` tag, and inside the `deny` tag is the `path` tag, which has an attribute named `match_type` with the value `start`; the content inside this tag is the `/admin` path. This means that the guest role does not have access to any path that starts with `/admin`.
## Path, Query, Form

You can define 3 tags inside the deny tag:

- path tag
- query tag
- form tag

Each of the above tags must have an attribute named match_type that has one of the following values:

- **start**: Matches when the requested path starts with the specified string
- **end**: Matches when the requested path ends with the specified string
- **exist**: Matches when the specified string exists anywhere in the requested path
- **regex**: Matches when the requested path matches the specified regular expression pattern
- **full_match**: Matches when the requested path exactly matches the specified string

Example:

Requested route: `example.com/admin/settings`

- **start**: `/admin` Matches because the requested path starts with "/admin"
- **end**: `/settings` Matches because the requested path ends with "/settings"
- **exist**: `/admin` Matches because "/admin" exists in the requested path
- **regex**: `/admin/[a-z]+` Matches because the requested path matches the regular expression pattern "/admin/[a-z]+"
- **full_match**: `/admin/settings` Matches because the requested path exactly matches "/admin/settings"

The path tag is for the requested path. Example: `example.com/admin`

The query tag is for the query string. Example: `example.com/?value=active`

The form tag is for form data. Example: Form data is sent when the `post` method is used in an HTML `form` tag.

```html
<form action="/" method="post">
  <label for="fname">First name:</label>
  <input type="text" id="fname" name="fname"><br><br>
  <label for="lname">Last name:</label>
  <input type="text" id="lname" name="lname"><br><br>
  <input type="submit" value="Submit">
</form>
```

Submitting the form above sends values similar to the following in the form data:

`fname=Cristiano&lname=Ronaldo`

## Simultaneous use of path, query and form tags

Using one of these tags together with one or two of the others means that the request must meet all of the conditions at the same time.

Example:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<role_list>
  <role name="guest">
    <deny>
      <path match_type="start">/admin</path>
      <query match_type="exist">value=active</query>
    </deny>
  </role>
</role_list>
```

In the example above, when a path starting with `/admin` is requested and the `value=active` query is present, the guest role has no access.

Inside the `role` tag, in addition to the `deny` tag, you can also use the `action` tag. This tag comes in two types: `static` and `session`. You can change `session` actions based on user behavior, while `static` actions apply to all users with the same role.

Note: unlike the `deny` tag, the `action` tag is not enforced automatically; you must check its value in your program and act on it.

## Example of use for action of static type

**Prevent access to stored procedures for a role**

In this example, two `action` tags of the `static` type have been added to the `guest` role; one is named `update_comment` and the other `add_content_image`, and both have the value `deny_stored_procedure`.

```xml
<?xml version="1.0" encoding="utf-8" ?>
<role_list>
  <role name="guest">
    <action type="static" name="update_comment" value="deny_stored_procedure" />
    <action type="static" name="add_content_image" value="deny_stored_procedure" />
  </role>
</role_list>
```

The code below is an example of a database class with two methods, `GetProcedure` and `SetProcedure`.
These two methods automatically check whether a `static` action with the same name as the requested procedure has the value `deny_stored_procedure`; if it does, the procedure is not executed.

DataBase class

```csharp
using CodeBehind;
using Microsoft.Data.SqlClient;

public class DataBase
{
    private readonly ISession _Session;

    public DataBase(ISession Session)
    {
        _Session = Session;
    }

    public SqlDataReader GetProcedure(string ProcedureName, List<string> ParametersName = null, List<string> ParametersValue = null)
    {
        RoleAccess access = new RoleAccess(_Session);

        // Deny execution if a static action named after the procedure
        // carries the deny_stored_procedure value for the current role
        bool DenyProcedure = (access.GetStaticAction(ProcedureName) == "deny_stored_procedure");

        if (DenyProcedure)
            return null;

        // dbc is the underlying database access class of your application
        SqlDataReader dr = dbc.GetProcedure(ProcedureName, ParametersName, ParametersValue);

        return dr;
    }

    public void SetProcedure(string ProcedureName, List<string> ParametersName = null, List<string> ParametersValue = null)
    {
        RoleAccess access = new RoleAccess(_Session);

        bool DenyProcedure = (access.GetStaticAction(ProcedureName) == "deny_stored_procedure");

        if (DenyProcedure)
            return;

        dbc.SetProcedure(ProcedureName, ParametersName, ParametersValue);
    }
}
```

> Note: with some creativity, the above example can be changed so that roles may only execute procedures whose action has the `allow_stored_procedure` value.

## Example of use for action of session type

**Comment sending limit for a role**

In this example, an `action` tag of the `session` type has been added to the `guest` role; its name is `comment_send_limitation` and its value is `3`.

```xml
<?xml version="1.0" encoding="utf-8" ?>
<role_list>
  <role name="guest">
    <action type="session" name="comment_send_limitation" value="3" reason="the maximum possible number of send comments has been reached" />
  </role>
</role_list>
```

In the controller, we check the value of this `action`; if it is greater than 0, we send the comment and then subtract one from the action's value. If the user posts a comment 3 times, the action's value reaches 0 and they can no longer post comments.

Controller

```csharp
using CodeBehind;

public partial class CommentController : CodeBehindController
{
    public void PageLoad(HttpContext context)
    {
        RoleAccess access = new RoleAccess(context.Session);
        int CommentSendLimitation = access.GetSessionAction("comment_send_limitation").ToNumber();

        if (CommentSendLimitation > 0)
        {
            // ...
            // Codes For Sending Comment
            // ...

            // Decrease the remaining quota for this session
            access.SetSessionAction("comment_send_limitation", CommentSendLimitation - 1);
        }
        else
            Write("you can no longer send comments");
    }
}
```

## Retrieving the values of the role.xml file

The code below re-reads the values of the `role.xml` file into the running program.

Example:

```csharp
new FillRoleList().Set();
```

Running the above code has no effect on `session`-type actions. Therefore, be careful when using this code and consider the security implications.

> Note: Calling the `Set` method of the `FillRoleList` class applies the changes made in the `role.xml` file, but if you use `session` actions, each user keeps their existing session action values for as long as their session is active. So for better security it is preferable to restart the program.

## New option

In version 2.7 of the CodeBehind framework, a `default_role` option has been added to the options file; its default value is `guest`. By default, any user who enters your web application has the guest role.
Options file

```diff
[CodeBehind options]; do not change order
view_path=wwwroot
move_view_from_wwwroot=true
rewrite_aspx_file_to_directory=false
access_aspx_file_after_rewrite=false
ignore_default_after_rewrite=true
start_trim_in_aspx_file=true
inner_trim_in_aspx_file=true
end_trim_in_aspx_file=true
set_break_for_layout_page=true
convert_cshtml_to_aspx=false
show_minor_errors=false
error_page_path=/error.aspx/{value}
prevent_access_default_aspx=false
+default_role=guest
```

## Change role

You can change a user's role in your application. Changing the user role is usually done after the user's username and password have been validated on the login page.

The following code changes the default role (`guest`) to the `admin` role:

Set new role

```csharp
new RoleAccess(context.Session).SetUserNewRole("admin");
```

To return the user from their current role to the default role (`guest`), use the following code:

Exit role to default role

```csharp
new RoleAccess(context.Session).ExitRoleToDefault();
```

## Conclusion

In this article, we have covered the basics of CodeBehind role access in an ASP.NET Core application. We have seen how to configure the CodeBehind service, create roles, and deny access to specific paths, queries, and forms. We have also learned about the different types of actions that can be applied based on user behavior, namely static and session-based actions. Additionally, we have discussed how to reload the values of the role.xml file and how to change the default role of a user. With CodeBehind, you can create a robust security system that protects your application from unauthorized access.

## Version 2.7.1

Version 2.7 had a typo in the name of the `UseRollAccess` middleware; it was corrected to `UseRoleAccess` in version 2.7.1.

### Related links

CodeBehind on GitHub:
https://github.com/elanatframework/Code_behind

CodeBehind in NuGet:
https://www.nuget.org/packages/CodeBehind/

CodeBehind page:
https://elanat.net/page_content/code_behind
elanatframework
1,890,323
Messaging queues (Rabbitmq as broker)
Since backend development goes beyond building CRUD applications, I am here to discuss one of the...
0
2024-06-16T14:10:34
https://dev.to/codewitgabi/messaging-queues-rabbitmq-as-broker-hhk
backend, microservices, rabbitmq, broker
Since backend development goes beyond building CRUD applications, I am here to discuss one of the core concepts of backend development: message queueing.

In DSA, a queue is a data structure that follows the FIFO (First In, First Out) rule, meaning the first element to enter is the first to be processed, just like queues in banks, Shoprite, etc.

A message queue is a form of asynchronous service-to-service communication used in serverless and microservices architectures. Messages are stored on the queue until they are processed and deleted. Each message is processed only once, by a single consumer [AWS](https://aws.amazon.com/message-queue/).

Today, I will discuss some things that need to be considered to run a reliable server when using [RabbitMQ](https://www.rabbitmq.com/) as a message broker.

- Message acknowledgement: Processing a task can take a few seconds, and you may wonder what happens if a consumer starts a long task and terminates before it completes. With automatic acknowledgement, once RabbitMQ delivers a message to a consumer, it immediately marks it for deletion. In this case, if you terminate a worker, the message it was just processing is lost. The messages that were dispatched to this particular worker but were not yet handled are also lost. If a worker dies, we'd like the task to be delivered to another worker. To prevent this from happening, RabbitMQ provides message acknowledgements. If a consumer dies (its channel is closed, its connection is closed, or the TCP connection is lost) without sending an ack, RabbitMQ will understand that a message wasn't processed fully and will re-queue it. If there are other consumers online at the same time, it will then quickly redeliver it to another consumer. That way you can be sure that no message is lost, even if the workers occasionally die. To achieve this, acknowledge each message in the consumer callback:

```python
def callback(ch, method, properties, body):
    # your code

    # Tell RabbitMQ the message was fully processed and can be deleted
    ch.basic_ack(delivery_tag=method.delivery_tag)


# auto_ack defaults to False, so messages stay un-acked until basic_ack is called
channel.basic_consume(queue='hello', on_message_callback=callback)
```

- Message durability: We have learnt how to handle tasks when a worker is stopped. How do we handle them if our broker is stopped? When RabbitMQ quits or crashes, it will forget the queues and messages unless you tell it not to. Two things are required to make sure that messages aren't lost: we need to mark both the queue and the messages as durable. This should be done both in the producer and in the consumer (or worker).

```python
channel.queue_declare(queue='queue_name', durable=True)
```

**If you have already declared a queue as non-durable, re-declaring it won't change that, so it's better to make it durable from the beginning (or declare a new queue).**

At that point we're sure that the `queue_name` queue won't be lost even if RabbitMQ restarts. Now we need to mark our messages as persistent, by supplying a `delivery_mode` property with the value `pika.DeliveryMode.Persistent`:

```python
channel.basic_publish(
    exchange="",
    routing_key="queue_name",
    body=message,
    properties=pika.BasicProperties(delivery_mode=pika.DeliveryMode.Persistent),
)
```

- Fair dispatch: By default, if we have multiple workers/consumers running, RabbitMQ shares tasks among them evenly using the [round-robin algorithm (RRA)](https://en.wikipedia.org/wiki/Round-robin_scheduling). But what if one worker has already finished its task while the others are still busy? Because of RRA, it is not that worker's turn, so it sits idle while the next message is dispatched to a worker that is still processing a task. To fix this issue, use fair dispatch in the worker:
```python
# Don't dispatch a new message to a worker until it has acked the previous one
channel.basic_qos(prefetch_count=1)
```
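To tie acknowledgements, durability, and fair dispatch together, here is a minimal worker sketch using pika; the queue name `task_queue`, the `localhost` broker address, and the simulated work are assumptions for this example:

```python
import time

import pika

# Connect to a broker assumed to be running on localhost
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Durable queue: the queue itself survives a broker restart
channel.queue_declare(queue="task_queue", durable=True)

# Fair dispatch: at most one unacknowledged message per worker
channel.basic_qos(prefetch_count=1)


def callback(ch, method, properties, body):
    print(f"Received {body.decode()}")
    time.sleep(2)  # simulate a long-running task
    # Manual acknowledgement: the message is only deleted after this call
    ch.basic_ack(delivery_tag=method.delivery_tag)


channel.basic_consume(queue="task_queue", on_message_callback=callback)
channel.start_consuming()
```

A matching producer would publish to `task_queue` with `properties=pika.BasicProperties(delivery_mode=pika.DeliveryMode.Persistent)`, as shown earlier, so that the messages themselves also survive a broker restart.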
codewitgabi
1,890,322
Typography in Angular Material 18
In this quick guide, we will learn the usage of typography and modifications with CSS variables for Angular Material 18
0
2024-06-16T14:06:14
https://angular-material.dev/articles/angular-material-18-typography
angular, angularmaterial, webdev
---
title: Typography in Angular Material 18
published: true
description: In this quick guide, we will learn the usage of typography and modifications with CSS variables for Angular Material 18
tags: angular, angularmaterial, webdevelopment
cover_image: https://vercel-og-nextjs-delta-one.vercel.app/api/home?title=Typography%20in%20Angular%20Material%2018&description=Using%20and%20modifying%20typescale%20using%20CSS%20variables
canonical_url: https://angular-material.dev/articles/angular-material-18-typography
---

## Angular Material 18 Project

We will simply use the project from my earlier article [Angular Material Theming with CSS Variables](https://angular-material.dev/articles/angular-material-theming-css-vars). You can clone it from [GitHub](https://github.com/Angular-Material-Dev/angular-material-theming-css-vars).

## The `typography-hierarchy` mixin

The `typography-hierarchy` mixin includes CSS classes for styling your application. These CSS classes correspond to the typography levels in your typography config. This mixin also emits styles for native header elements scoped within the `.mat-typography` CSS class.

Let's include it in `src/styles.scss`:

```scss
:root {
  @include mat.all-component-themes($angular-material-theming-css-vars-theme);
  @include mat.typography-hierarchy($angular-material-theming-css-vars-theme); // 👈 Added
  @include mat.system-level-colors($angular-material-theming-css-vars-theme);
  @include mat.system-level-typography($angular-material-theming-css-vars-theme);
}
```

Now, if you add heading elements like `<h1>`, you will notice that they have a set of styles applied to them. Earlier, `<h1>` did not have any custom styling. Try commenting out `typography-hierarchy` to see the difference.

## Generated CSS Classes

The `typography-hierarchy` mixin generates a set of CSS classes based on type scale levels. A **type scale** is a selection of font styles that can be used across an app. There are `large`, `medium`, and `small` variations for **Display**, **Headline**, **Title**, **Body** and **Label**. You can read more about it [here](https://material.angular.io/guide/typography#type-scale-levels).

The table below lists the CSS classes emitted and the native elements styled:

| CSS class              | Typescale level   | Native Element |
| ---------------------- | ----------------- | -------------- |
| `.mat-display-large`   | `display-large`   | `<h1>`         |
| `.mat-display-medium`  | `display-medium`  | `<h2>`         |
| `.mat-display-small`   | `display-small`   | `<h3>`         |
| `.mat-headline-large`  | `headline-large`  | `<h4>`         |
| `.mat-headline-medium` | `headline-medium` | `<h5>`         |
| `.mat-headline-small`  | `headline-small`  | `<h6>`         |
| `.mat-title-large`     | `title-large`     | None           |
| `.mat-title-medium`    | `title-medium`    | None           |
| `.mat-title-small`     | `title-small`     | None           |
| `.mat-body-large`      | `body-large`      | None           |
| `.mat-body-medium`     | `body-medium`     | None           |
| `.mat-body-small`      | `body-small`      | None           |
| `.mat-label-large`     | `label-large`     | None           |
| `.mat-label-medium`    | `label-medium`    | None           |
| `.mat-label-small`     | `label-small`     | None           |

## Reading typescale properties

There are two ways to read typescale properties:

1. Through the `get-theme-typography` SCSS function - You can read more about it [here](https://material.angular.io/guide/theming-your-components#reading-typescale-properties)
2. Through CSS Variables

Let's see how we can use CSS variables to read typescale properties.

### Through CSS Variables

If you take a look at the devtools in your browser, you will notice many CSS variables for typography.
With so many variables, it may become difficult to explore them and find the correct one. To get the CSS variable you need, keep these 3 things in mind:

1. Pre-typescale levels
   1. `display`
   2. `headline`
   3. `title`
   4. `body`
   5. `label`
2. Variations
   1. `large`
   2. `medium`
   3. `small`
3. Properties
   1. `font` (The CSS font shorthand; includes all font properties except letter-spacing) - No token
   2. `font-family` - `font`
   3. `font-size` - `size`
   4. `font-weight` - `weight`
   5. `line-height` - `line-height`
   6. `letter-spacing` - `tracking`

Now, just use the format below to get the correct CSS variable:

```css
.some-class {
  some-property: var(--sys-<pre_typescale_level>-<variation>-<property_token>);
}
```

So, for example, to get the `font` of `display-large`, you would write CSS like below:

```css
.display-large-clone {
  font: var(--sys-display-large);
  /* As --sys-display-large does not include letter-spacing, make sure to include that, too */
  letter-spacing: var(--sys-display-large-tracking);
}
```

One more example: to get the `font-weight` of `<h6>`, you would write CSS like below:

```css
.h6-font-weight {
  font-weight: var(--sys-headline-small-weight);
}
```

## Modifying typescale properties

To modify any typescale property, simply override the value of its CSS variable. So, for example, to change the `font-size` and `line-height` of `<h1>`, you can write the CSS below:

```css
:root {
  --sys-display-large-size: 128px;
  --sys-display-large-line-height: 1.25;

  /* <h1> (and display-large) uses --sys-display-large, hence we also need to update that variable to see the changes */
  --sys-display-large: 400 var(--sys-display-large-size) / var(--sys-display-large-line-height) Roboto, sans-serif;
}
```

Let's create inputs through which the user can change the `font-size` of button labels and headings.

```html
<mat-form-field>
  <mat-label>Flat Button Font Size</mat-label>
  <input
    type="number"
    matInput
    [defaultValue]="14"
    (change)="changeFlatButtonFontSize($event)"
  />
</mat-form-field>

<mat-form-field>
  <mat-label>Heading Font Size</mat-label>
  <input
    type="number"
    matInput
    [defaultValue]="'56.992'"
    (change)="changeHeadingFontSize($event)"
  />
</mat-form-field>
```

```ts
changeFlatButtonFontSize(ev: Event) {
  const size = (ev.target as HTMLInputElement).value ?? '14';
  const targetElement = document.documentElement;
  targetElement.style.setProperty('--sys-label-large-size', size + 'px');
}

changeHeadingFontSize(ev: Event) {
  const size = (ev.target as HTMLInputElement).value ?? '56.992';
  const targetElement = document.documentElement;
  targetElement.style.setProperty('--sys-display-large-size', size + 'px');

  // setting the line-height relationally
  targetElement.style.setProperty('--sys-display-large-line-height', '1.25');

  // <h1> (and display-large) uses --sys-display-large, hence we also need to update that variable to see the changes
  targetElement.style.setProperty(
    '--sys-display-large',
    '400 var(--sys-display-large-size) / var(--sys-display-large-line-height) Roboto, sans-serif'
  );
}
```

Once you make the above changes, the output will look like below:

{% embed https://dev.to/shhdharmen/unpublished-video-29pb-4jpd-temp-slug-1155028?preview=b0aa13a5b42132ad1a6e2a587514135cad82e2aef28abc2e307b5a1ea8a8e7a06f917592a72fb4ee5fe0d73dcbb3d625c6c53961677dc51280f19afe %}

## Live Playground

{% embed https://stackblitz.com/github/Angular-Material-Dev/angular-material-theming-css-vars %}
shhdharmen